Dataset schema (column: type, observed range or length):
id: int64, 39 to 79M
url: string, length 31 to 227
text: string, length 6 to 334k
source: string, length 1 to 150
categories: list, length 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, length 0 to 30
9,519,121
https://en.wikipedia.org/wiki/Quadratically%20constrained%20quadratic%20program
In mathematical optimization, a quadratically constrained quadratic program (QCQP) is an optimization problem in which both the objective function and the constraints are quadratic functions. It has the form

\[ \begin{aligned} \text{minimize} \quad & \tfrac{1}{2} x^{\mathsf T} P_0 x + q_0^{\mathsf T} x \\ \text{subject to} \quad & \tfrac{1}{2} x^{\mathsf T} P_i x + q_i^{\mathsf T} x + r_i \le 0, \quad i = 1, \dots, m, \\ & A x = b, \end{aligned} \]

where P0, ..., Pm are n-by-n matrices and x ∈ R^n is the optimization variable. If P0, ..., Pm are all positive semidefinite, then the problem is convex. If these matrices are neither positive nor negative semidefinite, the problem is non-convex. If P1, ..., Pm are all zero, then the constraints are in fact linear and the problem is a quadratic program. Hardness A convex QCQP problem can be efficiently solved using an interior point method (in polynomial time), typically requiring around 30-60 iterations to converge. Solving the general non-convex case is an NP-hard problem. To see this, note that the two constraints x1(x1 − 1) ≤ 0 and x1(x1 − 1) ≥ 0 are equivalent to the constraint x1(x1 − 1) = 0, which is in turn equivalent to the constraint x1 ∈ {0, 1}. Hence, any 0–1 integer program (in which all variables have to be either 0 or 1) can be formulated as a quadratically constrained quadratic program. Since 0–1 integer programming is NP-hard in general, QCQP is also NP-hard. However, even for a nonconvex QCQP problem a local solution can generally be found with a nonconvex variant of the interior point method. In some cases (such as when solving nonlinear programming problems with a sequential QCQP approach) these local solutions are sufficiently good to be accepted. Relaxation There are two main relaxations of QCQP: using semidefinite programming (SDP), and using the reformulation-linearization technique (RLT). For some classes of QCQP problems (precisely, QCQPs with zero diagonal elements in the data matrices), second-order cone programming (SOCP) and linear programming (LP) relaxations providing the same objective value as the SDP relaxation are available. Nonconvex QCQPs with non-positive off-diagonal elements can be exactly solved by the SDP or SOCP relaxations, and there are polynomial-time-checkable sufficient conditions for SDP relaxations of general QCQPs to be exact. Moreover, it was shown that a class of random general QCQPs has exact semidefinite relaxations with high probability as long as the number of constraints grows no faster than a fixed polynomial in the number of variables. Semidefinite programming When P0, ..., Pm are all positive-definite matrices, the problem is convex and can be readily solved using interior point methods, as done with semidefinite programming. Example Max Cut is a problem in graph theory, which is NP-hard. Given a graph, the problem is to divide the vertices into two sets, so that as many edges as possible go from one set to the other. Max Cut can be formulated as a QCQP, and SDP relaxation of the dual provides good lower bounds. QCQP is used to finely tune machine settings in high-precision applications such as photolithography. Solvers and scripting (programming) languages References Further reading In statistics External links NEOS Optimization Guide: Quadratic Constrained Quadratic Programming Mathematical optimization
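The convex case described above can be illustrated with a short sketch. This is a minimal example assuming the numpy and cvxpy packages are available; the matrices and vectors are small random data invented purely for illustration and do not come from the article.

```python
import numpy as np
import cvxpy as cp

# Build a tiny convex QCQP: P0 and P1 are positive semidefinite by construction,
# so a convex interior-point/conic solver can handle the problem directly.
n = 3
rng = np.random.default_rng(0)
A0 = rng.standard_normal((n, n)); P0 = A0 @ A0.T   # PSD objective matrix
A1 = rng.standard_normal((n, n)); P1 = A1 @ A1.T   # PSD constraint matrix
q0, q1 = rng.standard_normal(n), rng.standard_normal(n)
r1 = -1.0                                          # keeps x = 0 feasible

x = cp.Variable(n)
objective = cp.Minimize(0.5 * cp.quad_form(x, P0) + q0 @ x)
constraints = [0.5 * cp.quad_form(x, P1) + q1 @ x + r1 <= 0]
prob = cp.Problem(objective, constraints)
prob.solve()

print(prob.status, prob.value, x.value)
```

A non-convex QCQP (with indefinite data matrices) cannot be passed to a convex solver this way; it would need the SDP or RLT relaxations, a local nonconvex interior-point method, or a global solver, as discussed above.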
Quadratically constrained quadratic program
[ "Mathematics" ]
721
[ "Mathematical optimization", "Mathematical analysis" ]
9,519,293
https://en.wikipedia.org/wiki/Aboriginal%20stone%20arrangement
Aboriginal stone arrangements are a form of rock art constructed by Aboriginal Australians. Typically, they consist of stones, each of which may be about in size, laid out in a pattern extending over several metres or tens of metres. Notable examples have been made by many different Australian Aboriginal cultures, and in many cases are thought to be associated with spiritual ceremonies. Particularly fine examples are in Victoria, where the stones can be very large (up to high). For example, the stone arrangement at Wurdi Youang consists of about 100 stones arranged in an egg-shaped oval about across. Each stone is well-embedded into the soil, and many have "trigger-stones" to support them. The appearance of the site is very similar to that of the megalithic stone circles found throughout Britain (although the function and culture are presumably completely different). Although its association with Indigenous Australians is well-authenticated and beyond doubt, the purpose is unclear, although it may have a connection with initiation rites. It has also been suggested that the site may have been used for astronomical purposes (Morieson 2003). Other well-known examples in Victoria include the stone arrangements at Carisbrook and Lake Bolac. Australia's largest collection of standing stones is said to be at Murujuga, also known as the Burrup Peninsula or the Dampier Archipelago, in Western Australia, which includes tall standing stones similar to the European menhirs, as well as circular stone arrangements. A very different example is found near Yirrkala in Arnhem Land, where there are detailed images of the praus used by Macassan fisherman fishing for Trepang, several hundred years before European contact. Here the stones are small (typically to ), sit on the surface of the ground, and can easily be moved by hand, which also implies that they can be easily damaged or altered by modern hands, so that caution is needed when interpreting such sites. Similar examples are found scattered throughout Australia, mainly in remote or inaccessible places, and it is likely that there were many more prior to European settlement of Australia. In south-east Australia are the Bora rings, which consist of two circles of stones, one larger than the other, which were used in an initiation ceremony and rite of passage in which boys were transformed into men. Some Aboriginal stone arrangements in south-east Australia are aligned to cardinal directions with an accuracy of a few degrees, while the Wurdi Youang stone arrangement, which indicates the direction of solstitial sunsets, appears to have been built around the east-west direction, again with an accuracy of a few degrees. This requirement for highly accurate direction is also indicated by the practice of orienting the graves of deceased Kamilaroi men to an accuracy of a few degrees. See also Indigenous Australian art Aboriginal Astronomy Archaeoastronomy References Further reading Lane, L., & Fullagar, R., 1980,”Previously unrecorded aboriginal stone arrangements in Victoria”, Records of the Victorian Archaeological Survey, No. 10, June 1980, 134-151. Ministry for Conservation, Victoria. MacKnight & Gray “Aboriginal Stone Pictures - Art In Eastern Arnhem Land”, 1969 Morieson, J., 2003,”Solar-based Lithic Design in Victoria, Australia”, in World Archaeological Congress, Washington DC, 2003 Morieson, J., 2006, “Ceremonial Hill”, published leaflet Mountford, 1927, “Aboriginal Stone Structures in South Australia”, Trans & proc. Roy. Soc. 
South Australia External links Burrup Rock Art Photos of Burrup Standing Stones Australian Aboriginal cultural history Archaeoastronomy Megalithic monuments Rock art in Australia
Aboriginal stone arrangement
[ "Astronomy" ]
748
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
9,519,537
https://en.wikipedia.org/wiki/MDMX
The MDMX (MIPS Digital Media eXtension), also known as MaDMaX, is an extension to the MIPS architecture released in October 1996 at the Microprocessor Forum. History MDMX was developed to accelerate multimedia applications that were becoming more popular and common in the 1990s on RISC and CISC systems. Functionality MDMX defines a new set of thirty-two 64-bit registers called media registers, which are mapped onto the existing floating-point registers to save hardware, and a 192-bit extended product accumulator. The media registers hold two new data types: octo byte (OB) and quad half (QH), which hold eight 8-bit byte integers and four 16-bit halfword integers, respectively. Variants of existing instructions operate on these data types, performing saturating arithmetic, logical, shift, compare and align operations. MDMX also introduced 19 instructions for permutation, manipulating bytes in registers, performing arithmetic with the accumulator, and accumulator access. References Gwennap, Linley (18 November 1996). "Digital, MIPS Add Multimedia Extensions". Microprocessor Report. External links Silicon Graphics Introduces Enhanced MIPS Architecture MIPS architecture SIMD computing
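As a rough illustration of what lane-wise saturating arithmetic on the OB (octo byte) format means, here is a small Python sketch that emulates an 8-lane saturating unsigned add. It is purely conceptual: the function name is hypothetical and this is not MIPS assembly or a description of the actual MDMX instruction encodings.

```python
def saturating_add_ob(a, b):
    """Emulate a lane-wise saturating add on two 8-lane vectors of unsigned
    bytes (the OB format): each lane clamps at 255 instead of wrapping."""
    assert len(a) == 8 and len(b) == 8
    return [min(x + y, 255) for x, y in zip(a, b)]

# Lanes that would overflow an 8-bit value saturate to 255 rather than wrap.
print(saturating_add_ob([250, 10, 0, 1, 2, 3, 4, 5],
                        [ 20, 10, 0, 1, 2, 3, 4, 5]))
# -> [255, 20, 0, 2, 4, 6, 8, 10]
```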
MDMX
[ "Technology" ]
251
[ "Computing stubs", "Computer hardware stubs" ]
9,519,567
https://en.wikipedia.org/wiki/HTTP%20403
HTTP 403 is an HTTP status code meaning access to the requested resource is forbidden. The server understood the request, but refuses to fulfill it even though the request itself was valid. Specifications HTTP 403 provides a distinct error case from HTTP 401; while HTTP 401 is returned when the client has not authenticated, and implies that a successful response may be returned following valid authentication, HTTP 403 is returned when the client is not permitted access to the resource despite providing authentication, for example because the authenticated account lacks sufficient permissions. Error 403: "The server understood the request, but is refusing to authorize it." (RFC 7231) Error 401: "The request requires user authentication. The response MUST include a WWW-Authenticate header field (section 14.47) containing a challenge applicable to the requested resource. The client MAY repeat the request with a suitable Authorization header field (section 14.8). If the request already included Authorization credentials, then the 401 response indicates that authorization has been refused for those credentials." (RFC 2616) The Apache web server returns 403 Forbidden in response to requests for URL paths that correspond to file system directories when directory listings have been disabled in the server and there is no Directory Index directive to specify an existing file to be returned to the browser. Some administrators configure the mod_proxy extension to Apache to block such requests and this will also return 403 Forbidden. Microsoft IIS responds in the same way when directory listings are denied in that server. In WebDAV, the 403 Forbidden response will be returned by the server if the client issued a PROPFIND request but did not also issue the required Depth header or issued a Depth header of infinity. Causes A 403 status code can occur for the following reasons: Insufficient permissions: The most common reason for a 403 status code is that the user lacks the necessary permissions to access the requested resource. This can mean that the user is not logged in, has not provided valid credentials, or does not belong to the appropriate user group to access the resource. Authentication required: In some cases, the server requires authentication to access certain resources. If the user does not provide valid credentials or if the authentication fails, a 403 status code is returned. IP restrictions: The server may also restrict access to specific IP addresses or IP ranges. If the user's IP address is not included in the list of permitted addresses, a 403 status code is returned. Server configuration: The server's configuration can be set to prohibit access to certain files, directories, or areas of the website. This can be due to a misconfiguration or intentional restrictions imposed by the server administrator. Blocked by firewall or security software: A 403 status code can occur if a firewall or security software blocks access to the resource. This may happen due to security policies, malware detection, or other security measures.
Examples Client request:

GET /securedpage.php HTTP/1.1
Host: www.example.org

Server response:

HTTP/1.1 403 Forbidden
Content-Type: text/html

<html>
<head><title>403 Forbidden</title></head>
<body>
<h1>Forbidden</h1>
<p>You don't have permission to access /securedpage.php on this server.</p>
</body>
</html>

See also List of HTTP status codes URL redirection Notes References External links Apache Module mod_proxy – Forward Working with SELinux Contexts Labeling files Hypertext Transfer Protocol (HTTP/1.1): Semantics and Content Computer errors Hypertext Transfer Protocol status codes
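For a concrete, runnable counterpart to the exchange above, here is a minimal sketch using Python's standard http.server module that returns 403 for one protected path. The path name and port are hypothetical, chosen only to mirror the example; this is an illustration, not a description of how Apache or IIS implement the behaviour.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

FORBIDDEN_PATHS = {"/securedpage.php"}  # hypothetical protected resource

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in FORBIDDEN_PATHS:
            body = (b"<html><head><title>403 Forbidden</title></head>"
                    b"<body><h1>Forbidden</h1></body></html>")
            self.send_response(403)                       # 403 Forbidden
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(200)
            self.end_headers()

if __name__ == "__main__":
    # Try: curl -i http://localhost:8000/securedpage.php  ->  HTTP/1.0 403 Forbidden
    HTTPServer(("localhost", 8000), Handler).serve_forever()
```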
HTTP 403
[ "Technology" ]
736
[ "Computer errors" ]
9,519,674
https://en.wikipedia.org/wiki/Waves%20and%20shallow%20water
When waves travel into areas of shallow water, they begin to be affected by the ocean bottom. The free orbital motion of the water is disrupted, and water particles in orbital motion no longer return to their original position. As the water becomes shallower, the swell becomes higher and steeper, ultimately assuming the familiar sharp-crested wave shape. After the wave breaks, it becomes a wave of translation and erosion of the ocean bottom intensifies. Cnoidal waves are exact periodic solutions to the Korteweg–de Vries equation in shallow water, that is, when the wavelength of the wave is much greater than the depth of the water. See also References External links Exploring the World Ocean The Oceans Water waves Water
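For reference, the Korteweg–de Vries equation mentioned above can be written out explicitly. A commonly used dimensional form for the free-surface elevation \(\eta(x, t)\) over an undisturbed depth \(h\), with \(g\) the gravitational acceleration, is:

\[
\frac{\partial \eta}{\partial t} + c_0 \frac{\partial \eta}{\partial x} + \frac{3}{2}\,\frac{c_0}{h}\,\eta\,\frac{\partial \eta}{\partial x} + \frac{c_0 h^2}{6}\,\frac{\partial^3 \eta}{\partial x^3} = 0, \qquad c_0 = \sqrt{g h}.
\]

Cnoidal waves are the periodic travelling-wave solutions of this equation, written in terms of the Jacobi elliptic function cn (hence the name), with the solitary wave as a limiting case.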
Waves and shallow water
[ "Physics", "Chemistry", "Environmental_science" ]
147
[ "Physical phenomena", "Hydrology", "Water waves", "Waves", "Water", "Fluid dynamics" ]
9,519,759
https://en.wikipedia.org/wiki/Japanese%20wolf
The Japanese wolf (, , or , [see below]; Canis lupus hodophilax), also known as the Honshū wolf, is an extinct subspecies of the gray wolf that was once endemic to the islands of Honshū, Shikoku and Kyūshū in the Japanese archipelago. It was one of two subspecies that were once found in the Japanese archipelago, the other being the Hokkaido wolf. Genetic sequencing indicates that the Japanese wolf was highly divergent from living wolf populations. Despite long being revered in Japan, the introduction of rabies and canine distemper to Japan led to the decimation of the population, and policies enacted during the Meiji Restoration led to the persecution and eventual extermination of the subspecies by the early 20th century. Well-documented observations of similar canids have been made throughout the 20th and 21st centuries, and have been suggested to be surviving Japanese wolves. However, due to environmental and behavioral factors, doubts persist over their identity. Etymology C. hodopylax'''s binomial name derives from the Greek Hodos (path) and phylax (guardian), in reference to Okuri-inu from Japanese folklore, which portrayed wolves or weasels as the protectors of travelers. There had been numerous other aliases referring to Japanese wolf, and the name ōkami (wolf) is derived from the Old Japanese öpö-kamï, meaning either "great-spirit" where wild animals were associated with the mountain spirit Yama-no-kami in the Shinto religion, or "big dog", or "big bite" (ōkami or ōkame), and "big mouth"; Ōkuchi-no-Makami (Japanese) was an old and deified alias for Japanese wolf where it was both worshipped and feared, and it meant "a true god with big-mouth" based on several theories; either referring to wolf's mouth with associations with several legends and folklore such as the wolf guided Yamato Takeru and was titled so by the prince, or a region in Asuka called Ōkuchi-no-Makami-no-Hara where Asuka no Kinunui no Konoha (Japanese) lived and a number of people were said to be killed by an old wolf there. Taxonomy and origin Nomenclature: "ōkami" and "yamainu" Before Dutch zoologist Coenraad Jacob Temminck classified it, it had been long recognized in Japan that Honshu was inhabited by two distinct canids; ōkami (wolf) and yamainu ("mountain dog", likely a type of feral dog), both of which were described by the herbalist Ono Ranzan in his Honzō kōmoku keimō (“An instructional outline of natural studies”) in 1803. He described the ōkami as an edible, but rapacious, greyish-brown animal with a long, ash-colored, white-tipped tail with webbed toes and triangular eyes that would occasionally threaten people if rabid or hungry. In contrast, the yamainu was described as a similar animal, but with speckled yellowish fur, unwebbed toes, a foul odor and inedible meat. Ranzan's works were studied by German botanist Philipp Franz von Siebold during his tenure in Dejima. He purchased a female mountain dog and a wolf in 1826, describing both in his notes as distinct, and preparing two sketches illustrating their differences. The skin of the mountain dog was subsequently shipped to the Rijksmuseum van Natuurlijke Historie in the Netherlands and mounted. The specimen, along with Siebold's notes, were used by Temminck as references for his scientific classification of the animal in Fauna Japonica (1839). Temminck, however, misinterpreted Siebold's notes distinguishing the wolf and the mountain dog and treated the two as synonyms. 
In 1842, he wrote a longer description, still confounding the two names, and producing a sketch of a "wolf" based on Siebold's mounted mountain dog specimen. Skeletal and genetic findings The Japanese wolf, or Honshū wolf, (Canis lupus hodophilax Temminck, 1893) is a subspecies of the gray wolf (Canis lupus). Skeletal remains of the Japanese wolf have been found in archaeological sites, such as Torihama shell mounds, dating from the Jōmon period (10,000 to 250 B.C). The Japanese wolf was not the world's smallest wolf. The cranial length of the adult Arab wolf (Canis lupus arabs) measures on average 200.8 mm, which is smaller than most wolves. Specimens of the Japanese wolf were measured between 193.1 mm and 235.9 mm and it was uncertain if these were all from adults. In the mandible, M1 (molar tooth) is relatively larger than in any other canid species.Miyamoto F, Maki I (1983) On the repaired specimen of Japanese wolf (Canis lupus hodophilax Temminck) and its skull newly taken out. Bull Fac Ed Wakayama Univ Nat Sci 32: 9–16 (in Japanese with English abstract) An examination in 1991 found one specimen's condylobasal length (a measure of skull length) to be 205.2mm, and the Alveolar length of P4 (the fourth maxillary premolar or carnassial tooth) to be 20.0mm (left) and 21.0mm (right). In 2009, an osteological study declared that the skull of the Japanese wolf was between 206.4 mm to 226.0 mm in total length, and that morphological characters alone were not sufficient to distinguish the Japanese wolf from large domesticated dogs, such as the Akita breed. Remains of the wild native canine dating from the late Edo period (1603 and 1868), the Yama-Inu, has occasionally been confused with the Japanese wolf because of the osteological similarities between the two.Obara I, Nakamura K (1992) Notes on a skull of so-called "Yama-Inu" or wild canine preserved in the Minamiashigara municipal folklore museum. Bull Kanagawa Pref Mus Nat Sci 21: 105–110 (in Japanese with English abstract) The Japanese wolf inhabited Kyushu, Shikoku, and Honshu Islands but not Hokkaido Island. This indicates that its ancestor may have migrated from the Asian continent through the Korean Peninsula into Japan. The phylogenetic tree generated from its mitochondrial DNA sequences revealed a long branch that separated the Japanese wolf from other gray wolf populations and that it belongs to the ancient mDNA haplogroup 2 (represented today by the Italian wolf and scattered pockets of other wolves across Eurasia), while the Hokkaido wolf belongs to mDNA haplogroup 1 and this suggests that the Japanese wolf was the first arrival on the Japanese archipelago with the Hokkaido wolf arriving more recently from the north. The wolf was estimated to have arrived in Japan during the Late Pleistocene between 25,000 and 125,000 years ago, however a more recent study that looked at the past sea levels of the Korean Strait together with the timing of the Japanese wolf sequences indicated that it arrived to the southern islands less than 20,000 YBP. There have been several excavations of a large canid, which was comparable in size to North American dire wolf, dating the Late Pleistocene from Aomori and Shizuoka prefectures, however its relationship with either C. lupus hodophilax or C. lupus is unclear.See further: Evolution of the wolf – North America and JapanAn examination of sequences from 113 ancient Canis specimens from China and Russia did not match, which indicated that none of these specimens were the ancestors of the Japanese wolf. 
Analyses of the mitochondrial DNA of 1576 dogs worldwide revealed that one Kishu and one Siberian husky possessed the same haplotype as a Japanese wolf, indicating past cross-breeding. A more-refined study of Japanese wolf mitochondrial DNA showed that they could be further divided into two separate groups, and that the sequences from one Kishu, one Siberian husky and one Shiba Inu could also be divided into the two groups. These dogs correspond to clade F of the mDNA phylogenetic tree among worldwide dogs, with clade F haplogroup dogs originating from a rare admixture between male dogs and more than one female ancestor of Japanese wolves, which have contributed to the dog gene pool. In 2021, a genomic study found the Japanese wolf to descend from Pleistocene Siberian wolves and genetically distinct from living Eurasian wolves. The study found this lineage to occupy its own branch on the gray wolf family tree, with the modern gray wolf and most domestic dogs (aside from Native American dogs and some Asian breeds) being more closely related to each other than Siberian Pleistocene wolves. A 2022 study which sequenced the genome of a 35,000 year old wolf from Japan found that Holocene Japanese wolf represented the hybrid of separate migrations of wolves into Japan, one of Siberian Pleistocene wolves around 57-35,000 years ago, and later waves of mixed Pleistocene Siberian wolf and modern wolf ancestry around 37-14,000 years ago. A 2024 study found that Japanese wolves were nested within the diversity of living wolves as more closely related to (but not nested within) Eurasian wolves than to North American wolves, and that they were more closely related to domestic dogs than to other wolves, though Japanese wolves are unlikely to be the direct ancestors of domestic dogs. Contrary to the results of the 2021 and 2022 studies, no evidence was found for a close relationship with Siberian Pleistocene wolves and Holocene Japanese wolves (with the study finding that the 35,000 year old Japanese wolf more closely related to Pleistocene Siberian wolves than to Holocene Japanese wolves, contrary to the results of the 2022 study) which the authors suggested was likely the result of differences in statistical analysis. Admixture with domestic and feral dogs had been common in Japan, and distinguishing the original wolf was already difficult as scientific approaches for classification and species identification only began in Meiji where authorities were troubled to distinguish damages by wolves and dogs. Intentional cross-breeding between wild wolves and female domestic dogs, being chained outside, to create strong breeds was common, and several "types" of "wolves" had been commonly recognized by publics including potential F1 hybrids. These aspects led Japanese researchers to indicate that hybridization was severe among wide ranges of the archipelago including Hokkaido, and may disrupt genetic and morphological studies to determine the true C. hodophilax and C. hattai.柚兎, 2017, 100年以上前に絶滅したニホンオオカミは、まだ生きているかもしれない?, Da Vinci, Retrieved on October 29, 2021 Genetic analysis of Siebold's yamainu specimen using matrilineal mtDNA has found it to genetically match the Japanese wolf; however, its skull displays significant differences from other Japanese wolves. Due to this, it has been theorized that the yamainu may represent wolfdog hybrids between Japanese wolves and feral dogs, and Siebold's specimen was likely the offspring of a wolf mother and dog father. 
Range The Japanese wolf inhabited Kyushu, Shikoku, and Honshu Islands but not Hokkaido Island. The remains of a 28,000-year-old wolf specimen from the Yana River on the northern coast of arctic Siberia matched the mDNA haplotype of the Japanese wolf, which indicates that they shared common ancestry and a wider distribution. Physical characteristicsCanis lupus hodophilax was described by Temminck in 1839 as smaller than Canis lupus lupus (Linnaeus 1758) and of shorter legs, with its coat smooth and short. The Japanese wolf was smaller than the Hokkaido wolf and other gray wolves from the Asian and North American continents. It stood 56–58 cm at the withers. There are four mounted specimens believed to be Canis lupus hodophilax located at: the National Museum of Nature and Science, Japan; University of Tokyo, Japan; Wakayama University, Japan; Siebold Collection, and the National Museum of Natural History, Leiden, Netherlands. Alleged theories As above mentioned, descriptions of "ōkami" and "yamainu" by Ono Ranzan don't correspond, and several different "types" of wolves or wolf-like canids in Japanese islands were noted in literatures and reports, indicating these may or may not represent wolfdogs. For example, there exist a "big and black" one, and ones referred to ohokami or ōkame that were aliases and potential synonyms of ōkami; the former to "have paddles on paws and swim" and to "leave footprints with five claws",The textbook from Meiji which is displayed at Kaichi School and the latter to be "slender and long-haired" and could be one of animals kept by Siebold although this could also be a misidentified different canidae such as a dhole or a dog or a hybrid. Some researchers believe yamainu could be one or more of distinct and unrecognized native canidae. One is small and shorter legs, but more primitive and somewhat mustelidae-like appearance, and may represent the art of yamainu kept by Siebold by Kawahara Keiga, depicted with stripes, and the specimen preserved at Ube shrine, claimed to be a C. hodophilax captured in Wakayama in 1949, more than four decades after the last confirmed record. The other is a large canid that also inhabited Hokkaido predating Hokkaido wolf, and was described to "have different paws and fur patterns, different vocalization and behavioral patterns to jump and dance when agitated, disproportionate measurements compared to European wolves with notably shorter legs and a larger head while having similar trunk length for Hokkaido while muzzle for Honshu was shorter than Hokkaido's case". History In AD 713, the wolf first appeared on record in Kofudoki itsubun (Lost Writings on Ancient Customs). From AD 967, historical records indicated the wolf's preference for preying on horse, either wild horses or those in pastures, stables, and villages. In 1701, a lord introduced the first wolf bounty and by 1742 the first professional wolf hunters were using firearms and poison. In 1736, rabies appeared among dogs in eastern Japan, indicating that it had entered from China or Korea, then spread across the nation. Shortly afterward, it spread to the wolf population, turning some wolves from simple horse predators to man-killers that led to organized wolf hunts. Killing wolves became a national policy under the Meiji Restoration, and within one generation the Japanese wolf was extinct. The last Japanese wolf was captured and killed at Washikaguchi of Higashiyoshino village in Honshu Nara Prefecture, Japan on January 23, 1905. 
Some interpretations of the Japanese wolf's extinction stress the change in local perceptions of the animal: both rabies-induced aggression and increasing deforestation of wolf habitat forced the wolves into conflict with humans, and this led to their being targeted by farmers. Sightings of "short-legged dog like beasts", proposed to be the Japanese wolf, have been claimed since the time of its extinction until the last claim in 1997, but none of these have been verified. A claim in 2000 was dismissed as a hoax. Some Japanese zoologists believe that these reports "merely derive from misidentification of feral dogs". Culture In the Shinto belief, the ōkami ("wolf") is regarded as a messenger of the kami spirits and also offers protection against crop raiders such as the wild boar and deer. Wild animals were associated with the mountain spirit Yama-no-kami. The mountains of Japan, seen as a dangerous, deadly place, were highly associated with the wolf, which was believed to be their protector and guardian. Many mountain villages, such as Okamiiwa ("Wolf Rock") and Okamitaira ("Wolf Plateau"), are named after the wolf; this could be due to a sighting at the location, or a simple homage to the species. There are an estimated 20 Shinto wolf shrines on Honshu alone. The most famous national shrine is located at Mitsumine in Chichibu, Saitama Prefecture and there are a number of smaller wolf shrines on the Kii Peninsula, including the Tamaki Shrine and the Katakati Shrine at Totsukawa village. In Japanese folklore, there is the widely recorded belief of the okuriōkami ("escort wolf") that followed someone walking alone through a forest at night until they reach their home without doing them any harm. An offering was sometimes made for this escort. Another belief was of wolves that raised an infant who had been abandoned in the forests of the Kii Peninsula, and later became the clan leader Fujiwara no Hidehira. Another belief from the Kanto area of eastern Japan was that feeding an infant wolf's milk would make them grow up strong. Some legends portray the Japanese wolf as being prophetic creatures. In the Tamaki Mountains the location of a tree called “the cypress of dog-howls” is said to be the site where wolves howled immediately before a flood in 1889 warning the villagers, and before the great earthquake of 1923 even though the wolf was extinct by that time. Another belief was the "wolf notification" where a traveller does not return home, then a wolf comes to their home and makes a sad howling that signalled their death. Some villages had wolf charms called shishiyoke that were believed to protect their village and their crops against wild boar. Wolf fangs, hide, and hair were carried by travelers to ward off evil spirits, and wolf skulls were kept in some home shrines to ward off misfortune. In some villages such as in Gifu Prefecture, the skull of the wolf was used as the charm for both protection as well as curing possessed villagers. In addition to protecting the crops, the wolf may leave prey for villagers. The Japanese wolf has appeared in various popular media, such as the anime films Wolf Children and Princess Mononoke, the 2006 video game Ōkami, and the television series Kamen Rider Zero-One and Wonderful Pretty Cure!. In Wonderful Pretty Cure!, the main antagonists are vengeful wolf spirits who aim to destroy humanity as revenge for them causing the extinction of their species. 
Claimed post-extinction records Despite the status, there have been various reports of canines resembling the Japanese wolf throughout the 20th century and in the 21st century including a case by foreign tourists. Three of these, a kill within Fukui Castle in 1910Jiji Press. 守ろう 絶滅危惧種 写真特集 – ニホンオオカミ and two sightings from Chichibu in 1996Yagi H., 1996年10月秩父山中で撮影された犬科動物, WANTED Canis hodophilax and nearby Mount Sobo in 2000,Munakata M., 2017, ニホンオオカミは消えたか?, , , Junppousha involved closely taken images of each animals and scientific investigations, and a potential audio recording was made in 2018. These cases triggered debates both for and against the identities of the animals; however, affirmative biologists claimed morphological correspondences of all to Canis lupus hodophilax rather than misidentifications of feral animals such as a Eurasian wolf for the 1910 capture or Shikoku dog for the sighting in 2000. For 1910 record, scientists agreed that this was a Canis while some pointed the possibility of a Eurasian wolf that fled from a mobile zoo four or five days before; however, a staff of the zoo checked the corpse and confirmed that the animal captured was different. The 1996 sighting was in Chichibu Tama Kai National Park; the photographer, Hiroshi Yagi, spotted a wolf-like animal walking along the side of the road, and photographed it several times; the canine displayed no fear, even walking right up to him. Several experts who analyzed photographs conceded that the animal closely resembled a Japanese wolf. Other reports of wolf-like animals had also been made by Chichibu residents. Yagi had also previously heard potential Japanese wolf howls while working at a mountaineering lodge in the 1970s. Following the 1996 sighting, Yagi began research into the potential survival of the Japanese wolf, being assisted by other individuals over the years. Eventually, Yagi's team set up over 70 camera traps in the Okuchichibu Mountains; in 2018, one camera recorded footage of deer running by, with a howl heard in the background. Analysis of the howl by specialists found it to be nearly identical to that of an eastern wolf (C. lycaon''). Despite all the numerous well-attested sightings or recordings of canids closely resembling or having similar voices to wolves, significant doubt persists among experts for the species' continued survival, as the Japanese wolf primarily travelled in small packs, while most of the alleged sightings have been of singular individuals. In addition, the Japanese wolf inhabited deciduous forests composed largely of Japanese beech, but over 40% of this habitat was logged following World War II and replaced with plantations of sugi and hinoki; these artificial coniferous forests likely would not support the diversity that the Japanese wolf relied on. It is still likely that the Japanese wolf is extinct, and only DNA evidence can confirm or deny the identity of the sighted wild canids as Japanese wolves. References Notes Citations Further reading Extinct subspecies of Canis lupus Extinct mammals of Asia Extinct animals of Japan Mammal extinctions since 1500 † Shinto kami Mammals described in 1839 extinct Species made extinct by deliberate extirpation efforts
Japanese wolf
[ "Biology" ]
4,492
[ "Species impacted by human activities", "Humans and other species" ]
9,519,906
https://en.wikipedia.org/wiki/Valve%20RF%20amplifier
A valve RF amplifier (UK and Aus.) or tube amplifier (U.S.) is a device for electrically amplifying the power of an electrical radio frequency signal. Low to medium power valve amplifiers for frequencies below the microwaves were largely replaced by solid state amplifiers during the 1960s and 1970s, initially for receivers and low power stages of transmitters, transmitter output stages switching to transistors somewhat later. Specially constructed valves are still in use for very high power transmitters, although rarely in new designs. Valve characteristics Valves are high voltage / low current devices in comparison with transistors. Tetrode and pentode valves have very flat anode current vs. anode voltage characteristics, indicating high anode output impedances. Triodes show a stronger relationship between anode voltage and anode current. The high working voltage makes them well suited for radio transmitters and valves remain in use today for very high power short wave radio transmitters, where solid state techniques would require many devices in parallel, and very high supply currents. High power solid state transmitters also require a complex combination of transformers and tuning networks, whereas a valve-based transmitter would use a single, relatively simple tuned network. Thus while solid state high power short wave transmitters are technically possible, economic considerations still favor valves above 3 MHz and 10,000 watts. Radio amateurs also use valve amplifiers in the 500–1500 watt range mainly for economic reasons. Audio vs. RF amplifiers Valve audio amplifiers typically amplify the entire audio range between 20 Hz and 20 kHz or higher. They use an iron core transformer to provide a suitable high impedance load to the valve(s) while driving a speaker, which is typically 8 Ohms. Audio amplifiers normally use a single valve in class A, or a pair in class B or AB. An RF power amplifier is tuned to a single frequency as low as 18 kHz and as high as the UHF range of frequencies, for the purpose of radio transmission or industrial heating. They use a narrow tuned circuit to provide the valve with a suitably high load impedance and feed a load that is typically 50 or 75 Ohms. RF amplifiers normally operate class C or class AB. Although the frequency ranges for audio amplifiers and RF amplifiers overlap, the class of operation, method of output coupling and percent operational bandwidth will differ. Power valves are capable of high frequency response, up to at least 30 MHz. Indeed, many Directly Heated Single Ended Triode audio amplifiers use radio transmitting valves originally designed to operate as RF amplifiers in the high frequency range. Circuit advantages of valves High input impedance Tubes' input impedance is comparable to that of FETs, higher than in bipolar transistors, which is beneficial in certain signal amplification applications. Tolerant of high voltages Valves are high voltage devices, inherently suitable for higher voltage circuits than most semiconductors. Tubes can be built oversized to improve cooling Valves can be constructed on a scale large enough to dissipate great amounts of heat. Very high-power models are designed to accommodate water- or vapor-cooling. For that reason, valves remained the only viable technology for handling very high power, and especially high power + high voltage use, such as radio and TV transmitters, long into the age when transistors had displaced valves in almost all other applications.
However, today even for high power/voltage, tubes are increasingly becoming obsolete as new transistor technology improves tolerance of high voltages and capacity for high power. Lower investment cost Because of the simplicity of practical tube-based designs, using tubes for applications like amplifiers above the kilowatt power range can greatly lower manufacturing costs. Also, large, high value power valves (steel clad, not glass tubes) can to some extent be remanufactured to extend residual life. Electrically very robust Tubes can tolerate amazingly high overloads, which would destroy bipolar transistor systems in milliseconds (of particular significance in military and other "strategically important" systems). Indefinite shelf life Even 60 year-old tubes can be perfectly functional, and many types are available for purchase as "new-old-stock". Thus, despite known reliability issues (see next section, below), it is still perfectly possible to run most very old vacuum tube equipment. Comparative ease of replacement Being known to be subject to a number of common failure modes, most systems with tubes were designed with sockets so the tubes can be installed as plug-in devices; they are rarely, if ever, soldered into a circuit. A failed tube can simply be unplugged and replaced by a user, while the failure of a soldered-in semiconductor may constitute damage beyond economical repair for a whole product or sub-assembly. The only difficulty is determining which tube has failed. Disadvantages of valves Cost For most applications, tubes require both greater initial outlay and running expense per amplification stage, requiring more attentive budgeting of the number of stages for a given application compared to semiconductors. Short operational life In the most common applications, valves have a working life of just a few thousand hours, much shorter than solid state parts. This is due various commonplace modes of failure: Cathode depletion, open- or short-circuits (notably of the heater and grid structures), cathode ‘poisoning’, and breaking the glass shell (the glass “tube” itself). Heater failure most often happens due to the mechanical stress of a cold start. Only in certain limited, always-on professional applications, such as specialized computing and undersea cables, have specially designed valves in carefully designed circuits, and well cooled environments reached operational lives of tens or hundreds of thousands of hours. Heater supplies are required for the cathodes Beside the investment cost, the share of the power budget that goes into heating the cathode, without contributing to output, can range from few percent points of anode dissipation (in high power applications at full output), to broadly comparable to anode dissipation in small signal applications. Large circuit temperature swings in on/off cycles Massive stray heat from cathode heaters in common low power tubes means that adjoining circuits experience changes in temperature that can exceed . This requires heat resistant components. In applications this also means that all frequency-determining components may have to heat to thermal equilibrium before frequency stability is reached. While at broadcast (medium wave) receivers and in loosely tuned sets this was not a problem, in typical radio receivers and transmitters with free-running oscillators at frequencies this thermal stabilization required about one hour. 
On the other hand, miniature ultra-low power direct-heated valves do not produce much heat in absolute terms, cause more modest temperature swings, and allow equipment that contains few of them to stabilize sooner. No "instant on" from a cold start Valve cathodes need to heat to a glow to start conducting. In indirect-heating cathodes this could take up to 20 seconds. Apart from temperature-related instability, this meant that valves would not work instantly when powered. This led to development of always-on preheating systems for vacuum tube appliances that shortened the wait and may have reduced valve failures from thermal shock, but at the price of a continuous power drain, and an increased fire hazard. On the other hand, very small, ultra low power direct-heated valves turn on in tenths of a second from a cold start. Dangerously high voltages Anodes of tubes may require dangerously high voltages to function as intended. In general, tubes themselves will not be troubled by high voltage, but high voltages will demand extra precautions in circuit layout and design, to avoid “flashover”. Wrong impedance for convenient use High impedance output (high voltage/low current) is typically not suitable for directly driving many real world loads, notably various forms of electric motor Valves only have one polarity Compared to transistors, valves have the disadvantage of having a single polarity, whereas for most uses, transistors are available as pairs with complementary polarities (e.g., / ), making possible many circuit configurations that cannot be realized with valves. Distortion The most efficient valve-based RF amplifiers operate class C. If used with no tuned circuit in the output, this would distort the input signal, producing harmonics. However, class C amplifiers normally use a high output network which removes the harmonics, leaving an undistorted sine wave identical to the input waveform. Class C is suitable only for amplifying signals with a constant amplitude, such as , , and some (Morse code) signals. Where the amplitude of the input signal to the amplifier varies as with single-sideband modulation, amplitude modulation, video and complex digital signals, the amplifier must operate class A or AB, to preserve the envelope of the driving signal in an undistorted form. Such amplifiers are referred to as linear amplifiers. It is also common to modify the gain of an amplifier operating class C so as to produce amplitude modulation. If done in a linear manner, this modulated amplifier is capable of low distortion. The output signal can be viewed as a product of the input signal and the modulating signal. The development of broadcasting improved fidelity by using a greater bandwidth which was available in the range, and where atmospheric noise was absent. also has an inherent ability to reject noise, which is mostly amplitude modulated. Valve technology suffers high-frequency limitations due to cathode-anode transit time. However, tetrodes are successfully used into the range and triodes into the low GHz range. Modern broadcast transmitters use both valve and solid state devices, with valves tending to be more used at the highest power levels. transmitters operate class C with very low distortion. Today's digital radio that carries coded data over various phase modulations (such as , , etc.) and also the increasing demand for spectrum have forced a dramatic change in the way radio is used, e.g. the cellular radio concept. 
Today's cellular radio and digital broadcast standards are extremely demanding in terms of the spectral envelope and out of band emissions that are acceptable (in the case of for example, −70 dB or better just a few hundred kilohertz from center frequency). Digital transmitters must therefore operate in the linear modes, with much attention given to achieving low distortion. Applications Historic transmitters and receivers (High voltage/high power) Valve stages were used to amplify the received radio frequency signals, the intermediate frequencies, the video signal and the audio signals at the various points in the receiver. Historically (pre WWII) "transmitting tubes" were among the most powerful tubes available, were usually direct heated by thoriated filaments that glowed like light bulbs. Some tubes were built to be very rugged, capable of being driven so hard that the anode would itself glow cherry red, the anodes being machined from solid material (rather than fabricated from thin sheet) to be able to withstand this without distorting when heated. Notable tubes of this type are the 845 and 211. Later beam power tubes such as the 807 and (direct heated) 813 were also used in large numbers in (especially military) radio transmitters. Bandwidth of valve vs solid state amplifiers Today, radio transmitters are overwhelmingly solid state, even at microwave frequencies (cellular radio base stations). Depending on the application, a fair number of radio frequency amplifiers continue to have valve construction, due to their simplicity, where as, it takes several output transistors with complex splitting and combining circuits to equal the same amount of output power of a single valve. Valve amplifier circuits are significantly different from broadband solid state circuits. Solid state devices have a very low output impedance which allows matching via a broadband transformer covering a large range of frequencies, for example 1.8 to 30 MHz. With either class C or AB operation, these must include low pass filters to remove harmonics. While the proper low pass filter must be switch selected for the frequency range of interest, the result is considered to be a "no tune" design. Valve amplifiers have a tuned network that serves as both the low pass harmonic filter and impedance matching to the output load. In either case, both solid state and valve devices need such filtering networks before the RF signal is output to the load. Radio circuits Unlike audio amplifiers, in which the analog output signal is of the same form and frequency as the input signal, RF circuits may modulate low frequency information (audio, video, or data) onto a carrier (at a much higher frequency), and the circuitry comprises several distinct stages. For example, a radio transmitter may contain: an audio frequency (AF) stage (typically using conventional broadband small signal circuitry as described in Valve audio amplifier, one or more oscillator stages that generate the carrier wave, one or more mixer stages that modulate the carrier signal from the oscillator, the amplifier stage itself operating at (typically) high frequency. the Transmitter power amp itself is the only high power stage in a radio system, and operates at the carrier frequency. In AM, the modulation (frequency mixing) usually takes place in the final amplifier itself. Transmitter anode circuits The most common anode circuit is a tuned LC circuit where the anodes are connected at a voltage node. This circuit is often known as the anode tank circuit. 
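As a rough numerical illustration of how an anode tank circuit presents a high load impedance at resonance, here is a short Python sketch. The component values and loaded Q are invented for a hypothetical 7 MHz stage and are not taken from the article; the relation used is the standard approximation that a parallel resonant circuit presents an impedance of roughly Q times the inductive reactance.

```python
import math

# Illustrative, hypothetical values for a plate tank in a 7 MHz amplifier stage.
L = 2.2e-6        # tank inductance in henries
C = 235e-12       # tank capacitance in farads
Q_loaded = 12     # loaded Q chosen by the designer

f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # resonant frequency
XL = 2.0 * math.pi * f0 * L                     # inductive reactance at resonance
Rp = Q_loaded * XL                              # approximate load seen by the anode

print(f"f0 = {f0/1e6:.2f} MHz, XL = {XL:.0f} ohm, anode load ~ {Rp:.0f} ohm")
```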
Active (or tuned grid) amplifier An example of this used at VHF/UHF include the 4CX250B, an example of a twin tetrode is the QQV06/40A. Neutralization is a term used in TGTP (tuned grid tuned plate) amplifiers for the methods and circuits used for stabilization against unwanted oscillations at the operating frequency caused by the inadvertent introduction of some of the output signal back into the input circuits. This mainly occurs via the grid to plate capacity, but can also come via other paths, making circuit layout important. To cancel the unwanted feedback signal, a portion of the output signal is deliberately introduced into the input circuit with the same amplitude but opposite phase. When using a tuned circuit in the input, the network must match the driving source to the input impedance of the grid. This impedance will be determined by the grid current in Class C or AB2 operation. In AB1 operation, the grid circuit should be designed to avoid excessive step up voltage, which although it might provide more stage gain, as in audio designs, it will increase instability and make neutralization more critical. In common with all three basic designs shown here, the anode of the valve is connected to a resonant LC circuit which has another inductive link which allows the RF signal to be passed to the output. The circuit shown has been largely replaced by a Pi network which allows simpler adjustment and adds low pass filtering. Operation The anode current is controlled by the electrical potential (voltage) of the first grid. A DC bias is applied to the valve to ensure that the part of the transfer equation which is most suitable to the required application is used. The input signal is able to perturb (change) the potential of the grid, this in turn will change the anode current (also known as the plate current). In the RF designs shown on this page, a tuned circuit is between the anode and the high voltage supply. This tuned circuit is brought to resonance presenting an inductive load that is well matched to the valve and thus results in an efficient power transfer. As the current flowing through the anode connection is controlled by the grid, then the current flowing through the load is also controlled by the grid. One of the disadvantages of a tuned grid compared to other RF designs is that neutralization is required. Passive grid amplifier A passive grid circuit used at VHF/UHF frequencies might use the 4CX250B tetrode. An example of a twin tetrode would be the QQV06/40A. The tetrode has a screen grid which is between the anode and the first grid, which being grounded for RF, acts as a shield to reducing the effective capacitance between the first grid and the anode. The combination of the effects of the screen grid and the grid damping resistor often allow the use of this design without neutralization. The screen found in tetrodes and pentodes, greatly increases the valve's gain by reducing the effect of anode voltage on anode current. The input signal is applied to the valve's first grid via a capacitor. The value of the grid resistor determines the gain of the amplifier stage. The higher the resistor the greater the gain, the lower the damping effect and the greater the risk of instability. With this type of stage good layout is less vital. 
Advantages Stable, no neutralizing required normally Constant load on the exciting stage Disadvantages Low gain, more input power is required Less gain than tuned grid Less filtering than tuned grid (more broadband), hence the amplification of out of band spurious signals, such as harmonics, from an exciter is greater Grounded grid amplifier This design normally uses a triode so valves such as the 4CX250B are not suitable for this circuit, unless the screen and control grids are joined, effectively converting the tetrode into a triode. This circuit design has been used at 1296 MHz using disk seal triode valves such as the 2C39A. The grid is grounded and the drive is applied to the cathode through a capacitor. The heater supply must be isolated from the cathode as unlike the other designs the cathode is not connected to RF ground. Some valves, such as the 811A, are designed for "zero bias" operation and the cathode can be at ground potential for DC. Valves that require a negative grid bias can be used by putting a positive DC voltage on the cathode. This can be achieved by putting a zener diode between the cathode and ground or using a separate bias supply. Advantages Stable, no neutralizing required normally Some of the power from the exciting stage appears in the output Disadvantages Relatively low gain, typically about 10 dB. The heater must be isolated from ground with chokes. Neutralization The valve interelectrode capacitance which exists between the input and output of the amplifier and other stray coupling may allow enough energy to feed back into the input so as to cause self-oscillation in an amplifier stage. For the higher gain designs this effect must be counteracted. Various methods exist for introducing an out-of-phase signal from the output back to the input so that the effect is cancelled. Even when the feedback is not sufficient to cause oscillation it can produce other effects, such as difficult tuning. Therefore, neutralization can be helpful, even for an amplifier that does not oscillate. Many grounded grid amplifiers use no neutralization, but at 30 MHz adding it can smooth out the tuning. An important part of the neutralization of a tetrode or pentode is the design of the screen grid circuit. To provide the greatest shielding effect, the screen must be well-grounded at the frequency of operation. Many valves will have a "self-neutralizing" frequency somewhere in the VHF range. This results from a series resonance consisting of the screen capacity and the inductance of the screen lead, thus providing a very low impedance path to ground. UHF Transit time effects are important at these frequencies, so feedback is not normally usable and for performance critical applications alternative linearisation techniques have to be used such as degeneration and feedforward. Tube noise and noise figure Noise figure is not usually an issue for power amplifier valves, however, in receivers using valves it can be important. While such uses are obsolete, this information is included for historical interest. Like any amplifying device, valves add noise to the signal to be amplified. Even with a hypothetical perfect amplifier, however, noise is unavoidably present due to thermal fluctuations in the signal source (usually assumed to be at room temperature, T = 295 K). Such fluctuations cause an electrical noise power of \(k_B T B\), where kB is the Boltzmann constant and B the bandwidth.
Correspondingly, the voltage noise of a resistance R into an open circuit is \(\sqrt{4 k_B T R B}\) and the current noise into a short circuit is \(\sqrt{4 k_B T B / R}\). The noise figure is defined as the ratio of the noise power at the output of the amplifier relative to the noise power that would be present at the output if the amplifier were noiseless (due to amplification of thermal noise of the signal source). An equivalent definition is: noise figure is the factor by which insertion of the amplifier degrades the signal to noise ratio. It is often expressed in decibels (dB). An amplifier with a 0 dB noise figure would be perfect. The noise properties of tubes at audio frequencies can be modeled well by a perfect noiseless tube having a source of voltage noise in series with the grid. For the EF86 tube, for example, this voltage noise is specified (see e.g., the Valvo, Telefunken or Philips data sheets) as 2 microvolts integrated over a frequency range of approximately 25 Hz to 10 kHz. (This refers to the integrated noise, see below for the frequency dependence of the noise spectral density.) This equals the voltage noise of a 25 kΩ resistor. Thus, if the signal source has an impedance of 25 kΩ or more, the noise of the tube is actually smaller than the noise of the source. For a source of 25 kΩ, the noise generated by tube and source are the same, so the total noise power at the output of the amplifier is twice the noise power at the output of the perfect amplifier. The noise figure is then two, or 3 dB. For higher impedances, such as 250 kΩ, the EF86's voltage noise is lower than the source's own noise. It therefore adds 1/10 of the noise power caused by the source, and the noise figure is 0.4 dB. For a low-impedance source of 250 Ω, on the other hand, the noise voltage contribution of the tube is 10 times larger than the signal source, so that the noise power is one hundred times larger than that caused by the source. The noise figure in this case is 20 dB. To obtain a low noise figure the impedance of the source can be increased by a transformer. This is eventually limited by the input capacity of the tube, which sets a limit on how high the signal impedance can be made if a certain bandwidth is desired. The noise voltage density of a given tube is a function of frequency. At frequencies above 10 kHz or so, it is basically constant ("white noise"). White noise is often expressed by an equivalent noise resistance, which is defined as the resistance which produces the same voltage noise as present at the tube input. For triodes, it is approximately (2-4)/gm, where gm is the transconductance. For pentodes, it is higher, about (5-7)/gm. Tubes with high gm thus tend to have lower noise at high frequencies. For example, it is 300 Ω for one half of the ECC88, 250 Ω for an E188CC (both have gm = 12.5 mA/V) and as low as 65 Ω for a triode-connected D3a (gm = 40 mA/V). In the audio frequency range (below 1–100 kHz), "1/f" noise becomes dominant, which rises like 1/f. (This is the reason for the relatively high noise resistance of the EF86 in the above example.) Thus, tubes with low noise at high frequency do not necessarily have low noise in the audio frequency range. For special low-noise audio tubes, the frequency at which 1/f noise takes over is reduced as far as possible, maybe to approximately a kilohertz. It can be reduced by choosing very pure materials for the cathode nickel, and running the tube at an optimized (generally low) anode current.
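The EF86 worked example above follows directly from modelling the tube's noise as an equivalent resistance in series with the grid. A small sketch, assuming that model and that the uncorrelated noise powers simply add, reproduces the three figures quoted in the text:

```python
import math

def noise_figure_db(r_source, r_eq):
    """Noise figure of an otherwise noiseless amplifier whose input noise is
    modelled as a resistor r_eq in series with the grid (resistances in ohms).
    Noise powers add, and thermal noise power is proportional to resistance."""
    f = 1.0 + r_eq / r_source
    return 10.0 * math.log10(f)

R_EQ_EF86 = 25e3   # the EF86's 2 uV (25 Hz - 10 kHz) corresponds to ~25 kohm, per the text
for rs in (250, 25e3, 250e3):
    print(f"source {rs:>8.0f} ohm -> NF = {noise_figure_db(rs, R_EQ_EF86):.1f} dB")
```

The printed values come out as roughly 20 dB, 3 dB and 0.4 dB, matching the 250 Ω, 25 kΩ and 250 kΩ cases discussed above.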
At radio frequencies, things are more complicated: (i) The input impedance of a tube has a real component that goes down like 1/f² (due to cathode lead inductance and transit time effects). This means the input impedance can no longer be increased arbitrarily in order to reduce the noise figure. (ii) This input resistance has its own thermal noise, just like any resistor. (The "temperature" of this resistor for noise purposes is closer to the cathode temperature than to room temperature.) Thus, the noise figure of tube amplifiers increases with frequency. At 200 MHz, a noise figure of 2.5 (or 4 dB) can be reached with the ECC2000 tube in an optimized cascode circuit with an optimized source impedance. At 800 MHz, tubes like the EC8010 have noise figures of about 10 dB or more. Planar triodes are better, but very early on, transistors reached noise figures substantially lower than those of tubes at UHF. Thus, the tuners of television sets were among the first parts of consumer electronics where transistors were used. Decline Semiconductor amplifiers have overwhelmingly displaced valve amplifiers for low- and medium-power applications at all frequencies. Valves continue to be used in some high-power, high-frequency amplifiers used for short wave broadcasting, VHF and UHF TV and (VHF) FM radio, as well as in existing "radar, countermeasures equipment, or communications equipment" using specially designed valves, such as the klystron, gyrotron, traveling-wave tube, and crossed-field amplifier; however, new designs for such products are now invariably semiconductor-based. Footnotes Works cited References Radio communication handbook (5th Ed), Radio Society of Great Britain, 1976. External links AM band (medium wave, short wave) old valve type radio The Audio Circuit - An almost complete list of manufacturers, DIY kits, materials and parts and 'how they work' sections on valve amplifiers Conversion calculator - distortion factor to distortion attenuation and THD Radio electronics Valve amplifiers
Valve RF amplifier
[ "Engineering" ]
5,372
[ "Radio electronics" ]
9,519,992
https://en.wikipedia.org/wiki/Surf%20zone
The surf zone or breaker zone is the nearshore part of a body of open water between the line at which the waves break and the shore. As ocean surface waves approach a shore, they interact with the bottom, get taller and steeper, and break, forming the foamy surface called surf. The region of breaking waves defines the surf zone. After breaking in the surf zone, the waves (now reduced in height) continue to move in, and they run up onto the sloping front of the beach, forming an uprush of water called swash. The water then runs back again as backwash. The water in the surf zone is relatively shallow, depending on the height and period of the waves. Animal life The animals that often are found living in the surf zone are crabs, clams, and snails. Surf clams and mole crabs are two species that stand out as inhabitants of the surf zone. Both of these animals are very fast burrowers. The surf clam, also known as the variable coquina, is a filter feeder that uses its gills to filter microalgae, tiny zooplankton, and small particulates out of seawater. The mole crab is a suspension feeder that eats by capturing zooplankton with its antennae. All of these creatures burrow down into the sand to escape from being pulled into the ocean from the tides and waves. They also burrow themselves in the sand to protect themselves from predators. The surf zone is full of nutrients, oxygen, and sunlight which leaves the zone very productive with animal life. Rip currents The surf zone can contain dangerous rip currents: strong local currents which flow offshore and pose a threat to swimmers. Rip-current outlooks use the following set of qualifications: Low-risk rip currents: Wind and/or wave conditions are not expected to support the development of rip currents; however, rip currents can sometimes occur, especially in the vicinity of jetties and piers. Know how to swim and heed the advice of lifeguards. Moderate-risk rip currents: Wind and/or wave conditions support stronger or more frequent rip currents. Only experienced surf swimmers should enter the water. High-risk rip currents: Wind and/or wave conditions support dangerous rip currents. Rip currents are life-threatening to anyone entering the surf. See also Intertidal zone Littoral zone Surf fishing References Pinet, Paul R (2008) Invitation to Oceanography, Chapter 11: The Dynamic Shoreline. Edition 5 revised. Jones & Bartlett Learning, "Breaker Zone." The Free Dictionary. Farlex Inc, 2012. Web. 18 Apr. 2012. <http://www.thefreedictionary.com/breaker+zone>. External links MetEd (2012) Rip currents: Nearshore fundamentals University Corporation for Atmospheric Research. Retrieved 17 April 2012. Coastal geography Physical oceanography Oceanographical terminology
Surf zone
[ "Physics" ]
591
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
9,520,346
https://en.wikipedia.org/wiki/International%20Society%20for%20Micropiles
The International Society for Micropiles (ISM) is a consortium of international representatives involved in the research and development, design, and construction of micropiles. In 1994, a core group of ISM members formed what was then IWM, the International Workshop on Micropiles. The intention of the group was to form an international peer review team for the FHWA State of Practice Study on Micropiles (1993-1997), which focused on "classic" micropiles, namely drilled and grouted elements of high capacity. The study was undertaken as a contribution to the ongoing French national research project "FOREVER". This international team of practitioners and academicians has since gathered over the years on several occasions to ensure that the study has reflected current standards and practices worldwide. The synergy of this group of individuals, allied with major demands for technology transfer to "newer" markets mainly from Japan and Scandinavia, led to the organization of an IWM in Seattle, WA in September 1997. This first IWM has led to eight additional workshops: Ube, Japan (1999); Turku, Finland (2000); Lille, France (2001); Venice, Italy (2002); Seattle, WA (2003); Tokyo, Japan (2004); Schrobenhausen, Germany (2006); and Toronto, Canada (2007). The next workshop is scheduled for the second quarter of 2009 and will be held in London, UK. Countries sponsoring delegates and providing major contributions to the IWM throughout the years include the United States, France, Japan, Germany, Finland (and other Scandinavian countries), Italy, Belgium, Canada, and the United Kingdom. The late Dr. Fernando Lizzi of Naples, Italy, who developed the concept of pali radice (root piles), is regarded fondly as the visionary leader of ISM. He obtained the first micropile patents in 1952, and used micropiles in the restoration of many important and historic monuments internationally. As the "father of micropiles", his creative vision has allowed the technology to blossom worldwide and has sown the "roots" for its future and that of ISM. External links International Society for Micropiles International Micropile Society Meets in Germany FOREVER The French national project on micropiles Geotechnical organizations
International Society for Micropiles
[ "Engineering" ]
482
[ "Geotechnical organizations", "Civil engineering organizations" ]
9,520,921
https://en.wikipedia.org/wiki/Service%20Evaluation%20System
The Service Evaluation System (SES) was an operations support system developed by Bell Laboratories and used by telephone companies beginning in the late 1960s. Many local, long distance, and operator circuit-switching systems provided special dedicated circuits to the SES to monitor the quality of customer connections during the call setup process. Calls were selected at random by switching systems and one-way voice connections were established to the SES monitoring center. During this era, most voice connections used analog trunk circuits that were designed to conform with the Via Net Loss plan established by Bell Laboratories. The purpose of the VNL plan and five-level long distance switching hierarchy was to minimize the number of trunk circuits in a call and maximize the voice quality of the connections. Excessive loss in a voice connection meant that subscribers may have difficulty hearing each other. This was particularly important in the 1960s when dial up data connections were developed with the use of analog modems. The SES evaluated multi-frequency outpulsing signaling as well as voice impairments including sound amplitude, noise, echo, and a variety of other parameters. Deployment of common-channel signaling systems such as Common Channel Interoffice Signaling and later Signaling System #7 obviated the need to monitor multi-frequency signaling as it became obsolete. The Service Evaluation System was described in Notes on the Network published by AT&T in 1970, 1975, 1980 and later versions published by Bell Communications Research (now Telcordia Technologies) in 1983, 1986, 1990, 1994, and 2000. References Telecommunications standards Telecommunications systems
Service Evaluation System
[ "Technology" ]
310
[ "Telecommunications systems" ]
9,520,954
https://en.wikipedia.org/wiki/Algebraic%20cycle
In mathematics, an algebraic cycle on an algebraic variety V is a formal linear combination of subvarieties of V. These are the part of the algebraic topology of V that is directly accessible by algebraic methods. Understanding the algebraic cycles on a variety can give profound insights into the structure of the variety. The most trivial case is codimension zero cycles, which are linear combinations of the irreducible components of the variety. The first non-trivial case is that of codimension one subvarieties, called divisors. The earliest work on algebraic cycles focused on the case of divisors, particularly divisors on algebraic curves. Divisors on algebraic curves are formal linear combinations of points on the curve. Classical work on algebraic curves related these to intrinsic data, such as the regular differentials on a compact Riemann surface, and to extrinsic properties, such as embeddings of the curve into projective space. While divisors on higher-dimensional varieties continue to play an important role in determining the structure of the variety, on varieties of dimension two or more there are also higher codimension cycles to consider. The behavior of these cycles is strikingly different from that of divisors. For example, every curve has a constant N such that every divisor of degree zero is linearly equivalent to a difference of two effective divisors of degree at most N. David Mumford proved that, on a smooth complete complex algebraic surface S with positive geometric genus, the analogous statement for the group of rational equivalence classes of codimension two cycles in S is false. The hypothesis that the geometric genus is positive essentially means (by the Lefschetz theorem on (1,1)-classes) that the cohomology group H²(S) contains transcendental information, and in effect Mumford's theorem implies that, despite having a purely algebraic definition, the group of codimension two cycle classes shares transcendental information with H²(S). Mumford's theorem has since been greatly generalized. The behavior of algebraic cycles ranks among the most important open questions in modern mathematics. The Hodge conjecture, one of the Clay Mathematics Institute's Millennium Prize Problems, predicts that the topology of a complex algebraic variety forces the existence of certain algebraic cycles. The Tate conjecture makes a similar prediction for étale cohomology. Alexander Grothendieck's standard conjectures on algebraic cycles yield enough cycles to construct his category of motives and would imply that algebraic cycles play a vital role in any cohomology theory of algebraic varieties. Conversely, Alexander Beilinson proved that the existence of a category of motives implies the standard conjectures. Additionally, cycles are connected to algebraic K-theory by Bloch's formula, which expresses groups of cycles modulo rational equivalence as the cohomology of K-theory sheaves. Definition Let X be a scheme of finite type over a field k. An algebraic r-cycle on X is a formal linear combination Z = ∑i ni[Vi] of r-dimensional closed integral k-subschemes Vi of X. The coefficient ni is the multiplicity of Vi. The set of all r-cycles is the free abelian group Zr(X) generated by the r-dimensional closed integral subschemes V of X. The groups of cycles for varying r together form a group Z(X), the direct sum of the groups Zr(X) over all r. This is called the group of algebraic cycles, and any element is called an algebraic cycle. A cycle is effective or positive if all its coefficients are non-negative. 
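As a concrete illustration of the definition (an added example, not part of the original article), take X to be a smooth projective curve over k, so that 0-cycles are finite formal sums of closed points:

```latex
% A 0-cycle on a smooth projective curve X over k:
Z = 3[P] - 2[Q] \in Z_0(X), \qquad n_P = 3,\; n_Q = -2 .
% Z is not effective because of the negative coefficient n_Q,
% whereas Z' = [P] + 2[Q] is an effective 0-cycle.
```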
Closed integral subschemes of X are in one-to-one correspondence with the scheme-theoretic points of X under the map that, in one direction, takes each subscheme to its generic point, and in the other direction, takes each point to the unique reduced subscheme supported on the closure of the point. Consequently Z(X) can also be described as the free abelian group on the points of X. A cycle Z is rationally equivalent to zero, written Z ~ 0, if there are a finite number of (r+1)-dimensional subvarieties Wi of X and non-zero rational functions ri on Wi such that Z = ∑i div(ri), where div(ri) denotes the divisor of the rational function ri on Wi. The cycles rationally equivalent to zero form a subgroup Zr(X)rat of Zr(X), and the group of r-cycles modulo rational equivalence is the quotient Ar(X) = Zr(X)/Zr(X)rat. This group is also denoted CHr(X). Elements of the group are called cycle classes on X. Cycle classes are said to be effective or positive if they can be represented by an effective cycle. If X is smooth, projective, and of pure dimension N, the above groups are sometimes reindexed cohomologically as Z^(N−r)(X) = Zr(X) and A^(N−r)(X) = Ar(X). In this case, the direct sum A*(X) of the groups A^j(X) is called the Chow ring of X because it has a multiplication operation given by the intersection product. There are several variants of the above definition. We may substitute another ring for the integers as our coefficient ring. The case of rational coefficients is widely used. Working with families of cycles over a base, or using cycles in arithmetic situations, requires a relative setup. Let X → S, where S is a regular Noetherian scheme. An r-cycle is a formal sum of closed integral subschemes of X whose relative dimension is r; here the relative dimension of a closed integral subscheme Y is the transcendence degree of k(Y) over k(T) minus the codimension of T in S, where T is the closure of the image of Y in S. Rational equivalence can also be replaced by several other coarser equivalence relations on algebraic cycles. Other equivalence relations of interest include algebraic equivalence, homological equivalence for a fixed cohomology theory (such as singular cohomology or étale cohomology), numerical equivalence, as well as all of the above modulo torsion. These equivalence relations have (partially conjectural) applications to the theory of motives. Flat pullback and proper pushforward There is a covariant and a contravariant functoriality of the group of algebraic cycles. Let f : X → X' be a map of varieties. If f is flat of some constant relative dimension (i.e. all fibers have the same dimension), we can define, for any subvariety Y' ⊂ X', the pullback f^*[Y'] = [f^(−1)(Y')]; the preimage f^(−1)(Y') by assumption has the same codimension as Y′. Conversely, if f is proper, for Y a subvariety of X the pushforward is defined to be f_*[Y] = n[f(Y)], where n is the degree of the extension of function fields [k(Y) : k(f(Y))] if the restriction of f to Y is finite, and 0 otherwise. By linearity, these definitions extend to homomorphisms of abelian groups between the groups of cycles (the latter by virtue of the convention that the pushforward is zero when the restriction of f to Y is not finite). See Chow ring for a discussion of the functoriality related to the ring structure. See also divisor (algebraic geometry) Relative cycle References Algebraic geometry
Algebraic cycle
[ "Mathematics" ]
1,353
[ "Fields of abstract algebra", "Algebraic geometry" ]
9,521,106
https://en.wikipedia.org/wiki/Phenyl-D-galactopyranoside
Phenyl-D-galactopyranoside is a substituted galactoside. See also Lac operon References Galactosides Phenol ethers
Phenyl-D-galactopyranoside
[ "Chemistry" ]
56
[]
9,521,204
https://en.wikipedia.org/wiki/%C5%BDaliakalnis%20Funicular
Žaliakalnis Funicular (English: Green Hill Funicular) is a funicular railway in Kaunas, Lithuania. Built in 1931, it is the oldest funicular in Lithuania and is among the oldest vehicles of such type in the world still operational. The funicular is made of a wood-paneled coachwork and climbs up from behind the Vytautas the Great War Museum to the Basilica of the Resurrection. Upon the city council decision to improve the communication between the Žaliakalnis neighborhood with city center, the funicular was constructed by engineering firm Curt Rudolph Transportanlagen from Dresden, Germany with electrical equipment from AEG and mechanical parts from Bell Maschinenfabrik, Switzerland. The official opening was on 5 August 1931 with one passenger car, while the second car was only a platform ballasted with stones used to counterbalance the passenger car. The electric overhead power cable and the pantographs of the coaches are only used for lighting and heating of the cars. The upper station housed the electrically driven funicular mechanism in the basement, whilst the lower end of the line did not even have a shelter until 1932. The funicular succeeded and became very popular transportation for the city inhabitants and guests. It is known that about 5 million passengers were moved using it between 1950 and 1970. The funicular was renovated between 1935 and 1937. New, larger cars with car bodies from Napoleonas Dobkevičius on underframes from Bell Maschinenfabrik were built, and the lower station was given a proper building. The Žaliakalnis Funicular Railway was included in the Registry of Immovable Cultural Heritage Sites of the Republic of Lithuania in 1993. In 2015, the funicular was one of 44 objects in Kaunas to receive the European Heritage Label. See also Aleksotas Funicular Railway completed in Kaunas in 1935 List of funicular railways References 1931 establishments in Lithuania Transport in Kaunas Buildings and structures in Kaunas Funicular railways in Lithuania Tourist attractions in Kaunas Heritage railways Narrow gauge railways in Lithuania 1200 mm gauge railways in Lithuania
Žaliakalnis Funicular
[ "Engineering" ]
422
[ "Heritage railways", "Engineering preservation societies" ]
9,521,209
https://en.wikipedia.org/wiki/Shortest-path%20tree
In mathematics and computer science, a shortest-path tree rooted at a vertex v of a connected, undirected graph G is a spanning tree T of G, such that the path distance from root v to any other vertex u in T is the shortest path distance from v to u in G. In connected graphs where shortest paths are well-defined (i.e. where there are no negative-length cycles), we may construct a shortest-path tree using the following algorithm: Compute dist(u), the shortest-path distance from root v to vertex u in G, using Dijkstra's algorithm or the Bellman–Ford algorithm. For all non-root vertices u, we can assign to u a parent vertex pu such that pu is connected to u, and that dist(pu) + edge_dist(pu,u) = dist(u). In case multiple choices for pu exist, choose pu for which there exists a shortest path from v to pu with as few edges as possible; this tie-breaking rule is needed to prevent loops when there exist zero-length cycles. Construct the shortest-path tree using the edges between each node and its parent. The above algorithm guarantees the existence of shortest-path trees. Like minimum spanning trees, shortest-path trees in general are not unique. In graphs for which all edge weights are equal, shortest-path trees coincide with breadth-first search trees. In graphs that have negative cycles, the set of shortest simple paths from v to all other vertices does not necessarily form a tree. For simple connected graphs, shortest-path trees can be used to suggest a non-linear relationship between two network centrality measures, closeness and degree. By assuming that the branches of the shortest-path trees are statistically similar for any root node in one network, one may show that the size of the branches depends only on the number of branches connected to the root vertex, i.e. on the degree of the root node. From this one deduces that the inverse of closeness, a length scale associated with each vertex, varies approximately linearly with the logarithm of degree. The relationship is not exact but it captures a correlation between closeness and degree in a large number of networks constructed from real data, and this success suggests that shortest-path trees can be a useful approximation in network analysis. See also Shortest path problem References Spanning tree
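The construction described above can be sketched in a few lines of code. The sketch below is an illustrative addition: it assumes non-negative edge weights so that Dijkstra's algorithm applies (the Bellman–Ford variant needed for negative weights is not shown), and the adjacency-dict graph representation and names are hypothetical. It breaks ties in favour of parents reachable in fewer edges, mirroring the tie-breaking rule above.

```python
import heapq

def shortest_path_tree(graph, root):
    """Return (dist, parent) for a shortest-path tree rooted at `root`.

    `graph` maps each vertex to a dict {neighbor: non-negative edge weight};
    for an undirected graph, list each edge in both directions.  `parent`
    records, for every reachable vertex, the tree edge chosen on a shortest
    path that uses as few edges as possible.
    """
    INF = float("inf")
    dist, hops, parent = {root: 0}, {root: 0}, {root: None}
    heap = [(0, 0, root)]                       # (distance, edge count, vertex)
    while heap:
        d, h, u = heapq.heappop(heap)
        if (d, h) > (dist.get(u, INF), hops.get(u, INF)):
            continue                            # stale heap entry
        for v, w in graph[u].items():
            nd, nh = d + w, h + 1
            # Prefer strictly shorter paths; break ties by fewer edges.
            if nd < dist.get(v, INF) or (nd == dist.get(v, INF) and nh < hops.get(v, INF)):
                dist[v], hops[v], parent[v] = nd, nh, u
                heapq.heappush(heap, (nd, nh, v))
    return dist, parent

# Tiny example: the tree edges are (parent[u], u) for every non-root vertex u.
g = {"v": {"a": 1, "b": 4}, "a": {"v": 1, "b": 1}, "b": {"v": 4, "a": 1}}
print(shortest_path_tree(g, "v"))   # dist: {'v': 0, 'a': 1, 'b': 2}
```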
Shortest-path tree
[ "Mathematics" ]
489
[ "Graph theory stubs", "Mathematical relations", "Graph theory" ]
9,521,581
https://en.wikipedia.org/wiki/DLX%20gene%20family
Genes in the DLX family encode homeodomain transcription factors related to the Drosophila distal-less (Dll) gene. The family has been related to a number of developmental features such as jaws and limbs. The family seems to be well preserved across species. As DLX/Dll are involved in limb development in most of the major phyla, including vertebrates, it has been suggested that Dll was involved in appendage growth in an early bilaterial ancestor. Six members of the family are found in human and mice, numbered DLX1 to DLX6. They form two-gene clusters (bigene clusters) with each other. There are DLX1-DLX2, DLX3-DLX4, DLX5-DLX6 clusters in vertebrates, linked to Hox gene clusters HOXD, HOXB, and HOXA respectively. In higher fishes like the zebrafish, there are two additional DLX genes, dlx2b (dlx5) and dlx4a (dlx8). These additional genes are not linked with each other, or any other DLX gene. All six other genes remain in bigene clusters. DLX4, DLX7, DLX8 and DLX9 are the same gene in vertebrates. They are named differently because every time the same gene was found, the researchers thought they had found a new gene. Function DLX genes, like distal-less, are involved in limb development in most of the major phyla. DLX genes are involved in craniofacial morphogenesis and the tangential migration of interneurons from the subpallium to the pallium during vertebrate brain development. It has been suggested that DLX promotes the migration of interneurons by repressing a set of proteins that are normally expressed in terminally differentiated neurons and act to promote the outgrowth of dendrites and axons. Mice lacking DLX1 exhibit electrophysiological and histological evidence consistent with delayed-onset epilepsy. DLX2 has been associated with a number of areas including development of the zona limitans intrathalamica and the prethalamus. DLX4 (DLX7) is expressed in bone marrow. DLX5 and DLX6 genes are necessary for normal formation of the mandible in vertebrates. References Gene families Transcription factors
DLX gene family
[ "Chemistry", "Biology" ]
505
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
9,521,786
https://en.wikipedia.org/wiki/Submittals%20%28construction%29
Submittals in construction management can include: shop drawings, material data, samples, and product data. Submittals are required primarily for the architect and engineer to verify that the correct products will be installed on the project. This process also gives the architect and sub-consultants the opportunity to select colors, patterns, and types of material that were not chosen prior to completion of the construction drawings. This is not an occasion for the architect to select different materials than specified, but rather to clarify the selection within the quality level indicated in the specification and quantities shown on plans. For materials requiring fabrication, such as reinforcing steel and structural steel, the architect and engineer need to verify details furnished by the fabricator as well as the required quantities are met. The details from the fabricator reflect both material availability and production expediency. One tragic example of a submitted alternate design is the suspension rod and bolting details resulting in the Hyatt Regency walkway collapse. The steel fabricator was unable to produce lengths of steel as originally designed and instead proposed using shorter lengths. The proposed alternate compounded the loads on the bolts, which resulted in the skywalks collapsing on July 17, 1981. 114 people were killed. The contractor also uses this information in installation, using dimensions and installation data from the submittal. The construction documents, specifically the technical specifications, require the contractor to submit product data, samples, and shop drawings to the architect and engineer for approval. This is one of the first steps that is taken by the contractor after execution of the construction contract and issuance of the "Notice to Proceed". The submittal process affects cost, quality, schedule, and project success. On large, commercial projects the submittal process can involve thousands of different materials, fabrications and equipment. Commercial buildings will often have complex pre-fabricated components. These include: elevators, windows, cabinets, air handling units, generators, appliances and cooling towers. These pieces of equipment often require close coordination to ensure that they receive the correct power, fuel, water and structural support. The submittal process gives another level of detail usually not included as part of the design documents. An "approved" submittal authorizes quantity and quality of a material or an assembly to be released for fabrication and shipment. It ensures that the submittals have been properly vetted before final ordering. In essence, this is the final quality control mechanism before a product arrives on-site. Types of Submittals Product Data Submittal The product data submittal usually consists of the manufacturer's product information. The information included in this submittal are: Manufacturer, trade name, model or type number and quantities: This information is necessary to compare the submitted item with the specified products and acceptable products listed, in the specification and addenda. Description of use and performance characteristics: Information should be furnished describing the normal use and expected performance of the product. The architect and contractor reviews this information to confirm that the product is appropriate for the intended use. 
Size and physical characteristics: The size and physical characteristics, such as adjustment capabilities, which is reviewed by both the contractor and architect. The contractor has the most available information for comparing adjoining materials and equipment. The contractor also needs to know the size and weight of the equipment for lifting and handling considerations. Finish characteristics: The architect reviews the available finishes and selects the appropriate finish, if the finish was not previously specified in the documents. The contractor should confirm that finish requirements in the specification are being met by the product. Specific request for jobsite dimensions: Some materials are custom-fabricated to job conditions, requiring dimensions from the jobsite. These jobsite dimensions are provided by the contractor, prior to release of the product for manufacture. Shop Drawing See the article: Shop drawing. Samples Many products require submission of samples. A sample is a physical portion of the specified product. Some samples are full product samples, such as a brick or section of precast concrete, or a partial sample that indicates color or texture. The product sample is often required when several products are acceptable, to confirm the quality and aesthetic level of the material. The size or unit of sample material usually is specified. For some materials, a mock-up or sample panel is necessary. A common example of a sample panel is a wall mock-up. This is a full size mock-up of a wall assembly and can include window, exterior veneers and waterproofing. The mock-up serves as both an aesthetic review, but also provides the contractors the opportunity to field test the assembly before full-scale assembly. The mock up may be required to be tested for water tightness and lateral forces. The mock-up panel might be 10 feet wide by 12 feet high, showing the full wall span from floor to floor. Samples usually are required for finish selection or approval. Color and textures in the actual product can vary considerably from the color and textures shown in printed material. The printed brochure gives an indication of available colors, but the colors are rendered in printer's ink, rather than in the actual material. A quality level may be specified, requiring a selection of color and/or texture from sample pieces of the material. Several acceptable manufacturers may be listed in the specification and a level of quality also may be specified. The contractor, subcontractor, or supplier may have a preference for one of these products, based on price, availability, quality, workability, or service. Samples are usually stored at the jobsite and compared to the material delivered and installed. Comparison of samples with the product received is an important part of project quality control. Review of Submittals Processing time is required to produce and review submittals, shop drawings, and samples. The procedures can seem very cumbersome and time consuming, however, there are substantial reasons for review steps by all parties. The designer is ultimately responsible for the design of the facility to meet occupancy needs and must ensure that the products being installed are suitable to meet these needs. Any change in material fabrication or quantity needs to be reviewed for its acceptability with the original design. Both the architect, contractor and sub-contractor need to be able to coordinate the installation of the product with other building systems. 
Each level must review, add information as necessary, and stamp or seal that the submittal was examined and approved by that party. After the submittal reaches the primary reviewer, it is returned through the same steps, which provides an opportunity for further comment and assures that each party is aware of the approval, partial approval, notes, or rejection. This approval process is cumbersome and time-consuming. However, modern software products can greatly simplify and improve the efficiency. Typically, the architect will review the submittal for compliance to the requirement in the construction documents. Revisions may be noted on the submittal. Colors and other selection items will be made by the architect during this review. Sometimes the architect will reject the entire submittal and other times will request resubmittal of some of the items. The architect also will make corrections, which normally do not need to be resubmitted, but that do need to be applied to the product. While the architect and engineers review products for performance and design intent, the contractor must review the product for preparation, quantity and installation requirements. The contractor should manage the submittal process just like any other process in the construction cycle. The submittal process requires lead-time consideration to produce the submittal, shop drawing (engineering), review and revise and the shop fabrication period. Careful planning is necessary to ensure that the products are ordered and delivered within the construction schedule, so as not to delay any activities. The contractor must prioritize the submittal process, submitting and obtaining approval for materials needed for the first part of the project. Present-day submittal software for the construction industry can help streamline that process by grouping submittals by submittal types, using a standard material library, and ability to filter by the due date or status. Notes References Anumba C.J. and N.F.O. Evbuomwan 1997, "Concurrent Engineering in Design-Build Projects," Construction Management and Economics, 15(3):271–281. Anumba C.J., Cutting-Decelle A.F., Baldwin A.N., Dufau J., Mommessin M., Bouchlaghem N.M., "Integration of Product and Process Models as a Keystone of Concurrent Engineering in Construction: The ProMICE Project," Proceedings of 2nd European Conference on Product and Process Modelling, Amor R. (Ed.), 1998. Dubois A.M., Flynn J., Verhorf M.H.G., Augenbroe, F., "Conceptual Modelling Approaches in the COMBINE Project," Final Combine Workshop Paper, Dublin, 1995. Construction documents Building engineering
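Because lead times accumulate across review, fabrication, and shipping, the latest acceptable submittal date can be back-calculated from the required-on-site date. The sketch below is illustrative only; the durations, the buffer, and the function name are hypothetical, and real lead times come from the supplier and the contract's review-turnaround provisions.

```python
from datetime import date, timedelta

def latest_submittal_date(required_on_site, review_days, fabrication_days,
                          shipping_days, buffer_days=5):
    """Back-calculate the latest date a submittal should be sent so that the
    product can still arrive by `required_on_site` (all durations in calendar
    days; the values used here are examples, not industry standards)."""
    total_lead = review_days + fabrication_days + shipping_days + buffer_days
    return required_on_site - timedelta(days=total_lead)

# Example: needed on site 1 Nov 2024; 14-day review, 42-day fabrication,
# 10-day shipping and a 5-day buffer -> submit no later than 22 Aug 2024.
print(latest_submittal_date(date(2024, 11, 1), 14, 42, 10))
```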
Submittals (construction)
[ "Engineering" ]
1,811
[ "Building engineering", "Civil engineering", "Architecture" ]
9,521,942
https://en.wikipedia.org/wiki/Eroto-comatose%20lucidity
Eroto-comatose lucidity is a technique of sex magic known best by its formulation by English author and occultist Aleister Crowley in 1912, but which has several variations and is used in a number of ways by different spiritual communities. A common form of the ritual uses repeated sexual stimulation (but not to physical orgasm) to place the individual in a state between full sleep and full wakefulness as well as exhaustion, allowing the practitioner to commune with their god. History Aleister Crowley documented the ritual. However, Crowley may not have been the originator of the rite, and may have learned about it from a female student first. Crowley wrote in his work De Arte Magica that eroto-comatose lucidity is also called the "sleep of Siloam" and Newcomb notes that this rite preceded Crowley. He points out that Paschal Beverly Randolph ("arguably the single most important figure in the rise of modern sexual magic") called this ritualistic state the "sleep of Sialam." Randolph first discussed the "sleep of Sialam" in his 1873 work Ravalette, but described it at the time as a once-in-a-century prophetic trance. In later writings, Randolph used the term as a more general form of clairvoyant sleep used to understand spiritual things. Helena Blavatsky may also have taught the technique, calling it the "Sleep of Siloam." In her 1877 work Isis Unveiled, Blavatsky wrote that the trance must be induced through drugs rather than sexual exhaustion. Later, Blavatsky altered her understanding of the rite to mean that drug-induced trance-like state in which a new initiate first comprehends spiritual things. This was described in Blavatsky's 1888 work Secret Doctrine, and she taught that the ritualistic state allowed the individual to either commune with the gods, descend into hell, or perform spiritual acts. Blavatsky taught this was a deep sleep, but Newcomb notes that modern ritualists do not enter sleep but rather a state between sleep and wakefulness. Sexual practices used for spiritual purposes are not new. Eastern traditions within Taoism and tantrism also incorporated sexual rituals. Process Crowley first described the rite in a tract titled Eroto-Comatose Lucidity. The ritual as described by Crowley involves one "ritualist-seer" and several aides. Donald Michael Kraig advises that the more sexually experienced the aides are, the better the ritual works, and that the aides be members of the opposite sex. Religious scholar Hugh Urban, however, concludes that, for Crowley, aides of the same gender as the ritualist (e.g., homosexual activity) was the highest stage of practice of this ritual. In the first part of the ritual, the aides seek repeatedly both to arouse the ritualist sexually and to exhaust her or him. The ritualist is generally passive in this regard. There is disagreement over whether sexual arousal is enough, or sexual orgasm must be eventually accomplished. Crowley and others argue that orgasm must be avoided. Although later practitioners conclude that orgasm does not need to be avoided, that was how Crowley originally formulated the ritual. Most practitioners agree with Crowley that every means of arousal may be used, such as physical stimulation, genital stimulation, psychological stimulation, devices (such as sex toys), or drugs (an entheogen like hashish, marijuana, or other aphrodisiacs). There should be enough aides so that if one aide tires another may take his or her place. Eventually, the ritualist will tend to sink into sleep due to exhaustion. 
In the second part of the ritual, the aides seek to come close to awakening the ritualist through sexual stimulation alone. The goal is not to fully awaken her or him, but rather to bring them to the brink of wakefulness. Not all authors agree that the ritualist seer will be in a state between sleep and wakefulness, instead asserting that exhaustion will lead to a trance, or "sleep of lucidity". The ritualist should be neither too tired or too uncomfortable to aid in the trance-like state. Once the ritualist reaches a near-waking state, sexual stimulation must stop. The ritualist-seer is then permitted to sink back toward (but not into) sleep. This step is repeated indefinitely until the ritualist reaches a state between sleep and wakefulness in which communing with a higher power may occur. Some say a goal during this time is to not become "lost" in the trance-like state, but to remain open without directing an outcome. The ritualist may also conduct spiritual work while in this state, or witness mystical events. Exhaustion may not be necessary for the ritualist who is "bodily pure," Crowley writes. Endings The rite may end in one of two ways. The ritualist may simply sink into total sleep, or they may achieve orgasm and then sink into a deep and "undisturbable" sleep. Jason Newcomb, however, concludes that sexual exhaustion achieved through repeated orgasm may also lead to the ritualistic state and does not necessarily end the rite. Frater U. D., however, has argued that the orgasmic moment should not be lost and that the individual should strive to use the moment for spiritual or magical purposes. Upon awakening, the ritualist seer could, for example, write down everything they had experienced, witnessed, or been told. At least one author concludes that what is desired should be focused on throughout the rite, and that the individual should not be distracted from it or free of desire. Crowley also intended that when men do the ritual, any semen (or "elixir") produced by orgasm must be consumed by the ritualist, possibly in a Crowley inspired "Cake of Light". Similar rites A similar rite of sexual exhaustion described by Crowley leads not to spiritual communing but a sort of vampirism. In this rite, the aides use only the mouth to sexually exhaust the ritualist, and the intent of the aides must not be to assist the ritualist but rather to transfer the ritualist's own magical strength to themselves. Crowley claimed that when the ritualist is pushed to the point of death from sexual exhaustion in this way, the ritualist's spirit is enslaved by the aides and his or her power transferred to the aides. Michael W. Ford has argued for alternative rites as well. His concept of Luciferianism incorporates Crowley's ideas about sexual exhaustion, but concludes that the ritualist's will is what sends the spirit forth to bond with higher power. Ford argues for two methods of attaining sexual exhaustion and ascension: "Via Lilith" and "Via Cain." In the Lilith ritual, the room should be draped in crimson and black; music which inspires dark emotions, contains chanting, or contains horrific sounds should be played; and images of Lilith, Lilitu, and succubi should hang in the room. In the Cain ritual, both the room and ritualist should be adorned with fetishes of the Horned God and symbols of Cain, and Middle Eastern music should be played. In popular culture The rite and other sex magic practices have had a limited, marginal influence. 
Crowley's concepts have been seized on by the bands Killing Joke and Psychic TV. See also Aleister Crowley bibliography Coitus reservatus Edging (sexual practice) Maithuna References Bibliography Belanger, Michelle. Vampires in Their Own Words: An Anthology of Vampire Voices. St. Paul, Minn.: Llewellyn Worldwide, 2007. Carroll, Peter J. Liber Null & Psychonaut. Newburyport, Mass.: Red Wheel, 1987. Deveney, John Patrick. Paschal Beverly Randolph: A Nineteenth-Century Black American Spiritualist, Sosicrucian, and Sex Magician. Albany, N.Y.: SUNY Press, 1997. Kraig, Donald Michael. Modern Sex Magick: Secrets of Erotic Spirituality. Woodbury, Minn.: Llewellyn Publications, 1988. Martin, Stoddard. Art, Messianism and Crime: A Study of Antinomianism in Modern Literature and Lives. New York: Macmillan, 1986. Martin, Stoddard. Orthodox Heresy: The Rise of "Magic" as Religion and Its Relation to Literature. New York: Macmillan, 1989. Newcomb, Jason. Sexual Sorcery: A Complete Guide to Sex Magick. Newburyport, Mass.: Samuel Weiser, 2005. Reynolds, Simon. The Sex Revolts: Gender, Rebellion, and Rock 'n' Roll. Reprint ed. Cambridge, Massachusetts: Harvard University Press, 1996. Stone, Karl. "The Moonchild of Yesod: A Grimoire of Occult Hyperchemistry." (2012). Stone, Karl. "The Star of Hastur: Explorations in Hyperchemistry." (2015). U.D., Frater. Secrets of Western Sex Magic: Magical Energy and Gnostic Trance. 3d ed. St. Paul, Minn.: Llewellyn Worldwide, 2001. Urban, Hugh D. Magia Sexualis: Sex, Magic, and Liberation in Modern Western Esotericism. Berkeley, Calif.: University of California Press, 2006. Walker, Benjamin. Body Magic. Florence, Ky.: Taylor & Francis, 1979. Walker, Benjamin. Encyclopedia of Esoteric Man. New York: Routledge & Kegan Paul, 1977. External links Text of Liber CDLI (451) - containing Aleister Crowley's description of the ritual Ceremonial magic Human sexuality
Eroto-comatose lucidity
[ "Biology" ]
1,981
[ "Human sexuality", "Behavior", "Human behavior", "Sexuality" ]
9,522,326
https://en.wikipedia.org/wiki/Karel%20Niessen
Karel Frederik Niessen (1895 in Velsen – 1967) was a Dutch theoretical physicist who made contributions to quantum mechanics and is known for the Pauli–Niessen model. Education Niessen began his studies in physics at the University of Utrecht in 1914. In 1922, he received his doctorate under L. S. Ornstein. He was an assistant at the University from 1921 to 1928, except for his postdoctoral study and research at the Ludwig Maximilian University of Munich under Arnold Sommerfeld, 1925 to 1926 on a Rockefeller Foundation Fellowship. He also spent 1928 to 1929 on a Rockefeller Foundation Fellowship at the University of Wisconsin–Madison. In 1922, Niessen’s doctoral thesis, as well as Wolfgang Pauli’s extended doctoral thesis, dealt with the hydrogen molecule ion in the Bohr–Sommerfeld framework. Their work is referred to as the Pauli-Niessen model. Their works helped to show the inadequacy of the old quantum mechanics, which gave physicists the impetus to explore new paths which led to the matrix mechanics formulation of quantum mechanics by Werner Heisenberg and Max Born in 1925 and the wave mechanics formulation by Erwin Schrödinger in 1926, which were shown to be equivalent. Career Upon Niessen’s return to the Netherlands in 1929, he took a lifelong position as a theoretical physicist at Philips Electronics in Eindhoven. Selected Literature Karel F. Niessen Zur Quantentheorie des Wasserstoffmolekülions, doctoral dissertation, University of Utrecht, Utrecht: I. Van Druten (1922) as cited in Mehra, Volume 5, Part 2, 2001, p. 932. K. F. Niessen Zur Quantentheorie des Wasserstoffmolekülions, Annalen der Physik 70 129-134 (1923) K. F. Niessen Ableitung des Planckschen Strahlungsgesetzes für Atome mit zwei Freiheitsgraden, Annalen der Physik 75 743–780 (1924) K. F. Niessen (Utrecht) Die Energieberechnung in einem sehr vereinfachten Vierkörperproblem, Zeitschrift für Physik Volume 43, Numbers 9-10, Pages 675-693 (1927). Received 14 April 1927. K. F. Niessen Überdie annähernden komplexen Lösungen der Schrödingerschen Differentialgleichun für den harmonischen Oszillator, Annalen der Physik 85 487-514 (1928) as cited in Jammer, 1966, p. 279. K. F. Niessen On the Saturation of the Electric and Magnetic Polarization of Gases in Quantum Mechanics, Phys. Rev. 34 253 - 278 (1929). Department of Physics, University of Wisconsin–Madison. Received 1 June 1929. K. F. Niessen (Physics Department, Madison, Wisconsin) Ein Gas in gekreuzten Feldern nach der Quantenmechanik Journal Zeitschrift für Physik, Volume 58, Numbers 1-2, Pages 63–74 (1929). Received 13 July 1929. K. F. Niessen Über das akustische analogon der sommerfeldschen oberflächenwelle Niessen, K. F. Physica 8 (3) 337-343 (1941) K. F. Niessen On one of Heisenberg's hypotheses in the theory of specific heat of superconductors, Physica 16 (2) 77-83 (1950) References Jammer, Max The Conceptual Development of Quantum Mechanics (McGraw Hill, 1966) Mehra, Jagdish, and Helmut Rechenberg The Historical Development of Quantum Theory. Volume 5 Erwin Schrödinger and the Rise of Wave Mechanics. Part 2 The Creation of Wave Mechanics: Early Response and Applications 1925 - 1926. (Springer, 2001) Notes 1895 births 1967 deaths 20th-century Dutch physicists Quantum physicists Utrecht University alumni People from Velsen
Karel Niessen
[ "Physics" ]
863
[ "Quantum physicists", "Quantum mechanics" ]
9,522,381
https://en.wikipedia.org/wiki/Fundamental%20diagram%20of%20traffic%20flow
The fundamental diagram of traffic flow is a diagram that gives a relation between road traffic flux (vehicles/hour) and the traffic density (vehicles/km). A macroscopic traffic model involving traffic flux, traffic density and velocity forms the basis of the fundamental diagram. It can be used to predict the capability of a road system, or its behaviour when applying inflow regulation or speed limits. Basic statements There is a connection between traffic density and vehicle velocity: The more vehicles are on a road, the slower their velocity will be. To prevent congestion and to keep traffic flow stable, the number of vehicles entering the control zone has to be smaller than or equal to the number of vehicles leaving the zone in the same time. At a critical traffic density and a corresponding critical velocity the state of flow will change from stable to unstable. If one of the vehicles brakes in the unstable flow regime the flow will collapse. The primary tool for graphically displaying information in the study of traffic flow is the fundamental diagram. Fundamental diagrams consist of three different graphs: flow-density, speed-flow, and speed-density. The graphs are two-dimensional. All the graphs are related by the equation “flow = speed * density”; this equation is the essential equation in traffic flow. The fundamental diagrams were derived by plotting field data points and giving these data points a best-fit curve. With the fundamental diagrams researchers can explore the relationship between speed, flow, and density of traffic. Speed-density The speed-density relationship is linear with a negative slope; therefore, as the density increases the speed of the roadway decreases. The line crosses the speed axis, y, at the free flow speed, and the line crosses the density axis, x, at the jam density. Here the speed approaches free flow speed as the density approaches zero. As the density increases, the speed of the vehicles on the roadway decreases. The speed reaches approximately zero when the density equals the jam density. Flow-density In the study of traffic flow theory, the flow-density diagram is used to determine the traffic state of a roadway. Currently, there are two types of flow-density graphs: parabolic and triangular. Academia views the triangular flow-density curve as the more accurate representation of real-world events. The triangular curve consists of two vectors. The first vector is the freeflow side of the curve. This vector is created by placing the freeflow velocity vector of a roadway at the origin of the flow-density graph. The second vector is the congested branch, which is created by placing the vector of the shock wave speed at zero flow and jam density. The congested branch has a negative slope, which implies that the higher the density on the congested branch the lower the flow; therefore, even though there are more cars on the road, the number of cars passing a single point is less than if there were fewer cars on the road. The intersection of the freeflow and congested vectors is the apex of the curve and is considered the capacity of the roadway, which is the traffic condition at which the maximum number of vehicles can pass by a point in a given time period. The flow and density at which this point occurs are the optimum flow and optimum density, respectively. The flow-density diagram is used to give the traffic condition of a roadway. 
With the traffic conditions, time-space diagrams can be created to give travel time, delay, and queue lengths of a road segment. Speed-flow Speed-flow diagrams are used to determine the speed at which the optimum flow occurs. There are currently two shapes of the speed-flow curve. The speed-flow curve also consists of two branches, the free flow and congested branches. The diagram is not a function, allowing the flow variable to exist at two different speeds. The flow variable existing at two different speeds occurs when the speed is higher and the density is lower or when the speed is lower and the density is higher, which allows for the same flow rate. In the first speed-flow diagram, the free flow branch is a horizontal line, which shows that the roadway is at free flow speed until the optimum flow is reached. Once the optimum flow is reached, the diagram switches to the congested branch, which is a parabolic shape. The second speed-flow diagram is a parabola. The parabola suggests that the only time there is free flow speed is when the density approaches zero; it also suggests that as the flow increases the speed decreases. This parabolic graph also contains an optimum flow. The optimum flow also divides the free flow and congested branches on the parabolic graph. Macroscopic fundamental diagram A macroscopic fundamental diagram (MFD) is a type of traffic flow fundamental diagram that relates space-mean flow, density and speed of an entire network with n number of links as shown in Figure 1. The MFD thus represents the capacity, , of the network in terms of vehicle density with being the maximum capacity of the network and being the jam density of the network. The maximum capacity or “sweet spot” of the network is the region at the peak of the MFD function. Flow The space-mean flow, , across all the links of a given network can be expressed by: , where B is the area in the time-space diagram shown in Figure 2. Density The space-mean density, , across all the links of a given network can be expressed by: , where A is the area in the time-space diagram shown in Figure 2. Speed The space-mean speed, , across all the links of a given network can be expressed by: , where B is the area in the space-time diagram shown in Figure 2. Average travel time The MFD function can be expressed in terms of the number of vehicles in the network such that: where represents the total lane miles of the network. Let be the average distance driven by a user in the network. The average travel time () is:
In turn, using congestion pricing, perimeter control, and other various traffic control methods, agencies can maintain optimum network performance at the "sweet spot" peak capacity. Agencies can also use the MFD to estimate average trip times for public information and engineering purposes. Keyvan-Ekbatani et al. have exploited the notion of MFD to improve mobility in saturated traffic conditions via application of gating measures, based on an appropriate simple feedback control structure. They developed a simple (nonlinear and linearized) control design model, incorporating the operational MFD, which allows for the gating problem to be cast in a proper feedback control design setting. This allows for application and comparison of a variety of linear or nonlinear, feedback or predictive (e.g. Smith predictor, internal model control and other) control design methods from the control engineering arsenal; among them, a simple but efficient PI controller was developed and successfully tested in a fairly realistic microscopic simulation environment. See also Traffic flow Traffic wave Traffic congestion Three-detector problem and Newell's method References Road transport Transportation engineering
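The triangular flow-density relationship described earlier is simple enough to state in closed form: a free-flow branch with slope equal to the free-flow speed and a congested branch whose slope is the (negative) shock-wave speed, meeting at capacity. The sketch below is an illustrative addition with made-up parameter values, not data from any particular roadway.

```python
def triangular_flow(k, v_free=100.0, w=20.0, k_jam=150.0):
    """Flow q (veh/h) for density k (veh/km) on a triangular fundamental
    diagram: q = v_free * k on the free-flow branch and q = w * (k_jam - k)
    on the congested branch.  Parameter values are illustrative only."""
    return min(v_free * k, w * (k_jam - k))

# The apex (capacity) lies where the two branches intersect:
v_free, w, k_jam = 100.0, 20.0, 150.0
k_critical = w * k_jam / (v_free + w)        # optimum density, here 25 veh/km
q_capacity = v_free * k_critical             # optimum flow, here 2500 veh/h
space_mean_speed = q_capacity / k_critical   # "flow = speed * density" rearranged
print(k_critical, q_capacity, space_mean_speed)
```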
Fundamental diagram of traffic flow
[ "Engineering" ]
1,598
[ "Transportation engineering", "Civil engineering", "Industrial engineering" ]
9,522,575
https://en.wikipedia.org/wiki/Via%20Net%20Loss
Via Net Loss (VNL) is a network architecture of telephone systems using circuit switching technologies deployed in the 1950s with Direct Distance Dialing and used until the late 1980s. The purpose of the VNL plan and a five-level long-distance switching hierarchy was to minimize the number of trunk circuits used during a call and maximize the voice quality achieved on each circuit. Excessive noise or loss meant that subscribers may have difficulty hearing each other. This was particularly important in the 1960s when dial-up data applications were developed using analog modems. The five levels of PSTN switching systems used with VNL were: Class 1 - Regional long-distance switching systems Class 2 - Sectional long-distance switching systems Class 3 - Primary long-distance switching systems Class 4 - Toll-access switching systems Class 5 - End-office switching systems Class 5 end-office switches provide local telephone service and dialtone to residential, business, and government subscribers, as well as telephone company payphones. Residential service includes message rate and flat rate local calling plans with extra charges for long-distance calls and supplementary services such as call waiting, 3-way calling, and call forwarding. Business service is mostly message rate local calling plans with extra charges for long distance and supplementary services. Message Rate calling means that subscribers pay for calls based on duration of the call and distance to the called party. Government subscribers include cities, counties, state, and federal agencies and often included Centrex service. Pay phones were traditionally provided exclusively by telephone companies but during the early 1980s Customer-owned coin-operated telephone services were established. Class 4 toll access switches provide long-distance (toll) telephone service including intrastate calling and inter-state calling. Intrastate calls are generally more expensive than inter-state calls due to favorable tariffs with price plans approved by the Public Utilities Commission or Public Service Commission for each state. Inter-state calls are generally less expensive than intrastate calls since tariffs are filed with the Federal Communications Commission because of the inter-state commerce aspect of the service. Class 4 switches provide access to long-distance service in rural areas. In addition, Class 4 switches traditionally provided operator assisted calls such as person-to-person, collect, and calls billed to third parties. However, many operator services are now automated with minimum human intervention. Class 3 primary switches provided the first layer of the AT&T long-distance switching network. VNL routing methods preferred trunk connections between Class 3 switches to minimize class 1 and class 2 connections. Class 3 switches also act as Service Switching Points or SSP's that provide access to Intelligent Network services such as Toll-Free, Virtual Private Network, Calling Card, and Credit Card calls. If circuits to other Class 3 switches were unavailable, the call was routed to the Class 2 (and/or Class 1) switch in the same region. Calls were not routed "up-chain" to Class 2 or Class 1 switches in a different region. Analog circuits between AT&T long-distance switches are known as Inter-Toll trunks while circuits from a long-distance switch to local switches are known as Toll Completing trunks or toll switching trunks. Trunks between long-distance switches in other carrier networks are known as Inter-Machine Trunks or IMT's. 
Class 2 sectional switches provide the second layer of long-distance switching. VNL routing methods preferred trunk connections between the originating Class 2 switch and a Class 3 or Class 2 switch in a different region. Calls were not routed "up-chain" to a Class 1 switch in a different region. Class 1 regional switches provide the final layer of long-distance switching. VNL routing methods preferred "down-chain" trunk connections between the originating Class 1 switch and a Class 3, Class 2, or Class 1 switch in a different region. Analog trunk connections between Class 1 switches were required to have a loss of zero decibels. The VNL architecture was gradually phased out due to the conversion of network circuits from analog to digital and the related conversion to a non-hierarchical network routing schemes such as AT&T's Dynamic Non-Hierarchical Routing or Nortel's Dynamically Controlled Routing methods. See IEEE publications for details on DNHR and DCR. See also Service Evaluation System Communication circuits
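The "route down-chain first" preference described above can be illustrated with a small, hypothetical Python sketch; the function, the class numbers, and the candidate list are illustrative only and are not part of the VNL plan or of any actual switch software.

def pick_trunk(candidates):
    """Among switches with an idle trunk toward the destination, prefer the one
    lowest in the hierarchy (highest class number), i.e. route down-chain first
    and climb to Class 2 or Class 1 only when nothing lower is available."""
    if not candidates:
        return None  # no idle trunk available via this route
    return max(candidates, key=lambda switch: switch[1])

# candidates are (name, class_number) pairs with idle trunks toward the destination
print(pick_trunk([("regional-1", 1), ("sectional-2", 2), ("primary-3", 3)]))  # ('primary-3', 3)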
Via Net Loss
[ "Engineering" ]
860
[ "Telecommunications engineering", "Communication circuits" ]
9,522,674
https://en.wikipedia.org/wiki/Mendelian%20error
A Mendelian error in the genetic analysis of a species describes an allele in an individual which could not have been received from either of its biological parents by Mendelian inheritance. Inheritance is defined by a set of related individuals who share the same or similar phenotypes for a locus of a particular gene. A Mendelian error means that the structure of the inheritance, as defined by analysis of the parental genes, is incorrect: one parent of the individual is not actually the parent indicated, so the parental information is assumed to be wrong. Possible explanations for Mendelian errors are genotyping errors, erroneous assignment of the individuals as relatives, or de novo mutations. A Mendelian error is established by demonstrating the existence of a trait which is inconsistent with every possible combination of genotypes compatible with the individual. This method of determination requires pedigree checking, however, and establishing a contradiction between phenotype and pedigree is an NP-complete problem. Genetic inconsistencies which do not correspond to this definition are non-Mendelian errors. Statistical genetics analysis is used to detect these errors and to assess whether the individual carries a disease linked to a single gene. Examples of such single-gene diseases in humans are Huntington's disease and Marfan syndrome. See also Gregor Mendel SNP genotyping Footnotes Mendelian error detection in complex pedigree using weighted constraint satisfaction techniques Genetics error NP-complete problems
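As an illustration of the basic consistency test behind Mendelian-error detection, the following hypothetical Python sketch (written for this summary, not taken from the cited work) checks whether a child's genotype at a single biallelic locus could have been formed from the stated parents; a False result flags a Mendelian error at that locus.

def mendelian_consistent(child, mother, father):
    """Return True if the child's genotype (an unordered pair of alleles)
    can be formed by drawing one allele from the mother and one from the father."""
    child = tuple(sorted(child))
    possible = {tuple(sorted((m, f))) for m in mother for f in father}
    return child in possible

# An "AA" child cannot arise from an "Aa" mother and an "aa" father:
print(mendelian_consistent(("A", "A"), ("A", "a"), ("a", "a")))  # False
print(mendelian_consistent(("A", "a"), ("A", "a"), ("a", "a")))  # True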
Mendelian error
[ "Mathematics", "Biology" ]
305
[ "NP-complete problems", "Mathematical problems", "Genetics", "Computational problems" ]
9,523,363
https://en.wikipedia.org/wiki/Snow%20tire
Snow tires, also known as winter tires, are tires designed for use on snow and ice. Snow tires have a tread design with larger gaps than those on conventional tires, increasing traction on snow and ice. Such tires that have passed specific winter traction performance tests are entitled to display a 3PMSF (Three-Peak Mountain Snow Flake) and/or a IMP (Icy Mountain Peak) symbols on their sidewalls. Tires designed for winter conditions are optimized to drive at temperatures below . Studded tires are a type of snow tires which have metal or ceramic studs that protrude from the tire to increase traction on hard-packed snow or ice. Studs abrade dry pavement, causing dust and creating wear in the wheel path. Regulations that require the use of snow tires or permit the use of studs vary by jurisdiction. All-season tires have tread gaps that are smaller than snow tires and larger than conventional tires. They are quieter than winter tires on clear roads, but less capable on snow or ice. Roadway conditions in winter Snow tires operate on a variety of surfaces, including pavement (wet or dry), mud, ice, or snow. The tread design of snow tires is adapted primarily to allow penetration of the snow into the tread, where it compacts and provides resistance against slippage. The snow strength developed by compaction depends on the properties of the snow, which depend on its temperature and water content—wetter, warmer snow compacts better than dry, colder snow up to a point where the snow is so wet that it lubricates the tire-road interface. New and powder snow have densities of . Compacted snow may have densities of . Snow or ice-covered roadways present lower braking and cornering friction, compared to dry conditions. The roadway friction properties of snow, in particular, are a function of temperature. At temperatures below , snow crystals are harder and generate more friction as a tire passes over them than at warmer conditions with snow or ice on the road surface. However, as temperatures rise above , the presence of free water increasingly lubricates the snow or ice and diminishes tire friction. Hydrophilic rubber compounds help create friction in the presence of water or ice. Dry and moist snow conditions on roadways Treads Attributes that can distinguish snow tires from "all-season" and summer tires include: An open, deep tread, with a high void ratio between rubber and spaces between the solid rubber Shoulder blocks, a specialized tread design at the outside of the tire tread to increase snow contact and friction A narrower aspect ratio between the diameter of the tire and the tread width to minimize resistance from the plowing effect of the tire through deeper snow Hydrophilic rubber compounds that improve friction on wet surfaces Additional siping, or thin slits in the rubber, that provide more biting edges and improve traction on wet or icy surfaces. Wet-film conditions on hard-compacted snow or ice require studs or chains. Studs Many jurisdictions in Asia, Europe, and North America seasonally allow snow tires with metal or ceramic studs to improve grip on packed snow or ice. Such tires are prohibited in other jurisdictions or during warmer months because of the damage they may cause to road surfaces. The metal studs are fabricated by encapsulating a hard pin in a softer material base, sometimes called the jacket. The pin is often made of tungsten carbide, a very hard high performance ceramic. The softer base is the part that anchors the stud in the rubber of the tire. 
As the tire wears with use, the softer base wears so that its surface is at about the same level as the rubber, whereas the hard pin wears so that it continues to protrude from the tire. The pin should protrude at least for the tire to function properly. Snow tires do not eliminate skidding on ice and snow, but they greatly reduce risks. Studdable tires are manufactured with molded holes on the rubber tire tread. Usually, there are 80 to 100 molded holes per tire for stud insertion. The insertion is done by using a special tool that spreads the rubber hole so that a stud jacket can be inserted and the flange at the bottom of the jacket can be fitted nicely to the bottom of the hole. The metal studs come in specific heights to match the depths of the holes molded into the tire tread based on the tread depths. For this reason, stud metals can only be inserted when the tires have not been driven on. A proper stud insertion results in the metal jacket that is flush with the surface of the tire tread having only the pin part that protrudes. Tire/snow interactions The compacted snow develops strength against slippage along a shear plane parallel to the contact area of the tire on the ground. At the same time, the bottom of the tire treads compress the snow on which they are bearing, also creating friction. The process of compacting snow within the treads requires it to be expelled in time for the tread to compact snow anew on the next rotation. The compaction/contact process works both in the direction of travel for propulsion and braking, but also laterally for cornering. The deeper the snow that the tire rolls through, the higher the resistance encountered by the tire, as it compacts the snow it encounters and plows some of it to either side. At some point on a given angle of uphill pitch, this resistance becomes greater than the resistance to slippage achieved by the tread's contact with the snow and the tires with power begin to slip and spin. Deeper snow means that climbing a hill without spinning the powered wheels becomes more difficult. However, the plowing/compaction effect aids in braking to the extent that it creates rolling resistance. Tire/snow interactions Standards 3PMSF The 3PMSF (Three-Peak Mountain Snow Flake) indicates 10% better acceleration on snow. ASTM International F1805 (formalized in the year 1999) IMP The IMP (Icy Mountain Peak) indicates 18% better deceleration on ice. ASTM International F2493 (formalized in the year 2021) Regulations Asia All prefectures of Japan, except for the southernmost prefecture of Okinawa, have a traffic regulation requiring motorized vehicles to be fitted with winter tires or tire chains when the road is covered by ice or snow. In addition, tire chains must be fitted for all vehicles on rural designated highways in snow country regions when regulated by traffic signs requiring tire chains. In many prefectures, tread grooves of snow tires are worn off for more than 50% of their original depth, tires must be replaced to meet the legal requirements. Drivers will be fined for failing to comply with the snow tire or tire chains requirements, and checkpoints are in place on major highways. Nationwide studded tire restrictions in Japan for passenger vehicles came into effect in April 1991, followed by restrictions for commercial trucks in 1993. Studded tires are still legal in Japan, but their usage is restricted by environmental law and it is a criminal offence to operate a vehicle fitted with a studded tire on dry asphalt or concrete. 
Europe As of 2016, regulations pertaining to snow tires in Europe varied by country. The principal aspects of regulations were whether the use was mandatory and whether studded tires were permitted. Mandatory use – The following countries required snow tires between specified dates or when roads are snowy or icy: Austria, Bosnia-Herzegovina, Croatia, Czechia, Estonia, Finland, France, Germany, Latvia, Lithuania, Montenegro, Norway, Romania, Serbia, Slovakia, Slovenia, Sweden, and Russia. Studded tires banned – The following countries ban the use of studded tires: Albania, Belgium, Bosnia-Herzegovina, Bulgaria, Croatia, Czechia, Germany, Hungary, Luxembourg, North Macedonia, Montenegro, Netherlands, Poland, Portugal, Romania, Serbia, Slovakia, and Slovenia. Studded tires restricted – The following countries allowed the seasonally restricted use of studded tires: Austria, Denmark, Estonia, Finland, France, Iceland, Ireland, Latvia, Lithuania, Norway, Russia, Spain, Sweden, and Switzerland. North America The U.S. National Highway Traffic Safety Administration (NHTSA) and Transport Canada allow display of a 3PMSF symbol to indicate that the tire has exceeded the industry requirement from a reference (non-snow) tire. As of 2016, snow tires were 3.6% of the US market and 35% of the Canadian market. US states and Canadian provinces control the use of snow tires. Of these, Quebec is the only jurisdiction that requires snow tires throughout. Some may require snow tires or chains only in specified areas during the winter. See also Snow chains Snow socks References Tires Snow Ice in transportation Inclement weather management Automotive safety Vehicle safety technologies Tyres
Snow tire
[ "Physics" ]
1,773
[ "Ice in transportation", "Physical phenomena", "Weather", "Inclement weather management", "Physical systems", "Transport" ]
9,523,459
https://en.wikipedia.org/wiki/Tank%20blanketing
Tank blanketing, also called gas sealing or tank padding, is the process of applying a gas to the empty space in a storage container. The term storage container here refers to any container that is used to store products, regardless of its size. Though tank blanketing is used for a variety of reasons, it typically involves using a buffer gas to protect products inside the storage container. A few of the benefits of blanketing include a longer product life in the container, reduced hazards, and longer equipment life cycles. Methods In 1970, Appalachian Controls Environmental (ACE) was the world’s first company to introduce a tank blanketing valve. There are now many ready-made systems available for purchase from a variety of process equipment companies. It is also possible to piece together your own system using a variety of different equipment. Regardless of which method is used, the basic requirements are the same. There must be a way of allowing the blanketing gas into the system, and a way to vent the gas should the pressure get too high. Since ACE introduced its valve many companies have engineered their own versions. Though many of the products available vary in features and applicability, the fundamental design is the same. When the pressure inside the container drops below a set point, a valve opens and allows the blanketing gas to enter. Once the pressure reaches the set point, the valve closes. As a safety feature, many systems include a pressure vent that opens when the pressure inside exceeds a maximum pressure set point. This helps to prevent the container from rupturing due to high pressure. Since most blanketing gas sources will provide gas at a much higher than desired pressure, a blanketing system will also use a pressure reducing valve to decrease the inlet pressure to the tank. Although it varies from application to application, blanketing systems usually operate at a slightly higher than atmospheric pressure (a few inches of water column above atmospheric). Higher pressures than this are generally not used as they often yield only marginal increases in results while wasting large amounts of expensive blanketing gas. Some systems also utilize inert gases to agitate the liquid contents of the container. This is desirable because products, such as citric acid, are added to food oils the tank will begin to settle over time with the heavier contents sinking to the bottom. However, a system that utilizes nitrogen sparging (and then subsequently tank blanketing once the nitrogen reaches the vapor space) may have negative impact on the products involved. Nitrogen sparging creates a significantly higher amount of surface contact between the gas and the product, which in turn creates a much larger opportunity for undesired oxidation to occur. It is possible for nitrogen that is as much 99.9% free of oxygen to increase the amount of oxidation within the product due to the high amount of surface contact. Common practices The most common gas used in blanketing is nitrogen. Nitrogen is widely used due to its inert properties, as well as its availability and relatively low cost. Tank blanketing is used for a variety of products including cooking oils, volatile combustible products, and purified water. These applications also cover a wide variety of storage containers, ranging from as large as a tank containing millions of gallons of vegetable oil down to a quart-size container or smaller. Nitrogen is appropriate for use at any of these scales. 
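The valve behaviour described under Methods can be summarized as a simple control rule. The following hypothetical Python snippet uses made-up set points, expressed in inches of water column above atmospheric, and illustrates only the logic, not any vendor's equipment.

def blanketing_action(tank_pressure, supply_setpoint=2.0, vent_setpoint=6.0):
    """Decide what a blanketing system should do at a given tank pressure.
    Below the supply set point the blanketing valve opens to admit gas;
    above the vent set point the pressure vent opens to protect the tank;
    otherwise both remain closed."""
    if tank_pressure < supply_setpoint:
        return "open blanketing valve"
    if tank_pressure > vent_setpoint:
        return "open pressure vent"
    return "hold (both closed)"

print(blanketing_action(1.2))  # open blanketing valve
print(blanketing_action(7.5))  # open pressure vent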
The use of an inert blanketing gas for food products helps to keep oxygen levels low in and around the product. Low levels of oxygen surrounding the product help to reduce the amount of oxidation that may occur, and increases shelf life. In the case of cooking oils, lipid oxidation can cause the oil to change its color, flavor, or aroma. It also decreases the nutrient levels in the food and can even generate toxic substances. Tank blanketing strategies are also implemented to prepare the product for transit (railcar or truck) and for final packaging before sealing the product. When considering the application for combustible products, the greatest benefit is process safety. Since fuels require oxygen to combust, reduced oxygen content in the vapor space lowers the risk of unwanted combustion. Tank blanketing is also used to keep contaminants out of a storage space. This is accomplished by creating positive pressure inside the container. This positive pressure ensures that if a leak should occur, the gas will leak out rather than having the contaminants infiltrate the container. Some examples include its use on purified water to keep unwanted minerals out and its use on food products to keep contaminants out. To ensure their safety, gas-blanketing systems for food use are regulated by the U.S. Food and Drug Administration (FDA) and must adhere to strict maintenance schedules and follow all product-contact regulations with regards to purity, toxicity, and filter specs. As with any use of inert gases, care must be taken to ensure that workers are not exposed to large quantities of nitrogen or other non-breathable substances, which can quickly result in asphyxiation and death. Use of them in commercial applications is subject to the regulation of OSHA in the USA and similar regulatory bodies elsewhere. See also Industrial gas Oxygen reduction system Inerting system References Author unavailable (2000), Fisher Controls becomes an “ACE” in tank blanketing [Electronic version]. Control Engineering Europe, July 2000, 12. Kanner, J., Rosenthal, I. (1992), An Assessment of Lipid Oxidation in Foods [Electronic version]. Pure Appl. Chem., Vol. 64, No. 12, 1959-1964. Retrieved February 15, 2007, from http://www.iupac.org/publications/pac/1992/pdf/6412x1959.pdf Amos, Kenna (1999). Leakless vapor-space valve controls unveiled. InTech, January 1999. Retrieved February 15, 2007, from http://findarticles.com/p/articles/mi_qa3739/is_199901/ai_n8840650 External sources Online Chemical Engineering Information Nitrogen properties, uses, and applications Control engineering Chemical processes
Tank blanketing
[ "Chemistry", "Engineering" ]
1,239
[ "Chemical process engineering", "Control engineering", "Chemical processes", "nan" ]
9,523,634
https://en.wikipedia.org/wiki/Kaplansky%20density%20theorem
In the theory of von Neumann algebras, the Kaplansky density theorem, due to Irving Kaplansky, is a fundamental approximation theorem. The importance and ubiquity of this technical tool led Gert Pedersen to comment in one of his books that: The density theorem is Kaplansky's great gift to mankind. It can be used every day, and twice on Sundays. Formal statement Let K− denote the strong-operator closure of a set K in B(H), the set of bounded operators on the Hilbert space H, and let (K)1 denote the intersection of K with the unit ball of B(H). Kaplansky density theorem. If A is a self-adjoint algebra of operators in B(H), then each element in the unit ball of the strong-operator closure of A is in the strong-operator closure of the unit ball of A. In other words, (A−)1 = ((A)1)−. If h is a self-adjoint operator in (A−)1, then h is in the strong-operator closure of the set of self-adjoint operators in (A)1. The Kaplansky density theorem can be used to formulate some approximations with respect to the strong operator topology. 1) If h is a positive operator in (A−)1, then h is in the strong-operator closure of the set of self-adjoint operators in (A+)1, where A+ denotes the set of positive operators in A. 2) If A is a C*-algebra acting on the Hilbert space H and u is a unitary operator in A−, then u is in the strong-operator closure of the set of unitary operators in A. In the density theorem and 1) above, the results also hold if one considers a ball of radius r > 0, instead of the unit ball. Proof The standard proof uses the fact that a bounded continuous real-valued function f is strong-operator continuous. In other words, for a net {aα} of self-adjoint operators in A, the continuous functional calculus a → f(a) satisfies lim f(aα) = f(lim aα) in the strong operator topology. This shows that the self-adjoint part of the unit ball in A− can be approximated strongly by self-adjoint elements in A. A matrix computation in M2(A), considering the self-adjoint operator with entries 0 on the diagonal and a and a* at the other positions, then removes the self-adjointness restriction and proves the theorem. See also Jacobson density theorem Notes References Kadison, Richard, Fundamentals of the Theory of Operator Algebras, Vol. I: Elementary Theory, American Mathematical Society. V. F. R. Jones, von Neumann algebras; incomplete notes from a course. M. Takesaki, Theory of Operator Algebras I Von Neumann algebras Theorems in functional analysis
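The matrix computation mentioned in the proof can be written out explicitly; one common way to carry it out is to take, for a in the unit ball of A−, the element of M2(A−)

ã = ( 0  a ; a*  0 ),

which is self-adjoint with norm ‖ã‖ = ‖a‖ ≤ 1. Approximating ã strongly by self-adjoint elements of the unit ball of M2(A), as the self-adjoint case already allows, and reading off the (1,2) entries of the approximants gives a net in the unit ball of A converging strongly to a.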
Kaplansky density theorem
[ "Mathematics" ]
575
[ "Theorems in mathematical analysis", "Theorems in functional analysis" ]
9,524,316
https://en.wikipedia.org/wiki/Wildcat%20%28musical%29
Wildcat is a musical with a book by N. Richard Nash, lyrics by Carolyn Leigh, and music by Cy Coleman. The original production opened on Broadway in 1960, starring a 49-year-old Lucille Ball in her only Broadway show. The show introduced the song "Hey, Look Me Over", which was subsequently performed as a cover version by several musicians. An original cast album was issued by RCA Victor records (LOC-1060). Background and production Nash had envisioned the main character of Wildy as a woman in her late 20s, and was forced to rewrite the role when Lucille Ball expressed interest not only in playing it but financing the project as well. Desilu, the company owned by Ball and her soon-to-be ex-husband Desi Arnaz, ultimately invested $360,000 in the show in exchange for 36% of the net profits, the rights to the original cast recording (ultimately released by RCA Victor), and television rights for musical numbers to be included in a special titled Lucy Goes to Broadway, a project that eventually was scrapped. Ball also was permitted to choose her leading man. Kirk Douglas's salary demands and heavy film schedule eliminated him from the running, and Gordon MacRae, Jock Mahoney, and Gene Barry were considered before she selected Keith Andes. Michael Kidd, who co-produced with Nash, directed and choreographed, and he got songwriters Coleman and Leigh on board. It was Leigh's second Broadway production (following 1954's Peter Pan with Mary Martin) and Coleman's Broadway debut. The Philadelphia tryout opened on October 29, 1960 to a glowing review from Variety, but local critics were less enthusiastic. The scheduled Broadway opening had to be postponed when trucks hauling the sets and costumes to New York were stranded on the New Jersey Turnpike for several days by a major blizzard. After two previews, the show opened on December 16 at the Alvin Theatre. The cast included Paula Stewart and Swen Swenson, with Valerie Harper among the chorus members. Vivian Vance, Ball's costar from I Love Lucy, was in the opening night audience and was photographed giving the star a congratulatory hug backstage after the show. Hampered by lukewarm reviews and Ball's lingering illness, it ran for only 171 performances. Ball quickly realized audiences had come expecting to see her Lucy Ricardo persona and began mugging and ad-libbing to bring her characterization closer to that of the zany housewife she had portrayed in I Love Lucy. It was clearly Ball who was drawing the crowds, and when she fell ill and demands for refunds ran high, the producers announced plans to close the show for a week in late March 1961 to allow her to recover her strength. The closure came sooner than planned when Ball, suffering from a virus and chronic fatigue, departed for Florida on February 8. She returned two weeks later, but on April 22 she collapsed on stage. It was decided the show would close for nine weeks at the end of May and reopen once its star had recovered fully, but May 24 proved to be her final performance as the musicians' union insisted on members of the orchestra being paid during the shutdowns. This ultimately made it infeasible for the production to remain active, forcing it to close permanently on June 3, 1961. Wildcat was Ball's only appearance in a Broadway production. She previously had been cast in the Bartlett Cormack play Hey Diddle Diddle, a comedy that premiered in Princeton, New Jersey on January 21, 1937. 
Ball played the part of Julie Tucker, "one of three roommates coping with neurotic directors, confused executives, and grasping stars who interfere with the girls' ability to get ahead." The play received good reviews, but there were problems, chiefly with its star Conway Tearle; the play was scheduled to open on Broadway at the Vanderbilt Theatre, but closed after one week in Washington, D.C., when Tearle suddenly became gravely ill. The Australian production of Wildcat starring Toni Lamond opened at Princess Theatre, Melbourne on July 19, 1963. The production employed British actor Gordon Boyd and Canadian actress Norah Halliday to play Joe Dynamite and Janie, respectively, among a cast of 82 performers. The Australian production reinstated the cut song "Ain't It Sad, Ain't It Mean" as a duet for Wildcat and Sookie. The show closed September 14, 1963. Plot Wildcat "Wildy" Jackson arrives in 1912 in Centavo City with dreams of striking oil but with neither capital nor know-how to help her accomplish her goal. Joe Dynamite, the most successful crew foreman in the territory, finds her ruggedness appealing and agrees to work with her if she can prove ownership to her claimed land and hire a crew. She finds owned by a hermit prospector, but Joe is certain the property is dry. Wildy attempts to lure him with her female charms, but when he still rejects her plans she has him falsely arrested, then released into her custody. A grateful Joe agrees to start work on the project but abandons it once he discovers it was Wildy who had him jailed. Left high and literally dry by her partner and crew, Wildy resorts to desperate measures to strike a Texas-sized gusher. Songs Act I I Hear - Townspeople Hey, Look Me Over - Wildy and Jane Wildcat(*) - Wildy and Townspeople You've Come Home - Joe That's What I Want for Janie(*) - Wildy What Takes My Fancy - Wildy and Sookie You're a Liar - Wildy and Joe One Day We Dance - Hank and Jane Give a Little Whistle and I'll Be There - Wildy, Joe, The Crew and Townspeople Tall Hope - Tattoo, Oney, Sadie, Matt and Crew (*) Song cut sometime after opening night. Act II Tippy Tippy Toes - Wildy and Countess El Sombrero - Corduroy Road You've Come Home (Reprise) - Joe Cast Wildcat Jackson—Lucille Ball Jane Jackson—Paula Stewart Sheriff Sam Gore—Howard Fischer Barney—Ken Ayers Luke—Anthony Saverino Countess Emily O'Brien—Edith King Joe Dynamite—Keith Andes Hank—Clifford David Miguel—HF Green Sookie—Don Tomkins Matt—Charles Braswell Corky—Bill Linton Oney—Swen Swenson Sandy—Ray Mason Tattoo—Bill Walker Cisco—Al Lanti Postman—Bill Richards Inez—Marsha Wagner Blonde—Wendy Nickerson References Bibliography Brady, Kathleen. "Lucille the Life of Lucille Ball" (2011) Open Road Integrated Media, Sanders, Coyne Steven and Gilbert, Tom. Desilu: The Story of Lucille Ball and Desi Arnaz (2003), William Morrow and Company, , pp. 202–220 External links Plot and production information at guidetomusicaltheatre 1960 musicals Broadway musicals Musicals by Cy Coleman Original musicals Musicals set in the 1910s Works about petroleum Works by N. Richard Nash Musicals set in the United States
Wildcat (musical)
[ "Chemistry" ]
1,440
[ "Petroleum", "Works about petroleum" ]
9,525,372
https://en.wikipedia.org/wiki/Durium
Durium is a highly durable synthetic resin developed in 1929. It was used in phonograph records, as well as in the casting process for metallic type and in the aeronautics industry. Origin It is a resorcinol-formaldehyde resin, the result of research by Hal T. Beans, professor of chemistry at Columbia University. Properties The resin is flexible, tasteless, odorless, fire and waterproof. It is highly resistant to heat and was heated to in production of records. It is fast-setting, reducing the production cost of items made from it. Applications Being resistant to fire and water, the resin was used as a substitute for varnish on aeronautical parts. It was commercialized by Durium Products Company (renamed Durium Products, Inc., from 1931) as the medium for Hit of the Week records, from 1930 to 1932. The resin was bonded to a cardboard substrate and, being much lighter than its competitor shellac, was sold at newsstands for only 15 cents per disc. References Synthetic resins
Durium
[ "Chemistry" ]
212
[ "Synthetic materials", "Synthetic resins" ]
9,525,603
https://en.wikipedia.org/wiki/Breezeway
A breezeway is an architectural feature similar to a hallway that allows the passage of a breeze between structures to accommodate high winds, allow aeration, or provide aesthetic design variation. Often, a breezeway is a simple roof connecting two structures (such as a house and a garage); sometimes, it can be much more like a tunnel with windows on either side. It may also refer to a hallway between two wings of a larger building – such as between a house and a garage – that lacks heating and cooling but allows sheltered passage. Breezeways have been used to house restaurants as well. One of the earliest breezeway designs to be architecturally designed and published was designed by Frank Lloyd Wright in 1900 for the B. Harley Bradley House in Kankakee, Illinois. However, breezeway features had come into use in vernacular architecture long before this, as for example with the dogtrot breezeway that originally connected the two elements of a double log cabin on the North American frontier. A side-deck is the upper deck outboard of any structure such as a coachroof or doghouse, also called a breezeway. See also Carport Pergola Skyway Transom (architecture) References External links Residential breezeway image Rooms
Breezeway
[ "Engineering" ]
250
[ "Rooms", "Architecture" ]
9,525,684
https://en.wikipedia.org/wiki/Ethyl%20acrylate
Ethyl acrylate is an organic compound with the formula CH2CHCO2CH2CH3. It is the ethyl ester of acrylic acid. It is a colourless liquid with a characteristic acrid odor. It is mainly produced for paints, textiles, and non-woven fibers. It is also a reagent in the synthesis of various pharmaceutical intermediates. Production Ethyl acrylate is produced by acid-catalysed esterification of acrylic acid, which in turn is produced by oxidation of propylene. It may also be prepared from acetylene, carbon monoxide and ethanol by a Reppe reaction. Commercial preparations contain a polymerization inhibitor such as hydroquinone, phenothiazine, or hydroquinone ethyl ether. Reactions and uses Precursor to polymers and other monomers Ethyl acrylate is used in the production of polymers including resins, plastics, rubber, and denture material. Ethyl acrylate is a reactant for homologous alkyl acrylates (acrylic esters) by transesterification with higher alcohols through acidic or basic catalysis. In that way speciality acrylates are made accessible, e.g. 2-ethylhexyl acrylate (from 2-ethylhexanol) used for pressure-sensitive adhesives, cyclohexyl acrylate (from cyclohexanol) used for automotive clear lacquers, 2-hydroxyethyl acrylate (from ethylene glycol) which is crosslinkable with diisocyanates to form gels used with long-chain acrylates (from C18+ alcohols) as comonomer for comb polymers for reduction of the solidification point of paraffin oils and 2-dimethylaminoethyl acrylate (from dimethylaminoethanol) for the preparation of flocculants for sewage clarification and paper production. As a reactive monomer, ethyl acrylate is used in homopolymers and copolymers with e.g. ethene, acrylic acid and its salts, amides and esters, methacrylates, acrylonitrile, maleic esters, vinyl acetate, vinyl chloride, vinylidene chloride, styrene, butadiene and unsaturated polyesters. Copolymers of acrylic acid ethyl ester with ethene (EPA/ethylene-ethyl acrylate copolymers) are suitable as adhesives and polymer additives, just like ethene vinyl acetate copolymers. Copolymers with acrylic acid increase the cleaning effect of liquid detergents, copolymers with methacrylic acid are used as gastric juices tablet covers (Eudragit). The large number of possible comonomer units and their combination in copolymers and terpolymers with ethyl acrylate allows the realization of different properties of the acrylate copolymers in a variety of applications in paints and adhesives, paper, textile and leather auxiliaries together with cosmetic and pharmaceutical products. As Michael acceptor and HX acceptor Ethyl acrylate reacts with amines catalyzed by Lewis acids in a Michael addition to β-alanine derivatives in high yields: The nucleophilic addition at ethyl acrylate as an α,β-unsaturated carbonyl compound is a frequent strategy in the synthesis of pharmaceutical intermediates. Examples are the hypnotic glutethimide or the vasodilator vincamin (obsolete by now) or more recent therapeutics such as the COPD agent cilomilast or the nootropic leteprinim. Ethyl 3-bromopropionate is prepared by hydrobromination of ethyl acrylate. Dienophile With dienes, ethyl acrylate reacts as a good dienophile in Diels–Alder reactions e.g. with buta-1,3-diene in a [4+2] cycloaddition reaction to give a cyclohexene carboxylic acid ester in a high yield. Natural occurrence Ethyl acrylate is also used as a flavoring agent. 
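Written as simple line formulas (illustrative generic schemes added here for clarity, with R standing for an arbitrary organic substituent), the two additions described above are: aza-Michael addition, R2NH + CH2=CH−CO2C2H5 → R2N−CH2−CH2−CO2C2H5 (an N,N-disubstituted β-alanine ester); Diels–Alder reaction, CH2=CH−CH=CH2 + CH2=CH−CO2C2H5 → ethyl cyclohex-3-ene-1-carboxylate.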
It has been found as a volatile component in pineapples and Beaufort cheese and is a secondary component in vanilla flavor obtained from heat extraction of vanilla in amounts of up to 1 ppm. In such high concentrations it negatively affects the extracted aroma. Safety The International Agency for Research on Cancer stated, "Overall evaluation, ethyl acrylate is possibly carcinogenic to humans (Group 2B)." The United States Environmental Protection Agency (EPA) states, "Human studies on occupational exposure to ethyl acrylate... have suggested a relationship between exposure to the chemical(s) and colorectal cancer, but the evidence is conflicting and inconclusive. In a study by the National Toxicology Program (NTP), increased incidence of squamous cell papillomas and carcinomas of the forestomach were observed in rats and mice exposed via gavage (experimentally placing the chemical in the stomach). However, the NTP recently determined that these data were not relevant to human carcinogenicity since humans do not have a forestomach, and removed ethyl acrylate from its list of carcinogens." However, ethyl acrylate also increased the incidence of thyroid follicular cell adenoma in male mice, and thyroid follicular cell adenoma or carcinoma (combined) in male rats exposed through inhalation. It is possibly carcinogenic and it is toxic in large doses, with an LD50 (rats, oral) of 1020 mg/kg. As of October 2018, the FDA withdrew authorization for its use as a synthetic flavoring substance in food. One favorable safety aspect is that ethyl acrylate has good warning properties; the odor threshold is much lower than the concentration required to create an atmosphere immediately dangerous to life and health. Reports of the exact levels vary somewhat, but, for example, the EPA reports an odor threshold of 0.0012 parts per million (ppm), but the EPA's lowest level of health concern, the Acute Exposure Guideline Level-1 (AEGL-1) is 8.3 ppm, which is almost 7000 times the odor threshold. However, as a possible carconigen, NIOSH maintains "that there is no safe level of exposure to a carcinogen. Reduction of worker exposure to chemical carcinogens as much as possible through elimination or substitution and engineering controls is the primary way to prevent occupational cancer." References External links CDC - NIOSH Pocket Guide to Chemical Hazards - Ethyl Acrylate Monomers Ethyl esters Lachrymatory agents IARC Group 2B carcinogens Acrylate esters
Ethyl acrylate
[ "Chemistry", "Materials_science" ]
1,465
[ "Lachrymatory agents", "Monomers", "Polymer chemistry", "Chemical weapons" ]
9,526,323
https://en.wikipedia.org/wiki/Flettner%20rotor
A Flettner rotor is a smooth cylinder with disc end plates which is spun along its long axis and, as air passes at right angles across it, the Magnus effect causes an aerodynamic force to be generated in the direction perpendicular to both the long axis and the direction of airflow. The rotor sail is named after the German aviation engineer and inventor Anton Flettner, who started developing the rotor sail in the 1920s. In a rotor ship, the rotors stand vertically and lift is generated at right angles to the wind, to drive the ship forwards. In a rotor airplane, the rotor extends sideways in place of a wing and upwards lift is generated. Magnus effect The Magnus effect is named after Gustav Magnus, the German physicist who investigated it. It describes the force generated by fluid flow over a rotating body, at right angles to both the direction of flow and the axis of rotation. This force on a rotating cylinder is known as Kutta–Joukowski lift, after Martin Kutta and Nikolai Zhukovsky (or Joukowski), who first analyzed the effect. The Flettner rotor is just one form of the Magnus rotor, which in general need not be cylindrical. Marine applications Rotor ships A rotor ship uses one or more Flettner rotors mounted upright. They are rotated by the ship's engines, and act like sails to propel the ship under wind power. A conventionally-powered underwater propeller may be provided for additional operational flexibility. An early prototype, the Baden Baden (formerly the Buckau), crossed the Atlantic in 1925, but interest was not revived until energy saving became a major concern in the new millennium. The E-Ship 1 was launched in 2008, and new vessels continue to appear. Since then, multiple rotor installations have been completed, including tilting rotors to allow passage beneath bridges. Typically, rotor sails have been reported to generate 5-20% fuel savings. Stabilizers A Flettner rotor mounted beneath the waterline of a ship's hull and emerging laterally will act to stabilize the ship in heavy seas. By controlling the direction and speed of rotation, strong lift or downforce can be generated. The largest deployment of the system to date is in the motor yacht Eclipse. Rotor airplanes Some flying machines have been built which use the Magnus effect to create lift with a rotating cylinder at the front of a wing, allowing flight at lower horizontal speeds. An early attempt to use the Magnus effect for a heavier-than-air aircraft was made in 1908 by a US member of Congress, Butler Ames of Massachusetts. A later example was the Plymouth A-A-2004 in the early 1930s, built by three inventors in New York state. French designer Jean de Chappedelaine developed his Aérogyre at much the same time. A prototype, based on a modified Caudron C.270 Luciole, was flown in 1934 with the wing rotor stationary. It crashed on its next flight, but whether the wing was rotating during that flight is unknown. Similar devices The Flettner rotor inspired Sigurd Johannes Savonius to invent a spinning ventilation device after a collaboration between the two inventors. Anton Flettner's company Flettner Ventilator Limited acquired Savonius' patent and still sells them in the United Kingdom. The devices are often referred to as "Flettner ventilators" even though the mechanism more closely resembles a Savonius wind turbine, which was a 1924 invention that resulted from the same collaboration. 
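As a worked illustration of the Kutta–Joukowski lift named above (an idealised, inviscid estimate, not a design formula), the force per unit length on a cylinder of radius r spinning at angular velocity ω in a crossflow of speed V and air density ρ is L′ = ρ V Γ, with Γ ≈ 2π r² ω if the circulation Γ is taken as that of air carried around with the cylinder surface; real rotors develop noticeably less lift than this ideal estimate.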
References External links www.marineinsight.com - Flettner Rotor for Ships - Uses, History and Problems Fluid dynamics Aircraft components Marine propulsion
Flettner rotor
[ "Chemistry", "Engineering" ]
738
[ "Chemical engineering", "Marine engineering", "Piping", "Marine propulsion", "Fluid dynamics" ]
9,526,432
https://en.wikipedia.org/wiki/Mine-Resistant%20Ambush%20Protected%20Vehicle
Mine-Resistant Ambush Protected Vehicle (MRAPV), also known as MRAP Vehicle, is a type of armoured personnel carrier that are designed specifically to withstand land mines, improvised explosive device (IED) attacks and ambushes to save troops' lives. Most modern Infantry mobility vehicle also have certain level of MRAP capabilities. History Specialized light armored vehicles designed specifically to resist land mines were first introduced in the 1970s by the Rhodesian Army, and were further developed by South African manufacturers starting in 1974 with the Hippo armored personnel carrier (APC). The first step by the South African Defence Force (SADF) was the Bosvark, a Unimog fitted with a shallow mine-deflecting tub on the chassis to protect the crew. Then came the first generation of purpose-built vehicles, including the Hippo and various other light vehicles. They were essentially armoured V-shaped hulls mounted on truck chassis. The next generation was represented by the Buffel, a Unimog chassis with a mine-protected cab and a mine-protected crew compartment mounted on it. These early vehicles overloaded their chassis and they were clumsy off-road. The Casspir Mine-Resistant Ambush Protected Vehicle was developed for the SADF after 1980; this was the inspiration for the American and other military MRAPV program and the basis for some of the program's vehicles. Design These vehicles have good off-road mobility, armour protection against small arms fire, improvised explosive device (IED) and anti-personnel mines. These armored vehicles generally have distinctive V-shaped hull (for mine protection) and a wheeled chassis. List of Mine-Resistant Ambush Protected Vehicles Dedicated MRAPV ATF Dingo 2 BAE Caiman – Part of the American MRAP program BMC Kirpi Cougar – Part of the American MRAP program CS/VP3 MRAP First Win Golan Armoured Vehicle International MaxxPro – Part of the American MRAP program Kalyani M4 Kamaz Typhoon – Part of the Russian Typhoon program Mine Protected Combat Vehicle – Rhodesia IMV from 1979 Nexter Aravis Force Protection Ocelot Oshkosh M-ATV – Part of the American MRAP program Protolab Misu RG-31 Nyala RG-33 (6×6) Sisu GTP Toofan – Iranian MRAP infantry mobility vehicle Unicob Zastava M20 MRAP Infantry mobility vehicle with MRAP capabilities AMZ Żubr Bushmaster IMV COMBATGUARD Didgori series Grizzly APC Hunter TR-12 Iveco LMV – Several thousand ordered by Italian military and other European militaries Kozak (armored personnel carrier) M16 Miloš Mahindra Armored Light Specialist Vehicle Mungo ESK Oshkosh L-ATV – Selected to meet US military's JLTV requirement on 25 August 2015 Otokar Cobra Otokar Cobra II RG-33 (4×4) STREIT Group Spartan Varta Ejder Yalçın References Wheeled armoured fighting vehicles Wheeled armoured personnel carriers Military engineering vehicles Military vehicles introduced in the 2000s Iraq War terminology
Mine-Resistant Ambush Protected Vehicle
[ "Engineering" ]
637
[ "Engineering vehicles", "Military engineering", "Military engineering vehicles" ]
9,526,571
https://en.wikipedia.org/wiki/Scattering%20length
The scattering length in quantum mechanics describes low-energy scattering. For potentials that decay faster than 1/r³ as r → ∞, it is defined as the following low-energy limit: lim k→0 k cot δ(k) = −1/a, where a is the scattering length, k is the wave number, and δ(k) is the phase shift of the outgoing spherical wave. The elastic cross section, σe, at low energies is determined solely by the scattering length: σe = 4πa². General concept When a slow particle scatters off a short ranged scatterer (e.g. an impurity in a solid or a heavy particle) it cannot resolve the structure of the object since its de Broglie wavelength is very long. The idea is that then it should not be important what precise potential one scatters off, but only how the potential looks at long length scales. The formal way to solve this problem is to do a partial wave expansion (somewhat analogous to the multipole expansion in classical electrodynamics), where one expands in the angular momentum components of the outgoing wave. At very low energy the incoming particle does not see any structure, therefore to lowest order one has only a spherical outgoing wave, called the s-wave in analogy with the atomic orbital at angular momentum quantum number l = 0. At higher energies one also needs to consider p and d-wave (l = 1, 2) scattering and so on. The idea of describing low energy properties in terms of a few parameters and symmetries is very powerful, and is also behind the concept of renormalization. The concept of the scattering length can also be extended to potentials that decay slower than 1/r³ as r → ∞. A famous example, relevant for proton-proton scattering, is the Coulomb-modified scattering length. Example As an example of how to compute the s-wave (i.e. angular momentum l = 0) scattering length for a given potential we look at the infinitely repulsive spherical potential well of radius r0 in 3 dimensions. The radial Schrödinger equation (l = 0) outside of the well is just the same as for a free particle: −ħ²/(2m) u''(r) = E u(r), where the hard core potential requires that the wave function u(r) vanishes at r = r0, u(r0) = 0. The solution is readily found: u(r) = A sin(kr + δs). Here k = √(2mE)/ħ and δs is the s-wave phase shift (the phase difference between incoming and outgoing wave), which is fixed by the boundary condition u(r0) = 0; A is an arbitrary normalization constant. One can show that in general δs(k) ≈ −k as for small k (i.e. low energy scattering). The parameter as of dimension length is defined as the scattering length. For our potential we have therefore as = r0, in other words the scattering length for a hard sphere is just the radius. (Alternatively one could say that an arbitrary potential with s-wave scattering length as has the same low energy scattering properties as a hard sphere of radius as.) To relate the scattering length to physical observables that can be measured in a scattering experiment we need to compute the cross section σ. In scattering theory one writes the asymptotic wavefunction as (we assume there is a finite ranged scatterer at the origin and there is an incoming plane wave along the z-axis): ψ(r) = e^(ikz) + f(θ) e^(ikr)/r, where f(θ) is the scattering amplitude. According to the probability interpretation of quantum mechanics the differential cross section is given by dσ/dΩ = |f(θ)|² (the probability per unit time to scatter into the direction Ω). If we consider only s-wave scattering the differential cross section does not depend on the angle θ, and the total scattering cross section is just σ = 4π |f|². 
The s-wave part of the wavefunction ψ(r) is projected out by using the standard expansion of a plane wave in terms of spherical waves and Legendre polynomials P_l(cos θ): e^(ikz) = Σ_l (2l + 1) i^l j_l(kr) P_l(cos θ). By matching the l = 0 component of ψ(r) to the s-wave solution ψ0(r) = A sin(kr + δs)/r (where we normalize A such that the incoming wave has a prefactor of unity) one has: f = (e^(2iδs) − 1)/(2ik) ≈ δs/k ≈ −as. This gives: σ = (4π/k²) sin²δs ≈ 4π as². See also Fermi pseudopotential Neutron scattering length References Quantum mechanics Scattering theory
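A small numerical check of the hard-sphere example (an illustrative Python snippet written for this summary, using units in which ħ = 2m = 1 so that k = √E):

import numpy as np

r0 = 1.0  # hard-sphere radius

def s_wave_phase_shift(k, r0):
    """u(r) = A sin(k r + delta) must vanish at r = r0, so delta = -k * r0."""
    return -k * r0

for k in [0.5, 0.1, 0.01]:
    delta = s_wave_phase_shift(k, r0)
    a = -np.tan(delta) / k                       # tends to the scattering length r0 as k -> 0
    sigma = 4 * np.pi * np.sin(delta)**2 / k**2  # s-wave cross section
    print(k, a, sigma / (4 * np.pi * r0**2))     # last column tends to 1 at low energy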
Scattering length
[ "Physics", "Chemistry" ]
756
[ "Scattering", "Theoretical physics", "Quantum mechanics", "Scattering theory" ]
9,526,638
https://en.wikipedia.org/wiki/Ganirelix
Ganirelix acetate (or diacetate), sold under the brand names Orgalutran and Antagon among others, is an injectable competitive gonadotropin-releasing hormone antagonist (GnRH antagonist). It is primarily used in assisted reproduction to control ovulation. The drug works by blocking the action of gonadotropin-releasing hormone (GnRH) upon the pituitary, thus rapidly suppressing the production and action of LH and FSH. Ganirelix is used in fertility treatment to prevent premature ovulation that could result in the harvesting of eggs that are too immature to be used in procedures such as in vitro fertilization. GnRH agonists are also sometimes used in reproductive therapy, as well as to treat disorders involving sex-steroid hormones, such as endometriosis. One advantage of using GnRH antagonists is that repeated administration of GnRH agonists results in decreased levels of gonadotropins and sex steroids due to desensitization of the pituitary. This is avoided when using GnRH antagonists such as ganirelix. The success of ganirelix in reproductive therapy has been shown to be comparable to that when using GnRH agonists. Medical uses Ganirelix is used as a fertility treatment drug for women. Specifically, it is used to prevent premature ovulation in people with ovaries undergoing fertility treatment involving ovarian hyperstimulation that causes the ovaries to produce multiple eggs. When such premature ovulation occurs, the eggs released by the ovaries may be too immature to be used in in-vitro fertilization. Ganirelix prevents ovulation until it is triggered by injecting human chorionic gonadotrophin (hCG). Contraindications Ganirelix should not be used in women who are already pregnant, and because of this the onset of pregnancy must be ruled out before it is administered. Women using ganirelix should not breast feed, as it is not known whether ganirelix is excreted in breast milk. Side effects Clinical studies have shown that the most common side effect is a slight reaction at the site of injection in the form of redness, and sometimes swelling. Clinical studies have shown that, one hour after injection, the incidence of at least one moderate or severe local skin reaction per treatment cycle was 12% in 4 patients treated with ganirelix and 25% in patients treated subcutaneously with a GnRH agonist. The local reactions generally disappear within 4 hours after administration. Other reported side effects are some that are known to be associated with ovarian hyperstimulation, including gynecological abdominal pain, headache, vaginal bleeding, nausea, and gastrointestinal abdominal pain. In some rare cases, less than 1 user in 10,000, hypersensitivity to ganirelix can cause anaphylactoid reactions, most likely due to allergy. Birth defects A follow-up analysis for ganirelix done by the Marketing Authorisation Holder compared the number of congenital malformations between individuals whose mothers were treated with ganirelix compared with individuals whose mothers were treated with a GnRH agonist. The total number of congenital malformations was higher in the ganirelix group than in the GnRH agonist group (7.6% vs. 5.5%). This falls within the range for the normal incidence of congenital malformations, and current data do not suggest that ganirelix increases the incidence of congenital malformations or anomalies. No important differences in the frequency of ectopic pregnancies and miscarriage were noted with the use of ganirelix. 
Pharmacology Pharmacodynamics Ganirelix is a synthetic peptide that works as an antagonist against gonadotropin-releasing hormone (GnRH) ("Ganirelix acetate injection," 2009). Ganirelix competitively blocks GnRH receptors on the pituitary gonadotroph, quickly resulting in the suppression of gonadotropin secretion. This suppression is easily reversed by discontinuation of ganirelix administration. Ganirelix has a significantly higher receptor binding affinity (Kd = 0.4 nM) than GnRH (Kd = 3.6 nM). Pharmacokinetics When ganirelix is given to healthy adult females, steady-state serum concentrations are reached, on average, after three days ("Ganirelix acetate injection," 2009). A study administering ganirelix to healthy adult females (n=15) found the mean (SD) elimination half-life (t1/2) to be 16.2(1.6) hours, volume of distribution/absolute bioavailability (Vd/F) 76.5(10.3) liters, maximum serum concentration (Cmax) 11.2(2.4) ng/mL, and the time until maximum concentration (tmax) 1.1(0.2) hours. One 250 μg injection of ganirelix resulted in a mean absolute bioavailability of 91.1%. Chemistry Ganirelix is derived from GnRH, with amino acid substitutions made at positions 1, 2, 3, 6, 8, and 10. History The European Commission gave marketing authorization for ganirelix throughout the European Union to N.V. Organon in May 2000. References Fertility medicine GnRH antagonists Peptides Drugs developed by Merck & Co.
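A quick rule-of-thumb check (not part of the cited study) ties these numbers together: steady state is conventionally approached after roughly four to five elimination half-lives, and with t1/2 ≈ 16.2 hours that gives about 4 × 16.2 ≈ 65 h to 5 × 16.2 ≈ 81 h, i.e. roughly 2.7–3.4 days, consistent with the reported three days to steady-state serum concentrations.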
Ganirelix
[ "Chemistry" ]
1,190
[ "Biomolecules by chemical classification", "Peptides", "Molecular biology" ]
9,526,660
https://en.wikipedia.org/wiki/Spectrum%20of%20theistic%20probability
Popularized by Richard Dawkins in The God Delusion, the spectrum of theistic probability is a way of categorizing one's belief regarding the probability of the existence of a deity. Atheism, theism, and agnosticism J. J. C. Smart argues that the distinction between atheism and agnosticism is unclear, and many people who have passionately described themselves as agnostics were in fact atheists. He writes that this mischaracterization is based on an unreasonable philosophical skepticism that would not allow us to make any claims to knowledge about the world. He proposes instead the following analysis: Let us consider the appropriateness or otherwise of someone (call him 'Philo') describing himself as a theist, atheist or agnostic. I would suggest that if Philo estimates the various plausibilities to be such that on the evidence before him the probability of theism comes out near to one he should describe himself as a theist and if it comes out near zero he should call himself an atheist, and if it comes out somewhere in the middle he should call himself an agnostic. There are no strict rules about this classification because the borderlines are vague. If need be, like a middle-aged man who is not sure whether to call himself bald or not bald, he should explain himself more fully. Dawkins' formulation In The God Delusion, Richard Dawkins posits that "the existence of God is a scientific hypothesis like any other." He goes on to propose a continuous "spectrum of probabilities" between two extremes of opposite certainty, which can be represented by seven "milestones". Dawkins suggests definitive statements to summarize one's place along the spectrum of theistic probability. These "milestones" are: Strong theist. 100% probability of God. In the words of Carl Jung: "I do not believe, I know." De facto theist. Very high probability but short of 100%. "I don't know for certain, but I strongly believe in God and live my life on the assumption that he is there." Leaning towards theism. Higher than 50% but not very high. "I am very uncertain, but I am inclined to believe in God." Completely impartial. Exactly 50%. "God's existence and non-existence are exactly equiprobable." Leaning towards atheism. Lower than 50% but not very low. "I do not know whether God exists but I'm inclined to be skeptical." De facto atheist. Very low probability, but short of zero. "I don't know for certain but I think God is very improbable, and I live my life on the assumption that he is not there." Strong atheist. "I know there is no God, with the same conviction as Jung knows there is one." Dawkins argues that while there appear to be plenty of individuals that would place themselves as "1" due to the strictness of religious doctrine against doubt, most atheists do not consider themselves "7" because atheism arises from a lack of evidence and evidence can always change a thinking person's mind. In print, Dawkins self-identified as a "6". When interviewed by Bill Maher and later by Anthony Kenny, he suggested "6.9" to be more accurate. See also Agnostic atheism Apatheism Ignosticism Nontheism References Richard Dawkins Philosophy and atheism Theism Applied probability
Spectrum of theistic probability
[ "Mathematics" ]
738
[ "Applied mathematics", "Applied probability" ]
9,526,852
https://en.wikipedia.org/wiki/Monoxenous%20development
Monoxenous development, or monoxeny, characterizes a parasite whose development is restricted to a single host species. The etymology of the terms monoxeny / monoxenous derives from the two ancient Greek words (), meaning "unique", and (), meaning "foreign". In a monoxenous life cycle, the parasitic species may be strictly host specific (using only a single host species, such as gregarines) or not (e.g. Eimeria, Coccidia). References External links xeno-, xen- word info Parasitism
Monoxenous development
[ "Biology" ]
128
[ "Parasitism", "Symbiosis" ]
171,878
https://en.wikipedia.org/wiki/Tautochrone%20curve
A tautochrone curve or isochrone curve (from the Greek prefixes tauto- meaning same, or iso- meaning equal, and chrono meaning time) is the curve for which the time taken by an object sliding without friction in uniform gravity to its lowest point is independent of its starting point on the curve. The curve is a cycloid, and the time is equal to π times the square root of the radius (of the circle which generates the cycloid) over the acceleration of gravity. The tautochrone curve is related to the brachistochrone curve, which is also a cycloid. The tautochrone problem The tautochrone problem, the attempt to identify this curve, was solved by Christiaan Huygens in 1659. He proved geometrically in his Horologium Oscillatorium, originally published in 1673, that the curve is a cycloid. The cycloid is given by a point on a circle of radius r tracing a curve as the circle rolls along the x axis, as: x = r(θ − sin θ), y = r(1 − cos θ). Huygens also proved that the time of descent is equal to the time a body takes to fall vertically the same distance as the diameter of the circle that generates the cycloid, multiplied by π/2. In modern terms, this means that the time of descent is π√(r/g), where r is the radius of the circle which generates the cycloid, and g is the gravity of Earth, or more accurately, the earth's gravitational acceleration. This solution was later used to solve the problem of the brachistochrone curve. Johann Bernoulli solved the problem in a paper (Acta Eruditorum, 1697). The tautochrone problem was studied by Huygens more closely when it was realized that a pendulum, which follows a circular path, was not isochronous and thus his pendulum clock would keep different time depending on how far the pendulum swung. After determining the correct path, Christiaan Huygens attempted to create pendulum clocks that used a string to suspend the bob and curb cheeks near the top of the string to change the path to the tautochrone curve. These attempts proved unhelpful for a number of reasons. First, the bending of the string causes friction, changing the timing. Second, there were much more significant sources of timing error that overwhelmed any theoretical improvement gained by traveling on the tautochrone curve. Finally, the "circular error" of a pendulum decreases as the length of the swing decreases, so better clock escapements could greatly reduce this source of inaccuracy. Later, the mathematicians Joseph Louis Lagrange and Leonhard Euler provided an analytical solution to the problem. Lagrangian solution For a simple harmonic oscillator released from rest, regardless of its initial displacement, the time it takes to reach the lowest potential energy point is always a quarter of its period, which is independent of its amplitude. Therefore, the Lagrangian of a simple harmonic oscillator is isochronous. In the tautochrone problem, if the particle's position is parametrized by the arclength s(t) from the lowest point, the kinetic energy is then proportional to (ds/dt)², and the potential energy is proportional to the height y(s). One way the curve in the tautochrone problem can be an isochrone is if the Lagrangian is mathematically equivalent to a simple harmonic oscillator; that is, the height of the curve must be proportional to the arclength squared: y(s) = s²/(8r), where the constant of proportionality is 1/(8r). Compared to the simple harmonic oscillator's Lagrangian, the equivalent spring constant is k = mg/(4r), and the time of descent is T = π√(r/g). However, the physical meaning of the constant r is not clear until we determine the exact analytical equation of the curve. 
To solve for the analytical equation of the curve, note that the differential form of the above relation is which eliminates , and leaves a differential equation for and . This is the differential equation for a cycloid when the vertical coordinate is counted from its vertex (the point with a horizontal tangent) instead of the cusp. To find the solution, integrate for in terms of : where , and the height decreases as the particle moves forward . This integral is the area under a circle, which can be done with another substitution and yield: This is the standard parameterization of a cycloid with . It's interesting to note that the arc length squared is equal to the height difference multiplied by the full arch length . "Virtual gravity" solution The simplest solution to the tautochrone problem is to note a direct relation between the angle of an incline and the gravity felt by a particle on the incline. A particle on a 90° vertical incline undergoes full gravitational acceleration , while a particle on a horizontal plane undergoes zero gravitational acceleration. At intermediate angles, the acceleration due to "virtual gravity" by the particle is . Note that is measured between the tangent to the curve and the horizontal, with angles above the horizontal being treated as positive angles. Thus, varies from to . The position of a mass measured along a tautochrone curve, , must obey the following differential equation: which, along with the initial conditions and , has solution: It can be easily verified both that this solution solves the differential equation and that a particle will reach at time from any starting position . The problem is now to construct a curve that will cause the mass to obey the above motion. Newton's second law shows that the force of gravity and the acceleration of the mass are related by: The explicit appearance of the distance, , is troublesome, but we can differentiate to obtain a more manageable form: This equation relates the change in the curve's angle to the change in the distance along the curve. We now use trigonometry to relate the angle to the differential lengths , and : Replacing with in the above equation lets us solve for in terms of : Likewise, we can also express in terms of and solve for in terms of : Substituting and , we see that these parametric equations for and are those of a point on a circle of radius rolling along a horizontal line (a cycloid), with the circle center at the coordinates : Note that ranges from . It is typical to set and so that the lowest point on the curve coincides with the origin. Therefore: Solving for and remembering that is the time required for descent, being a quarter of a whole cycle, we find the descent time in terms of the radius : (Based loosely on Proctor, pp. 135–139) Abel's solution Niels Henrik Abel attacked a generalized version of the tautochrone problem (Abel's mechanical problem), namely, given a function that specifies the total time of descent for a given starting height, find an equation of the curve that yields this result. The tautochrone problem is a special case of Abel's mechanical problem when is a constant. Abel's solution begins with the principle of conservation of energy – since the particle is frictionless, and thus loses no energy to heat, its kinetic energy at any point is exactly equal to the difference in gravitational potential energy from its starting point. 
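Before carrying Abel's argument through, the isochrony claimed above can be checked numerically. The short Python sketch below is illustrative only: the radius r = 1 m, g = 9.81 m/s² and the release angles are arbitrary choices, and the function name is ours rather than from any reference. It integrates the motion along the cycloid and returns essentially the same descent time, π√(r/g), from every starting point:

```python
import math

def descent_time(theta0, r=1.0, g=9.81, dt=1e-5):
    """Time to slide, from rest, down the cycloid x = r(t - sin t),
    y = r(1 - cos t) (y measured downward from the cusp), starting at the
    point with parameter theta0 and stopping at the lowest point (t = pi).

    The arclength from the cusp is s = 4r(1 - cos(t/2)), so the tangential
    acceleration g*cos(t/2) equals g*(1 - s/(4r)) and the motion can be
    integrated directly in s with a semi-implicit Euler step.
    """
    s = 4.0 * r * (1.0 - math.cos(theta0 / 2.0))  # starting arclength
    s_bottom = 4.0 * r                            # arclength at the lowest point
    v, t = 0.0, 0.0
    while s < s_bottom:
        a = g * (1.0 - s / (4.0 * r))             # tangential acceleration
        v += a * dt
        s += v * dt
        t += dt
    return t

r, g = 1.0, 9.81
print("predicted pi*sqrt(r/g):", round(math.pi * math.sqrt(r / g), 4))
for theta0 in (0.3, 1.0, 2.0, 3.0):
    print(f"release at theta0 = {theta0:.1f}:", round(descent_time(theta0, r, g), 4))
```

The semi-implicit update (velocity first, then position) keeps the integration stable over the short time span involved, and all four release points give the same descent time to within the step size.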
The kinetic energy is , and since the particle is constrained to move along a curve, its velocity is simply , where is the distance measured along the curve. Likewise, the gravitational potential energy gained in falling from an initial height to a height is , thus: In the last equation, we have anticipated writing the distance remaining along the curve as a function of height (, recognized that the distance remaining must decrease as time increases (thus the minus sign), and used the chain rule in the form . Now we integrate from to to get the total time required for the particle to fall: This is called Abel's integral equation and allows us to compute the total time required for a particle to fall along a given curve (for which would be easy to calculate). But Abel's mechanical problem requires the converse – given , we wish to find , from which an equation for the curve would follow in a straightforward manner. To proceed, we note that the integral on the right is the convolution of with and thus take the Laplace transform of both sides with respect to variable : where . Since , we now have an expression for the Laplace transform of in terms of the Laplace transform of : This is as far as we can go without specifying . Once is known, we can compute its Laplace transform, calculate the Laplace transform of and then take the inverse transform (or try to) to find . For the tautochrone problem, is constant. Since the Laplace transform of 1 is , i.e., , we find the shape function : Making use again of the Laplace transform above, we invert the transform and conclude: It can be shown that the cycloid obeys this equation. It needs one step further to do the integral with respect to to obtain the expression of the path shape. (Simmons, Section 54). See also Beltrami identity Brachistochrone curve Calculus of variations Catenary Cycloid Uniformly accelerated motion References Bibliography External links Mathworld Plane curves Mechanics de:Zykloide#Die Tautochronie der Zykloide
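A symbolic check of Abel's result is also straightforward. The sketch below uses the sympy library purely for illustration, with arbitrary variable names, and assumes the shape function ds/dy = √(2r/y) recovered above; it evaluates Abel's integral for the descent time:

```python
import sympy as sp

y, y0, r, g = sp.symbols('y y0 r g', positive=True)

# Shape function from the inverse Laplace transform: ds/dy = sqrt(2r/y)
dsdy = sp.sqrt(2 * r / y)

# Abel's integral for the total descent time from an initial height y0:
#   T(y0) = integral_0^y0  (ds/dy) / sqrt(2 g (y0 - y))  dy
T = sp.integrate(dsdy / sp.sqrt(2 * g * (y0 - y)), (y, 0, y0))
print(sp.simplify(T))   # expected: pi*sqrt(r/g), i.e. independent of y0
```

If the integral evaluates as expected, the result is π√(r/g) with no dependence on the initial height y0, which is precisely the tautochrone property.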
Tautochrone curve
[ "Physics", "Mathematics", "Engineering" ]
1,858
[ "Plane curves", "Euclidean plane geometry", "Mechanics", "Mechanical engineering", "Planes (geometry)" ]
171,879
https://en.wikipedia.org/wiki/Brachistochrone%20curve
In physics and mathematics, a brachistochrone curve (), or curve of fastest descent, is the one lying on the plane between a point A and a lower point B, where B is not directly below A, on which a bead slides frictionlessly under the influence of a uniform gravitational field to a given end point in the shortest time. The problem was posed by Johann Bernoulli in 1696. The brachistochrone curve is the same shape as the tautochrone curve; both are cycloids. However, the portion of the cycloid used for each of the two varies. More specifically, the brachistochrone can use up to a complete rotation of the cycloid (at the limit when A and B are at the same level), but always starts at a cusp. In contrast, the tautochrone problem can use only up to the first half rotation, and always ends at the horizontal. The problem can be solved using tools from the calculus of variations and optimal control. The curve is independent of both the mass of the test body and the local strength of gravity. Only a parameter is chosen so that the curve fits the starting point A and the ending point B. If the body is given an initial velocity at A, or if friction is taken into account, then the curve that minimizes time differs from the tautochrone curve. History Galileo's problem Earlier, in 1638, Galileo Galilei had tried to solve a similar problem for the path of the fastest descent from a point to a wall in his Two New Sciences. He draws the conclusion that the arc of a circle is faster than any number of its chords,From the preceding it is possible to infer that the quickest path of all [lationem omnium velocissimam], from one point to another, is not the shortest path, namely, a straight line, but the arc of a circle. ... Consequently the nearer the inscribed polygon approaches a circle the shorter the time required for descent from A to C. What has been proven for the quadrant holds true also for smaller arcs; the reasoning is the same. Just after Theorem 6 of Two New Sciences, Galileo warns of possible fallacies and the need for a "higher science". In this dialogue Galileo reviews his own work. Galileo studied the cycloid and gave it its name, but the connection between it and his problem had to wait for advances in mathematics. Galileo’s conjecture is that “The shortest time of all [for a movable body] will be that of its fall along the arc ADB [of a quarter circle] and similar properties are to be understood as holding for all lesser arcs taken upward from the lowest limit B.” In Fig.1, from the “Dialogue Concerning the Two Chief World Systems”, Galileo claims that the body sliding along the circular arc of a quarter circle, from A to B will reach B in less time than if it took any other path from A to B. Similarly, in Fig. 2, from any point D on the arc AB, he claims that the time along the lesser arc DB will be less than for any other path from D to B. In fact, the quickest path from A to B or from D to B, the brachistochrone, is a cycloidal arc, which is shown in Fig. 3 for the path from A to B, and Fig.4 for the path from D to B, superposed on the respective circular arc. Introduction of the problem Johann Bernoulli posed the problem of the brachistochrone to the readers of Acta Eruditorum in June, 1696. 
Bernoulli wrote the problem statement as: "Given two points A and B in a vertical plane, what is the curve traced out by a point acted on only by gravity, which starts at A and reaches B in the shortest time." Johann and his brother Jakob Bernoulli derived the same solution, but Johann's derivation was incorrect, and he tried to pass off Jakob's solution as his own. Johann published the solution in the journal in May of the following year, and noted that the solution is the same curve as Huygens' tautochrone curve. After deriving the differential equation for the curve by the method given below, he went on to show that it does yield a cycloid. However, his proof is marred by his use of a single constant instead of the three constants, vm, 2g and D, below. Bernoulli allowed six months for the solutions but none were received during this period. At the request of Leibniz, the time was publicly extended for a year and a half. At 4 p.m. on 29 January 1697 when he arrived home from the Royal Mint, Isaac Newton found the challenge in a letter from Johann Bernoulli. Newton stayed up all night to solve it and mailed the solution anonymously by the next post. Upon reading the solution, Bernoulli immediately recognized its author, exclaiming that he "recognizes a lion from his claw mark". This story gives some idea of Newton's power, since Johann Bernoulli took two weeks to solve it. (D. T. Whiteside, Newton the Mathematician, in Bechler, Contemporary Newtonian Research, p. 122.) Newton also wrote, "I do not love to be dunned [pestered] and teased by foreigners about mathematical things...", and Newton had already solved Newton's minimal resistance problem, which is considered the first of the kind in the calculus of variations. In the end, five mathematicians responded with solutions: Newton, Jakob Bernoulli, Gottfried Leibniz, Ehrenfried Walther von Tschirnhaus and Guillaume de l'Hôpital. Four of the solutions (excluding l'Hôpital's) were published in the same edition of the journal as Johann Bernoulli's. In his paper, Jakob Bernoulli gave a proof of the condition for least time similar to that below before showing that its solution is a cycloid. According to Newtonian scholar Tom Whiteside, in an attempt to outdo his brother, Jakob Bernoulli created a harder version of the brachistochrone problem. In solving it, he developed new methods that were refined by Leonhard Euler into what the latter called (in 1766) the calculus of variations. Joseph-Louis Lagrange did further work that resulted in modern infinitesimal calculus. Johann Bernoulli's solution Introduction In a letter to L’Hôpital, (21/12/1696), Bernoulli stated that when considering the problem of the curve of quickest descent, after only 2 days he noticed a curious affinity or connection with another no less remarkable problem leading to an ‘indirect method’ of solution. Then shortly afterwards he discovered a ‘direct method’. Direct method In a letter to Henri Basnage, held at the University of Basel Public Library, dated 30 March 1697, Johann Bernoulli stated that he had found two methods (always referred to as "direct" and "indirect") to show that the Brachistochrone was the "common cycloid", also called the "roulette". Following advice from Leibniz, he included only the indirect method in the Acta Eruditorum Lipsidae of May 1697. 
He wrote that this was partly because he believed it was sufficient to convince anyone who doubted the conclusion, partly because it also resolved two famous problems in optics that "the late Mr. Huygens" had raised in his treatise on light. In the same letter he criticised Newton for concealing his method. In addition to his indirect method he also published the five other replies to the problem that he received. Johann Bernoulli's direct method is historically important as a proof that the brachistochrone is the cycloid. The method is to determine the curvature of the curve at each point. All the other proofs, including Newton's (which was not revealed at the time) are based on finding the gradient at each point. In 1718, Bernoulli explained how he solved the brachistochrone problem by his direct method.The Early Period of the Calculus of Variations, by P. Freguglia and M. Giaquinta, pp. 53–57, . He explained that he had not published it in 1697, for reasons that no longer applied in 1718. This paper was largely ignored until 1904 when the depth of the method was first appreciated by Constantin Carathéodory, who stated that it shows that the cycloid is the only possible curve of quickest descent. According to him, the other solutions simply implied that the time of descent is stationary for the cycloid, but not necessarily the minimum possible. Analytic solution A body is regarded as sliding along any small circular arc Ce between the radii KC and Ke, with centre K fixed. The first stage of the proof involves finding the particular circular arc, Mm, which the body traverses in the minimum time. The line KNC intersects AL at N, and line Kne intersects it at n, and they make a small angle CKe at K. Let NK = a, and define a variable point, C on KN extended. Of all the possible circular arcs Ce, it is required to find the arc Mm, which requires the minimum time to slide between the 2 radii, KM and Km. To find Mm Bernoulli argues as follows. Let MN = x. He defines m so that MD = mx, and n so that Mm = nx + na and notes that x is the only variable and that m is finite and n is infinitely small. The small time to travel along arc Mm is , which has to be a minimum (‘un plus petit’). He does not explain that because Mm is so small the speed along it can be assumed to be the speed at M, which is as the square root of MD, the vertical distance of M below the horizontal line AL. It follows that, when differentiated this must give so that x = a. This condition defines the curve that the body slides along in the shortest time possible. For each point, M on the curve, the radius of curvature, MK is cut in 2 equal parts by its axis AL. This property, which Bernoulli says had been known for a long time, is unique to the cycloid. Finally, he considers the more general case where the speed is an arbitrary function X(x), so the time to be minimised is . The minimum condition then becomes which he writes as : and which gives MN (=x) as a function of NK (= a). From this the equation of the curve could be obtained from the integral calculus, though he does not demonstrate this. Synthetic solution He then proceeds with what he called his Synthetic Solution, which was a classical, geometrical proof, that there is only a single curve that a body can slide down in the minimum time, and that curve is the cycloid. "The reason for the synthetic demonstration, in the manner of the ancients, is to convince Mr. de la Hire. 
He has little time for our new analysis, describing it as false (He claims he has found 3 ways to prove that the curve is a cubic parabola)" – Letter from Johan Bernoulli to Pierre Varignon dated 27 Jul 1697. Assume AMmB is the part of the cycloid joining A to B, which the body slides down in the minimum time. Let ICcJ be part of a different curve joining A to B, which can be closer to AL than AMmB. If the arc Mm subtends the angle MKm at its centre of curvature, K, let the arc on IJ that subtends the same angle be Cc. The circular arc through C with centre K is Ce. Point D on AL is vertically above M. Join K to D and point H is where CG intersects KD, extended if necessary. Let and t be the times the body takes to fall along Mm and Ce respectively. , , Extend CG to point F where, and since , it follows that Since MN = NK, for the cycloid: , , and If Ce is closer to K than Mm then and In either case, , and it follows that If the arc, Cc subtended by the angle infinitesimal angle MKm on IJ is not circular, it must be greater than Ce, since Cec becomes a right-triangle in the limit as angle MKm approaches zero. Note, Bernoulli proves that CF > CG by a similar but different argument. From this he concludes that a body traverses the cycloid AMB in less time than any other curve ACB. Indirect method According to Fermat’s principle, the actual path between two points taken by a beam of light (which obeys Snell's law of refraction) is one that takes the least time. In 1697 Johann Bernoulli used this principle to derive the brachistochrone curve by considering the trajectory of a beam of light in a medium where the speed of light increases following a constant vertical acceleration (that of gravity g). By the conservation of energy, the instantaneous speed of a body v after falling a height y in a uniform gravitational field is given by: , The speed of motion of the body along an arbitrary curve does not depend on the horizontal displacement. Bernoulli noted that Snell's law of refraction gives a constant of the motion for a beam of light in a medium of variable density: , where vm is the constant and represents the angle of the trajectory with respect to the vertical. The equations above lead to two conclusions: At the onset, the angle must be zero when the particle speed is zero. Hence, the brachistochrone curve is tangent to the vertical at the origin. The speed reaches a maximum value when the trajectory becomes horizontal and the angle θ = 90°. Assuming for simplicity that the particle (or the beam) with coordinates (x,y) departs from the point (0,0) and reaches maximum speed after falling a vertical distance D: . Rearranging terms in the law of refraction and squaring gives: which can be solved for dx in terms of dy: . Substituting from the expressions for v and vm above gives: which is the differential equation of an inverted cycloid generated by a circle of diameter D=2r, whose parametric equation is: where φ is a real parameter, corresponding to the angle through which the rolling circle has rotated. For given φ, the circle's centre lies at . In the brachistochrone problem, the motion of the body is given by the time evolution of the parameter: where t is the time since the release of the body from the point (0,0). Jakob Bernoulli's solution Johann's brother Jakob showed how 2nd differentials can be used to obtain the condition for least time. A modernized version of the proof is as follows. 
If we make a negligible deviation from the path of least time, then, for the differential triangle formed by the displacement along the path and the horizontal and vertical displacements, . On differentiation with dy fixed we get, . And finally rearranging terms gives, where the last part is the displacement for given change in time for 2nd differentials. Now consider the changes along the two neighboring paths in the figure below for which the horizontal separation between paths along the central line is d2x (the same for both the upper and lower differential triangles). Along the old and new paths, the parts that differ are, For the path of least times these times are equal so for their difference we get, And the condition for least time is, which agrees with Johann's assumption based on the law of refraction. Newton's solution Introduction In June 1696, Johann Bernoulli had used the pages of the Acta Eruditorum Lipsidae to pose a challenge to the international mathematical community: to find the form of the curve joining two fixed points so that a mass will slide down along it, under the influence of gravity alone, in the minimum amount of time. The solution was originally to be submitted within six months. At the suggestion of Leibniz, Bernoulli extended the challenge until Easter 1697, by means of a printed text called "Programma", published in Groningen, in the Netherlands. The Programma is dated 1 January 1697, in the Gregorian Calendar. This was 22 December 1696 in the Julian Calendar, in use in Britain. According to Newton's niece, Catherine Conduitt, Newton learned of the challenge at 4 pm on 29 January and had solved it by 4 am the following morning. His solution, communicated to the Royal Society, is dated 30 January. This solution, later published anonymously in the Philosophical Transactions, is correct but does not indicate the method by which Newton arrived at his conclusion. Bernoulli, writing to Henri Basnage in March 1697, indicated that even though its author, "by an excess of modesty", had not revealed his name, yet even from the scant details supplied it could be recognised as Newton's work, "as the lion by its claw" (in Latin, ex ungue Leonem). D. T. Whiteside notes that the letter in French has ex ungue Leonem preceded by the French word comme. The much quoted version tanquam ex ungue Leonem is due to David Brewster's 1855 book on the life and works of Newton. Bernoulli's intention was, Whiteside argues, simply to indicate he could tell the anonymous solution was Newton's, just as it was possible to tell that an animal was a lion given its claw; it was not meant to suggest that Bernoulli considered Newton to be the lion among mathematicians, as it has since come to be interpreted. John Wallis, who was 80 years old at the time, had learned of the problem in September 1696 from Johann Bernoulli's youngest brother Hieronymus, and had spent three months attempting a solution before passing it in December to David Gregory, who also failed to solve it. After Newton had submitted his solution, Gregory asked him for the details and made notes from their conversation. These can be found in the University of Edinburgh Library, manuscript A , dated 7 March 1697. Either Gregory did not understand Newton's argument, or Newton's explanation was very brief. 
However, it is possible, with a high degree of confidence, to construct Newton's proof from Gregory's notes, by analogy with his method to determine the solid of minimum resistance (Principia, Book 2, Proposition 34, Scholium 2). A detailed description of his solution of this latter problem is included in the draft of a letter in 1694, also to David Gregory. In addition to the minimum time curve problem, there was a second problem that Newton also solved at the same time. Both solutions appeared anonymously in Philosophical Transactions of the Royal Society, for January 1697. The Brachistochrone problem Fig. 1, shows Gregory’s diagram (except the additional line IF is absent from it, and Z, the start point has been added). The curve ZVA is a cycloid and CHV is its generating circle. Since it appears that the body is moving upward from e to E, it must be assumed that a small body is released from Z and slides along the curve to A, without friction, under the action of gravity. Consider a small arc eE, which the body is ascending. Assume that it traverses the straight line eL to point L, horizontally displaced from E by a small distance, o, instead of the arc eE. Note, that eL is not the tangent at e, and that o is negative when L is between B and E. Draw the line through E parallel to CH, cutting eL at n. From a property of the cycloid, En is the normal to the tangent at E, and similarly the tangent at E is parallel to VH. Since the displacement EL is small, it differs little in direction from the tangent at E so that the angle EnL is close to a right-angle. In the limit as the arc eE approaches zero, eL becomes parallel to VH, provided o is small compared to eE making the triangles EnL and CHV similar. Also en approaches the length of chord eE, and the increase in length, , ignoring terms in and higher, which represent the error due to the approximation that eL and VH are parallel. The speed along eE or eL can be taken as that at E, proportional to , which is as CH, since This appears to be all that Gregory’s note contains. Let t be the additional time to reach L, Therefore, the increase in time to traverse a small arc displaced at one endpoint depends only on the displacement at the endpoint and is independent of the position of the arc. However, by Newton’s method, this is just the condition required for the curve to be traversed in the minimum time possible. Therefore, he concludes that the minimum curve must be the cycloid. He argues as follows. Assuming now that Fig. 1 is the minimum curve not yet determined, with vertical axis CV, and the circle CHV removed, and Fig. 2 shows part of the curve between the infinitesimal arc eE and a further infinitesimal arc Ff a finite distance along the curve. The extra time, t, to traverse eL (rather than eE) is nL divided by the speed at E (proportional to ), ignoring terms in and higher: , At L the particle continues along a path LM, parallel to the original EF, to some arbitrary point M. As it has the same speed at L as at E, the time to traverse LM is the same as it would have been along the original curve EF. At M it returns to the original path at point f. By the same reasoning, the reduction in time, T, to reach f from M rather than from F is The difference (t – T) is the extra time it takes along the path compared to the original : plus terms in and higher (1) Because is the minimum curve, (t – T) is must be greater than zero, whether o is positive or negative. 
It follows that the coefficient of o in (1) must be zero: (2) in the limit as eE and fF approach zero. Note since is the minimum curve it has to be assumed that the coefficient of is greater than zero. Clearly there has to be 2 equal and opposite displacements, or the body would not return to the endpoint, A, of the curve. If e is fixed, and if f is considered a variable point higher up the curve, then for all such points, f, is constant (equal to ). By keeping f fixed and making e variable it is clear that is also constant. But, since points, e and f are arbitrary, equation (2) can be true only if , everywhere, and this condition characterises the curve that is sought. This is the same technique he uses to find the form of the Solid of Least Resistance. For the cycloid, , so that , which was shown above to be constant, and the Brachistochrone is the cycloid. Newton gives no indication of how he discovered that the cycloid satisfied this last relation. It may have been by trial and error, or he may have recognised immediately that it implied the curve was the cycloid. See also Aristotle's wheel paradox Beltrami identity Calculus of variations Catenary Newton's minimal resistance problem Trochoid Uniformly accelerated motion References External links Brachistochrone ( at MathCurve, with excellent animated examples) The Brachistochrone, Whistler Alley Mathematics. Table IV from Bernoulli's article in Acta Eruditorum 1697 Brachistochrones'' by Michael Trott and Brachistochrone Problem by Okay Arik, Wolfram Demonstrations Project. The Brachistochrone problem at MacTutor Geodesics Revisited — Introduction to geodesics including two ways of derivation of the equation of geodesic with brachistochrone as a special case of a geodesic. Optimal control solution to the Brachistochrone problem in Python. The straight line, the catenary, the brachistochrone, the circle, and Fermat Unified approach to some geodesics. Plane curves Mechanics
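As a numerical complement to the geometric and variational arguments above, the Python sketch below constructs the cycloid through two given points and compares its descent time with that of a straight ramp between the same points. It is illustrative only: the endpoint (2, 1), the value of g and the function names are arbitrary choices, not taken from the article or any reference.

```python
import math

def brachistochrone_time(x1, y1, g=9.81):
    """Descent time from rest at (0, 0) to (x1, y1), y measured downward,
    along the cycloid x = r(t - sin t), y = r(1 - cos t) that starts at the
    cusp and passes through (x1, y1)."""
    # (t - sin t)/(1 - cos t) is increasing on (0, 2*pi), so bisection finds
    # the unique end parameter whose ratio equals x1/y1.
    target = x1 / y1
    lo, hi = 1e-9, 2.0 * math.pi - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (mid - math.sin(mid)) / (1.0 - math.cos(mid)) < target:
            lo = mid
        else:
            hi = mid
    t_end = 0.5 * (lo + hi)
    r = y1 / (1.0 - math.cos(t_end))      # radius of the generating circle
    return t_end * math.sqrt(r / g)       # the parameter grows as sqrt(g/r) * time

def straight_ramp_time(x1, y1, g=9.81):
    """Descent time from rest down the straight ramp from (0, 0) to (x1, y1)."""
    length = math.hypot(x1, y1)
    accel = g * y1 / length               # component of gravity along the ramp
    return math.sqrt(2.0 * length / accel)

x1, y1 = 2.0, 1.0                          # sample endpoint, y downward
print("cycloid      :", round(brachistochrone_time(x1, y1), 4), "s")
print("straight ramp:", round(straight_ramp_time(x1, y1), 4), "s")
```

For this sample endpoint the cycloid is clearly faster than the straight ramp, and the end parameter exceeds π, meaning the quickest path first dips below the target point, consistent with the statement in the lead that the brachistochrone can use up to a full rotation of the cycloid.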
Brachistochrone curve
[ "Physics", "Mathematics", "Engineering" ]
5,083
[ "Plane curves", "Euclidean plane geometry", "Mechanics", "Mechanical engineering", "Planes (geometry)" ]
171,899
https://en.wikipedia.org/wiki/Agathis%20australis
Agathis australis, or kauri, is a coniferous tree in the family Araucariaceae, found north of 38°S in the northern regions of New Zealand's North Island. It is the largest (by volume) but not tallest species of tree in New Zealand, standing up to 50 m tall in the emergent layer above the forest's main canopy. The tree has smooth bark and small narrow leaves. Other common names to distinguish A. australis from other members of Agathis are southern kauri and New Zealand kauri. With its podsolization capability and regeneration pattern it can compete with faster growing angiosperms. Because it is such a conspicuous species, forest containing kauri is generally known as kauri forest, although kauri need not be the most abundant tree. In the warmer northern climate, kauri forests have a higher species richness than those found further south. Kauri even act as a foundation species that modify the soil under their canopy to create unique plant communities. Taxonomy Scottish botanist David Don described the species as Dammara australis. Agathis is derived from Greek and means 'ball of twine', a reference to the shape of the male cones, which are also known by the botanical term strobili. Australis translates in English to 'southern'. Etymology The Māori name is descended from Proto-Polynesian *kauquli, Samoan ebony or Diospyros samoensis. Description The young plant grows straight upwards and has the form of a narrow cone with branches going out along the length of the trunk. However, as it gains in height, the lowest branches are shed, preventing vines from climbing. By maturity, the top branches form an imposing crown that stands out over all other native trees, dominating the forest canopy. The flaking bark of the kauri tree defends it from parasitic plants, and accumulates around the base of the trunk. On large trees it may pile up to a height of 2 m or more. The kauri has a habit of forming small clumps or patches scattered through mixed forests. Kauri leaves are 3 to 7 cm long and 1 cm broad, tough and leathery in texture, with no midrib; they are arranged in opposite pairs or whorls of three on the stem. The seed cones are globose, 5 to 7 cm diameter, and mature 18 to 20 months after pollination; the seed cones disintegrate at maturity to release winged seeds, which are then dispersed by the wind. A single tree produces both male and female seed cones. Fertilisation of the seeds occurs by pollination, which may be driven by the same or another tree's pollen. Size Agathis australis can attain heights of 40 to 50 metres and trunk diameters big enough to rival Californian sequoias at over 5 metres. The largest kauri trees did not attain as much height or girth at ground level but contain more timber in their cylindrical trunks than comparable Sequoias with their tapering stems. The largest recorded specimen was known as The Great Ghost and grew in the mountains at the head of the Tararu Creek, which drains into the Hauraki Gulf just north of the mouth of the Waihou River (Thames). Thames Historian Alastair Isdale says the tree was 8.54 metres in diameter, and 26.83 metres in girth. It was consumed by fire in . A kauri tree at Mill Creek, Mercury Bay, known as Father of the Forests was measured in the early 1840s as 22 metres in circumference and 24 metres to the first branches. It was recorded as being killed by lightning in that period. 
Another huge tree, Kairaru, had a girth of 20.1 metres and a columnar trunk free of branches for 30.5 metres as measured by a Crown Lands ranger, Henry Wilson, in 1860. It was on a spur of Mt Tutamoe about 30 km south of Waipoua Forest near Kaihau. It was destroyed in the 1880s or 1890s when a series of huge fires swept the area. Other trees far larger than living kauri have been noted in other areas. Rumors of stumps up to 6 metres are sometimes suggested in areas such as the Billygoat Track above the Kauaeranga Valley near Thames. However, there is no good evidence for these (e.g., a documented measurement or a photograph with a person for scale). Given that over 90 per cent of the area of kauri forest standing before 1000AD was destroyed by about 1900, it is not surprising that recent records are of smaller, but still very large trees. Two large kauri fell during tropical storms in the 1970s. One of these was Toronui, in Waipoua Forest. Its diameter was larger than that of Tāne Mahuta and its clean bole larger than that of Te Matua Ngahere, and by forestry measurements was the largest standing. Another tree, Kopi, in Omahuta Forest near the standing Hokianga kauri, was the third largest with a height of 56.39 metres (185') and a diameter of 4.19 metres (13.75'). It fell in 1973. Like many ancient kauri both trees were partly hollow. Growth rate and age In general over the lifetime of the tree the growth rate tends to increase, reach a maximum, then decline. A 1987 study measured mean annual diameter increments ranging from 1.5 to 4.6 mm per year with an overall average of 2.3 mm per year. This is equivalent to 8.7 annual rings per centimetre of core, said to be half the commonly quoted figure for growth rate. The same study found only a weak relationship between age and diameter. The growth of kauri in planted and second-growth natural forests has been reviewed and compared during the development of growth and yield models for the species. Kauri in planted forests were found to have up to 12 times the volume productivity than those in natural stands at the same age. Individuals in the same 10 cm diameter class may vary in age by 300 years, and the largest individual on any particular site is often not the oldest. Trees can normally live longer than 600 years. Many individuals probably exceed 1000 years, but there is no conclusive evidence that trees can exceed 2000 years in age. By combining tree ring samples from living kauri, wooden buildings, and preserved swamp wood, a dendrochronology has been created which reaches back 4,500 years, the longest tree ring record of past climate change in the southern hemisphere. One 1700 year old swamp wood kauri that dates to approximately 42,000 years ago contains fine-scale carbon-14 fluctuations in its rings that may be reflective of the most recent magnetic field flip of the earth. Root structure and soil interaction Much like podocarps, it feeds in the organic litter near the surface of the soil through fine root hairs. This layer of the soil is composed of organic matter derived from falling leaves and branches as well as dead trees, and is constantly undergoing decomposition. On the other hand, broadleaf trees such as māhoe derive a good fraction of their nutrition in the deeper mineral layer of the soil. Although its feeding root system is very shallow, it also has several downwardly directed peg roots which anchor it firmly in the soil. 
Such a solid foundation is necessary to prevent a tree the size of a kauri from blowing over in storms and cyclones. The litter left by kauri is much more acidic than most trees, and as it decays similarly acidic compounds are liberated. In a process known as leaching, these acidic molecules pass through the soil layers with the help of rainfall, and release other nutrients trapped in clay such as nitrogen and phosphorus. This leaves these important nutrients unavailable to other trees, as they are washed down into deeper layers. This process is known as podsolization, and changes the soil colour to a dull grey. For a single tree, this leaves an area of leached soil beneath known as a cup podsol (de). This leaching process is important for kauri's survival as it competes with other species for space. Leaf litter and other decaying parts of a kauri decompose much more slowly than those of most other species. Besides its acidity, the plant also bears substances such as waxes and phenols, most notably tannins, that are harmful to microorganisms. This results in a large buildup of litter around the base of a mature tree in which its own roots feed. As with most perennials, these feeding roots also house a symbiotic fungi known as mycorrhiza which increase the plant's efficiency in taking up nutrients. In this mutualistic relationship, the fungus derives its own nutrition from the roots. In its interactions with the soil, kauri is thus able to starve its competitors of much needed nutrients and compete with much younger lineages. The fungi on kauri are a food source for the larvae of the New Zealand giraffe weevil, Lasiorhynchus barbicornis. The larvae of L. barbicornis burrow into the wood of a tree for up to two years. Then L. barbicornis exit the bark of the tree as a fully formed adult beetle. These adult L. barbicornis exit from trees in Spring and Summer and months. After emerging from the tree, these adult L. barbicornis only live for a few weeks. Distribution Local spatial distribution In terms of local topography, kauri is far from randomly dispersed. As mentioned above, kauri relies on depriving its competitors of nutrition in order to survive. However, one important consideration not discussed thus far is the slope of the land. Water on hills flows downward by the action of gravity, taking with it the nutrients in the soil. This results in a gradient from nutrient poor soil at the top of slopes to nutrient rich soils below. As nutrients leached are replaced by aqueous nitrates and phosphates from above, the kauri tree is less able to inhibit the growth of strong competitors such as angiosperms. In contrast, the leaching process is only enhanced on higher elevation. In Waipoua Forest this is reflected in higher abundances of kauri on ridge crests, and greater concentrations of its main competitors, such as tarairi, at low elevations. This pattern is known as niche partitioning, and allows more than one species to occupy the same area. Those species which live alongside kauri include tawari, a montane broadleaf tree which is normally found in higher altitudes, where nutrient cycling is naturally slow. Changes over recent geological time Kauri is found growing in its natural ecosystem north of 38°S latitude. Its southern limit stretches from the Kawhia Harbour in the west to the eastern Kaimai Range. However, its distribution has changed greatly over geological time because of climate change. 
This is shown in the recent Holocene epoch by its migration southwards after the peak of the last ice age. During this time when frozen ice sheets covered much of the world's continents, kauri was able to survive only in isolated pockets, its main refuge being in the very far north. Radiocarbon dating is one technique used by scientists to uncover the history of the tree's distribution, with stump kauri from peat swamps used for measurement. The coldest period in recent times occurred about 15,000 to 20,000 years ago, during which time kauri was apparently confined north of Kaitaia, near the northernmost point of the North Island, North Cape. Kauri requires a mean temperature of 17 °C or more for most of the year. The tree's retreat can be used as a proxy for temperature changes during this period. While not present in modern days, the Aupōuri Peninsula in the far north was a refuge for kauri, as large quantities of kauri gum were present in the soils. It remains unclear whether kauri recolonised the North Island from a single refuge in the far north or from scattered pockets of isolated stands that managed to survive despite climatic conditions. It spread south through Whangārei, past Dargaville and as far south as Waikato, attaining its peak distribution during the years 3000 BP to 2000 BP. There is some suggestion that it has receded somewhat since then, which may indicate temperatures have declined slightly. During the peak of its movement southwards, it was travelling as fast as 200 metres per year. Its southward spread seems relatively rapid for a tree that can take a millennium to reach complete maturity. This can be explained by its life history pattern. Kauri relies on wind for pollination and seed dispersal, while many other native trees have their seeds carried large distances by frugivores (animals which eat fruit) such as the kererū (native pigeon). However, kauri trees can produce seeds while relatively young, taking only 50 years or so before giving rise to their own offspring. This trait makes them somewhat like a pioneer species, despite the fact that their long lifespan is characteristic of K-selected species. In good conditions, where access to water and sunlight are above average, diameters in excess of 15 centimetres and seed production can occur inside 15 years. Regeneration and life history Just as the niche of kauri is differentiated through its interactions with the soil, it also has a separate regeneration 'strategy' compared to its broadleaf neighbours. The relationship is very similar to the podocarp-broadleaf forests further south. Kauri demand much more light and require larger gaps to regenerate than such broadleaf trees as pūriri and kohekohe that show far more shade tolerance. Unlike kauri, these broadleaf species can regenerate in areas where lower levels of light reach ground level, for example from a single branch falling off. Kauri trees must therefore remain alive long enough for a large disturbance to occur, allowing them sufficient light to regenerate. In areas where large amounts of forest are destroyed, such as by logging, kauri seedlings are able to regenerate much more easily due not only to increased sunlight, but their relatively strong resistance to wind and frosts. Kauri occupy the emergent layer of the forest, where they are exposed to the effects of the weather; however, the smaller trees that dominate the main canopy are sheltered both by the emergent trees above and by each other. 
Left in open areas without protection, these smaller trees are far less capable of regenerating. When there is a disturbance severe enough to favour their regeneration, kauri trees regenerate en masse, producing a generation of trees of similar age after each disturbance. The distribution of kauri allows researchers to deduce when and where disturbances have occurred, and how large they may have been; the presence of abundant kauri may indicate that an area is prone to disturbance. Kauri seedlings can still occur in areas with low light but mortality rates increase for such seedlings, and those that survive self-thinning and grow to sapling stage tend to be found in higher light environments. During periods with less disturbance kauri tends to lose ground to broadleaf competitors which can better tolerate shaded environments. In the complete absence of disturbance, kauri tends to become rare as it is excluded by its competitors. Kauri biomass tends to decrease during such times, as more biomass becomes concentrated in angiosperm species like tōwai. Kauri trees also tend to become more randomly distributed in age, with each tree dying at a different point in time, and regeneration gaps becoming rare and sporadic. Over thousands of years these varying regeneration strategies produce a tug of war effect where kauri retreats uphill during periods of calm, then takes over lower areas briefly during mass disturbances. Although such trends cannot be observed in a human lifetime, research into current patterns of distribution, behaviour of species in experimental conditions, and study of pollen sediments (see palynology) have helped shed light on the life history of kauri. Kauri seeds may generally be taken from mature cones in late March. Each scale on a cone contains a single winged seed approximately 5 mm by 8 mm and attached to a thin wing perhaps half as large again. The cone is fully open and dispersed within only two to three days of starting. Studies show that kauri develop root grafts through which they share water and nutrients with neighbours of the same species. Ethnobotany Deforestation Heavy logging, which began around 1820 and continued for a century, has considerably decreased the number of kauri trees. It has been estimated that before 1840, the kauri forests of northern New Zealand occupied at least 12,000 square kilometres. The British Royal Navy sent four vessels, HMS Coromandel (1821), HMS Dromedary (1821), (1840), and HMS Tortoise (1841) to gather kauri-wood spars. By 1900, less than 10 per cent of the original kauri survived. By the 1950s this area had decreased to about 1,400 square kilometres in 47 forests depleted of their best kauri. It is estimated that today, there is 4 per cent of uncut forest left in small pockets. Estimates are that around half of the timber was accidentally or deliberately burnt. More than half of the remainder had been exported to Australia, Britain, and other countries, while the balance was used locally to build houses and ships. Much of the timber was sold for a return sufficient only to cover wages and expenses. From 1871 to 1895 the receipts indicate a rate of about 8 shillings (around NZ$20 in 2003) per 100 superficial feet (34 shillings/m3). The Government continued to sell large areas of kauri forests to sawmillers who, under no restrictions, took the most effective and economical steps to secure the timber, resulting in much waste and destruction. 
At a sale in 1908 more than 5,000 standing kauri trees, totalling about 20,000,000 superficial feet (47,000 m3), were sold for less than £2 per tree (£2 in 1908 equates to around NZ$100 in 2003). It is said that in 1890 the royalty on standing timber fell in some cases to as low as twopence (NZ$0.45 in 2003) per 100 superficial feet (8 pence/m3), though the expense of cutting and removing it to the mills was typically great due to the difficult terrain where they were located. Probably the most controversial kauri logging decision in the last century was that of the National Government to initiate clear fell logging of the Warawara state forest (North of the Hokianga) in the late 1960s. This created a national outcry as this forest contains the second largest volume of kauri after the Waipoua forest and was until that time, essentially unlogged (Adams, 1980). The plan also involved considerable cost, requiring a long road to be driven up a steep high plateau into the heart of the protected area. Because the stands of kauri were dense, the ecological destruction in the affected plateau area (approximately a fifth of the forest by area, and a quarter by volume of timber) was essentially complete (as of the early 1990s most of the affected area contained a thick covering of native grasses with little or no kauri regeneration). Logging was stopped in fulfillment of an election pledge by the Labour Government of 1972. When the National Party was reelected in 1975, the ban on kauri logging in the Warawara remained in place, but was soon replaced by policies encouraging the logging of giant tōtara and other podocarps in the central North Island. The outcry over the Warawara was an important stepping stone towards the legal protection of the small percentage of remaining virgin kauri-podocarp forest in New Zealand's Government-owned forests. Uses Although today its use is far more restricted, in the past the size and strength of kauri timber made it a popular wood for construction and ship building, particularly for masts of sailing ships because of its parallel grain and the absence of branches for much of its height. Kauri crown and stump wood was much appreciated for its beauty, and was sought after for ornamental wood panelling as well as high-end furniture. Although not as highly prized, the light colour of kauri trunk wood made it also well-suited for more utilitarian furniture construction, as well as for use in the fabrication of cisterns, barrels, bridge construction material, fences, moulds for metal forges, large rollers for the textile industry, railway sleepers and cross bracing for mines and tunnels. In the late 19th and early 20th centuries Kauri gum (semi-fossilised kauri resin) was a valuable commodity, particularly for varnish, spurring the development of a gum-digger industry. Today, the kauri is being considered as a long-term carbon sink. This is because estimates of the total carbon content in living above ground biomass and dead biomass of mature kauri forest are the second highest of any forest type recorded anywhere in the world. The estimated total carbon capture is up to nearly 1000 tonnes per hectare. In this capacity, kauri are bettered only by mature Eucalyptus regnans forest, and are far higher than any tropical or boreal forest type yet recorded. 
It is also conjectured that the process of carbon capture does not reach equilibrium, which along with no need of direct maintenance, makes kauri forests a potentially attractive alternative to short rotation forestry options such as Pinus radiata. Timber Technical specifications Moisture content of dried wood: 12 per cent Density of wood: 560 kg/m3 Tensile strength: 88 MPa Modulus of elasticity: 9.1 GPa After felled kauri wood dries to a 12 per cent moisture content, the tangential contraction is 4.1 per cent and the radial contraction is 2.3 per cent. Kauri is considered a first rate timber. The whiter sapwood is generally slightly lighter in weight. Kauri is not highly resistant to rot and when used in boatbuilding must be protected from the elements with paint, varnish or epoxy to avoid rot. Its popularity with boatbuilders is due to its very long, clear lengths, its relatively light weight and its beautiful sheen when oiled or varnished. Kauri wood planes and saws easily. Its wood holds screws and nails very well and does not readily split, crack, or warp. Kauri wood darkens with age to a richer golden brown colour. Very little New Zealand kauri is now sold, and the most commonly available kauri in New Zealand is Fiji kauri, which is very similar in appearance but lighter in weight. Swamp kauri Prehistoric kauri forests have been preserved in waterlogged soils as swamp kauri. A considerable number of kauri have been found buried in salt marshes, resulting from ancient natural changes such as volcanic eruptions, sea-level changes and floods. Such trees have been radiocarbon dated to 50,000 years ago or older. The bark and the seed cones of the trees often survive together with the trunk, although when excavated and exposed to the air, these parts undergo rapid deterioration. The quality of the disinterred wood varies. Some is in good shape, comparable to that of newly felled kauri, although often lighter in colour. The colour can be improved by the use of natural wood stains to heighten the details of the grain. After a drying process, such ancient kauri can be used for furniture, but not for construction. Conservation The small remaining pockets of kauri forest in New Zealand have survived in areas that were not subjected to burning by Māori and were too inaccessible for European loggers. The largest area of mature kauri forest is Waipoua Forest in Northland. Mature and regenerating kauri can also be found in other National and Regional Parks such as Puketi and Omahuta Forests in Northland, the Waitākere Ranges near Auckland, and Coromandel Forest Park on the Coromandel Peninsula. The importance of Waipoua Forest in relation to the kauri was that it remained the only kauri forest retaining its former virgin condition, and that it was extensive enough to give reasonable promise of permanent survival. On 2 July 1952 an area of over 80 km2 of Waipoua was proclaimed a forest sanctuary after a petition to the Government. The zoologist William Roy McGregor was one of the driving forces in this movement, writing an 80-page illustrated pamphlet on the subject, which proved an effective manifesto for conservation. Along with the Warawara to the North, Waipoua Forest contains three quarters of New Zealand's remaining kauri. Kauri Grove on the Coromandel Peninsula is another area with a remaining cluster of kauri, and includes the Siamese Kauri, two trees with a conjoined lower trunk. 
In 1921 a philanthropic Cornishman named James Trounson sold to the Government for £40,000, a large area adjacent to a few acres of Crown land and said to contain at least 4,000 kauri trees. From time to time Trounson gifted additional land, until what is known as Trounson Park comprised a total of 4 km2. The most famous specimens are Tāne Mahuta and Te Matua Ngahere in Waipoua Forest. These two trees have become tourist attractions because of their size and accessibility. Tane Mahuta, named after the Māori forest god, is the biggest existing kauri with a girth of , a trunk height of , a total height of and a total volume including the crown of . Te Matua Ngahere, which means 'Father of the Forest', is smaller but stouter than Tane Mahuta, with a girth (circumference) of . Important note: all the measurements above were taken in 1971. Kauri is common as a specimen tree in parks and gardens throughout New Zealand, prized for the distinctive look of young trees, its low maintenance once established (although seedlings are frost tender). Kauri dieback Kauri dieback was observed in the Waitākere Ranges caused by Phytophthora cinnamomi in the 1950s, again on Great Barrier Island in 1972 linked to a different pathogen, Phytophthora agathidicida and subsequently spread to kauri forest on the mainland. The disease, known as kauri dieback or kauri collar rot, is believed to be over 300 years old and causes yellowing leaves, thinning canopy, dead branches, lesions that bleed resin, and tree death. Phytophthora agathidicida was identified as a new species in April 2008. Its closest known relative is Phytophthora katsurae. The pathogen is believed to be spread on people's shoes or by mammals, particularly feral pigs. A collaborative response team has been formed to work on the disease. The team includes MAF Biosecurity, the Conservation Department, Auckland and Northland regional councils, Waikato Regional Council, and Bay of Plenty Regional Council. The team is charged with assessing the risk, determining methods and their feasibility to limit the spread, collecting more information (e.g. how widespread), and ensuring a coordinated response. The Department of Conservation has issued guidelines to prevent the spread of the disease, including keeping to defined tracks, cleaning footwear before and after entering kauri forest areas, and staying away from kauri roots. See also Gum-digger Forestry in New Zealand Kauri Museum List of superlative trees Northland temperate kauri forest References Bibliography External links Agathis australis description The Gymnosperm Database Agathis australis collection at Museum of New Zealand Te Papa Tongarewa Kauri forest in Te Ara - the Encyclopedia of New Zealand Kauri at the New Zealand Department of Conservation Keep Kauri Standing - Kauri dieback information Kauri Gum entry from the 1966 Encyclopaedia of New Zealand Masters thesis on growth and yield of NZ kauri australis Trees of New Zealand Conservation dependent plants Kauri gum
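As a small worked example of the timber figures given in the Technical specifications section, the Python sketch below applies the quoted density and contraction percentages to a plank; the plank dimensions themselves are hypothetical and chosen only for illustration.

```python
# Illustrative arithmetic with the kauri timber figures quoted above
# (density 560 kg/m^3 at 12 % moisture content, tangential contraction 4.1 %,
# radial contraction 2.3 %). The plank dimensions are hypothetical.
green_length, green_width, green_thickness = 4.0, 0.200, 0.050  # metres
density_dry = 560.0                                              # kg/m^3

# Drying from green to 12 % moisture: width sawn tangentially, thickness radially.
dry_width = green_width * (1 - 0.041)
dry_thickness = green_thickness * (1 - 0.023)
dry_volume = green_length * dry_width * dry_thickness            # longitudinal shrinkage neglected
dry_mass = density_dry * dry_volume

print(f"dried width    : {dry_width * 1000:.1f} mm")
print(f"dried thickness: {dry_thickness * 1000:.1f} mm")
print(f"dried mass     : {dry_mass:.1f} kg")
```

On these assumed dimensions, a green-sawn board would lose roughly 8 mm of width and about 1 mm of thickness on drying, and weigh about 21 kg at 12 per cent moisture content.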
Agathis australis
[ "Physics" ]
5,751
[ "Amorphous solids", "Unsolved problems in physics", "Kauri gum" ]
171,902
https://en.wikipedia.org/wiki/K%C5%8Dwhai
Kōwhai ( or ) are small woody legume trees within the genus Sophora, in the family Fabaceae, that are native to New Zealand. There are eight species, with Sophora microphylla and S. tetraptera being as large trees. Their natural habitat is beside streams and on the edges of forest, in lowland or mountain open areas. Kōwhai trees grow throughout the country and are a common feature in New Zealand gardens. Outside of New Zealand, kōwhai tend to be restricted to mild temperate maritime climates. The blooms of the kōwhai are widely regarded as being New Zealand's unofficial national flower. As such, it is often incorporated as a visual shorthand for the country, such as in Meghan Markle's wedding veil, which included distinctive flora representing all Commonwealth nations. The Māori word kōwhai is related to words in some other Polynesian languages that refer to different species that look superficially similar, such as (Sesbania tomentosa), (Sesbania grandiflora) and Marquesan kohai (Caesalpinia pulcherrima). Kōwhai is also the Māori word for the colour yellow. The spelling kowhai (without a macron) is common in New Zealand English. Species The eight species of kōwhai are: Sophora chathamica, coastal kōwhai Sophora fulvida, Waitakere kōwhai Sophora godleyi, Godley's kōwhai Sophora longicarinata, limestone kōwhai Sophora microphylla, small-leaved kōwhai Sophora molloyi, Cook Strait kōwhai Sophora prostrata, prostrate kōwhai Sophora tetraptera, large-leaved kōwhai Description and ecology Most species of kōwhai grow to around 8 m high and have fairly smooth bark with small leaves. S. microphylla has smaller leaves (0.5–0.7 cm long by 0.3–0.4 cm wide) and flowers (2.5–3.5 cm long) than S. tetraptera, which has leaves of 1–2 cm long and flowers that are 3–5 cm long. The very distinctive seed pods that appear after flowering are almost segmented, and each contains six or more smooth, hard seeds. Most species have yellow seeds, but Sophora prostrata has black ones. The seeds of Sophora microphylla can be very numerous and the presence of many hundreds of these distinctively yellow seeds on the ground quickly identifies the presence of a nearby kōwhai tree. Many species of kōwhai are semi-deciduous and lose most of their leaves immediately after flowering in October or November, but quickly produce new leaves. Flowering of kōwhai is staggered from July through to November, meaning each tree will get attention from birds such as tūī, kererū and bellbird. Tūī are very attracted to kōwhai and will fly long distances to get a sip of its nectar. The wood of kōwhai is dense and strong and has been used in the past for tools and machinery. Sophora is one of the four genera of native legumes in New Zealand; the other three are Carmichaelia, Clianthus, and Montigena. Studies of accumulated dried vegetation in the pre-human mid-late Holocene period suggests a low Sophora microphylla forest ecosystem in Central Otago that was used and perhaps maintained by giant moa birds, for both nesting material and food. The forests and moa no longer existed when European settlers came to the area in the 1850s. Cultivation Kōwhai can be grown from seed or tip cuttings in spring and autumn. The dark or bright yellow seeds germinate best after chitting and being soaked in water for several hours. They can also benefit from a several minute submersion in boiling water to soften the hard shell and then being kept in the same water, taken off boil, for several hours to soak up the water. 
Young kōwhai are quite frost tender, so cuttings or seedlings should be planted in their second year when they are 30 cm or higher. If grown from seed, kōwhai can take many years to flower; the number of years varies depending on the species. S. prostrata, sometimes called "little baby", is used as a bonsai tree. It grows up to two metres high and has divaricating stems and sparse, smallish leaves. Toxicity All parts of the kōwhai, particularly the seeds, are poisonous to humans. However, there do not appear to have been any confirmed cases in humans of severe poisoning following ingestion of kōwhai in New Zealand. Traditional Māori use Traditionally the Māori used the flexible branches as a construction material in their houses and to snare birds. The kōwhai flowers were a source of yellow dye. Also, when the kōwhai flowers bloom, in late winter and early spring, it is time to plant kumara (sweet potato). Māori also used the kōwhai tree as medicine. Wedges made of kōwhai stem were used to split wood, and the timber was used for fences, in whare (Māori hut) construction, and for implements and weapons. The bark was heated in a calabash with hot stones and made into a poultice to treat wounds, rubbed on a sore back, or made into an infusion to treat bruising or muscular pains. If someone was bitten by a seal, an infusion (wai kōwhai) was prepared from kōwhai and applied to the wounds, and the patient was said to recover within days. References Sophora Trees of New Zealand Plants used in traditional Māori medicine National symbols of New Zealand Plant common names
Kōwhai
[ "Biology" ]
1,205
[ "Plant common names", "Common names of organisms", "Plants" ]
171,905
https://en.wikipedia.org/wiki/Langlands%20program
In mathematics, the Langlands program is a set of conjectures about connections between number theory and geometry. It was proposed by Robert Langlands. It seeks to relate Galois groups in algebraic number theory to automorphic forms and representation theory of algebraic groups over local fields and adeles. It was described by Edward Frenkel as a "grand unified theory of mathematics." As an explanation for the non-specialist: the program provides a generalised and somewhat unified framework for characterising, by analytical methods, the structures that underpin numbers and their abstractions, and thus the invariants on which they are based. The Langlands program consists of theoretical abstractions that challenge even specialist mathematicians. Broadly, the fundamental lemma of the project links the generalized fundamental representation of a finite field, via its group extension, to the automorphic forms under which it is invariant. This is accomplished through abstraction to higher-dimensional integration, by an equivalence to a certain analytical group as an absolute extension of its algebra. This allows an analytical, functional construction of powerful invariance transformations for a number field to its own algebraic structure. The meaning of such a construction is nuanced, but its specific solutions and generalizations are far-reaching: a proof of existence for such theoretical objects would imply an analytical method for constructing the categorical mapping of fundamental structures for virtually any number field. As an analogue of the possible exact distribution of primes, the Langlands program offers a potential general tool for resolving questions of invariance at the level of generalized algebraic structures, which in turn permits a somewhat unified analysis of arithmetic objects through their automorphic functions. This description is at once a reduction and an over-generalization of the program's proper theorems, although these mathematical concepts illustrate its basic ideas. Background The Langlands program is built on existing ideas: the philosophy of cusp forms formulated a few years earlier by Harish-Chandra and Gelfand, Harish-Chandra's work on and approach to semisimple Lie groups, and, in technical terms, the trace formula of Selberg and others. What was new in Langlands' work, besides technical depth, was the proposed connection to number theory, together with the rich organisational structure it hypothesised (so-called functoriality). Harish-Chandra's work exploited the principle that what can be done for one semisimple (or reductive) Lie group can be done for all. Therefore, once the role of some low-dimensional Lie groups such as GL(2) in the theory of modular forms had been recognised, and with hindsight GL(1) in class field theory, the way was open to speculation about GL(n) for general n > 2. The cusp form idea came out of the cusps on modular curves but also had a meaning visible in spectral theory as "discrete spectrum", contrasted with the "continuous spectrum" from Eisenstein series. It becomes much more technical for bigger Lie groups, because the parabolic subgroups are more numerous. In all these approaches technical methods were available, often inductive in nature and based on Levi decompositions amongst other matters, but the field remained demanding. From the perspective of modular forms, examples such as Hilbert modular forms, Siegel modular forms, and theta-series had been developed.
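To make the role of GL(2) in the theory of modular forms concrete, the classical weight-k transformation law can be displayed as follows; this is a standard statement included only as an illustration, not notation taken from Langlands' own papers, and the passage from such a form to GL(2) is the usual adelic reinterpretation.

\[
  f\!\left(\frac{az+b}{cz+d}\right) \;=\; (cz+d)^{k}\, f(z)
  \qquad \text{for all } \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{SL}_2(\mathbb{Z})
  \text{ and } z \text{ in the upper half-plane.}
\]

A holomorphic f satisfying this law (together with the usual growth conditions) gives rise, adelically, to an automorphic representation of GL(2) over the rationals; it is in this sense that GL(2) governs the classical theory of modular forms.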
Objects The conjectures have evolved since Langlands first stated them. The Langlands conjectures apply across many different groups over many different fields for which they can be stated, and each field offers several versions of the conjectures. Some versions are vague, or depend on objects such as Langlands groups, whose existence is unproven, or on the L-group, which has several non-equivalent definitions. Objects for which Langlands conjectures can be stated: Representations of reductive groups over local fields (with different subcases corresponding to archimedean local fields, p-adic local fields, and completions of function fields) Automorphic forms on reductive groups over global fields (with subcases corresponding to number fields or function fields). Analogues for finite fields. More general fields, such as function fields over the complex numbers. Conjectures The conjectures can be stated variously in ways that are closely related but not obviously equivalent. Reciprocity The starting point of the program was Emil Artin's reciprocity law, which generalizes quadratic reciprocity. The Artin reciprocity law applies to a Galois extension of an algebraic number field whose Galois group is abelian; it assigns L-functions to the one-dimensional representations of this Galois group, and states that these L-functions are identical to certain Dirichlet L-series or more general series (that is, certain analogues of the Riemann zeta function) constructed from Hecke characters. The precise correspondence between these different kinds of L-functions constitutes Artin's reciprocity law. For non-abelian Galois groups and higher-dimensional representations of them, L-functions can be defined in a natural way: Artin L-functions. Langlands' insight was to find the proper generalization of Dirichlet L-functions, which would allow the formulation of Artin's statement in Langlands' more general setting. Hecke had earlier related Dirichlet L-functions to automorphic forms (holomorphic functions on the upper half plane of the complex number plane that satisfy certain functional equations). Langlands then generalized these to automorphic cuspidal representations, which are certain infinite-dimensional irreducible representations of the general linear group GL(n) over the adele ring of the rational numbers. (This ring tracks all the completions of the rationals; see p-adic numbers.) Langlands attached automorphic L-functions to these automorphic representations, and conjectured that every Artin L-function arising from a finite-dimensional representation of the Galois group of a number field is equal to one arising from an automorphic cuspidal representation. This is known as his reciprocity conjecture. Roughly speaking, this conjecture gives a correspondence between automorphic representations of a reductive group and homomorphisms from a Langlands group to an L-group. This offers numerous variations, in part because the definitions of Langlands group and L-group are not fixed. Over local fields this is expected to give a parameterization of L-packets of admissible irreducible representations of a reductive group over the local field. For example, over the real numbers, this correspondence is the Langlands classification of representations of real reductive groups. Over global fields, it should give a parameterization of automorphic forms.
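For concreteness, the two kinds of L-function being matched can be written in their standard textbook shape; the Euler factors at ramified primes are omitted, and this display is an illustration rather than a quotation of Langlands' formulation.

\[
  L(s, \chi) \;=\; \sum_{n \ge 1} \frac{\chi(n)}{n^{s}} \;=\; \prod_{p} \bigl(1 - \chi(p)\, p^{-s}\bigr)^{-1}
  \qquad \text{(Dirichlet/Hecke side)},
\]
\[
  L(s, \rho) \;=\; \prod_{p\ \text{unramified}} \det\!\bigl(1 - \rho(\mathrm{Frob}_p)\, p^{-s}\bigr)^{-1}
  \qquad \text{(Artin/Galois side)}.
\]

Artin reciprocity identifies the second object with the first when the Galois group is abelian and ρ is one-dimensional; the reciprocity conjecture replaces the character χ by an automorphic cuspidal representation of GL(n) and asserts the analogous equality of L-functions.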
Functoriality The functoriality conjecture states that a suitable homomorphism of L-groups is expected to give a correspondence between automorphic forms (in the global case) or representations (in the local case). Roughly speaking, the Langlands reciprocity conjecture is the special case of the functoriality conjecture when one of the reductive groups is trivial. Generalized functoriality Langlands generalized the idea of functoriality: instead of using the general linear group GL(n), other connected reductive groups can be used. Furthermore, given such a group G, Langlands constructs the Langlands dual group LG, and then, for every automorphic cuspidal representation of G and every finite-dimensional representation of LG, he defines an L-function. One of his conjectures states that these L-functions satisfy a certain functional equation generalizing those of other known L-functions. He then goes on to formulate a very general "Functoriality Principle". Given two reductive groups and a (well behaved) morphism between their corresponding L-groups, this conjecture relates their automorphic representations in a way that is compatible with their L-functions. This functoriality conjecture implies all the other conjectures presented so far. It is of the nature of an induced representation construction—what in the more traditional theory of automorphic forms had been called a 'lifting', known in special cases, and so is covariant (whereas a restricted representation is contravariant). Attempts to specify a direct construction have only produced some conditional results. All these conjectures can be formulated for more general fields in place of : algebraic number fields (the original and most important case), local fields, and function fields (finite extensions of Fp(t) where p is a prime and Fp(t) is the field of rational functions over the finite field with p elements). Geometric conjectures The geometric Langlands program, suggested by Gérard Laumon following ideas of Vladimir Drinfeld, arises from a geometric reformulation of the usual Langlands program that attempts to relate more than just irreducible representations. In simple cases, it relates -adic representations of the étale fundamental group of an algebraic curve to objects of the derived category of -adic sheaves on the moduli stack of vector bundles over the curve. A 9-person collaborative project led by Dennis Gaitsgory announced a proof of the (categorical, unramified) geometric Langlands conjecture leveraging Hecke eigensheaf as part of the proof. Status The Langlands conjectures for GL(1, K) follow from (and are essentially equivalent to) class field theory. Langlands proved the Langlands conjectures for groups over the archimedean local fields (the real numbers) and (the complex numbers) by giving the Langlands classification of their irreducible representations. Lusztig's classification of the irreducible representations of groups of Lie type over finite fields can be considered an analogue of the Langlands conjectures for finite fields. Andrew Wiles' proof of modularity of semistable elliptic curves over rationals can be viewed as an instance of the Langlands reciprocity conjecture, since the main idea is to relate the Galois representations arising from elliptic curves to modular forms. Although Wiles' results have been substantially generalized, in many different directions, the full Langlands conjecture for remains unproved. 
In 1998, Laurent Lafforgue proved Lafforgue's theorem verifying the Langlands conjectures for the general linear group GL(n, K) for function fields K. This work continued earlier investigations by Drinfeld, who proved the case GL(2, K) in the 1980s. In 2018, Vincent Lafforgue established the global Langlands correspondence (the direction from automorphic forms to Galois representations) for connected reductive groups over global function fields. Local Langlands conjectures Kutzko proved the local Langlands conjectures for the general linear group GL(2, K) over local fields. Laumon, Rapoport, and Stuhler proved the local Langlands conjectures for the general linear group GL(n, K) for positive-characteristic local fields K; their proof uses a global argument. Harris and Taylor proved the local Langlands conjectures for the general linear group GL(n, K) for characteristic 0 local fields K, and Henniart gave another proof; both proofs use a global argument. Scholze later gave another proof. Fundamental lemma In 2008, Ngô Bảo Châu proved the "fundamental lemma", which was originally conjectured by Langlands and Shelstad in 1983 and is required in the proofs of some important conjectures in the Langlands program. Implications To a lay reader or even a nonspecialist mathematician, abstractions within the Langlands program can be somewhat impenetrable. However, there are some strong and clear implications for proof or disproof of the fundamental Langlands conjectures. As the program posits a powerful connection between analytic number theory and generalizations of algebraic geometry, the idea of 'functoriality' between abstract algebraic representations of number fields and their analytical prime constructions results in powerful functional tools allowing an exact quantification of prime distributions. This, in turn, yields the capacity for classification of diophantine equations and further abstractions of algebraic functions. Furthermore, if the reciprocity of such generalized algebras for the posited objects exists, and if their analytical functions can be shown to be well-defined, some very deep results in mathematics could be within reach of proof. Examples include: rational points on elliptic curves, topological construction of algebraic varieties, and the famous Riemann hypothesis. Such proofs would be expected to utilize abstract solutions in objects of generalized analytical series, each of which relates to the invariance within structures of number fields. Additionally, some connections between the Langlands program and M-theory have been posited, as their dualities connect in nontrivial ways, providing potential exact solutions in superstring theory (as was similarly done in group theory through monstrous moonshine). Simply put, the Langlands project implies a deep and powerful framework of solutions, touching the most fundamental areas of mathematics through high-order generalizations of exact solutions of algebraic equations in terms of analytical functions embedded in geometric forms. It allows a unification of many distant mathematical fields into a formalism of powerful analytical methods. See also Jacquet–Langlands correspondence Erlangen program Notes References External links The work of Robert Langlands Zeta and L-functions Representation theory of Lie groups Automorphic forms Conjectures History of mathematics
Langlands program
[ "Mathematics" ]
2,745
[ "Unsolved problems in mathematics", "Langlands program", "Conjectures", "Mathematical problems", "Number theory" ]
171,915
https://en.wikipedia.org/wiki/Lee%20Smolin
Lee Smolin (; born June 6, 1955) is an American theoretical physicist, a faculty member at the Perimeter Institute for Theoretical Physics, an adjunct professor of physics at the University of Waterloo, and a member of the graduate faculty of the philosophy department at the University of Toronto. Smolin's 2006 book The Trouble with Physics criticized string theory as a viable scientific theory. He has made contributions to quantum gravity theory, in particular the approach known as loop quantum gravity. He advocates that the two primary approaches to quantum gravity, loop quantum gravity and string theory, can be reconciled as different aspects of the same underlying theory. He also advocates an alternative view on space and time that he calls temporal naturalism. His research interests also include cosmology, elementary particle theory, the foundations of quantum mechanics, and theoretical biology. Personal life Smolin was born in New York City to Michael Smolin, an environmental and process engineer and Pauline Smolin, a playwright. Smolin said his parents were Jewish followers of the Fourth Way, founded by George Gurdjieff, an Armenian mystic. Smolin described himself as Jewish. His brother, David M. Smolin, became a professor at the Cumberland School of Law in Birmingham, Alabama. Smolin dropped out of Walnut Hills High School in Cincinnati, Ohio. His interest in physics began at that time when he read Einstein's reflections on the two tasks he would leave unfinished at his death: 1, to make sense of quantum mechanics, and, 2 to unify that understanding of the quanta with gravity. Smolin would take it as his "mission" to try to complete these tasks. Shortly afterward, he browsed the Physics Library at the University of Cincinnati, where he came across Louis de Broglie's pilot wave theory in French. "I still can close my eyes," Smolin wrote in "Einstein's Unfinished Revolution" "and see a page of the book, displaying the equation that relates wavelength to momentum." Soon after that he would "talk his way into" Hampshire College, find great teachers, and get lucky in his applications to graduate school. As to his mission of solving Einstein's two big questions, by Smolin's account, he did not succeed; "Very unfortunately, neither has anyone else." Smolin has stayed involved with theatre becoming a scientific consultant for such plays as A Walk in the Woods by Lee Blessing, Background Interference by Drucilla Cornell, and Infinity by Hannah Moscovitch. Smolin is married to Dina Graser, a lawyer and urban policy consultant in Toronto, Ontario. He was previously married to Fotini Markopoulou-Kalamara. His brother is law professor David M. Smolin. Career He held postdoctoral research positions at the Institute for Advanced Study in Princeton, New Jersey, the Kavli Institute for Theoretical Physics in Santa Barbara, and the University of Chicago, before becoming a faculty member at Yale, Syracuse, and Pennsylvania State Universities. He was a visiting scholar at the Institute for Advanced Study in 1995 and a visiting professor at Imperial College London (1999-2001), before becoming one of the founding faculty members at the Perimeter Institute in 2001. Theories and work Loop quantum gravity Smolin contributed to the theory of loop quantum gravity (LQG) in collaborative work with Ted Jacobson, Carlo Rovelli, Louis Crane, Abhay Ashtekar and others. 
LQG is an approach to the unification of quantum mechanics with general relativity which utilizes a reformulation of general relativity in the language of gauge field theories, which allows the use of techniques from particle physics, particularly the expression of fields in terms of the dynamics of loops. With Rovelli he discovered the discreteness of areas and volumes and found their natural expression in terms of a discrete description of quantum geometry in terms of spin networks. In recent years he has focused on connecting LQG to phenomenology by developing implications for experimental tests of spacetime symmetries as well as investigating ways elementary particles and their interactions could emerge from spacetime geometry. Background independent approaches to string theory Between 1999 and 2002, Smolin made several proposals to provide a fundamental formulation of string theory that does not depend on approximate descriptions involving classical background spacetime models. Experimental tests of quantum gravity Smolin is among those theorists who have proposed that the effects of quantum gravity can be experimentally probed by searching for modifications in special relativity detected in observations of high energy astrophysical phenomena, including very high energy cosmic rays and photons and neutrinos from gamma ray bursts. Among Smolin's contributions are the co-invention of doubly special relativity (with João Magueijo, independently of work by Giovanni Amelino-Camelia), and of relative locality (with Amelino-Camelia, Laurent Freidel, and Jerzy Kowalski-Glikman). Foundations of quantum mechanics Smolin has worked since the early 1980s on a series of proposals for hidden variables theories, which would be non-local deterministic theories which would give a precise description of individual quantum phenomena. In recent years, he has pioneered two new approaches to the interpretation of quantum mechanics suggested by his work on the reality of time, called the real ensemble interpretation and the principle of precedence. Cosmological natural selection Smolin's hypothesis of cosmological natural selection, also called the fecund universes theory, suggests that a process analogous to biological natural selection applies at the grandest of scales. Smolin published the idea in 1992 and summarized it in a book aimed at a lay audience called The Life of the Cosmos. Black holes have a role in this natural selection. In fecund theory, a collapsing black hole causes the emergence of a new universe on the "other side", whose fundamental constant parameters (masses of elementary particles, Planck constant, elementary charge, and so forth) may differ slightly from the universe where the black hole collapsed. Each universe gives rise to as many new universes — its "offspring" — as it has black holes, giving an evolutionary advantage to universes in which black holes are common, which are similar to our own. The theory thus explains why our universe appears "fine-tuned" for the emergence of life as we know it. Because the theory applies the evolutionary concepts of "reproduction", "mutation", and "selection" to universes, it is formally analogous to models of population biology. When Smolin published the theory in 1992, he proposed as a prediction of his theory that no neutron star should exist with a mass of more than 1.6 times the mass of the sun. Later this figure was raised to two solar masses following more precise modeling of neutron star interiors by nuclear astrophysicists. 
Smolin also predicted that inflation, if true, must only be in its simplest form, governed by a single field and parameter. Contributions to the philosophy of physics Smolin has contributed to the philosophy of physics through a series of papers and books that advocate the relational, or Leibnizian, view of space and time. Since 2006, he has collaborated with the Brazilian philosopher and Harvard Law School professor Roberto Mangabeira Unger on the issues of the reality of time and the evolution of laws; in 2014 they published a book, its two parts being written separately. A book length exposition of Smolin's philosophical views appeared in April 2013. Time Reborn argues that physical science has made time unreal while, as Smolin insists, it is the most fundamental feature of reality: "Space may be an illusion, but time must be real" (p. 179). An adequate description according to him would give a Leibnizian universe: indiscernibles would not be admitted and every difference should correspond to some other difference, as the principle of sufficient reason would have it. A few months later a more concise text was made available in a paper with the title Temporal Naturalism. The Trouble with Physics Smolin's 2006 book The Trouble with Physics explored the role of controversy and disagreement in the progress of science. It argued that science progresses fastest if the scientific community encourages the widest possible disagreement among trained and accredited professionals prior to the formation of consensus brought about by experimental confirmation of predictions of falsifiable theories. He proposed that this meant the fostering of diverse competing research programs, and that premature formation of paradigms not forced by experimental facts can slow the progress of science. As a case study, The Trouble with Physics focused on the issue of the falsifiability of string theory due to the proposals that the anthropic principle be used to explain the properties of our universe in the context of the string landscape. The book was criticized by physicist Joseph Polchinski and other string theorists. In his earlier book Three Roads to Quantum Gravity (2002), Smolin stated that loop quantum gravity and string theory were essentially the same concept seen from different perspectives. In that book, he also favored the holographic principle. The Trouble with Physics, on the other hand, was strongly critical of the prominence of string theory in contemporary theoretical physics, which he believes has suppressed research in other promising approaches. Smolin suggests that string theory suffers from serious deficiencies and has an unhealthy near-monopoly in the particle theory community. He called for a diversity of approaches to quantum gravity, and argued that more attention should be paid to loop quantum gravity, an approach Smolin has devised. Finally, The Trouble with Physics is also broadly concerned with the role of controversy and the value of diverse approaches in the ethics and process of science. In the same year that The Trouble with Physics was published, Peter Woit published Not Even Wrong, a book for nonspecialists whose conclusion was similar to Smolin's, namely that string theory was a fundamentally flawed research program. Views Smolin's view on the nature of time: More and more, I have the feeling that quantum theory and general relativity are both deeply wrong about the nature of time. It is not enough to combine them. 
There is a deeper problem, perhaps going back to the beginning of physics. Smolin does not believe that quantum mechanics is a "final theory": I am convinced that quantum mechanics is not a final theory. I believe this because I have never encountered an interpretation of the present formulation of quantum mechanics that makes sense to me. I have studied most of them in depth and thought hard about them, and in the end I still can't make real sense of quantum theory as it stands. In a 2009 article, Smolin articulated the following philosophical views (the sentences in italics are quotations): There is only one universe. There are no others, nor is there anything isomorphic to it. Smolin denies the existence of a "timeless" multiverse. Neither other universes nor copies of our universe—within or outside—exist. No copies can exist within the universe, because no subsystem can model precisely the larger system it is a part of. No copies can exist outside the universe, because the universe is by definition all there is. This principle also rules out the notion of a mathematical object isomorphic in every respect to the history of the entire universe, a notion more metaphysical than scientific. All that is real is real in a moment, which is a succession of moments. Anything that is true is true of the present moment. Not only is time real, but everything that is real is situated in time. Nothing exists timelessly. Everything that is real in a moment is a process of change leading to the next or future moments. Anything that is true is then a feature of a process in this process causing or implying future moments. This principle incorporates the notion that time is an aspect of causal relations. A reason for asserting it, is that anything that existed for just one moment, without causing or implying some aspect of the world at a future moment, would be gone in the next moment. Things that persist must be thought of as processes leading to newly changed processes. An atom at one moment is a process leading to a different or a changed atom at the next moment. Mathematics is derived from experience as a generalization of observed regularities, when time and particularity are removed. Under this heading, Smolin distances himself from mathematical platonism, and gives his reaction to Eugene Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences". Smolin views rejecting the idea of a creator as essential to cosmology on similar grounds to his objections against the multiverse. He does not definitively exclude or reject religion or mysticism but rather believes that science should only deal with that which is observable. He also opposes the anthropic principle, which he claims "cannot help us to do science." He also advocates "principles for an open future" which he claims underlie the work of both healthy scientific communities and democratic societies: "(1) When rational argument from public evidence suffices to decide a question, it must be considered to be so decided. (2) When rational argument from public evidence does not suffice to decide a question, the community must encourage a diverse range of viewpoints and hypotheses consistent with a good-faith attempt to develop convincing public evidence." (Time Reborn p. 265.) Lee Smolin has been a recurring guest on Through the Wormhole. Awards and honors Smolin was named as #21 on Foreign Policy Magazine's list of Top 100 Public Intellectuals. He is also one of many physicists dubbed the "New Einstein" by the media. 
The Trouble with Physics was named by Newsweek magazine as number 17 on a list of 50 "Books for our Time", June 27, 2009. In 2007 he was awarded the Majorana Prize from the Electronic Journal of Theoretical Physics, and in 2009 the Klopsteg Memorial Award from the American Association of Physics Teachers (AAPT) for "extraordinary accomplishments in communicating the excitement of physics to the general public," He is a fellow of the Royal Society of Canada and the American Physical Society. In 2014 he was awarded the Buchalter Cosmology Prize for a work published in collaboration with Marina Cortês. Publications 1997. The Life of the Cosmos 2001. Three Roads to Quantum Gravity 2006. The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Houghton Mifflin. 2013. Time Reborn: From the Crisis in Physics to the Future of the Universe. 2014. The Singular Universe and the Reality of Time: A Proposal in Natural Philosophy by Lee Smolin and Roberto Mangabeira Unger, Cambridge University Press, 2019. Einstein’s Unfinished Revolution: The Search for What Lies Beyond the Quantum, Penguin Press. See also List of University of Waterloo people References External links A partial list of Smolin's published work A debate of the merits of string theory between Smolin and Brian Greene, from National Public Radio (2006) "The Unique Universe ": Smolin explains his skepticism re the multiverse (2009) Closer to the Truth: Series of interviews by Smolin on fundamental issues in physics : Smolin's presentation at the Royal Society of Arts (2013) 21st-century American physicists American cosmologists 1955 births Living people American Jews Hampshire College alumni Harvard University alumni University of Cincinnati alumni Institute for Advanced Study visiting scholars Academic staff of the University of Waterloo Loop quantum gravity researchers American relativity theorists Philosophers of cosmology Philosophers of time Fellows of the American Physical Society
Lee Smolin
[ "Astronomy" ]
3,145
[ "People associated with astronomy", "Philosophers of cosmology", "Philosophy of astronomy" ]
171,918
https://en.wikipedia.org/wiki/Rc%20%28Unix%20shell%29
rc (for "run commands") is the command line interpreter for Version 10 Unix and Plan 9 from Bell Labs operating systems. It resembles the Bourne shell, but its syntax is somewhat simpler. It was created by Tom Duff, who is better known for an unusual C programming language construct ("Duff's device"). A port of the original rc to Unix is part of Plan 9 from User Space. A rewrite of rc for Unix-like operating systems by Byron Rakitzis is also available but includes some incompatible changes. Rc uses C-like control structures instead of the original Bourne shell's ALGOL-like structures, except that it uses an if not construct instead of else, and has a Bourne-like for loop to iterate over lists. In rc, all variables are lists of strings, which eliminates the need for constructs like "$@". Variables are not re-split when expanded. The language is described in Duff's paper. Influences es es (for "extensible shell") is an open-source command line interpreter developed by Rakitzis and Paul Haahr that uses a scripting language syntax influenced by the rc shell. It was originally based on code from Byron Rakitzis's clone of rc for Unix. Extensible shell is intended to provide a fully functional programming language as a Unix shell. It does so by introducing "program fragments" in braces as a new datatype, lexical scoping via let, and some more minor improvements. The bulk of es development occurred in the early 1990s, after the shell was introduced at the Winter 1993 USENIX conference in San Diego. Official releases appear to have ceased after 0.9-beta-1 in 1997, and es lacks features present in more popular shells, such as zsh and bash. A public domain fork of es remains active. Examples The Bourne shell script:

if [ "$1" = "hello" ]; then
    echo hello, world
else
    case "$2" in
    1) echo $# 'hey' "jude's"$3;;
    2) echo `date` :$*: :"$@":;;
    *) echo why not 1>&2
    esac
    for i in a b c; do
        echo $i
    done
fi

is expressed in rc as:

if(~ $1 hello)
    echo hello, world
if not {
    switch($2) {
    case 1
        echo $#* 'hey' 'jude''s'^$3
    case 2
        echo `{date} :$"*: :$*:
    case *
        echo why not >[1=2]
    }
    for(i in a b c)
        echo $i
}

Rc also supports more dynamic piping:

a |[2] b    # pipe only standard error of a to b; equivalent to '3>&2 2>&1 >&3 | b' in Bourne shell
a <>b       # opens file b as a's standard input and standard output
a <{b} <{c} # becomes a {standard output of b} {standard output of c},
            # better known as "process substitution"

References External links - Plan 9 manual page Plan 9 from User Space - Includes rc and other Plan 9 tools for Linux, Mac OS X and other Unix-like systems Byron Rakitzis' rewrite for Unix (article ) es Official website Free system software Inferno (operating system) Plan 9 from Bell Labs Procedural programming languages Programming languages created in 1989 Scripting languages Text-oriented programming languages Unix shells
Rc (Unix shell)
[ "Technology" ]
731
[ "Plan 9 from Bell Labs", "Computing platforms", "Inferno (operating system)" ]
171,950
https://en.wikipedia.org/wiki/Root%20of%20unity
In mathematics, a root of unity, occasionally called a de Moivre number, is any complex number that yields 1 when raised to some positive integer power . Roots of unity are used in many branches of mathematics, and are especially important in number theory, the theory of group characters, and the discrete Fourier transform. Roots of unity can be defined in any field. If the characteristic of the field is zero, the roots are complex numbers that are also algebraic integers. For fields with a positive characteristic, the roots belong to a finite field, and, conversely, every nonzero element of a finite field is a root of unity. Any algebraically closed field contains exactly th roots of unity, except when is a multiple of the (positive) characteristic of the field. General definition An th root of unity, where is a positive integer, is a number satisfying the equation Unless otherwise specified, the roots of unity may be taken to be complex numbers (including the number 1, and the number −1 if is even, which are complex with a zero imaginary part), and in this case, the th roots of unity are However, the defining equation of roots of unity is meaningful over any field (and even over any ring) , and this allows considering roots of unity in . Whichever is the field , the roots of unity in are either complex numbers, if the characteristic of is 0, or, otherwise, belong to a finite field. Conversely, every nonzero element in a finite field is a root of unity in that field. See Root of unity modulo n and Finite field for further details. An th root of unity is said to be if it is not an th root of unity for some smaller , that is if If n is a prime number, then all th roots of unity, except 1, are primitive. In the above formula in terms of exponential and trigonometric functions, the primitive th roots of unity are those for which and are coprime integers. Subsequent sections of this article will comply with complex roots of unity. For the case of roots of unity in fields of nonzero characteristic, see . For the case of roots of unity in rings of modular integers, see Root of unity modulo n. Elementary properties Every th root of unity is a primitive th root of unity for some , which is the smallest positive integer such that . Any integer power of an th root of unity is also an th root of unity, as This is also true for negative exponents. In particular, the reciprocal of an th root of unity is its complex conjugate, and is also an th root of unity: If is an th root of unity and then . Indeed, by the definition of congruence modulo n, for some integer , and hence Therefore, given a power of , one has , where is the remainder of the Euclidean division of by . Let be a primitive th root of unity. Then the powers , , ..., , are th roots of unity and are all distinct. (If where , then , which would imply that would not be primitive.) This implies that , , ..., , are all of the th roots of unity, since an th-degree polynomial equation over a field (in this case the field of complex numbers) has at most solutions. From the preceding, it follows that, if is a primitive th root of unity, then if and only if If is not primitive then implies but the converse may be false, as shown by the following example. If , a non-primitive th root of unity is , and one has , although Let be a primitive th root of unity. A power of is a primitive th root of unity for where is the greatest common divisor of and . This results from the fact that is the smallest multiple of that is also a multiple of . 
In other words, is the least common multiple of and . Thus Thus, if and are coprime, is also a primitive th root of unity, and therefore there are distinct primitive th roots of unity (where is Euler's totient function). This implies that if is a prime number, all the roots except are primitive. In other words, if is the set of all th roots of unity and is the set of primitive ones, is a disjoint union of the : where the notation means that goes through all the positive divisors of , including and . Since the cardinality of is , and that of is , this demonstrates the classical formula Group properties Group of all roots of unity The product and the multiplicative inverse of two roots of unity are also roots of unity. In fact, if and , then , and , where is the least common multiple of and . Therefore, the roots of unity form an abelian group under multiplication. This group is the torsion subgroup of the circle group. Group of th roots of unity For an integer n, the product and the multiplicative inverse of two th roots of unity are also th roots of unity. Therefore, the th roots of unity form an abelian group under multiplication. Given a primitive th root of unity , the other th roots are powers of . This means that the group of the th roots of unity is a cyclic group. It is worth remarking that the term of cyclic group originated from the fact that this group is a subgroup of the circle group. Galois group of the primitive th roots of unity Let be the field extension of the rational numbers generated over by a primitive th root of unity . As every th root of unity is a power of , the field contains all th roots of unity, and is a Galois extension of If is an integer, is a primitive th root of unity if and only if and are coprime. In this case, the map induces an automorphism of , which maps every th root of unity to its th power. Every automorphism of is obtained in this way, and these automorphisms form the Galois group of over the field of the rationals. The rules of exponentiation imply that the composition of two such automorphisms is obtained by multiplying the exponents. It follows that the map defines a group isomorphism between the units of the ring of integers modulo and the Galois group of This shows that this Galois group is abelian, and implies thus that the primitive roots of unity may be expressed in terms of radicals. Galois group of the real part of the primitive roots of unity The real part of the primitive roots of unity are related to one another as roots of the minimal polynomial of The roots of the minimal polynomial are just twice the real part; these roots form a cyclic Galois group. Trigonometric expression De Moivre's formula, which is valid for all real and integers , is Setting gives a primitive th root of unity – one gets but for . In other words, is a primitive th root of unity. This formula shows that in the complex plane the th roots of unity are at the vertices of a regular -sided polygon inscribed in the unit circle, with one vertex at 1 (see the plots for and on the right). This geometric fact accounts for the term "cyclotomic" in such phrases as cyclotomic field and cyclotomic polynomial; it is from the Greek roots "cyclo" (circle) plus "tomos" (cut, divide). Euler's formula which is valid for all real , can be used to put the formula for the th roots of unity into the form It follows from the discussion in the previous section that this is a primitive th-root if and only if the fraction is in lowest terms; that is, that and are coprime. 
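In standard notation, the complex nth roots of unity and the primitivity criterion discussed in this section can be summarised as follows; this is a conventional restatement rather than an additional claim.

\[
  z_k \;=\; e^{2\pi i k/n} \;=\; \cos\frac{2\pi k}{n} + i\,\sin\frac{2\pi k}{n},
  \qquad k = 0, 1, \dots, n-1,
\]
\[
  z_k \ \text{is primitive} \iff \gcd(k, n) = 1,
  \qquad
  \#\{\text{primitive } n\text{th roots of unity}\} = \varphi(n),
  \qquad
  n \;=\; \sum_{d \mid n} \varphi(d).
\]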
An irrational number that can be expressed as the real part of the root of unity; that is, as , is called a trigonometric number. Algebraic expression The th roots of unity are, by definition, the roots of the polynomial , and are thus algebraic numbers. As this polynomial is not irreducible (except for ), the primitive th roots of unity are roots of an irreducible polynomial (over the integers) of lower degree, called the th cyclotomic polynomial, and often denoted . The degree of is given by Euler's totient function, which counts (among other things) the number of primitive th roots of unity. The roots of are exactly the primitive th roots of unity. Galois theory can be used to show that the cyclotomic polynomials may be conveniently solved in terms of radicals. (The trivial form is not convenient, because it contains non-primitive roots, such as 1, which are not roots of the cyclotomic polynomial, and because it does not give the real and imaginary parts separately.) This means that, for each positive integer , there exists an expression built from integers by root extractions, additions, subtractions, multiplications, and divisions (and nothing else), such that the primitive th roots of unity are exactly the set of values that can be obtained by choosing values for the root extractions ( possible values for a th root). (For more details see , below.) Gauss proved that a primitive th root of unity can be expressed using only square roots, addition, subtraction, multiplication and division if and only if it is possible to construct with compass and straightedge the regular -gon. This is the case if and only if is either a power of two or the product of a power of two and Fermat primes that are all different. If is a primitive th root of unity, the same is true for , and is twice the real part of . In other words, is a reciprocal polynomial, the polynomial that has as a root may be deduced from by the standard manipulation on reciprocal polynomials, and the primitive th roots of unity may be deduced from the roots of by solving the quadratic equation That is, the real part of the primitive root is and its imaginary part is The polynomial is an irreducible polynomial whose roots are all real. Its degree is a power of two, if and only if is a product of a power of two by a product (possibly empty) of distinct Fermat primes, and the regular -gon is constructible with compass and straightedge. Otherwise, it is solvable in radicals, but one are in the casus irreducibilis, that is, every expression of the roots in terms of radicals involves nonreal radicals. Explicit expressions in low degrees For , the cyclotomic polynomial is Therefore, the only primitive first root of unity is 1, which is a non-primitive th root of unity for every n > 1. As , the only primitive second (square) root of unity is −1, which is also a non-primitive th root of unity for every even . With the preceding case, this completes the list of real roots of unity. As , the primitive third (cube) roots of unity, which are the roots of this quadratic polynomial, are As , the two primitive fourth roots of unity are and . As , the four primitive fifth roots of unity are the roots of this quartic polynomial, which may be explicitly solved in terms of radicals, giving the roots where may take the two values 1 and −1 (the same value in the two occurrences). 
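For the low-degree cases just described, the explicit values in radicals are as follows; the sign symbols ε and δ are shorthand introduced here for the independent sign choices, not notation from the article.

\[
  \zeta_3 = \frac{-1 \pm i\sqrt{3}}{2},
  \qquad
  \zeta_4 = \pm i,
  \qquad
  \zeta_5 = \frac{\varepsilon\sqrt{5} - 1}{4} \;+\; i\,\delta\,\frac{\sqrt{10 + 2\varepsilon\sqrt{5}}}{4},
  \quad \varepsilon, \delta \in \{+1, -1\}.
\]

Here the two values of ζ3 are the primitive cube roots, ±i are the primitive fourth roots, and the four choices of (ε, δ) give the four primitive fifth roots.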
As , there are two primitive sixth roots of unity, which are the negatives (and also the square roots) of the two primitive cube roots: As 7 is not a Fermat prime, the seventh roots of unity are the first that require cube roots. There are 6 primitive seventh roots of unity, which are pairwise complex conjugate. The sum of a root and its conjugate is twice its real part. These three sums are the three real roots of the cubic polynomial and the primitive seventh roots of unity are where runs over the roots of the above polynomial. As for every cubic polynomial, these roots may be expressed in terms of square and cube roots. However, as these three roots are all real, this is casus irreducibilis, and any such expression involves non-real cube roots. As , the four primitive eighth roots of unity are the square roots of the primitive fourth roots, . They are thus See Heptadecagon for the real part of a 17th root of unity. Periodicity If is a primitive th root of unity, then the sequence of powers is -periodic (because for all values of ), and the sequences of powers for are all -periodic (because ). Furthermore, the set } of these sequences is a basis of the linear space of all -periodic sequences. This means that any -periodic sequence of complex numbers can be expressed as a linear combination of powers of a primitive th root of unity: for some complex numbers and every integer . This is a form of Fourier analysis. If is a (discrete) time variable, then is a frequency and is a complex amplitude. Choosing for the primitive th root of unity allows to be expressed as a linear combination of and : This is a discrete Fourier transform. Summation Let be the sum of all the th roots of unity, primitive or not. Then This is an immediate consequence of Vieta's formulas. In fact, the th roots of unity being the roots of the polynomial , their sum is the coefficient of degree , which is either 1 or 0 according whether or . Alternatively, for there is nothing to prove, and for there exists a root – since the set of all the th roots of unity is a group, , so the sum satisfies , whence . Let be the sum of all the primitive th roots of unity. Then where is the Möbius function. In the section Elementary properties, it was shown that if is the set of all th roots of unity and is the set of primitive ones, is a disjoint union of the : This implies Applying the Möbius inversion formula gives In this formula, if , then , and for : . Therefore, . This is the special case of Ramanujan's sum , defined as the sum of the th powers of the primitive th roots of unity: Orthogonality From the summation formula follows an orthogonality relationship: for and where is the Kronecker delta and is any primitive th root of unity. The matrix whose th entry is defines a discrete Fourier transform. Computing the inverse transformation using Gaussian elimination requires operations. However, it follows from the orthogonality that is unitary. That is, and thus the inverse of is simply the complex conjugate. (This fact was first noted by Gauss when solving the problem of trigonometric interpolation.) The straightforward application of or its inverse to a given vector requires operations. The fast Fourier transform algorithms reduces the number of operations further to . Cyclotomic polynomials The zeros of the polynomial are precisely the th roots of unity, each with multiplicity 1. The th cyclotomic polynomial is defined by the fact that its zeros are precisely the primitive th roots of unity, each with multiplicity 1. 
where are the primitive th roots of unity, and is Euler's totient function. The polynomial has integer coefficients and is an irreducible polynomial over the rational numbers (that is, it cannot be written as the product of two positive-degree polynomials with rational coefficients). The case of prime , which is easier than the general assertion, follows by applying Eisenstein's criterion to the polynomial and expanding via the binomial theorem. Every th root of unity is a primitive th root of unity for exactly one positive divisor of . This implies that This formula represents the factorization of the polynomial into irreducible factors: Applying Möbius inversion to the formula gives where is the Möbius function. So the first few cyclotomic polynomials are If is a prime number, then all the th roots of unity except 1 are primitive th roots. Therefore, Substituting any positive integer ≥ 2 for , this sum becomes a base repunit. Thus a necessary (but not sufficient) condition for a repunit to be prime is that its length be prime. Note that, contrary to first appearances, not all coefficients of all cyclotomic polynomials are 0, 1, or −1. The first exception is . It is not a surprise it takes this long to get an example, because the behavior of the coefficients depends not so much on as on how many odd prime factors appear in . More precisely, it can be shown that if has 1 or 2 odd prime factors (for example, ) then the th cyclotomic polynomial only has coefficients 0, 1 or −1. Thus the first conceivable for which there could be a coefficient besides 0, 1, or −1 is a product of the three smallest odd primes, and that is . This by itself doesn't prove the 105th polynomial has another coefficient, but does show it is the first one which even has a chance of working (and then a computation of the coefficients shows it does). A theorem of Schur says that there are cyclotomic polynomials with coefficients arbitrarily large in absolute value. In particular, if where are odd primes, and t is odd, then occurs as a coefficient in the th cyclotomic polynomial. Many restrictions are known about the values that cyclotomic polynomials can assume at integer values. For example, if is prime, then if and only if . Cyclotomic polynomials are solvable in radicals, as roots of unity are themselves radicals. Moreover, there exist more informative radical expressions for th roots of unity with the additional property that every value of the expression obtained by choosing values of the radicals (for example, signs of square roots) is a primitive th root of unity. This was already shown by Gauss in 1797. Efficient algorithms exist for calculating such expressions. Cyclic groups The th roots of unity form under multiplication a cyclic group of order , and in fact these groups comprise all of the finite subgroups of the multiplicative group of the complex number field. A generator for this cyclic group is a primitive th root of unity. The th roots of unity form an irreducible representation of any cyclic group of order . The orthogonality relationship also follows from group-theoretic principles as described in Character group. The roots of unity appear as entries of the eigenvectors of any circulant matrix; that is, matrices that are invariant under cyclic shifts, a fact that also follows from group representation theory as a variant of Bloch's theorem. 
In particular, if a circulant Hermitian matrix is considered (for example, a discretized one-dimensional Laplacian with periodic boundaries), the orthogonality property immediately follows from the usual orthogonality of eigenvectors of Hermitian matrices. Cyclotomic fields By adjoining a primitive th root of unity to one obtains the th cyclotomic field This field contains all th roots of unity and is the splitting field of the th cyclotomic polynomial over The field extension has degree φ(n) and its Galois group is naturally isomorphic to the multiplicative group of units of the ring As the Galois group of is abelian, this is an abelian extension. Every subfield of a cyclotomic field is an abelian extension of the rationals. It follows that every nth root of unity may be expressed in term of k-roots, with various k not exceeding φ(n). In these cases Galois theory can be written out explicitly in terms of Gaussian periods: this theory from the Disquisitiones Arithmeticae of Gauss was published many years before Galois. Conversely, every abelian extension of the rationals is such a subfield of a cyclotomic field – this is the content of a theorem of Kronecker, usually called the Kronecker–Weber theorem on the grounds that Weber completed the proof. Relation to quadratic integers For , both roots of unity and are integers. For three values of , the roots of unity are quadratic integers: For they are Eisenstein integers (). For they are Gaussian integers (): see Imaginary unit. For four other values of , the primitive roots of unity are not quadratic integers, but the sum of any root of unity with its complex conjugate (also an th root of unity) is a quadratic integer. For , none of the non-real roots of unity (which satisfy a quartic equation) is a quadratic integer, but the sum of each root with its complex conjugate (also a 5th root of unity) is an element of the ring (). For two pairs of non-real 5th roots of unity these sums are inverse golden ratio and minus golden ratio. For , for any root of unity equals to either 0, ±2, or ± (). For , for any root of unity, equals to either 0, ±1, ±2 or ± (). See also Argand system Circle group, the unit complex numbers Cyclotomic field Group scheme of roots of unity Dirichlet character Ramanujan's sum Witt vector Teichmüller character Notes References Algebraic numbers Cyclotomic fields Polynomials 1 (number) Complex numbers
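Several identities from this article (the vanishing sum of all nth roots of unity, the unitarity of the discrete Fourier transform matrix built from them, the factorisation of x^n − 1 into cyclotomic polynomials, and the −2 coefficient appearing in the 105th cyclotomic polynomial) can be spot-checked numerically and symbolically. The short sketch below uses NumPy and SymPy; the choice of these libraries and of n = 12 is mine, not the article's.

import numpy as np
from sympy import symbols, cyclotomic_poly, Poly, expand

n = 12
k = np.arange(n)
roots = np.exp(2j * np.pi * k / n)             # the nth roots of unity e^(2*pi*i*k/n)

# The sum of all nth roots of unity vanishes for n > 1.
print(abs(roots.sum()) < 1e-12)                # True

# Orthogonality: the normalised matrix U[j, k] = z^(j*k)/sqrt(n) is unitary;
# this is exactly the discrete Fourier transform matrix.
U = roots[np.outer(k, k) % n] / np.sqrt(n)
print(np.allclose(U @ U.conj().T, np.eye(n)))  # True

# x^n - 1 factors into the cyclotomic polynomials Phi_d over the divisors d of n.
x = symbols('x')
product = 1
for d in (1, 2, 3, 4, 6, 12):                  # the divisors of 12
    product *= cyclotomic_poly(d, x)
print(expand(product - (x**12 - 1)) == 0)      # True

# 105 = 3*5*7 is the smallest n whose cyclotomic polynomial has a coefficient
# other than 0, 1 or -1; its smallest coefficient is -2.
print(min(Poly(cyclotomic_poly(105, x), x).all_coeffs()))   # -2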
Root of unity
[ "Mathematics" ]
4,367
[ "Polynomials", "Mathematical objects", "Algebraic numbers", "Complex numbers", "Numbers", "Algebra" ]
171,952
https://en.wikipedia.org/wiki/Gabriela%20Mistral
Lucila Godoy Alcayaga (; 7 April 1889 – 10 January 1957), known by her pseudonym Gabriela Mistral (), was a Chilean poet-diplomat, educator, and Catholic. She was a member of the Secular Franciscan Order or Third Franciscan order. She was the first Latin American author to receive a Nobel Prize in Literature in 1945, "for her lyric poetry which, inspired by powerful emotions, has made her name a symbol of the idealistic aspirations of the entire Latin American world". Some central themes in her poems are nature, betrayal, love, a mother's love, sorrow and recovery, travel, and Latin American identity as formed from a mixture of Native American and European influences. Her image is featured on the 5,000 Chilean peso banknote. Early life Mistral was born in Vicuña, Chile, but grew up in Montegrande, an Andean village where she attended a primary school taught by her older sister, Emelina Molina. Despite the financial problems caused by Emelina later on, Mistral held great respect for her. Her father, Juan Gerónimo Godoy Villanueva, was also a schoolteacher but left the family when she was three years old and died alone and estranged in 1911. Poverty was a constant presence in her early life. At the age of fifteen, she supported herself and her mother, Petronila Alcayaga, a seamstress, by working as a teacher's aide in Compañía Baja, a seaside town near La Serena, Chile. In 1904, Mistral published some early poems, including Ensoñaciones ("Dreams"), Carta Íntima ("Intimate Letter"), and Junto al Mar ("By the Sea"), in the local newspapers El Coquimbo: Diario Radical and La Voz de Elqui, using different pseudonyms and variations of her name. In 1906, Mistral met Romelio Ureta, a railway worker and her first love, who tragically took his own life in 1909. Shortly after, her second love married someone else. These heartbreaks were reflected in her early poetry and gained recognition with her first published literary work in 1914, Sonetos de la muerte ("Sonnets on Death"). To protect her job as a teacher, she used a pen name, fearing the consequences of revealing her true identity. Mistral won first prize in the national literary contest Juegos Florales held in Santiago, the capital of Chile. Exploring themes of death and life more broadly than previous Latin American poets, she expanded her poetic horizons. While Mistral had passionate friendships with both men and women, which influenced her writing, she kept her emotional life private. Since June 1908, Mistral had been using the pen name Gabriela Mistral for most of her writing. After winning the Juegos Florales, she rarely used her given name, Lucila Godoy, for her publications. She constructed her pseudonym from the names of two of her favorite poets, Gabriele D'Annunzio and Frédéric Mistral, or, according to another account, as a combination of the Archangel Gabriel and the mistral wind of Provence. In 1922, Mistral published her debut book, Desolación ("Desolation"), with assistance from Federico de Onis, the Director of the Hispanic Institute of New York. The collection of poems explored themes such as motherhood, religion, nature, morality, and love for children. Her personal sorrows were reflected in the poems, solidifying her international reputation. Departing from the modernist trends in Latin America, Mistral's work was hailed by critics as straightforward yet simplistic. Two years later, in 1924, she released her second book, Ternura ("Tenderness"). 
Career as an educator During her adolescence, the scarcity of trained teachers, especially in rural areas, allowed anyone willing to work to find employment as a teacher. However, the young woman faced challenges in accessing good schools due to her lack of political and social connections. In 1907, she was rejected from the Normal School without explanation, which she later attributed to the school's chaplain, Father Ignacio Munizaga, who was aware of her publications advocating for educational reform and increased access to schools for all social classes. Although her formal education ended in 1900, she secured teaching positions with the help of her older sister, Emelina, who had likewise begun as a teacher's aide and was responsible for much of the poet's early education. Through her publications in local and national newspapers and magazines, as well as her willingness to relocate, she advanced from one teaching position to another. Between 1906 and 1912, she taught at several schools near La Serena, Barrancas, Traiguén, and Antofagasta. In 1912, she began working at a liceo (high school) in Los Andes, where she remained for six years, frequently visiting Santiago. In 1918, Pedro Aguirre Cerda, the Minister of Education and future President of Chile, appointed her as the director of the Sara Braun Lyceum in Punta Arenas. She subsequently moved to Temuco in 1920 and then to Santiago in 1921, defeating a candidate associated with the Radical Party to become the director of Santiago's Liceo #6, the country's newest and most prestigious girls' school. The controversy surrounding Gabriela Mistral's nomination for the coveted position in Santiago influenced her decision to accept an invitation to work in Mexico in 1922, under the guidance of Mexico's Minister of Education, José Vasconcelos. There, she contributed to the nation's plan to reform libraries and schools and establish a national education system. During this time, she gained international recognition through her journalism, public speaking, and the publication of her work Desolación in New York. She later published Lecturas para Mujeres (Readings for Women), a collection of prose and verse celebrating girls' education, featuring works by Latin American and European writers. After spending nearly two years in Mexico, Mistral traveled to Washington D.C., where she addressed the Pan American Union, and then continued her journey to New York and Europe. In Madrid, she published Ternura (Tenderness), a collection of lullabies and rondas intended for children, parents, and fellow poets. She returned to Chile in early 1925, formally retiring from the country's education system and receiving a pension. Just in time, as the legislature had recently granted the demands of the teachers' union, led by Mistral's rival Amanda Labarca Hubertson, stipulating that only university-trained teachers could be appointed in schools. Despite her limited formal education, Mistral received the academic title of Spanish Professor from the University of Chile in 1923, which highlighted her remarkable self-education and her intellectual abilities, nurtured by the vibrant culture of newspapers, magazines, and books in provincial Chile. Pablo Neruda, Chile's second Nobel Prize laureate in literature, met Mistral when she relocated to his hometown, Temuco. She introduced him to her poetry and recommended readings, leading to a lifelong friendship between the two poets. 
International work and recognition Mistral's international stature made it unlikely for her to remain in Chile. In mid-1925, she was invited to represent Latin America in the newly formed Institute for Intellectual Cooperation of the League of Nations. In early 1926, she relocated to France, effectively becoming an exile for the rest of her life. Initially, she made a living through journalism and giving lectures in the United States and Latin America, including Puerto Rico, the Caribbean, Brazil, Uruguay, and Argentina. Between 1926 and 1932, Mistral primarily resided in France and Italy. During this period, she worked for the League for Intellectual Cooperation of the League of Nations, attending conferences throughout Europe and the Americas. She held a visiting professorship at Barnard College of Columbia University in 1930–1931, briefly worked at Middlebury College and Vassar College in 1931, and received a warm reception at the University of Puerto Rico at Rio Piedras, where she gave conferences and wrote in 1931, 1932, and 1933. Like many Latin American artists and intellectuals, Mistral served as a consul from 1932 until her death, working in various locations including Naples, Madrid, Lisbon, Nice, Petrópolis, Los Angeles, Santa Barbara, Veracruz, Rapallo, and New York City. While serving as consul in Madrid, she had occasional professional interactions with fellow Chilean consul and Nobel Prize recipient Pablo Neruda. Mistral was among the early writers to recognize the importance and originality of Neruda's work, which she had known since he was a teenager and she was a school director in his hometown of Temuco. Mistral published hundreds of articles in magazines and newspapers throughout the Spanish-speaking world. She had notable confidants such as Eduardo Santos, President of Colombia, all the elected Presidents of Chile from 1922 to her death in 1957, Eduardo Frei Montalva (who would be elected president in 1964), and Eleanor Roosevelt. Her second major volume of poetry, Tala, was published in 1938 in Buenos Aires with the assistance of her longtime friend and correspondent Victoria Ocampo. The proceeds from the sale were dedicated to children orphaned by the Spanish Civil War. This volume contains poems that celebrate the customs and folklore of Latin America and Mediterranean Europe, reflecting Mistral's identification as "una mestiza de vasco," acknowledging her European Basque-Indigenous Amerindian background. On 14 August 1943, Mistral's 17-year-old nephew, Juan Miguel Godoy, whom she considered as a son and called Yin Yin, tragically took his own life. The grief from this loss, along with her responses to the tensions of World War II and the Cold War in Europe and the Americas, are reflected in her last volume of poetry published during her lifetime, Lagar, which appeared in a truncated form in 1954. Her partner Doris Dana edited and published a final volume of poetry, Poema de Chile, posthumously in 1967. Poema de Chile depicts the poet's return to Chile after death, accompanied by an Indian boy from the Atacama desert and an Andean deer, the huemul. This collection of poetry foreshadows the interest in objective description and re-vision of the epic tradition that would emerge among poets of the Americas, all of whom Mistral carefully read.On 15 November 1945, Mistral became the first Latin American and the fifth woman to receive the Nobel Prize in Literature. King Gustav of Sweden presented her with the award in person on 10 December 1945. 
In 1947, she received an honorary doctorate from Mills College in Oakland, California. In 1951, she was awarded the National Literature Prize in Chile. Poor health limited Mistral's travel in her final years. She resided in the town of Roslyn, New York, and then transferred to Hempstead, New York, where she died from pancreatic cancer on 10 January 1957 at the age of 67. Her remains were returned to Chile nine days later, and the Chilean government declared three days of national mourning, with hundreds of thousands of mourners paying their respects. Some of Mistral's best-known poems include Piececitos de Niño, Balada, Todas Íbamos a ser Reinas, La Oración de la Maestra, El Ángel Guardián, Decálogo del Artista, and La Flor del Aire. She also wrote and published approximately 800 essays in magazines and newspapers. Mistral was renowned as a correspondent and highly regarded orator, both in person and through radio broadcasts. Mistral may be most widely quoted in English for Su Nombre es Hoy ("His Name is Today"): Characteristics of her work Mistral's work incorporates gray tones and conveys recurring feelings of sadness and bitterness, reflecting her difficult childhood marked by deprivation and a lack of affection at home. Despite this, her writings also reveal her deep affection for children, which she developed during her early years as a teacher in a rural school. Catholicism, a significant influence in Mistral's life, is also evident in her literature; however, she maintains a neutral stance toward religion. Her writing skillfully combines religious themes with emotions of love and piety, solidifying her position as one of the most esteemed representatives of Latin American literature in the 20th century. Death, posthumous tributes and legacy During the 1970s and 1980s, the military dictatorship of General Augusto Pinochet appropriated Gabriela Mistral's image, portraying her as a symbol of "submission to authority" and "social order." Author Licia Fiol-Matta challenged the traditional views of Mistral as a saint-like celibate and suffering heterosexual woman, suggesting that she was a lesbian instead. In 2007, after the death of Mistral's alleged last romantic partner, Doris Dana, her archive was discovered, containing letters exchanged between Mistral and various occasional female lovers. The publication of these letters in the book Niña errante (2007), edited by Pedro Pablo Zegers, supported the notion of a long-lasting romantic relationship between Mistral and Dana during Mistral's final years. The letters were later translated into English by Velma García and published by the University of New Mexico Press in 2018. Despite these claims, Doris Dana, who was 31 years younger than Mistral, explicitly denied in her final interview that their relationship was ever romantic or erotic, describing it as that of a stepmother and stepdaughter. Dana also denied being a lesbian and expressed skepticism regarding Mistral's sexual orientation. Mistral suffered from diabetes and heart problems, and she ultimately died of pancreatic cancer at the age of 67 on 10 January 1957, in Hempstead Hospital in New York City, with Doris Dana by her side. On 7 April 2015, Google commemorated Gabriela Mistral's 126th birthday, honoring the Chilean poet and educator with a special doodle. Themes Gabriela Mistral has greatly influenced Latin American poetry. 
In a powerful speech by Swedish writer Hjalmar Gullberg, a member of the Swedish Academy, he provided insights into the perspective and emotions of Gabriela Mistral. Gullberg discussed how the language of troubadours, once unintelligible to Frédéric Mistral's own mother, became the language of poetry. This language continued to thrive with the birth of Gabriela Mistral, whose voice shook the world and opened the eyes and ears of those willing to listen. Gullberg noted that after experiencing the suicide of her first love, Gabriela Mistral emerged as a poet whose words spread across South America and beyond. While little is known about her first love, his death influenced Mistral's poems, which often explored themes of death, despair, and possibly a resentment towards God. Her collection of poems titled Desolación, inspired by the loss of her first love and later the death of a beloved nephew, impacted many others. The fifteenth poem in Desolación expressed sorrow for the loss of a child and resonated with those who experienced the pain of losing loved ones. However, Gabriela Mistral's books do not solely focus on themes of death, desolation, and loss. She also explored themes of love and motherhood, not only in relation to her beloved railroad employee and nephew but also in her interactions with the children she taught. Her collection of songs and rounds, titled Ternura, reflects her love for the children in her school. Published in Madrid in 1924, these heartfelt words were embraced by four thousand Mexican children who sang them as a tribute to Mistral. Her dedication to her children earned her the title of the Poet of Motherhood. Having lived through two world wars and other violent conflicts, Mistral's experiences paved the way for her third major collection, Tala (meaning "ravage" according to Gullberg). Tala encompasses a blend of sacred hymns, simple songs for children, and poems that touch on subjects like water, corn, salt, and wine. Gullberg pays homage to Mistral, acknowledging her as the great singer of sorrow and motherhood in Latin America. Mistral's collections of poems and songs beautifully express her care for children and the sorrows she endured as a teacher and poet in Latin America. Every word in her work evokes themes of sorrow and motherhood. Awards and honors 1914: Juegos Florales, Sonetos de la Muerte 1945: Nobel Prize in Literature 1951: Chilean National Prize for Literature The Venezuelan writer and diplomat who worked under the name Lucila Palacios took her nom de plume in honour of Mistral's original name. 
Works 1914: Sonetos de la muerte ("Sonnets of Death") 1922: Desolación ("Despair"), including "Decalogo del artista", New York: Instituto de las Españas 1923: Lecturas para Mujeres ("Readings for Women") 1924: Ternura: canciones de niños, Madrid: Saturnino Calleja 1934: Nubes Blancas y Breve Descripción de Chile 1938: Tala ("Harvesting"), Buenos Aires: Sur 1941: Antología: Selección de Gabriela Mistral, Santiago, Chile: Zig Zag 1952: Los sonetos de la muerte y otros poemas elegíacos, Santiago, Chile: Philobiblion 1954: Lagar, Santiago, Chile 1957: Recados: Contando a Chile, Santiago, Chile: Editorial del Pacífico; Croquis mexicanos; Gabriela Mistral en México, México City: Costa-Amic 1958: Poesías completas, Madrid: Aguilar 1967: Poema de Chile ("Poem of Chile"), published posthumously 1992: Lagar II, published posthumously, Santiago, Chile: Biblioteca Nacional Works translated into other languages English Several selections of Mistral's poetry have been published in English translation, including those by Doris Dana, Langston Hughes, and Ursula K. Le Guin. Selected Poems of Gabriela Mistral, trans. Langston Hughes (Bloomington: Indiana University Press, 1957) Selected poems of Gabriela Mistral, trans. Doris Dana (Johns Hopkins Press, 1971), ISBN 978-0801811975 Selected Poems of Gabriela Mistral, trans. Ursula Le Guin (University of New Mexico Press, 2003), ISBN 978-0826328182 Madwomen: The Locas mujeres Poems of Gabriela Mistral, trans. Randall Couch (University of Chicago Press, 2008, paper 2009), ISBN 978-0-226-53191-5 Gabriela Mistral: This Far Place, trans. John Gallas, Contemplative Poetry 8 (Oxford: SLG Press, 2023), ISBN 978-0728303409 Two editions of her first book of poems, Desolación, have been translated into English and appear in bilingual volumes. Desolation: A Bilingual Edition of Desolación (1923), trans. Michael P. Predmore and Liliana Baltra (Pittsburgh: Latin American Literary Review Press, 2014), ISBN 9781891270246 Desolación (1922): Centennial Bilingual Edition, trans. Inés Bellina, Anne Freeland, and Alejandra Quintana Arocho (New York: Sundial House, Columbia University Press, 2023), ISBN 9798987926437 Nepali Some of Mistral's poems have been translated into Nepali by Suman Pokhrel and collected in an anthology titled Manpareka Kehi Kavita. See also Barnard College, repository for part of Mistral's personal library, given by Doris Dana in 1978. List of female Nobel laureates NGC 3324, together with IC 2599 known as the Gabriela Mistral Nebula References External links Gabriela Mistral's heritage Life and Poetry of Gabriela Mistral Gabriela Mistral Foundation Gabriela Mistral Poems List of Works Gabriela Mistral – University of Chile About her Basque origin Gabriela Mistral (1889–1957) – Memoria Chilena Gabriela Mistral reads eighteen poems from her collected volumes: Ternura, Lagar, and Tala. Recorded at Library of Congress, Hispanic Division on 12 December 1950. 
Gabriela Mistral Papers, 1911–1949 1889 births 1957 deaths People from Elqui Province Chilean people of Basque descent Chilean people of Diaguita descent Chilean women diplomats Chilean diplomats Chilean emigrants to the United States Chilean Nobel laureates Chilean schoolteachers Mestizo writers Deaths from pancreatic cancer in New York (state) Nobel laureates in Literature People from Roslyn Harbor, New York People from Hempstead (village), New York Women Nobel laureates National Prize for Literature (Chile) winners Pseudonymous women writers Postmodern writers 20th-century Chilean women writers 20th-century Chilean poets Chilean women poets Columbia University faculty 20th-century pseudonymous writers Chilean academics Chilean Anti-Francoists
Gabriela Mistral
[ "Technology" ]
4,366
[ "Women Nobel laureates", "Women in science and technology" ]
171,964
https://en.wikipedia.org/wiki/Environmental%20ethics
In environmental philosophy, environmental ethics is an established field of practical philosophy "which reconstructs the essential types of argumentation that can be made for protecting natural entities and the sustainable use of natural resources." The main competing paradigms are anthropocentrism, physiocentrism (called ecocentrism as well), and theocentrism. Environmental ethics exerts influence on a large range of disciplines including environmental law, environmental sociology, ecotheology, ecological economics, ecology and environmental geography. There are many ethical decisions that human beings make with respect to the environment. These decisions raise numerous questions. For example: Should humans continue to clear-cut forests for the sake of human consumption? What species or entities ought to be considered for their own sake, independently of their contribution to biodiversity and other extrinsic goods? Why should humans continue to propagate their species, and life itself? Should humans continue to make gasoline-powered vehicles? What environmental obligations do humans need to keep for future generations? Is it right for humans to knowingly cause the extinction of a species for the convenience of humanity? How should humans best use and conserve the space environment to secure and expand life? What role can Planetary Boundaries play in reshaping the human-earth relationship? The academic field of environmental ethics grew up in response to the works of Rachel Carson and Murray Bookchin and events such as the first Earth Day in 1970, when environmentalists started urging philosophers to consider the philosophical aspects of environmental problems. Two papers published in Science had a crucial impact: Lynn White's "The Historical Roots of our Ecologic Crisis" (March 1967) and Garrett Hardin's "The Tragedy of the Commons" (December 1968). Also influential was Garrett Hardin's later essay called "Exploring New Ethics for Survival", as well as an essay by Aldo Leopold in his A Sand County Almanac, called "The Land Ethic", in which Leopold explicitly claimed that the roots of the ecological crisis were philosophical (1949). The first international academic journals in this field emerged from North America in the late 1970s and early 1980s – the US-based journal Environmental Ethics in 1979 and the Canadian-based journal The Trumpeter: Journal of Ecosophy in 1983. The first British-based journal of this kind, Environmental Values, was launched in 1992. Marshall's categories Some scholars have tried to categorise the various ways the natural environment is valued. Alan Marshall and Michael Smith are two examples of this, as cited by Peter Vardy in The Puzzle of Ethics. According to Marshall, three general ethical approaches have emerged over the last 40 years: Libertarian Extension, the Ecologic Extension, and Conservation Ethics. Libertarian extension Marshall's libertarian extension echoes a civil liberty approach (i.e. a commitment to extending equal rights to all members of a community). In environmentalism, the community is generally thought to consist of non-humans as well as humans. Andrew Brennan was an advocate of ecologic humanism (eco-humanism), the argument that all ontological entities, animate and inanimate, can be given ethical worth purely on the basis that they exist. The work of Arne Næss and his collaborator Sessions also falls under the libertarian extension, although they preferred the term "deep ecology". 
Deep ecology is the argument for the intrinsic value or inherent worth of the environment – the view that it is valuable in itself. Their argument falls under both the libertarian extension and the ecologic extension. Peter Singer's work can be categorized under Marshall's 'libertarian extension'. He reasoned that the "expanding circle of moral worth" should be redrawn to include the rights of non-human animals, and that not to do so would be speciesism. Singer found it difficult to accept the argument from intrinsic worth of abiotic or "non-sentient" (non-conscious) entities, and concluded in his first edition of "Practical Ethics" that they should not be included in the expanding circle of moral worth. This approach is, then, essentially bio-centric. However, in a later edition of Practical Ethics, after the work of Næss and Sessions, Singer admits that, although he remains unconvinced by deep ecology, the argument from the intrinsic value of non-sentient entities is plausible, though at best problematic. Singer advocated a humanist ethics. Ecologic extension Alan Marshall's category of ecologic extension places emphasis not on human rights but on the recognition of the fundamental interdependence of all biological (and some abiological) entities and their essential diversity. Whereas Libertarian Extension can be thought of as flowing from a political reflection of the natural world, ecologic extension is best thought of as a scientific reflection of the natural world. Ecological Extension is roughly the same classification as Smith's eco-holism, and it argues for the intrinsic value inherent in collective ecological entities like ecosystems or the global environment as a whole entity. Holmes Rolston, among others, has taken this approach. This category might include James Lovelock's Gaia hypothesis; the theory that the planet earth alters its geo-physiological structure over time in order to ensure the continuation of an equilibrium of evolving organic and inorganic matter. The planet is characterized as a unified, holistic entity with independent ethical value, compared to which the human race is of no particular significance in the long run. Conservation ethics Marshall's category of 'conservation ethics' is an extension of use-value into the non-human biological world. It focuses only on the worth of the environment in terms of its utility or usefulness to humans. It contrasts with the intrinsic value ideas of 'deep ecology,' hence is often referred to as 'shallow ecology,' and generally argues for the preservation of the environment on the basis that it has extrinsic value – instrumental to the welfare of human beings. Conservation is therefore a means to an end and purely concerned with mankind and inter-generational considerations. It could be argued that it is this ethic that formed the underlying arguments proposed by governments at the Kyoto summit in 1997 and the three agreements reached at the Rio Earth Summit in 1992. Humanist theories Peter Singer advocated the preservation of "world heritage sites", unspoilt parts of the world that acquire a "scarcity value" as they diminish over time. Their preservation is a bequest for future generations as they have been inherited from humans' ancestors and should be passed down to future generations so they can have the opportunity to decide whether to enjoy unspoilt countryside or an entirely urban landscape. A good example of a world heritage site would be the tropical rainforest, a very specialist ecosystem that has taken centuries to evolve. 
Clearing the rainforest for farmland often fails due to soil conditions, and, once disturbed, the forest can take thousands of years to regenerate. Applied theology The Christian world view sees the universe as created by God, and humankind accountable to God for the use of the resources entrusted to humankind. Ultimate values are seen in the light of being valuable to God. This applies both in breadth of scope – caring for people (Matthew 25) and environmental issues, e.g. environmental health (Deuteronomy 22.8; 23.12-14) – and dynamic motivation, the love of Christ controlling (2 Corinthians 5.14f) and dealing with the underlying spiritual disease of sin, which shows itself in selfishness and thoughtlessness. In many countries this relationship of accountability is symbolised at harvest thanksgiving. (B.T. Adeney: Global Ethics in New Dictionary of Christian Ethics and Pastoral Theology 1995 Leicester) Abrahamic religious scholars have used theology to motivate the public. John L. O'Sullivan, who coined the term manifest destiny, and other influential people like him used Abrahamic ideologies to encourage action. These religious scholars, columnists and politicians historically have used these ideas and continue to do so to justify the consumptive tendencies of a young America around the time of the Industrial Revolution. In order to solidify the understanding that God had intended for humankind to use earth's natural resources, environmental writers and religious scholars alike proclaimed that humans are separate from nature, on a higher order. Those who critique this point of view may ask the same question that John Muir asks ironically in a section of his book A Thousand Mile Walk to the Gulf: why are there so many dangers in the natural world, in the form of poisonous plants, animals and natural disasters? The answer is that those creatures are a result of Adam and Eve's sins in the garden of Eden. Since the turn of the 20th century, the application of theology in environmentalism diverged into two schools of thought. The first system of understanding holds religion as the basis of environmental stewardship. The second sees the use of theology as a means to rationalize the unmanaged consumption of natural resources. Lynn White and Calvin DeWitt represent each side of this dichotomy. John Muir personified nature as an inviting place away from the loudness of urban centers. "For Muir and the growing number of Americans who shared his views, Satan's home had become God's Own Temple." The use of Abrahamic religious allusions assisted Muir and the Sierra Club in creating support for some of the first public nature preserves. Authors like Terry Tempest Williams as well as John Muir build on the idea that "...God can be found wherever you are, especially outside. Family worship was not just relegated to Sunday in a chapel." References like these assist the general public in making a connection between paintings of the Hudson River School, Ansel Adams' photographs and other types of media, and their religion or spirituality. Placing intrinsic value upon nature through theology is a fundamental idea of deep ecology. Normative ethical theories Normative ethics is a field in moral philosophy that investigates how one ought to act: what is morally right and wrong, and how moral standards are determined. Superficially, this approach may seem intrinsically anthropocentric. 
However, theoretical frameworks from traditional normative ethical theories are abundant within contemporary environmental ethics. Consequentialism Consequentialist theories focus on the consequences of actions; this emphasizes not what is 'right', but rather what is of 'value' and 'good'. Act utilitarianism, for example, expands this formulation to emphasize that what makes an action right is whether it maximises well-being and reduces pain. Thus, actions that result in greater well-being are considered obligatory and permissible. It has been noted that this is an 'instrumentalist' position towards the environment, and as such not fully adequate to the delicate demands of ecological diversity. Rule-utilitarianism is the view that following certain rules without exception is the surest way to bring about the best consequences. This is an important update to act-utilitarianism because agents do not need to judge the likely consequences of each act; all they must do is determine whether or not a proposed course of action falls under a specific rule and, if it does, act as the rule specifies. Aldo Leopold's Land Ethic (1949) tries to avoid this type of instrumentalism by proposing a more holistic approach to the relationship between humans and their 'biotic community', so as to create a 'limit' based on the maxim that "a thing is right when it tends to preserve the integrity, stability, and beauty of the biotic community; it is wrong when it tends otherwise." Thus, the use of natural resources is permissible as long as it does not disrupt the stability of the ecosystem. Some philosophers have categorised Leopold's views as falling within a consequentialist framework; however, it is disputed whether this was intentional. Other consequentialist views, such as that of Peter Singer, tend to emphasise the inclusion of non-human sentient beings in ethical considerations. This view argues that all sentient creatures, which are by nature able to feel pleasure and pain, deserve equal moral consideration for their intrinsic value. Nevertheless, non-sentient beings, such as plants, rivers and ecosystems, are considered to be merely instrumental. Deontology Deontological theories state that an action should be based on duties or obligations to what is right, instead of what is good. In strong contrast to consequentialism, this view argues for principles of duty based not on a function of value, but on reasons that make no substantive reference to the consequences of an action. Something of intrinsic value, then, has to be protected not because its goodness would maximise a wider good, but because it is valuable in itself; not as a means towards something, but as an end in itself. Thus, if the natural environment is categorised as intrinsically valuable, any destruction or damage to it would be considered wrong as a whole rather than merely due to a calculated loss of net value. It can be said that this approach is more holistic in principle than one of a consequentialist nature, as it fits more adequately with the delicate balance of large ecosystems. Theories of rights, for example, are generally deontological. That is, within this framework an environmental policy that gives rights to non-human sentient beings would prioritise their conservation in their natural state, rather than in an artificial manner. Consider, for example, issues in climate engineering: ocean fertilisation aims to expand marine algae in order to remove higher levels of CO2. 
A complication of this approach is that it creates salient disruptions to local ecosystems. Furthermore, an environmental ethical theory based on the rights of marine animals in those ecosystems would provide protection against this type of intervention. Environmental deontologists such as Paul W. Taylor, for example, have argued for a Kantian approach to issues of this kind. Taylor argues that all living things are 'teleological centres of life' deserving of rights and respect. His view uses a concept of 'universalizability' to argue that one ought to act only on actions which could be rationally willed as a universal law. Val Plumwood has criticised this approach by noting that the universalisation framework is not necessarily based on 'respect' for the other, as it is based on duty and 'becoming' part of the environment. Virtue ethics Virtue ethics states that some character traits should be cultivated, and others avoided. This framework avoids problems of defining what is of intrinsic value by instead arguing that what is important is to act in accordance with the correct character trait. The Golden mean formulation, for example, states that to be 'generous' (virtue), one should be neither miserly (deficiency) nor extravagant (excess). Unlike deontology and consequentialism, theories of virtue focus their formulations on how the individual has to act to live a flourishing life. This presents a 'subjective flexibility' which seems like an adequate position to hold considering the fluctuating demands of sustainability. However, as a consequence, it can also be said that this is an inherently anthropocentric standpoint. Some ecofeminist theories, such as that of Val Plumwood, have been categorised as a form of virtue ethics. Plumwood argues that a virtue-based ethical framework adapts more fittingly to environmental diversity, as virtues such as 'respect', 'gratitude', and 'sensitivity' are not only suitable to ecological subjectivity but also more applicable to the views of indigenous people. Furthermore, what traits would be considered environmental vices? Ronald Sandler argues that dispositions detrimental to human flourishing, such as 'greed', 'intemperance' and 'arrogance', lead to dispositions detrimental to the protection of the environment, such as 'apathy' towards other species and 'pessimism' about conservation. Views such as this create a mutualistic connection between virtuous human flourishing and environmental flourishing. Anthropocentrism Anthropocentrism is the position that humans are the most important or critical element in any given situation; that the human race must always be its own primary concern. Detractors of anthropocentrism argue that the Western tradition is biased in favour of Homo sapiens when considering the environmental ethics of a situation and that humans evaluate their environment or other organisms in terms of their utility to them (see speciesism). Many argue that all environmental studies should include an assessment of the intrinsic value of non-human beings, which would entail a reassessment of humans' ecocultural identities. In fact, based on this very assumption, a philosophical article has recently explored the possibility of humans' willing extinction as a gesture toward other beings. The authors refer to the idea as a thought experiment that should not be understood as a call for action. Baruch Spinoza reasoned that if humans were to look at things objectively, they would discover that everything in the universe has a unique value. 
Likewise, it is possible that a human-centred or anthropocentric/androcentric ethic is not an accurate depiction of reality, and there is a bigger picture that humans may or may not be able to understand from a human perspective. Peter Vardy distinguished between two types of anthropocentrism. A strong anthropocentric ethic argues that humans are at the center of reality and it is right for them to be so. Weak anthropocentrism, however, argues that reality can only be interpreted from a human point of view, and thus humans have to be at the centre of reality as they see it. Another point of view has been developed by Bryan Norton, who has become one of the essential actors in environmental ethics by launching environmental pragmatism, now one of its leading trends. Environmental pragmatism refuses to take a stance in disputes between defenders of anthropocentrist and non-anthropocentrist ethics. Instead, Norton distinguishes between strong anthropocentrism and weak-or-extended-anthropocentrism and argues that the former must underestimate the diversity of instrumental values humans may derive from the natural world. A recent view relates anthropocentrism to the future of life. Biotic ethics are based on the human identity as part of gene/protein organic life whose effective purpose is self-propagation. This implies a human purpose to secure and propagate life. Humans are central because only they can secure life beyond the duration of the Sun, possibly for trillions of eons. Biotic ethics value life itself, as embodied in biological structures and processes. Humans are special because they can secure the future of life on cosmological scales. In particular, humans can continue sentient life that enjoys its existence, adding further motivation to propagate life. Humans can secure the future of life, and this future can give human existence a cosmic purpose. Status of the field Only after 1990 did the field gain institutional recognition at programs such as those at Colorado State University, the University of Montana, Bowling Green State University, and the University of North Texas. In 1991, Schumacher College of Dartington, England, was founded and now provides an MSc in Holistic Science. These programs began to offer a master's degree with a specialty in environmental ethics/philosophy. Beginning in 2005, the Department of Philosophy and Religion Studies at the University of North Texas offered a PhD program with a concentration in environmental ethics/philosophy. In Germany, the University of Greifswald has recently established an international program in Landscape Ecology & Nature Conservation with a strong focus on environmental ethics. In 2009, the University of Munich and Deutsches Museum founded the Rachel Carson Center for Environment and Society, an international, interdisciplinary center for research and education in the environmental humanities. Relationship with animal ethics Differing conceptions of the treatment of and obligations towards animals, particularly those living in the wild, within animal ethics and environmental ethics have been a source of controversy between the two ethical positions; some ethicists have asserted that the two positions are incompatible, while others have argued that these disagreements can be overcome. 
See also Anarcho-primitivism Biocentrism Bioethics Climate ethics Conservation movement Crop art Earth Economics (policy think tank) Ecocentrism Ecological economics EcoQuest (a series of two educational games) Environmental health ethics Environmental movement Environmental organization Environmental politics Environmental racism Environmental resource management Environmental skepticism Environmental virtue ethics Hans Jonas Human ecology List of environmental philosophers Nature conservation Population control Resource depletion Self-validating reduction Solastalgia Terraforming Trail ethics Van Rensselaer Potter Veganism Artificialization Notes Further reading Brennan, Andrew/ Lo, Yeuk-Sze 2016: Environmental Ethics. In: Zalta, Edward N. (Hg.): The Stanford Encyclopedia of Philosophy (Winter 2016 Edition). https://plato.stanford.edu, Stanford University: https://plato.stanford.edu/archives/win2016/entries/ethics–environmental/. Ott, Konrad (2020): Environmental ethics. In: Kirchhoff, Thomas (ed.): Online Encyclopedia Philosophy of Nature / Online Lexikon Naturphilosophie, doi: https://doi.org/10.11588/oepn.2020.0.71420; https://journals.ub.uni-heidelberg.de/index.php/oepn/article/view/71420. External links Bioethics Literature Database Brief History of Environmental Ethics Thesaurus Ethics in the Life Sciences EnviroLink Library: Environmental Ethics - online resource for environmental ethics information EnviroLink Forum - Environmental Ethics Discussion/Debate Environmental Ethics (journal) Sustainable and Ethical Architecture Architectural Firm Stanford Encyclopedia of Philosophy Environmental Ethics entry in the Internet Encyclopedia of Philosophy. Center for Environmental Philosophy UNT Dept of Philosophy Creation Care Reading Room: Extensive online resources for environment and faith (Tyndale Seminary) Category List - Religion-Online.org "Ecology/Environment" Islam, Christianity and the Environment Relational ethics Articles containing video clips Environmentalism Environmental philosophy
Environmental ethics
[ "Environmental_science" ]
4,532
[ "Environmental philosophy", "Environmental social science", "Environmental ethics" ]
171,992
https://en.wikipedia.org/wiki/Cyclotomic%20polynomial
In mathematics, the nth cyclotomic polynomial, for any positive integer n, is the unique irreducible polynomial with integer coefficients that is a divisor of and is not a divisor of for any Its roots are all nth primitive roots of unity , where k runs over the positive integers less than n and coprime to n (and i is the imaginary unit). In other words, the nth cyclotomic polynomial is equal to It may also be defined as the monic polynomial with integer coefficients that is the minimal polynomial over the field of the rational numbers of any primitive nth-root of unity ( is an example of such a root). An important relation linking cyclotomic polynomials and primitive roots of unity is showing that is a root of if and only if it is a dth primitive root of unity for some d that divides n. Examples If n is a prime number, then If n = 2p where p is a prime number other than 2, then For n up to 30, the cyclotomic polynomials are: The case of the 105th cyclotomic polynomial is interesting because 105 is the least positive integer that is the product of three distinct odd prime numbers (3×5×7) and this polynomial is the first one that has a coefficient other than 1, 0, or −1: Properties Fundamental tools The cyclotomic polynomials are monic polynomials with integer coefficients that are irreducible over the field of the rational numbers. Except for n equal to 1 or 2, they are palindromes of even degree. The degree of , or in other words the number of nth primitive roots of unity, is , where is Euler's totient function. The fact that is an irreducible polynomial of degree in the ring is a nontrivial result due to Gauss. Depending on the chosen definition, it is either the value of the degree or the irreducibility which is a nontrivial result. The case of prime n is easier to prove than the general case, thanks to Eisenstein's criterion. A fundamental relation involving cyclotomic polynomials is which means that each n-th root of unity is a primitive d-th root of unity for a unique d dividing n. The Möbius inversion formula allows to be expressed as an explicit rational fraction: where is the Möbius function. This provides a recursive formula for the cyclotomic polynomial , which may be computed by dividing by the cyclotomic polynomials for the proper divisors d dividing n, starting from : This gives an algorithm for computing any , provided integer factorization and division of polynomials are available. Many computer algebra systems, such as SageMath, Maple, Mathematica, and PARI/GP, have a built-in function to compute the cyclotomic polynomials. Easy cases for computation As noted above, if is a prime number, then If n is an odd integer greater than one, then In particular, if is twice an odd prime, then (as noted above) If is a prime power (where p is prime), then More generally, if with relatively prime to , then These formulas may be applied repeatedly to get a simple expression for any cyclotomic polynomial in terms of a cyclotomic polynomial of square free index: If is the product of the prime divisors of (its radical), then This allows formulas to be given for the th cyclotomic polynomial when has at most one odd prime factor: If is an odd prime number, and and are positive integers, then For other values of , the computation of the th cyclotomic polynomial is similarly reduced to that of where is the product of the distinct odd prime divisors of . 
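The recursive computation just described is short enough to sketch in code. The following Python snippet is an illustration added here rather than part of the article: it stores polynomials as plain lists of integer coefficients (constant term first) and obtains the nth cyclotomic polynomial by dividing x^n - 1 by the cyclotomic polynomials of all proper divisors of n; the helper names poly_div_exact and cyclotomic are my own, not standard library functions.

def poly_div_exact(num, den):
    # Exact division of integer-coefficient polynomials (coefficient lists, lowest degree first).
    num = list(num)
    quot = [0] * (len(num) - len(den) + 1)
    for i in range(len(quot) - 1, -1, -1):
        c = num[i + len(den) - 1] // den[-1]   # den is monic here, so this step is exact
        quot[i] = c
        for j, d in enumerate(den):
            num[i + j] -= c * d
    assert all(r == 0 for r in num), "division was not exact"
    return quot

def cyclotomic(n, _cache={}):
    # nth cyclotomic polynomial: divide x**n - 1 by Phi_d for every proper divisor d of n.
    if n not in _cache:
        poly = [-1] + [0] * (n - 1) + [1]      # x**n - 1
        for d in range(1, n):
            if n % d == 0:
                poly = poly_div_exact(poly, cyclotomic(d))
        _cache[n] = poly
    return _cache[n]

print(cyclotomic(6))          # [1, -1, 1], i.e. x**2 - x + 1
print(cyclotomic(12))         # [1, 0, -1, 0, 1], i.e. x**4 - x**2 + 1
print(min(cyclotomic(105)))   # -2: the 105th polynomial is the first with a coefficient outside {-1, 0, 1}

Running it reproduces the facts quoted above, in particular that the polynomial for n = 105 is the first whose coefficients are not all 1, 0 or −1.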
To deal with this case, one has that, for prime and not dividing , Integers appearing as coefficients The problem of bounding the magnitude of the coefficients of the cyclotomic polynomials has been the object of a number of research papers. If n has at most two distinct odd prime factors, then Migotti showed that the coefficients of are all in the set {1, −1, 0}. The first cyclotomic polynomial for a product of three different odd prime factors is it has a coefficient −2 (see above). The converse is not true: only has coefficients in {1, −1, 0}. If n is a product of more different odd prime factors, the coefficients may increase to very high values. E.g., has coefficients running from −22 to 23; also , the smallest n with 6 different odd primes, has coefficients of magnitude up to 532. Let A(n) denote the maximum absolute value of the coefficients of . It is known that for any positive k, the number of n up to x with A(n) > nk is at least c(k)⋅x for a positive c(k) depending on k and x sufficiently large. In the opposite direction, for any function ψ(n) tending to infinity with n we have A(n) bounded above by nψ(n) for almost all n. A combination of theorems of Bateman and Vaughan states that on the one hand, for every , we have for all sufficiently large positive integers , and on the other hand, we have for infinitely many positive integers . This implies in particular that univariate polynomials (concretely for infinitely many positive integers ) can have factors (like ) whose coefficients are superpolynomially larger than the original coefficients. This is not too far from the general Landau-Mignotte bound. Gauss's formula Let n be odd, square-free, and greater than 3. Then: for certain polynomials An(z) and Bn(z) with integer coefficients, An(z) of degree φ(n)/2, and Bn(z) of degree φ(n)/2 − 2. Furthermore, An(z) is palindromic when its degree is even; if its degree is odd it is antipalindromic. Similarly, Bn(z) is palindromic unless n is composite and n ≡ 3 (mod 4), in which case it is antipalindromic. The first few cases are Lucas's formula Let n be odd, square-free and greater than 3. Then for certain polynomials Un(z) and Vn(z) with integer coefficients, Un(z) of degree φ(n)/2, and Vn(z) of degree φ(n)/2 − 1. This can also be written If n is even, square-free and greater than 2 (this forces n/2 to be odd), for Cn(z) and Dn(z) with integer coefficients, Cn(z) of degree φ(n), and Dn(z) of degree φ(n) − 1. Cn(z) and Dn(z) are both palindromic. The first few cases are: Sister Beiter conjecture The Sister Beiter conjecture is concerned with the maximal size (in absolute value) of coefficients of ternary cyclotomic polynomials where are three odd primes. Cyclotomic polynomials over a finite field and over the -adic integers Over a finite field with a prime number of elements, for any integer that is not a multiple of , the cyclotomic polynomial factorizes into irreducible polynomials of degree , where is Euler's totient function and is the multiplicative order of modulo . In particular, is irreducible if and only if is a primitive root modulo , that is, does not divide , and its multiplicative order modulo is , the degree of . These results are also true over the -adic integers, since Hensel's lemma allows lifting a factorization over the field with elements to a factorization over the -adic integers. Polynomial values If takes any real value, then for every (this follows from the fact that the roots of a cyclotomic polynomial are all non-real, for ). 
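As a concrete check of the finite-field statement a few paragraphs above, the following Python snippet (an illustration added here, not part of the article) factors the 15th cyclotomic polynomial over the field with 7 elements and compares the factor degrees with the multiplicative order of 7 modulo 15. It assumes SymPy's cyclotomic_poly, factor_list (with its modulus keyword), n_order and totient behave as documented.

from sympy import symbols, cyclotomic_poly, factor_list, n_order, totient

x = symbols('x')
n, p = 15, 7                              # p must not divide n
d = n_order(p, n)                         # multiplicative order of p modulo n; here 4
phi = cyclotomic_poly(n, x)               # x**8 - x**7 + x**5 - x**4 + x**3 - x + 1
_, factors = factor_list(phi, modulus=p)  # factorisation over GF(p)

print("predicted:", totient(n) // d, "irreducible factors of degree", d)
print("found degrees:", [f.as_poly(x).degree() for f, _ in factors])

Since 7 has order 4 modulo 15 and φ(15) = 8, the statement above predicts two irreducible quartic factors over the field with 7 elements.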
For studying the values that a cyclotomic polynomial may take when is given an integer value, it suffices to consider only the case , as the cases and are trivial (one has and ). For , one has if is not a prime power, if is a prime power with . The values that a cyclotomic polynomial may take for other integer values of is strongly related with the multiplicative order modulo a prime number. More precisely, given a prime number and an integer coprime with , the multiplicative order of modulo , is the smallest positive integer such that is a divisor of For , the multiplicative order of modulo is also the shortest period of the representation of in the numeral base (see Unique prime; this explains the notation choice). The definition of the multiplicative order implies that, if is the multiplicative order of modulo , then is a divisor of The converse is not true, but one has the following. If is a positive integer and is an integer, then (see below for a proof) where is a non-negative integer, always equal to 0 when is even. (In fact, if is neither 1 nor 2, then is either 0 or 1. Besides, if is not a power of 2, then is always equal to 0) is 1 or the largest odd prime factor of . is odd, coprime with , and its prime factors are exactly the odd primes such that is the multiplicative order of modulo . This implies that, if is an odd prime divisor of then either is a divisor of or is a divisor of . In the latter case, does not divide Zsigmondy's theorem implies that the only cases where and are It follows from above factorization that the odd prime factors of are exactly the odd primes such that is the multiplicative order of modulo . This fraction may be even only when is odd. In this case, the multiplicative order of modulo is always . There are many pairs with such that is prime. In fact, Bunyakovsky conjecture implies that, for every , there are infinitely many such that is prime. See for the list of the smallest such that is prime (the smallest such that is prime is about , where is Euler–Mascheroni constant, and is Euler's totient function). See also for the list of the smallest primes of the form with and , and, more generally, , for the smallest positive integers of this form. Values of If is a prime power, then If is not a prime power, let we have and is the product of the for dividing and different of . If is a prime divisor of multiplicity in , then divide , and their values at are factors equal to of As is the multiplicity of in , cannot divide the value at of the other factors of Thus there is no prime that divides If is the multiplicative order of modulo , then By definition, If then would divide another factor of and would thus divide showing that, if there would be the case, would not be the multiplicative order of modulo . The other prime divisors of are divisors of . Let be a prime divisor of such that is not be the multiplicative order of modulo . If is the multiplicative order of modulo , then divides both and The resultant of and may be written where and are polynomials. Thus divides this resultant. As divides , and the resultant of two polynomials divides the discriminant of any common multiple of these polynomials, divides also the discriminant of Thus divides . and are coprime. In other words, if is a prime common divisor of and then is not the multiplicative order of modulo . By Fermat's little theorem, the multiplicative order of is a divisor of , and thus smaller than . is square-free. In other words, if is a prime common divisor of and then does not divide Let . 
It suffices to prove that does not divide for some polynomial , which is a multiple of We take The multiplicative order of modulo divides , which is a divisor of . Thus is a multiple of . Now, As is prime and greater than 2, all the terms but the first one are multiples of This proves that Applications Using , one can give an elementary proof for the infinitude of primes congruent to 1 modulo n, which is a special case of Dirichlet's theorem on arithmetic progressions. Suppose is a finite list of primes congruent to modulo Let and consider . Let be a prime factor of (to see that decompose it into linear factors and note that 1 is the closest root of unity to ). Since we know that is a new prime not in the list. We will show that Let be the order of modulo Since we have . Thus . We will show that . Assume for contradiction that . Since we have for some . Then is a double root of Thus must be a root of the derivative so But and therefore This is a contradiction so . The order of which is , must divide . Thus Periodic recursive sequences The constant-coefficient linear recurrences which are periodic are precisely the power series coefficients of rational functions whose denominators are products of cyclotomic polynomials. In the theory of combinatorial generating functions, the denominator of a rational function determines a linear recurrence for its power series coefficients. For example, the Fibonacci sequence has generating function and equating coefficients on both sides of gives for . Any rational function whose denominator is a divisor of has a recursive sequence of coefficients which is periodic with period at most n. For example, has coefficients defined by the recurrence for , starting from . But , so we may write which means for , and the sequence has period 6 with initial values given by the coefficients of the numerator. See also Cyclotomic field Aurifeuillean factorization Root of unity References Further reading Gauss's book Disquisitiones Arithmeticae [Arithmetical Investigations] has been translated from Latin into French, German, and English. The German edition includes all of his papers on number theory: all the proofs of quadratic reciprocity, the determination of the sign of the Gauss sum, the investigations into biquadratic reciprocity, and unpublished notes. ; Reprinted 1965, New York: Chelsea, ; Corrected ed. 1986, New York: Springer, , External links Polynomials Algebra Number theory
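As a small numerical illustration of the "Polynomial values" discussion and the application above (added here, not part of the article), the snippet below evaluates the nth cyclotomic polynomial at an integer b and lists the multiplicative order of b modulo each prime factor of the value; apart from possibly the largest odd prime factor of n, every such prime has order exactly n and is therefore congruent to 1 modulo n. It assumes SymPy's cyclotomic_poly, factorint and n_order work as documented; the function name inspect is mine.

from sympy import symbols, cyclotomic_poly, factorint, n_order

x = symbols('x')

def inspect(n, b):
    # Evaluate Phi_n(b), then record the multiplicative order of b modulo each prime factor.
    value = int(cyclotomic_poly(n, x).subs(x, b))
    orders = {q: n_order(b, q) for q in factorint(value)}
    print(f"Phi_{n}({b}) = {value}; prime factor -> multiplicative order of {b}: {orders}")

inspect(6, 10)   # 91 = 7 * 13; 10 has order 6 modulo both, and 7 ≡ 13 ≡ 1 (mod 6)
inspect(12, 2)   # 13; 2 has order 12 modulo 13, and 13 ≡ 1 (mod 12)
inspect(20, 3)   # 5905 = 5 * 1181; 5 is the largest prime factor of 20, and 1181 ≡ 1 (mod 20)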
Cyclotomic polynomial
[ "Mathematics" ]
3,051
[ "Polynomials", "Discrete mathematics", "Algebra", "Number theory" ]
172,047
https://en.wikipedia.org/wiki/Classful%20network
A classful network is an obsolete network addressing architecture used in the Internet from 1981 until the introduction of Classless Inter-Domain Routing (CIDR) in 1993. The method divides the IP address space for Internet Protocol version 4 (IPv4) into five address classes based on the leading four address bits. Classes A, B, and C provide unicast addresses for networks of three different network sizes. Class D is for multicast networking and the class E address range is reserved for future or experimental purposes. Since its discontinuation, remnants of classful network concepts have remained in practice only in limited scope in the default configuration parameters of some network software and hardware components, most notably in the default configuration of subnet masks. Background In the original address definition, the most significant eight bits of the 32-bit IPv4 address were the network number field, which specified the particular network a host was attached to. The remaining 24 bits specified the local address, also called the rest field (the rest of the address), which uniquely identified a host connected to that network. This format was sufficient at a time when only a few large networks existed, such as the ARPANET (network number 10), and before the wide proliferation of local area networks (LANs). As a consequence of this architecture, the address space supported only a low number (254) of independent networks. Before the introduction of address classes, the only address blocks available were these large blocks which later became known as Class A networks. As a result, some organizations involved in the early development of the Internet received very large address space allocations (16,777,216 IP addresses each). Introduction of address classes Expansion of the network had to ensure compatibility with the existing address space and the IPv4 packet structure, and avoid the renumbering of the existing networks. The solution was to expand the definition of the network number field to include more bits, allowing more networks to be designated, each potentially having fewer hosts. Since all existing network numbers at the time were smaller than 64, they had only used the 6 least-significant bits of the network number field. Thus it was possible to use the most-significant bits of an address to introduce a set of address classes while preserving the existing network numbers in the first of these classes. The new addressing architecture was introduced by RFC 791 in 1981 as a part of the specification of the Internet Protocol. It divided the address space into primarily three address formats, henceforth called address classes, and left a fourth range reserved to be defined later. The first class, designated as Class A, contained all addresses in which the most significant bit is zero. The network number for this class is given by the next 7 bits, therefore accommodating 128 networks in total, including the zero network, and including the IP networks already allocated. A Class B network was a network in which all addresses had the two most-significant bits set to 1 and 0 respectively. For these networks, the network address was given by the next 14 bits of the address, thus leaving 16 bits for numbering hosts on the network for a total of 65,536 addresses per network. Class C was defined with the 3 high-order bits set to 1, 1, and 0, and designating the next 21 bits to number the networks, leaving each network with 256 local addresses. 
The leading bit sequence 111 designated an at-the-time unspecified addressing mode ("escape to extended addressing mode"), which was later subdivided as Class D (1110) for multicast addressing, while leaving as reserved for future use the 1111 block designated as Class E. This architecture change extended the addressing capacity of the Internet but did not prevent IP address exhaustion. The problem was that many sites needed larger address blocks than a Class C network provided, and therefore they received a Class B block, which was in most cases much larger than required. Due to the rapid growth of the Internet, the pool of unassigned Class B addresses (2^14, or about 16,000) was rapidly being depleted. Starting in 1993, classful networking was replaced by Classless Inter-Domain Routing (CIDR), in an attempt to solve this problem. Classful addressing definition Under classful network addressing, the 32-bit IPv4 address space was partitioned into five classes (A-E) as shown in the following tables. Classes Bit-wise representation In the following bit-wise representation, n indicates a bit used for the network ID, H indicates a bit used for the host ID, and X indicates a bit without a specified purpose.
Class A: 0.0.0.0 = 00000000.00000000.00000000.00000000 to 127.255.255.255 = 01111111.11111111.11111111.11111111, pattern 0nnnnnnn.HHHHHHHH.HHHHHHHH.HHHHHHHH
Class B: 128.0.0.0 = 10000000.00000000.00000000.00000000 to 191.255.255.255 = 10111111.11111111.11111111.11111111, pattern 10nnnnnn.nnnnnnnn.HHHHHHHH.HHHHHHHH
Class C: 192.0.0.0 = 11000000.00000000.00000000.00000000 to 223.255.255.255 = 11011111.11111111.11111111.11111111, pattern 110nnnnn.nnnnnnnn.nnnnnnnn.HHHHHHHH
Class D: 224.0.0.0 = 11100000.00000000.00000000.00000000 to 239.255.255.255 = 11101111.11111111.11111111.11111111, pattern 1110XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX
Class E: 240.0.0.0 = 11110000.00000000.00000000.00000000 to 255.255.255.255 = 11111111.11111111.11111111.11111111, pattern 1111XXXX.XXXXXXXX.XXXXXXXX.XXXXXXXX
The number of addresses usable for addressing specific hosts in each network is always 2^N − 2, where N is the number of rest field bits, and the subtraction of 2 adjusts for the use of the all-bits-zero host value to represent the network address and the all-bits-one host value for use as a broadcast address. Thus, for a Class C address with 8 bits available in the host field, the maximum number of hosts is 254. Today, IP addresses are associated with a subnet mask. This was not required in a classful network because the mask was implied by the address itself; any network device would inspect the first few bits of the IP address to determine the class of the address and thus its netmask. The blocks numerically at the start and end of classes A, B and C were originally reserved for special addressing or future features, i.e., 0.0.0.0/8 and 127.0.0.0/8 are reserved in former class A; 128.0.0.0/16 and 191.255.0.0/16 were reserved in former class B but are now available for assignment; 192.0.0.0/24 and 223.255.255.0/24 are reserved in former class C. While the 127.0.0.0/8 network is a Class A network, it is designated for loopback and cannot be assigned to a network. Class D is reserved for multicast and cannot be used for regular unicast traffic. Class E is reserved and cannot be used on the public Internet. Many older routers will not accept using it in any context. See also IPv4 subnetting reference List of assigned /8 IPv4 address blocks Notes References External links IANA, Current IPv4 /8 delegations Overview of IP addressing, both classless and classful (404) It includes a list of Class A networks as of that date. Internet architecture IP addresses
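To make the class definitions above concrete, here is a minimal Python sketch (an illustration added here, not part of the article). It derives the class, the implied netmask and the number of usable host addresses (2^N − 2) from the first octet alone, as a classful device would; the function name classify is mine.

def classify(address):
    # Classful inspection: only the leading bits (visible in the first octet) matter.
    first_octet = int(address.split('.')[0])
    if first_octet < 128:        # leading bit 0
        return 'A', '255.0.0.0', 2**24 - 2
    if first_octet < 192:        # leading bits 10
        return 'B', '255.255.0.0', 2**16 - 2
    if first_octet < 224:        # leading bits 110
        return 'C', '255.255.255.0', 2**8 - 2
    if first_octet < 240:        # leading bits 1110
        return 'D (multicast)', None, None
    return 'E (reserved)', None, None   # leading bits 1111

for ip in ('10.0.0.1', '172.16.5.4', '192.0.2.7', '224.0.0.5', '250.1.2.3'):
    print(ip, classify(ip))

For example, 172.16.5.4 falls in Class B, so a classful device would assume the netmask 255.255.0.0 and 65,534 usable host addresses.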
Classful network
[ "Technology" ]
1,650
[ "Internet architecture", "IT infrastructure" ]
172,048
https://en.wikipedia.org/wiki/Partially%20ordered%20group
In abstract algebra, a partially ordered group is a group (G, +) equipped with a partial order "≤" that is translation-invariant; in other words, "≤" has the property that, for all a, b, and g in G, if a ≤ b then a + g ≤ b + g and g + a ≤ g + b.

An element x of G is called positive if 0 ≤ x. The set of elements x with 0 ≤ x is often denoted by G+, and is called the positive cone of G. By translation invariance, we have a ≤ b if and only if 0 ≤ -a + b. So we can reduce the partial order to a monadic property: a ≤ b if and only if -a + b ∈ G+.

For the general group G, the existence of a positive cone specifies an order on G. A group G is a partially orderable group if and only if there exists a subset H (which is G+) of G such that:
0 ∈ H
if a ∈ H and b ∈ H then a + b ∈ H
if a ∈ H then -x + a + x ∈ H for each x of G
if a ∈ H and -a ∈ H then a = 0

A partially ordered group G with positive cone G+ is said to be unperforated if n · g ∈ G+ for some positive integer n implies g ∈ G+. Being unperforated means there is no "gap" in the positive cone G+.

If the order on the group is a linear order, then it is said to be a linearly ordered group. If the order on the group is a lattice order, i.e. any two elements have a least upper bound, then it is a lattice-ordered group (shortly l-group, though usually typeset with a script l: ℓ-group).

A Riesz group is an unperforated partially ordered group with a property slightly weaker than being a lattice-ordered group. Namely, a Riesz group satisfies the Riesz interpolation property: if x1, x2, y1, y2 are elements of G and xi ≤ yj, then there exists z ∈ G such that xi ≤ z ≤ yj.

If G and H are two partially ordered groups, a map from G to H is a morphism of partially ordered groups if it is both a group homomorphism and a monotonic function. The partially ordered groups, together with this notion of morphism, form a category.

Partially ordered groups are used in the definition of valuations of fields.

Examples
The integers with their usual order
An ordered vector space is a partially ordered group
A Riesz space is a lattice-ordered group
A typical example of a partially ordered group is Zn, where the group operation is componentwise addition, and we write (a1,...,an) ≤ (b1,...,bn) if and only if ai ≤ bi (in the usual order of integers) for all i = 1,..., n.
More generally, if G is a partially ordered group and X is some set, then the set of all functions from X to G is again a partially ordered group: all operations are performed componentwise. Furthermore, every subgroup of G is a partially ordered group: it inherits the order from G.
If A is an approximately finite-dimensional C*-algebra, or more generally, if A is a stably finite unital C*-algebra, then K0(A) is a partially ordered abelian group. (Elliott, 1976)

Properties

Archimedean
The Archimedean property of the real numbers can be generalized to partially ordered groups.

Property: A partially ordered group G is called Archimedean when for any a and b in G, if aⁿ ≤ b for all integers n, then a = 1. Equivalently, when a ≠ 1, then for any b in G, there is some integer n for which aⁿ ≤ b fails to hold.

Integrally closed
A partially ordered group G is called integrally closed if for all elements a and b of G, if aⁿ ≤ b for all natural n then a ≤ 1.

This property is somewhat stronger than the fact that a partially ordered group is Archimedean, though for a lattice-ordered group to be integrally closed and to be Archimedean is equivalent. There is a theorem that every integrally closed directed group is already abelian.
This has to do with the fact that a directed group is embeddable into a complete lattice-ordered group if and only if it is integrally closed.

See also

Note

References
M. Anderson and T. Feil, Lattice Ordered Groups: an Introduction, D. Reidel, 1988.
M. R. Darnel, The Theory of Lattice-Ordered Groups, Lecture Notes in Pure and Applied Mathematics 187, Marcel Dekker, 1995.
L. Fuchs, Partially Ordered Algebraic Systems, Pergamon Press, 1963.
V. M. Kopytov and A. I. Kokorin (trans. by D. Louvish), Fully Ordered Groups, Halsted Press (John Wiley & Sons), 1974.
V. M. Kopytov and N. Ya. Medvedev, Right-ordered groups, Siberian School of Algebra and Logic, Consultants Bureau, 1996.
R. B. Mura and A. Rhemtulla, Orderable groups, Lecture Notes in Pure and Applied Mathematics 27, Marcel Dekker, 1977.
, chap. 9.

Further reading

External links

Ordered algebraic structures
Ordered groups
Order theory
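Returning to the Zn example given above, the following small sketch (illustrative only; the helper names are invented) checks translation invariance and the Riesz interpolation property for Z2 with the componentwise order.

# Illustrative check of the componentwise order on Z^2 (the Zn example above).
# a <= b iff -a + b lies in the positive cone, i.e. every component of b - a is >= 0.

def leq(a, b):
    return all(bi - ai >= 0 for ai, bi in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

# Translation invariance: a <= b implies a + g <= b + g.
a, b, g = (1, 2), (3, 2), (-5, 7)
assert leq(a, b) and leq(add(a, g), add(b, g))

# Riesz interpolation in Z^2: if x1, x2 <= y1, y2, the componentwise maximum
# of x1 and x2 is an interpolant z with xi <= z <= yj (Z^2 is lattice-ordered,
# so the join always provides such a z).
x1, x2, y1, y2 = (0, 3), (2, 1), (4, 5), (2, 3)
z = tuple(max(p, q) for p, q in zip(x1, x2))
assert all(leq(x, z) for x in (x1, x2)) and all(leq(z, y) for y in (y1, y2))
print("interpolant z =", z)  # interpolant z = (2, 3)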
Partially ordered group
[ "Mathematics" ]
1,120
[ "Mathematical structures", "Algebraic structures", "Ordered algebraic structures", "Ordered groups", "Order theory" ]
172,068
https://en.wikipedia.org/wiki/Logit
In statistics, the logit function is the quantile function associated with the standard logistic distribution. It has many uses in data analysis and machine learning, especially in data transformations.

Mathematically, the logit is the inverse of the standard logistic function σ(x) = 1/(1 + e^(−x)), so the logit is defined as
logit(p) = σ^(−1)(p) = ln(p/(1 − p)) for p in (0, 1).
Because of this, the logit is also called the log-odds since it is equal to the logarithm of the odds p/(1 − p), where p is a probability. Thus, the logit is a type of function that maps probability values from (0, 1) to real numbers in (−∞, +∞), akin to the probit function.

Definition
If p is a probability, then p/(1 − p) is the corresponding odds; the logit of the probability is the logarithm of the odds, i.e.:
logit(p) = ln(p/(1 − p)) = ln(p) − ln(1 − p).
The base of the logarithm function used is of little importance in the present article, as long as it is greater than 1, but the natural logarithm with base e is the one most often used. The choice of base corresponds to the choice of logarithmic unit for the value: base 2 corresponds to a shannon, base e to a nat, and base 10 to a hartley; these units are particularly used in information-theoretic interpretations. For each choice of base, the logit function takes values between negative and positive infinity.

The "logistic" function of any number α is given by the inverse-logit:
logit^(−1)(α) = σ(α) = 1/(1 + e^(−α)) = e^α/(1 + e^α).
The difference between the logits of two probabilities is the logarithm of the odds ratio (R), thus providing a shorthand for writing the correct combination of odds ratios only by adding and subtracting:
ln(R) = ln((p1/(1 − p1)) / (p2/(1 − p2))) = logit(p1) − logit(p2).

History
Several approaches have been explored to adapt linear regression methods to a domain where the output is a probability value p, instead of any real number. In many cases, such efforts have focused on modeling this problem by mapping the range (0, 1) to (−∞, +∞) and then running the linear regression on these transformed values.

In 1934, Chester Ittner Bliss used the cumulative normal distribution function to perform this mapping and called his model probit, an abbreviation for "probability unit". This is, however, computationally more expensive. In 1944, Joseph Berkson used log of odds and called this function logit, an abbreviation for "logistic unit", following the analogy for probit. Log odds was used extensively by Charles Sanders Peirce (late 19th century). G. A. Barnard in 1949 coined the commonly used term log-odds; the log-odds of an event is the logit of the probability of the event. Barnard also coined the term lods as an abstract form of "log-odds", but suggested that "in practice the term 'odds' should normally be used, since this is more familiar in everyday life".

Uses and properties
The logit in logistic regression is a special case of a link function in a generalized linear model: it is the canonical link function for the Bernoulli distribution.
More abstractly, the logit is the natural parameter for the binomial distribution.
The logit function is the negative of the derivative of the binary entropy function.
The logit is also central to the probabilistic Rasch model for measurement, which has applications in psychological and educational assessment, among other areas.
The inverse-logit function (i.e., the logistic function) is also sometimes referred to as the expit function.
In plant disease epidemiology, the logistic, Gompertz, and monomolecular models are collectively known as the Richards family models.
The log-odds function of probabilities is often used in state estimation algorithms because of its numerical advantages in the case of small probabilities.
Instead of multiplying very small floating point numbers, log-odds probabilities can just be summed up to calculate the (log-odds) joint probability.

Comparison with probit
Closely related to the logit function (and logit model) are the probit function and probit model. The logit and probit are both sigmoid functions with a domain between 0 and 1, which makes them both quantile functions – i.e., inverses of the cumulative distribution function (CDF) of a probability distribution. In fact, the logit is the quantile function of the logistic distribution, while the probit is the quantile function of the normal distribution. The probit function is denoted Φ^(−1)(p), where Φ is the CDF of the standard normal distribution, as just mentioned. The logit and probit functions are extremely similar when the probit function is scaled so that its slope at p = 0.5 matches the slope of the logit. As a result, probit models are sometimes used in place of logit models because for certain applications (e.g., in item response theory) the implementation is easier.

See also
Sigmoid function, inverse of the logit function
Discrete choice on binary logit, multinomial logit, conditional logit, nested logit, mixed logit, exploded logit, and ordered logit
Limited dependent variable
Logit analysis in marketing
Multinomial logit
Ogee, curve with similar shape
Perceptron
Probit, another function with the same domain and range as the logit
Ridit scoring
Data transformation (statistics)
Arcsin (transformation)
Rasch model

References

External links
Which Link Function — Logit, Probit, or Cloglog? 12.04.2023

Further reading

Logarithms
Special functions
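The definitions above translate directly into code. The sketch below is illustrative only; the function names logit and expit mirror the terminology above but are not taken from any particular library.

# Minimal sketch of the logit (log-odds) and its inverse, the logistic or
# "expit" function, following the definitions above. Illustrative only.
import math

def logit(p: float) -> float:
    """Log-odds of a probability p in (0, 1): ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def expit(alpha: float) -> float:
    """Inverse logit (logistic function): 1 / (1 + e^(-alpha))."""
    return 1.0 / (1.0 + math.exp(-alpha))

p1, p2 = 0.9, 0.4

# The difference of two logits is the log of the odds ratio, as noted above.
odds_ratio = (p1 / (1 - p1)) / (p2 / (1 - p2))
assert math.isclose(logit(p1) - logit(p2), math.log(odds_ratio))

# Round trip: expit(logit(p)) recovers p.
assert math.isclose(expit(logit(p1)), p1)

print(logit(0.5), logit(0.9))  # 0.0 followed by about 2.197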
Logit
[ "Mathematics" ]
1,100
[ "E (mathematical constant)", "Logarithms", "Special functions", "Combinatorics" ]
172,080
https://en.wikipedia.org/wiki/Balcony
A balcony (from , "scaffold") is a platform projecting from the wall of a building, supported by columns or console brackets, and enclosed with a balustrade, usually above the ground floor. They are commonly found on multi-level houses, apartments and cruise ships. Types The traditional Maltese balcony is a wooden, closed balcony projecting from a wall. In contrast, a Juliet balcony does not protrude out of the building. It is usually part of an upper floor, with a balustrade only at the front, resembling a small loggia. A modern Juliet balcony often involves a metal barrier placed in front of a high window that can be opened. In the UK, the technical name for one of these was officially changed in August 2020 to a Juliet guarding. Juliet balconies are named after William Shakespeare's Juliet who, in traditional staging of the play Romeo and Juliet, is courted by Romeo while she is on her balcony—although the play itself, as written, makes no mention of a balcony, but only of a window at which Juliet appears. Various types of balcony have been used in this famous scene; the "balcony of Juliet" at Villa Capuleti in Verona is not a Juliet balcony, as it protrudes from the wall of the villa (see photograph below). Functions A unit with a regular balcony will have doors that open onto a small patio with railings, a small patio garden or skyrise greenery. A French balcony is a false balcony, with doors that open to a railing with a view of the courtyard or the surrounding scenery below. Sometimes balconies are adapted for ceremonial purposes, e.g. that of St. Peter's Basilica at Rome, when the newly elected pope gives his blessing urbi et orbi after the conclave. Inside churches, balconies are sometimes provided for the singers, and in banqueting halls and the like for the musicians. In theatres, the balcony was formerly a stage box, but the name is now usually confined to the part of the auditorium above the dress circle and below the gallery. Balconies are part of the sculptural shape of the building allowing for irregular facades without the cost of irregular internal structures. In addition to functioning as an outdoor space for a dwelling unit, balconies can also play a secondary role in building sustainability and indoor environmental quality (IEQ). Balconies have been shown to provide an overhang effect that helps prevent interior overheating by reducing solar gain, and may also have benefits in terms of blocking noise and improving natural ventilation within units. Materials Balconies can be made out of various materials; historically, stone was the most commonly used. With the rise of technology and the modern age, balconies are now able to be built out of other materials, including glass and stainless steel to provide a durable and modern look to a building. Examples One of the most famous uses of a balcony is in traditional staging of the scene that has come to be known as the "balcony scene" in Shakespeare's tragedy Romeo and Juliet (though the scene makes no mention of a balcony, only of a window at which Juliet appears). Names Manufacturers' names for their balcony railing designs often refer to the origin of the design, e.g. Italian balcony, Spanish balcony, Mexican balcony, Ecuadorian balcony. They also refer to the shape and form of the pickets used for the balcony railings, e.g. knuckle balcony. Within the construction industry it is normal for balconies to be named descriptively. 
For example, slide-on cassette balconies, referring to the modern method used to install aluminum balconies, or cast-in-situ balconies, relating to concrete balconies poured on a construction site. Gallery See also Notes References External links Architectural elements Floors Garden features Parts of a theatre
Balcony
[ "Technology", "Engineering" ]
795
[ "Structural engineering", "Parts of a theatre", "Building engineering", "Floors", "Architectural elements", "Components", "Architecture" ]
172,088
https://en.wikipedia.org/wiki/Machine%20vision
Machine vision is the technology and methods used to provide imaging-based automatic inspection and analysis for such applications as automatic inspection, process control, and robot guidance, usually in industry. Machine vision refers to many technologies, software and hardware products, integrated systems, actions, methods and expertise. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of computer science. It attempts to integrate existing technologies in new ways and apply them to solve real world problems. The term is the prevalent one for these functions in industrial automation environments but is also used for these functions in other environments such as vehicle guidance.

The overall machine vision process includes planning the details of the requirements and project, and then creating a solution. During run-time, the process starts with imaging, followed by automated analysis of the image and extraction of the required information.

Definition
Definitions of the term "Machine vision" vary, but all include the technology and methods used to extract information from an image on an automated basis, as opposed to image processing, where the output is another image. The information extracted can be a simple good-part/bad-part signal, or a more complex set of data such as the identity, position and orientation of each object in an image. The information can be used for such applications as automatic inspection and robot and process guidance in industry, for security monitoring and vehicle guidance. This field encompasses a large number of technologies, software and hardware products, integrated systems, actions, methods and expertise.

Machine vision is practically the only term used for these functions in industrial automation applications; the term is less universal for these functions in other environments such as security and vehicle guidance. Machine vision as a systems engineering discipline can be considered distinct from computer vision, a form of basic computer science; machine vision attempts to integrate existing technologies in new ways and apply them to solve real world problems in a way that meets the requirements of industrial automation and similar application areas. The term is also used in a broader sense by trade shows and trade groups such as the Automated Imaging Association and the European Machine Vision Association. This broader definition also encompasses products and applications most often associated with image processing.

The primary uses for machine vision are automatic inspection and industrial robot/process guidance. In more recent times the terms computer vision and machine vision have converged to a greater degree. See glossary of machine vision.

Imaging based automatic inspection and sorting
The primary uses for machine vision are imaging-based automatic inspection and sorting and robot guidance; in this section the former is abbreviated as "automatic inspection". The overall process includes planning the details of the requirements and project, and then creating a solution. This section describes the technical process that occurs during the operation of the solution.

Methods and sequence of operation
The first step in the automatic inspection sequence of operation is acquisition of an image, typically using cameras, lenses, and lighting that has been designed to provide the differentiation required by subsequent processing.
MV software packages and programs developed in them then employ various digital image processing techniques to extract the required information, and often make decisions (such as pass/fail) based on the extracted information. Equipment The components of an automatic inspection system usually include lighting, a camera or other imager, a processor, software, and output devices. Imaging The imaging device (e.g. camera) can either be separate from the main image processing unit or combined with it in which case the combination is generally called a smart camera or smart sensor. Inclusion of the full processing function into the same enclosure as the camera is often referred to as embedded processing. When separated, the connection may be made to specialized intermediate hardware, a custom processing appliance, or a frame grabber within a computer using either an analog or standardized digital interface (Camera Link, CoaXPress). MV implementations also use digital cameras capable of direct connections (without a framegrabber) to a computer via FireWire, USB or Gigabit Ethernet interfaces. While conventional (2D visible light) imaging is most commonly used in MV, alternatives include multispectral imaging, hyperspectral imaging, imaging various infrared bands, line scan imaging, 3D imaging of surfaces and X-ray imaging. Key differentiations within MV 2D visible light imaging are monochromatic vs. color, frame rate, resolution, and whether or not the imaging process is simultaneous over the entire image, making it suitable for moving processes. Though the vast majority of machine vision applications are solved using two-dimensional imaging, machine vision applications utilizing 3D imaging are a growing niche within the industry. The most commonly used method for 3D imaging is scanning based triangulation which utilizes motion of the product or image during the imaging process. A laser is projected onto the surfaces of an object. In machine vision this is accomplished with a scanning motion, either by moving the workpiece, or by moving the camera & laser imaging system. The line is viewed by a camera from a different angle; the deviation of the line represents shape variations. Lines from multiple scans are assembled into a depth map or point cloud. Stereoscopic vision is used in special cases involving unique features present in both views of a pair of cameras. Other 3D methods used for machine vision are time of flight and grid based. One method is grid array based systems using pseudorandom structured light system as employed by the Microsoft Kinect system circa 2012. Image processing After an image is acquired, it is processed. Central processing functions are generally done by a CPU, a GPU, a FPGA or a combination of these. Deep learning training and inference impose higher processing performance requirements. Multiple stages of processing are generally used in a sequence that ends up as a desired result. A typical sequence might start with tools such as filters which modify the image, followed by extraction of objects, then extraction (e.g. measurements, reading of codes) of data from those objects, followed by communicating that data, or comparing it against target values to create and communicate "pass/fail" results. Machine vision image processing methods include; Stitching/Registration: Combining of adjacent 2D or 3D images. Filtering (e.g. morphological filtering) Thresholding: Thresholding starts with setting or determining a gray value that will be useful for the following steps. 
The value is then used to separate portions of the image, and sometimes to transform each portion of the image to simply black and white based on whether it is below or above that grayscale value. Pixel counting: counts the number of light or dark pixels Segmentation: Partitioning a digital image into multiple segments to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Edge detection: finding object edges Color Analysis: Identify parts, products and items using color, assess quality from color, and isolate features using color. Blob detection and extraction: inspecting an image for discrete blobs of connected pixels (e.g. a black hole in a grey object) as image landmarks. Neural network / deep learning / machine learning processing: weighted and self-training multi-variable decision making Circa 2019 there is a large expansion of this, using deep learning and machine learning to significantly expand machine vision capabilities. The most common result of such processing is classification. Examples of classification are object identification,"pass fail" classification of identified objects and OCR. Pattern recognition including template matching. Finding, matching, and/or counting specific patterns. This may include location of an object that may be rotated, partially hidden by another object, or varying in size. Barcode, Data Matrix and "2D barcode" reading Optical character recognition: automated reading of text such as serial numbers Gauging/Metrology: measurement of object dimensions (e.g. in pixels, inches or millimeters) Comparison against target values to determine a "pass or fail" or "go/no go" result. For example, with code or bar code verification, the read value is compared to the stored target value. For gauging, a measurement is compared against the proper value and tolerances. For verification of alpha-numberic codes, the OCR'd value is compared to the proper or target value. For inspection for blemishes, the measured size of the blemishes may be compared to the maximums allowed by quality standards. Outputs A common output from automatic inspection systems is pass/fail decisions. These decisions may in turn trigger mechanisms that reject failed items or sound an alarm. Other common outputs include object position and orientation information for robot guidance systems. Additionally, output types include numerical measurement data, data read from codes and characters, counts and classification of objects, displays of the process or results, stored images, alarms from automated space monitoring MV systems, and process control signals. This also includes user interfaces, interfaces for the integration of multi-component systems and automated data interchange. Deep learning The term deep learning has variable meanings, most of which can be applied to techniques used in machine vision for over 20 years. However the usage of the term in "machine vision" began in the later 2010s with the advent of the capability to successfully apply such techniques to entire images in the industrial machine vision space. Conventional machine vision usually requires the "physics" phase of a machine vision automatic inspection solution to create reliable simple differentiation of defects. An example of "simple" differentiation is that the defects are dark and the good parts of the product are light. 
A common reason why some applications were not doable was when it was impossible to achieve the "simple"; deep learning removes this requirement, in essence "seeing" the object more as a human does, making it now possible to accomplish those automatic applications. The system learns from a large amount of images during a training phase and then executes the inspection during run-time use which is called "inference". Imaging based robot guidance Machine vision commonly provides location and orientation information to a robot to allow the robot to properly grasp the product. This capability is also used to guide motion that is simpler than robots, such as a 1 or 2 axis motion controller. The overall process includes planning the details of the requirements and project, and then creating a solution. This section describes the technical process that occurs during the operation of the solution. Many of the process steps are the same as with automatic inspection except with a focus on providing position and orientation information as the result. Market As recently as 2006, one industry consultant reported that MV represented a $1.5 billion market in North America. However, the editor-in-chief of an MV trade magazine asserted that "machine vision is not an industry per se" but rather "the integration of technologies and products that provide services or applications that benefit true industries such as automotive or consumer goods manufacturing, agriculture, and defense." See also Machine vision glossary Feature detection (computer vision) Foreground detection Vision processing unit Optical sorting References Applications of computer vision Computer vision
Machine vision
[ "Engineering" ]
2,194
[ "Robotics engineering", "Packaging machinery", "Machine vision", "Artificial intelligence engineering", "Computer vision" ]
172,111
https://en.wikipedia.org/wiki/Washing%20machine
A washing machine (laundry machine, clothes washer, washer, or simply wash) is a machine designed to launder clothing. The term is mostly applied to machines that use water. Other ways of doing laundry include dry cleaning (which uses alternative cleaning fluids and is performed by specialist businesses) and ultrasonic cleaning. Modern-day home appliances use electric power to automatically clean clothes. The user adds laundry detergent, which is sold in liquid, powder, or dehydrated sheet form, to the wash water. The machines are also found in commercial laundromats where customers pay-per-use. History Washing by hand Laundering by hand involves soaking, beating, scrubbing, and rinsing dirty textiles. Before indoor plumbing, it was necessary to carry all the water used for washing, boiling, and rinsing the laundry from a pump, well, or spring. Water for the laundry would be hand-carried, heated on a fire for washing, and then poured into a tub. This meant the amount of warm, soapy water was limited; it would be reused to wash the least soiled clothing, then to wash progressively dirtier laundry. Removal of soap and water from the clothing after washing was a separate process. First, soap would be rinsed out with clear water. After rinsing, the soaking wet clothing would be formed into a roll and twisted by hand to extract water. The entire process often occupied an entire day of work, plus drying and ironing. Early machines An early example of washing by machine is the practice of fulling. In a fulling mill, the cloth was beaten with wooden hammers, known as fulling stocks or fulling hammers. The first English patent under the category of washing machines was issued in 1691. A drawing of an early washing machine appeared in the January 1752 issue of The Gentleman's Magazine, a British publication. Jacob Christian Schäffer's washing machine design was published in 1767 in Germany. In 1782, Henry Sidgier was issued a British patent for a rotating drum washer, and in the 1790s, Edward Beetham sold numerous "patent washing mills" in England. One of the first innovations in washing machine technology was the use of enclosed containers or basins that had grooves, fingers, or paddles to help with the scrubbing and rubbing of the clothes. The person using the washer would use a stick to press and rotate the clothes along the textured sides of the basin or container, agitating the clothes to remove dirt and mud. This crude agitator technology was hand-powered, but still more effective than actually hand-washing the clothes. More advancements were made to washing machine technology in the form of the rotating drum design. These early design patents consisted of a drum washer that was hand-cranked to make the wooden drums rotate. While the technology was simple enough, it was a milestone in the history of washing machines, as it introduced the idea of "powered" washing drums. As metal drums started to replace the traditional wooden drums, it allowed for the drum to turn above an open fire or an enclosed fire chamber, raising the water temperature for more effective washes. It was in the nineteenth century that steam power was first used in washing machine designs. In 1862, a patented "compound rotary washing machine, with rollers for wringing or mangling" by Richard Lansdale of Pendleton, Manchester, was shown at the 1862 London Exhibition. The first United States Patent, titled "Clothes Washing", was granted to Nathaniel Briggs of New Hampshire in 1797. 
Because of the Patent Office fire in 1836, no description of the device survives. The invention of the washing machine is also attributed to Watervliet Shaker Village, as a patent was issued to an Amos Larcom of Watervliet, New York, in 1829, but it is not certain that Larcom was a Shaker. A device that combined a washing machine with a wringer mechanism appeared in 1843 when Canadian John E. Turnbull of Saint John, New Brunswick patented a "Clothes Washer With Wringer Rolls". During the 1850s, Nicholas Bennett of the Mount Lebanon Shaker Society at New Lebanon, New York, invented a "wash mill", but in 1858 he assigned the patent to David Parker of the Canterbury Shaker Village, where it was registered as the "Improved Washing Machine". Margaret Colvin improved the Triumph Rotary Washer, which was exhibited in the Women's Pavilion at the Centennial International Exhibition of 1876 in Philadelphia. At the same exhibition, the Shakers won a gold medal for their machine. Electric washing machines were advertised and discussed in newspapers as early as 1904. Alva J. Fisher has been incorrectly credited with the invention of the electric washer. The US Patent Office shows at least one patent issued before Fisher's US patent number 966677 (e.g. Woodrow's US patent number 921195). The first inventor of the electric washing machine remains unknown. US electric washing machine sales reached 913,000 units in 1928. However, high unemployment rates in the Depression years reduced sales; by 1932 the number of units shipped was down to about 600,000. An early laundromat in the United States opened in Fort Worth, Texas, in 1934. It was run by Andrew Klein. Patrons used coin-in-the-slot facilities to rent washing machines. The term "laundromat" can be found in newspapers as early as 1884 and they were widespread during the Depression. England established public washrooms for laundry along with bathhouses throughout the nineteenth century. Washer design improved during the 1930s. The mechanism was now enclosed within a cabinet, and more attention was paid to electrical and mechanical safety. Spin dryers were introduced to replace the dangerous power mangle/wringers of the day. By 1940, 60% of the 25,000,000 wired homes in the United States had an electric washing machine. Many of these machines featured a power wringer, although built-in spin dryers were not uncommon. Automatic machines Bendix Home Appliances, a subsidiary of Avco, introduced the first domestic automatic washing machine in 1937, having applied for a patent in the same year. Avco had licensed the name from Bendix Corporation, an otherwise unrelated company. In appearance and mechanical detail, this first machine was not unlike the front-loading automatic washers produced today. Although it included many of today's basic features, the machine lacked any drum suspension and therefore had to be anchored to the floor to prevent "walking". Because of the components required, the machine was also expensive. For instance, the Bendix Home Laundry Service Manual (published November 1, 1946) shows that the drum speed change was facilitated by a 2-speed gearbox built to a heavy-duty standard (not unlike a car automatic gearbox, albeit smaller in size). The timer was also probably costly because miniature electric motors were expensive to produce. Early automatic washing machines were usually connected to a water supply via temporary slip-on connectors to sink taps. Later, permanent connections to hot and cold water became the norm. 
Most modern front-loading European machines now only have a cold water connection (called "cold fill") and rely completely on internal electric heaters to raise the water temperature. Many of the early automatic machines had coin-in-the-slot facilities and were installed in the basement laundry rooms of apartment houses. World War II and after After the attack on Pearl Harbor, US domestic washer production was suspended for the duration of World War II in favor of manufacturing war material. However, numerous US appliance manufacturers were permitted to undertake the research and development of washers during the war years. Many took the opportunity to develop automatic machines, realizing that these represented the future of the industry. A large number of US manufacturers introduced competing automatic machines (mainly of the top-loading type) in the late 1940s and early 1950s. General Electric also introduced its first top-loading automatic model in 1947. This machine had many of the features that are incorporated into modern machines. Another early form of automatic washing machine manufactured by The Hoover Company used cartridges to program different wash cycles. This system, called the "Keymatic", used plastic cartridges with key-like slots and ridges around the edges. The cartridge was inserted into a slot on the machine and a mechanical reader operated the machine accordingly. Several manufacturers produced semi-automatic machines, requiring the user to intervene at one or two points in the wash cycle. A common semi-automatic type (available from Hoover in the UK until at least the 1970s) included two tubs: one with an agitator or impeller for washing, plus another smaller tub for water extraction or centrifugal rinsing. These machines are still available in some countries such as India. Since their introduction, automatic washing machines have relied on electromechanical timers to sequence the washing and extraction process. Electromechanical timers consist of a series of cams on a common shaft driven by a small electric motor via a reduction gearbox. At the appropriate time in the wash cycle, each cam actuates a switch to engage or disengage a particular part of the machinery (for example, the drain pump motor). One of the first was invented in 1957 by Winston L. Shelton and Gresham N. Jennings, then both General Electric engineers. The device was granted US Patent 2870278. On the early electromechanical timers, the motor ran at a constant speed throughout the wash cycle, although the user could truncate parts of the program by manually advancing the control dial. However, by the 1950s demand for greater flexibility in the wash cycle led to the introduction of more sophisticated electrical timers to supplement the electromechanical timer. These newer timers enabled greater variation in functions such as the wash time. With this arrangement, the electric timer motor is periodically switched off to permit the clothing to soak and is only re-energized just before a micro-switch being engaged or disengaged for the next stage of the process. Fully electronic timers did not become widespread until decades later. Despite the high cost of automatic washers, manufacturers had difficulty meeting the demand. Although there were material shortages during the Korean War, by 1953 automatic washing machine sales in the US exceeded those of wringer-type electric machines. In the UK and most of Europe, electric washing machines did not become popular until the 1950s. 
This was largely because of the economic impact of World War II on the consumer market, which did not properly recover until the late 1950s. The early electric washers were single-tub wringer-type machines, as fully automatic washing machines were expensive. During the 1960s, twin tub machines briefly became popular, helped by the low price of the Rolls Razor washers. Twin tub washing machines have two tubs, one larger than the other. The smaller tub in reality is a spinning drum for centrifugal drying while the larger tub only has an agitator in its bottom. Some machines could pump used wash water into a separate tub for temporary storage and to later pump it back for re-use. This was done not to save water or soap, but because heated water was expensive and time-consuming to produce. Automatic washing machines did not become dominant in the UK until well into the 1970s and by then were almost exclusively of the front-loader design. In early automatic washing machines, any changes in impeller/drum speed were achieved by mechanical means or by a rheostat on the motor power supply. However, since the 1970s electronic control of motor speed has become a common feature on the more expensive models. Cost-cutting and contemporary development Over time manufacturers of automatic washers have gone to great lengths to reduce costs. For instance, expensive gearboxes are no longer required, since motor speed can be controlled electronically. Some models can be controlled via WiFi, and have angled/tilted drums to facilitate loading. Even on some expensive washers, the outer drum of front-loading machines is often (but not always) made of plastic (it can also be made out of metal, but this is expensive). This makes changing the main bearings difficult, as the plastic drum usually cannot be separated into two halves to enable the inner drum to be removed to gain access to the bearing. Many residential front-loading washing machines typically have a concrete block to dampen vibration. Alternatives include a plastic counterweight that can be filled with water after delivery, reducing or controlling motor speeds, using hydraulic suspensions instead of spring suspensions, and having freely moving steel balls or liquid contained inside a ring mounted on both the top and bottom of the drum to counter the weight of the clothes and reduce vibration. Most newer front-load machines now use a brushless DC (BLDC) motor directly connected to the basket (direct drive), where the stator assembly is attached to the rear of the outer plastic drum assembly, whilst the co-axial rotor is mounted on the shaft of the inner drum. The direct drive motor eliminates the need for a pulley, belt, and belt tensioner. It was first introduced to washing machines by Fisher and Paykel in 1991. Since then, other manufacturers have followed suit. Some washing machines with this type of motor now come with 10-year or 20-year warranties. The motor type used is an outrunner, due to its slim design with variable speed and high torque. The rotor is connected to the inner tub through its center. It can be made of metal or plastic. Some direct drive washers use induction motors instead of BLDC motors. Additional features The modern washing machine market has seen several innovations and features, examples including: Washing machines including water jets (also known as water sprays, jet sprays and water showers) and steam nozzles that claim to sanitize clothes, help reduce washing times, and remove soil from the clothes. 
Water jets get their water from the bottom of the drum, thus recirculating the water in the washer. Others have special drums with holes that will fill with water from the bottom of the tub and redeposit the water on top of the clothes. Some drums have elements with the shape of waves, pyramids, hexagons, domes, or diamonds. Some include titanium or ceramic heating elements that claim to eliminate calcium buildup in the element. They can heat water up to . Some high-end models have lights built into the washer itself to light the drum, Others have soap dispensers where the user fills a tank with detergent and softener and the washing machine automatically doses the detergent and softener and, in some cases, chooses the most appropriate wash cycle. In some models, the tanks come pre-filled and are installed and replaced with new tanks, also pre-filled or refilled by the user, in a dedicated compartment on the bottom of the machine. Some have support for single-use capsules containing enough laundry additives for one load. The capsules are installed in the detergent compartment. Many dilute the detergent before it comes in contact with the clothes, some by means of mixing the soap and water with air to make foam, which is then introduced into the drum and improves cleaning performance. Alternatively micro bubbles may be used instead. Some have pulsators that are mounted on a plate on the bottom of the drum instead of an agitator. The plate spins, and the pulsators generate waves that help shake the soil out of the clothes. Many also include mechanisms to prevent or remove undissolved detergent residue on the detergent dispenser. It is possible to incorporate a blower and a nozzle to smooth wrinkles in clothes without removing them from the washer. Some manufacturers like LG Electronics and Samsung Electronics have introduced functions on their washers that allow users to troubleshoot common problems with their washers without having to contact technical support. LG's approach involves a phone receiving signals through sound tones, while Samsung's approach involves having the user take a photo of the washer's time display with a phone. In both methods, the problem and steps to resolve it are displayed on the phone itself. Some models are also NFC enabled. Some implementations are patented under US Patent US20050268669A1 and US Patent US20050097927A1. In the early 1990s, upmarket machines incorporated microcontrollers for the timing process. These proved reliable and cost-effective, so many cheaper machines now also incorporate microcontrollers rather than electromechanical timers. Since the 2010s, some machines have had touchscreen displays, full-color or color displays, or touch-sensitive control panels. In 1994, Staber Industries released the System 2000 washing machine, which is the only top-loading, horizontal-axis washer to be manufactured in the United States. The hexagonal tub spins like a front-loading machine, using only about one-third as much water as conventional top-loaders. This factor has led to an Energy Star rating for its high efficiency. This type of horizontal-axis washer and dryer (with a circular drum) is often used in Europe, where space is limited, as they can be as thin as in width. In 1998, New Zealand-based company Fisher & Paykel introduced its SmartDrive washing machine line in the US. This washing machine uses a computer-controlled system to determine factors such as load size and adjusts the wash cycle to match. 
It also used a mixed system of washing, first with the "Eco-Active" wash, using a low level of recirculated water being sprayed on the load followed by a more traditional style wash. The SmartDrive also included a direct drive brushless DC electric motor, which simplified the bowl and agitator drive by eliminating the gearbox system. In 2000, the British inventor James Dyson launched the CR01 ContraRotator, a type of washing machine with two cylinders rotating in opposite directions. It was claimed that this design reduced the wash time and produced cleaner washing than a single-cylinder machine. In 2004 the launch of the CR02, was the first washing machine to gain the British Allergy Foundation Seal of Approval. However, neither of the ContraRotator machines is now in production as they were expensive to manufacture. They were discontinued in 2005. It is patented under U.S. Patent US7750531B2, U.S. Patent US6311527, U.S. Patent US20010023513, U.S. Patent US6311527B1, U.S. Patent USD450164. In 2001, Whirlpool Corporation introduced the Calypso, the first vertical-axis high-efficiency washing machine to be top-loading. A washplate in the bottom of the tub nutated (a special wobbling motion) to bounce, shake, and toss the laundry. Simultaneously, water containing detergent was sprayed onto the laundry. The machine proved to be good at cleaning but gained a bad reputation due to frequent breakdowns and destruction of laundry. The washer was recalled with a class-action lawsuit and pulled off the market. In 2003, Maytag introduced their top-loading Neptune TL FAV6800A and TL FAV9800A washers. Instead of an agitator, the machine had two washplates, perpendicular to each other and at a 45-degree angle from the bottom of the tub. The machine would fill with only a small amount of water and the two wash plates would spin, tumbling the load within it, mimicking the action of a front-loading washer in a vertical-axis design. In 2006, Sanyo introduced the "world-first" (as of February 2, 2006, with regards to home use drum-type washer/dryer) drum-type washing machine with "Air Wash" function (i.e.: using ozone as a disinfectant). It also reused and disinfected rinse water. This washing machine uses only of water in the recycle mode. Approximately in 2012, eco-indicators were introduced, capable of predicting the energy demand based on the customer settings in terms of program and temperature. Features available in most modern consumer washing machines: Delayed execution: a timer to delay the start of the laundry cycle Predefined programs for different laundry types Rotation speed settings Variable temperatures, including cold wash Additionally, some modern machines feature: Child lock Steam Time remaining indication Extra water/rinse. UV disinfection. Around 2015 and 2017, some manufacturers (namely Samsung and LG Electronics) offered washers and dryers that either have a top-loading washer and dryer built on top of a front-loading washer and dryer respectively (in Samsung washers and dryers) or offer users an optional top-loading washer that can be installed under a washer or dryer (for LG washers and dryers) Both manufacturers have also introduced front-loading washers allowing users to add items after a wash cycle has started, and Samsung has also introduced top-loading washers with a built-in sink and a detergent dispenser that claims to leave no residue on the dispenser itself. 
In IFA 2017, Samsung released the QuickDrive, a front-loading washer similar to the Dyson ContraRotator but instead of two counter-rotating drums, the QuickDrive has a single drum with a counter-rotating impeller mounted on the back of the drum. Samsung claims this technique reduces cycle times by half and energy consumption by 20%. The US has introduced standards for washing machines that improve their energy efficiency and reduce their water consumption. Types Top-loading The top-loading, vertical-axis washer has been the dominant design in the United States and Canada. This design places the clothes in a vertically mounted perforated basket that is contained within a water-retaining tub, with a finned water-pumping agitator in the center of the bottom of the basket. Clothes are loaded through the top of the machine, which is usually but not always covered with a hinged door. The drum of a top loading washing machine can include a lint trap. Agitation During the wash cycle, the outer tub is filled with water sufficient to fully immerse and suspend the clothing freely in the basket. The movement of the agitator pushes water outward between the paddles towards the edge of the tub. The water then moves outward, up the sides of the basket, towards the center, and then down towards the agitator to repeat the process, in a circulation pattern similar to the shape of a torus. The agitator direction is periodically reversed because continuous motion in one direction would just lead to the water spinning around the basket with the agitator rather than the water being pumped in the torus-shaped motion. Some washers supplement the water-pumping action of the agitator with a large rotating screw on the shaft above the agitator, to help move water downwards in the center of the basket. A washing machine can have an impeller, also called a wash plate, instead of an agitator, which serves the same purpose but does not have a vertical cylinder extending from its base. Since the agitator and the drum are separate and distinct in a top-loading washing machine, the mechanism of a top-loader is inherently more complicated than a front-loading machine. Manufacturers have devised several ways to control the motion of the agitator during the wash and rinse separately from the high-speed rotation of the drum required for the spin cycle. While a top-loading washing machine could use a universal motor or DC brushless motor, it is conventional for top-loading washing machines to use more expensive, heavy, and potentially more electrically efficient and reliable induction motors. An alternative to this oscillating agitator design is the impeller-type washtub pioneered by Hoover on its long-running Hoovermatic series of top-loading machines. Here, an impeller (trademarked by Hoover as a "Pulsator") mounted on the side of the tub spins in a constant direction and creates a fast-moving current of water in the tub which drags the clothes through the water along a toroidal path. This design was used in the Hoover 0307 washer. The impeller design has the advantage of mechanical simplicity – a single-speed motor with belt drive is all that is required to drive the Pulsator with no need for gearboxes or complex electrical controls, but has the disadvantage of lower load capacity in relation to tub size. Hoovermatic machines were made mostly in twin-tub format for the European market (where they competed with Hotpoint's Supermatic line which used the oscillating agitator design) until the early 1990s. 
Some industrial garment testing machines still use the Hoover wash action. Another alternative involves 'pulsating' the agitator, in other words having an agitator with a reciprocating motion along its vertical axis. Some washing machines have agitators that move in an orbiting motion or agitators that nutate at the bottom. Special top loading washing machines designed for washing sneakers can incorporate bristles in their agitators. Alternatively the inner tub itself can nutate inside the outer tub. The many different ways manufacturers have solved the same problem over the years is a good example of many different ways to solve the same engineering problem with different goals, different manufacturing capabilities and expertise, and different patent encumbrances. Reversible motor In many current top-loading washers, if the motor spins in one direction, the gearbox drives the agitator; if the motor spins the other way, the gearbox locks the agitator and spins the basket and agitator together. Similarly, if the pump motor rotates one way it recirculates the sudsy water; in the other direction it pumps water from the machine during the spin cycle. Mechanically, this system is very simple. Mode-changing transmission In some top-loaders, the motor runs only in one direction. During agitation, the transmission converts the rotation into the alternating motion driving the agitator. During the spin cycle, the timer turns on a solenoid which engages a clutch locking the motor's rotation to the wash basket, providing a spin cycle. General Electric's very popular line of Filter-Flo (seen to the right) used a variant of this design where the motor reversed only to pump water out of the machine. The same clutch which allows the heavy tub full of wet clothes to "slip" as it comes up to the motor's speed, is also allowed to "slip" during agitation to engage a Gentle Cycle for delicate clothes. Whirlpool (Kenmore) created a popular design demonstrating the complex mechanisms which could be used to produce different motions from a single motor with the so-called "wig wag" mechanism, which was used for decades until modern controls rendered it obsolete. In the Whirlpool mechanism, a protruding moving piece oscillates in time with the agitation motion. Two solenoids are mounted to this protruding moving piece, with wires attaching them to the timer. During the cycle, the motor operates continuously, and the solenoids on the "wig wag" engage in agitation or spin. Despite the wires controlling the solenoids being subject to abrasion and broken connections due to their constant motion and the solenoids operating in a damp environment where corrosion could damage them, these machines were surprisingly reliable. Reversible motor with mode-changing transmission Some top-loaders, especially compact apartment-sized washers, use a hybrid mechanism. The motor reverses direction every few seconds, often with a pause between direction changes, to perform the agitation. The spin cycle is accomplished by engaging a clutch in the transmission. A separate motorized pump is generally used to drain this style of machine. These machines could easily be implemented with universal motors or more modern DC brushless motors, but older ones tend to use a capacitor-start induction motor with a pause between reversals of agitation. Front-loading The front-loading or horizontal-axis clothes washer is the dominant design in Europe and in most parts of the world. 
In the United States and Canada, most "high-end" washing machines are of this type. In addition, most commercial and industrial clothes washers around the world are of the horizontal-axis design. This layout mounts the inner drum and outer drum horizontally, and loading is through a door at the front of the machine. The door often but not always contains a transparent window. Agitation is supplied by the back-and-forth rotation of the cylinder and by gravity. The clothes are lifted by paddles on the inside wall of the drum and then dropped. This motion flexes the weave of the fabric and forces water and detergent solution through the clothes load. Because the wash action does not require the clothing to be freely suspended in water, only enough water is needed to moisten the fabric. Because less water is required, front-loaders typically use less soap, and the repeated dropping and folding action of the tumbling can easily produce large amounts of foam or suds. Front-loaders control water usage through the surface tension of water, and the capillary wicking action this creates in the fabric weave. A front-loader washer always fills to the same low water level, but a large pile of dry clothing standing in water will soak up the moisture, causing the water level to drop. The washer then refills to maintain the original water level. Because it takes time for this water absorption to occur with a motionless pile of fabric, nearly all front-loaders begin the washing process by slowly tumbling the clothing under the stream of water entering and filling the drum, to rapidly saturate the clothes with water. Compared to top-loading washers, clothing can be packed more tightly in a front loader, up to the full drum volume if using a cotton wash cycle. This is because wet cloth usually fits into a smaller space than dry cloth, and front-loaders can self-regulate the water needed to achieve correct washing and rinsing. However, extreme overloading of front-loading washers pushes fabrics towards the small gap between the loading door and the front of the wash basket, potentially resulting in fabrics lost between the basket and outer tub, and in severe cases, tearing of clothing and jamming the motion of the basket. Mechanical aspects Front-loading washers are mechanically simple compared to top-loaders, with the main motor (a universal motor or variable-frequency drive motor) normally being connected to the drum via a grooved pulley belt and large pulley wheel without the need for a gearbox, clutch or crank. The action of a front-loading washing machine is better suited to a motor capable of reversing direction with every reversal of the wash drum; a universal motor is noisier, less efficient, and does not last as long, but is better suited to the task of reversing direction every few seconds. Some models, such as those by LG, use a motor directly connected to the drum, eliminating the need for a belt and pulley. However, front-load washers suffer from their own technical challenges due to the horizontal disposition of the drum. A top-loading washer keeps water inside the tub merely through the force of gravity pulling down on the water, while a front-loader must tightly seal the door with a gasket to prevent water dripping onto the floor during the wash cycle. This access door is locked shut with an interlocking device during the entire wash cycle, since opening the door with the machine in use could result in water gushing onto the floor. 
If this interlock is broken for any reason, such a machine stops operation, even if this failure happens mid-cycle. In most machines, the interlock is usually doubly redundant to prevent either opening with the drum full of water or being opened during the spin cycle. For front-loaders without viewing windows on the door, it is possible to accidentally pinch the fabric between the door and the drum, resulting in tearing and damage to the pinched clothing during tumbling and spinning. Nearly all front-loader washers for the consumer market also use a folded flexible bellows assembly around the door opening to keep clothing contained inside the drum during the tumbling wash cycle. If this bellows assembly were not used, small articles of clothing such as socks could slip out of the wash drum near the door and fall down the narrow slot between the outer and inner drums, plugging the drain and possibly jamming rotation of the inner drum. Retrieving lost items from between the outer drum and inner drum can require complete disassembly of the front of the washer and pulling out the entire inner wash drum. Commercial and industrial front-loaders used by businesses (described below) usually do not use the bellows, but instead require all small objects to be placed in a mesh bag to prevent loss near the drum opening. Variant and hybrid designs There are many variations of the two general designs. Top-loading machines in Asia use impellers instead of agitators. Impellers are similar to agitators except that they do not have the center post extending up in the middle of the washtub basket. Horizontal-axis top-loader Some machines which load from the top are otherwise much more similar to front-loading horizontal-axis drum machines. They have a drum rotating around a horizontal axis, as a front-loader, but there is no front door; instead, there is a liftable lid that provides access to the drum, which has a hatch that can be latched shut. Clothes are loaded, the hatch and lid are closed, and the machine operates and spins just like a front loader. These machines are narrower but usually taller than front-loaders, usually have a lower capacity, and are intended for use where only a narrow space is available, as is sometimes the case in Europe. They have incidental advantages: they can be loaded while standing (but force the user to bend down instead of crouching down or sitting to unload); they do not require a perishable rubber bellows seal; and instead of the drum having a single bearing on one side, it has a pair of symmetrical bearings, one on each side, avoiding asymmetrical bearing loading and potentially increasing life. Combo washer dryer There are also combo washer dryer machines that combine washing cycles and a full drying cycle in the same drum, eliminating the need to transfer wet clothes from a washer to a dryer machine. In principle, these machines are convenient for overnight cleaning (the combined cycle is considerably longer), but the effective capacity for cleaning larger batches of laundry is drastically reduced. The drying process tends to use much more energy than using two separate devices, because a combo washer dryer not only must dry the clothing but also needs to dry out the wash chamber itself. These machines are used more where space is at a premium, such as areas of Europe and Japan because they can be fit into small spaces, perform both washing and drying, and many can be operated without dedicated utility connections. 
In these machines, the washer and dryer functions often have different capacities, with the dryer usually having the lower capacity. These combo machines should not be confused with a dryer stacked on top of a washer, or with a laundry center, a one-piece appliance offering a compromise between a washer-dryer combo and a separate washer and dryer installed side by side or stacked. Laundry centers usually have the dryer on top of the washer, with the controls for both machines on a single control panel. Often, the controls are simpler than the controls on a washer-dryer combo or a dedicated washer and dryer. Some implementations are patented under US patents US6343492B1 and US6363756B1. Comparison True front-loading machines, top-loading machines with horizontal-axis drums, and true top-loading vertical-axis machines can be compared on several aspects: Efficient cleaning: Front loaders usually use less energy, water, and detergent compared to the best top-loaders. High-efficiency washers use 20% to 60% of the detergent, water, and energy of "standard" commonly-used top-loader washers. They usually take somewhat longer (20–110 minutes) to wash a load, but are often computer controlled with additional sensors to adapt the wash cycle to the needs of each load. Water usage: Front-loaders usually use less water than top-loading residential clothes washers. Estimates are that front-loaders use from one-third to one-half as much water as top-loaders. Spin-dry effectiveness: Front-loaders (and European horizontal-axis top-loaders) offer much higher maximum spin speeds of up to 2000 RPM, although home machines tend to be in the 1000 to 1400 RPM range, while top-loaders (with agitators) do not exceed 1140 RPM. High-efficiency top-loaders with a wash plate (instead of an agitator) can spin up to 1100 RPM, as their center of gravity is lower. Higher spin speeds, along with the diameter of the drum, determine the g-force, and a higher g-force removes more residual water, making clothes dry faster. This also reduces energy consumption if clothes are dried in a clothes dryer. Cycle length: Top-loaders have tended to have shorter cycle times, in part because their design has traditionally emphasized simplicity and speed of operation more than resource conservation. Top-loaders are commonly observed to wash a load in about half the time of a front-load washing machine. Wear and abrasion: Top-loaders require an agitator or impeller mechanism to force enough water through clothes to clean them effectively, which greatly increases mechanical wear and tear on fabrics. Front-loaders use paddles in the drum to repeatedly pick up and drop clothes into the water for cleaning; this gentler action causes less wear and tear. The rate of clothes wear can be roughly gauged by the amount of accumulation in a clothes dryer lint filter, since the lint largely consists of stray fibers detached from textiles during washing and drying. Difficult items: Top-loaders may have trouble cleaning large items, such as sleeping bags or pillows, which tend to float on top of the wash water rather than circulate within it. In addition, vigorous top-loader agitator motions may damage delicate fabrics. In a front-load washing machine, by contrast, pillows, shoes, soft toys, and other difficult-to-wash items can be washed easily.
Noise: Front-loaders tend to operate more quietly than top-loaders because the door seal helps contain noise, and because there is less of a tendency towards imbalance. Top loaders usually need a mechanical transmission (due to agitators, see above), which can generate more noise than the rubber belt or direct drive found in most front-loaders. Compactness: True front-loading machines may be installed underneath counter-height work surfaces. A front-loading washing machine, in a fully fitted kitchen, may even be disguised as a kitchen cabinet. These models can also be convenient in homes with limited floor area, since the clothes dryer may be installed directly above the washer ("stacked" configuration). Water leakage: Top-loading machines are less prone to leakage because simple gravity reliably keeps water from spilling out the loading door on top. True front-loading machines require a flexible seal or gasket on the front door, and the front door must be locked during operation to prevent opening, lest large amounts of water spill out. This seal may leak and require replacement. However, many current front-loaders use so little water that they can be stopped mid-cycle for the addition or removal of laundry, while keeping the water level in the horizontal tub below the door level. Best practice installations of either type of machine will include a floor drain or an overflow catch tray with a drain connection, since neither design is immune to leakage or a solenoid valve getting stuck in the open position. Maintenance and reliability: Top-loading washers are more tolerant of maintenance neglect, and may not need a regular "freshening" cycle to clean door seals and bellows. During the spin cycle, a top-loading tub is free to move about inside the cabinet of the machine, using only a lip around the top of the inner basket and outer tub to keep the spinning water and clothing from spraying out over the edge. Therefore, the potentially problematic door-sealing and door-locking mechanisms used by true front-loaders are not needed. On the other hand, top-loaders use mechanical gearboxes that are more vulnerable to wear than simpler front-load motor drives. Accessibility and ergonomics: Front-loaders are more convenient for shorter people and those with paraplegia, as the controls are front-mounted and the horizontal drum eliminates the need for standing or climbing. Risers, also referred to as pedestals, often with storage drawers underneath, can be used to raise the door of a true front-loader closer to the user's level. However, if stacked, the dryer controls, if at the top of the dryer, may be too tall for shorter people to conveniently access. Initial cost: In countries where top-loaders are popular, front-loaders tend to be more expensive to buy than top-loaders, though their lower operating costs can lead to lower total cost of ownership, especially if energy, detergent, or water are expensive. On the other hand, in countries with a large front-loader user base, top-loaders are usually seen as alternatives and more expensive than basic off-brand front-loaders, although without many differences in total cost of ownership apart from design-originated ones. In addition, manufacturers have tended to include more advanced features such as internal water heating, automatic dirt sensors, and high-speed emptying on front loaders, although some of these features could be implemented on top loaders. 
Wash cycles The earliest washing machines simply carried out a washing action when loaded with clothes and soap, filled with hot water, and started. Over time machines became more and more automated, first with complex electromechanical controllers, then fully electronic controllers; users put clothes into the machine, select a suitable program via a switch, start the machine, and come back to remove clean and slightly damp clothes at the end of the cycle. The controller starts and stops many different processes including pumps and valves to fill and empty the drum with water, heating, and rotating at different speeds, with different combinations of settings for different fabrics. Longer wash cycles can allow greater water and energy efficiency (with less water to heat up). From 2011 to 2021, the average Australian washing machine cycle (including rinsing and spinning) lengthened from 99 to 144 minutes for front-loaders, and from 55 to 59 minutes for top-loaders. Washing Many front-loading machines have internal electrical heating elements to heat the wash water, to near boiling if desired. The rate of the chemical cleaning action of the detergent and other laundry chemicals increases greatly with temperature, by the Arrhenius equation. Washing machines with internal heaters can use special detergents formulated to release different chemical ingredients at different temperatures, allowing different types of stains and soils to be cleaned from the clothes as the wash water is heated by the electrical heater. However, higher-temperature washing uses more energy, and many fabrics and elastics are damaged at higher temperatures. Excessively high wash temperatures also have the undesirable effect of deactivating the enzymes when using biological detergent. Many machines are cold-fill, connected to cold water only, which they internally heat to operating temperature. Where water can be heated more cheaply or with less carbon dioxide emission than by electricity, a cold-fill operation is inefficient. Front-loaders need to use low-sudsing detergents because the tumbling action of the drum entrains air into the clothes load, which can cause excessive foamy suds and overflows. However, due to the efficient use of water and detergent, the suds issue with front-loaders can be controlled by simply using less detergent, without lessening the cleaning action. Rinsing Washing machines perform several rinses after the main wash to remove most of the detergent. Modern washing machines use less hot water due to environmental concerns; however, this has led to the problem of poor rinsing on many washing machines on the market, which can be a problem for people who are sensitive to detergents. The Allergy UK website suggests re-running the rinse cycle, or rerunning the entire wash cycle without detergent. In response to complaints, many washing machines allow the user to select additional rinse cycles, at the expense of higher water usage and longer cycle time. Bosch, for example, in its allergy wash program, incorporates an additional three-minute rinse cycle using heated water to rinse off detergent residues and any allergens. Spin Front-loading machines spin in multiple stages of their cycle: after the main wash, after individual rinses, and in the final high-speed spin. Some of those spins may be absent depending on the particular cycle. Higher spin speeds, along with larger tub diameters, remove more water, leading to faster drying.
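The relationship between spin speed, drum diameter, and water extraction described in the Spin section can be illustrated with a rough calculation. The sketch below is illustrative only and is not drawn from any manufacturer's documentation; the 0.5 m drum diameter and the spin speeds are assumed example values.

```python
import math

def spin_g_force(rpm: float, drum_diameter_m: float) -> float:
    """Approximate centripetal acceleration at the drum wall, expressed in multiples of g.

    Uses a = omega^2 * r, with omega in rad/s and r the drum radius in metres.
    """
    omega = rpm * 2 * math.pi / 60       # revolutions per minute -> radians per second
    radius = drum_diameter_m / 2
    return omega ** 2 * radius / 9.81    # divide by g to express the result as a "g-force"

# Assumed example: a 0.5 m diameter drum at typical front-loader spin speeds.
for rpm in (700, 1000, 1400, 1600):
    print(f"{rpm:>5} RPM -> roughly {spin_g_force(rpm, 0.5):.0f} g")
```

Because the acceleration grows with the square of the rotational speed, doubling the spin speed under these assumptions roughly quadruples the g-force pressing water out of the fabric, which is consistent with the point above that faster spins and larger drums leave less residual water.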
On the other hand, the need for ironing can be reduced by not using the spin cycle in the washing machine. If a heated clothes dryer is used after the wash and spin, energy use is reduced if more water has been removed from clothes. However, faster spinning can crease clothes more. Also, mechanical wear on bearings increases rapidly with rotational speed, reducing life. Early machines would spin at 300 rpm and, because of lack of any mechanical suspension, would often shake and vibrate. In 1976, most front-loading washing machines spun at around 700 RPM, or less. Today, most machines spin at 1000–1600 RPM. Most machines have variable speeds, ranging 300–2000 RPM depending on the machine. Separate spin-driers, without washing functionality, are available for specialized applications. For example, a small high-speed centrifuge machine may be provided in locker rooms of communal swimming pools to allow wet swimsuits to be substantially dried to a slightly damp condition after daily use. Washing machines often incorporate balance rings filled with a liquid such as a calcium chloride salt water solution, that are designed to balance the inner drum of the washer during spin cycles. The balance ring may be filled with oil and contain balls on races, somewhat similarly to a ball bearing, to achieve the same effect. The Bendix Economat used a flexible rubber inner tub that would squeeze the clothes towards the agitator located in the center of the inner tub in order to remove water from the clothes, instead of spinning the inner tub. This was performed by exerting a vacuum on the inner tub. Maintenance wash Many home washing machines use a plastic, rather than metal, outer shell to contain the wash water; residue can build up on the plastic tub over time. Some manufacturers advise users to perform a regular maintenance or "freshening" wash to clean the inside of the washing machine of any mold, bacteria, encrusted detergent, and unspecified dirt more effectively than with a normal wash. A maintenance wash is performed without any laundry, on the hottest wash program, adding substances such as white vinegar, 100 grams of citric acid, a detergent with bleaching properties, or a proprietary washing machine cleaner. The first injection of water goes into the sump so the machine can be allowed to fill for about 30 seconds before adding cleaning substances. Installation and flood prevention Flexible rubber hoses are typically used to connect from a building water supply to a washing machine. These hoses are often exposed to full water pressure on a continuing basis and can deteriorate over time, developing bulges or weak spots that eventually cause leaks or catastrophic bursting and flooding. Since the hoses are often hidden from view, they may be difficult to inspect and easily forgotten until a problem occurs. If a hose burst occurs when nobody is present to notice the problem, a huge volume of water can be delivered over a short time, causing extensive interior flooding damage or even structural damage. It has been estimated that a burst supply hose can deliver two tons of water in an hour. To reduce these risks, it is a common recommendation to use flexible hoses which have been jacketed with a braided stainless steel mesh. This jacketing cannot prevent leaks from developing, but it can slow the development of large bulges or "aneurysms" which can burst suddenly without warning. 
However, even braided metal jackets often cannot withstand the enormous pressures generated by water freezing within an enclosed volume. An additional precaution is to install a washing machine inside a shallow metal or plastic pan, which can collect minor leakage and divert the water to a nearby drain, or to the outside of a building. Drain pans can also divert water released by other problems, such as a jammed solenoid valve in a washing machine. A serious limitation of drain pans is that they typically cannot handle the large volumes of pressurized water released by a burst supply hose, so a drain pan is no substitute for hose burst precautions. In the absence of a drain, a pan may still be useful to confine leakage temporarily, while a local or remote water alarm is triggered. In addition to or instead of an alarm, a water detector may signal the building's main water shutoff valve to close automatically to prevent flooding. A very effective precaution is to install a shutoff or isolation valve which stops any water from being supplied, except when a washing machine is actually operating. The simplest method is to manually open and close the hot and cold water shutoff valves (traditionally globe valves) behind the washing machine each time it is used. This method relies on the washing machine user conscientiously operating the two valves each time laundry is done, in spite of the awkward location of the valves and the tedious process of turning the handles through multiple rotations. An improvement over the traditional setup is to install a specialized laundry shutoff valve. Typically, it consists of two ball valves connected to a single handle, so they can be operated by a horizontal or vertical lever moved through 90 degrees. This makes the operation of the valves a quick procedure, but the washing machine user must still remember to turn off the water, even though the failure to do this produces no immediately obvious problems. To close this risk exposure, some shutoff valves have a spring-energized mechanical timer which is started when the user pushes a lever to open the valves. After a preset time of several hours elapses, the spring-powered mechanism automatically closes the valve without further user intervention. A variant of this setup requires the user to press a button to open the valves for an electrically-timed interval. Other automatic valve operating mechanisms electronically detect when a washing machine draws electrical power as it starts, and then open the water supply valves. Typically, the power plug for the washing machine is connected to a special detector receptacle or cable, allowing monitoring of the power draw. Although pressurized water supply leaks can cause the most damage in the least amount of time, water drainage can also cause problems if not handled properly. Washing machine drainage hoses should be secured properly to prevent accidental dislodgement, and drains should be inspected and cleared periodically to prevent buildup of laundry lint, mold, and other deposits.
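To make the power-sensing shutoff idea above concrete, the sketch below models only the control logic in a generic way; the power threshold, the 30-minute idle timeout, and the functions read_power_watts and set_valves are hypothetical placeholders rather than details of any actual product.

```python
import time

POWER_ON_THRESHOLD_W = 10.0   # assumed: draw above this level means the washer is running
IDLE_TIMEOUT_S = 30 * 60      # assumed: close the supply after 30 minutes with no draw

def read_power_watts() -> float:
    """Hypothetical sensor hook: return the washer's current power draw in watts."""
    raise NotImplementedError

def set_valves(open_supply: bool) -> None:
    """Hypothetical actuator hook: open or close the hot and cold supply valves."""
    raise NotImplementedError

def control_loop() -> None:
    valves_open = False
    last_active = time.monotonic()
    while True:
        now = time.monotonic()
        if read_power_watts() > POWER_ON_THRESHOLD_W:
            last_active = now
            if not valves_open:
                set_valves(True)    # washer has started drawing power: open the supply
                valves_open = True
        elif valves_open and now - last_active > IDLE_TIMEOUT_S:
            set_valves(False)       # washer has been idle long enough: shut the supply off
            valves_open = False
        time.sleep(1)
```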
However, running a large machine with small loads is typically inefficient and wasteful, unless the machine has been designed to handle such situations. For many years energy and water efficiency were not regulated, and little attention was paid to them. From the last part of the 20th century, increasing attention was paid to efficiency, with regulations enforcing some standards. Efficiency became a selling point, both to save on running costs and to reduce carbon dioxide emissions associated with energy generation, and waste of water. As energy and water efficiency became regulated, they became a selling point for buyers; however, the effectiveness of rinsing was not specified, and it did not directly attract the attention of buyers. Therefore, manufacturers tended to reduce the degree of rinsing after washing, saving water and electrical energy. This had the side-effect of leaving more detergent residue in clothes, which can affect people with allergies or sensitivity. In response to complaints, some manufacturers have now designed their machines with a user-selectable option for additional rinsing. Europe Washing machines display an EU Energy Label with grades for energy efficiency, washing performance, and spin efficiency. Grades for energy efficiency run from A+++ to D (best to worst), providing a simple method for judging running costs. Washing performance and spin efficiency are graded in the range A to G. However, all machines for sale must have washing performance A, so that manufacturers cannot compromise washing performance in order to improve the energy efficiency. This labeling has had the desired effect of driving customers toward more efficient washing machines and away from less efficient ones. According to regulations, each washing machine is equipped with a wastewater filter. This ensures that no hazardous chemical substances are disposed of improperly through the sewage system; on the other hand, it also ensures that if there is backflow in the plumbing system, sewage cannot enter the washing machine. United States Top-loading and front-loading clothes washers are covered by a single national standard regulating energy consumption. The old federal standards applicable before January 2011 did not restrict water consumption; there was no limit on how much unheated rinse water could be used. Energy consumption for clothes washers is quantified using the energy factor. After new mandatory federal standards were introduced, many US washers were manufactured to be more energy- and water-efficient than required by the federal standard, or even than required by the more-stringent Energy Star standard. Manufacturers were further motivated to exceed mandatory standards by a program of direct-to-manufacturer tax credits. In North America, the Energy Star program compares and lists energy-efficient clothes washers. Certified Energy Star units can be compared by their Modified Energy Factor (MEF) and Water Factor (WF) coefficients. The MEF figure of merit states how many cubic feet (about 28.3 liters) of clothes are washed per kWh (kilowatt hour). The coefficient is influenced by factors including the configuration of the washer (top-loading, front-loading), its spin speed, and the temperatures and the amount of water used in the rinse and wash cycles. Energy Star residential clothes washers must have an MEF of at least 2.0 (the higher the better); the best machines may reach 3.5. Energy Star washers must also have a WF of less than 6.0 (the lower the better). 
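Using only the Energy Star thresholds quoted above (an MEF of at least 2.0 and a WF below 6.0), the following minimal sketch shows how one might screen washer ratings against those figures; the model names and ratings in the example are invented purely for illustration.

```python
# Thresholds quoted in the text for Energy Star residential clothes washers.
MIN_MEF = 2.0   # Modified Energy Factor: cubic feet of clothes washed per kWh (higher is better)
MAX_WF = 6.0    # Water Factor (lower is better)

def meets_energy_star(mef: float, wf: float) -> bool:
    """Return True if the quoted MEF/WF figures satisfy both thresholds."""
    return mef >= MIN_MEF and wf < MAX_WF

# Hypothetical example ratings, not real products.
models = {
    "Example front-loader": (3.2, 3.5),
    "Example top-loader": (1.8, 8.0),
}

for name, (mef, wf) in models.items():
    verdict = "meets" if meets_energy_star(mef, wf) else "does not meet"
    print(f"{name}: MEF {mef}, WF {wf} -> {verdict} the thresholds")
```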
Commercial use A commercial washing machine is intended for more intensive use than a consumer washing machine. Durability and functionality are more important than style; most commercial washers are bulky and heavy, often with more expensive stainless steel construction to minimize corrosion in a constantly-moist environment. They are built with large easy-to-open service covers, and washers are designed not to require access from the underside for service. Commercial washers are often installed in long rows, with a wide access passageway behind all the machines to allow maintenance without moving the heavy machinery. Laundromat machines Many commercial washers are built for use by the general public, and are installed in publicly accessible laundromats or laundrettes. Originally, they were operated by coins (similar to older vending machines), but today they are activated by money-accepting devices or card readers. The features of a commercial laundromat washer are usually more limited than those of a consumer washer, typically offering just two or three basic wash programs and an option to choose wash cycle temperatures. Some more-advanced models allow extra-cost options such as an additional wash or rinse cycle, at the choice of the user. The typical front-loading commercial washing machine also differs from consumer models in its discharge of spent wash and rinse water. While the consumer models pump used washer water out, allowing the waste drainage pipe to be located above the floor level, front-loading commercial machines generally use only gravity to expel used water. A drain valve at the bottom rear of the machine opens at the appointed time during the cycle, allowing water to flow out. This requires a special drainage trough equipped with a filter and drain, and routed behind each machine. The trough is usually part of a cement platform built for the purpose of raising the machines to a convenient height, and can be seen behind washers at most laundromats. Most laundromat machines are horizontal-axis front-loading models, because of their lower operating costs (notably, lower consumption of expensive hot water). Industrial washers By contrast, commercial washers for internal business operations (which are often referred to as "washer/extractor" machines) may include features absent from domestic machines. Many commercial washers offer an option for automatic injection of five or more different chemical types, so that the operator does not have to deal with constantly measuring out soap products and fabric softeners for each load by hand. Instead, a precise metering system draws the detergents and wash additives directly from large liquid-chemical storage barrels, and injects them as needed into the various wash and rinse cycles. Some computer-controlled commercial washers offer the operator detailed control over the various wash and rinse cycles, allowing the operator to program custom washing cycles. Most large-scale industrial washers are horizontal-axis machines, but they may have front-, side-, or top-load doors. Some industrial clothes washers can batch-process very large loads of textiles at once, and can be used for extremely machine-abusive washing tasks such as stone washing or fabric bleaching and dyeing. An industrial washer can be mounted on heavy-duty shock absorbers and attached to a concrete floor, so that it can extract water from even the most severely out-of-balance and heavy wash loads. Noise and vibration are less of a concern than in a domestic machine.
The machine may be mounted on hydraulic cylinders, permitting the entire washer to be lifted and tilted so that fabrics can be automatically dumped from the wash drum onto a conveyor belt once the cycle is complete. One special type of continuous-processing washer is known as the tunnel washer. This specialized high-capacity machine does not have a drum where everything being washed undergoes distinct wash and rinse cycles. Instead, the laundry progresses slowly and continuously through a long, large-diameter horizontal-axis rotating tube in the manner of an assembly line, with different processes at different positions. Social impact The historically laborious process of washing clothes (a task which often consumed a whole day) was at times described as "women's work". The spread of the washing machine has been seen to be a force behind the improvement of women's position in society. Before the advent of the washing machine, laundry was done first at watercourses, and later in public wash-houses known as lavoirs. Camille Paglia and others argue that the washing machine led to a type of social isolation of women, as a previously communal activity became a solitary one. In 2009 the Italian newspaper L'Osservatore Romano reprinted a Playboy magazine article on International Women's Day arguing that the washing machine had done more for the liberation of women than the contraceptive pill and abortion rights. A study from Université de Montréal, Canada presented a similar point of view, and added refrigerators. The following year, Swedish statistician Hans Rosling suggested that the positive effect the washing machine had on the liberation of women makes it "the greatest invention of the industrial revolution". It has been argued that washing machines are an example of labor-saving technology which does not decrease employment, because households can internalize the gains of the innovation. Historian Frances Finnegan credits the rise of domestic laundry technology in helping to undercut the economic viability of the Magdalene asylums in Ireland (later revealed to be inhumanly abusive prisons for women), by supplanting their laundry businesses and prompting the eventual closure of the institutions as a whole. Irish feminist Mary Frances McDonald has described washing machines as the single most life-changing invention for women. In India, dhobis, a caste group specialized in washing clothes, are slowly adapting to modern technology, but even with access to washing machines, many still handwash garments as well. Since most modern homes are equipped with a washing machine, many Indians have dispensed with the services of the dhobiwallahs. Environmental impact Due to the increasing cost of repairs relative to the price of a washing machine, there has been a major increase in the yearly number of defective washing machines being discarded, to the detriment of the environment. The cost of repair and the expected life of a machine may make the purchase of a new machine seem like the better option. Different washing machine models vary widely in their use of water, detergent, and energy. The energy required for heating is large compared to that used by lighting, electric motors, and electronic devices. Because of their use of hot water, washing machines are among the largest overall consumers of energy in a typical modern home. Washing machines worldwide release around 62 million tonnes of carbon dioxide equivalent in a year. 
However, modern improvements aim to lower these emission figures, and the actual environmental impact depends largely on how the user chooses to operate the machine. See also Centrifugation Laundry Clothes dryer Combo washer dryer Detergent Drying cabinet Energetic efficiency Home appliance Ironing Laundry detergent Laundry symbols Laundry-folding machine List of home appliances Major appliance Silver Nano Standpipe Thor washing machine L'Increvable Wig wag (washing machines) References External links Preservation and also exhibition of vintage washing machines Washing Machines at the Canada Science and Technology Museum Washing Machine Museum 1843 introductions American inventions Articles containing video clips Centrifuges Cleaning tools English inventions Home appliances Home automation Laundry washing equipment 19th-century inventions
Washing machine
[ "Physics", "Chemistry", "Technology", "Engineering" ]
12,949
[ "Home automation", "Centrifugation", "Machines", "Chemical equipment", "Physical systems", "Home appliances", "Centrifuges" ]
172,147
https://en.wikipedia.org/wiki/CS%20gas
The compound 2-chlorobenzalmalononitrile (also called o-chlorobenzylidene malononitrile; chemical formula: C10H5ClN2), a cyanocarbon, is the defining component of the lachrymatory agent commonly referred to as CS gas, a tear gas which is used as a riot control agent, and is banned for use in warfare due to the 1925 Geneva Protocol. Exposure causes a burning sensation and tearing of the eyes to the extent that the subject cannot keep their eyes open, and a burning irritation of the mucous membranes of the nose, mouth and throat, resulting in profuse coughing, nasal mucus discharge, disorientation, and difficulty breathing, partially incapacitating the subject. CS gas is an aerosol of a volatile solvent (a substance that dissolves other active substances and that easily evaporates) and 2-chlorobenzalmalononitrile, which is a solid compound at room temperature. CS gas is generally accepted as being non-lethal. History CS gas was first synthesized by two Americans, Ben Corson and Roger Stoughton, at Middlebury College in Vermont in 1928, and the chemical's name is derived from the first letters of the scientists' surnames. CS was developed and tested secretly at Porton Down in Wiltshire, UK, in the 1950s and '60s. CS was used first on animals, and subsequently on British Army servicemen volunteers. CS has less effect on animals because they have different tear ducts and, in the case of non-human mammals, their fur inhibits the free entry of the gas. As recently as 2002, the U.S. State Department Bureau of International Security and Nonproliferation of Colin Powell made a firm distinction between "riot-control agents" such as CS gas, and "lethal chemical weapons." The Bureau cited support for this position from the U.K. and Japan. The use of CS in warfare has been prohibited under the Chemical Weapons Convention. The OPCW (the governing body of the convention) has observed its use in the Russo-Ukrainian War in 2024. Production CS is synthesized by the reaction of 2-chlorobenzaldehyde and malononitrile via the Knoevenagel condensation: ClC6H4CHO + H2C(CN)2 → ClC6H4CHC(CN)2 + H2O The reaction is catalysed with a weak base like piperidine or pyridine. The production method has not changed since the substance was discovered by Corson and Stoughton. Other bases, solvent free methods and microwave promotion have been suggested to improve the production of the substance. The physiological properties had been discovered already by the chemists first synthesising the compound in 1928: "Physiological Properties. Certain of these dinitriles have the effect of sneeze and tear gases. They are harmless when wet but to handle the dry powder is disastrous." Use as an aerosol As 2-chlorobenzalmalononitrile is a solid at room temperature, not a gas, a variety of techniques have been used to make this solid usable as an aerosol: Melted and sprayed in the molten form. Dissolved in organic solvent. CS2 dry powder (CS2 is a siliconized, micro-pulverized form of CS). CS from thermal grenades by generation of hot gases. In the Waco Siege in the United States, CS was dissolved in the organic solvent dichloromethane (also known as methylene chloride). The solution was dispersed as an aerosol via explosive force and when the highly volatile dichloromethane evaporated, CS crystals precipitated and formed a fine dispersion in the air. Effects Many types of tear gas and other riot control agents have been produced with effects ranging from mild tearing of the eyes to immediate vomiting and prostration. 
CN and CS are the most widely used and known, but around 15 different types of tear gas have been developed worldwide, e.g. adamsite or bromoacetone, CNB, and CNC. CS has become the most popular due to its strong effect. The effect of CS on a person will depend on whether it is packaged as a solution or used as an aerosol. The size of solution droplets and the size of the CS particulates after evaporation are factors determining its effect on the human body. The chemical reacts with moisture on the skin and in the eyes, causing a burning sensation and the immediate forceful and uncontrollable shutting of the eyes. Effects usually include tears streaming from the eyes, profuse coughing, exceptional nasal discharge that is full of mucus, burning in the eyes, eyelids, nose and throat areas, disorientation, dizziness and restricted breathing. It will also burn the skin where sweaty or sunburned. In highly concentrated doses, it can also induce severe coughing and vomiting. Most of the immediate effects wear off within a few hours (such as exceptional nasal discharge and profuse coughing), although respiratory, gastrointestinal, and oral symptoms may persist for months. Excessive exposure can cause chemical burns resulting in permanent scarring. Adults exposed to tear gas during the 2020 protests in Portland, Oregon, US also reported menstrual changes (899; 54.5% of 1650 female respondents). Exposure to tear gas is associated with avoidable healthcare utilization. Secondary effects People or objects contaminated with CS gas can cause secondary exposure to others, including healthcare professionals and police. In addition, repeated exposure may cause sensitisation. Toxicity TRPA1 (Transient Receptor Potential-Ankyrin 1) ion channel expressed on nociceptors (especially trigeminal) has been implicated as the site of action for CS gas in rodent models. Although described as a non-lethal weapon for crowd control, studies have raised doubts about this classification. CS can cause severe pulmonary damage and can also significantly damage the heart and liver. On 28 September 2000, Prof. Dr. Uwe Heinrich released a study commissioned by John C. Danforth, of the United States Office of Special Counsel, to investigate the use of CS by the FBI at the Branch Davidians' Mount Carmel compound. He said no human deaths had been reported, but concluded that the lethality of CS used would have been determined mainly by two factors: whether gas masks were used and whether the occupants were trapped in a room. He suggests that if no gas masks were used and the occupants were trapped, then, "there is a distinct possibility that this kind of CS exposure can significantly contribute to or even cause lethal effects". CS gas can have a clastogenic effect (abnormal chromosome change) on mammalian cells, but no studies have linked it to miscarriages or stillbirths. In Egypt, CS gas was reported to be the cause of death of several protesters in Mohamed Mahmoud Street near Tahrir square during the November 2011 protests. The solvent in which CS is dissolved, methyl isobutyl ketone (MIBK), is classified as harmful by inhalation; irritating to the eyes and respiratory system; and repeated exposure may cause skin dryness or cracking. See also List of parties to the Chemical Weapons Convention List of uses of CS gas by country CR gas CN gas Pepper spray Chemical Weapons Convention Hand grenades References External links Salem H, Gutting B, Kluchinsky T, Boardman C, Tuorinsky S, Hout J (2008). 
Medical Aspects of Chemical Warfare, Chapter 13 Riot Control Agents, US Army Medical Institute, Borden Institute, pp. 441–484 (2008). Hout J, Hook G, LaPuma P, White D (2010). "Identification of compounds formed during low temperature thermal dispersion of encapsulated o-chlorobenzylidene malononitrile (CS riot control agent)" Journal of Occupational and Environmental Hygiene, June 2010 Gas Chromatography NIST CDC – NIOSH Pocket Guide to Chemical Hazards – o-Chlorobenzylidene malononitrile Patten report recommendations 69 and 70 relating to public order equipment A Paper prepared by the Steering Group led by the Northern Ireland Office – April 2001 Committees on toxicity, mutagenicity and carcinogenicity of chemicals in food, consumer products and the environment statement on 2-chlorobenzylidene malononitrile (CS) and CS spray, September 1999. (pdf) Journal of Non-lethal Combatives, January 2003 Noxious Tear-Gas Bomb Mightier in Peace than in War. "Crowd Control Technologies: An Assessment Of Crowd Control Technology Options For The European Union" – The Omega Foundation (pdf) eMedicine Information on irritants: Cs, Cn, Cnc, Ca, Cr, Cnb, PS Riot control agents Lachrymatory agents Chemical weapons 2-Chlorophenyl compounds Nitriles
CS gas
[ "Chemistry", "Biology" ]
1,849
[ "Chemical accident", "Chemical weapons", "Functional groups", "Lachrymatory agents", "Riot control agents", "Biochemistry", "Nitriles" ]
172,190
https://en.wikipedia.org/wiki/Castor%20oil
Castor oil is a vegetable oil pressed from castor beans, the seeds of the plant Ricinus communis. The seeds are 40 to 60 percent oil. It is a colourless or pale yellow liquid with a distinct taste and odor. It has a high boiling point, and its density is 0.961 g/cm3. It consists of a mixture of triglycerides in which about 90 percent of the fatty acids are ricinoleates. Oleic acid and linoleic acid are the other significant components. Some 270,000–360,000 tonnes (600–800 million pounds) of castor oil are produced annually for a variety of uses. Castor oil and its derivatives are used in the manufacturing of soaps, lubricants, hydraulic and brake fluids, paints, dyes, coatings, inks, cold-resistant plastics, waxes and polishes, nylon, and perfumes. Etymology The name probably comes from a confusion between the Ricinus plant that produces it and another plant, the Vitex agnus-castus. An alternative etymology, though, suggests that it was used as a replacement for castoreum. History Use of castor oil as a laxative is attested to in the Ebers Papyrus, and it was in use several centuries earlier. Midwifery manuals from the 19th century recommended castor oil and 10 drops of laudanum for relieving "false pains." Composition Castor oil is well known as a source of ricinoleic acid, a monounsaturated, 18-carbon fatty acid. Among fatty acids, ricinoleic acid is unusual in that it has a hydroxyl functional group on the 12th carbon atom. This functional group causes ricinoleic acid (and castor oil) to be more polar than most fats. The chemical reactivity of the alcohol group also allows chemical derivatization that is not possible with most other seed oils. Because of its ricinoleic acid content, castor oil is a valuable chemical feedstock, commanding a higher price than other seed oils. As an example, in July 2007, Indian castor oil sold for about US$0.90/kg ($0.41/lb), whereas U.S. soybean, sunflower, and canola oils sold for about $0.30/kg ($0.14/lb). Human uses Castor oil has been used orally to relieve constipation or to evacuate the bowel before intestinal surgery. The laxative effect of castor oil is attributed to ricinoleic acid, which is produced by hydrolysis in the small intestine. Use of castor oil for simple constipation is medically discouraged because it may cause violent diarrhea. Food and preservative In the food industry, food-grade castor oil is used in food additives, flavorings, candy (e.g., polyglycerol polyricinoleate in chocolate), as a mold inhibitor, and in packaging. Polyoxyethylated castor oil (e.g., Kolliphor EL) is also used in the food industries. In India, Pakistan, and Nepal, food grains are preserved by the application of castor oil. It stops rice, wheat, and pulses from rotting. For example, the legume pigeon pea is commonly available coated in oil for extended storage. Emollient Castor oil has been used in cosmetic products, including creams and moisturizers. It is often combined with zinc oxide to form an emollient and astringent, zinc and castor oil cream, which is commonly used to treat infants for nappy rash. Medicine Castor oil is used as an oil vehicle for administering steroid hormones such as estradiol valerate via intramuscular or subcutaneous injection. Alternative medicine Despite the lack of evidence, castor oil is sometimes claimed to be able to cure diseases. According to the American Cancer Society, "available scientific evidence does not support claims that castor oil on the skin cures cancer or any other disease."
Childbirth Despite some undesirable side effects, castor oil is used for labor induction. There is no high-quality research proving that ingestion of castor oil results in cervical ripening or induction of labor; there is, however, evidence that taking it causes nausea and diarrhea. A systematic review of "three trials, involving 233 women, found there has not been enough research done to show the effects of castor oil on ripening the cervix or inducing labour or compare it to other methods of induction. The review found that all women who took castor oil by mouth felt nauseous. More research is needed into the effects of castor oil to induce labour." Castor oil is still used for labor induction in environments where modern drugs are not available; a review of pharmacologic, mechanical, and "complementary" methods of labor induction published in 2024 by the American Journal of Obstetrics and Gynecology stated that castor oil's physiological effect is poorly understood but "given gastrointestinal symptomatology, a prostaglandin mediation has been suggested but not confirmed." According to Drugs in Pregnancy and Lactation: A Reference Guide to Fetal and Neonatal Risk (2008), castor oil should not be ingested or used topically by pre-term pregnant women. There is no data on the potential toxicity of castor oil for nursing mothers. Punishment Since children commonly strongly dislike the taste of castor oil, some parents punished children with a dose of it. Physicians recommended against the practice because it may associate medicines with punishment and make children afraid of the doctor. Use in torture A heavy dose of castor oil could be used as a humiliating punishment for adults. Colonial officials used it in the British Raj (India) to deal with recalcitrant servants. Belgian military officials prescribed heavy doses of castor oil in Belgian Congo as a punishment for being too sick to work. Castor oil was also a tool of punishment favored by the Falangist and later Francoist Spain during and following the Spanish Civil War. Its use as a form of gendered violence to repress women was especially prominent. This began during the war where Nationalist forces would specifically target Republican-aligned women, both troops and civilians, who lived in Republican-controlled areas. The forced drinking of castor oil occurred alongside sexual assault, rape, torture and murder of these women. Its most notorious use as punishment came in Fascist Italy under Benito Mussolini. It was a favorite tool used by the Blackshirts to intimidate and humiliate their opponents. Political dissidents were force-fed large quantities of castor oil by fascist squads so as to induce bouts of extreme diarrhea in the victims. This technique was said to have been originated by Gabriele D'Annunzio or Italo Balbo. This form of torture was potentially deadly, as the administration of the castor oil was often combined with nightstick beatings, especially to the rear, so that the resulting diarrhea would not only lead to dangerous dehydration but also infect the open wounds from the beatings. However, even those victims who survived had to bear the humiliation of the laxative effects resulting from excessive consumption of the oil. Industrial uses Coatings Castor oil is used as a biobased polyol in the polyurethane industry. The average functionality (number of hydroxyl groups per triglyceride molecule) of castor oil is 2.7, so it is widely used as a rigid polyol and in coatings. 
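The figure of 2.7 hydroxyl groups per triglyceride follows directly from the composition described earlier: each triglyceride carries three fatty-acid chains, roughly 90 percent of those chains are ricinoleate, and each ricinoleate chain contributes one hydroxyl group. Treating the remaining chains as contributing no hydroxyls gives, as a rough check,

$$
f \;\approx\; 3 \times 0.9 \times 1 \;=\; 2.7 \quad \text{hydroxyl groups per triglyceride molecule.}
$$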
One particular use is in a polyurethane concrete where a castor-oil emulsion is reacted with an isocyanate (usually polymeric methylene diphenyl diisocyanate) and a cement and construction aggregate. This is applied fairly thickly as a slurry, which is self-levelling. This base is usually further coated with other systems to build a resilient floor. Castor oil is not a drying oil, meaning that it has a low reactivity with air compared with oils such as linseed oil and tung oil. However, dehydration of castor oil yields linoleic acids, which do have drying properties. In this process, the OH group on the ricinoleic acid along with a hydrogen from the next carbon atom are removed, forming a double bond which then has oxidative cross-linking properties and yields the drying oil. It is considered a vital raw material. Chemical precursor Castor oil can react with other materials to produce other chemical compounds that have numerous applications. Transesterification followed by steam cracking gives undecylenic acid, a precursor to specialized polymer nylon 11, and heptanal, a component in fragrances. Breakdown of castor oil in strong base gives 2-octanol, both a fragrance component and a specialized solvent, and the dicarboxylic acid sebacic acid. Hydrogenation of castor oil saturates the alkenes, giving a waxy lubricant. Castor oil may be epoxidized by reacting the OH groups with epichlorohydrin to make the triglycidyl ether of castor oil which is useful in epoxy technology. This is available commercially as Heloxy 505. The production of lithium grease consumes a significant amount of castor oil. Hydrogenation and saponification of castor oil yields 12-hydroxystearic acid, which is then reacted with lithium hydroxide or lithium carbonate to give high-performance lubricant grease. Since it has a relatively high dielectric constant (4.7), highly refined and dried castor oil is sometimes used as a dielectric fluid within high-performance, high-voltage capacitors. Lubrication Vegetable oils such as castor oil are typically unattractive alternatives to petroleum-derived lubricants because of their poor oxidative stability. Castor oil has better low-temperature viscosity properties and high-temperature lubrication than most vegetable oils, making it useful as a lubricant in jet, diesel, and racing engines. The viscosity of castor oil at 10 °C is 2,420 centipoise, but it tends to form gums in a short time, so its usefulness is limited to engines that are regularly rebuilt, such as racing engines. Lubricant company Castrol took its name from castor oil. Castor oil has been suggested as a lubricant for bicycle pumps because it does not degrade natural rubber seals. Turkey red oil Turkey red oil, also called sulphonated (or sulfated) castor oil, is made by adding sulfuric acid to vegetable oils, most notably castor oil. It was the first synthetic detergent after ordinary soap. It is used in formulating lubricants, softeners, and dyeing assistants. Biodiesel Castor oil, like currently less expensive vegetable oils, can be used as feedstock in the production of biodiesel. The resulting fuel is superior for cold winters, because of its exceptionally low cloud point and pour point. Initiatives to grow more castor for energy production, in preference to other oil crops, are motivated by social considerations. Tropical subsistence farmers would gain a cash crop. 
Early aviation and aeromodelling Castor oil was the preferred lubricant for rotary engines, such as the Gnome engine after that engine's widespread adoption for aviation in Europe in 1909. It was used almost universally in rotary-engined Allied aircraft in World War I. Germany had to make do with inferior ersatz oil for its rotary engines, which resulted in poor reliability. The methanol-fueled, two-cycle, glow-plug engines used for aeromodelling, since their adoption by model airplane hobbyists in the 1940s, have used varying percentages of castor oil as lubricants. It is highly resistant to degradation when the engine has its fuel-air mixture leaned for maximum engine speed. Gummy residues can still be a problem for aeromodelling powerplants lubricated with castor oil, however, usually requiring eventual replacement of ball bearings when the residue accumulates within the engine's bearing races. One British manufacturer of sleeve valved four-cycle model engines has stated the "varnish" created by using castor oil in small percentages can improve the pneumatic seal of the sleeve valve, improving such an engine's performance over time. Safety The castor seed contains ricin, a toxic lectin. Heating during the oil extraction process denatures and deactivates the lectin. Harvesting castor beans, though, may not be without risk. The International Castor Oil Association FAQ document states that castor beans contain an allergenic compound called CB1A. This chemical is described as being virtually nontoxic, but has the capacity to affect people with hypersensitivity. The allergen may be neutralized by treatment with a variety of alkaline agents. The allergen is not present in the castor oil itself. See also Botanol, a flooring material derived from castor oil Castor wax List of unproven and disproven cancer treatments References Further reading – overview of chemical properties and manufacturing of castor oil External links Ayurvedic medicaments Castor oil plant Cosmetics chemicals Laxatives Liquid dielectrics Non-petroleum based lubricants Oils Traditional medicine Triglycerides
Castor oil
[ "Chemistry" ]
2,815
[ "Oils", "Carbohydrates" ]
172,199
https://en.wikipedia.org/wiki/Faltings%27s%20theorem
Faltings's theorem is a result in arithmetic geometry, according to which a curve of genus greater than 1 over the field of rational numbers has only finitely many rational points. This was conjectured in 1922 by Louis Mordell, and known as the Mordell conjecture until its 1983 proof by Gerd Faltings. The conjecture was later generalized by replacing the rational numbers by an arbitrary number field. Background Let C be a non-singular algebraic curve of genus g over the rational numbers. Then the set of rational points on C may be determined as follows: When g = 0, there are either no points or infinitely many. In such cases, C may be handled as a conic section. When g = 1, if there are any points, then C is an elliptic curve and its rational points form a finitely generated abelian group. (This is Mordell's Theorem, later generalized to the Mordell–Weil theorem.) Moreover, Mazur's torsion theorem restricts the structure of the torsion subgroup. When g > 1, according to Faltings's theorem, C has only a finite number of rational points. Proofs Igor Shafarevich conjectured that there are only finitely many isomorphism classes of abelian varieties of fixed dimension and fixed polarization degree over a fixed number field with good reduction outside a fixed finite set of places. Aleksei Parshin showed that Shafarevich's finiteness conjecture would imply the Mordell conjecture, using what is now called Parshin's trick. Gerd Faltings proved Shafarevich's finiteness conjecture using a known reduction to a case of the Tate conjecture, together with tools from algebraic geometry, including the theory of Néron models. The main idea of Faltings's proof is the comparison of Faltings heights and naive heights via Siegel modular varieties. Later proofs Paul Vojta gave a proof based on Diophantine approximation. Enrico Bombieri found a more elementary variant of Vojta's proof. Brian Lawrence and Akshay Venkatesh gave a proof based on p-adic Hodge theory, borrowing also some of the easier ingredients of Faltings's original proof. Consequences Faltings's 1983 paper had as consequences a number of statements which had previously been conjectured: The Mordell conjecture that a curve of genus greater than 1 over a number field has only finitely many rational points; The Isogeny theorem that abelian varieties with isomorphic Tate modules (as Galois modules) are isogenous. A sample application of Faltings's theorem is to a weak form of Fermat's Last Theorem: for any fixed n ≥ 4 there are at most finitely many primitive integer solutions (pairwise coprime solutions) to a^n + b^n = c^n, since for such n the Fermat curve x^n + y^n = 1 has genus greater than 1. Generalizations Because of the Mordell–Weil theorem, Faltings's theorem can be reformulated as a statement about the intersection of a curve C with a finitely generated subgroup Γ of an abelian variety A. Generalizing by replacing A by a semiabelian variety, C by an arbitrary subvariety of A, and Γ by an arbitrary finite-rank subgroup of A leads to the Mordell–Lang conjecture, which was proved in 1995 by McQuillan following work of Laurent, Raynaud, Hindry, Vojta, and Faltings. Another higher-dimensional generalization of Faltings's theorem is the Bombieri–Lang conjecture that if X is a pseudo-canonical variety (i.e., a variety of general type) over a number field k, then the set of k-rational points of X is not Zariski dense in X. Even more general conjectures have been put forth by Paul Vojta. The Mordell conjecture for function fields was proved by Yuri Ivanovich Manin and by Hans Grauert. In 1990, Robert F. Coleman found and fixed a gap in Manin's proof.
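As a quick check on the Fermat's Last Theorem application mentioned above, the genus of the Fermat curve x^n + y^n = 1 (a smooth plane curve of degree n) is given by the standard degree–genus formula, so the hypothesis of Faltings's theorem holds exactly when n ≥ 4:

$$
g \;=\; \frac{(n-1)(n-2)}{2}, \qquad g > 1 \iff n \geq 4 .
$$

For n = 3 the genus is 1, which is why the cubic case is handled by the theory of elliptic curves rather than by Faltings's theorem.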
Notes Citations References Diophantine geometry Theorems in number theory Theorems in algebraic geometry
Faltings's theorem
[ "Mathematics" ]
832
[ "Theorems in algebraic geometry", "Theorems in number theory", "Theorems in geometry", "Mathematical problems", "Mathematical theorems", "Number theory" ]
172,228
https://en.wikipedia.org/wiki/Transrapid
Transrapid () is a German-developed high-speed monorail train using magnetic levitation. Planning for the system started in the late 1960s, with a test facility in Emsland, Germany inaugurated in 1983. In 1991, technical readiness for application was approved by the Deutsche Bundesbahn in cooperation with renowned universities. The last version, the 2007-built Transrapid 09, is designed for a cruising speed of and allows acceleration and deceleration of approximately . In 2002, the first commercial implementation was completed – the Shanghai Maglev Train, which connects the city of Shanghai's rapid transit network to Shanghai Pudong International Airport. The Transrapid system has not yet been deployed on a long-distance intercity line. The system was developed and marketed by Siemens and ThyssenKrupp, as well as other, mostly German companies. In 2006, a Transrapid train collided with a maintenance vehicle on the German test track, leading to 23 fatalities. In 2011, the Emsland test track closed down when its operating license expired. In early 2012, demolition and reconversion of the entire Emsland site including the factory was approved, but has been delayed until late 2023 because of concepts for usage as a Hyperloop test track or a maglev track for the Chinese CRRC Maglev. Technology Levitation The super-speed Transrapid maglev system has no wheels, no axles, no gear transmissions, no steel rails, and no overhead electrical pantographs. The maglev vehicles do not roll on wheels; rather, they hover above the track guideway, using the attractive magnetic force between two linear arrays of electromagnetic coils—one side of the coil on the vehicle, the other side in the track guideway, which function together as a magnetic dipole. During levitation and travelling operation, the Transrapid maglev vehicle floats on a frictionless magnetic cushion with no mechanical contact whatsoever with the track guideway. On-board vehicle electronic systems measure the dipole gap distance 100,000 times per second to guarantee the clearance between the coils attached to the underside of the guideway and the magnetic portion of the vehicle wrapped around the guideway edges. With this precise, constantly updated electronic control, the dipole gap remains nominally constant at . When levitated, the maglev vehicle has about of clearance above the guideway surface. The Transrapid maglev vehicle requires less power to hover than it needs to run its on-board air conditioning equipment. In Transrapid vehicle versions TR08 and earlier, when travelling at speeds below , the vehicle levitation system and all on-board vehicle electronics were supplied with power through physical connections to the track guideway. At vehicle speeds above , all on-board power was supplied by recovered harmonic oscillation of the magnetic fields created from the track's linear stator. (Since these oscillations are parasitic, they cannot be used for vehicle propulsion). A new energy transmission system, version TR09, has since been developed for Transrapid, in which maglev vehicles now require no physical contact with the track guideway for their on-board power needs, regardless of the maglev vehicle speed. This feature helps to reduce on-going maintenance and operational costs. In case of power failure of the track's propulsion system, the maglev vehicle can use on-board backup batteries to temporarily power the vehicle's levitation system. 
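The levitation scheme described above is, at heart, a fast feedback loop: measure the gap, compare it with a set point, and adjust the coil current. The sketch below illustrates that idea only; it is not the actual Transrapid control law, and the set point, gains, and bias current are invented for illustration.

```python
# Minimal sketch of an attractive-levitation gap controller (illustrative only;
# the real Transrapid controller, gains, and sensor details are not given in
# the text above, so every constant here is an assumption).

def control_step(gap_m, gap_rate_m_s, target_gap_m=0.010,
                 kp=8000.0, kd=400.0, bias_current_a=20.0):
    """Return a magnet current command from the measured gap and its rate.

    Attractive electromagnetic suspension is unstable without feedback, so the
    gap must be measured and corrected continuously (the article cites 100,000
    measurements per second); a simple proportional-derivative correction
    around a bias current is enough to show the principle.
    """
    error = gap_m - target_gap_m          # positive error: gap too large
    # A larger gap means weaker attraction, so raise the coil current; the
    # derivative term damps the vertical motion.
    current = bias_current_a + kp * error + kd * gap_rate_m_s
    return max(current, 0.0)              # coil current cannot be negative

# Example: the vehicle has sagged 1 mm below the assumed 10 mm set point.
print(control_step(gap_m=0.011, gap_rate_m_s=0.0))
```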
Propulsion
The Transrapid maglev system uses a synchronous longstator linear motor for both propulsion and braking. It works like a rotating electric motor whose stator is "unrolled" along the underside of the guideway; instead of producing torque (rotation), it produces a linear force along its length. The electromagnets in the maglev vehicle which lift it also work as the equivalent of the excitation portion (rotor) of this linear electric motor. Since the magnetic travelling field works in only one direction, if there were to be several maglev trains on a given track section, they would all travel in the same direction, thereby reducing the possibility of collision between moving trains.

Energy requirements
The normal energy consumption of the Transrapid is approximately per section for levitation, travel, and vehicle control. The drag coefficient of the Transrapid is about 0.26. The aerodynamic drag of the vehicle, which has a frontal cross section of , requires at cruising speed a power given approximately by the standard drag relation P_aero = ½ · ρ · C_d · A · v³, where ρ is the air density, C_d the drag coefficient, A the frontal area, and v the speed; dividing by the propulsion efficiency gives the electrical power drawn (a numerical check with assumed values is sketched below). Power consumption compares favourably with other high-speed rail systems. With an efficiency of 0.85, the power required is about 4.2 MW. Energy consumption for levitation and guidance purposes equates to approximately 1.7 kW/t. As the propulsion system is also capable of functioning in reverse, energy is transferred back into the electrical grid during braking. An exception to this is when an emergency stop is performed using the emergency landing skids beneath the vehicle, although this method of bringing the vehicle to a stop is intended only as a last resort, should it be impossible or undesirable to keep the vehicle levitating on back-up power until it comes to a natural halt.

Market segment and historical parallels
Compared to classical railway lines, Transrapid allows higher speeds and gradients with less weathering and lower energy consumption and maintenance needs. The Transrapid track is more flexible, and more easily adapted to specific geographical circumstances, than a classical train system. Cargo is restricted to a maximum payload of per car. Transrapid allows maximum speeds of , placing it between conventional high speed trains () and air traffic (). The magnetic field generator, an important part of the engine that is built into the track, limits the system capacity.
From a competition standpoint, the Transrapid is a proprietary solution. Because the track is part of the engine, only single-source Transrapid vehicles and infrastructure can be operated. No multisourcing is foreseen for vehicles or for the highly complicated crossings and switches. Unlike classical railways or other infrastructure networks, which in Germany are jointly administered by the Federal Network Agency (Bundesnetzagentur), a Transrapid system does not allow any direct competition.

Ecological impact
The Transrapid is an electrically driven, clean, high-speed, high-capacity means of transport able to build up point-to-point passenger connections in geographically challenging surroundings. This has to be set against the impact on heritage and/or landscape protection areas (compare Waldschlösschen Bridge). Any assessment of emissions has to take into account the source of the electrical energy. The reduced expense, noise and vibration of a passenger-only Transrapid system are not directly comparable with those of a track that also carries cargo trains. The reuse of existing tracks and the interfacing with existing networks is limited.
The Transrapid indirectly competes for resources, space and tracks in urban and city surroundings with classical urban transport systems and high speed trains.
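As a rough numerical check of the drag-power relation given under Energy requirements above: the text quotes the drag coefficient (about 0.26), the 0.85 efficiency, and a resulting figure of about 4.2 MW, but the frontal area and cruising speed values were lost in extraction. The sketch below therefore assumes a frontal area of about 16 m², a cruising speed of 400 km/h, and sea-level air density; these are assumptions, not figures from the source.

```python
# Back-of-the-envelope check of P_aero = 0.5 * rho * C_d * A * v**3,
# with grid power = P_aero / efficiency.
# ASSUMED values (not in the source text): frontal area, cruising speed, air density.

RHO = 1.2          # kg/m^3, air density near sea level (assumption)
C_D = 0.26         # drag coefficient, as stated in the article
AREA = 16.0        # m^2 frontal cross section (assumption)
V = 400 / 3.6      # 400 km/h in m/s (assumed cruising speed)
EFFICIENCY = 0.85  # propulsion efficiency, as stated in the article

p_aero = 0.5 * RHO * C_D * AREA * V**3   # mechanical power against aerodynamic drag
p_grid = p_aero / EFFICIENCY             # electrical power drawn from the grid

print(f"aerodynamic power  ~ {p_aero / 1e6:.1f} MW")
print(f"grid power (0.85)  ~ {p_grid / 1e6:.1f} MW")
# With these assumed inputs the result lands close to the ~4.2 MW quoted above.
```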
Comparative costs
Track construction cost
The fully elevated Shanghai Maglev was built at a cost of US$1.33 billion over a length of including trains and stations. Thus the cost per km for dual track was US$43.6 million, including trains and stations. This was the first commercial use of the technology. Since then, conventional fast rail track has been mass-produced in China for between US$4.6 and US$30.8 million per kilometer, mostly in rural areas (see High-speed rail in China). In 2008 Transrapid Australia quoted the Victorian State Government A$34 million per kilometer for dual track. This assumed 50% of the track was at grade and 50% was elevated. In comparison, the Regional Rail Link built in Victoria cost around A$5 billion, or A$105 million per kilometer, including two stations. From the above it is not possible to say whether Transrapid or conventional fast rail track would be cheaper for a particular application. The higher operating speed of the maglev system will result in more passengers being delivered over the same distance in a set time. The ability of the Transrapid system to handle tighter turns and steeper gradients could heavily influence a cost comparison for a particular project.

Train purchase cost
In 2008, Transrapid Australia quoted the Victorian State Government between A$16.5 million (commuter) and A$20 million (luxury) per train section or carriage. Due to the width of the Transrapid carriages they have a floor area of about . This works out at between A$179,000 and A$217,000 per square meter. In comparison, InterCityExpress (ICE) carriages, which are also built by Siemens, cost about A$6 million each. Due to the width of the ICE carriages they have a floor area of about . This works out at about A$83,000 per square meter. This shows Transrapid train sets are likely to cost over twice as much as ICE 3 conventional fast rail train sets at this time (a worked check of these figures appears below, after the implementations overview). However, according to UK Ultraspeed, each Transrapid train set is more than twice as efficient due to its faster operating speed and acceleration. In their case study only 44% as many Transrapid train sets are needed to deliver the same number of passengers as conventional high-speed trains.

Operational cost
Transrapid claims its system has very low maintenance costs compared to conventional high speed rail systems due to its non-contact operation.

Implementations
China
The only commercial implementation so far dates from 2000, when the Chinese government ordered a Transrapid track to be built connecting Shanghai to its Pudong International Airport. It was inaugurated in 2002 and regular daily trips started in March 2004. The travel speed is , which the Maglev train maintains for 50 seconds, as the short track only allows the cruising speed to be maintained for a short time before deceleration must begin. The average number of riders per day (14 hours of operation) is about 7,500, while the maximum seating capacity per train is 440. A second-class ticket price of about 50 renminbi (RMB, about 6 euro) is four times the price of the airport bus and ten times more expensive than a comparable underground ticket. The project was supported by German Hermes loans of DM 200 million. The total cost is believed to be $1.33 billion. A planned extension of the line to Shanghai Hongqiao Airport () and onward to the city of Hangzhou () has been repeatedly delayed.
Originally planned to be ready for Expo 2010, final approval was granted on 18 August 2008, and construction was scheduled to start in 2010 for completion in 2014. However the plan was cancelled, possibly due to the building of the high speed Shanghai–Hangzhou Passenger Railway. Germany The Emsland test facility was the only Transrapid track in Germany. It has been deactivated, and is scheduled to be disassembled. Nevertheless, there are plans to either use it as a test facility for the CRRC 600 or to reconstruct it in order to serve as a Hyperloop track. Proposed systems Iran In 2007, Iran and a German company reached an agreement on using maglev trains to link the cities of Tehran and Mashhad. The agreement was signed at the Mashhad International Fair site between Iranian Ministry of Roads and Transportation and the German company. Munich-based Schlegel Consulting Engineers said they had signed the contract with the Iranian ministry of transport and the governor of Mashad. "We have been mandated to lead a German consortium in this project," a spokesman said. "We are in a preparatory phase." The next step will be to assemble a consortium, a process that is expected to take place "in the coming months," the spokesman said. The project could be worth between 10 billion and 12 billion euros, the Schlegel spokesman said. Siemens and ThyssenKrupp, the developers of a high-speed maglev train, called the Transrapid, both said they were unaware of the proposal. The Schlegel spokesman said Siemens and ThyssenKrupp were currently "not involved" in the consortium. Switzerland In 2011 SwissRapide AG in co-operation with the SwissRapide Consortium was developing and promoting an above-ground magnetic levitation (Maglev) monorail system, based on the Transrapid technology. The first projects planned were the lines Bern–Zürich, Lausanne–Geneva as well as Zürich–Winterthur. United States Colorado I-70 Transrapid is one of a number of companies seeking to build a high speed transit system parallel to the I-70 Interstate in the US state of Colorado. Submissions put forward say that maglev offers significantly better performance than rail given the harsh climate and terrain. No technology has been preferred as of November 2013, though construction slated to begin in 2020. Los Angeles to Las Vegas The California–Nevada Interstate Maglev project is a proposed 269 mi (433 km) line from Las Vegas, Nevada to Anaheim, California. One segment would run from Las Vegas to Primm, Nevada, with proposed service to the Las Vegas area's forthcoming Ivanpah Valley Airport. The top speed would be 310 mph (500 km/h). In August 2014 the backers of the scheme were seeking to revive interest in it. Other There have been several other evaluations conducted in the US including Washington DC to Baltimore, Chattanooga to Atlanta and Pittsburgh to Philadelphia. So far no project has started construction. See list of maglev train proposals in the United States. Canary Islands A two line, 120-kilometers (75-mile) long system has been proposed for the island of Tenerife, which is visited by five million tourists per year. It would connect the island capital Santa Cruz in the north with Costa Adeje in the south and Los Realejos in the northwest with a maximum speed of 270 km/h (169 mph). The estimated cost is €3 billion. Transrapid has advantages over a conventional rail plans which would require 35% of its route in tunnels because of the steep terrain on the island. 
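Returning to the train purchase cost comparison above (the worked check promised there): the carriage floor areas were lost in extraction, but they can be back-solved from the quoted prices and per-square-meter figures. The sketch below does only that arithmetic; the recovered areas are inferences from the quoted numbers, not values taken from the source.

```python
# Back-solve the carriage floor areas implied by the quoted purchase costs and
# per-square-meter figures in the "Train purchase cost" comparison above.
# The floor areas printed here are inferred, not stated in the source text.

def implied_area(price_aud, price_per_m2_aud):
    """Floor area implied by a total price and a price per square meter."""
    return price_aud / price_per_m2_aud

# Transrapid: A$16.5M (commuter) at ~A$179k/m2, A$20M (luxury) at ~A$217k/m2.
tr_area_low = implied_area(16.5e6, 179_000)
tr_area_high = implied_area(20.0e6, 217_000)

# ICE: about A$6M per carriage at ~A$83k/m2.
ice_area = implied_area(6.0e6, 83_000)

print(f"implied Transrapid carriage floor area: {tr_area_low:.0f}-{tr_area_high:.0f} m^2")
print(f"implied ICE carriage floor area:        {ice_area:.0f} m^2")
print(f"cost ratio per m^2 (Transrapid / ICE):  "
      f"{179_000 / 83_000:.1f}x to {217_000 / 83_000:.1f}x")
```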
Rejected systems
Germany
High-speed competition
The Transrapid originated as one of several competing concepts for new land-based high-speed public transportation developed in Germany. In this competition, the Transrapid primarily competed with the InterCityExpress (ICE), a high-speed rail system based on "traditional" railway technology. The ICE “won” in that it was adopted nationwide in Germany; however, Transrapid development continued. A number of studies for possible Transrapid lines were conducted after the ICE had entered service, including a long-distance line from Hamburg to Berlin.

Munich link
The most recent German Transrapid line project, and the one that came closest to being built, having previously been approved, was an airport connection track from Munich Central Station to Munich Airport. The connection between the train station and the airport was close to being built, but was cancelled on 27 March 2008 by the German government due to a massive overrun in costs. Prior to the cancellation, the governing party, the Christian Social Union of Bavaria (CSU), faced internal and local resistance, in particular from communities along the proposed route. The CSU had planned to position Transrapid as an example of future technology and innovation in Bavaria. German federal transport minister Wolfgang Tiefensee announced the decision after a crisis meeting in Berlin at which industry representatives reportedly revealed that costs had risen from €1.85 billion to well over €3 billion ($4.7 billion). This rise in projected costs, however, was mostly due to the cost estimates for the construction of the tunnel and related civil engineering after the designated operator Deutsche Bahn AG shifted most of the risk-sharing towards its subcontractors, and not due to the cost of the maglev technology.

United Kingdom
The Transrapid was rejected in 2007 by the UK government for a maglev link called UK Ultraspeed between London and Glasgow, via Birmingham, Liverpool/Manchester, Leeds, Teesside, Newcastle and Edinburgh.

Incidents
September 2006 accident
On 22 September 2006, a Transrapid train collided with a maintenance vehicle at on the test track in Lathen, Germany. The collision destroyed the first section of the train and lifted the maintenance vehicle off the track. This was the first major accident involving a Transrapid train. The news media reported 23 fatalities and that several people were severely injured, these being the first fatalities on any maglev. The accident was caused by human error: the train was allowed to leave the station before the maintenance vehicle had moved off the track. This situation could be avoided in a production environment by installing an automatic collision avoidance system.

SMT fire accident
On 11 August 2006, a Transrapid train running on the Shanghai Maglev Line caught fire. The fire was quickly put out by Shanghai's firefighters. It was reported that the vehicle's on-board batteries may have caused the fire.

Alleged theft of Transrapid technology
In April 2006, new announcements by Chinese officials planning to cut maglev rail costs by a third stirred strong comments from various German officials and more diplomatic statements of concern from Transrapid officials. Deutsche Welle reported that the China Daily had quoted the State Council encouraging engineers to "learn and absorb foreign advanced technologies while making further innovations."
The Chinese deny any technology plagiarism. The China Aviation Industry Corporation has said the new Chinese "Zhui Feng" maglev train is not dependent on foreign technology. They also claim it is much lighter than the Transrapid product and features a much more advanced design. The "Zhui Feng" is a low speed maglev design currently in use on the Changsha Maglev Express. Development history and versions See also Land speed record for railed vehicles References External links ThyssenKrupp Transrapid GmbH Der Transrapid 08 in Lathen und seine Vorgänger Comparison of wheel-rail technology and maglev technology Transrapid timeline Transrapid Photos China & GER, by International Maglev Board Further Reading - Link to PDF Documents about Transrapid Electrodynamics Experimental and prototype high-speed trains High-speed trains of Germany Land speed record rail vehicles Maglev Magnetic propulsion devices Siemens products
Transrapid
[ "Mathematics" ]
3,725
[ "Electrodynamics", "Dynamical systems" ]
172,243
https://en.wikipedia.org/wiki/Holland%20Tunnel
The Holland Tunnel is a vehicular tunnel under the Hudson River that connects Hudson Square and Lower Manhattan in New York City in the east to Jersey City, New Jersey in the west. The tunnel is operated by the Port Authority of New York and New Jersey and carries Interstate 78. The New Jersey side of the tunnel is the eastern terminus of New Jersey Route 139. The Holland Tunnel is one of three vehicular crossings between Manhattan and New Jersey; the two others are the Lincoln Tunnel and George Washington Bridge. Plans for a fixed vehicular crossing over the Hudson River were first devised in 1906. However, disagreements prolonged the planning process until 1919, when it was decided to build a tunnel under the river. Construction of the Holland Tunnel started in 1920, and it opened in 1927. At the time of its opening, it was the longest continuous underwater tunnel for vehicular traffic in the world. The Holland Tunnel was the world's first mechanically ventilated tunnel. Its ventilation system was designed by Ole Singstad, who oversaw the tunnel's completion. Original names considered for the tunnel included Hudson River Vehicular Tunnel and Canal Street Tunnel, but it was ultimately named the Holland Tunnel in memory of Clifford Milburn Holland, its initial chief engineer who died suddenly in 1924 prior to the tunnel's opening. Description The Holland Tunnel is operated by the Port Authority of New York and New Jersey. It consists of a pair of parallel tubes underneath the Hudson River. The tunnel was designed by Clifford Milburn Holland, the project's chief engineer, who died in October 1924, before it was completed. He was succeeded by Milton Harvey Freeman, who died less than a year after Holland did. Ole Singstad then oversaw the completion of the tunnel. The tunnel was designated a National Historic Civil and Mechanical Engineering Landmark in 1982 and a National Historic Landmark in 1993. Emergency services at the Holland Tunnel are provided by the Port Authority Police Department, who are stationed at the Port Authority's crossings. Tubes Materials and dimensions Each tube has a diameter, and the two tubes run apart under the Hudson River. The exteriors of each tube are composed of a series of cast iron rings, each of which comprises 14 curved steel pieces that are each long. The steel rings are covered by a layer of concrete. Each tube provides a roadway with two lanes and of vertical clearance. The north tube is between portals, while the south tube is slightly shorter, at . If each tube's immediate approach roads are included, the north tube is long and the south tube long. Most vehicles carrying hazmats, trucks with more than three axles, and vehicles towing trailers cannot use the tunnel. There is a width limit of for vehicles entering the tunnel. Both tubes' underwater sections are long and are situated in the silt beneath the river. The lowest point of the roadways is about below mean high water. The lowest point of the tunnel ceiling is about below mean high water. The tubes descend at a maximum grade of 4.06% and ascend at a grade of up to 3.8%. The tubes stretch an additional from the eastern shoreline to the New York portals, and from the western shoreline to the New Jersey portals. These sections of the tunnel are more rectangular in shape, since they were built as open cuts that were later covered over. The walls and ceiling are furnished with glazed ceramic tiles, which were originally engineered to minimize staining. 
The majority of the tiles are white, but there is a two-tile-high band of yellow-orange tiles at the bottom of each tube's walls, as well as two-tile-high band of blue tiles on the top. The northern tube, which carries westbound traffic, originates at Broome Street in Lower Manhattan between Varick and Hudson Streets. It continues to 14th Street east of Marin Boulevard in Jersey City. The southern tube, designed for eastbound traffic, originates at 12th Street east of Marin Boulevard, and surfaces at the Holland Tunnel Rotary in Manhattan. The entrance and exit ramps to and from each portal are lined with granite and are wide. Although the two tubes' underwater sections are parallel and adjacent to each other, the tubes' portals on either side are located two blocks apart in order to reduce congestion on each side. The Holland Tunnel's tubes initially contained a road surface made of Belgian blocks and concrete. This was replaced with asphalt in 1955. Each tube contains a catwalk on its left (inner) side, raised above the roadway. Five emergency-exit cross-passages connect the two tubes' inner catwalks. When the Holland Tunnel opened, the catwalk was equipped with police booths and a telephone system, stationed at intervals of . Traffic The volume of traffic going through the Holland Tunnel has remained steady despite tight restrictions on eastbound traffic in response to the September 11 attacks, including a ban on commercial traffic entering New York City put in place after an August 2004 threat. Aside from a sharp decline immediately following the September 11 attacks, the number of vehicles using the Holland Tunnel in either direction daily steadily declined from a peak of 103,020 daily vehicles in 1999 to 89,792 vehicles in 2016. , the eastbound direction of the Holland Tunnel was used by 14,871,543 vehicles annually. Ventilation The Holland Tunnel was the first mechanically ventilated underwater vehicular tunnel in the world. It contains a system of vents that run transverse, or perpendicular, to the tubes. Each side of the Hudson River has two ventilation shaft buildings: one on land, and one in the river approximately from the respective shoreline. All of the ventilation buildings have buff brick facades with steel and reinforced-concrete frames. The shafts within the river rise above mean high water. Their supporting piers descend , of which are underwater and are embedded in the riverbed. The river shafts double as emergency exits by way of shipping piers that connected each ventilation shaft to the shoreline. The New York Land Ventilation Tower, a five-story building with a trapezoidal footprint, is tall. The New Jersey Land Ventilation Tower is a four-story, building with a rectangular perimeter. The four ventilation towers contain a combined 84 fans. Of these, 42 are intake fans with varying capacities from per minute. The other 42 are exhaust fans, which can blow between per minute. Exhaust ducts are located at the corners of the ventilation towers, while supply ducts are in the central portion. Compartments housing exhaust fans are positioned near the corners under the exhaust stacks, with the central portions of the fan floors free for intake fans, and the central section of each outer wall for air intakes. At the time of the tunnel's construction, two-thirds of the 84 fans were being used regularly, while the other fans were reserved for emergency use. The fans blow fresh air into ducts, which provide air intake to the tunnel via openings at the tubes' curbside. 
The ceiling contains slits, which are used to exhaust air. The fans can replace all of the air inside the tunnel every 90 seconds. A forced ventilation system is essential because of the poisonous carbon monoxide component of automobile exhaust, which constituted a far greater percentage of exhaust gases before catalytic converters became prevalent. Approach plazas Boyle Plaza The approach to the Holland Tunnel in Jersey City begins where the lower level of NJ Route 139 and the Newark Bay Extension merge. On May 6, 1936, the section of what became Route 139/I-78 between Jersey Avenue and Marin Boulevard was named in memory of John F. Boyle, the former interstate tunnel commissioner. Despite being part of the Interstate Highway System, I-78 and Route 139 run concurrently along 12th and 14th Street Streets to reach the Holland Tunnel. Westbound traffic uses 14th Street while eastbound traffic uses 12th Street. The plaza was restored and landscaped by the Jersey City government in 1982. There is a nine-lane toll plaza for eastbound traffic only at the eastern end of 12th Street, just west of the tunnel portal. The original toll plaza had eight lanes; it was renovated in 1953–1954, and the current nine-lane tollbooth was constructed in 1988. Holland Tunnel Rotary Soon after construction of the tunnel, and amid rising vehicular traffic in the area, a railroad freight depot, St. John's Park Terminal, was abandoned and later demolished. The depot was located on the city block bounded by Laight, Varick, Beach, and Hudson Streets. The depot's site was used as a storage yard until the 1960s when it became a circular roadway for traffic exiting the eastbound tube in Manhattan. The original structure had four exits, but the plaza was renovated in the early 2000s with landscaping by Studio V Architecture and Ives Architecture Studio. A fifth exit was added in 2004. Freeman Plaza Originally used as the toll plazas for New Jersey-bound traffic, the small triangular patches of land at the mouth of the westbound tube entrance are referred to as Freeman Plaza or Freeman Square. The plaza is named after Milton Freeman, the engineer who took over the Holland Tunnel project after the death of Clifford Milburn Holland. The Freeman Plaza received its name just before the tunnel opened in 1927. The toll plaza was removed circa 1971 when the Port Authority stopped collecting tolls for New Jersey-bound drivers, and the square was later fenced off by the Port Authority. The small maintenance buildings for toll collectors were removed around 1982 or 1983. A bust of Holland sits outside the entrance to the westbound tube in Freeman Plaza. A business improvement district for the area, the Hudson Square Connection, was founded in 2009 with the goal of repurposing the square for pedestrian use. Hudson Square Connection and the Port Authority collaborated to create a five-year, $27 million master plan for Freeman Plaza. In 2013, Freeman Plaza West was opened to the public. Bounded by Hudson, Broome, and Watts Streets, it features umbrellas, bistro tables and chairs, and tree plantings. In 2014, Freeman Plaza East and Freeman Plaza North were opened on Varick and Broome Streets, respectively. The plazas contained chaise longues, bistro tables, and umbrellas. In 2016, the Hudson Square Connection added solar powered charging stations to both plazas, and introduced a summer lunchtime music series, called live@lunch. A statue by the artist Isamu Noguchi was also installed within the plaza. 
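A rough way to see what the 90-second air-exchange figure in the Ventilation section above implies for fan duty is sketched below. The tube dimensions and fan capacities were lost from the text, so the roadway-compartment size used here is an assumption, as is the choice to express the result in cubic feet per minute; only the 90-second period and the count of 42 intake fans come from the article.

```python
# Rough check of the "all tunnel air replaced every 90 seconds" figure from the
# Ventilation section above. Compartment dimensions are ASSUMPTIONS, not source
# figures; the exchange period and intake-fan count are taken from the article.

ASSUMED_WIDTH_M = 6.0      # roadway compartment width per tube (assumption)
ASSUMED_HEIGHT_M = 4.0     # roadway compartment height per tube (assumption)
ASSUMED_LENGTH_M = 2600.0  # approximate tube length (assumption)
TUBES = 2
EXCHANGE_PERIOD_S = 90.0   # from the article
INTAKE_FANS = 42           # from the article

air_volume_m3 = ASSUMED_WIDTH_M * ASSUMED_HEIGHT_M * ASSUMED_LENGTH_M * TUBES
flow_m3_per_s = air_volume_m3 / EXCHANGE_PERIOD_S
flow_cfm = flow_m3_per_s * 35.315 * 60     # cubic meters per second to cubic feet per minute
per_fan_cfm = flow_cfm / INTAKE_FANS

print(f"assumed air volume:     {air_volume_m3:,.0f} m^3")
print(f"required intake flow:   ~{flow_cfm:,.0f} cfm total")
print(f"duty per intake fan:    ~{per_fan_cfm:,.0f} cfm")
```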
To the south of Freeman Plaza, between Varick, Watts, and Canal Streets is One Hudson Square, a New York City designated landmark in 2013. History Need for vehicular tunnel Until the first decade of the 20th century, passage across the lower Hudson River was possible only by ferry. The first tunnels to be bored below the Hudson River were for railroad use. The Hudson & Manhattan Railroad, now PATH, constructed two pairs of tubes to link the major railroad terminals in New Jersey with Manhattan Island: the Uptown Hudson Tubes, which opened in 1908, and the Downtown Hudson Tubes, which opened in 1909. The Pennsylvania Railroad's twin North River Tunnels, constructed to serve the new Pennsylvania Station, opened in 1910. The construction of these three tunnels proved that tunneling under the Hudson River was feasible. However, although train traffic was allowed to use the tunnel crossings, automotive traffic still had to be transported via ferry. At the same time, freight traffic in the Port of New York and New Jersey was mostly carried on boats, but traffic had grown to such a point that the boats were at full capacity, and some freight started going to other ports in the United States. To alleviate this, officials proposed building a freight railroad tunnel, but this was blocked by the organized syndicates that held influence over much of the port's freight operations. The public learned of the excessive traffic loads on existing boat routes, as well as the limited capacity of the H&M and North River Tunnels, when the surface of the Hudson River froze in winter 1917, and again when Pennsylvania Railroad workers went on strike in winter 1918. One engineer suggested that three freight railroad tunnels would be cheaper to construct than one bridge. Planning Initial plans In 1906, the New York and New Jersey Interstate Bridge Commission, a consortium of three groups, was formed to consider the need for a crossing across the Hudson River between New York City and New Jersey. That year, three railroads asked the commission to consider building a railroad bridge over the river. In 1908, the commission considered building three bridges across the Hudson River at 57th, 110th, and 179th Streets in Manhattan. The reasoning was that bridges would be cheaper than tunnels. These three locations were considered to be the only suitable locations for suspension bridges; other sites were rejected on the grounds of aesthetics, geography, or traffic flows. John Vipond Davies, one of the partners for the consulting firm Jacobs and Davies (which had constructed the Uptown Hudson Tubes), wanted to build a vehicular tunnel between Canal Street, Manhattan, and 13th Street, Jersey City. This proposal would compete with the six-lane suspension bridge at 57th Street. Some plans provided for the construction of both the bridge and the tunnel. The ferries could not accommodate all of the 19,600 vehicles per day, as of 1913, that traveled between New York and New Jersey. The Bridge Commission hosted several meetings to tell truck drivers about the details of both the 57th Street Bridge and Canal Street Tunnel plans. The United States Department of War brought up concerns about the 57th Street bridge plans: the span would need to be at least above the mean high water to avoid interfering with shipping. By comparison, the tunnel would be below mean water level. 
The Interstate Bridge Commission, which had been renamed the New York State Bridge and Tunnel Commission in April 1913, published a report that same month, stating that the Canal Street tunnel would cost $11 million while the 57th Street bridge would cost $42 million. In October 1913, Jacobs and Davies stated that a pair of tunnels, with each tube carrying traffic in one direction, would cost only $11 million, while a bridge might cost over $50 million. The low elevation and deep bedrock of Lower Manhattan was more conducive to a tunnel than to a bridge. By the end of that year, the consulting engineers for both the 57th Street Bridge and the Canal Street Tunnel had submitted their plans to the Bridge and Tunnel Commission. New York City merchants mainly advocated for the tunnel plan, while New Jerseyans and New York automobile drivers mostly supported the bridge plan. Meanwhile, the New York State Bridge and Tunnel Commission indicated that it favored the Canal Street tunnel plan. On the other hand, the 57th Street bridge plan remained largely forgotten. The Public Service Commission of New Jersey published a report in April 1917, stating that the construction of a Hudson River vehicle tunnel from Lower Manhattan to Jersey City was feasible. That June, following this report, Walter Evans Edge, then Governor of New Jersey, convened the Hudson River Bridge and Tunnel Commission of New Jersey, which would work with the New York Bridge and Tunnel Commission to construct the new tunnel. In March 1918, a report was sent to the New York State Legislature, advocating for the construction of the tunnel as soon as possible. That year, six million dollars in funding for the Hudson River Tunnel was proposed in two bills presented to subcommittees of the United States Senate and House of Representatives. The bill was voted down by the Interstate Commerce Committee before it could be presented to the full Senate. Plans approved The original plans for the Hudson River tunnel were for twin two-lane tubes, with each tube carrying traffic in a single direction. A request for proposals for the tunnel was announced in 1918, and eleven such requests were considered. One of these proposals, authored by engineer George Goethals, was for a bi-level tube. A modification of Jacobs and Davies' 1913 plan, the Goethals proposal specified that each level would carry three lanes of traffic, and that traffic on each level would run in a different direction. Goethals stated that his plan would cost $12 million and could be completed in three years. Subsequently, John F. O'Rourke offered to build the tunnel for $11.5 million. Goethals cited the area's freight traffic as one of the reasons for constructing the tube. His proposal would use a diameter shield to dig the tunnel. This large tunnel size was seen as a potential problem, since there were differences in the air pressure at the top and the bottom of each tunnel, and that air pressure difference increased with a larger tunnel diameter. Five engineers were assigned to examine the feasibility of Goethals's design. In July 1919, President Woodrow Wilson ratified a Congressional joint resolution for a trans-Hudson tunnel, and Clifford Milburn Holland was named the project's chief engineer. Holland stated that, based on the construction methods used for both pair of tubes, including the downtown pair, it should be relatively easy to dig through the mud on the bottom of the Hudson River, and that construction should be completed within two years. 
The federal government refused to finance the project, even in part, and so it fell to the states to raise the funds. In June 1919, U.S. Senator and former New Jersey governor Edge presented another iteration of the Hudson River Tunnel bill to the U.S. Senate, where it was approved. The New York and New Jersey governments signed a contract in September 1919, in which the states agreed to build, operate, and maintain the tunnel in partnership. The contract was signed by the states' respective tunnel commissions in January 1920. Under Holland's plan, each of the two tubes would have an outside diameter of including exterior linings, and the tubes would contain two-lane roadways with a total width of . One lane would be for slower traffic, and the other would be for faster traffic. This contrasted with Goethals's plan, wherein the three roadways would have had a total width of , only a few feet wider than Holland's two-lane roadways. Additionally, according to Holland, the 42-foot-wide tube would require the excavation of more dirt than both 29-foot tubes combined: two circles with 29-foot diameters would have a combined area of , while a circle with a 42-foot diameter would have an area of . The more northerly westbound tube would begin at Broome and Varick Streets on the Manhattan side and end at the now-demolished intersection of 14th and Provost Streets on the New Jersey side. The more southerly eastbound tube would begin at the still-intact intersection of 12th and Provost Streets in Jersey City, and end at the south side of Canal Street near Varick Street. By way of comparison, Goethals's plan would have combined the entrance and exit plazas on each side. The Motor Truck Association of America unsuccessfully advocated for three lanes in each tube. Even though Goethals's method of digging had not been tested, he refused to concede to Holland's proposal, and demanded to see evidence that Holland's proposal would work. The New York and New Jersey Tunnel Commission subsequently rejected Goethals's plan in favor of a twin-tube proposal that Holland had devised, which was valued at around $28.7 million. When Goethals asked why, the commission responded that Goethals's proposal had never been tested; that it was too expensive; and that the tunnel plans had many engineering weaknesses that could cause the tube to flood. Additionally, while a tube with three lanes in each direction would be able to handle more traffic than a tube with two lanes, projections showed that traffic on the tunnel's approach roads could barely handle the amount of traffic going to and from the two-lane tubes, and that widening the approach roads on each side would cost millions of dollars more. The commission then voted to forbid any further consideration of Goethals's plan. Holland defended his own plan by pointing out that the roadways in Goethals's plan would not only feature narrower road lanes, but also would have ventilation ducts that were too small to ventilate the tube efficiently. In May 1920, the New Jersey Legislature voted to approve the start of construction, overriding a veto from the New Jersey governor. The same month, the New York governor signed a similar bill that had been passed in the New York legislature. The legislature of New Jersey approved a $5 million bond issue for the tunnel in December 1920. Construction The first bid for constructing the Hudson River Tunnel, a contract for digging two of the tunnel's eight planned shafts, was advertised in September 1920. 
A groundbreaking for the Hudson River Tunnel's ventilation shaft, which marked the official start of construction on the tunnel, occurred on October 12, 1920, at Canal and Washington Streets on the Manhattan side. However, further construction of the Hudson River Tunnel was soon held up due to concerns over its ventilation system. There was also a dispute over whether the New York City government should pay for street-widening projects on the New Jersey side. Further delays arose when the New York and New Jersey tunnel commissions could not agree over which agency would award the contract to build the construction and ventilation shafts. Ventilation system The most significant design aspect of the Holland Tunnel is its ventilation system; it is served by four ventilation towers designed by Norwegian architect Erling Owre. At the time of its construction, underwater tunnels were a well-established part of civil engineering, but no long vehicular tunnels had been built, as all of the existing tunnels under New York City waterways carried only railroads and subways. These tubes did not have as much of a need for ventilation, since the trains that used the tubes were required to be electrically powered, and thus emitted very little pollution. On the other hand, the traffic in the Holland Tunnel consisted mostly of gasoline-driven vehicles, and ventilation was required to evacuate the carbon monoxide emissions, which would otherwise asphyxiate the drivers. There were very few tunnels at that time that were not used by rail traffic; the most notable of these non-rail tunnels, the Blackwall Tunnel and Rotherhithe Tunnel in London, did not need mechanical ventilation. However, a tunnel of the Hudson River Tunnel's length required an efficient method of ventilation, so Chief Engineer Singstad pioneered a system of ventilating the tunnel transversely (perpendicular to the tubes). In October 1920, General George R. Dyer, the chairman of the New York Tunnel Commission, published a report in which he wrote that Singstad had devised a feasible ventilation system for the Hudson River Tunnel. Working with Yale University, the University of Illinois, and the United States Bureau of Mines, Singstad built a test tunnel in the bureau's experimental mine at Bruceton, Pennsylvania, measuring over long, where cars were lined up with engines running. Volunteer students were supervised as they breathed the exhaust in order to confirm air flows and tolerable carbon monoxide levels by simulating different traffic conditions, including backups. The University of Illinois, which had hired the only professors of ventilation in the United States, built an experimental ventilation duct at its Urbana campus to test air flows. In October 1921, Singstad concluded that a conventional, longitudinal ventilation system would have to be pressurized to an air flow rate of along the tunnel. On the other hand, the tunnel could be adequately ventilated transversely if the compartment carrying the tube's roadway was placed in between two plenums. A lower plenum below the roadway floor could supply fresh air, and an upper plenum above the ceiling could exhaust fumes at regular intervals. Two thousand tests were performed with the ventilation system prototype. The system was determined to be of sufficiently low cost, relative to the safety benefits, that it was ultimately integrated into the tunnel's design. 
By the time the tunnel was in service, the average carbon monoxide content in both tunnels was 0.69 parts per 10,000 parts of air. The highest recorded carbon monoxide level in the Holland Tunnel was 1.60 parts per 10,000, well below the permissible maximum of 4 parts per 10,000. The public and the press proclaimed air conditions were better in the tubes than in some streets of New York City; after the tunnel opened, Singstad stated that the carbon monoxide content in the tubes were half of those recorded on the streets. Tunnel boring The ventilation system and other potential issues had been resolved by December 1921, and officials announced that the tunnel would break ground the following spring. Builders initially considered building a trench at the bottom of the Hudson River and then covering it up, but this was deemed infeasible because of the soft soil that comprised the riverbed, as well as the heavy maritime traffic that used the river. Officials started purchasing the properties in the path of the tunnel's approaches, evicting and compensating the tenants "without delay" so that construction could commence promptly. A bid to construct the tubes was advertised, and three firms responded. On March 29, 1922, the contract to dig the tubes was awarded to the lowest bidder, Booth & Flinn Ltd., for $19.3 million. The materials that were necessary to furnish the Hudson River Tunnel had already been purchased, so it was decided to start work immediately. Construction on the bores began two days later as workers broke ground for an air compressor to drive the tubes. The ceremony for the air compressor was held at the corner of Canal Street and West Side Highway on the Manhattan side. The workers who were performing the excavations, who were referred to as "sandhogs", were to dig each pair of tubes from either bank of the Hudson River, so that the two sides would eventually connect somewhere underneath the riverbed. The tunnel was to be long between portals, and the roadway was to descend to a maximum depth of below mean high water level. The start of construction for the tubes from the New Jersey side was delayed because the Hudson River Vehicular Tunnel Commission had not yet acquired some of the land for the project. Although Jersey City officials had insisted that the Tunnel Commission widen 12th and 14th Streets in Jersey City, these officials were involved in a disagreement over sale prices with the Erie Railroad, which owned some of the land that was to be acquired for the street widening. As a result, work on the Hudson River Tunnel was delayed by one year and could not be completed before 1926 at the earliest. Work on the New Jersey side finally started on May 30, 1922, after Jersey City officials continued to refuse to cede public land for the construction of the tunnel's plazas. The Jersey City Chamber of Commerce wrote a letter that denounced this action, since the New Jersey Tunnel Commission's members on the Hudson River Tunnel Commission had not been notified of the groundbreaking until they read about it in the following day's newspapers. In mid-June, a state chancellor made permanent an injunction that banned Jersey City officials from trying to preclude construction on the Hudson River Tunnel. The Hudson River Tunnel Commission ultimately decided that Jersey City would not have its own groundbreaking celebration due to the city's various efforts at blocking the tunnel's construction. 
However, although Jersey City officials had been primarily accused of delaying construction, officials from both states had wanted the Tunnel Commission to widen the approach streets to the Hudson River Tunnel as part of the construction process. For the project, six tunnel digging shields were to be delivered. These shields comprised cylinders whose diameters were wider than the tunnel bores, and these cylinders contained steel plates of various thicknesses on the face that was to be driven under the riverbed. Four of the shields would dig the Hudson River Tunnel under the river, while the remaining two shields would dig from the Hudson River west bank to the Jersey City portals. They could dig through rock at a rate of per day, or through mud at a daily rate of . The air compressors would provide an air pressure of . The shovels used to dig the tunnel were provided by the Marion Power Shovel Company, while the six digging shields were built by the Merchants Shipbuilding Corporation. The air compressor was completed in September 1922, and the first shield was fitted into place in the Manhattan side's construction shaft. By this point, the shafts on the New Jersey side were being excavated, and two watertight caissons were being constructed. The shield started boring in late October of that year after the steel plates that were necessary for the shield's operation had been delivered. The first permanent steel-rings lining the tubes were laid a short time afterward. The caissons were completed and launched into the river in December, and after the caissons were outfitted with the requisite equipment such as airlocks, tugboats dropped the caissons into place in January 1923. Officials projected that at this rate of progress, the tunnel would be finished within 36 months, by late 1926 or early 1927. Tunnel construction required the sandhogs to spend large amounts of time in the caisson under high pressure of up to , which was thought to be necessary to prevent river water from entering prior to completion of the tubes. The caissons were massive metal boxes with varying dimensions, but each contained walls. Sandhogs entered the tunnel through a series of airlocks, and could only remain inside of the tunnel for a designated time period. On exiting the tunnel, sandhogs had to undergo controlled decompression to avoid decompression sickness or "the bends", a condition in which nitrogen bubbles form in the blood from rapid decompression. The rate of decompression for sandhogs working on the Hudson River Tunnel was described as being "so small as to be negligible". Sandhogs underwent such decompressions 756,000 times throughout the course of construction, which resulted in 528 cases of the bends, though none were fatal. The tunnel's pressurization caused other problems, including a pressure blowout in April 1924 that flooded the tube. Due to the geology of the Hudson River, the shields digging from the New Jersey side were mostly being driven through mud, and so could be driven at a faster rate than the shields from the New York side, which were being dug through large rock formations. When workers tried to dig through the Manhattan shoreline, they had encountered several weeks of delay due to the existence of an as-yet-unrecorded granite bulkhead on the shoreline. In September 1923, after having proceeded about from the Manhattan shoreline, workers encountered a sheet of Manhattan schist under the riverbed, forcing them to slow shield digging operations from to less than . 
This outcropping was fed from a stream in Manhattan that emptied into the Hudson River. The sandhogs planned to use small explosive charges to dig through the rock shelf without damaging the shield. By December 1923, about of each tube's total length had been excavated, and the first of the shields had passed through the underwater shafts that had been sunk during construction. Due to these unexpected issues, the cost estimate for the tunnel was increased from $28 million to $42 million in January 1924. By March 1924, all seven of the ventilation shafts had been dug, and three of the four shields that were digging underwater had passed through their respective underwater construction shafts, with the fourth shield nearing its respective shaft. Workers also performed tests to determine whether they could receive radio transmissions while inside the tunnel. They found that they were able to receive transmissions within much of the Hudson River Tunnel. However, a New Jersey radio station later found that there was a spot in the middle of the tunnel that had no reception. The cost of the project increased as work progressed. In July 1923, the New York and New Jersey Vehicular Tunnel Commission had revised plans for the entrance and exit plazas on each side to accommodate an increase in traffic along Canal Street on the Manhattan side. The commission had spent $2.1 million to acquire land. Further redesigns were made in January 1924 due to a change of major components in the tunnel plan, including tunnel diameters and ventilation systems, which had increased the cost by another $14 million. Nearing completion The two ends of both tubes were scheduled to be connected to each other at a ceremony on October 29, 1924, in which President Calvin Coolidge would have remotely set off an explosion to connect the tunnel's two sides. However, two days before the ceremony, Holland died of a heart attack at the sanatorium in Battle Creek, Michigan, aged 41. Individuals cited in The New York Times attributed his death to the stress associated with overseeing the tunnel's construction. The ceremony was postponed out of respect for Holland's death. The tunnel was ultimately holed through on October 29, but it was a nondescript event without any ceremony. On November 12, 1924, the Hudson River Vehicular Tunnel was renamed the Holland Tunnel by the two states' respective tunnel commissions. Holland was succeeded by Milton Harvey Freeman, who died of pneumonia in March 1925, after several months of overseeing the project. After Freeman's death, the position was occupied by Ole Singstad, who oversaw the tunnel's completion. As part of the tunnel project, one block of Watts Street in Manhattan was widened to accommodate traffic heading toward the westbound tube. Sixth Avenue was also widened and extended between Greenwich Village and Church Street. Ten thousand people were evicted to make way for the Sixth Avenue extension. The north-south Church Street was widened and extended southward to Church Street and Trinity Place; West Side Highway was expanded and supplemented with an elevated highway; and the west-east Vestry and Laight Streets were also widened. On the New Jersey side, the Holland Tunnel was to connect a new highway (formerly the Route 1 Extension; now New Jersey Route 139), which extended westward to Newark. This included a viaduct, rising from 12th and 14th Streets, at the bottom of the Palisades, to the new highway, at the top of the Palisades. 
The New Jersey highway approach was opened in stages beginning in 1927, and most of that highway was finished in 1930. The construction of the tunnel approach roads on the New Jersey side was delayed for months by Erie Railroad, whose Bergen Arches right-of-way ran parallel to and directly south of Route 139, in the right of way of the proposed approach roads. Although the Erie had promised to find another site for its railroad yards, it had refused to respond to the plans that the New Jersey State Highway Commission had sent them. In March 1925, the Highway Commission decided that construction on the approach roads would begin regardless of Erie's response, and so the land would be taken using eminent domain. This led to a legal disagreement between the Erie and the Highway Commission. The Erie maintained that it absolutely needed 30 feet of land along 12th Street, while the Highway Commission stated that the most direct approach to the eastbound Holland Tunnel's 12th Street portal should be made using 12th Street. The commission rejected a suggestion that it should use 13th Street, one block north, because it would cost $500,000 more and involve two perpendicular turns. In October 1926, one million dollars was allocated to the completion of the Route 139 approach. The contracts for constructing the Holland Tunnel's ventilation systems were awarded in December 1925. Two months later, the New York-New Jersey Vehicular Tunnel Commission asked for $3.2 million more in funding. The tunnel was now expected to cost $46 million, an increase of $17 million over what was originally budgeted. The Holland Tunnel was nearly complete: in March 1926, Singstad stated that the tunnel was expected to be opened by the following February. By May 1926, the tubes had been almost completely furnished: the polished-white tile walls were in place, as were the bright lighting systems and the Belgian-block-and-concrete road surfaces. The north tube's tiles were sourced locally by the Sonzogni Brothers of Union City, New Jersey, while the south tube's tiles were sourced in equal amounts from Czechoslovakia and Germany. The tiles' surfaces were specially engineered so that they could maintain their coloring even after years of use. The lighting systems used in the Holland Tunnel were designed to allow motorists to adjust to a gradual change in lighting levels just before leaving the tubes. The ventilation towers were the only major component of the Holland Tunnel that was not completed, but major progress had been made by the end of 1926. Ole Singstad and the two states' tunnel commissions tested the tunnel's ventilation system by releasing gas clouds in one of the tubes in February 1927. Singstad subsequently declared that the ventilation system was well equipped to ventilate the tunnel air. However, the New York Board of Trade and Transportation disagreed, stating that the system would be inadequate if there was a genuine incident within the tunnel. In April 1927, the board had conducted their own tests with two lighted candles, and a cloud of smoke had filled the entire tube before the ventilation system was able to perform a full exhaust. The Chief Surgeon of the U.S. Board of Mines supported Singstad's position that the ventilation system could sufficiently filter the tubes' air. To affirm the ventilation system's efficacy, in November 1927, the New York and New Jersey tunnel commissions burned a car within the tunnel; the ventilation system dissipated the fire within three and a half minutes. 
The governors of New York and New Jersey took ceremonial rides through the tunnel in August 1926, meeting at the tunnel's midpoint. The first unofficial drive through the entirety of the Holland Tunnel was undertaken by a group of British businessmen a year later, in August 1927. The next month, a group from the Buffalo and Niagara Frontier Port Authority Survey Commission also visited the tunnel. In October, a delegation of representatives from Detroit, Michigan, and Windsor, Ontario, toured the nearly complete Holland Tunnel to get ideas for the then-proposed Detroit–Windsor Tunnel. A reporter for The New York Times was able to make a test drive through the tunnel, noting that "there is no sudden pressure of wind upon the ear-drums" and that it would reduce the duration of crossing the Hudson River by between 15 and 22 minutes. Three hundred police officers were trained in advance of the Holland Tunnel's opening, and bus companies started receiving franchises to operate buses through the tunnel. Opening The Holland Tunnel was officially opened at 4:55 p.m. EST on November 12, 1927. President Coolidge ceremonially opened the tunnel from his yacht by turning the same key that had opened the Panama Canal in 1915. Time magazine reported that Coolidge had used "the golden lever of the Presidential telegraphic instrument." It rang a giant brass bell at the tunnel's entrances that triggered American flags on both sides of the tunnel to separate. The tunnel's opening ceremony was broadcast on local radio stations. Approximately 20,000 people walked the entire length of the Holland Tunnel before it was closed to pedestrians at 7 p.m. The Holland Tunnel officially opened to vehicular traffic at 12:01 a.m. on November 13, the next day; over a thousand vehicles had gathered on the New Jersey side, ready to pay a toll. The first car to pay a toll was driven by the daughter of the chairman of New Jersey's Bridge and Tunnel Commission. The widows of chief engineers Holland and Freeman rode in the second vehicle that paid a toll. At the time, the Holland Tunnel was the world's longest continuous underwater vehicular tunnel, as well as the world's first tunnel designed specifically for vehicular traffic. Each passenger car paid a 50-cent toll. Tolls for other vehicular classes ranged from 25 cents for a motorcycle to two dollars for large trucks. Commuter bus routes, which paid a 50-cent-per-vehicle toll, began operating through the tunnel in December 1927. Truckers subsequently objected that these rates were too high, as the Holland Tunnel truck tolls were double the tolls that were charged on the trans-Hudson ferries; by contrast, the tunnel's passenger vehicle, motorcycle, and bus tolls were on par with those charged by the ferries. The toll revenues would be used to pay off the tunnel's cost (which was estimated at $48 million in 1927 dollars). Within ten years of opening, it was expected that all construction costs would be paid off. Horse-drawn vehicles were banned from the tunnel from the start, since it was believed that horses' slow speeds would cause traffic congestion in the tubes. Pedestrian and bicycle traffic was also banned. A few months before the tunnel's opening, there were suggestions that pedestrians would be allowed to cross the tunnel if they paid a toll described as "not encouraging," but the idea was never seriously considered.
The Holland Tunnel was expected to relieve congestion on the vehicular ferries across the Hudson River, since the capacity of the tunnel was similar to that of the vehicular ferries. Upon opening, it had been estimated that up to 15 million vehicles per year could use the tunnel in both directions, equating to a maximum daily capacity of 46,000 vehicles or an hourly capacity of 3,800 vehicles. Singstad stated that increasing freight traffic across the river would result in a corresponding increase in truck traffic, which would then cause the tunnel to reach its maximum traffic capacity shortly after its opening. The Holland Tunnel was immediately popular. On November 13, a Sunday, 52,285 vehicles passed through the tunnel on its first day of operation, more than its projected maximum capacity. The lines to enter the tunnel stretched for miles on either end, although many of these vehicles were passenger cars whose occupants were making a round trip to tour the tunnel. On November 14, the Holland Tunnel's first weekday of operation, the tunnel carried 17,726 cars. Traffic counts in the Holland Tunnel remained relatively steady until the following weekend, when over 40,000 vehicles went through the tunnel. The first holiday rush period for the Holland Tunnel occurred two weeks after the tunnel's opening, when around 30,000 motorists used the tunnel over the Thanksgiving holiday; there were no major traffic disruptions. A half-million vehicles had passed through the Holland Tunnel within three weeks, and a million had used the tubes by New Year's Day. Within the tunnel's first year, 8.5 million vehicles had used it, and the tunnel had grossed $4.7 million in toll revenue; it was estimated that at this rate, the Holland Tunnel's construction costs might be paid off sooner than expected. Trans-Hudson ferries reported that their traffic counts had been halved in the two weeks since the tunnel opened, and at least one ferry route reduced service within one month of the opening. Another ferry cut its toll rates to half those of the Holland Tunnel in an effort to recover business. The Hudson & Manhattan Railroad (later PATH), which operated rapid transit services across the Hudson River through its Uptown and Downtown Hudson Tubes, also saw a decline in ridership after the Holland Tunnel opened. Even after the start of the Great Depression in 1929, when most transit in New York City saw declines, the Holland Tunnel saw an increase in traffic, as did ferry lines. Early years In 1930, there was a disagreement between the Hudson River Tunnel Commission and the Port of New York Authority over who would construct the Lincoln Tunnel. The tunnel was to be located further north along the Hudson River, connecting nearby Weehawken to Manhattan. The two agencies merged that April, and the expanded Port Authority of New York and New Jersey took over operations of the Holland Tunnel, a role that it maintains to this day. Real property title was not passed, however. A second vehicular link between New Jersey and Manhattan, the George Washington Bridge, opened in October 1931. The Lincoln Tunnel, the third and final vehicular connection between New Jersey and Manhattan, first opened in December 1937. Within the first 25 years of the Holland Tunnel's opening, it had carried 330 million vehicles in total, but a significant portion of Holland Tunnel traffic was diverted to the Lincoln Tunnel and George Washington Bridge after the opening of the latter two crossings.
In 1945, the Port Authority approved the extension of a tunnel approach on the New Jersey side. A new viaduct for westbound traffic would connect the intersection of 14th Street and Jersey Avenue, outside the Holland Tunnel's exit portal, to Hoboken Avenue and NJ Route 139, on top of The Palisades. This would supplement an existing bidirectional viaduct, which connected Hoboken Avenue with 12th Street and currently only carries eastbound traffic. The 14th Street viaduct was first opened for vehicular use in January 1951, although the road was not complete; it was officially completed that February. The 12th and 14th Street viaducts were later also connected to the New Jersey Turnpike Extension. The first part of the extension, Newark Bay Bridge, opened between Bayonne and Newark Liberty International Airport in April 1956; the connection between Bayonne and the 12th/14th Street viaducts was completed that September, providing direct highway connection between the Holland Tunnel and Newark Airport. The NJ Turnpike Extension, as well as the Holland Tunnel and the 12th/14th Street approaches, was designated as part of I-78 in 1958. Starting in the 1940s, New York City officials developed plans to connect the Holland Tunnel's Manhattan end to the Lower Manhattan Expressway, a proposed elevated highway connecting to both the Williamsburg Bridge and the Manhattan Bridge to Brooklyn. This connection would be part of I-78. In 1956, Robert Moses suggested adding a third tube to the Holland Tunnel, similar to the Lincoln Tunnel's third tube, so there would be sufficient capacity for the proposed expressway traffic. The route of the Lower Manhattan Expressway was approved in 1960, but quickly became controversial due to the large number of tenants who would have to be relocated. The Lower Manhattan Expressway project was ultimately canceled in March 1971. The Port Authority voted in 1953 to replace the original tollbooths on the New Jersey side, which did not contain canopies, with an updated plaza that contained a canopy. The next year, the Port Authority also voted to refurbish the Holland Tunnel's administration building on the New Jersey side, as well as construct a new service building. The development of a one-man miniature electric car for tunnel police, to be installed on the tubes' catwalks, was announced in August 1954. The Port Authority tested the "catwalk car" along a stretch of the Holland Tunnel. After the car had passed its test, policemen could patrol the full length of the tubes using the catwalk car instead of having to walk the tubes' entire length. By use of a swivel seat the policemen could drive the car in either direction. Late 20th century In 1970, the Port Authority stopped collecting tolls for New Jersey-bound drivers through the Holland Tunnel, who used the westbound tube, while doubling tolls to $1 for New York City-bound drivers, who used the eastbound tube. This was done in an effort to speed up traffic, and it was the first toll increase in the tunnel's history. Although westbound drivers initially saved time by not paying tolls, the removal of westbound tolls ultimately had an adverse effect on traffic in the Holland Tunnel. In 1986, the Verrazzano-Narrows Bridge, between the New York City boroughs of Brooklyn and Staten Island, stopped collecting tolls for Brooklyn-bound drivers (who were generally headed eastbound) and doubled its tolls for Staten Island-bound drivers (who were generally headed westbound). 
This had the effect of increasing congestion along the New Jersey-bound tube of the Holland Tunnel, which drivers could use for free. Drivers would go through New Jersey and use the Bayonne Bridge, paying a lower toll to enter Staten Island. The amount of westbound traffic in the Holland Tunnel increased compared to eastbound traffic: by 1998, there were 50,110 daily westbound trips and 46,688 daily eastbound trips through the tunnel. Simultaneously, there was a decrease in westbound trips on the Verrazzano-Narrows Bridge compared to eastbound trips on the bridge. The Verrazzano-Narrows Bridge toll pattern also caused traffic gridlock around the Holland Tunnel, and Canal Street saw the most severe congestion because it served as the main entrance to the tunnel. Fatal accidents involving pedestrians in Lower Manhattan also increased greatly as a result. Rush-hour congestion within the Holland Tunnel has persisted for more than thirty years due to the Verrazzano-Narrows Bridge's one-way westbound toll. A renovation of the Holland Tunnel's tiled ceilings, which were deteriorating due to water damage, started in 1983. The ceilings were replaced at a total cost of $78 million, and the south tube's ceiling was renovated first. Since the Holland Tunnel had to remain open during the renovation, 4,000 modular concrete ceiling panels were made offsite, and narrow lift trucks parked in one of the tube's two lanes installed the panels while traffic continued to move through the tube's other lane. The panels were each designed to the specifications of a certain section of tube, such that none of the ceiling panels were identical; the Port Authority stated that the ceiling-replacement project was the first one of its kind in the world. In 1988, after the ceiling renovations had been completed, work started on replacing the 8-lane tollbooth, which consisted of six lanes built in the 1950s and two additional lanes built in the 1980s. The new $54 million tollbooth contained 9 lanes and a central control center. The Holland Tunnel was listed as a National Historic Landmark on June 27, 1993, becoming part of the National Register of Historic Places. With this designation, it became the 92nd National Historic Landmark in New York City and the sixth such landmark nationally that was a tunnel. According to M. Ann Belkov, the National Park Service superintendent for Ellis Island, the tunnel had been granted landmark status because it had been the first "mechanically ventilated underwater vehicular tunnel" in the world. 21st century Between 2003 and 2006, the fire protection system in both tunnels was modernized. Fire extinguishers were placed in alcoves along the tunnel walls. Although the water supply was turned off, it remained in place during the renovation. The Holland Tunnel was closed on October 29, 2012, as Hurricane Sandy approached. The tunnel, like many other New York City tunnels, was flooded by the high storm surge. It remained closed for several days, opening for buses only on November 2 and to all traffic on November 7. In February 2018, the PANYNJ approved a $364 million project to repair flood damage from the hurricane. The agency closed the Holland Tunnel's eastbound tube during late nights, except on Saturday nights, beginning in April 2020. Though the work was initially supposed to be completed in early 2022, the work was delayed by nearly a year. The PANYNJ then announced that the westbound tube would be closed during late nights, except on Saturdays, between February 2023 and late 2025. 
Accidents and terrorism The first fatal vehicular crash in the Holland Tunnel happened in March 1932, four and a half years after it opened. One person died and two others were injured. The 1949 Holland Tunnel fire, which started aboard a chemical truck, caused severe damage to the south tube of the tunnel. The fire resulted in 69 injuries and nearly $600,000 worth of damage to the structure. In addition, two first responders, an FDNY battalion chief and a Port Authority patrolman, died as a result of injuries sustained in fighting the fire. Due to its status as one of the few connections between Manhattan and New Jersey, the Holland Tunnel is considered to be one of the most high-risk terrorist target sites in the United States. Other such sites in New Jersey include the Lincoln Tunnel in Weehawken, New Jersey, the PATH station at Exchange Place in Jersey City, and the Port of Newark in Elizabeth. In 1995, Sheik Omar Abdel Rahman and nine other men were convicted of a bombing plot in which a radical Islamic group plotted to blow up five or six sites in New York City, including the Holland and Lincoln Tunnels and the George Washington Bridge. In 2006, a plot to detonate explosives in a Hudson River tunnel was uncovered by the Federal Bureau of Investigation. It was originally reported that the Holland Tunnel was the target, but in a later update of the source, the plot was clarified to be aimed at the PATH's tubes instead of the Holland Tunnel. September 11 attacks Following the September 11 attacks on the World Trade Center, the Holland Tunnel remained closed to all but emergency traffic for over a month, due to its proximity to the World Trade Center site. When the tunnel reopened on October 15, 2001, strict new regulations were enacted, and single-occupancy vehicles and trucks were banned from entering the tunnel. In March 2002, before all of the post-9/11 restrictions were lifted, a warehouse fire near the eastbound tube's New Jersey portal caused the tunnel to be closed entirely for five days; the fire continued for over a week. That April, all trucks were banned from the westbound tube, and trucks with more than three axles were also banned from the eastbound tube. Single-occupant vehicles were prohibited in the tunnel on weekday mornings between 6:00 am and 10:00 am until November 17, 2003, when the restrictions were lifted. Tolls The tolls-by-mail rate going from New Jersey to New York City is $18.31 for cars and motorcycles; there is no toll for passenger vehicles going from New York City to New Jersey. New Jersey and New York–issued E-ZPass users are charged $14.06 for cars and $13.06 for motorcycles during off-peak hours, and $16.06 for cars and $15.06 for motorcycles during peak hours. Users with E-ZPass issued from agencies outside of New Jersey and New York are charged the tolls-by-mail rate. Tolls are collected on the New Jersey side. Originally, tolls were collected in both directions. In August 1970, the toll was abolished for westbound drivers, and at the same time, eastbound drivers saw their tolls doubled. The tolls of eleven other New York City to New Jersey and Hudson River crossings along a stretch of the river, from the Outerbridge Crossing in the south to the Rip Van Winkle Bridge in the north, were also changed to south- or eastbound-only at that time. E-ZPass was first made available at the Holland Tunnel in October 1997.
In March 2020, due to the COVID-19 pandemic, all-electronic tolling was temporarily placed in effect for all Port Authority crossings, including the Holland Tunnel. Open road tolling began on December 23, 2020. The tollbooths were dismantled, and drivers were no longer able to pay cash at the tunnel. Instead, cameras are mounted onto new overhead gantries on the New Jersey side for traffic going to New York City. A vehicle without E-ZPass has a picture taken of its license plate, and a bill for the toll is mailed to its owner. For E-ZPass users, sensors detect their transponders wirelessly. The carpool discount plan was eliminated because the discount required a manual count of passengers. Historical toll rates Congestion toll Congestion pricing in New York City was implemented in January 2025; drivers who enter Manhattan via the tunnel pay a second toll. The congestion charges are collected via E-ZPass and tolls-by-mail. The charges vary based on time of day and vehicle class, but the congestion toll is charged once per day. Drivers who use the Holland Tunnel to enter the congestion zone receive a credit toward the congestion charge during the day, and pay a discounted toll at night. See also Albert Capsouto Park, adjacent to St. John's Park List of bridges, tunnels, and cuts in Hudson County, New Jersey List of fixed crossings of the Hudson River List of tunnels documented by the Historic American Engineering Record in New Jersey List of tunnels documented by the Historic American Engineering Record in New York List of National Historic Landmarks in New Jersey List of National Historic Landmarks in New York City Transportation in New York City References Further reading "The Holland Tunnel", New York Daily News, Wednesday, February 25, 2009 External links Port Authority of New York & New Jersey: Holland Tunnel 1927 establishments in New Jersey 1927 establishments in New York (state) Articles containing video clips Crossings of the Hudson River Historic American Engineering Record in New Jersey Historic American Engineering Record in New York City Historic Civil Engineering Landmarks Historic districts in Hudson County, New Jersey Hudson Square Interstate 78 Lincoln Highway National Historic Landmarks in Manhattan National Historic Landmarks in New Jersey National Register of Historic Places in Hudson County, New Jersey New York State Register of Historic Places in New York County Port Authority of New York and New Jersey Road tunnels in New Jersey Road tunnels in New York City Road tunnels on the National Register of Historic Places Toll tunnels in New Jersey Toll tunnels in New York City Tolled sections of Interstate Highways Transportation buildings and structures on the National Register of Historic Places in New Jersey Transportation buildings and structures on the National Register of Historic Places in New York City Transportation in Jersey City, New Jersey Tribeca Tunnels completed in 1927 Tunnels in Hudson County, New Jersey U.S. Route 1
Holland Tunnel
[ "Engineering" ]
11,611
[ "Civil engineering", "Historic Civil Engineering Landmarks" ]
172,244
https://en.wikipedia.org/wiki/Simulated%20annealing
Simulated annealing (SA) is a probabilistic technique for approximating the global optimum of a given function. Specifically, it is a metaheuristic to approximate global optimization in a large search space for an optimization problem. For problems with large numbers of local optima, SA can find the global optimum. It is often used when the search space is discrete (for example the traveling salesman problem, the boolean satisfiability problem, protein structure prediction, and job-shop scheduling). For problems where finding an approximate global optimum is more important than finding a precise local optimum in a fixed amount of time, simulated annealing may be preferable to alternatives such as gradient descent or branch and bound. The name of the algorithm comes from annealing in metallurgy, a technique involving heating and controlled cooling of a material to alter its physical properties. These properties depend on the material's thermodynamic free energy; heating and cooling the material affects both its temperature and its thermodynamic free energy (Gibbs energy). Simulated annealing can be used for very hard computational optimization problems where exact algorithms fail; even though it usually yields only an approximation of the global minimum, this can be sufficient for many practical problems. The problems solved by SA are typically formulated as the minimization of an objective function of many variables, subject to several mathematical constraints. In practice, the constraints can be penalized as part of the objective function. Similar techniques have been independently introduced on several occasions, including Pincus (1970), Khachaturyan et al. (1979, 1981), Kirkpatrick, Gelatt and Vecchi (1983), and Cerny (1985). In 1983, Kirkpatrick, Gelatt Jr., and Vecchi applied this approach to a solution of the traveling salesman problem. They also proposed its current name, simulated annealing. This notion of slow cooling implemented in the simulated annealing algorithm is interpreted as a slow decrease in the probability of accepting worse solutions as the solution space is explored. Accepting worse solutions allows for a more extensive search for the global optimal solution. In general, simulated annealing algorithms work as follows. The temperature progressively decreases from an initial positive value to zero. At each time step, the algorithm randomly selects a solution close to the current one, measures its quality, and moves to it according to temperature-dependent probabilities of accepting better or worse solutions; during the search, the probability of accepting a better solution remains at 1 (or positive), while the probability of accepting a worse solution decreases toward zero. The simulation can be performed either by a solution of kinetic equations for probability density functions, or by using a stochastic sampling method. The method is an adaptation of the Metropolis–Hastings algorithm, a Monte Carlo method to generate sample states of a thermodynamic system, published by N. Metropolis et al. in 1953. Overview In the physical analogy, the state s corresponds to the state of a physical system, and the function E(s) to be minimized corresponds to the internal energy of the system in that state. The goal is to bring the system, from an arbitrary initial state, to a state with the minimum possible energy. The basic iteration At each step, the simulated annealing heuristic considers some neighboring state s* of the current state s, and probabilistically decides between moving the system to state s* or staying in state s. These probabilities ultimately lead the system to move to states of lower energy.
Typically this step is repeated until the system reaches a state that is good enough for the application, or until a given computation budget has been exhausted. The neighbors of a state Optimization of a solution involves evaluating the neighbors of a state of the problem, which are new states produced through conservatively altering a given state. For example, in the traveling salesman problem each state is typically defined as a permutation of the cities to be visited, and the neighbors of any state are the set of permutations produced by swapping any two of these cities. The well-defined way in which the states are altered to produce neighboring states is called a "move", and different moves give different sets of neighboring states. These moves usually result in minimal alterations of the last state, in an attempt to progressively improve the solution through iteratively improving its parts (such as the city connections in the traveling salesman problem). It is even better to reverse the order of an interval of cities. This is a smaller move, since swapping two cities can be achieved by twice reversing an interval. Simple heuristics like hill climbing, which move by finding better neighbor after better neighbor and stop when they have reached a solution which has no neighbors that are better solutions, cannot guarantee to lead to any of the existing better solutions; their outcome may easily be just a local optimum, while the actual best solution would be a global optimum that could be different. Metaheuristics use the neighbors of a solution as a way to explore the solution space, and although they prefer better neighbors, they also accept worse neighbors in order to avoid getting stuck in local optima; they can find the global optimum if run for a long enough amount of time. Acceptance probabilities The probability of making the transition from the current state s to a candidate new state s_new is specified by an acceptance probability function P(e, e_new, T), which depends on the energies e = E(s) and e_new = E(s_new) of the two states, and on a global time-varying parameter T called the temperature. States with a smaller energy are better than those with a greater energy. The probability function P must be positive even when e_new is greater than e. This feature prevents the method from becoming stuck at a local minimum that is worse than the global one. When T tends to zero, the probability P(e, e_new, T) must tend to zero if e_new > e and to a positive value otherwise. For sufficiently small values of T, the system will then increasingly favor moves that go "downhill" (i.e., to lower energy values), and avoid those that go "uphill." With T = 0 the procedure reduces to the greedy algorithm, which makes only the downhill transitions. In the original description of simulated annealing, the probability P(e, e_new, T) was equal to 1 when e_new < e, i.e., the procedure always moved downhill when it found a way to do so, irrespective of the temperature. Many descriptions and implementations of simulated annealing still take this condition as part of the method's definition. However, this condition is not essential for the method to work. The function P is usually chosen so that the probability of accepting a move decreases when the difference e_new − e increases, that is, small uphill moves are more likely than large ones. However, this requirement is not strictly necessary, provided that the above requirements are met.
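One concrete acceptance function that satisfies these requirements is the exponential (Metropolis-style) rule discussed under "Acceptance probabilities" below. The short Python sketch that follows is only an illustration of those properties; the exact form of P is a design choice, and any function meeting the requirements above could be substituted.

```python
import math

def acceptance_probability(e, e_new, T):
    """Probability of moving from a state of energy e to one of energy e_new at temperature T.

    Downhill moves (e_new < e) are always accepted; uphill moves are accepted with a
    probability that shrinks as the energy increase grows or as the temperature falls.
    """
    if e_new < e:
        return 1.0
    if T <= 0:
        return 0.0  # at zero temperature the rule degenerates to greedy descent
    return math.exp(-(e_new - e) / T)
```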
Given these properties, the temperature T plays a crucial role in controlling the evolution of the state of the system with regard to its sensitivity to the variations of system energies. To be precise, for a large T, the evolution of the state is sensitive to coarser energy variations, while it is sensitive to finer energy variations when T is small. The annealing schedule The name and inspiration of the algorithm demand an interesting feature related to the temperature variation to be embedded in the operational characteristics of the algorithm. This necessitates a gradual reduction of the temperature as the simulation proceeds. The algorithm starts initially with T set to a high value (or infinity), and then it is decreased at each step following some annealing schedule, which may be specified by the user but must end with T = 0 towards the end of the allotted time budget. In this way, the system is expected to wander initially towards a broad region of the search space containing good solutions, ignoring small features of the energy function; then drift towards low-energy regions that become narrower and narrower, and finally move downhill according to the steepest descent heuristic. For any given finite problem, the probability that the simulated annealing algorithm terminates with a global optimal solution approaches 1 as the annealing schedule is extended. This theoretical result, however, is not particularly helpful, since the time required to ensure a significant probability of success will usually exceed the time required for a complete search of the solution space. Pseudocode The following pseudocode presents the simulated annealing heuristic as described above. It starts from a state s0 and continues until a maximum of kmax steps have been taken. In the process, the call neighbour(s) should generate a randomly chosen neighbour of a given state s; the call random(0, 1) should pick and return a value in the range [0, 1], uniformly at random. The annealing schedule is defined by the call temperature(r), which should yield the temperature to use, given the fraction r of the time budget that has been expended so far. Let s = s0. For k = 0 through kmax (exclusive): set T = temperature(1 − (k + 1)/kmax); pick a random neighbour, s_new = neighbour(s); if P(E(s), E(s_new), T) ≥ random(0, 1), then s = s_new. Output: the final state s. Selecting the parameters In order to apply the simulated annealing method to a specific problem, one must specify the following parameters: the state space, the energy (goal) function E(), the candidate generator procedure neighbour(), the acceptance probability function P(), and the annealing schedule temperature() together with the initial temperature. These choices can have a significant impact on the method's effectiveness. Unfortunately, there are no choices of these parameters that will be good for all problems, and there is no general way to find the best choices for a given problem. The following sections give some general guidelines. Sufficiently near neighbour Simulated annealing may be modeled as a random walk on a search graph, whose vertices are all possible states, and whose edges are the candidate moves. An essential requirement for the neighbour() function is that it must provide a sufficiently short path on this graph from the initial state to any state which may be the global optimum; that is, the diameter of the search graph must be small. In the traveling salesman example above, for instance, the search space for n = 20 cities has n! = 2,432,902,008,176,640,000 (2.4 quintillion) states; yet the number of neighbors of each vertex is n(n − 1)/2 = 190 edges (coming from n choose 2), and the diameter of the graph is n − 1.
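The pseudocode above translates almost line for line into code. The following is a minimal, problem-agnostic Python sketch; the energy, neighbour and temperature callables are placeholders to be supplied for a specific problem, and the Metropolis-style acceptance rule used here is one common choice rather than the only possibility.

```python
import math
import random

def simulated_annealing(s0, energy, neighbour, temperature, k_max):
    """Generic simulated annealing loop, following the pseudocode in the text.

    s0          -- initial state
    energy      -- callable E(s) returning the energy (cost) of a state
    neighbour   -- callable returning a randomly chosen neighbour of a state
    temperature -- callable mapping the remaining fraction of the budget to a temperature
    k_max       -- total number of iterations
    """
    s = s0
    e = energy(s)
    for k in range(k_max):
        T = temperature(1 - (k + 1) / k_max)
        s_new = neighbour(s)
        e_new = energy(s_new)
        # Always accept downhill moves; accept uphill moves with
        # probability exp(-(e_new - e) / T), which shrinks as T falls.
        if e_new < e or (T > 0 and random.random() < math.exp(-(e_new - e) / T)):
            s, e = s_new, e_new
    return s
```

In practice it is common to also remember the best state encountered during the run rather than relying only on the final state; that refinement is closely related to the restart strategies discussed later.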
Transition probabilities To investigate the behavior of simulated annealing on a particular problem, it can be useful to consider the transition probabilities that result from the various design choices made in the implementation of the algorithm. For each edge (s, s_new) of the search graph, the transition probability is defined as the probability that the simulated annealing algorithm will move to state s_new when its current state is s. This probability depends on the current temperature as specified by temperature(), on the order in which the candidate moves are generated by the neighbour() function, and on the acceptance probability function P(). (Note that the transition probability is not simply P(e, e_new, T), because the candidates are tested serially.) Acceptance probabilities The specification of neighbour(), P(), and temperature() is partially redundant. In practice, it's common to use the same acceptance function P() for many problems and adjust the other two functions according to the specific problem. In the formulation of the method by Kirkpatrick et al., the acceptance probability function P(e, e_new, T) was defined as 1 if e_new < e, and exp(−(e_new − e)/T) otherwise. This formula was superficially justified by analogy with the transitions of a physical system; it corresponds to the Metropolis–Hastings algorithm, in the case where T = 1 and the proposal distribution of Metropolis–Hastings is symmetric. However, this acceptance probability is often used for simulated annealing even when the neighbour() function, which is analogous to the proposal distribution in Metropolis–Hastings, is not symmetric, or not probabilistic at all. As a result, the transition probabilities of the simulated annealing algorithm do not correspond to the transitions of the analogous physical system, and the long-term distribution of states at a constant temperature need not bear any resemblance to the thermodynamic equilibrium distribution over states of that physical system, at any temperature. Nevertheless, most descriptions of simulated annealing assume the original acceptance function, which is probably hard-coded in many implementations of SA. In 1990, Moscato and Fontanari, and independently Dueck and Scheuer, proposed that a deterministic update (i.e. one that is not based on the probabilistic acceptance rule) could speed up the optimization process without affecting the final quality. From the analogue of the "specific heat" curve of the "threshold updating" annealing in their study, Moscato and Fontanari conclude that "the stochasticity of the Metropolis updating in the simulated annealing algorithm does not play a major role in the search of near-optimal minima". Instead, they proposed that "the smoothening of the cost function landscape at high temperature and the gradual definition of the minima during the cooling process are the fundamental ingredients for the success of simulated annealing." The method was subsequently popularized under Dueck and Scheuer's denomination, "threshold accepting". In 2001, Franz, Hoffmann and Salamon showed that the deterministic update strategy is indeed the optimal one within the large class of algorithms that simulate a random walk on the cost/energy landscape. Efficient candidate generation When choosing the candidate generator neighbour(), one must consider that after a few iterations of the simulated annealing algorithm, the current state is expected to have much lower energy than a random state. Therefore, as a general rule, one should skew the generator towards candidate moves where the energy of the destination state is likely to be similar to that of the current state.
This heuristic (which is the main principle of the Metropolis–Hastings algorithm) tends to exclude very good candidate moves as well as very bad ones; however, the former are usually much less common than the latter, so the heuristic is generally quite effective. In the traveling salesman problem above, for example, swapping two consecutive cities in a low-energy tour is expected to have a modest effect on its energy (length); whereas swapping two arbitrary cities is far more likely to increase its length than to decrease it. Thus, the consecutive-swap neighbor generator is expected to perform better than the arbitrary-swap one, even though the latter could provide a somewhat shorter path to the optimum. A more precise statement of the heuristic is that one should try the first candidate states s_new for which P(E(s), E(s_new), T) is large. For the "standard" acceptance function above, it means that E(s_new) − E(s) is on the order of T or less. Thus, in the traveling salesman example above, one could use a neighbour() function that swaps two random cities, where the probability of choosing a city-pair vanishes as their distance increases beyond a certain threshold. Barrier avoidance When choosing the candidate generator one must also try to reduce the number of "deep" local minima, states (or sets of connected states) that have much lower energy than all their neighboring states. Such "closed catchment basins" of the energy function may trap the simulated annealing algorithm with high probability (roughly proportional to the number of states in the basin) and for a very long time (roughly exponential in the energy difference between the surrounding states and the bottom of the basin). As a rule, it is impossible to design a candidate generator that will satisfy this goal and also prioritize candidates with similar energy. On the other hand, one can often vastly improve the efficiency of simulated annealing by relatively simple changes to the generator. In the traveling salesman problem, for instance, it is not hard to exhibit two tours A and B, with nearly equal lengths, such that (1) A is optimal, (2) every sequence of city-pair swaps that converts A to B goes through tours that are much longer than both, and (3) B can be transformed into A by flipping (reversing the order of) a set of consecutive cities. In this example, A and B lie in different "deep basins" if the generator performs only random pair-swaps; but they will be in the same basin if the generator performs random segment-flips. Cooling schedule The physical analogy that is used to justify simulated annealing assumes that the cooling rate is low enough for the probability distribution of the current state to be near thermodynamic equilibrium at all times. Unfortunately, the relaxation time, the time one must wait for the equilibrium to be restored after a change in temperature, strongly depends on the "topography" of the energy function and on the current temperature. In the simulated annealing algorithm, the relaxation time also depends on the candidate generator, in a very complicated way. Note that all these parameters are usually provided as black box functions to the simulated annealing algorithm. Therefore, the ideal cooling rate cannot be determined beforehand and should be empirically adjusted for each problem. Adaptive simulated annealing algorithms address this problem by connecting the cooling schedule to the search progress; a short sketch combining a fixed (non-adaptive) cooling schedule with a segment-flip move for the traveling salesman problem is given below.
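The following Python sketch illustrates the candidate-generation and cooling-schedule choices discussed above for a small traveling salesman instance. It assumes the simulated_annealing function from the earlier sketch is in scope; the random city coordinates, the geometric cooling parameters, and the iteration count are arbitrary illustrative values, not recommendations.

```python
import math
import random

cities = [(random.random(), random.random()) for _ in range(30)]  # illustrative random instance

def tour_length(tour):
    """Energy: total length of the closed tour visiting the cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def reverse_segment(tour):
    """Neighbour move: reverse a randomly chosen interval of the tour (a segment flip)."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def geometric_temperature(fraction_remaining, t0=1.0, t_end=1e-3):
    """A simple fixed geometric cooling schedule from t0 down to t_end."""
    return t0 * (t_end / t0) ** (1 - fraction_remaining)

best_tour = simulated_annealing(list(range(len(cities))), tour_length,
                                reverse_segment, geometric_temperature, k_max=50_000)
print(tour_length(best_tour))
```

Because the segment flip keeps most of the tour intact, the energy of a candidate is usually close to that of the current state, which is exactly the skew toward similar-energy moves recommended above.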
Other adaptive approaches, such as thermodynamic simulated annealing, automatically adjust the temperature at each step based on the energy difference between the two states, according to the laws of thermodynamics. Restarts Sometimes it is better to move back to a solution that was significantly better rather than always moving from the current state. This process is called restarting of simulated annealing. To do this we set s and e to sbest and ebest and perhaps restart the annealing schedule. The decision to restart could be based on several criteria. Notable among these are restarting based on a fixed number of steps, restarting when the current energy is too high compared to the best energy obtained so far, restarting randomly, etc. Related methods Interacting Metropolis–Hastings algorithms (a.k.a. sequential Monte Carlo) combine simulated annealing moves with an acceptance-rejection of the best-fitted individuals equipped with an interacting recycling mechanism. Quantum annealing uses "quantum fluctuations" instead of thermal fluctuations to get through high but thin barriers in the target function. Stochastic tunneling attempts to overcome the increasing difficulty simulated annealing runs have in escaping from local minima as the temperature decreases, by 'tunneling' through barriers. Tabu search normally moves to neighbouring states of lower energy, but will take uphill moves when it finds itself stuck in a local minimum; and avoids cycles by keeping a "taboo list" of solutions already seen. Dual-phase evolution is a family of algorithms and processes (to which simulated annealing belongs) that mediate between local and global search by exploiting phase changes in the search space. Reactive search optimization focuses on combining machine learning with optimization, by adding an internal feedback loop to self-tune the free parameters of an algorithm to the characteristics of the problem, of the instance, and of the local situation around the current solution. Genetic algorithms maintain a pool of solutions rather than just one. New candidate solutions are generated not only by "mutation" (as in SA), but also by "recombination" of two solutions from the pool. Probabilistic criteria, similar to those used in SA, are used to select the candidates for mutation or combination, and for discarding excess solutions from the pool. Memetic algorithms search for solutions by employing a set of agents that both cooperate and compete in the process; sometimes the agents' strategies involve simulated annealing procedures for obtaining high-quality solutions before recombining them. Annealing has also been suggested as a mechanism for increasing the diversity of the search. Graduated optimization digressively "smooths" the target function while optimizing. Ant colony optimization (ACO) uses many ants (or agents) to traverse the solution space and find locally productive areas. The cross-entropy method (CE) generates candidate solutions via a parameterized probability distribution. The parameters are updated via cross-entropy minimization, so as to generate better samples in the next iteration. Harmony search mimics musicians in improvisation where each musician plays a note to find the best harmony together. Stochastic optimization is an umbrella set of methods that includes simulated annealing and numerous other approaches.
Particle swarm optimization is an algorithm modeled on swarm intelligence that finds a solution to an optimization problem in a search space, or models and predicts social behavior in the presence of objectives. The runner-root algorithm (RRA) is a meta-heuristic optimization algorithm for solving unimodal and multimodal problems inspired by the runners and roots of plants in nature. The intelligent water drops algorithm (IWD) mimics the behavior of natural water drops to solve optimization problems. Parallel tempering is a simulation of model copies at different temperatures (or Hamiltonians) to overcome the potential barriers. Multi-objective simulated annealing algorithms have been used in multi-objective optimization. See also References Further reading A. Das and B. K. Chakrabarti (Eds.), Quantum Annealing and Related Optimization Methods, Lecture Notes in Physics, Vol. 679, Springer, Heidelberg (2005) V. Vassilev, A. Prahova: "The Use of Simulated Annealing in the Control of Flexible Manufacturing Systems", International Journal Information Theories & Applications, Volume 6, 1999 External links Simulated Annealing A Javascript app that allows you to experiment with simulated annealing. Source code included. "General Simulated Annealing Algorithm" An open-source MATLAB program for general simulated annealing exercises. Self-Guided Lesson on Simulated Annealing A Wikiversity project. Google in superposition of using, not using quantum computer Ars Technica discusses the possibility that the D-Wave computer being used by Google may, in fact, be an efficient simulated annealing co-processor. A Simulated Annealing-Based Multiobjective Optimization Algorithm: AMOSA. Metaheuristics Optimization algorithms and methods Monte Carlo methods
Simulated annealing
[ "Physics" ]
4,350
[ "Monte Carlo methods", "Computational physics" ]
172,270
https://en.wikipedia.org/wiki/Hidden%20message
A hidden message is information that is not immediately noticeable, and that must be discovered or uncovered and interpreted before it can be known. Hidden messages include backwards audio messages, hidden visual messages and symbolic or cryptic codes such as a crossword or cipher. Although there are many legitimate examples of hidden messages created with techniques such as backmasking and steganography, many so-called hidden messages are merely fanciful imaginings or apophenia. Description The information in hidden messages is not immediately noticeable; it must be discovered or uncovered, and interpreted before it can be known. Hidden messages include backwards audio messages, hidden visual messages, and symbolic or cryptic codes such as a crossword or cipher. There are many legitimate examples of hidden messages, though many are imaginings. Backward audio messages A backward message in an audio recording is only fully apparent when the recording is played reversed. Some backward messages are produced by deliberate backmasking, while others are simply phonetic reversals resulting from random combinations of words. Backward messages may occur in various mediums, including music, video games, music videos, movies, and television shows. Backmasking Backmasking is a recording technique in which a message is recorded backwards onto a track that is meant to be played forwards. It was popularized by The Beatles, who used backward vocals and instrumentation on their 1966 album Revolver. The technique has also been used to censor words or phrases for "clean" releases of songs. Backmasking has been a controversial topic in the United States since the 1980s, when allegations of its use for Satanic purposes were made against prominent rock musicians, leading to record-burnings and proposed anti-backmasking legislation by state and federal governments. In debate are both the existence of backmasked Satanic messages and their purported ability to subliminally affect listeners. Phonetic reversal Certain phrases produce a different phrase when their phonemes are reversed, a process known as phonetic reversal. For example, "Kiss" backwards sounds like "sick", and so the title of Yoko Ono's "Kiss Kiss Kiss" sounds like "Sick Sick Sick" or "Six Six Six" backwards. It was claimed that the chorus of Queen's "Another One Bites the Dust", when played in reverse, can be heard as "It's fun to smoke marijuana" or "start to smoke marijuana". The Paul is dead phenomenon was started in part because a phonetic reversal of "Number nine" (the words were constantly repeated in Revolution 9) was interpreted as "Turn me on, dead man". According to proponents of reverse speech, phonetic reversal occurs unknowingly during normal speech. Visual messages Hidden messages can be created in visual mediums with techniques such as hidden computer text and steganography. In the 1980s, Coca-Cola released an advertising poster in South Australia featuring the reintroduced contour bottle, with a speech bubble, "Feel the Curves!!" An image hidden inside one of the ice cubes depicted an oral sex act. Thousands of posters were distributed to hotels and bottle shops in Australia before the mistake was discovered by Coca-Cola management. The artist of the poster was fired and all the posters were recalled. Rival PepsiCo faced a similar accusation in 1990, when its promotional Pepsi Cool Cans were accused of having the word "sex" hidden in their design if two of the cans were placed atop each other.
Various other messages have been claimed to exist in Disney movies, some of them risqué, such as the well-known allegation of an erection showing on a priest in The Little Mermaid. According to the Snopes website, one image "is clearly true [and] undeniably purposely inserted into the movie": a topless woman in two frames of The Rescuers. PETA (People for the Ethical Treatment of Animals) had an antipathy towards PETCO, a pet food retailer in San Diego, regarding the purported mistreatment of live animals at its stores. When the San Diego Padres baseball team announced that the retailer had purchased naming rights to Petco Park, PETA was unable to persuade the sports team to terminate the agreement. Later, PETA successfully purchased a commemorative display brick with what appears to be a complimentary message: "Break Open Your Cold Ones! Toast The Padres! Enjoy This Championship Organization!" However, if one takes the first letters of each word, the resulting acrostic reads "BOYCOTT PETCO". Neither PETCO nor the Padres has taken any action to remove the brick, stating that if someone walked by, they would not know it had anything to do with the PETA/PETCO feud. Secretive design language is widely used on web sites as Easter eggs or within products as hidden features, such as In-N-Out Burger's secret menu or the new Norwegian passport design for security. See also Apophenia Pareidolia Synchronicity References External links Audio Reversal in Popular Culture, an explanation of backmasking and phonetic reversals SHOOSH, an easy tool to make a hidden message; visual messages can be hidden in images. Audio engineering Perception Popular music
Hidden message
[ "Engineering" ]
1,056
[ "Electrical engineering", "Audio engineering" ]
172,274
https://en.wikipedia.org/wiki/Centaur%20%28small%20Solar%20System%20body%29
In planetary astronomy, a centaur is a small Solar System body that orbits the Sun between Jupiter and Neptune and crosses the orbits of one or more of the giant planets. Centaurs generally have unstable orbits because of this; almost all their orbits have dynamic lifetimes of only a few million years, but there is one known centaur, 514107 Kaʻepaokaʻawela, which may be in a stable (though retrograde) orbit. Centaurs typically exhibit the characteristics of both asteroids and comets. They are named after the mythological centaurs that were a mixture of horse and human. Observational bias toward large objects makes determination of the total centaur population difficult. Estimates for the number of centaurs in the Solar System more than 1 km in diameter range from as low as 44,000 to more than 10,000,000. The first centaur to be discovered, under the definition of the Jet Propulsion Laboratory and the one used here, was 944 Hidalgo in 1920. However, they were not recognized as a distinct population until the discovery of 2060 Chiron in 1977. The largest confirmed centaur is 10199 Chariklo, which at 260 kilometers in diameter is as big as a mid-sized main-belt asteroid, and is known to have a system of rings. It was discovered in 1997. No centaur has been photographed up close, although there is evidence that Saturn's moon Phoebe, imaged by the Cassini probe in 2004, may be a captured centaur that originated in the Kuiper belt. In addition, the Hubble Space Telescope has gleaned some information about the surface features of 8405 Asbolus. Ceres may have originated in the region of the outer planets, and if so might be considered an ex-centaur, but the centaurs seen today all originated elsewhere. Of the objects known to occupy centaur-like orbits, approximately 30 have been found to display comet-like dust comas, with three, 2060 Chiron, 60558 Echeclus, and 29P/Schwassmann-Wachmann 1, having detectable levels of volatile production in orbits entirely beyond Jupiter. Chiron and Echeclus are therefore classified as both centaurs and comets, while Schwassmann-Wachmann 1 has always held a comet designation. Other centaurs, such as 52872 Okyrhoe, are suspected of having shown comas. Any centaur that is perturbed close enough to the Sun is expected to become a comet. Classification A centaur has either a perihelion or a semi-major axis between those of the outer planets (between Jupiter and Neptune). Due to the inherent long-term instability of orbits in this region, even centaurs which do not currently cross the orbit of any planet are in gradually changing orbits that will be perturbed until they start to cross the orbit of one or more of the giant planets. Some astronomers count only bodies with semimajor axes in the region of the outer planets to be centaurs; others accept any body with a perihelion in the region, as their orbits are similarly unstable. Discrepant criteria However, different institutions have different criteria for classifying borderline objects, based on particular values of their orbital elements: The Minor Planet Center (MPC) defines centaurs as having a perihelion beyond the orbit of Jupiter (q > 5.2 AU) and a semi-major axis less than that of Neptune (a < 30.1 AU), though nowadays the MPC often lists centaurs and scattered disc objects together as a single group. The Jet Propulsion Laboratory (JPL) similarly defines centaurs as having a semi-major axis, a, between those of Jupiter and Neptune (5.5 AU < a < 30.1 AU).
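These cut-offs are simple enough to express directly in code. The Python sketch below encodes the MPC- and JPL-style boundaries just described, together with the Jupiter-relative Tisserand parameter used by one of the alternative criteria discussed below. The numerical thresholds (5.2 AU, 5.5 AU, 30.1 AU) are simply the approximate semi-major axes of Jupiter and Neptune and the bound quoted above, used here only for illustration; the example orbital elements are rough, made-up values rather than measurements of any catalogued object.

```python
import math

A_JUPITER = 5.2   # approximate semi-major axis of Jupiter, in AU (illustrative)
A_NEPTUNE = 30.1  # approximate semi-major axis of Neptune, in AU (illustrative)

def is_centaur_mpc(q, a):
    """MPC-style cut: perihelion beyond Jupiter's orbit, semi-major axis inside Neptune's."""
    return q > A_JUPITER and a < A_NEPTUNE

def is_centaur_jpl(a):
    """JPL-style cut: semi-major axis between Jupiter's and Neptune's."""
    return 5.5 < a < A_NEPTUNE

def tisserand_jupiter(a, e, i_deg):
    """Jupiter-relative Tisserand parameter for semi-major axis a (AU), eccentricity e,
    and inclination i (degrees): T_J = a_J/a + 2 cos(i) sqrt((a/a_J)(1 - e^2))."""
    return A_JUPITER / a + 2 * math.cos(math.radians(i_deg)) * math.sqrt(
        (a / A_JUPITER) * (1 - e * e))

# Rough, illustrative orbital elements (not measured values of any particular body):
print(is_centaur_mpc(q=8.4, a=13.7), is_centaur_jpl(13.7), tisserand_jupiter(13.7, 0.38, 7.0))
```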
In contrast, the Deep Ecliptic Survey (DES) defines centaurs using a dynamical classification scheme. These classifications are based on the simulated change in behavior of the present orbit when extended over 10 million years. The DES defines centaurs as non-resonant objects whose instantaneous (osculating) perihelia are less than the osculating semi-major axis of Neptune at any time during the simulation. This definition is intended to be synonymous with planet-crossing orbits and to suggest comparatively short lifetimes in the current orbit. The collection The Solar System Beyond Neptune (2008) defines objects with a semi-major axis between those of Jupiter and Neptune and a Jupiter-relative Tisserand's parameter above 3.05 as centaurs, classifying the objects with a Jupiter-relative Tisserand's parameter below this and, to exclude Kuiper belt objects, an arbitrary perihelion cut-off half-way to Saturn as Jupiter-family comets, and classifying those objects on unstable orbits with a semi-major axis larger than Neptune's as members of the scattered disc. Other astronomers prefer to define centaurs as objects that are non-resonant with a perihelion inside the orbit of Neptune that can be shown to likely cross the Hill sphere of a gas giant within the next 10 million years, so that centaurs can be thought of as objects scattered inwards and that interact more strongly and scatter more quickly than typical scattered-disc objects. The JPL Small-Body Database lists 452 centaurs. There are an additional 116 trans-Neptunian objects (objects with a semi-major axis farther than Neptune's, i.e. a > 30.1 AU) with a perihelion closer than the orbit of Uranus (q < 19.2 AU). Ambiguous objects The Gladman & Marsden (2008) criteria would make some objects Jupiter-family comets: both Echeclus and Okyrhoe have traditionally been classified as centaurs. Traditionally considered an asteroid, but classified as a centaur by JPL, Hidalgo would also change category to a Jupiter-family comet. Schwassmann-Wachmann 1 has been categorized as both a centaur and a Jupiter-family comet depending on the definition used. Other objects caught between these differences in classification methods include one body with a semi-major axis of 32 AU that crosses the orbits of both Uranus and Neptune; it is listed as an outer centaur by the Deep Ecliptic Survey (DES). Among the inner centaurs, (434620) 2005 VD, with a perihelion distance very near Jupiter, is listed as a centaur by both JPL and DES. A recent orbital simulation of the evolution of Kuiper belt objects through the centaur region has identified a short-lived "orbital gateway" between 5.4 and 7.8 AU through which 21% of all centaurs pass, including 72% of the centaurs that become Jupiter-family comets. Four objects are known to occupy this region, including 29P/Schwassmann-Wachmann, P/2010 TO20 LINEAR-Grauer, P/2008 CL94 Lemmon, and 2016 LN8, but the simulations indicate that there may be of order 1,000 more objects larger than 1 km in radius that have yet to be detected. Objects in this gateway region can display significant activity and are in an important evolutionary transition state that further blurs the distinction between the centaur and Jupiter-family comet populations. The Committee on Small Body Nomenclature of the International Astronomical Union has not formally weighed in on any side of the debate.
Instead, it has adopted the following naming convention for such objects: befitting their centaur-like transitional orbits between TNOs and comets, "objects on unstable, non-resonant, giant-planet-crossing orbits with semimajor axes greater than Neptune's" are to be named for other hybrid and shape-shifting mythical creatures. Thus far, only the binary objects Ceto and Phorcys and Typhon and Echidna have been named according to the new policy. Centaurs with measured diameters listed as possible dwarf planets according to Mike Brown's website include 10199 Chariklo and 2060 Chiron. Orbits Distribution The diagram illustrates the orbits of known centaurs in relation to the orbits of the planets. For selected objects, the eccentricity of the orbits is represented by red segments (extending from perihelion to aphelion). The orbits of centaurs show a wide range of eccentricity, from highly eccentric (Pholus, Asbolus, Amycus, Nessus) to more circular (Chariklo and the Saturn-crossers Thereus and Okyrhoe). To illustrate the range of the orbits' parameters, the diagram shows a few objects with very unusual orbits, plotted in yellow: an Apollo asteroid that follows an extremely eccentric orbit, leading it from inside Earth's orbit (0.94 AU) to well beyond Neptune; an object that follows a quasi-circular orbit; and the object with the lowest inclination. Another object is one of a small proportion of centaurs with an extreme prograde inclination. It follows such a highly inclined orbit (79°) that, while it crosses from the distance of the asteroid belt from the Sun to past the distance of Saturn, if its orbit is projected onto the plane of Jupiter's orbit, it does not even go out as far as Jupiter. Over a dozen known centaurs follow retrograde orbits. Their inclinations range from modest (e.g., 160° for Dioretsa) to extreme (e.g., 105°). Seventeen of these high-inclination, retrograde centaurs were controversially claimed to have an interstellar origin. Changing orbits Because the centaurs are not protected by orbital resonances, their orbits are unstable within a timescale of 10^6–10^7 years. For example, 55576 Amycus is in an unstable orbit near the 3:4 resonance of Uranus. Dynamical studies of their orbits indicate that being a centaur is probably an intermediate orbital state of objects transitioning from the Kuiper belt to the Jupiter family of short-period comets. (679997) 2023 RB will have its orbit notably changed by a close approach to Saturn in 2201. Objects may be perturbed from the Kuiper belt, whereupon they become Neptune-crossing and interact gravitationally with that planet (see theories of origin). They then become classed as centaurs, but their orbits are chaotic, evolving relatively rapidly as the centaur makes repeated close approaches to one or more of the outer planets. Some centaurs will evolve into Jupiter-crossing orbits, whereupon their perihelia may become reduced into the inner Solar System and they may be reclassified as active comets in the Jupiter family if they display cometary activity. Centaurs will thus ultimately collide with the Sun or a planet, or else they may be ejected into interstellar space after a close approach to one of the planets, particularly Jupiter. Physical characteristics Compared to dwarf planets and asteroids, the relatively small size and great distance of centaurs preclude remote observation of surfaces, but colour indices and spectra can provide clues about surface composition and insight into the origin of the bodies.
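A colour index is simply the difference between apparent magnitudes measured through two filters, so a redder object has larger B-V and V-R values than a blue-grey one. The tiny Python sketch below only illustrates that arithmetic; the magnitudes are made-up numbers, not measurements of any particular centaur.

```python
def colour_indices(m_b, m_v, m_r):
    """Return the (B-V, V-R) colour indices from apparent magnitudes in the three filters."""
    return m_b - m_v, m_v - m_r

# Illustrative values only: the first (redder) object has larger indices than the second.
print(colour_indices(m_b=19.5, m_v=18.3, m_r=17.6))  # (1.2, 0.7)
print(colour_indices(m_b=18.9, m_v=18.2, m_r=17.8))  # (0.7, 0.4)
```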
Colours The colours of centaurs are very diverse, which challenges any simple model of surface composition. In the side-diagram, the colour indices are measures of apparent magnitude of an object through blue (B), visible (V) (i.e. green-yellow) and red (R) filters. The diagram illustrates these differences (in exaggerated colours) for all centaurs with known colour indices. For reference, two moons: Triton and Phoebe, and planet Mars are plotted (yellow labels, size not to scale). Centaurs appear to be grouped into two classes: very red – for example 5145 Pholus blue (or blue-grey, according to some authors) – for example 2060 Chiron or There are numerous theories to explain this colour difference, but they can be broadly divided into two categories: The colour difference results from a difference in the origin and/or composition of the centaur (see origin below) The colour difference reflects a different level of space-weathering from radiation and/or cometary activity. As examples of the second category, the reddish colour of Pholus has been explained as a possible mantle of irradiated red organics, whereas Chiron has instead had its ice exposed due to its periodic cometary activity, giving it a blue/grey index. The correlation with activity and color is not certain, however, as the active centaurs span the range of colors from blue (Chiron) to red (166P/NEAT). Alternatively, Pholus may have been only recently expelled from the Kuiper belt, so that surface transformation processes have not yet taken place. Delsanti et al. suggest multiple competing processes: reddening by the radiation, and blushing by collisions. Spectra The interpretation of spectra is often ambiguous, related to particle sizes and other factors, but the spectra offer an insight into surface composition. As with the colours, the observed spectra can fit a number of models of the surface. Water ice signatures have been confirmed on a number of centaurs (including 2060 Chiron, 10199 Chariklo and 5145 Pholus). In addition to the water ice signature, a number of other models have been put forward: Chariklo's surface has been suggested to be a mixture of tholins (like those detected on Titan and Triton) with amorphous carbon. Pholus has been suggested to be covered by a mixture of Titan-like tholins, carbon black, olivine and methanol ice. The surface of 52872 Okyrhoe has been suggested to be a mixture of kerogens, olivines and a small percentage of water ice. 8405 Asbolus has been suggested to be a mixture of 15% Triton-like tholins, 8% Titan-like tholin, 37% amorphous carbon and 40% ice tholin. Chiron appears to be the most complex. The spectra observed vary depending on the period of the observation. Water ice signature was detected during a period of low activity and disappeared during high activity. Similarities to comets Observations of Chiron in 1988 and 1989 near its perihelion found it to display a coma (a cloud of gas and dust evaporating from its surface). It is thus now officially classified as both a minor planet and a comet, although it is far larger than a typical comet and there is some lingering controversy. Other centaurs are being monitored for comet-like activity: so far two, 60558 Echeclus, and 166P/NEAT have shown such behavior. 166P/NEAT was discovered while it exhibited a coma, and so is classified as a comet, though its orbit is that of a centaur. 60558 Echeclus was discovered without a coma but recently became active, and so it too is now classified as both a comet and an asteroid. 
Overall, there are ~30 centaurs for which activity has been detected, with the active population biased toward objects with smaller perihelion distances. Carbon monoxide has been detected in 60558 Echeclus and Chiron in very small amounts, and the derived CO production rate was calculated to be sufficient to account for the observed coma. The calculated CO production rate from both 60558 Echeclus and Chiron is substantially lower than what is typically observed for 29P/Schwassmann–Wachmann, another distantly active comet often classified as a centaur. There is no clear orbital distinction between centaurs and comets. Both 29P/Schwassmann-Wachmann and 39P/Oterma have been referred to as centaurs since they have typical centaur orbits. The comet 39P/Oterma is currently inactive and was seen to be active only before it was perturbed into a centaur orbit by Jupiter in 1963. The faint comet 38P/Stephan–Oterma would probably not show a coma if it had a perihelion distance beyond Jupiter's orbit at 5 AU. By the year 2200, comet 78P/Gehrels will probably migrate outwards into a centaur-like orbit. Rotational periods A periodogram analysis of the light curves of the centaurs Chiron and Chariklo gives rotational periods of 5.5 ± 0.4 h and 7.0 ± 0.6 h, respectively. Size, density, reflectivity Centaurs can reach diameters up to hundreds of kilometers. The largest centaurs have diameters in excess of 300 km, and primarily reside beyond 20 AU. Hypotheses of origin The study of centaurs’ origins is rich in recent developments, but any conclusions are still hampered by limited physical data. Different models have been put forward for the possible origin of centaurs. Simulations indicate that the orbit of some Kuiper belt objects can be perturbed, resulting in the object's expulsion so that it becomes a centaur. Scattered disc objects would be dynamically the best candidates for such expulsions (for instance, the centaurs could be part of an "inner" scattered disc of objects perturbed inwards from the Kuiper belt), but their colours do not fit the bicoloured nature of the centaurs. Plutinos are a class of Kuiper belt object that display a similar bicoloured nature, and there are suggestions that not all plutinos' orbits are as stable as initially thought, due to perturbation by Pluto. Further developments are expected with more physical data on Kuiper belt objects. Some centaurs may have their origin in fragmentation episodes, perhaps triggered during close encounters with Jupiter. The orbits of centaurs 2020 MK4, P/2008 CL94 (Lemmon), and P/2010 TO20 (LINEAR-Grauer) pass close to that of comet 29P/Schwassmann–Wachmann, the first discovered centaur, and close encounters are possible in which one of the objects traverses the coma of 29P when active. At least one centaur, 2013 VZ70, might have an origin among Saturn's irregular moon population via impact, fragmentation, or tidal disruption. Notable centaurs See also Asteroid Dwarf planet Explanatory notes References External links List of centaurs and scattered-disk objects Centaurs from The Encyclopedia of Astrobiology Astronomy and Spaceflight NASA's WISE Finds Mysterious Centaurs May Be Comets (2013) Distant minor planets Solar System
Centaur (small Solar System body)
[ "Astronomy" ]
3,853
[ "Outer space", "Solar System" ]
172,291
https://en.wikipedia.org/wiki/Drag%20coefficient
In fluid dynamics, the drag coefficient (commonly denoted as: , or ) is a dimensionless quantity that is used to quantify the drag or resistance of an object in a fluid environment, such as air or water. It is used in the drag equation in which a lower drag coefficient indicates the object will have less aerodynamic or hydrodynamic drag. The drag coefficient is always associated with a particular surface area. The drag coefficient of any object comprises the effects of the two basic contributors to fluid dynamic drag: skin friction and form drag. The drag coefficient of a lifting airfoil or hydrofoil also includes the effects of lift-induced drag. The drag coefficient of a complete structure such as an aircraft also includes the effects of interference drag. Definition The drag coefficient is defined as where: is the drag force, which is by definition the force component in the direction of the flow velocity; is the mass density of the fluid; is the flow speed of the object relative to the fluid; is the reference area The reference area depends on what type of drag coefficient is being measured. For automobiles and many other objects, the reference area is the projected frontal area of the vehicle. This may not necessarily be the cross-sectional area of the vehicle, depending on where the cross-section is taken. For example, for a sphere (note this is not the surface area = ). For airfoils, the reference area is the nominal wing area. Since this tends to be large compared to the frontal area, the resulting drag coefficients tend to be low, much lower than for a car with the same drag, frontal area, and speed. Airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume (volume to the two-thirds power). Submerged streamlined bodies use the wetted surface area. Two objects having the same reference area moving at the same speed through a fluid will experience a drag force proportional to their respective drag coefficients. Coefficients for unstreamlined objects can be 1 or more, for streamlined objects much less. As a caution, note that although the above is the conventional definition for the drag coefficient, there are other definitions that one may encounter in the literature. The reason for this is that the conventional definition makes the most sense when one is in the Newton regime, such as what happens at high Reynolds number, where it makes sense to scale the drag to the momentum flux into the frontal area of the object. But, there are other flow regimes. In particular at very low Reynolds number, it is more natural to write the drag force as being proportional to a drag coefficient multiplied by the speed of the object (rather than the square of the speed of the object). An example of such a regime is the study of the mobility of aerosol particulates, such as smoke particles. This leads to a different formal definition of the "drag coefficient," of course. 
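The defining equation itself did not survive in the text above, but the listed quantities (drag force, fluid density, flow speed, reference area) correspond to the standard drag equation, F_d = ½ ρ u² c_d A. A minimal sketch under that assumption, with the sphere's projected frontal area used as the reference area as described:

```python
import math

def drag_force(c_d, rho, u, area):
    """Drag equation: F_d = 0.5 * rho * u**2 * c_d * A, with A the reference area."""
    return 0.5 * rho * u**2 * c_d * area

def drag_coefficient(f_d, rho, u, area):
    """Invert the drag equation to recover c_d from a measured drag force."""
    return 2.0 * f_d / (rho * u**2 * area)

# Illustrative numbers (not from the text): a sphere of radius 0.1 m in air
# (rho ~ 1.225 kg/m^3) at 20 m/s, assuming c_d ~ 0.47. The reference area is
# the projected frontal area pi*r^2, not the surface area 4*pi*r^2.
r, rho, u = 0.1, 1.225, 20.0
area = math.pi * r**2
f = drag_force(0.47, rho, u, area)
print(f"F_d ~ {f:.2f} N")                   # roughly 3.6 N
print(drag_coefficient(f, rho, u, area))    # recovers the assumed 0.47
```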
Cauchy momentum equation In the non dimensional form of the Cauchy momentum equation, the skin drag coefficient or skin friction coefficient is referred to the transversal area (the area normal to the drag force, so the coefficient is locally defined as: where: is the local shear stress, which is by definition the stress component in the direction of the local flow velocity; is the local dynamic pressure of the fluid is the local mass density of the fluid; is the local flow speed of the fluid Background The drag equation is essentially a statement that the drag force on any object is proportional to the density of the fluid and proportional to the square of the relative flow speed between the object and the fluid. The factor of comes from the dynamic pressure of the fluid, which is equal to the kinetic energy density. The value of is not a constant but varies as a function of flow speed, flow direction, object position, object size, fluid density and fluid viscosity. Speed, kinematic viscosity and a characteristic length scale of the object are incorporated into a dimensionless quantity called the Reynolds number . is thus a function of . In a compressible flow, the speed of sound is relevant, and is also a function of Mach number . For certain body shapes, the drag coefficient only depends on the Reynolds number , Mach number and the direction of the flow. For low Mach number , the drag coefficient is independent of Mach number. Also, the variation with Reynolds number within a practical range of interest is usually small, while for cars at highway speed and aircraft at cruising speed, the incoming flow direction is also more-or-less the same. Therefore, the drag coefficient can often be treated as a constant. For a streamlined body to achieve a low drag coefficient, the boundary layer around the body must remain attached to the surface of the body for as long as possible, causing the wake to be narrow. A high form drag results in a broad wake. The boundary layer will transition from laminar to turbulent if Reynolds number of the flow around the body is sufficiently great. Larger velocities, larger objects, and lower viscosities contribute to larger Reynolds numbers. For other objects, such as small particles, one can no longer consider that the drag coefficient is constant, but certainly is a function of Reynolds number. At a low Reynolds number, the flow around the object does not transition to turbulent but remains laminar, even up to the point at which it separates from the surface of the object. At very low Reynolds numbers, without flow separation, the drag force is proportional to instead of ; for a sphere this is known as Stokes' law. The Reynolds number will be low for small objects, low velocities, and high viscosity fluids. A equal to 1 would be obtained in a case where all of the fluid approaching the object is brought to rest, building up stagnation pressure over the whole front surface. The top figure shows a flat plate with the fluid coming from the right and stopping at the plate. The graph to the left of it shows equal pressure across the surface. In a real flat plate, the fluid must turn around the sides, and full stagnation pressure is found only at the center, dropping off toward the edges as in the lower figure and graph. Only considering the front side, the of a real flat plate would be less than 1; except that there will be suction on the backside: a negative pressure (relative to ambient). 
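Two of the quantities just discussed are easy to make concrete: the Reynolds number that separates the flow regimes, and the dynamic (stagnation) pressure that sets the c_d = 1 reference level, together with Stokes' law in the creeping-flow limit. The fluid properties below are assumed round numbers for air, and the 6πμru form of Stokes' law is the standard small-sphere result rather than something stated in the text:

```python
import math

def reynolds_number(rho, u, length, mu):
    """Re = rho * u * L / mu, with L a characteristic length (a sphere's diameter here)."""
    return rho * u * length / mu

def dynamic_pressure(rho, u):
    """q = 0.5 * rho * u**2; a drag coefficient of exactly 1 corresponds to this
    pressure acting over the whole frontal reference area."""
    return 0.5 * rho * u**2

def stokes_drag(mu, radius, u):
    """Stokes' law for a sphere in creeping flow: F = 6*pi*mu*r*u
    (force proportional to speed, not speed squared)."""
    return 6.0 * math.pi * mu * radius * u

rho_air, mu_air = 1.225, 1.81e-5            # assumed values for air near 15 C

# A 10-micron droplet sinking at 3 mm/s sits deep in the Stokes regime:
r, u = 10e-6, 3e-3
print(reynolds_number(rho_air, u, 2 * r, mu_air))    # ~4e-3, far below 1
print(stokes_drag(mu_air, r, u))                     # ~1e-11 N

# Reference force on a 0.1 m^2 flat plate at 20 m/s if c_d were exactly 1:
print(dynamic_pressure(rho_air, 20.0) * 0.1)         # ~24.5 N
```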
The overall of a real square flat plate perpendicular to the flow is often given as 1.17. Flow patterns and therefore for some shapes can change with the Reynolds number and the roughness of the surfaces. Drag coefficient examples General In general, is not an absolute constant for a given body shape. It varies with the speed of airflow (or more generally with Reynolds number ). A smooth sphere, for example, has a that varies from high values for laminar flow to 0.47 for turbulent flow. Although the drag coefficient decreases with increasing , the drag force increases. Aircraft As noted above, aircraft use their wing area as the reference area when computing , while automobiles (and many other objects) use projected frontal area; thus, coefficients are not directly comparable between these classes of vehicles. In the aerospace industry, the drag coefficient is sometimes expressed in drag counts where 1 drag count = 0.0001 of a . Automobile Blunt and streamlined body flows Concept The force between a fluid and a body, when there is relative motion, can only be transmitted by normal pressure and tangential friction stresses. So, for the whole body, the drag part of the force, which is in-line with the approaching fluid motion, is composed of frictional drag (viscous drag) and pressure drag (form drag). The total drag and component drag forces can be related as follows: where: is the planform area of the body, is the wet surface of the body, is the pressure drag coefficient, is the friction drag coefficient, is the unit vector in the direction of the shear stress acting on the body surface dS, is the unit vector in the direction perpendicular to the body surface dS, pointing from the fluid to the solid, magnitude of the shear stress acting on the body surface dS, is the pressure far away from the body (note that this constant does not affect the final result), is pressure at surface dS, is the unit vector in direction of free stream flow Therefore, when the drag is dominated by a frictional component, the body is called a streamlined body; whereas in the case of dominant pressure drag, the body is called a blunt or bluff body. Thus, the shape of the body and the angle of attack determine the type of drag. For example, an airfoil is considered as a body with a small angle of attack by the fluid flowing across it. This means that it has attached boundary layers, which produce much less pressure drag. The wake produced is very small and drag is dominated by the friction component. Therefore, such a body (here an airfoil) is described as streamlined, whereas for bodies with fluid flow at high angles of attack, boundary layer separation takes place. This mainly occurs due to adverse pressure gradients at the top and rear parts of an airfoil. Due to this, wake formation takes place, which consequently leads to eddy formation and pressure loss due to pressure drag. In such situations, the airfoil is stalled and has higher pressure drag than friction drag. In this case, the body is described as a blunt body. A streamlined body looks like a fish (tuna), Oropesa, etc. or an airfoil with small angle of attack, whereas a blunt body looks like a brick, a cylinder or an airfoil with high angle of attack. For a given frontal area and velocity, a streamlined body will have lower resistance than a blunt body. Cylinders and spheres are taken as blunt bodies because the drag is dominated by the pressure component in the wake region at high Reynolds number. 
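Before the discussion turns to how this blunt-body drag is reduced, the smooth-sphere behaviour just described can be sketched numerically. The piecewise fit below uses the Schiller–Naumann correlation for the intermediate regime, a commonly used empirical fit chosen here for illustration (it is not given in the text), levelling off at the ~0.47 turbulent-flow value quoted above and ignoring the drag crisis; the last line shows the drag-count conversion mentioned for aircraft.

```python
def sphere_cd(re):
    """Approximate smooth-sphere drag coefficient versus Reynolds number:
    24/Re in the Stokes limit, the Schiller-Naumann fit up to Re ~ 1000,
    and a constant ~0.47 plateau above that (the drag crisis is ignored)."""
    if re < 0.1:
        return 24.0 / re
    if re < 1000.0:
        return (24.0 / re) * (1.0 + 0.15 * re**0.687)
    return 0.47

def drag_counts(c_d):
    """Express a drag coefficient in aerospace drag counts (1 count = 0.0001)."""
    return c_d / 1e-4

for re in (0.01, 1.0, 100.0, 1e4, 1e6):
    print(f"Re = {re:g}: c_d ~ {sphere_cd(re):.3g}")

print(drag_counts(0.027))   # a wing-referenced c_d of 0.027 expressed as 270 drag counts
```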
To reduce this drag, either the flow separation could be reduced or the surface area in contact with the fluid could be reduced (to reduce friction drag). This reduction is necessary in devices like cars, bicycle, etc. to avoid vibration and noise production. See also Automotive aerodynamics Automobile drag coefficient Ballistic coefficient Drag crisis Zero-lift drag coefficient Notes References L. J. Clancy (1975): Aerodynamics. Pitman Publishing Limited, London, Abbott, Ira H., and Von Doenhoff, Albert E. (1959): Theory of Wing Sections. Dover Publications Inc., New York, Standard Book Number 486-60586-8 Hoerner, Dr. Sighard F., Fluid-Dynamic Drag, Hoerner Fluid Dynamics, Bricktown New Jersey, 1965. Bluff Body: http://user.engineering.uiowa.edu/~me_160/lecture_notes/Bluff%20Body2.pdf Drag of Blunt Bodies and Streamlined Bodies: http://www.princeton.edu/~asmits/Bicycle_web/blunt.html Hucho, W.H., Janssen, L.J., Emmelmann, H.J. 6(1975): The optimization of body details-A method for reducing the aerodynamics drag. SAE 760185. Drag (physics) Aerospace engineering Dimensionless numbers of fluid mechanics
Drag coefficient
[ "Chemistry", "Engineering" ]
2,302
[ "Drag (physics)", "Aerospace engineering", "Fluid dynamics" ]
172,317
https://en.wikipedia.org/wiki/Ray%20transfer%20matrix%20analysis
Ray transfer matrix analysis (also known as ABCD matrix analysis) is a mathematical form for performing ray tracing calculations in sufficiently simple problems which can be solved considering only paraxial rays. Each optical element (surface, interface, mirror, or beam travel) is described by a ray transfer matrix which operates on a vector describing an incoming light ray to calculate the outgoing ray. Multiplication of the successive matrices thus yields a concise ray transfer matrix describing the entire optical system. The same mathematics is also used in accelerator physics to track particles through the magnet installations of a particle accelerator, see electron optics. This technique, as described below, is derived using the paraxial approximation, which requires that all ray directions (directions normal to the wavefronts) are at small angles relative to the optical axis of the system, such that the approximation remains valid. A small further implies that the transverse extent of the ray bundles ( and ) is small compared to the length of the optical system (thus "paraxial"). Since a decent imaging system where this is the case for all rays must still focus the paraxial rays correctly, this matrix method will properly describe the positions of focal planes and magnifications, however aberrations still need to be evaluated using full ray-tracing techniques. Matrix definition The ray tracing technique is based on two reference planes, called the input and output planes, each perpendicular to the optical axis of the system. At any point along the optical train an optical axis is defined corresponding to a central ray; that central ray is propagated to define the optical axis further in the optical train which need not be in the same physical direction (such as when bent by a prism or mirror). The transverse directions and (below we only consider the direction) are then defined to be orthogonal to the optical axes applying. A light ray enters a component crossing its input plane at a distance from the optical axis, traveling in a direction that makes an angle with the optical axis. After propagation to the output plane that ray is found at a distance from the optical axis and at an angle with respect to it. and are the indices of refraction of the media in the input and output plane, respectively. The ABCD matrix representing a component or system relates the output ray to the input according to where the values of the 4 matrix elements are thus given by and This relates the ray vectors at the input and output planes by the ray transfer matrix () , which represents the optical component or system present between the two reference planes. A thermodynamics argument based on the blackbody radiation can be used to show that the determinant of a RTM is the ratio of the indices of refraction: As a result, if the input and output planes are located within the same medium, or within two different media which happen to have identical indices of refraction, then the determinant of is simply equal to 1. A different convention for the ray vectors can be employed. Instead of using , the second element of the ray vector is , which is proportional not to the ray angle per se but to the transverse component of the wave vector. This alters the ABCD matrices given in the table below where refraction at an interface is involved. 
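Numerically this bookkeeping is just 2×2 linear algebra. The fragment below uses the (y, θ) ray convention with the same refractive index on both sides of the element, so the determinant should come out as 1; the matrix itself is an illustrative placeholder rather than a specific optical component.

```python
import numpy as np

def ray(y, theta):
    """Paraxial ray as a column vector: height above the optical axis and
    the (small) angle it makes with that axis."""
    return np.array([[y], [theta]])

def apply_element(abcd, ray_in):
    """Output ray = ABCD matrix applied to the input ray."""
    return abcd @ ray_in

# An illustrative ABCD matrix; with equal indices on both sides its determinant is 1.
M = np.array([[1.0, 0.25],
              [-4.0, 0.0]])
out = apply_element(M, ray(2e-3, 0.01))
print(out.ravel())              # output height and angle
print(np.linalg.det(M))         # 1.0 when input and output media match
```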
The use of transfer matrices in this manner parallels the matrices describing electronic two-port networks, particularly various so-called ABCD matrices which can similarly be multiplied to solve for cascaded systems. Some examples Free space example As one example, if there is free space between the two planes, the ray transfer matrix is given by: where is the separation distance (measured along the optical axis) between the two reference planes. The ray transfer equation thus becomes: and this relates the parameters of the two rays as: Thin lens example Another simple example is that of a thin lens. Its RTM is given by: where is the focal length of the lens. To describe combinations of optical components, ray transfer matrices may be multiplied together to obtain an overall RTM for the compound optical system. For the example of free space of length followed by a lens of focal length : Note that, since the multiplication of matrices is non-commutative, this is not the same RTM as that for a lens followed by free space: Thus the matrices must be ordered appropriately, with the last matrix premultiplying the second last, and so on until the first matrix is premultiplied by the second. Other matrices can be constructed to represent interfaces with media of different refractive indices, reflection from mirrors, etc. Eigenvalues A ray transfer matrix can be regarded as a linear canonical transformation. According to the eigenvalues of the optical system, the system can be classified into several classes. Assume the ABCD matrix representing a system relates the output ray to the input according to We compute the eigenvalues of the matrix that satisfy eigenequation by calculating the determinant Let , and we have eigenvalues . According to the values of and , there are several possible cases. For example: A pair of real eigenvalues: and , where . This case represents a magnifier or . This case represents unity matrix (or with an additional coordinate reverter) . . This case occurs if but not only if the system is either a unity operator, a section of free space, or a lens A pair of two unimodular, complex conjugated eigenvalues and . This case is similar to a separable Fractional Fourier Transform. Matrices for simple optical components Relation between geometrical ray optics and wave optics The theory of Linear canonical transformation implies the relation between ray transfer matrix (geometrical optics) and wave optics. Common decomposition There exist infinite ways to decompose a ray transfer matrix into a concatenation of multiple transfer matrices. For example in the special case when : . Resonator stability RTM analysis is particularly useful when modeling the behavior of light in optical resonators, such as those used in lasers. At its simplest, an optical resonator consists of two identical facing mirrors of 100% reflectivity and radius of curvature , separated by some distance . For the purposes of ray tracing, this is equivalent to a series of identical thin lenses of focal length , each separated from the next by length . This construction is known as a lens equivalent duct or lens equivalent waveguide. The of each section of the waveguide is, as above, analysis can now be used to determine the stability of the waveguide (and equivalently, the resonator). That is, it can be determined under what conditions light traveling down the waveguide will be periodically refocused and stay within the waveguide. 
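A minimal numerical sketch of the two matrices just introduced, composed in both orders to show that the ordering matters; the 10 cm spacing and 5 cm focal length are illustrative values only. The eigenvalues printed at the end anticipate the waveguide-stability question taken up next in the text.

```python
import numpy as np

def free_space(d):
    """Propagation over an axial distance d."""
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):
    """Thin lens of focal length f."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

d, f = 0.10, 0.05     # illustrative: 10 cm of free space followed by a 5 cm lens

# Light crosses the free space first, so its matrix sits rightmost:
space_then_lens = thin_lens(f) @ free_space(d)
lens_then_space = free_space(d) @ thin_lens(f)
print(np.allclose(space_then_lens, lens_then_space))   # False: order matters

# Eigenvalues of one period of the equivalent lens waveguide (used below):
print(np.linalg.eigvals(space_then_lens))
```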
To do so, we can find all the "eigenrays" of the system: the input ray vector at each of the mentioned sections of the waveguide times a real or complex factor is equal to the output one. This gives: which is an eigenvalue equation: where is the identity matrix. We proceed to calculate the eigenvalues of the transfer matrix: leading to the characteristic equation where is the trace of the , and is the determinant of the . After one common substitution we have: where is the stability parameter. The eigenvalues are the solutions of the characteristic equation. From the quadratic formula we find Now, consider a ray after passes through the system: If the waveguide is stable, no ray should stray arbitrarily far from the main axis, that is, must not grow without limit. Suppose Then both eigenvalues are real. Since one of them has to be bigger than 1 (in absolute value), which implies that the ray which corresponds to this eigenvector would not converge. Therefore, in a stable waveguide, and the eigenvalues can be represented by complex numbers: with the substitution . For let and be the eigenvectors with respect to the eigenvalues and respectively, which span all the vector space because they are orthogonal, the latter due to The input vector can therefore be written as for some constants and After waveguide sectors, the output reads which represents a periodic function. Gaussian beams The same matrices can also be used to calculate the evolution of Gaussian beams propagating through optical components described by the same transmission matrices. If we have a Gaussian beam of wavelength radius of curvature (positive for diverging, negative for converging), beam spot size and refractive index , it is possible to define a complex beam parameter by: (, , and are functions of position.) If the beam axis is in the direction, with waist at and Rayleigh range , this can be equivalently written as This beam can be propagated through an optical system with a given ray transfer matrix by using the equation: where is a normalization constant chosen to keep the second component of the ray vector equal to . Using matrix multiplication, this equation expands as Dividing the first equation by the second eliminates the normalization constant: It is often convenient to express this last equation in reciprocal form: Example: Free space Consider a beam traveling a distance through free space, the ray transfer matrix is and so consistent with the expression above for ordinary Gaussian beam propagation, i.e. As the beam propagates, both the radius and waist change. Example: Thin lens Consider a beam traveling through a thin lens with focal length . The ray transfer matrix is and so Only the real part of is affected: the wavefront curvature is reduced by the power of the lens , while the lateral beam size remains unchanged upon exiting the thin lens. Higher rank matrices Methods using transfer matrices of higher dimensionality, that is , , and , are also used in optical analysis. In particular, propagation matrices are used in the design and analysis of prism sequences for pulse compression in femtosecond lasers. See also Transfer-matrix method (optics) Linear canonical transformation Footnotes References Further reading External links Thick lenses (Matrix methods) ABCD Matrices Tutorial Provides an example for a system matrix of an entire system. ABCD Calculator An interactive calculator to help solve ABCD matrices. Geometrical optics Accelerator physics
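The stability test and the Gaussian-beam propagation rule derived above can be exercised in a few lines. The mirror-to-lens equivalence f = R/2, the waist form of the complex beam parameter (q = i·π·w0²/λ at the waist) and the relation 1/q = 1/R − iλ/(πnw²) are the standard expressions; they are restated here only because the corresponding equations did not survive in the text, and all numerical values are illustrative.

```python
import numpy as np

def free_space(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def is_stable(period):
    """A periodic lens guide (or equivalent resonator) is stable when the
    eigenvalues of the per-period ABCD matrix lie on the unit circle,
    which for a unit-determinant matrix means |(A + D)/2| <= 1."""
    a, _, _, d = period.ravel()
    return abs((a + d) / 2.0) <= 1.0

def propagate_q(q, abcd):
    """Gaussian beam parameter transformation q' = (A q + B) / (C q + D)."""
    a, b, c, d = abcd.ravel()
    return (a * q + b) / (c * q + d)

def beam_radius(q, wavelength, n=1.0):
    """Spot size w recovered from 1/q = 1/R - i * wavelength / (pi * n * w**2)."""
    return np.sqrt(-wavelength / (np.pi * n * np.imag(1.0 / q)))

# Resonator with two identical mirrors, R = 1 m, separated by L = 0.5 m,
# modelled as a lens guide with one f = R/2 lens per period:
R, L = 1.0, 0.5
period = thin_lens(R / 2.0) @ free_space(L)
print(is_stable(period))                        # True for this geometry

# Propagate a 633 nm beam with a 0.5 mm waist at the input plane through L:
wavelength, w0 = 633e-9, 0.5e-3
q0 = 1j * np.pi * w0**2 / wavelength            # q at the waist
q1 = propagate_q(q0, free_space(L))
print(beam_radius(q1, wavelength))              # slightly larger than w0
```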
Ray transfer matrix analysis
[ "Physics" ]
2,075
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
172,323
https://en.wikipedia.org/wiki/Rubella
Rubella, also known as German measles or three-day measles, is an infection caused by the rubella virus. This disease is often mild, with half of people not realizing that they are infected. A rash may start around two weeks after exposure and last for three days. It usually starts on the face and spreads to the rest of the body. The rash is sometimes itchy and is not as bright as that of measles. Swollen lymph nodes are common and may last a few weeks. A fever, sore throat, and fatigue may also occur. Joint pain is common in adults. Complications may include bleeding problems, testicular swelling, encephalitis, and inflammation of nerves. Infection during early pregnancy may result in a miscarriage or a child born with congenital rubella syndrome (CRS). Symptoms of CRS manifest as problems with the eyes such as cataracts, deafness, as well as affecting the heart and brain. Problems are rare after the 20th week of pregnancy. Rubella is usually spread from one person to the next through the air via coughs of people who are infected. People are infectious during the week before and after the appearance of the rash. Babies with CRS may spread the virus for more than a year. Only humans are infected. Insects do not spread the disease. Once recovered, people are immune to future infections. Testing is available that can verify immunity. Diagnosis is confirmed by finding the virus in the blood, throat, or urine. Testing the blood for antibodies may also be useful. Rubella is preventable with the rubella vaccine, with a single dose being more than 95% effective. Often it is given in combination with the measles vaccine and mumps vaccine, known as the MMR vaccine. When some, but less than 80%, of a population is vaccinated, more women may reach childbearing age without developing immunity by infection or vaccination, thus possibly raising CRS rates. Once infected there is no specific treatment. Rubella is a common infection in many areas of the world. Each year about 100,000 cases of congenital rubella syndrome occur. Rates of disease have decreased in many areas as a result of vaccination. There are ongoing efforts to eliminate the disease globally. In April 2015, the World Health Organization declared the Americas free of rubella transmission. The name "rubella" is from Latin and means little red. It was first described as a separate disease by German physicians in 1814, resulting in the name "German measles". Signs and symptoms Rubella has symptoms similar to those of flu. However, the primary symptom of rubella virus infection is the appearance of a rash (exanthem) on the face which spreads to the trunk and limbs and usually fades after three days, which is why it is often referred to as three-day measles. The facial rash usually clears as it spreads to other parts of the body. Other symptoms include low-grade fever, swollen glands (sub-occipital and posterior cervical lymphadenopathy), joint pains, headache, and conjunctivitis. The swollen glands or lymph nodes can persist for up to a week and the fever rarely rises above 38 °C (100.4 °F). The rash of rubella is typically pink or light red. The rash causes itching and often lasts for about three days. The rash disappears after a few days with no staining or peeling of the skin. When the rash clears up, the skin might shed in very small flakes where the rash covered it. Forchheimer spots occur in 20% of cases and are characterized by small, red papules on the area of the soft palate. Rubella can affect anyone of any age. 
Adult females are particularly prone to arthritis and joint pains. In children, rubella normally causes symptoms that last two days and include: Rash begins on the face which spreads to the rest of the body. Low fever of less than . Posterior cervical lymphadenopathy. In older children and adults, additional symptoms may be present, including Swollen glands Coryza (cold-like symptoms) Aching joints (especially in young females) Severe complications of rubella include: Brain inflammation (encephalitis) Low platelet count Ear infection Coryza in rubella may convert to pneumonia, either direct viral pneumonia or secondary bacterial pneumonia, and bronchitis (either viral bronchitis or secondary bacterial bronchitis). Congenital rubella syndrome Rubella can cause congenital rubella syndrome in the newborn, this being the most severe sequela of rubella. The syndrome (CRS) follows intrauterine infection by the rubella virus and comprises cardiac, cerebral, ophthalmic, and auditory defects. It may also cause prematurity, low birth weight, neonatal thrombocytopenia, anemia, and hepatitis. The risk of major defects in organogenesis is highest for infection in the first trimester. CRS is the main reason a vaccine for rubella was developed. 80–90% of mothers who contract rubella within the critical first trimester have either a miscarriage or a stillborn baby. If the fetus survives the infection, it can be born with severe heart disorders (patent ductus arteriosus being the most common), blindness, deafness, or other life-threatening organ disorders. The skin manifestations are called "blueberry muffin lesions". For these reasons, rubella is included in the TORCH complex of perinatal infections. About 100,000 cases of this condition occur each year. Cause The disease is caused by the rubella virus, in the genus Rubivirus from the family Matonaviridae, that is enveloped and has a single-stranded RNA genome. The virus is transmitted by the respiratory route and replicates in the nasopharynx and lymph nodes. The virus is found in the blood 5 to 7 days after infection and spreads throughout the body. The virus has teratogenic properties and is capable of crossing the placenta and infecting the fetus where it stops cells from developing or destroys them. During this incubation period, the patient is contagious typically for about one week before he/she develops a rash and for about one week thereafter. Increased susceptibility to infection might be inherited as there is some indication that HLA-A1 or factors surrounding A1 on extended haplotypes are involved in virus infection or non-resolution of the disease. Diagnosis Rubella virus specific IgM antibodies are present in people recently infected by rubella virus, but these antibodies can persist for over a year, and a positive test result needs to be interpreted with caution. The presence of these antibodies along with, or a short time after, the characteristic rash confirms the diagnosis. Prevention Rubella infections are prevented by active immunization programs using live attenuated virus vaccines. Two live attenuated virus vaccines, RA 27/3 and Cendehill strains, were effective in the prevention of adult disease. However, their use in prepubertal females did not produce a significant fall in the overall incidence rate of CRS in the UK. Reductions were only achieved by immunisation of all children. The vaccine is now usually given as part of the MMR vaccine. 
The WHO recommends the first dose be given at 12 to 18 months of age with a second dose at 36 months. Pregnant women are usually tested for immunity to rubella early on. Women found to be susceptible are not vaccinated until after the baby is born because the vaccine contains live virus. The immunisation program has been quite successful. Cuba declared the disease eliminated in the 1990s, and in 2004 the Centers for Disease Control and Prevention announced that both the congenital and acquired forms of rubella had been eliminated from the United States. The World Health Organization declared Australia rubella free in October 2018. Screening for rubella susceptibility by history of vaccination or by serology is recommended in the United States for all women of childbearing age at their first preconception counseling visit to reduce incidence of congenital rubella syndrome (CRS). It is recommended that all susceptible non-pregnant women of childbearing age should be offered rubella vaccination. Due to concerns about possible teratogenicity, use of MMR vaccine is not recommended during pregnancy. Instead, susceptible pregnant women should be vaccinated as soon as possible in the postpartum period. In susceptible people passive immunization, in the form of polyclonal immunoglobulins, appears effective up to the fifth day post-exposure. Treatment There is no specific treatment for rubella; however, management is a matter of responding to symptoms to diminish discomfort. Treatment of newborn babies is focused on management of the complications. Congenital heart defects and cataracts can be corrected by direct surgery. Management for ocular congenital rubella syndrome (CRS) is similar to that for age-related macular degeneration, including counseling, regular monitoring, and the provision of low vision devices, if required. Prognosis Rubella infection of children and adults is usually mild, self-limiting, and often asymptomatic. The prognosis in children born with CRS is poor. Epidemiology Rubella occurs worldwide. The virus tends to peak during the spring in countries with temperate climates. Before the vaccine against rubella was introduced in 1969, widespread outbreaks usually occurred every 6–9 years in the United States and 3–5 years in Europe, mostly affecting children in the 5–9 year old age group. Since the introduction of vaccine, occurrences have become rare in those countries with high uptake rates. Vaccination has interrupted the transmission of rubella in the Americas: no endemic case has been observed since February 2009. Vaccination is still strongly recommended as the virus could be reintroduced from other continents should vaccination rates in the Americas drop. During the epidemic in the US between 1962 and 1965, rubella virus infections during pregnancy were estimated to have caused 30,000 stillbirths and 20,000 children to be born impaired or disabled as a result of CRS. Universal immunisation producing a high level of herd immunity is important in the control of epidemics of rubella. In the UK, there remains a large population of men susceptible to rubella who have not been vaccinated. Outbreaks of rubella occurred amongst many young men in the UK in 1993 and in 1996 the infection was transmitted to pregnant women, many of whom were immigrants and were susceptible. Outbreaks still arise, usually in developing countries where the vaccine is not as accessible. 
The complications encountered in pregnancy from rubella infection (miscarriage, fetal death, congenital rubella syndrome) are more common in Africa and Southeast Asia at a rate of 121 per 100,000 live births compared to 2 per 100,000 live births in the Americas and Europe. In Japan, 15,000 cases of rubella and 43 cases of congenital rubella syndrome were reported to the National Epidemiological Surveillance of Infectious Diseases between October 15, 2012, and March 2, 2014, during the 2012–13 rubella outbreak in Japan. They mainly occurred in men aged 31–51 and young adults aged 24–34. History Rubella was first described in the mid-eighteenth century. German physician and chemist, Friedrich Hoffmann, made the first clinical description of rubella in 1740, which was confirmed by de Bergen in 1752 and Orlow in 1758. In 1814, George de Maton first suggested that it be considered a disease distinct from both measles and scarlet fever. All these physicians were German, and the disease was known as Rötheln (contemporary German Röteln). (Rötlich means "reddish" or "pink" in German.) The fact that three Germans described it led to the common name of "German measles." Henry Veale, an English Royal Artillery surgeon, described an outbreak in India. He coined the name "rubella" (from the Latin word, meaning "little red") in 1866. It was formally recognised as an individual entity in 1881, at the International Congress of Medicine in London. In 1914, Alfred Fabian Hess theorised that rubella was caused by a virus, based on work with monkeys. In 1938, Hiro and Tosaka confirmed this by passing the disease to children using filtered nasal washings from acute cases. In 1940, there was a widespread epidemic of rubella in Australia. Subsequently, ophthalmologist Norman McAllister Gregg found 78 cases of congenital cataracts in infants and 68 of them were born to mothers who had caught rubella in early pregnancy. Gregg published an account, Congenital Cataract Following German Measles in the Mother, in 1941. He described a variety of problems now known as congenital rubella syndrome (CRS) and noticed that the earlier the mother was infected, the worse the damage was. Since no vaccine was yet available, some popular magazines promoted the idea of "German measles parties" for infected children to spread the disease to other children (especially girls) to immunize them for life and protect them from later catching the disease when pregnant. The virus was isolated in tissue culture in 1962 by two separate groups led by physicians Paul Douglas Parkman and Thomas Huckle Weller. There was a pandemic of rubella between 1962 and 1965, starting in Europe and spreading to the United States. In the years 1964–65, the United States had an estimated 12.5 million rubella cases (1964–1965 rubella epidemic). This led to 11,000 miscarriages or therapeutic abortions and 20,000 cases of congenital rubella syndrome. Of these, 2,100 died as neonates, 12,000 were deaf, 3,580 were blind, and 1,800 were intellectually disabled. In New York alone, CRS affected 1% of all births. In 1967, the molecular structure of rubella was observed under electron microscopy using antigen-antibody complexes by Jennifer M. Best, June Almeida, J E Banatvala and A P Waterson. In 1969, a live attenuated virus vaccine was licensed. In the early 1970s, a triple vaccine containing attenuated measles, mumps and rubella (MMR) viruses was introduced. By 2006, confirmed cases in the Americas had dropped below 3000 a year. 
However, a 2007 outbreak in Argentina, Brazil, and Chile pushed the cases to 13,000 that year. Eradication efforts On January 22, 2014, the World Health Organization (WHO) and the Pan American Health Organization declared and certified Colombia free of rubella and became the first Latin American country to eliminate the disease within its borders. On April 29, 2015, the Americas became the first WHO region to officially eradicate the disease. The last non-imported cases occurred in 2009 in Argentina and Brazil. The Pan American Health Organization director remarked, "The fight against rubella has taken more than 15 years, but it has paid off with what I believe will be one of the most important pan-American public health achievements of the 21st Century." The declaration was made after 165 million health records and genetically confirming that all recent cases were caused by known imported strains of the virus. Rubella is still common in some regions of the world and Susan E. Reef, team lead for rubella at the CDC's global immunization division, who joined in the announcement, said there was no chance it would be eradicated worldwide before 2020. Rubella is the third disease to be eliminated from the Western Hemisphere with vaccination after smallpox and polio. Etymology From "rubrum" the Latin for "red", rubella means "reddish and small". "German" measles derives from "germanus" which means "similar" in this context. The name rubella is sometimes confused with rubeola, an alternative name for measles in English-speaking countries; the diseases are unrelated. In some other European languages, like Spanish, rubella and rubeola are synonyms, and rubeola is not an alternative name for measles. Thus, in Spanish, rubeola refers to rubella and sarampión refers to measles. See also Blueberry muffin baby Eradication of infectious diseases Exanthema subitum (roseola infantum) References External links Rubella at Wong's Virology. Immunization Action Coalition: Rubella Teratogens Pediatrics Virus-related cutaneous conditions Infectious diseases with eradication efforts Wikipedia medicine articles ready to translate Wikipedia emergency medicine articles ready to translate Vaccine-preventable diseases
Rubella
[ "Chemistry", "Biology" ]
3,422
[ "Teratogens", "Vaccination", "Vaccine-preventable diseases" ]
172,327
https://en.wikipedia.org/wiki/Congenital%20rubella%20syndrome
Congenital rubella syndrome (CRS) occurs when a human fetus is infected with the rubella virus (German measles) via maternal-fetal transmission and develops birth defects. The most common congenital defects affect the ophthalmologic, cardiac, auditory, and neurologic systems. Rubella infection in pregnancy can result in various outcomes ranging from asymptomatic infection to congenital defects to miscarriage and fetal death. If infection occurs 0–11 weeks after conception, the infant has a 90% risk of being affected. If the infection occurs 12–20 weeks after conception, the risk is 20%. Infants are not generally affected if rubella is contracted during the third trimester. Diagnosis of congenital rubella syndrome is made through a series of clinical and laboratory findings and management is based on the infant's clinical presentation. Maintaining rubella outbreak control via vaccination is essential in preventing congenital rubella infection and congenital rubella syndrome. Congenital rubella syndrome was discovered in 1941 by Australian Norman McAlister Gregg. Signs and symptoms The classic triad for congenital rubella syndrome is: Sensorineural deafness (58% of patients) Eye abnormalities—especially retinopathy, cataract, glaucoma, and microphthalmia (43% of patients) Congenital heart disease—especially pulmonary artery stenosis and patent ductus arteriosus (50% of patients) Other manifestations of CRS may include: Spleen, liver, or bone marrow problems (some of which may disappear shortly after birth) Intellectual disability Small head size (microcephaly) Low birth weight Thrombocytopenic purpura, leading to easy or excessive bleeding or bruising Extramedullary hematopoiesis (presents as a characteristic blueberry muffin rash) Enlarged liver (hepatomegaly) Small jaw size (micrognathia) Radiolucent bone disease Skin lesions Children who have been exposed to rubella in the womb should also be watched closely as they age for any indication of: Developmental delay Autism Schizophrenia Growth retardation Learning disabilities Thyroid disorders Diabetes mellitus Diagnosis Diagnosis of congenital rubella syndrome is made based on clinical findings and laboratory criteria. 
Laboratory criteria includes at least one of the following: Detection of the rubella virus via RT-PCR Detection of rubella-specific IgM antibody Detection of infant rubella-specific IgG antibody at higher levels (and persists for a longer time) than expected for passive maternal transmission Isolation of the rubella virus by nasal, blood, throat, urine, or cerebrospinal fluid specimens Clinical definition is characterized by findings in the following categories: Cataracts/congenital glaucoma, congenital heart disease (most commonly, patent ductus arteriosus or peripheral pulmonary artery stenosis), hearing impairment, pigmentary retinopathy Purpura, hepatosplenomegaly, jaundice, microcephaly, developmental delay, meningoencephalitis, radiolucent bone disease A patient is classified into the following cases depending on their clinical and laboratory findings: Suspected: A patient that has one or more of the clinical findings listed above but does not meet the definition for probable or confirmed classification Probable: A patient that does not have laboratory confirmation of congenital rubella but has either two clinical findings from Group 1 as listed above OR one clinical finding from Group 1 and one clinical finding from Group 2 as listed above Confirmed: A patient with at least one laboratory finding and one clinical finding (from either group) as listed above Infection only: A patient with no clinical findings as described above but meeting at least one confirmed laboratory criteria Prevention Vaccinating the majority of the population is effective at preventing congenital rubella syndrome. With the introduction of the rubella vaccine in 1969, the number of cases of rubella in the United States has decreased 99%, from 57,686 cases in 1969 to 271 cases in 1999. For women who plan to become pregnant, the MMR (measles mumps, rubella) vaccination is highly recommended, at least 28 days prior to conception. The vaccine should not be given to women who are already pregnant as it contains live viral particles. Other preventative actions can include the screening and vaccinations of high-risk personnel, such as medical and child care professions. Infants with birth defects suspected to be caused by congenital rubella infection should be investigated thoroughly. Confirmed cases should be reported to the local or state health department to assess control of the virus and isolation of the infant should be maintained. Management Infants with known rubella exposure during pregnancy or those with a confirmed or suspected infection should receive close follow-up and supportive care. There are no medications or antivirals that will shorten the clinical course of the virus. Only those with immunity to rubella should have contact with infected infants, as they can shed viral particles in their respiratory secretions though 1 year of age (unless they test with repeated negative viral cultures at age 3 months). Many infants can be born with multiple birth defects that require multidisciplinary management and interventions based on clinical manifestations. Often these infants will require extended period or life-long follow up with medical specialists. Early diagnosis of congenital rubella syndrome is important for planning future medical care and educational placement. Auditory Care Many infants with CRS may be born with sensorineural deafness and thus should undergo a newborn hearing evaluation. Hearing loss may not be apparent at birth and thus requires close auditory follow up. 
Infants with confirmed hearing impairment may require hearing aids and may benefit from an early intervention program. Ophthalmologic Care Eye abnormalities including cataracts, infantile glaucoma and retinopathy are common in infants born with CRS. Infants should undergo eye examinations after birth and during early childhood. Those with congenital eye defects require care from a pediatric ophthalmologist for specialized care and follow up. Cardiac Care Congenital cardiac anomalies including pulmonary artery stenosis and patent ductus arteriosus can be seen in infants with CRS. Infants should undergo cardiac evaluation soon after birth and those with confirmed cardiac lesions will require specialized care with a pediatric cardiologist for any interventions and follow-up care. See also Jay Horwitz (born 1945), New York Mets executive born with the syndrome References Congenital disorders Infections specific to the perinatal period Rubella Syndromes caused by microbes Virus-related cutaneous conditions Disability Infectious diseases Pediatrics Hematology
Congenital rubella syndrome
[ "Biology" ]
1,321
[ "Microorganisms", "Syndromes caused by microbes" ]
172,331
https://en.wikipedia.org/wiki/Gene%20pool
The gene pool is the set of all genes, or genetic information, in any population, usually of a particular species. Description A large gene pool indicates extensive genetic diversity, which is associated with robust populations that can survive bouts of intense selection. Meanwhile, low genetic diversity (see inbreeding and population bottlenecks) can cause reduced biological fitness and an increased chance of extinction, although as explained by genetic drift new genetic variants, that may cause an increase in the fitness of organisms, are more likely to fix in the population if it is rather small. When all individuals in a population are identical with regard to a particular phenotypic trait, the population is said to be 'monomorphic'. When the individuals show several variants of a particular trait they are said to be polymorphic. History The Russian geneticist Alexander Sergeevich Serebrovsky first formulated the concept in the 1920s as genofond (gene fund), a word that was imported to the United States from the Soviet Union by Theodosius Dobzhansky, who translated it into English as "gene pool." Gene pool concept in crop breeding Harlan and de Wet (1971) proposed classifying each crop and its related species by gene pools rather than by formal taxonomy. Primary gene pool (GP-1): Members of this gene pool are probably in the same "species" (in conventional biological usage) and can intermate freely. Harlan and de Wet wrote, "Among forms of this gene pool, crossing is easy; hybrids are generally fertile with good chromosome pairing; gene segregation is approximately normal and gene transfer is generally easy.". They also advised subdividing each crop gene pool in two: Subspecies A: Cultivated races Subspecies B: Spontaneous races (wild or weedy) Secondary gene pool (GP-2): Members of this pool are probably normally classified as different species than the crop species under consideration (the primary gene pool). However, these species are closely related and can cross and produce at least some fertile hybrids. As would be expected by members of different species, there are some reproductive barriers between members of the primary and secondary gene pools: hybrids may be weak hybrids may be partially sterile chromosomes may pair poorly or not at all recovery of desired phenotypes may be difficult in subsequent generations However, "The gene pool is available to be utilized, however, if the plant breeder or geneticist is willing to put out the effort required." Tertiary gene pool (GP-3): Members of this gene pool are more distantly related to the members of the primary gene pool. The primary and tertiary gene pools can be intermated, but gene transfer between them is impossible without the use of "rather extreme or radical measures" such as: embryo rescue (or embryo culture, a form of plant organ culture) induced polyploidy (chromosome doubling) bridging crosses (e.g., with members of the secondary gene pool). Gene pool centres Gene pool centres refers to areas on the earth where important crop plants and domestic animals originated. They have an extraordinary range of the wild counterparts of cultivated plant species and useful tropical plants. Gene pool centres also contain different sub tropical and temperate region species. 
See also Biodiversity Conservation biology Founder effect Gene flow Genetic drift Small population size Australian Grains Genebank References Ecology Conservation biology Selection Genetics concepts Classical genetics Population genetics Evolutionary biology Biorepositories
Gene pool
[ "Biology" ]
687
[ "Evolutionary biology", "Evolutionary processes", "Selection", "Genetics concepts", "Ecology", "Bioinformatics", "Conservation biology", "Biorepositories" ]
172,333
https://en.wikipedia.org/wiki/Dispersion%20%28optics%29
Dispersion is the phenomenon in which the phase velocity of a wave depends on its frequency. Sometimes the term chromatic dispersion is used to refer to optics specifically, as opposed to wave propagation in general. A medium having this common property may be termed a dispersive medium. Although the term is used in the field of optics to describe light and other electromagnetic waves, dispersion in the same sense can apply to any sort of wave motion such as acoustic dispersion in the case of sound and seismic waves, and in gravity waves (ocean waves). Within optics, dispersion is a property of telecommunication signals along transmission lines (such as microwaves in coaxial cable) or the pulses of light in optical fiber. In optics, one important and familiar consequence of dispersion is the change in the angle of refraction of different colors of light, as seen in the spectrum produced by a dispersive prism and in chromatic aberration of lenses. Design of compound achromatic lenses, in which chromatic aberration is largely cancelled, uses a quantification of a glass's dispersion given by its Abbe number V, where lower Abbe numbers correspond to greater dispersion over the visible spectrum. In some applications such as telecommunications, the absolute phase of a wave is often not important but only the propagation of wave packets or "pulses"; in that case one is interested only in variations of group velocity with frequency, so-called group-velocity dispersion. All common transmission media also vary in attenuation (normalized to transmission length) as a function of frequency, leading to attenuation distortion; this is not dispersion, although sometimes reflections at closely spaced impedance boundaries (e.g. crimped segments in a cable) can produce signal distortion which further aggravates inconsistent transit time as observed across signal bandwidth. Examples The most familiar example of dispersion is probably a rainbow, in which dispersion causes the spatial separation of a white light into components of different wavelengths (different colors). However, dispersion also has an effect in many other circumstances: for example, group-velocity dispersion causes pulses to spread in optical fibers, degrading signals over long distances; also, a cancellation between group-velocity dispersion and nonlinear effects leads to soliton waves. Material and waveguide dispersion Most often, chromatic dispersion refers to bulk material dispersion, that is, the change in refractive index with optical frequency. However, in a waveguide there is also the phenomenon of waveguide dispersion, in which case a wave's phase velocity in a structure depends on its frequency simply due to the structure's geometry. More generally, "waveguide" dispersion can occur for waves propagating through any inhomogeneous structure (e.g., a photonic crystal), whether or not the waves are confined to some region. In a waveguide, both types of dispersion will generally be present, although they are not strictly additive. For example, in fiber optics the material and waveguide dispersion can effectively cancel each other out to produce a zero-dispersion wavelength, important for fast fiber-optic communication. Material dispersion in optics Material dispersion can be a desirable or undesirable effect in optical applications. The dispersion of light by glass prisms is used to construct spectrometers and spectroradiometers. 
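The Abbe-number figure of merit mentioned above is usually computed as V_d = (n_d − 1)/(n_F − n_C) from the refractive indices at the yellow, blue and red Fraunhofer lines; that formula and the glass indices below are standard textbook-style values quoted for illustration, not taken from this text. The refraction-angle spread at a single air–glass surface (a crude stand-in for a prism face) shows why a low Abbe number means stronger colour separation:

```python
import math

def abbe_number(n_d, n_F, n_C):
    """V_d = (n_d - 1) / (n_F - n_C); lower V means stronger dispersion
    across the visible spectrum."""
    return (n_d - 1.0) / (n_F - n_C)

def refraction_angle(n, incidence_deg):
    """Snell's law for light entering the glass from air (index taken as 1)."""
    return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

# Illustrative, catalogue-style indices for a crown-like and a flint-like glass:
crown = dict(n_d=1.5168, n_F=1.5224, n_C=1.5143)
flint = dict(n_d=1.7174, n_F=1.7320, n_C=1.7116)

for name, g in (("crown", crown), ("flint", flint)):
    v = abbe_number(**g)
    spread = refraction_angle(g["n_C"], 45.0) - refraction_angle(g["n_F"], 45.0)
    print(f"{name}: V_d ~ {v:.1f}, blue-to-red spread at 45 deg ~ {spread:.3f} deg")
```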
However, in lenses, dispersion causes chromatic aberration, an undesired effect that may degrade images in microscopes, telescopes, and photographic objectives. The phase velocity v of a wave in a given uniform medium is given by v = c/n, where c is the speed of light in vacuum, and n is the refractive index of the medium. In general, the refractive index is some function of the frequency f of the light, thus n = n(f), or alternatively, with respect to the wave's wavelength n = n(λ). The wavelength dependence of a material's refractive index is usually quantified by its Abbe number or its coefficients in an empirical formula such as the Cauchy or Sellmeier equations. Because of the Kramers–Kronig relations, the wavelength dependence of the real part of the refractive index is related to the material absorption, described by the imaginary part of the refractive index (also called the extinction coefficient). In particular, for non-magnetic materials (μ = μ0), the susceptibility χ that appears in the Kramers–Kronig relations is the electric susceptibility χe = n² − 1. The most commonly seen consequence of dispersion in optics is the separation of white light into a color spectrum by a prism. From Snell's law it can be seen that the angle of refraction of light in a prism depends on the refractive index of the prism material. Since that refractive index varies with wavelength, it follows that the angle that the light is refracted by will also vary with wavelength, causing an angular separation of the colors known as angular dispersion. For visible light, refractive indices n of most transparent materials (e.g., air, glasses) decrease with increasing wavelength λ (that is, dn/dλ < 0). In this case, the medium is said to have normal dispersion. Whereas if the index increases with increasing wavelength (which is typically the case in the ultraviolet), the medium is said to have anomalous dispersion. At the interface of such a material with air or vacuum (index of ~1), Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin θ/n). Thus, blue light, with a higher refractive index, will be bent more strongly than red light, resulting in the well-known rainbow pattern. Group-velocity dispersion Beyond simply describing a change in the phase velocity over wavelength, a more serious consequence of dispersion in many applications is termed group-velocity dispersion (GVD). While phase velocity v is defined as v = c/n, this describes only one frequency component. When different frequency components are combined, as when considering a signal or a pulse, one is often more interested in the group velocity, which describes the speed at which a pulse or information superimposed on a wave (modulation) propagates. In the accompanying animation, it can be seen that the wave itself (orange-brown) travels at a phase velocity much faster than the speed of the envelope (black), which corresponds to the group velocity. This pulse might be a communications signal, for instance, and its information only travels at the group velocity rate, even though it consists of wavefronts advancing at a faster rate (the phase velocity). It is possible to calculate the group velocity from the refractive-index curve n(ω) or more directly from the wavenumber k = ωn/c, where ω is the radian frequency ω = 2πf. Whereas one expression for the phase velocity is vp = ω/k, the group velocity can be expressed using the derivative: vg = dω/dk. 
Or in terms of the phase velocity vp, vg = vp − λ(dvp/dλ), where λ here is the wavelength in the medium. When dispersion is present, not only is the group velocity not equal to the phase velocity, but it also generally varies with wavelength. This is known as group-velocity dispersion and causes a short pulse of light to be broadened, as the different-frequency components within the pulse travel at different velocities. Group-velocity dispersion is quantified as the derivative of the reciprocal of the group velocity with respect to angular frequency, which results in group-velocity dispersion = d²k/dω². If a light pulse is propagated through a material with positive group-velocity dispersion, then the shorter-wavelength components travel slower than the longer-wavelength components. The pulse therefore becomes positively chirped, or up-chirped, increasing in frequency with time. On the other hand, if a pulse travels through a material with negative group-velocity dispersion, shorter-wavelength components travel faster than the longer ones, and the pulse becomes negatively chirped, or down-chirped, decreasing in frequency with time. An everyday example of a negatively chirped signal in the acoustic domain is that of an approaching train hitting deformities on a welded track. The sound caused by the train itself is impulsive and travels much faster in the metal tracks than in air, so that the train can be heard well before it arrives. However, from afar it is not heard as causing impulses, but leads to a distinctive descending chirp, amidst reverberation caused by the complexity of the vibrational modes of the track. Group-velocity dispersion can be heard in that the sound stays audible for a surprisingly long time, up to several seconds. Dispersion control The result of GVD, whether negative or positive, is ultimately temporal spreading of the pulse. This makes dispersion management extremely important in optical communications systems based on optical fiber, since if dispersion is too high, a group of pulses representing a bit-stream will spread in time and merge, rendering the bit-stream unintelligible. This limits the length of fiber that a signal can be sent down without regeneration. One possible answer to this problem is to send signals down the optical fibre at a wavelength where the GVD is zero (e.g., around 1.3–1.5 μm in silica fibres), so pulses at this wavelength suffer minimal spreading from dispersion. In practice, however, this approach causes more problems than it solves because zero GVD unacceptably amplifies other nonlinear effects (such as four-wave mixing). Another possible option is to use soliton pulses in the regime of negative dispersion, a form of optical pulse which uses a nonlinear optical effect to self-maintain its shape. Solitons have the practical problem, however, that they require a certain power level to be maintained in the pulse for the nonlinear effect to be of the correct strength. Instead, the solution that is currently used in practice is to perform dispersion compensation, typically by matching the fiber with another fiber of opposite-sign dispersion so that the dispersion effects cancel; such compensation is ultimately limited by nonlinear effects such as self-phase modulation, which interact with dispersion to make it very difficult to undo. Dispersion control is also important in lasers that produce short pulses. The overall dispersion of the optical resonator is a major factor in determining the duration of the pulses emitted by the laser. 
A pair of prisms can be arranged to produce net negative dispersion, which can be used to balance the usually positive dispersion of the laser medium. Diffraction gratings can also be used to produce dispersive effects; these are often used in high-power laser amplifier systems. Recently, an alternative to prisms and gratings has been developed: chirped mirrors. These dielectric mirrors are coated so that different wavelengths have different penetration lengths, and therefore different group delays. The coating layers can be tailored to achieve a net negative dispersion. In waveguides Waveguides are highly dispersive due to their geometry (rather than just their material composition). Optical fibers are a sort of waveguide for optical frequencies (light) widely used in modern telecommunications systems. The rate at which data can be transported on a single fiber is limited by pulse broadening due to chromatic dispersion among other phenomena. In general, for a waveguide mode with an angular frequency ω(β) at a propagation constant β (so that the electromagnetic fields in the propagation direction z oscillate proportional to exp(i(βz − ωt))), the group-velocity dispersion parameter D is defined as D = d(1/vg)/dλ = −(2πc/λ²)(d²β/dω²), where λ = 2πc/ω is the vacuum wavelength, and vg = dω/dβ is the group velocity. This formula generalizes the one in the previous section for homogeneous media and includes both waveguide dispersion and material dispersion. The reason for defining the dispersion in this way is that |D| is the (asymptotic) temporal pulse spreading Δt per unit bandwidth Δλ per unit distance travelled, commonly reported in ps/(nm⋅km) for optical fibers. In the case of multi-mode optical fibers, so-called modal dispersion will also lead to pulse broadening. Even in single-mode fibers, pulse broadening can occur as a result of polarization mode dispersion (since there are still two polarization modes). These are not examples of chromatic dispersion, as they are not dependent on the wavelength or bandwidth of the pulses propagated. Higher-order dispersion over broad bandwidths When a broad range of frequencies (a broad bandwidth) is present in a single wavepacket, such as in an ultrashort pulse or a chirped pulse or other forms of spread spectrum transmission, it may not be accurate to approximate the dispersion by a constant over the entire bandwidth, and more complex calculations are required to compute effects such as pulse spreading. In particular, the dispersion parameter D defined above is obtained from only one derivative of the group velocity. Higher derivatives are known as higher-order dispersion. These terms are simply a Taylor series expansion of the dispersion relation β(ω) of the medium or waveguide around some particular frequency. Their effects can be computed via numerical evaluation of Fourier transforms of the waveform, via integration of higher-order slowly varying envelope approximations, by a split-step method (which can use the exact dispersion relation rather than a Taylor series), or by direct simulation of the full Maxwell's equations rather than an approximate envelope equation. Spatial dispersion In electromagnetics and optics, the term dispersion generally refers to the aforementioned temporal or frequency dispersion. Spatial dispersion refers to the non-local response of the medium in space; this can be reworded as the wavevector dependence of the permittivity. 
For an exemplary anisotropic medium, the spatial relation between the electric field and the electric displacement field can be expressed as a convolution, Di(r) = ∫ εik(r, r′) Ek(r′) d³r′, where the kernel εik is the dielectric response (susceptibility); its indices make it in general a tensor to account for the anisotropy of the medium. Spatial dispersion is negligible in most macroscopic cases, where the scale of variation of the electric field is much larger than atomic dimensions, because the dielectric kernel dies out at macroscopic distances. Nevertheless, it can result in non-negligible macroscopic effects, particularly in conducting media such as metals, electrolytes and plasmas. Spatial dispersion also plays a role in optical activity and Doppler broadening, as well as in the theory of metamaterials. In gemology In the technical terminology of gemology, dispersion is the difference in the refractive index of a material at the B and G (686.7 nm and 430.8 nm) or C and F (656.3 nm and 486.1 nm) Fraunhofer wavelengths, and is meant to express the degree to which a prism cut from the gemstone demonstrates "fire". Fire is a colloquial term used by gemologists to describe a gemstone's dispersive nature or lack thereof. Dispersion is a material property. The amount of fire demonstrated by a given gemstone is a function of the gemstone's facet angles, the polish quality, the lighting environment, the material's refractive index, the saturation of color, and the orientation of the viewer relative to the gemstone. In imaging In photographic and microscopic lenses, dispersion causes chromatic aberration, which causes the different colors in the image not to overlap properly. Various techniques have been developed to counteract this, such as the use of achromats, multielement lenses with glasses of different dispersion. They are constructed in such a way that the chromatic aberrations of the different parts cancel out. Pulsar emissions Pulsars are spinning neutron stars that emit pulses at very regular intervals ranging from milliseconds to seconds. Astronomers believe that the pulses are emitted simultaneously over a wide range of frequencies. However, as observed on Earth, the components of each pulse emitted at higher radio frequencies arrive before those emitted at lower frequencies. This dispersion occurs because of the ionized component of the interstellar medium, mainly the free electrons, which make the group velocity frequency-dependent. The extra delay added at a frequency ν is t = kDM × (DM/ν²), where the dispersion constant kDM is given by kDM = e²/(2πmec) (in Gaussian units), and the dispersion measure (DM) is the column density of free electrons (total electron content), i.e. the number density of electrons ne integrated along the path traveled by the photon from the pulsar to the Earth, given by DM = ∫ ne dl, with units of parsecs per cubic centimetre (1 pc/cm³ ≈ 3.0857 × 10²² m⁻²). Typically for astronomical observations, this delay cannot be measured directly, since the emission time is unknown. What can be measured is the difference in arrival times at two different frequencies. The delay Δt between a high-frequency (νhi) and a low-frequency (νlo) component of a pulse will be Δt = kDM × DM × (1/νlo² − 1/νhi²). Rewriting the above equation in terms of Δt allows one to determine the DM by measuring pulse arrival times at multiple frequencies. This in turn can be used to study the interstellar medium, as well as allow observations of pulsars at different frequencies to be combined. See also Calculation of glass properties incl. 
dispersion Cauchy's equation Dispersion relation Fast radio burst (astronomy) Fluctuation theorem Green–Kubo relations Group delay Intramodal dispersion Kramers–Kronig relations Linear response function Multiple-prism dispersion theory Sellmeier equation Ultrashort pulse Virtually imaged phased array References External links Dispersive Wiki – discussing the mathematical aspects of dispersion. Dispersion – Encyclopedia of Laser Physics and Technology Animations demonstrating optical dispersion by QED Interactive webdemo for chromatic dispersion Institute of Telecommunications, University of Stuttgart Glass physics Optical phenomena
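As a numerical complement to the material-dispersion relations above (v = c/n, the group index, and the dispersion parameter D), the following sketch is illustrative only and not part of the original article. It assumes a simple two-term Cauchy model n(λ) = A + B/λ² with coefficients loosely representative of a crown-type glass; both coefficients and the finite-difference scheme are assumptions of this example.

```python
# Minimal sketch (not from the article): material dispersion from a Cauchy model.
# The Cauchy coefficients below are illustrative values for a crown-type glass,
# not authoritative data.
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

A = 1.5046    # dimensionless Cauchy coefficient (assumed)
B = 4.20e-15  # m^2 (i.e. 0.00420 um^2), assumed illustrative value

def n_of_lambda(lam_m: float) -> float:
    """Refractive index n(lambda) from the two-term Cauchy equation."""
    return A + B / lam_m**2

def dispersion_quantities(lam_m: float, h: float = 1e-10):
    """Return (n, phase velocity, group velocity, D) at vacuum wavelength lam_m.

    Derivatives are taken by central finite differences, so the result is an
    approximation; D is converted to the customary ps/(nm*km).
    """
    n = n_of_lambda(lam_m)
    dn = (n_of_lambda(lam_m + h) - n_of_lambda(lam_m - h)) / (2 * h)
    d2n = (n_of_lambda(lam_m + h) - 2 * n + n_of_lambda(lam_m - h)) / h**2

    v_phase = C_LIGHT / n                 # v = c/n
    n_group = n - lam_m * dn              # group index n_g = n - lambda dn/dlambda
    v_group = C_LIGHT / n_group
    D_si = -(lam_m / C_LIGHT) * d2n       # material dispersion, s/m^2
    D_ps_nm_km = D_si * 1e6               # 1 s/m^2 = 1e6 ps/(nm*km)
    return n, v_phase, v_group, D_ps_nm_km

for lam_nm in (400, 550, 700):
    n, vp, vg, D = dispersion_quantities(lam_nm * 1e-9)
    print(f"{lam_nm} nm: n={n:.4f}  v_p={vp:.3e} m/s  v_g={vg:.3e} m/s  D={D:.0f} ps/(nm*km)")
```

As expected for normal dispersion, the group velocity comes out slower than the phase velocity and D is negative across the visible band in this toy model.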
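The pulsar dispersion-measure relations above can likewise be illustrated with a short sketch. This is not article content: the numerical value used for kDM (about 4.149 × 10³ s·MHz²·pc⁻¹·cm³) is a commonly quoted figure and is treated here as an assumption.

```python
# Illustrative sketch (not from the article): dispersive delay of a pulsar pulse.
# K_DM below is a commonly used value of the dispersion constant; treat it as an
# assumption rather than a definitive figure.
K_DM = 4.149e3  # s * MHz^2 * pc^-1 * cm^3

def delay_s(dm_pc_cm3: float, freq_mhz: float) -> float:
    """Extra propagation delay (s) at one frequency: t = k_DM * DM / nu^2."""
    return K_DM * dm_pc_cm3 / freq_mhz**2

def delta_t(dm_pc_cm3: float, f_lo_mhz: float, f_hi_mhz: float) -> float:
    """Arrival-time difference (s) between a low- and a high-frequency component."""
    return K_DM * dm_pc_cm3 * (f_lo_mhz**-2 - f_hi_mhz**-2)

def dm_from_delay(delta_t_s: float, f_lo_mhz: float, f_hi_mhz: float) -> float:
    """Invert the relation above to estimate DM from a measured arrival-time difference."""
    return delta_t_s / (K_DM * (f_lo_mhz**-2 - f_hi_mhz**-2))

# Example: a pulse observed at 400 MHz and 1400 MHz with DM = 30 pc/cm^3.
dm = 30.0
dt = delta_t(dm, 400.0, 1400.0)
print(f"delay spread 400-1400 MHz: {dt * 1e3:.1f} ms")
print(f"recovered DM: {dm_from_delay(dt, 400.0, 1400.0):.1f} pc/cm^3")
```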
Dispersion (optics)
[ "Physics", "Materials_science", "Engineering" ]
3,831
[ "Glass engineering and science", "Physical phenomena", "Optical phenomena", "Glass physics", "Condensed matter physics" ]
172,335
https://en.wikipedia.org/wiki/Dispersion%20%28materials%20science%29
In materials science, dispersion is the fraction of atoms of a material exposed to the surface. In general, D = NS/NT, where D is the dispersion, NS is the number of surface atoms, and NT is the total number of atoms of the material. It is an important concept in heterogeneous catalysis, since only atoms exposed to the surface can affect catalytic surface reactions. Dispersion increases with decreasing crystallite size and approaches unity at a crystallite diameter of about 1 nm. See also Emulsion dispersion References Materials science
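As an illustration of the definition D = NS/NT, the toy calculation below (not part of the article) counts surface versus total atoms for an idealized simple-cubic cluster, showing how dispersion approaches unity as the crystallite shrinks to a few atoms across.

```python
# Toy illustration (not from the article): dispersion D = N_S / N_T for an
# idealized simple-cubic cluster that is m atoms on a side.
def dispersion(m: int) -> float:
    """Fraction of atoms lying on the surface of an m x m x m cubic cluster."""
    total = m ** 3
    interior = max(m - 2, 0) ** 3  # atoms with no face on the surface
    surface = total - interior
    return surface / total

for m in (2, 5, 10, 50, 100):
    print(f"{m:>3} atoms per edge: D = {dispersion(m):.3f}")
# Small clusters (a few atoms across, i.e. roughly nanometre-sized) have D near 1,
# while large crystallites expose only a tiny fraction of their atoms.
```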
Dispersion (materials science)
[ "Physics", "Materials_science", "Engineering" ]
119
[ "Materials science stubs", "Applied and interdisciplinary physics", "Materials science", "nan" ]
172,348
https://en.wikipedia.org/wiki/Thrust-specific%20fuel%20consumption
Thrust-specific fuel consumption (TSFC) is the fuel efficiency of an engine design with respect to thrust output. TSFC may also be thought of as fuel consumption (grams/second) per unit of thrust (newtons, or N), hence thrust-specific. This figure is inversely proportional to specific impulse, which is the amount of thrust produced per unit fuel consumed. TSFC or SFC for thrust engines (e.g. turbojets, turbofans, ramjets, rockets, etc.) is the mass of fuel needed to provide the net thrust for a given period e.g. lb/(h·lbf) (pounds of fuel per hour-pound of thrust) or g/(s·kN) (grams of fuel per second-kilonewton). Mass of fuel is used, rather than volume (gallons or litres) for the fuel measure, since it is independent of temperature. Specific fuel consumption of air-breathing jet engines at their maximum efficiency is more or less proportional to exhaust speed. The fuel consumption per mile or per kilometre is a more appropriate comparison for aircraft that travel at very different speeds. There also exists power-specific fuel consumption, which equals the thrust-specific fuel consumption divided by speed. It can have units of pounds per hour per horsepower. Significance of SFC SFC is dependent on engine design, but differences in the SFC between different engines using the same underlying technology tend to be quite small. Increasing overall pressure ratio on jet engines tends to decrease SFC. In practical applications, other factors are usually highly significant in determining the fuel efficiency of a particular engine design in that particular application. For instance, in aircraft, turbine (jet and turboprop) engines are typically much smaller and lighter than equivalently powerful piston engine designs, both properties reducing the levels of drag on the plane and reducing the amount of power needed to move the aircraft. Therefore, turbines are more efficient for aircraft propulsion than might be indicated by a simplistic look at the table below. SFC varies with throttle setting, altitude, climate. For jet engines, air flight speed is an important factor too. Air flight speed counteracts the jet's exhaust speed. (In an artificial and extreme case with the aircraft flying exactly at the exhaust speed, one can easily imagine why the jet's net thrust should be near zero.) Moreover, since work is force (i.e., thrust) times distance, mechanical power is force times speed. Thus, although the nominal SFC is a useful measure of fuel efficiency, it should be divided by speed when comparing engines at different speeds. For example, Concorde cruised at 1354 mph, or 7.15 million feet per hour, with its engines giving an SFC of 1.195 lb/(lbf·h) (see below); this means the engines transferred 5.98 million foot pounds per pound of fuel (17.9 MJ/kg), equivalent to an SFC of 0.50 lb/(lbf·h) for a subsonic aircraft flying at 570 mph, which would be better than even modern engines; the Olympus 593 used in the Concorde was the world's most efficient jet engine. However, Concorde ultimately has a heavier airframe and, due to being supersonic, is less aerodynamically efficient, i.e., the lift to drag ratio is far lower. In general, the total fuel burn of a complete aircraft is of far more importance to the customer. Units Typical values of SFC for thrust engines The following table gives the efficiency for several engines when running at 80% throttle, which is approximately what is used in cruising, giving a minimum SFC. 
The efficiency is the amount of power propelling the plane divided by the rate of energy consumption. Since the power equals thrust times speed, the efficiency is given by η = V/(SFC·h), where V is speed and h is the energy content per unit mass of fuel (the higher heating value is used here, and at higher speeds the kinetic energy of the fuel or propellant becomes substantial and must be included). See also Notes References External links GE CF6 website NASA Cruise SFC vs. Year SFC by Engine/Mfg Engine technology Power (physics)
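The efficiency relation above can be checked against the Concorde figures quoted earlier in the article. The sketch below is illustrative only; the jet-fuel heating value of roughly 46 MJ/kg is an assumed typical figure, not a number taken from the article.

```python
# Worked check (not from the article) of the SFC arithmetic quoted for Concorde.
FT_LBF_TO_J = 1.3558  # joules per foot-pound force
LB_TO_KG = 0.45359    # kilograms per pound

def work_per_fuel_ftlbf_per_lb(speed_mph: float, tsfc_lb_per_lbf_h: float) -> float:
    """Propulsive work delivered per pound of fuel: (thrust*distance)/fuel = V / TSFC."""
    speed_ft_per_h = speed_mph * 5280.0
    return speed_ft_per_h / tsfc_lb_per_lbf_h

# Concorde at cruise: 1354 mph, SFC = 1.195 lb/(lbf*h)  (figures from the article)
work = work_per_fuel_ftlbf_per_lb(1354.0, 1.195)
work_mj_per_kg = work * FT_LBF_TO_J / LB_TO_KG / 1e6
print(f"work per fuel mass: {work/1e6:.2f} million ft*lbf/lb  (~{work_mj_per_kg:.1f} MJ/kg)")

# Equivalent SFC for a subsonic aircraft at 570 mph delivering the same work per fuel mass
equiv_sfc = 570.0 * 5280.0 / work
print(f"equivalent subsonic SFC at 570 mph: {equiv_sfc:.2f} lb/(lbf*h)")

# Overall efficiency eta = V / (SFC * h), using an assumed heating value h ~ 46 MJ/kg
H_FUEL_J_PER_KG = 46e6  # assumed typical jet-fuel heating value
eta = work_mj_per_kg * 1e6 / H_FUEL_J_PER_KG
print(f"overall efficiency with h = 46 MJ/kg: {eta:.0%}")
```

Running the sketch reproduces the article's figures of about 5.98 million foot-pounds per pound of fuel (roughly 17.9 MJ/kg) and an equivalent subsonic SFC of about 0.50 lb/(lbf·h).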
Thrust-specific fuel consumption
[ "Physics", "Mathematics", "Technology" ]
869
[ "Force", "Physical quantities", "Engines", "Quantity", "Engine technology", "Energy (physics)", "Power (physics)", "Wikipedia categories named after physical quantities" ]
172,384
https://en.wikipedia.org/wiki/Lichenology
Lichenology is the branch of mycology that studies the lichens, symbiotic organisms formed by an intimate association of a microscopic alga (or a cyanobacterium) with a filamentous fungus. Lichens are chiefly characterized by this symbiosis. Study of lichens draws knowledge from several disciplines: mycology, phycology, microbiology and botany. Scholars of lichenology are known as lichenologists. Study of lichens is conducted by both professional and amateur lichenologists. Methods for species identification include reference to single-access keys on lichens. An example reference work is Lichens of North America (2001) by Irwin M. Brodo, Sylvia Sharnoff and Stephen Sharnoff and that book's 2016 expansion, Keys to Lichens of North America: Revised and Expanded by the same three authors joined by Susan Laurie-Bourque. A chemical spot test can be used to detect the presence of certain lichen products which can be characteristic of a given lichen species. Some components of certain lichens may also fluoresce under ultraviolet light, providing another form of lichen identification test. Lichenologists may also study the growth and growth rate of lichens, lichenometry, the role of lichens in nutrient cycling, the ecological role of lichens in biological soil crusts, the morphology of lichens, their anatomy and physiology, and ethnolichenology topics including the study of edible lichens. As with any other field of study, lichenology has its own rules for taxonomic nomenclature and its own specialized terminology. History The beginnings Lichens as a group have received less attention in classical treatises on botany than other groups, although the relationship between humans and some species has been documented from early times. Several species appear in the works of Dioscorides, Pliny the Elder and Theophrastus, although these treatments are not very detailed. During the first centuries of the modern age they were usually put forward as examples of spontaneous generation, and their reproductive mechanisms were totally ignored. For centuries naturalists had placed lichens in diverse groups until, in the early 18th century, the French researcher Joseph Pitton de Tournefort grouped them into their own genus in his Institutiones Rei Herbariae. He adopted the Latin term lichen, which had already been used by Pliny, who had imported it from Theophrastus, but until then the term had not been widely employed. The original meaning of the Greek word λειχήν (leichen) was moss; it in turn derives from the Greek verb λείχω (leicho), "to lick", a reference to the great ability of these organisms to absorb water. In its original use the term signified mosses and liverworts as well as lichens. Some forty years later, Dillenius, in his Historia Muscorum, made the first division of the group created by Tournefort, separating the sub-families Usnea, Coralloides and Lichens according to the morphological characteristics of the lichen thallus. After the revolution in taxonomy brought about by Linnaeus and his new system of classification, lichens were retained in the plant kingdom as a single group, Lichen, with eight divisions according to the morphology of the thallus. The taxonomy of lichens was first intensively investigated by the Swedish botanist Erik Acharius (1757–1819), who is therefore sometimes named the "father of lichenology". Acharius was a student of Carl Linnaeus. 
Some of his more important works on the subject, which marked the beginning of lichenology as a discipline, are: Lichenographiae Suecicae Prodromus (1798) Methodus lichenum (1803) Lichenographia universalis (1810) Synopsis methodica lichenum (1814) Later lichenologists include the American scientists Vernon Ahmadjian and Edward Tuckerman and the Russian evolutionary biologist Konstantin Merezhkovsky, as well as amateurs such as Louisa Collings. Over the years, research shed new light on the nature of these organisms, which were still classified as plants. A controversial issue surrounding lichens since the early 19th century is their reproduction. In these years a group of researchers faithful to the tenets of Linnaeus considered that lichens reproduced sexually and had sexual reproductive organs, as in other plants, independent of whether asexual reproduction also occurred. Other researchers only considered asexual reproduction by means of propagules. 19th century Against this background appeared the Swedish botanist Erik Acharius, a disciple of Linnaeus, who is today considered the father of lichenology; he began the taxonomy of lichens with his pioneering studies of Swedish lichens in Lichenographiae Suecicae Prodromus of 1798 and in his Synopsis Methodica Lichenum, Sistens omnes hujus Ordinis Naturalis of 1814. These studies and classifications are the cornerstone of subsequent investigations. In these early years of structuring the new discipline, various works of outstanding scientific importance appeared, such as Lichenographia Europaea Reformata, published in 1831 by Elias Fries, and Enumeratio Critico Lichenum Europaeorum of 1850 by Ludwig Schaerer in Germany. But these works suffered from being superficial, mere lists of species without further physiological study. It took until the middle of the 19th century for research to catch up using biochemical and physiological methods. In Germany Johann Bayrhoffer, in France Edmond Tulasne and Camille Montagne, in Russia Fedor Buhse, in England William Allport Leighton, and in the United States Edward Tuckerman began to publish works of great scientific importance. Scientific publications settled many unknown facts about lichens. In the French publication Annales des Sciences Naturelles, in an article of 1852, "Memorie pour servir a l'Histoire des Lichens Organographique et Physiologique" by Edmond Tulasne, the reproductive organs or apothecia of lichens were identified. These new discoveries presented scientists with a growing contradiction: the apothecium is a reproductive organ unique to fungi and absent in photosynthetic organisms. With improvements in microscopy, algae were identified in the lichen structure, which heightened the contradictions. At first the presence of algae was attributed to contamination from the collection of samples in damp conditions, and the algae were not considered to be in a symbiotic relation with the fungal part of the thallus. That the algae continued to multiply showed that they were not mere contaminants. It was Anton de Bary, a German mycologist who specialised in phytopathology, who first suggested in 1865 that lichens were merely the result of ascomycete fungi parasitising Nostoc-type and other algae. Subsequent studies, such as those carried out by Andrei Famintsyn and Baranetzky in 1867, showed that the algal component did not depend on the lichen thallus and could live independently of it. 
It was in 1869 that Simon Schwendener demonstrated that all lichens were the result of fungal attack on algal cells and that all these algae also exist free in nature. This researcher was the first to recognise the dual nature of lichens as the result of the capture of the algal component by the fungal component. In 1873 Jean-Baptiste Edouard Bornet concluded from studying many different lichen species that the relationship between fungi and algae was purely symbiotic. It was also established that algae could associate with many different fungi to form different lichen phenotypes. 20th century In 1909 the Russian lichenologist Konstantin Mereschkowski presented a research paper, "The Theory of two Plasms as the basis of Symbiogenesis, A new study on the Origin of Organisms", which aimed to explain a new theory of symbiogenesis based on lichens and other organisms, drawing on his earlier work "Nature and Origin of Chromatophores in the Plant Kingdom". These new ideas can be studied today under the title of the Theory of Endosymbiosis. Despite the above studies, the dual nature of lichens remained no more than a theory until, in 1939, the Swiss researcher Eugen A. Thomas was able to reproduce in the laboratory the phenotype of the lichen Cladonia pyxidata by combining its two identified components. During the 20th century, botany and mycology were still attempting to solve the two main problems surrounding lichens: on the one hand, the definition of lichens and the relationship between the two symbionts, and on the other, the taxonomic position of these organisms within the plant and fungal kingdoms. There appeared numerous renowned researchers within the field of lichenology such as Henry Nicollon des Abbayes, William Alfred Weber, Antonina Georgievna Borissova, Irwin M. Brodo, and George Albert Llano. Lichenology has found applications beyond biology itself in the field of geology, in a technique known as lichenometry, where the age of an exposed surface can be found by studying the age of lichens growing on it. Age dating in this way can be absolute or relative, because the growth of these organisms can be arrested under various conditions. The technique provides an average age for the older individual lichens, giving a minimum age for the surface being studied. Lichenometry relies upon the fact that the maximum diameter of the largest thallus of an epilithic lichen growing on a substrate is directly proportional to the time from first exposure of the area to the environment, as seen in studies by Roland Beschel in 1950, and is especially useful in areas exposed for less than 1000 years. Growth is greatest in the first 20 to 100 years, with 15–50 mm growth per year, and less in the following years, with average growth of 2–4 mm per year. The difficulty of giving a definition applicable to every known lichen has been debated since lichenologists first recognised the dual nature of lichens. In 1982 the International Association for Lichenology convened a meeting to adopt a single definition of lichen, drawing on the proposals of a committee. The chairman of this committee was the renowned researcher Vernon Ahmadjian. The definition finally adopted was that lichen could be considered the association between a fungus and a photosynthetic symbiont, resulting in a thallus of specific structure. Such a simple a priori definition soon drew criticism from various lichenologists, and reviews and suggestions for amendments soon emerged. For example, David L. 
Hawksworth considered the definition imperfect because it is impossible to determine what counts as a thallus of specific structure, since thalli change depending upon the substrate and the conditions in which they develop. This researcher represents one of the main trends among lichenologists who consider it impossible to give a single definition to lichens, since they are a unique type of organism. Today studies in lichenology are not restricted to the description and taxonomy of lichens but have application in various scientific fields. Especially important are studies on environmental quality that are made through the interaction of lichens with their environment. Lichens are extremely sensitive to various air pollutants, especially to sulphur dioxide, which causes acid rain and prevents water absorption. Lichens in pharmacology Although several species of lichen have been used in traditional medicine, it was not until the early 20th century that modern science became interested in them. The discovery of various substances with antibacterial action in lichen thalli was essential for scientists to become aware of the possible importance of these organisms to medicine. From the 1940s there appeared various works by the noted microbiologist Rufus Paul Burkholder, who demonstrated antibacterial action of lichens of the genus Usnea against Bacillus subtilis and Sarcina lutea. Studies showed that the substance that inhibited growth of bacteria was usnic acid. Something similar occurred with the substance Ramelina, synthesised by the lichen Ramalina reticulata; nevertheless, these substances proved ineffective against Gram-negative bacteria such as Escherichia coli and Pseudomonas. With these investigations, the number of antibacterial substances and possible drug targets known to be produced by lichens increased (ergosterol, usnic acid, etc.). Interest in the potential of substances synthesised by lichens increased with the end of World War II, along with the growing interest in all antibiotic substances. In 1947, antibacterial action was identified in extracts of Cetraria islandica, and the compounds identified as responsible for bacterial inhibition were shown to be d-protolichosteric acid and d-1-usnic acid. Further investigations have identified novel antibacterial substances such as alectosarmentin and atranorin. Antibacterial action of substances produced by lichens is related to their ability to disrupt bacterial proteins, with a subsequent loss of bacterial metabolic capacity. This is possible due to the action of lichen phenolics such as usnic acid derivatives. From the 1950s, the lichen product usnic acid was the object of most antitumour research. These studies revealed some in vitro antitumour activity by substances identified in two common lichens, Peltigera leucophlebia and Collema flaccidum. Recent work in the field of applied biochemistry has shown some antiviral activity for some lichen substances. In 1989, K. Hirabayashi presented his investigations on lichen polysaccharides inhibitory to HIV infection. Bibliography "Protocols in Lichenology: Culturing, Biochemistry, Ecophysiology and Use in Biomonitoring" (Springer Lab Manuals, Kraner, Ilse, Beckett, Richard and Varma, Ajit (28 Nov 2001) Lichenology in the British Isles, 1568–1975: An Historical and Biographical Survey, D. L. Hawksworth and M. R. D. Seaward (Dec 1977) "Lichenology: Progress and Problems" (Special Volumes/Systematics Association) Denis Hunter Brown et al. 
(10 May 1976) Lichenology in Indian Subcontinent, Dharani Dhar Awasthi (1 Jan 2000) Lichenology in Indian Subcontinent 1966–1977, Ajay Singh (1980) CRC Handbook of Lichenology, Volume II: v.2, Margalith Galun (30 Sep 1988) A Textbook of General Lichenology, Albert Schneider (24 May 2013) Horizons in Lichenology D. H. Dalby (1988) Bibliography of Irish Lichenology, M. E. Mitchell (Nov 1972) Diccionario de Liquenologia/Dictionary of Lichenology, Kenneth Allen Hornak (1998) "Progress and Problems in Lichenology in the Eighties: Proceedings" (Bibliotheca Lichenologica), Elisabeth Peveling (1987) A Textbook of General Lichenology with Descriptions and Figures of the Genera Occurring in the North Eastern United States, Albert Schneider (Mar 2010) The Present Status and Potentialities of the Lichenology in China, Liu Hua Jie (1 Jan 2000) Lichens to Biomonitor the Environment, Shukla, D. K. Vertika, Upreti and Bajpai, Rajesh (Aug 2013) Lichenology and Bryology in the Galapagos Islands with Checklists of the Lichens and Bryophytes thus far Reported, William A. Weber (1966) Flechten Follmann: Contributions to Lichenology in Honour of Gerhard Follmann, Gerhard Follmann, F. J. A. Daniels, Margot Schultz and Jorge Peine (1995) Environmental Lichenology: Biomonitoring Trace Element Air Pollution, Joyce E. Sloof (1993) The Journal of the Hattori Botanical Laboratory: Devoted to Bryology and Lichenology, Zennosuke Iwatsuki (1983) Contemporary Lichenology and Lichens of Western Oregon, W. Clayton Fraser (1968) Irish Lichenology 1858–1880: Selected Letters of Isaac Carroll, Theobald Jones, Charles Larbalestier (1996) Lichens from West of Hudson's Bay (Lichens of Arctic America Vol. 1), John W. Thompson (1953) Les Lichens - Morphologie, Biologie, Systematique, Fernand Moreau (1927) "Eric Acharius and his Influence on English Lichenology" (Botany Bulletins), David J. Galloway (Jul 1988) "Lichenographia Thompsoniana: North American Lichenology in Honour of John W. Thompson", M. G. Gleen (May 1998) "Monitoring with Lichens-Proceedings of the NATO Advanced Research Workshop", Nimis, Pier Luigi, Scheidegger, Christoph and Wolseley, Patricia (Dec 2001) Contributions to Lichenology: In Honour of A. Henssen, H. M. Jahns and A. Henssen (1990) Studies in Lichenology with Emphasis on Chemotaxonomy, Geography and Phytochemistry: Festschrift Christian Leuckert, Johannes Gunther Knoph, Kunigunda Schrufer and Harry J. M. Sipman (1995) Swedish Lichenology: Dedicated to Roland Moberg, Jan Erik Mattsson, Mats Wedin and Inga Hedberg (Sep 1999) Index of Collectors in Knowles the Lichens of Ireland (1929) and Porter's Supplement: with a Conspectus of Lichen, M. E. Mitchell, Matilda C. Knowles and Lilian Porter (1998) Biodeterioration of Stone Surfaces: Lichens and Biofilms as Weathering Agents of Rocks and Cultural Heritage, Larry St. Clair and Mark Seaward (Oct 2011) The Lichen Symbiosis, Vernon Ahmadjian (Aug 1993) Lichen Biology, Thomas H. Nash (Jan 2008) Fortschritte der Chemie organischer Naturstoffe/ Progress in the Chemistry of Organic Natural Products, S. 
Hunek (Oct 2013) Notable lichenologists Lichen collections British Lichen Society Botanische Staatssammlung München Canadian Museum of Nature Centraalbureau voor Schimmelcultures National Botanical Research Institute (CSIR), India Iowa State University, Ada Hayden Herbarium, Ames, Iowa National Museum Cardiff Natural History Museum, London New York Botanical Garden Royal Botanic Garden, Edinburgh Royal Botanic Gardens, Kew, London University of Michigan Herbarium, Ann Arbor, Michigan University of Wisconsin-Madison Herbarium, Madison, Wisconsin Ulster Museum, Belfast See also Outline of lichens Acharius Medal, an award in lichenology Footnotes References External links American Bryological and Lichenological Society Belgium, Luxembourg and Northern France, Lichens of British Lichen Society Central European Bryological and Lichenological Society (Ger) Checklists of Lichens and Lichenolous Fungi Chilean Lichens (Spa) Czech Bryological and Lichenological Society (Cze) French Lichenological Society (Fre) Guide to using a Lichen-Based Index to Assess Nitrogen Air Quality Identifying North American Lichens a Guide to the Literature International Association for Lichenology Irish Lichens Italian Lichenological Society (Ita) Japanese Lichenological Society (Eng) Japanese Lichenological Society (Eng) Lichenological Resources (Rus) Lichen Herbarium University of Oslo Lichenland Oregon State University Links to Lichens and Lichenologists Lichens of Ireland Project LichenPortal.org - The Consortium of Lichen Herbaria Microscopy of Lichens (Ger) Netherlands Bryological and Lichenological Society (nl) National Biodiversity Gateway Nordic Lichen Society (Eng) North American Lichens Paleo-Lichenology (Ger) Russian Lichens (Rus) Scottish Lichens Swedish Lichens Lief & Anita Stridvall Swiss Bryological and Lichenological Society (Ger) Tropical Lichens UK Lichens Branches of biology Branches of mycology Fungi and humans Mycology
Lichenology
[ "Biology" ]
4,161
[ "Branches of mycology", "Fungi", "Mycology", "Fungi and humans", "nan", "Lichenology", "Humans and other species" ]
172,395
https://en.wikipedia.org/wiki/Larry%20Ellison
Lawrence Joseph Ellison (born August 17, 1944) is an American businessman and entrepreneur who co-founded software company Oracle Corporation. He was Oracle's chief executive officer from 1977 to 2014 and is now its chief technology officer and executive chairman. As of January 20, 2025, he is the fourth-wealthiest person in the world, according to Bloomberg Billionaires Index, with an estimated net worth of US$188 billion, and the second-wealthiest in the world according to Forbes, with an estimated net worth of $237 billion. Ellison is also known for his ownership of 98% of Lānaʻi, the sixth-largest island in the Hawaiian Islands. Early life and education Ellison was born on August 17, 1944, in New York City to Florence Spellman, an unwed Jewish mother. His biological father was an Italian-American United States Army Air Corps pilot. After Ellison contracted pneumonia at the age of nine months, his mother gave him to her aunt and uncle for adoption. He did not meet his biological mother again until he was 48. Ellison moved to Chicago's South Shore, then a primarily Jewish middle-class neighborhood. He remembers his adoptive mother, Lillian Spellman Ellison, as warm and loving, in contrast to his austere, unsupportive, and often distant adoptive father, who had chosen the name Ellison to honor his point of entry into the United States, Ellis Island. Louis Ellison was a government employee who had made a small fortune in Chicago real estate, only to lose it during the Great Depression. Although Ellison was raised in a Reform Jewish home by his adoptive parents, who attended synagogue regularly, he remained a religious skeptic. At age 13, Ellison refused to have a bar mitzvah celebration. Ellison states: "While I think I am religious in one sense, the particular dogmas of Judaism are not dogmas I subscribe to. I don't believe that they are real. They're interesting stories. They're interesting mythology, and I certainly respect people who believe these are literally true, but I don't. I see no evidence for this stuff." Ellison says that his fondness for Israel is not connected to religious sentiments but rather due to the innovative spirit of Israelis in the technology sector. Ellison attended South Shore High School in Chicago and later was admitted to University of Illinois at Urbana–Champaign and was enrolled as a pre-med student. At the university, he was named science student of the year. He withdrew without taking final exams after his sophomore year because his adoptive mother had just died. After spending the summer of 1966 in California, he then attended the University of Chicago for one term, where he studied physics and mathematics and also first encountered computer design. He then moved to Berkeley, California, and began his career as a computer programmer for different companies. Early career and Oracle 1977–1994 During the 1970s, after a brief stint at Amdahl Corporation, Ellison began working for Ampex Corporation. His first project included a database for the CIA, code-named "Oracle". Ellison was inspired by a paper written by Edgar F. Codd on relational database systems called "A Relational Model of Data for Large Shared Data Banks". In 1977, he founded Software Development Laboratories (SDL) with two partners and an investment of $2,000; $1,200 of the money was his. In 1979, the company renamed itself Relational Software, Inc. (RSI). 
Ellison had heard about the IBM System R database, also based on Codd's theories, and wanted Oracle to achieve compatibility with it, but IBM made this impossible by refusing to share System R's error codes. The initial release of the Oracle Database in 1979 was called Oracle version 2; there was no Oracle version 1. In 1983, the company officially became Oracle Systems Corporation after its flagship product. In 1990, Oracle laid off 10% of its workforce (about 400 people) because it was losing money. This crisis, which almost resulted in the company's bankruptcy, came about because of Oracle's "up-front" marketing strategy, in which sales people urged potential customers to buy the largest possible amount of software all at once. The sales people then booked the value of future license sales in the current quarter, thereby increasing their bonuses. This became a problem when the future sales subsequently failed to materialize. Oracle eventually had to restate its earnings twice, and had to settle class-action lawsuits arising from its having overstated its earnings. Ellison would later say that Oracle had made "an incredible business mistake". Although IBM dominated the mainframe relational database market with its DB2 and SQL/DS database products, it delayed entering the market for a relational database on Unix and Windows operating systems. This left the door open for Sybase, Oracle, Ingres, Informix, and eventually Microsoft to dominate mid-range systems and microcomputers. Around this time, Oracle fell behind Sybase. From 1990 to 1993, Sybase was the fastest-growing database company and the database industry's darling vendor, but soon it fell victim to merger mania. Sybase's 1996 merger with Powersoft resulted in a loss of focus on its core database technology. In 1993, Sybase sold the rights to its database software running under the Windows operating system to Microsoft Corporation, which now markets it under the name "SQL Server". In his early years at Oracle, Ellison was named an Award Recipient in the High Technology Category for the Ernst and Young Entrepreneur of the Year Program. 1994–2010 In 1994, Informix overtook Sybase and became Oracle's most important rival. The intense war between Informix CEO Phil White and Ellison was front-page Silicon Valley news for three years. In April 1997, Informix announced a major revenue shortfall and earnings restatements. Phil White eventually landed in jail, and IBM absorbed Informix in 2001. Also in 1997, Ellison was made a director of Apple Computer after Steve Jobs returned to the company. Ellison resigned in 2002. With the defeat of Informix and of Sybase, Oracle enjoyed years of industry dominance until the rise of Microsoft SQL Server in the late 1990s and IBM's acquisition of Informix Software in 2001 to complement their DB2 database. Oracle's main competition for new database licenses on UNIX, Linux, and Windows operating systems comes from IBM's DB2 and from Microsoft SQL Server. IBM's DB2 still dominates the mainframe database market. In 2005, Ellison agreed to settle a four-year-old insider-trading lawsuit by offering to pay $100 million to charity in Oracle's name. In 2005, Oracle Corporation paid Ellison a $975,000 salary, a $6,500,000 bonus, and other compensation of $955,100. In 2007, Ellison earned a total compensation of $61,180,524, which included a base salary of $1,000,000, a cash bonus of $8,369,000, and options granted of $50,087,100. 
In 2008, he earned a total compensation of $84,598,700, which included a base salary of $1,000,000, a cash bonus of $10,779,000, no stock grants, and options granted of $71,372,700. In the year ending May 31, 2009, he made $56.8 million. In 2006, Forbes ranked him as the richest Californian. In April 2009, after a tug-of-war with IBM and Hewlett-Packard, Oracle announced its intent to buy Sun Microsystems. On July 2, 2009, for the fourth year in a row, Oracle's board awarded Ellison another 7 million stock options. On August 22, 2009, it was reported that Ellison would be paid only $1 for his base salary for the fiscal year of 2010, down from the $1,000,000 he was paid in fiscal 2009. 2010–present The European Union approved Oracle's acquisition of Sun Microsystems on January 21, 2010, and agreed that Oracle's acquisition of Sun "has the potential to revitalize important assets and create new and innovative products". The Sun acquisition also gave Oracle control of the popular MySQL open source database, which Sun had acquired in 2008. On August 9, 2010, Ellison denounced Hewlett-Packard's board for firing CEO Mark Hurd, writing that: "The HP board just made the worst personnel decision since the idiots on the Apple board fired Steve Jobs many years ago." (Ellison and Hurd were close personal friends.) Then on September 6, Oracle hired Mark Hurd as co-president alongside Safra Catz. Ellison remained in his current role at Oracle. In March 2010, the Forbes list of billionaires ranked Ellison as the sixth-richest person in the world and as the third-richest American, with an estimated net worth of over $28 billion. On July 27, 2010, The Wall Street Journal reported that Ellison was the highest-paid executive in the last decade, collecting a total compensation of US$1.84 billion. In September 2011, Ellison was listed on the Forbes list of billionaires as the fifth richest man in the world and was still the third richest American, with a net worth of about $36.5 billion. In September 2012, Ellison was again listed on the Forbes list of billionaires as the third richest American citizen, behind Bill Gates and Warren Buffett, with a net worth of $44 billion. In October 2012, he was listed just behind David Hamilton Koch as the eighth richest person in the world, according to the Bloomberg Billionaires Index. Ellison owns stakes in Salesforce.com, NetSuite, Quark Biotechnology Inc. and Astex Pharmaceuticals. In June 2012, Ellison agreed to buy 98 percent of the Hawaiian island of Lānaʻi from David Murdock's company, Castle & Cooke. The price was reported to be between $500 million and $600 million. In 2013, according to The Wall Street Journal, Ellison earned $94.6 million. On September 18, 2014, Ellison appointed Mark Hurd to CEO of Oracle from his former position as president; Safra Catz was also made CEO, moving from her former role as CFO. Ellison assumed the positions of chief technology officer and executive chairman. In November 2016, Oracle bought NetSuite for $9.3 billion. Ellison owned 35% of NetSuite at the time of the purchase making him $3.5 billion personally. In 2017, Forbes estimated that Ellison was the 4th richest person in tech. In June 2018, Ellison's net worth was about $54.5 billion, according to Forbes. In December 2018, Ellison became a director on the board of Tesla, Inc., after purchasing 3 million shares earlier that year. Ellison left the Tesla Board in August 2022. 
As of June 2020, Ellison is said to be the seventh wealthiest person in the world, with a net worth of $66.8 billion. As of 2022, Ellison owns 42.9 percent of the shares of Oracle Corporation, and as of June 2023, 1.4 percent of the shares of Tesla. Ellison's software startup, Project Ronin, that he co-founded with David Agus and Dave Hodgson closed down in 2024. The company aimed at transforming cancer care whose products were intended to quickly analyze data within electronic medical records systems. Philanthropy and other endeavors In 1992 Ellison shattered his elbow in a high-speed bicycle crash. After receiving treatment at University of California, Davis, Ellison donated $5 million to seed the Lawrence J. Ellison Musculo-Skeletal Research Center. In 1998, the Lawrence J. Ellison Ambulatory Care Center opened on the Sacramento campus of the UC Davis Medical Center. To settle an insider trading lawsuit arising from his selling nearly $1 billion of Oracle stock, a court allowed Ellison to donate $100 million to his own charitable foundation without admitting wrongdoing. A California judge refused to allow Oracle to pay Ellison's legal fees of $24 million. Ellison's lawyer had argued that if Ellison were to pay the fees, that could be construed as an admission of guilt. His charitable donations to Stanford University raised questions about the independence of two Stanford professors who evaluated the case's merits for Oracle. In response to the September 11 terrorist attacks of 2001, Ellison made a controversial offer to donate software to the federal government that would have enabled it to build and run a national identification database and to issue ID cards. Forbes 2004 list of charitable donations made by the wealthiest 400 Americans stated that Ellison had donated $151,092,103, about 1% of his estimated personal wealth. In June 2006, Ellison announced he would not honor his earlier pledge of $115 million to Harvard University, claiming it was due to the departure of former president Lawrence Summers. Oracle spokesman Bob Wynne announced, "It was really Larry Summers' brainchild and once it looked like Larry Summers was leaving, Larry Ellison reconsidered ... [I]t was Larry Ellison and Larry Summers that had initially come up with this notion." In 2007 Ellison pledged $500,000 to fortify a community centre in Sderot, Israel, after discovering that the building was not fortified against rocket attacks. Other charitable donations by Ellison include a $10 million donation to the Friends of the Israel Defense Forces in 2014. In 2017 Ellison again donated to the Friends of the Israel Defense Forces, this time for $16.6 million. His donation was intended to support the construction of well-being facilities on a new campus for co-ed conscripts. Ellison was an early investor in Theranos. In August 2010 a report listed Ellison as one of the 40 billionaires who had signed "The Giving Pledge". In May 2016 Ellison donated $200 million to the University of Southern California for establishing a cancer research center: the Lawrence J. Ellison Institute for Transformative Medicine of USC. It was renamed the Ellison Institute of Technology, and an additional campus was established in Oxford in 2023 with the intention of providing a scholarship program for 20 students each year. Aviation Ellison is a licensed pilot who has owned several aircraft. 
He was cited by the city of San Jose, California, for violating its limits on late-night takeoffs and landings from San Jose Mineta International Airport by planes weighing more than 75,000 pounds (34,019 kg). In January 2000, Ellison sued over the interpretation of the airport rule, contending that his Gulfstream V aircraft "is certified by the manufacturer to fly at two weights: 75,000 pounds, and at 90,000 pounds for heavier loads or long flights requiring more fuel. But the pilot only lands the plane in San Jose when it weighs 75,000 pounds or less, and has the logs to prove it." US District Judge Jeremy Fogel ruled in Ellison's favor in June 2001, calling for a waiver for Ellison's jet, but did not invalidate the curfew. Ellison also owns at least two military jets: the Italian training aircraft SIAI-Marchetti S.211, and a decommissioned Soviet fighter MiG-29, which the US government has refused him permission to import. Movie cameo Ellison made a brief cameo appearance in the 2010 movie Iron Man 2. Restaurant In July 2013, Ellison opened a restaurant in Malibu named Nikita, which closed in December 2014. Tennis In 2009, Ellison purchased the Indian Wells Tennis Garden tennis facility in California's Coachella Valley and the Indian Wells Masters tournament for $100 million, and has subsequently invested another $100 million into the club. In 2010, Ellison purchased a 50% share of the BNP Paribas Open tennis tournament. Yachting With the economic downturn of 2010, Ellison sold his share of Rising Sun, the 12th largest yacht in the world, making David Geffen the sole owner. The vessel is long, and reportedly cost over $200million to build. He downsized to Musashi, a yacht built by Feadship. Ellison competes in yachting through Oracle Team USA. Following success racing Maxi yachts, Ellison founded BMW Oracle Racing to compete for the 2003 Louis Vuitton Cup. In 2002, Ellison's Oracle's team introduced kite yachting into the America's Cup environment. Kite sail flying lasting about 30 minutes was achieved during testing in New Zealand. BMW Oracle Racing was the "Challenger of Record" on behalf of the Golden Gate Yacht Club of San Francisco for the 2007 America's Cup in Valencia, Spain, until eliminated from the 2007 Louis Vuitton Cup challenger-selection series in the semi-finals. On February 14, 2010, Ellison's yacht USA 17 won the second race (in the best of three "deed of gift" series) of the 33rd America's Cup, after winning the first race two days earlier. Securing a historic victory, Ellison and his BMW Oracle team became the first challengers to win a "deed of gift" match. The Cup returned to American shores for the first time since 1995. Ellison served as a crew member in the second race. Previously, Ellison had filed several legal challenges, through the Golden Gate Yacht Club, against the way that Ernesto Bertarelli (also one of the world's richest men) proposed to organize the 33rd America's Cup following the 2007 victory of Bertarelli's team Alinghi. The races were finally held in February 2010 in Valencia. On September 25, 2013, Ellison's Oracle Team USA defeated Emirates Team New Zealand to win the 34th America's Cup in San Francisco Bay, California. Oracle Team USA had been penalized two points in the final for cheating by some team members during the America's Cup World Series warm-up events. The Oracle team came from a 1–8 deficit to win 9–8, in what has been called "one of the greatest comebacks in sports history". 
In 2019, Ellison, in conjunction with Russell Coutts, started the SailGP international racing series. The series used F50 foiling catamarans, the fastest class of boat in history with regattas held across the globe. Ellison committed to five years of funding to support the series until it could become self sustaining. The first season was successful with global audiences of over 1.8 billion. Political involvement Ellison was critical of NSA whistle-blower Edward Snowden, saying that "Snowden had yet to identify a single person who had been 'wrongly injured' by the NSA's data collection". In 2012, he donated to both Democratic and Republican politicians, and in late 2014 hosted Republican senator Rand Paul at a fundraiser at his home. Ellison was one of the top donors to Conservative Solutions PAC, a super PAC supporting Marco Rubio's 2016 presidential bid. As of February 2016, Ellison had given $4 million overall to the PAC. In 2020, Ellison allowed president Donald Trump to have a fundraiser at his Rancho Mirage estate, but Ellison was not present. In January 2022, Ellison donated $15 million to the Opportunity Matters Fund super PAC associated with Sen. Tim Scott (R-SC), which is one of the most significant financial contributions of the 2022 election cycle. The Washington Post reported in May 2022 that Ellison participated in a conference call days after the 2020 presidential election that focused on strategies for challenging the legitimacy of the vote. Other participants on the call included Fox News host Sean Hannity, Senator Lindsey Graham, Trump personal attorney Jay Sekulow and James Bopp, an attorney for True the Vote. The Post cited court documents and a participant on the call. Controversy According to reports in 2019, Larry Ellison has funded illegal annexation projects in Jerusalem that received criticism from Palestinians as well as Israeli peace activists and archaeologists. Additionally, in 2019 a $1 billion lawsuit was filed against several Israel supporters, including Ellison. The lawsuit accused Ellison and others of conspiring to ethnically cleanse Palestinians from Israeli-occupied territories, committing war crimes, and funding genocide. Ellison, who has close ties with Israeli prime minister Benjamin Netanyahu, reportedly lobbied Israeli mogul Arnon Milchan to drop his lawyer so that Netanyahu, implicated in one of his corruption cases, could hire him. It was also revealed in 2021 that Ellison offered Netanyahu a post at Oracle. Recognition In 1997, Ellison received the Golden Plate Award of the American Academy of Achievement. In 2013, Ellison was inducted into the Bay Area Business Hall of Fame. In 2019, the Lawrence J. Ellison Institute for Transformative Medicine of USC honored Ellison with the first Rebels With A Cause Award in recognition of his generous support through the years. Ellison was named one of the 100 most influential people in the world by Time magazine in 2024. Personal life Marriages Ellison has been married six times: Ellison married Adda Quinn in 1967. They divorced in 1974. Ellison married Nancy Wheeler Jenkins shortly after meeting her in late 1976. In 1978, the couple divorced and Wheeler sold back her shares in SDL to Ellison for $500. Ellison was married to Barbara Boothe from 1983 to 1986. Boothe was a former receptionist at RSI. They had two children, David and Megan, who are film producers at Skydance Media and Annapurna Pictures, respectively. 
On December 18, 2003, Ellison married Melanie Craft, a romance novelist, at his Woodside estate. Ellison's friend Steve Jobs, former CEO and co-founder of Apple Inc., was the official wedding photographer, and Representative Tom Lantos officiated. They divorced in 2010. From 2010 until 2020, Ellison was in a relationship with Ukrainian-American model and actress Nikita Kahn, who became his fifth wife. They divorced in 2020. As of 2024, Ellison is married to Jolin Ellison, an alumna of the University of Michigan. Health Ellison abstains from alcohol and drugs, stating that "I can't stand anything that clouds my mind." Cars Ellison owns many exotic cars, including an Audi R8 and a McLaren F1. His favorite is the Acura NSX, examples of which he gave as gifts each year during its production. Ellison is also reportedly the owner of a Lexus LFA. Homes Ellison styled his estimated $110 million Woodside, California, estate after feudal Japanese architecture, complete with a man-made lake and an extensive seismic retrofit. In 2004 and 2005 he purchased more than 12 properties in Malibu, California, worth more than $180 million. The $65 million Ellison spent on five contiguous lots at Malibu's Carbon Beach made this the most costly residential transaction in United States history until banker Ronald Perelman sold his Palm Beach, Florida, compound for $70 million later that same year. The entertainment system at his Pacific Heights home cost $1 million, and includes a rock concert-sized video projector at one end of a drained swimming pool, using the gaping hole as a giant subwoofer. In early 2010, Ellison purchased the Astors' Beechwood Mansion – formerly the summer home of the Astor family – in Newport, Rhode Island, for $10.5 million. In 2011 he purchased the 249-acre Porcupine Creek Estate and private golf course in Rancho Mirage, California, for $42.9 million. The property was formerly the home of Yellowstone Club founders Edra and Tim Blixseth, and was sold to Ellison by creditors following their divorce and bankruptcy. In December 2020, he left California and moved to Lānaʻi, of which he owns 98%. In 2022, Ellison bought a 22-acre property in Manalapan, Florida, for $173 million. He purchased it from Jim Clark, who in turn had acquired it from the Ziff family. It is the most expensive residential property purchase in Florida history. Books See also Ellison Medical Foundation References Works cited Further reading External links Profile at Oracle Corporation Profile at Forbes Profile at Bloomberg L.P. Biography at BBC News 1944 births University of Illinois Urbana-Champaign alumni University of Chicago alumni 2007 America's Cup sailors 2010 America's Cup sailors America's Cup American adoptees American aviators American billionaires American computer businesspeople American male sailors (sport) Jewish sailors (sport) American Ashkenazi Jews 20th-century American Jews American people of Italian descent American people of Russian-Jewish descent American technology chief executives American technology company founders Businesspeople from California Businesspeople in software Directors of Apple Inc. 21st-century American philanthropists California Republicans Life extensionists Living people Oracle Corporation Oracle employees Oracle Racing sailors People from Rancho Mirage, California Businesspeople from the Bronx People from Woodside, California Philanthropists from New York (state) SailGP Tesla, Inc. 21st-century American Jews Lanai
Larry Ellison
[ "Technology" ]
5,027
[ "Lists of people in STEM fields", "Proprietary technology salespersons" ]
172,396
https://en.wikipedia.org/wiki/Lichen
A lichen is a hybrid colony of algae or cyanobacteria living symbiotically among filaments of multiple fungi species, along with yeasts and bacteria embedded in the cortex or "skin", in a mutualistic relationship. Lichens were the life form for which the term symbiosis (as Symbiotismus) was first used in a biological context. Lichens have since been recognized as important actors in nutrient cycling and as producers on which many higher trophic-level feeders feed, such as reindeer, gastropods, nematodes, mites, and springtails. Lichens have properties different from those of their component organisms. They come in many colors, sizes, and forms and are sometimes plant-like, but are not plants. They may have tiny, leafless branches (fruticose); flat leaf-like structures (foliose); grow crust-like, adhering tightly to a surface (substrate) like a thick coat of paint (crustose); have a powder-like appearance (leprose); or other growth forms. A macrolichen is a lichen that is either bush-like or leafy; all other lichens are termed microlichens. Here, "macro" and "micro" do not refer to size, but to the growth form. Common names for lichens may contain the word moss (e.g., "reindeer moss", "Iceland moss"), and lichens may superficially look like and grow with mosses, but they are not closely related to mosses or any plant. Lichens do not have roots that absorb water and nutrients as plants do, but like plants, they produce their own energy by photosynthesis. Instead, lichens absorb nutrients from rainwater and the air. When they grow on plants, they do not live as parasites, but instead use the plant's surface as a substrate. Lichens occur from sea level to high alpine elevations, in many environmental conditions, and can grow on almost any surface. They are abundant growing on bark, leaves, mosses, or other lichens and hanging from branches "living on thin air" (epiphytes) in rainforests and in temperate woodland. They grow on rock, walls, gravestones, roofs, exposed soil surfaces, rubber, bones, and in the soil as part of biological soil crusts. Various lichens have adapted to survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can even live inside solid rock, growing between the grains (endolithic). There are about 20,000 known species. Some lichens have lost the ability to reproduce sexually, yet continue to speciate. They can be seen as being relatively self-contained miniature ecosystems, where the fungi, algae, or cyanobacteria have the potential to engage with other microorganisms in a functioning system that may evolve as an even more complex composite organism. Lichens may be long-lived, with some considered to be among the oldest living things. They are among the first living things to grow on fresh rock exposed after an event such as a landslide. The long life-span and slow and regular growth rate of some species can be used to date events (lichenometry). Lichens are a keystone species in many ecosystems and benefit trees and birds. Etymology and pronunciation The English word lichen derives from a Greek word meaning "tree moss, lichen, lichen-like eruption on skin", via Latin. The Greek noun, which literally means "licker", derives from a verb meaning "to lick". In American English, "lichen" is pronounced the same as the verb "liken". In British English, both this pronunciation and one rhyming with "kitchen" are used. 
Anatomy and morphology Growth forms Lichens grow in a wide range of shapes and forms; this external appearance is known as their morphology. The shape of a lichen is usually determined by the organization of the fungal filaments. The nonreproductive tissues, or vegetative body parts, are called the thallus. Lichens are grouped by thallus type, since the thallus is usually the most visually prominent part of the lichen. Thallus growth forms typically correspond to a few basic internal structure types. Common names for lichens often come from a growth form or color that is typical of a lichen genus. Common groupings of lichen thallus growth forms are: fruticose – growing like a tuft or multiple-branched leafless mini-shrub, upright or hanging down, with 3-dimensional branches of nearly round cross section (terete) or flattened; foliose – growing in 2-dimensional, flat, leaf-like lobes; crustose – crust-like, adhering tightly to a surface (substrate) like a thick coat of paint; squamulose – formed of small leaf-like scales, crustose below but free at the tips; leprose – powdery; gelatinous – jelly-like; filamentous – stringy or like matted hair; byssoid – wispy, like teased wool; and structureless. There are variations in growth types in a single lichen species, grey areas between the growth type descriptions, and overlapping between growth types, so some authors might describe lichens using different growth type descriptions. When a crustose lichen gets old, the center may start to crack up like old-dried paint, old-broken asphalt paving, or like the polygonal "islands" of cracked-up mud in a dried lakebed. This is called being rimose or areolate, and the "island" pieces separated by the cracks are called areolas. The areolas appear separated, but are (or were) connected by an underlying prothallus or hypothallus. When a crustose lichen grows from a center and appears to radiate out, it is called crustose placodioid. When the edges of the areolas lift up from the substrate, it is called squamulose. These growth form groups are not precisely defined. Foliose lichens may sometimes branch and appear to be fruticose. Fruticose lichens may have flattened branching parts and appear leafy. Squamulose lichens may appear where the edges lift up. Gelatinous lichens may appear leafy when dry. The thallus is not always the part of the lichen that is most visually noticeable. Some lichens can grow inside solid rock between the grains (endolithic lichens), with only the sexual fruiting part visible growing outside the rock. These may be dramatic in color or appearance. Forms of these sexual parts are not in the above growth form categories. The most visually noticeable reproductive parts are often circular, raised, plate-like or disc-like outgrowths, with crinkly edges, and are described in sections below. 
Different colored lichens covering large areas of exposed rock surfaces, or lichens covering or hanging from bark can be a spectacular display when the patches of diverse colors "come to life" or "glow" in brilliant displays following rain. Different colored lichens may inhabit different adjacent sections of a rock face, depending on the angle of exposure to light. Colonies of lichens may be spectacular in appearance, dominating much of the surface of the visual landscape in forests and natural places, such as the vertical "paint" covering the vast rock faces of Yosemite National Park. Color is used in identification. The color of a lichen changes depending on whether the lichen is wet or dry. Color descriptions used for identification are based on the color that shows when the lichen is dry. Dry lichens with a cyanobacterium as the photosynthetic partner tend to be dark grey, brown, or black. The underside of the leaf-like lobes of foliose lichens is a different color from the top side (dorsiventral), often brown or black, sometimes white. A fruticose lichen may have flattened "branches", appearing similar to a foliose lichen, but the underside of a leaf-like structure on a fruticose lichen is the same color as the top side. The leaf-like lobes of a foliose lichen may branch, giving the appearance of a fruticose lichen, but the underside will be a different color from the top side. The sheen on some jelly-like gelatinous lichens is created by mucilaginous secretions. Internal structure A lichen consists of a simple photosynthesizing organism, usually a green alga or cyanobacterium, surrounded by filaments of a fungus. Generally, most of a lichen's bulk is made of interwoven fungal filaments, but this is reversed in filamentous and gelatinous lichens. The fungus is called a mycobiont. The photosynthesizing organism is called a photobiont. Algal photobionts are called phycobionts. Cyanobacteria photobionts are called cyanobionts. The part of a lichen that is not involved in reproduction, the "body" or "vegetative tissue" of a lichen, is called the thallus. The thallus form is very different from any form where the fungus or alga are growing separately. The thallus is made up of filaments of the fungus called hyphae. The filaments grow by branching then rejoining to create a mesh, which is called being "anastomosed". The mesh of fungal filaments may be dense or loose. Generally, the fungal mesh surrounds the algal or cyanobacterial cells, often enclosing them within complex fungal tissues that are unique to lichen associations. The thallus may or may not have a protective "skin" of densely packed fungal filaments, often containing a second fungal species, which is called a cortex. Fruticose lichens have one cortex layer wrapping around the "branches". Foliose lichens have an upper cortex on the top side of the "leaf", and a separate lower cortex on the bottom side. Crustose and squamulose lichens have only an upper cortex, with the "inside" of the lichen in direct contact with the surface they grow on (the substrate). Even if the edges peel up from the substrate and appear flat and leaf-like, they lack a lower cortex, unlike foliose lichens. Filamentous, byssoid, leprose, gelatinous, and other lichens do not have a cortex; in other words, they are ecorticate. Fruticose, foliose, crustose, and squamulose lichens generally have up to three different types of tissue, differentiated by having different densities of fungal filaments. 
The top layer, where the lichen contacts the environment, is called a cortex. The cortex is made of densely and tightly woven, packed, and glued-together (agglutinated) fungal filaments. The dense packing makes the cortex act like a protective "skin", keeping other organisms out, and reducing the intensity of sunlight on the layers below. The cortex layer can be up to several hundred micrometers (μm) in thickness (less than a millimeter). The cortex may be further topped by an epicortex of secretions, not cells, 0.6–1 μm thick in some lichens. This secretion layer may or may not have pores. Below the cortex layer is a layer called the photobiontic layer or symbiont layer. The symbiont layer has less densely packed fungal filaments, with the photosynthetic partner embedded in them. The less dense packing allows air circulation during photosynthesis, similar to the anatomy of a leaf. Each cell or group of cells of the photobiont is usually individually wrapped by hyphae, and in some cases penetrated by a haustorium. In crustose and foliose lichens, algae in the photobiontic layer are diffuse among the fungal filaments, decreasing in gradation into the layer below. In fruticose lichens, the photobiontic layer is sharply distinct from the layer below. The layer beneath the symbiont layer is called the medulla. The medulla is less densely packed with fungal filaments than the layers above. In foliose lichens, as in Peltigera, there is usually another densely packed layer of fungal filaments called the lower cortex. Root-like fungal structures called rhizines (usually) grow from the lower cortex to attach or anchor the lichen to the substrate. Fruticose lichens have a single cortex wrapping all the way around the "stems" and "branches". The medulla is the lowest layer, and may form a cottony white inner core for the branchlike thallus, or it may be hollow. Crustose and squamulose lichens lack a lower cortex, and the medulla is in direct contact with the substrate that the lichen grows on. In crustose areolate lichens, the edges of the areolas peel up from the substrate and appear leafy. In squamulose lichens the part of the lichen thallus that is not attached to the substrate may also appear leafy. But these leafy parts lack a lower cortex, which distinguishes crustose and squamulose lichens from foliose lichens. Conversely, foliose lichens may appear flattened against the substrate like a crustose lichen, but most of the leaf-like lobes can be lifted up from the substrate because they are separated from it by a tightly packed lower cortex. Gelatinous, byssoid, and leprose lichens lack a cortex (are ecorticate), and generally have only undifferentiated tissue, similar to only having a symbiont layer. In lichens that include both green algal and cyanobacterial symbionts, the cyanobacteria may be held on the upper or lower surface in small pustules called cephalodia. Pruina is a whitish coating on top of an upper surface. An epinecral layer is "a layer of horny dead fungal hyphae with indistinct lumina in or near the cortex above the algal layer". In August 2016, it was reported that some macrolichens have more than one species of fungus in their tissues. Physiology Symbiotic relation A lichen is a composite organism that emerges from algae or cyanobacteria living among the filaments (hyphae) of the fungi in a mutually beneficial symbiotic relationship. The fungi benefit from the carbohydrates produced by the algae or cyanobacteria via photosynthesis. 
The algae or cyanobacteria benefit by being protected from the environment by the filaments of the fungi, which also gather moisture and nutrients from the environment, and (usually) provide an anchor to it. Although some photosynthetic partners in a lichen can survive outside the lichen, the lichen symbiotic association extends the ecological range of both partners, whereby most descriptions of lichen associations describe them as symbiotic. Both partners gain water and mineral nutrients mainly from the atmosphere, through rain and dust. The fungal partner protects the alga by retaining water, serving as a larger capture area for mineral nutrients and, in some cases, provides minerals obtained from the substrate. If a cyanobacterium is present, as a primary partner or another symbiont in addition to a green alga as in certain tripartite lichens, they can fix atmospheric nitrogen, complementing the activities of the green alga. In three different lineages the fungal partner has independently lost the mitochondrial gene atp9, which has key functions in mitochondrial energy production. The loss makes the fungi completely dependent on their symbionts. The algal or cyanobacterial cells are photosynthetic and, as in plants, they reduce atmospheric carbon dioxide into organic carbon sugars to feed both symbionts. Phycobionts (algae) produce sugar alcohols (ribitol, sorbitol, and erythritol), which are absorbed by the mycobiont (fungus). Cyanobionts produce glucose. Lichenized fungal cells can make the photobiont "leak" out the products of photosynthesis, where they can then be absorbed by the fungus. It appears many, probably the majority, of lichen also live in a symbiotic relationship with an order of basidiomycete yeasts called Cyphobasidiales. The absence of this third partner could explain why growing lichen in the laboratory is difficult. The yeast cells are responsible for the formation of the characteristic cortex of the lichen thallus, and could also be important for its shape. The lichen combination of alga or cyanobacterium with a fungus has a very different form (morphology), physiology, and biochemistry than the component fungus, alga, or cyanobacterium growing by itself, naturally or in culture. The body (thallus) of most lichens is different from those of either the fungus or alga growing separately. When grown in the laboratory in the absence of its photobiont, a lichen fungus develops as a structureless, undifferentiated mass of fungal filaments (hyphae). If combined with its photobiont under appropriate conditions, its characteristic form associated with the photobiont emerges, in the process called morphogenesis. In a few remarkable cases, a single lichen fungus can develop into two very different lichen forms when associating with either a green algal or a cyanobacterial symbiont. Quite naturally, these alternative forms were at first considered to be different species, until they were found growing in a conjoined manner. Evidence that lichens are examples of successful symbiosis is the fact that lichens can be found in almost every habitat and geographic area on the planet. Two species in two genera of green algae are found in over 35% of all lichens, but can only rarely be found living on their own outside of a lichen. 
In one case, a single fungal partner simultaneously had two green algal partners that outperform each other in different climates; this might indicate that having more than one photosynthetic partner at the same time enables the lichen to exist in a wider range of habitats and geographic locations. At least one form of lichen, the North American beard-like lichens, is composed of not two but three symbiotic partners: an ascomycetous fungus, a photosynthetic alga, and, unexpectedly, a basidiomycetous yeast. Phycobionts can have a net output of sugars with only water vapor. The thallus must be saturated with liquid water for cyanobionts to photosynthesize. Algae produce sugars that are absorbed by the fungus by diffusion into special fungal hyphae called appressoria or haustoria in contact with the wall of the algal cells. The appressoria or haustoria may produce a substance that increases permeability of the algal cell walls, and may penetrate the walls. The algae may contribute up to 80% of their sugar production to the fungus. Ecology Lichen associations may be examples of mutualism or commensalism, but the lichen relationship can be considered parasitic under circumstances where the photosynthetic partner can exist in nature independently of the fungal partner, but not vice versa. Photobiont cells are routinely destroyed in the course of nutrient exchange. The association continues because reproduction of the photobiont cells matches the rate at which they are destroyed. The fungus surrounds the algal cells, often enclosing them within complex fungal tissues unique to lichen associations. In many species the fungus penetrates the algal cell wall, forming penetration pegs (haustoria) similar to those produced by pathogenic fungi that feed on a host. Cyanobacteria in laboratory settings can grow faster when they are alone rather than when they are part of a lichen. Miniature ecosystem and holobiont theory Symbiosis in lichens is so well-balanced that lichens have been considered to be relatively self-contained miniature ecosystems in and of themselves. It is thought that lichens may be even more complex symbiotic systems that include non-photosynthetic bacterial communities performing other functions as partners in a holobiont. Many lichens are very sensitive to environmental disturbances and can be used to cheaply assess air pollution, ozone depletion, and metal contamination. Lichens have been used in making dyes, perfumes (oakmoss), and in traditional medicines. A few lichen species are eaten by insects or larger animals, such as reindeer. Lichens are widely used as environmental indicators or bio-indicators. When air is very badly polluted with sulphur dioxide, there may be no lichens present; only some green algae can tolerate those conditions. If the air is clean, then shrubby, hairy and leafy lichens become abundant. A few lichen species can tolerate fairly high levels of pollution, and are commonly found in urban areas, on pavements, walls and tree bark. The most sensitive lichens are shrubby and leafy, while the most tolerant lichens are all crusty in appearance. Since industrialisation, many of the shrubby and leafy lichens such as Ramalina, Usnea and Lobaria species have very limited ranges, often being confined to the areas which have the cleanest air. Lichenicolous fungi Some fungi can only be found living on lichens as obligate parasites. 
These are referred to as lichenicolous fungi, and are a different species from the fungus living inside the lichen; thus they are not considered to be part of the lichen. Reaction to water Moisture makes the cortex become more transparent. This way, the algae can conduct photosynthesis when moisture is available, and are protected at other times. When the cortex is more transparent, the algae show more clearly and the lichen looks greener. Metabolites, metabolite structures and bioactivity Lichens can show intense antioxidant activity. Secondary metabolites are often deposited as crystals in the apoplast. Secondary metabolites are thought to play a role in preference for some substrates over others. Growth rate Lichens often have a regular but very slow growth rate of less than a millimeter per year. In crustose lichens, the area along the margin is where the most active growth is taking place. Most crustose lichens grow only 1–2 mm in diameter per year. Life span Lichens may be long-lived, with some considered to be among the oldest living organisms. Lifespan is difficult to measure because what defines the "same" individual lichen is not precise. Lichens grow by vegetatively breaking off a piece, which may or may not be defined as the "same" lichen, and two lichens can merge, then becoming the "same" lichen. One specimen of Rhizocarpon geographicum on East Baffin Island has an estimated age of 9500 years. Thalli of Rhizocarpon geographicum and Rhizocarpon eupetraeoides/inarense in the central Brooks Range of northern Alaska have been given a maximum possible age of 10,000–11,500 years. Response to environmental stress Unlike simple dehydration in plants and animals, lichens may experience a complete loss of body water in dry periods. Lichens are capable of surviving extremely low levels of water content (poikilohydric). They quickly absorb water when it becomes available again, becoming soft and fleshy. In tests, lichens survived a 34-day simulation under Martian conditions in the Mars Simulation Laboratory (MSL) maintained by the German Aerospace Center (DLR), showing a remarkable capacity to adapt their photosynthetic activity. The European Space Agency has discovered that lichens can survive unprotected in space. In an experiment led by Leopoldo Sancho from the Complutense University of Madrid, two species of lichen—Rhizocarpon geographicum and Rusavskia elegans—were sealed in a capsule and launched on a Russian Soyuz rocket on 31 May 2005. Once in orbit, the capsules were opened and the lichens were directly exposed to the vacuum of space with its widely fluctuating temperatures and cosmic radiation. After 15 days, the lichens were brought back to Earth and were found to be unchanged in their ability to photosynthesize. Reproduction and dispersal Vegetative reproduction Many lichens reproduce asexually, either by a piece breaking off and growing on its own (vegetative reproduction) or through the dispersal of diaspores containing a few algal cells surrounded by fungal cells. Because of the relative lack of differentiation in the thallus, the line between diaspore formation and vegetative reproduction is often blurred. Fruticose lichens can fragment, and new lichens can grow from the fragment (vegetative reproduction). Many lichens break up into fragments when they dry, dispersing themselves by wind action, to resume growth when moisture returns. 
Soredia (singular: "soredium") are small groups of algal cells surrounded by fungal filaments that form in structures called soralia, from which the soredia can be dispersed by wind. Isidia (singular: "isidium") are branched, spiny, elongated, outgrowths from the thallus that break off for mechanical dispersal. Lichen propagules (diaspores) typically contain cells from both partners, although the fungal components of so-called "fringe species" rely instead on algal cells dispersed by the "core species". Sexual reproduction Structures involved in reproduction often appear as discs, bumps, or squiggly lines on the surface of the thallus. Though it has been argued that sexual reproduction in photobionts is selected against, there is strong evidence that suggests meiotic activities (sexual reproduction) in Trebouxia. Many lichen fungi reproduce sexually like other fungi, producing spores formed by meiosis and fusion of gametes. Following dispersal, such fungal spores must meet with a compatible algal partner before a functional lichen can form. Some lichen fungi belong to the phylum Basidiomycota (basidiolichens) and produce mushroom-like reproductive structures resembling those of their nonlichenized relatives. Most lichen fungi belong to Ascomycetes (ascolichens). Among the ascolichens, spores are produced in spore-producing structures called ascomata. The most common types of ascomata are the apothecium (plural: apothecia) and perithecium (plural: perithecia). Apothecia are usually cups or plate-like discs located on the top surface of the lichen thallus. When apothecia are shaped like squiggly line segments instead of like discs, they are called lirellae. Perithecia are shaped like flasks that are immersed in the lichen thallus tissue, which has a small hole for the spores to escape the flask, and appear like black dots on the lichen surface. The three most common spore body types are raised discs called apothecia (singular: apothecium), bottle-like cups with a small hole at the top called perithecia (singular: perithecium), and pycnidia (singular: pycnidium), shaped like perithecia but without asci (an ascus is the structure that contains and releases the sexual spores in fungi of the Ascomycota). The apothecium has a layer of exposed spore-producing cells called asci (singular: ascus), and is usually a different color from the thallus tissue. When the apothecium has an outer margin, the margin is called the exciple. When the exciple has a color similar to colored thallus tissue the apothecium or lichen is called lecanorine, meaning similar to members of the genus Lecanora. When the exciple is blackened like carbon it is called lecideine meaning similar to members of the genus Lecidea. When the margin is pale or colorless it is called biatorine. A "podetium" (plural: podetia) is a lichenized stalk-like structure of the fruiting body rising from the thallus, associated with some fungi that produce a fungal apothecium. Since it is part of the reproductive tissue, podetia are not considered part of the main body (thallus), but may be visually prominent. The podetium may be branched, and sometimes cup-like. They usually bear the fungal pycnidia or apothecia or both. Many lichens have apothecia that are visible to the naked eye. Most lichens produce abundant sexual structures. Many species appear to disperse only by sexual spores. For example, the crustose lichens Graphis scripta and Ochrolechia parella produce no symbiotic vegetative propagules. 
Instead, the lichen-forming fungi of these species reproduce sexually by self-fertilization (i.e. they are homothallic). This breeding system may enable successful reproduction in harsh environments. Mazaedia (singular: mazaedium) are apothecia shaped like a dressmaker's pin in pin lichens, where the fruiting body is a brown or black mass of loose ascospores enclosed by a cup-shaped exciple, which sits on top of a tiny stalk. Taxonomy and classification Lichens are classified by the fungal component. Lichen species are given the same scientific name (binomial name) as the fungus species in the lichen. Lichens are being integrated into the classification schemes for fungi. The alga bears its own scientific name, which bears no relationship to that of the lichen or fungus. There are about 20,000 identified lichen species, and taxonomists have estimated that the total number of lichen species (including those yet undiscovered) might be as high as 28,000. Nearly 20% of known fungal species are associated with lichens. "Lichenized fungus" may refer to the entire lichen, or to just the fungus. This may cause confusion without context. A particular fungus species may form lichens with different algae species, giving rise to what appear to be different lichen species, but which are still classified (as of 2014) as the same lichen species. Formerly, some lichen taxonomists placed lichens in their own division, the Mycophycophyta, but this practice is no longer accepted because the components belong to separate lineages. Neither the ascolichens nor the basidiolichens form monophyletic lineages in their respective fungal phyla, but they do form several major solely or primarily lichen-forming groups within each phylum. Even more unusual than basidiolichens is the fungus Geosiphon pyriforme, a member of the Glomeromycota that is unique in that it encloses a cyanobacterial symbiont inside its cells. Geosiphon is not usually considered to be a lichen, and its peculiar symbiosis was not recognized for many years. The genus is more closely allied to endomycorrhizal genera. Fungi from Verrucariales also form marine lichens with the brown alga Petroderma maculiforme, and have a symbiotic relationship with seaweed (such as rockweed) and Blidingia minima, where the algae are the dominant components. The fungus is thought to help the rockweeds resist desiccation when exposed to air. In addition, lichens can also use yellow-green algae (Heterococcus) as their symbiotic partner. Lichens independently emerged from fungi associating with algae and cyanobacteria multiple times throughout history. Fungi The fungal component of a lichen is called the mycobiont. The mycobiont may be an Ascomycete or Basidiomycete. The associated lichens are called either ascolichens or basidiolichens, respectively. Living as a symbiont in a lichen appears to be a successful way for a fungus to derive essential nutrients, since about 20% of all fungal species have acquired this mode of life. Thalli produced by a given fungal symbiont with its differing partners may be similar, and the secondary metabolites identical, indicating that the fungus has the dominant role in determining the morphology of the lichen. But the same mycobiont with different photobionts may also produce very different growth forms. Lichens are known in which there is one fungus associated with two or even three algal species. 
Although each lichen thallus generally appears homogeneous, some evidence seems to suggest that the fungal component may consist of more than one genetic individual of that species. Two or more fungal species can interact to form the same lichen. The following table lists the orders and families of fungi that include lichen-forming species. Photobionts The photosynthetic partner in a lichen is called a photobiont. The photobionts in lichens come from a variety of simple prokaryotic and eukaryotic organisms. In the majority of lichens the photobiont is a green alga (Chlorophyta) or a cyanobacterium. In some lichens both types are present; in such cases, the alga is typically the primary partner, with the cyanobacteria being located in cryptic pockets. Algal photobionts are called phycobionts, while cyanobacterial photobionts are called cyanobionts. About 90% of all known lichens have phycobionts, and about 10% have cyanobionts. Approximately 100 species of photosynthetic partners from 40 genera and five distinct classes (prokaryotic: Cyanophyceae; eukaryotic: Trebouxiophyceae, Phaeophyceae, Chlorophyceae) have been found to associate with the lichen-forming fungi. Common algal photobionts are from the genera Trebouxia, Trentepohlia, Pseudotrebouxia, or Myrmecia. Trebouxia is the most common genus of green algae in lichens, occurring in about 40% of all lichens. "Trebouxioid" means either a photobiont that is in the genus Trebouxia, or resembles a member of that genus, and is therefore presumably a member of the class Trebouxiophyceae. The second most commonly represented green alga genus is Trentepohlia. Overall, about 100 species of eukaryotes are known to occur as photobionts in lichens. All the algae are probably able to exist independently in nature as well as in the lichen. A "cyanolichen" is a lichen with a cyanobacterium as its main photosynthetic component (photobiont). Most cyanolichen are also ascolichens, but a few basidiolichen like Dictyonema and Acantholichen have cyanobacteria as their partner. The most commonly occurring cyanobacterium genus is Nostoc. Other common cyanobacterium photobionts are from Scytonema. Many cyanolichens are small and black, and have limestone as the substrate. Another cyanolichen group, the jelly lichens of the genera Collema or Leptogium are gelatinous and live on moist soils. Another group of large and foliose species including Peltigera, Lobaria, and Degelia are grey-blue, especially when dampened or wet. Many of these characterize the Lobarion communities of higher rainfall areas in western Britain, e.g., in the Celtic rain forest. Strains of cyanobacteria found in various cyanolichens are often closely related to one another. They differ from the most closely related free-living strains. The lichen association is a close symbiosis. It extends the ecological range of both partners but is not always obligatory for their growth and reproduction in natural environments, since many of the algal symbionts can live independently. A prominent example is the alga Trentepohlia, which forms orange-coloured populations on tree trunks and suitable rock faces. Lichen propagules (diaspores) typically contain cells from both partners, although the fungal components of so-called "fringe species" rely instead on algal cells dispersed by the "core species". The same cyanobiont species can occur in association with different fungal species as lichen partners. The same phycobiont species can occur in association with different fungal species as lichen partners. 
More than one phycobiont may be present in a single thallus. A single lichen may contain several algal genotypes. These multiple genotypes may better enable response to adaptation to environmental changes, and enable the lichen to inhabit a wider range of environments. Controversy over classification method and species names There are about 20,000 known lichen species. But what is meant by "species" is different from what is meant by biological species in plants, animals, or fungi, where being the same species implies that there is a common ancestral lineage. Because lichens are combinations of members of two or even three different biological kingdoms, these components must have a different ancestral lineage from each other. By convention, lichens are still called "species" anyway, and are classified according to the species of their fungus, not the species of the algae or cyanobacteria. Lichens are given the same scientific name (binomial name) as the fungus in them, which may cause some confusion. The alga bears its own scientific name, which has no relationship to the name of the lichen or fungus. Depending on context, "lichenized fungus" may refer to the entire lichen, or to the fungus when it is in the lichen, which can be grown in culture in isolation from the algae or cyanobacteria. Some algae and cyanobacteria are found naturally living outside of the lichen. The fungal, algal, or cyanobacterial component of a lichen can be grown by itself in culture. When growing by themselves, the fungus, algae, or cyanobacteria have very different properties than those of the lichen. Lichen properties such as growth form, physiology, and biochemistry, are very different from the combination of the properties of the fungus and the algae or cyanobacteria. The same fungus growing in combination with different algae or cyanobacteria, can produce lichens that are very different in most properties, meeting non-DNA criteria for being different "species". Historically, these different combinations were classified as different species. When the fungus is identified as being the same using modern DNA methods, these apparently different species get reclassified as the same species under the current (2014) convention for classification by fungal component. This has led to debate about this classification convention. These apparently different "species" have their own independent evolutionary history. There is also debate as to the appropriateness of giving the same binomial name to the fungus, and to the lichen that combines that fungus with an alga or cyanobacterium (synecdoche). This is especially the case when combining the same fungus with different algae or cyanobacteria produces dramatically different lichen organisms, which would be considered different species by any measure other than the DNA of the fungal component. If the whole lichen produced by the same fungus growing in association with different algae or cyanobacteria, were to be classified as different "species", the number of "lichen species" would be greater. Diversity The largest number of lichenized fungi occur in the Ascomycota, with about 40% of species forming such an association. Some of these lichenized fungi occur in orders with nonlichenized fungi that live as saprotrophs or plant parasites (for example, the Leotiales, Dothideales, and Pezizales). Other lichen fungi occur in only five orders in which all members are engaged in this habit (Orders Graphidales, Gyalectales, Peltigerales, Pertusariales, and Teloschistales). 
Overall, about 98% of lichens have an ascomycetous mycobiont. Next to the Ascomycota, the largest number of lichenized fungi occur in the unassigned fungi imperfecti, a catch-all category for fungi whose sexual form of reproduction has never been observed. Comparatively few basidiomycetes are lichenized, but these include agarics, such as species of Lichenomphalia, clavarioid fungi, such as species of Multiclavula, and corticioid fungi, such as species of Dictyonema. Identification methods Lichen identification uses growth form, microscopy and reactions to chemical tests. The outcome of the "Pd test" is called "Pd", which is also used as an abbreviation for the chemical used in the test, para-phenylenediamine. If putting a drop on a lichen turns an area bright yellow to orange, this helps identify it as belonging to either the genus Cladonia or Lecanora. Evolution and paleontology The fossil record for lichens is poor. The extreme habitats that lichens dominate, such as tundra, mountains, and deserts, are not ordinarily conducive to producing fossils. There are fossilized lichens embedded in amber. The fossilized Anzia is found in pieces of amber in northern Europe and dates back approximately 40 million years. Lichen fragments are also found in fossil leaf beds, such as Lobaria from Trinity County in northern California, US, dating back to the early to middle Miocene. The oldest fossil lichen in which both symbiotic partners have been recovered is Winfrenatia, an early zygomycetous (Glomeromycotan) lichen symbiosis that may have involved controlled parasitism; it is permineralized in the Rhynie Chert of Scotland, dating from the early Early Devonian, about 400 million years ago. The slightly older fossil Spongiophyton has also been interpreted as a lichen on morphological and isotopic grounds, although the isotopic basis is decidedly shaky. It has been demonstrated that the Silurian-Devonian fossils Nematothallus and Prototaxites were lichenized. Thus lichenized Ascomycota and Basidiomycota were a component of Early Silurian-Devonian terrestrial ecosystems. Newer research suggests that lichens evolved after the evolution of land plants. The ancestral ecological state of both Ascomycota and Basidiomycota was probably saprobism, and independent lichenization events may have occurred multiple times. In 1995, Gargas and colleagues proposed that there were at least five independent origins of lichenization; three in the basidiomycetes and at least two in the Ascomycetes. Lutzoni et al. (2001) suggest lichenization probably evolved earlier and was followed by multiple independent losses. Some non-lichen-forming fungi may have secondarily lost the ability to form a lichen association. As a result, lichenization has been viewed as a highly successful nutritional strategy. Lichenized Glomeromycota may extend well back into the Precambrian. Lichen-like fossils consisting of coccoid cells (cyanobacteria?) and thin filaments (mucoromycotinan Glomeromycota?) are permineralized in marine phosphorite of the Doushantuo Formation in southern China. These fossils are thought to be 551 to 635 million years old or Ediacaran. Ediacaran acritarchs also have many similarities with Glomeromycotan vesicles and spores. It has also been claimed that Ediacaran fossils, including Dickinsonia, were lichens, although this claim is controversial. Endosymbiotic Glomeromycota comparable with living Geosiphon may extend back into the Proterozoic in the form of the 1500-million-year-old Horodyskia and the 2200-million-year-old Diskagma. 
The discovery of these fossils suggests that fungi developed symbiotic partnerships with photoautotrophs long before the evolution of vascular plants, though the Ediacaran lichen hypothesis is largely rejected due to an inappropriate definition of lichens based on taphonomy and substrate ecology. However, a 2019 study by the same scientist who rejected the Ediacaran lichen hypothesis, Nelsen, used new time-calibrated phylogenies to conclude that there is no evidence of lichens before the existence of vascular plants. Lecanoromycetes, one of the most common classes of lichen-forming fungi, diverged from its ancestor, which may have also been lichen forming, around 258 million years ago, during the late Paleozoic period. However, the closely related clade Eurotiomycetes appears to have become lichen-forming only 52 million years ago, during the early Cenozoic period. Ecology and interactions with environment Substrates and habitats Lichens grow on and in a wide range of substrates and habitats, including some of the most extreme conditions on Earth. They are abundant growing on bark, leaves, and hanging from epiphyte branches in rain forests and in temperate woodland. They grow on bare rock, walls, gravestones, roofs, and exposed soil surfaces. They can survive in some of the most extreme environments on Earth: arctic tundra, hot dry deserts, rocky coasts, and toxic slag heaps. They can live inside solid rock, growing between the grains, and in the soil as part of a biological soil crust in arid habitats such as deserts. Some lichens do not grow on anything, living out their lives blowing about the environment. When growing on mineral surfaces, some lichens slowly decompose their substrate by chemically degrading and physically disrupting the minerals, contributing to the process of weathering by which rocks are gradually turned into soil. While this contribution to weathering is usually benign, it can cause problems for artificial stone structures. For example, there is an ongoing lichen growth problem on Mount Rushmore National Memorial that requires the employment of mountain-climbing conservators to clean the monument. Lichens are not parasites on the plants they grow on, but only use them as a substrate. The fungi of some lichen species may "take over" the algae of other lichen species. Lichens make their own food from their photosynthetic parts and by absorbing minerals from the environment. Lichens growing on leaves may have the appearance of being parasites on the leaves, but they are not. Some lichens in Diploschistes parasitise other lichens. Diploschistes muscorum starts its development in the tissue of a host Cladonia species. In the arctic tundra, lichens, together with mosses and liverworts, make up the majority of the ground cover, which helps insulate the ground and may provide forage for grazing animals. An example is "reindeer moss", which is a lichen, not a moss. There are only two known species of permanently submerged lichens; Hydrothyria venosa is found in freshwater environments, and Verrucaria serpuloides is found in marine environments. A crustose lichen that grows on rock is called a saxicolous lichen. Crustose lichens that grow on the rock are epilithic, and those that grow immersed inside rock, growing between the crystals with only their fruiting bodies exposed to the air, are called endolithic lichens. A crustose lichen that grows on bark is called a corticolous lichen. A lichen that grows on wood from which the bark has been stripped is called a lignicolous lichen. 
Lichens that grow immersed inside plant tissues are called endophloidic lichens or endophloidal lichens. Lichens that use leaves as substrates, whether the leaf is still on the tree or on the ground, are called epiphyllous or foliicolous. A terricolous lichen grows on the soil as a substrate. Many squamulose lichens are terricolous. Umbilicate lichens are foliose lichens that are attached to the substrate at only one point. A vagrant lichen is not attached to a substrate at all, and lives its life being blown around by the wind. Lichens and soils In addition to distinct physical mechanisms by which lichens break down raw stone, studies indicate lichens attack stone chemically, entering newly chelated minerals into the ecology. The substances exuded by lichens, known for their strong ability to bind and sequester metals, along with the common formation of new minerals, especially metal oxalates, and the traits of the substrates they alter, all highlight the important role lichens play in the process of chemical weathering. Over time, this activity creates new fertile soil from stone. Lichens may be important in contributing nitrogen to soils in some deserts through being eaten, along with their rock substrate, by snails, which then defecate, putting the nitrogen into the soils. Lichens help bind and stabilize soil sand in dunes. In deserts and semi-arid areas, lichens are part of extensive, living biological soil crusts, essential for maintaining the soil structure. Ecological interactions Lichens are pioneer species, among the first living things to grow on bare rock or areas denuded of life by a disaster. Lichens may have to compete with plants for access to sunlight, but because of their small size and slow growth, they thrive in places where higher plants have difficulty growing. Lichens are often the first to settle in places lacking soil, constituting the sole vegetation in some extreme environments such as those found at high mountain elevations and at high latitudes. Some survive in the tough conditions of deserts, and others on frozen soil of the Arctic regions. A major ecophysiological advantage of lichens is that they are poikilohydric (poikilo- variable, hydric- relating to water), meaning that though they have little control over the status of their hydration, they can tolerate irregular and extended periods of severe desiccation. Like some mosses, liverworts, ferns and a few resurrection plants, upon desiccation, lichens enter a metabolic suspension or stasis (known as cryptobiosis) in which the cells of the lichen symbionts are dehydrated to a degree that halts most biochemical activity. In this cryptobiotic state, lichens can survive wider extremes of temperature, radiation and drought in the harsh environments they often inhabit. Lichens do not have roots and do not need to tap continuous reservoirs of water like most higher plants, thus they can grow in locations impossible for most plants, such as bare rock, sterile soil or sand, and various artificial structures such as walls, roofs, and monuments. Many lichens also grow as epiphytes (epi- on the surface, phyte- plant) on plants, particularly on the trunks and branches of trees. When growing on plants, lichens are not parasites; they do not consume any part of the plant nor poison it. Lichens produce allelopathic chemicals that inhibit the growth of mosses. 
Some ground-dwelling lichens, such as members of the subgenus Cladina (reindeer lichens), produce allelopathic chemicals that leach into the soil and inhibit the germination of seeds of spruce and other plants. Stability (that is, longevity) of their substrate is a major factor of lichen habitats. Most lichens grow on stable rock surfaces or the bark of old trees, but many others grow on soil and sand. In these latter cases, lichens are often an important part of soil stabilization; indeed, in some desert ecosystems, vascular (higher) plant seeds cannot become established except in places where lichen crusts stabilize the sand and help retain water. Lichens may be eaten by some animals, such as reindeer, living in arctic regions. The larvae of a number of Lepidoptera species feed exclusively on lichens. These include the common footman and marbled beauty. Lichens are very low in protein and high in carbohydrates, making them unsuitable for some animals. The northern flying squirrel uses lichens for nesting, food, and as a winter water source. Effects of air pollution If lichens are exposed to air pollutants at all times, without any deciduous parts, they are unable to avoid the accumulation of pollutants. Also lacking stomata and a cuticle, lichens may absorb aerosols and gases over the entire thallus surface from which they may readily diffuse to the photobiont layer. Because lichens do not possess roots, their primary source of most elements is the air, and therefore elemental levels in lichens often reflect the accumulated composition of ambient air. The processes by which atmospheric deposition occurs include fog and dew, gaseous absorption, and dry deposition. Consequently, environmental studies with lichens emphasize their feasibility as effective biomonitors of atmospheric quality. Not all lichens are equally sensitive to air pollutants, so different lichen species show different levels of sensitivity to specific atmospheric pollutants. The sensitivity of a lichen to air pollution is directly related to the energy needs of the mycobiont, so that the stronger the dependency of the mycobiont on the photobiont, the more sensitive the lichen is to air pollution. Upon exposure to air pollution, the photobiont may use metabolic energy for repair of its cellular structures that would otherwise be used for maintenance of its photosynthetic activity, therefore leaving less metabolic energy available for the mycobiont. The alteration of the balance between the photobiont and mycobiont can lead to the breakdown of the symbiotic association. Therefore, lichen decline may result not only from the accumulation of toxic substances, but also from altered nutrient supplies that favor one symbiont over the other. This interaction between lichens and air pollution has been used as a means of monitoring air quality since 1859, with more systematic methods developed by William Nylander in 1866. Human use Food Lichens are eaten by many different cultures across the world. Although some lichens are only eaten in times of famine, others are a staple food or even a delicacy. Two obstacles are often encountered when eating lichens: lichen polysaccharides are generally indigestible to humans, and lichens usually contain mildly toxic secondary compounds that should be removed before eating. Very few lichens are poisonous, but those high in vulpinic acid or usnic acid are toxic. Most poisonous lichens are yellow. 
In the past, Iceland moss (Cetraria islandica) was an important source of food for humans in northern Europe, and was cooked as a bread, porridge, pudding, soup, or salad. Bryoria fremontii (edible horsehair lichen) was an important food in parts of North America, where it was usually pit-cooked. Northern peoples in North America and Siberia traditionally eat the partially digested reindeer lichen (Cladina spp.) after they remove it from the rumen of caribou or reindeer that have been killed. Rock tripe (Umbilicaria spp. and Lasalia spp.) is a lichen that has frequently been used as an emergency food in North America, and one species, Umbilicaria esculenta (iwatake in Japanese), is used in a variety of traditional Korean and Japanese foods. Lichenometry Lichenometry is a technique used to determine the age of exposed rock surfaces based on the size of lichen thalli. Introduced by Beschel in the 1950s, the technique has found many applications. It is used in archaeology, palaeontology, and geomorphology. It uses the presumed regular but slow rate of lichen growth to determine the age of exposed rock. Measuring the diameter (or other size measurement) of the largest lichen of a species on a rock surface indicates the length of time since the rock surface was first exposed; a worked numerical sketch of this calculation is given below, after the passage on dyes. Lichen can be preserved on old rock faces for up to 10,000 years, providing the maximum age limit of the technique, though it is most accurate (within 10% error) when applied to surfaces that have been exposed for less than 1,000 years. Lichenometry is especially useful for dating surfaces less than 500 years old, as radiocarbon dating techniques are less accurate over this period. The lichens most commonly used for lichenometry are those of the genera Rhizocarpon (e.g. the species Rhizocarpon geographicum, map lichen) and Xanthoria. Biodegradation Lichens have been shown to degrade polyester resins, as can be seen in archaeological sites in the Roman city of Baelo Claudia in Spain. Lichens can accumulate several environmental pollutants such as lead, copper, and radionuclides. Some species of lichen, such as Parmelia sulcata (called a hammered shield lichen, among other names) and Lobaria pulmonaria (lung lichen), and many in the Cladonia genus, have been shown to produce serine proteases capable of the degradation of pathogenic forms of prion protein (PrP), which may be useful in treating contaminated environmental reservoirs. Dyes Many lichens produce secondary compounds, including pigments that reduce harmful amounts of sunlight and powerful toxins that deter herbivores or kill bacteria. These compounds are very useful for lichen identification, and have had economic importance as dyes such as cudbear or primitive antibiotics. A pH indicator (which can indicate acidic or basic substances) called litmus is a dye extracted from the lichen Roccella tinctoria ("dyer's weed") by boiling. It gives its name to the well-known litmus test. Traditional dyes of the Scottish Highlands for Harris tweed and other traditional cloths were made from lichens, including the orange Xanthoria parietina ("common orange lichen") and the grey foliaceous Parmelia saxatilis common on rocks and known colloquially as "crottle". There are reports, dating back almost 2,000 years, of lichens being used to make purple and red dyes. Of great historical and commercial significance are lichens belonging to the family Roccellaceae, commonly called orchella weed or orchil. Orcein and other lichen dyes have largely been replaced by synthetic versions. 
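The lichenometric age estimate mentioned in the lichenometry passage above follows from a simple idea: under an assumed growth behaviour, the largest thallus diameter on a surface can be converted into a minimum time since that surface was exposed. The sketch below illustrates only that arithmetic; the growth-rate and "great period" values are hypothetical placeholders rather than published calibration data, and real studies replace them with growth curves calibrated for the local species and climate.

```python
# Illustrative lichenometric age estimate (hypothetical calibration values).
# Assumes a short initial "great period" of faster growth followed by steady
# linear growth; real lichenometry uses regionally calibrated growth curves.

def estimate_exposure_age(largest_diameter_mm: float,
                          linear_rate_mm_per_yr: float = 0.5,
                          great_period_years: float = 20.0,
                          great_period_growth_mm: float = 15.0) -> float:
    """Return an approximate exposure age (years) of a rock surface from the
    largest lichen thallus measured on it, under a two-phase growth model."""
    if largest_diameter_mm <= great_period_growth_mm:
        # Still within the assumed early fast-growth phase: scale proportionally.
        return great_period_years * (largest_diameter_mm / great_period_growth_mm)
    # After the great period, assume steady linear growth.
    remaining_mm = largest_diameter_mm - great_period_growth_mm
    return great_period_years + remaining_mm / linear_rate_mm_per_yr

if __name__ == "__main__":
    # Example: a 65 mm thallus under these placeholder assumptions.
    print(f"Estimated exposure age: {estimate_exposure_age(65.0):.0f} years")
```

With these placeholder values, a 65 mm thallus yields an estimate of roughly 120 years; the usefulness of any such figure depends entirely on how well the growth curve has been calibrated against surfaces of known age in the same region, which is why the technique is most reliable for surfaces exposed for less than about 1,000 years.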
Traditional medicine and research Historically, in traditional medicine of Europe, Lobaria pulmonaria was collected in large quantities as "lungwort", due to its lung-like appearance (the "doctrine of signatures" suggesting that herbs can treat body parts that they physically resemble). Similarly, Peltigera leucophlebia ("ruffled freckled pelt") was used as a supposed cure for thrush, due to the resemblance of its cephalodia to the appearance of the disease. Lichens produce metabolites that are being researched for their potential therapeutic or diagnostic value. Some metabolites produced by lichens are structurally and functionally similar to broad-spectrum antibiotics, while a few others have antiseptic properties. Usnic acid is the most commonly studied metabolite produced by lichens. It is also under research as a bactericidal agent against Escherichia coli and Staphylococcus aureus. Aesthetic appeal Colonies of lichens may be spectacular in appearance, dominating the surface of the visual landscape as part of the aesthetic appeal to visitors of Yosemite National Park, Sequoia National Park, and the Bay of Fires. Orange and yellow lichens add to the ambience of desert trees, tundras, and rocky seashores. Intricate webs of lichens hanging from tree branches add a mysterious aspect to forests. Fruticose lichens are used in model railroading and other modeling hobbies as a material for making miniature trees and shrubs. In literature In early Midrashic literature, the Hebrew word "vayilafeth" in Ruth 3:8 is explained as referring to Ruth entwining herself around Boaz like lichen. The 10th-century Arab physician Al-Tamimi mentions lichens dissolved in vinegar and rose water being used in his day for the treatment of skin diseases and rashes. The plot of John Wyndham's science fiction novel Trouble with Lichen revolves around an anti-aging chemical extracted from a lichen. History Although lichens had been recognized as organisms for quite some time, it was not until 1867, when Swiss botanist Simon Schwendener proposed his dual theory of lichens (that lichens are a combination of fungi with algae or cyanobacteria), that the true nature of the lichen association began to emerge. Schwendener's hypothesis, which at the time lacked experimental evidence, arose from his extensive analysis of the anatomy and development of lichens, algae, and fungi using a light microscope. Many of the leading lichenologists at the time, such as James Crombie and Nylander, rejected Schwendener's hypothesis because the consensus was that all living organisms were autonomous. Other prominent biologists, such as Heinrich Anton de Bary, Albert Bernhard Frank, Beatrix Potter, Melchior Treub and Hermann Hellriegel, were not so quick to reject Schwendener's ideas, and the concept soon spread into other areas of study, such as microbial, plant, animal and human pathogens. When the complex relationships between pathogenic microorganisms and their hosts were finally identified, Schwendener's hypothesis began to gain popularity. Further experimental proof of the dual nature of lichens was obtained when Eugen Thomas published his results in 1939 on the first successful re-synthesis experiment. In the 2010s, a new facet of the fungi–algae partnership was discovered. Toby Spribille and colleagues found that many types of lichen that were long thought to be ascomycete–algae pairs were actually ascomycete–basidiomycete–algae trios. The third symbiotic partner in many lichens is a basidiomycete yeast.
See also Lichenology Lichens and nitrogen cycling Mycophycobiosis - a symbiosis where a fungus lives in the macroscopic thallus of fresh water and marine algae; technically not a lichen but a similar phenomenon where fungi and algae are in symbiosis Notes References Cryptogams Polyextremophiles Oligotrophs Bioindicators Mycology Symbiosis Polyphyletic groups
Lichen
[ "Chemistry", "Biology", "Environmental_science" ]
13,585
[ "Behavior", "Symbiosis", "Bioindicators", "Cryptogams", "Biological interactions", "Environmental chemistry", "Mycology", "Polyphyletic groups", "Phylogenetics", "Eukaryotes" ]
172,552
https://en.wikipedia.org/wiki/Stunnel
Stunnel is an open-source multi-platform application used to provide a universal TLS/SSL tunneling service. Stunnel is used to provide secure encrypted connections for clients or servers that do not speak TLS or SSL natively. It runs on a variety of operating systems, including most Unix-like operating systems and Windows. Stunnel relies on the OpenSSL library to implement the underlying TLS or SSL protocol. Stunnel uses public-key cryptography with X.509 digital certificates to secure the SSL connection, and clients can optionally be authenticated via a certificate. If linked against libwrap, it can be configured to act as a proxy–firewall service as well. Stunnel is maintained by Polish programmer Michał Trojnara and released under the terms of the GNU General Public License (GPL) with OpenSSL exception. Example A stunnel can be used to provide a secure SSL connection to an existing non-SSL-aware SMTP mail server. Assuming the SMTP server expects TCP connections on port 25, the stunnel would be configured to map the SSL port 465 to non-SSL port 25. A mail client connects via SSL to port 465. Network traffic from the client initially passes over SSL to the stunnel application, which transparently encrypts and decrypts traffic and forwards unsecured traffic to port 25 locally. The mail server sees a non-SSL mail client. The stunnel process could be running on the same or a different server from the unsecured mail application; however, both machines would typically be behind a firewall on a secure internal network (so that an intruder could not make its own unsecured connection directly to port 25). See also Tunneling protocol References External links Cryptographic software Free security software Unix network-related software Transport Layer Security implementation Tunneling protocols Network protocols
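Referring back to the SMTP example described above, a deployment of this kind is normally driven by a short configuration file. The following is a minimal sketch only: the certificate path and the service name are assumptions chosen for this illustration rather than values taken from the Stunnel documentation, and a real deployment would add further hardening options.

```
; global options: server-mode stunnel with a combined certificate/key file (assumed path)
cert = /etc/stunnel/mail.pem

; one service section: terminate TLS on port 465 and forward plaintext to the local SMTP server
[smtps]
accept  = 465
connect = 127.0.0.1:25
```

With a configuration along these lines, the mail client connects over TLS to port 465 while the mail server continues to see an ordinary unencrypted connection on port 25, exactly as in the example above.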
Stunnel
[ "Mathematics", "Engineering" ]
406
[ "Cryptographic software", "Computer networks engineering", "Tunneling protocols", "Mathematical software" ]
172,564
https://en.wikipedia.org/wiki/Chinese%20postman%20problem
In graph theory and combinatorial optimization, Guan's route problem, the Chinese postman problem, postman tour or route inspection problem is to find a shortest closed path or circuit that visits every edge of a (connected) undirected graph at least once. When the graph has an Eulerian circuit (a closed walk that covers every edge once), that circuit is an optimal solution. Otherwise, the optimization problem is to find the smallest number of graph edges to duplicate (or the subset of edges with the minimum possible total weight) so that the resulting multigraph does have an Eulerian circuit. It can be solved in polynomial time, unlike the Travelling Salesman Problem, which is NP-hard. It differs from the Travelling Salesman Problem in that the travelling salesman cannot repeat visited nodes and does not have to visit every edge. The problem was originally studied by the Chinese mathematician Meigu Guan in 1960, whose Chinese paper was translated into English in 1962. The original name "Chinese postman problem" was coined in his honor; different sources credit the coinage either to Alan J. Goldman or Jack Edmonds, both of whom were at the U.S. National Bureau of Standards at the time. A generalization takes as input any set T containing an even number of vertices, and must produce as output a minimum-weight edge set in the graph whose odd-degree vertices are precisely those of T. This output is called a T-join. This problem, the T-join problem, is also solvable in polynomial time by the same approach that solves the postman problem. Undirected solution and T-joins The undirected route inspection problem can be solved in polynomial time by an algorithm based on the concept of a T-join. Let T be a set of vertices in a graph. An edge set J is called a T-join if the collection of vertices that have an odd number of incident edges in J is exactly the set T. A T-join exists whenever every connected component of the graph contains an even number of vertices in T. The T-join problem is to find a T-join with the minimum possible number of edges or the minimum possible total weight. For any T, a smallest T-join (when it exists) necessarily consists of paths that join the vertices of T in pairs. The paths will be such that the total length or total weight of all of them is as small as possible. In an optimal solution, no two of these paths will share any edge, but they may have shared vertices. A minimum T-join can be obtained by constructing a complete graph on the vertices of T, with edges that represent shortest paths in the given input graph, and then finding a minimum weight perfect matching in this complete graph. The edges of this matching represent paths in the original graph, whose union forms the desired T-join. Both constructing the complete graph, and then finding a matching in it, can be done in O(n^3) computational steps. For the route inspection problem, T should be chosen as the set of all odd-degree vertices. By the assumptions of the problem, the whole graph is connected (otherwise no tour exists), and by the handshaking lemma it has an even number of odd-degree vertices, so a T-join always exists. Doubling the edges of a T-join causes the given graph to become an Eulerian multigraph (a connected graph in which every vertex has even degree), from which it follows that it has an Euler tour, a tour that visits each edge of the multigraph exactly once. This tour will be an optimal solution to the route inspection problem.
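The T-join construction just described translates almost directly into code. The following Python sketch uses the networkx library (an assumption; any graph library with shortest paths and minimum-weight perfect matching would do) and returns the cost of an optimal postman tour of a connected, weighted, undirected graph. It is meant to illustrate the algorithm, not to be an optimized implementation.

```python
import itertools
import networkx as nx

def postman_tour_cost(G, weight="weight"):
    """Cost of an optimal route inspection (Chinese postman) tour of a
    connected undirected graph whose edges carry a numeric `weight`."""
    # T = the set of odd-degree vertices (even in number, by the handshaking lemma)
    odd = [v for v, deg in G.degree() if deg % 2 == 1]

    # Complete graph on T whose edge weights are shortest-path distances in G
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    K = nx.Graph()
    for u, v in itertools.combinations(odd, 2):
        K.add_edge(u, v, weight=dist[u][v])

    # A minimum-weight perfect matching of K gives a minimum T-join:
    # each matched pair corresponds to a shortest path whose edges are duplicated.
    matching = nx.min_weight_matching(K)
    duplicated = sum(dist[u][v] for u, v in matching)

    # Every original edge is traversed once, plus the duplicated shortest paths.
    return G.size(weight=weight) + duplicated
```

The matching step dominates the running time, broadly in line with the O(n^3) bound quoted above.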
Directed solution On a directed graph, the same general ideas apply, but different techniques must be used. If the directed graph is Eulerian, one need only find an Euler cycle. If it is not, one must find T-joins, which in this case entails finding paths from vertices with an in-degree greater than their out-degree to those with an out-degree greater than their in-degree such that they would make in-degree of every vertex equal to its out-degree. This can be solved as an instance of the minimum-cost flow problem in which there is one unit of supply for every unit of excess in-degree, and one unit of demand for every unit of excess out-degree. As such it is solvable in O(|V|2|E|) time. A solution exists if and only if the given graph is strongly connected. Applications Various combinatorial problems have been reduced to the Chinese Postman Problem, including finding a maximum cut in a planar graph and a minimum-mean length circuit in an undirected graph. Variants A few variants of the Chinese Postman Problem have been studied and shown to be NP-complete. The windy postman problem is a variant of the route inspection problem in which the input is an undirected graph, but where each edge may have a different cost for traversing it in one direction than for traversing it in the other direction. In contrast to the solutions for directed and undirected graphs, it is NP-complete. The Mixed Chinese postman problem: for this problem, some of the edges may be directed and can therefore only be visited from one direction. When the problem calls for a minimal traversal of a digraph (or multidigraph) it is known as the "New York Street Sweeper problem." The k-Chinese postman problem: find k cycles all starting at a designated location such that each edge is traversed by at least one cycle. The goal is to minimize the cost of the most expensive cycle. The "Rural Postman Problem": solve the problem with some edges not required. See also Travelling salesman problem Arc routing Mixed Chinese postman problem References External links NP-complete problems Computational problems in graph theory
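Referring back to the minimum-cost flow reduction described in the Directed solution section above, the same idea can be sketched in the same style. Again networkx is an assumption, integer edge weights are assumed (the underlying network simplex routine is not reliable with floating-point data), and the input digraph must be strongly connected for a tour to exist.

```python
import networkx as nx

def directed_postman_cost(G, weight="weight"):
    """Cost of an optimal postman tour of a strongly connected nx.DiGraph
    with integer edge weights."""
    F = nx.DiGraph()
    # Each unit of excess in-degree must send one unit of flow (negative demand);
    # each unit of excess out-degree must receive one (positive demand).
    for v in G:
        F.add_node(v, demand=G.out_degree(v) - G.in_degree(v))
    # Flow may be routed along any edge, uncapacitated, at a cost equal to its weight;
    # one unit of flow on an edge corresponds to duplicating that edge once.
    for u, v, w in G.edges(data=weight, default=1):
        F.add_edge(u, v, weight=w)
    extra = nx.min_cost_flow_cost(F)
    return G.size(weight=weight) + extra
```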
Chinese postman problem
[ "Mathematics" ]
1,197
[ "Computational problems in graph theory", "Computational mathematics", "Graph theory", "Computational problems", "Mathematical relations", "Mathematical problems", "NP-complete problems" ]
172,566
https://en.wikipedia.org/wiki/Rete%20algorithm
The Rete algorithm is a pattern matching algorithm for implementing rule-based systems. The algorithm was developed to efficiently apply many rules or patterns to many objects, or facts, in a knowledge base. It is used to determine which of the system's rules should fire based on its data store, its facts. The Rete algorithm was designed by Charles L. Forgy of Carnegie Mellon University, first published in a working paper in 1974, and later elaborated in his 1979 Ph.D. thesis and a 1982 paper. Overview A naive implementation of an expert system might check each rule against known facts in a knowledge base, firing that rule if necessary, then moving on to the next rule (and looping back to the first rule when finished). For even moderately sized knowledge bases of rules and facts, this naive approach performs far too slowly. The Rete algorithm provides the basis for a more efficient implementation. A Rete-based expert system builds a network of nodes, where each node (except the root) corresponds to a pattern occurring in the left-hand-side (the condition part) of a rule. The path from the root node to a leaf node defines a complete rule left-hand-side. Each node has a memory of facts that satisfy that pattern. This structure is essentially a generalized trie. As new facts are asserted or modified, they propagate along the network, causing nodes to be annotated when that fact matches that pattern. When a fact or combination of facts causes all of the patterns for a given rule to be satisfied, a leaf node is reached and the corresponding rule is triggered. Rete was first used as the core engine of the OPS5 production system language, which was used to build early systems including R1 for Digital Equipment Corporation. Rete has become the basis for many popular rule engines and expert system shells, including CLIPS, Jess, Drools, IBM Operational Decision Management, BizTalk Rules Engine, Soar, and Evrete. The word 'Rete' is Latin for 'net' or 'comb'. The same word is used in modern Italian to mean 'network'. Charles Forgy has reportedly stated that he adopted the term 'Rete' because of its use in anatomy to describe a network of blood vessels and nerve fibers. The Rete algorithm is designed to sacrifice memory for increased speed. In most cases, the speed increase over naïve implementations is several orders of magnitude (because Rete performance is theoretically independent of the number of rules in the system). In very large expert systems, however, the original Rete algorithm tends to run into memory and server consumption problems. Other algorithms, both novel and Rete-based, have since been designed that require less memory (e.g. Rete* or Collection Oriented Match). Description The Rete algorithm provides a generalized logical description of an implementation of functionality responsible for matching data tuples ("facts") against productions ("rules") in a pattern-matching production system (a category of rule engine). A production consists of one or more conditions and a set of actions that may be undertaken for each complete set of facts that match the conditions. Conditions test fact attributes, including fact type specifiers/identifiers. The Rete algorithm exhibits the following major characteristics: It reduces or eliminates certain types of redundancy through the use of node sharing. It stores partial matches when performing joins between different fact types.
This, in turn, allows production systems to avoid complete re-evaluation of all facts each time changes are made to the production system's working memory. Instead, the production system needs only to evaluate the changes (deltas) to working memory. It allows for efficient removal of memory elements when facts are retracted from working memory. The Rete algorithm is widely used to implement matching functionality within pattern-matching engines that exploit a match-resolve-act cycle to support forward chaining and inferencing. It provides a means for many–many matching, an important feature when many or all possible solutions in a search network must be found. Retes are directed acyclic graphs that represent higher-level rule sets. They are generally represented at run-time using a network of in-memory objects. These networks match rule conditions (patterns) to facts (relational data tuples). Rete networks act as a type of relational query processor, performing projections, selections and joins conditionally on arbitrary numbers of data tuples. Productions (rules) are typically captured and defined by analysts and developers using some high-level rules language. They are collected into rule sets that are then translated, often at run time, into an executable Rete. When facts are "asserted" to working memory, the engine creates working memory elements (WMEs) for each fact. Facts are tuples, and may therefore contain an arbitrary number of data items. Each WME may hold an entire tuple, or, alternatively, each fact may be represented by a set of WMEs where each WME contains a fixed-length tuple. In this case, tuples are typically triplets (3-tuples). Each WME enters the Rete network at a single root node. The root node passes each WME on to its child nodes, and each WME may then be propagated through the network, possibly being stored in intermediate memories, until it arrives at a terminal node. Alpha network The "left" (alpha) side of the node graph forms a discrimination network responsible for selecting individual WMEs based on simple conditional tests that match WME attributes against constant values. Nodes in the discrimination network may also perform tests that compare two or more attributes of the same WME. If a WME is successfully matched against the conditions represented by one node, it is passed to the next node. In most engines, the immediate child nodes of the root node are used to test the entity identifier or fact type of each WME. Hence, all the WMEs that represent the same entity type typically traverse a given branch of nodes in the discrimination network. Within the discrimination network, each branch of alpha nodes (also called 1-input nodes) terminates at a memory, called an alpha memory. These memories store collections of WMEs that match each condition in each node in a given node branch. WMEs that fail to match at least one condition in a branch are not materialised within the corresponding alpha memory. Alpha node branches may fork in order to minimise condition redundancy. Beta network The "right" (beta) side of the graph chiefly performs joins between different WMEs. It is optional, and is only included if required. It consists of 2-input nodes where each node has a "left" and a "right" input. Each beta node sends its output to a beta memory. In descriptions of Rete, it is common to refer to token passing within the beta network. 
In this article, however, we will describe data propagation in terms of WME lists, rather than tokens, in recognition of different implementation options and the underlying purpose and use of tokens. As any one WME list passes through the beta network, new WMEs may be added to it, and the list may be stored in beta memories. A WME list in a beta memory represents a partial match for the conditions in a given production. WME lists that reach the end of a branch of beta nodes represent a complete match for a single production, and are passed to terminal nodes. These nodes are sometimes called p-nodes, where "p" stands for production. Each terminal node represents a single production, and each WME list that arrives at a terminal node represents a complete set of matching WMEs for the conditions in that production. For each WME list it receives, a production node will "activate" a new production instance on the "agenda". Agendas are typically implemented as prioritised queues. Beta nodes typically perform joins between WME lists stored in beta memories and individual WMEs stored in alpha memories. Each beta node is associated with two input memories. An alpha memory holds WMEs and performs "right" activations on the beta node each time it stores a new WME. A beta memory holds WME lists and performs "left" activations on the beta node each time it stores a new WME list. When a join node is right-activated, it compares one or more attributes of the newly stored WME from its input alpha memory against given attributes of specific WMEs in each WME list contained in the input beta memory. When a join node is left-activated it traverses a single newly stored WME list in the beta memory, retrieving specific attribute values of given WMEs. It compares these values with attribute values of each WME in the alpha memory. Each beta node outputs WME lists that are either stored in a beta memory or sent directly to a terminal node. WME lists are stored in beta memories whenever the engine will perform additional left activations on subsequent beta nodes. Logically, a beta node at the head of a branch of beta nodes is a special case because it takes no input from any beta memory higher in the network. Different engines handle this issue in different ways. Some engines use specialised adapter nodes to connect alpha memories to the left input of beta nodes. Other engines allow beta nodes to take input directly from two alpha memories, treating one as a "left" input and the other as a "right" input. In both cases, "head" beta nodes take their input from two alpha memories. In order to eliminate node redundancies, any one alpha or beta memory may be used to perform activations on multiple beta nodes. As well as join nodes, the beta network may contain additional node types, some of which are described below. If a Rete contains no beta network, alpha nodes feed tokens, each containing a single WME, directly to p-nodes. In this case, there may be no need to store WMEs in alpha memories. Conflict resolution During any one match-resolve-act cycle, the engine will find all possible matches for the facts currently asserted to working memory. Once all the current matches have been found, and corresponding production instances have been activated on the agenda, the engine determines an order in which the production instances may be "fired". This is termed conflict resolution, and the list of activated production instances is termed the conflict set.
The order may be based on rule priority (salience), rule order, the time at which facts contained in each instance were asserted to the working memory, the complexity of each production, or some other criteria. Many engines allow rule developers to select between different conflict resolution strategies or to chain a selection of multiple strategies. Conflict resolution is not defined as part of the Rete algorithm, but is used alongside the algorithm. Some specialised production systems do not perform conflict resolution. Production execution Having performed conflict resolution, the engine now "fires" the first production instance, executing a list of actions associated with the production. The actions act on the data represented by the production instance's WME list. By default, the engine will continue to fire each production instance in order until all production instances have been fired. Each production instance will fire only once, at most, during any one match-resolve-act cycle. This characteristic is termed refraction. However, the sequence of production instance firings may be interrupted at any stage by performing changes to the working memory. Rule actions can contain instructions to assert or retract WMEs from the working memory of the engine. Each time any single production instance performs one or more such changes, the engine immediately enters a new match-resolve-act cycle. This includes "updates" to WMEs currently in the working memory. Updates are represented by retracting and then re-asserting the WME. The engine undertakes matching of the changed data which, in turn, may result in changes to the list of production instances on the agenda. Hence, after the actions for any one specific production instance have been executed, previously activated instances may have been de-activated and removed from the agenda, and new instances may have been activated. As part of the new match-resolve-act cycle, the engine performs conflict resolution on the agenda and then executes the current first instance. The engine continues to fire production instances, and to enter new match-resolve-act cycles, until no further production instances exist on the agenda. At this point the rule engine is deemed to have completed its work, and halts. Some engines support advanced refraction strategies in which certain production instances executed in a previous cycle are not re-executed in the new cycle, even though they may still exist on the agenda. It is possible for the engine to enter into never-ending loops in which the agenda never reaches the empty state. For this reason, most engines support explicit "halt" verbs that can be invoked from production action lists. They may also provide automatic loop detection in which never-ending loops are automatically halted after a given number of iterations. Some engines support a model in which, instead of halting when the agenda is empty, the engine enters a wait state until new facts are asserted externally. As for conflict resolution, the firing of activated production instances is not a feature of the Rete algorithm. However, it is a central feature of engines that use Rete networks. Some of the optimisations offered by Rete networks are only useful in scenarios where the engine performs multiple match-resolve-act cycles. Existential and universal quantifications Conditional tests are most commonly used to perform selections and joins on individual tuples. 
However, by implementing additional beta node types, it is possible for Rete networks to perform quantifications. Existential quantification involves testing for the existence of at least one set of matching WMEs in working memory. Universal quantification involves testing that an entire set of WMEs in working memory meets a given condition. A variation of universal quantification might test that a given number of WMEs, drawn from a set of WMEs, meets given criteria. This might be in terms of testing for either an exact number or a minimum number of matches. Quantification is not universally implemented in Rete engines, and, where it is supported, several variations exist. A variant of existential quantification referred to as negation is widely, though not universally, supported, and is described in seminal documents. Existentially negated conditions and conjunctions involve the use of specialised beta nodes that test for non-existence of matching WMEs or sets of WMEs. These nodes propagate WME lists only when no match is found. The exact implementation of negation varies. In one approach, the node maintains a simple count on each WME list it receives from its left input. The count specifies the number of matches found with WMEs received from the right input. The node only propagates WME lists whose count is zero. In another approach, the node maintains an additional memory on each WME list received from the left input. These memories are a form of beta memory, and store WME lists for each match with WMEs received on the right input. If a WME list does not have any WME lists in its memory, it is propagated down the network. In this approach, negation nodes generally activate further beta nodes directly, rather than storing their output in an additional beta memory. Negation nodes provide a form of 'negation as failure'. When changes are made to working memory, a WME list that previously matched no WMEs may now match newly asserted WMEs. In this case, the propagated WME list and all its extended copies need to be retracted from beta memories further down the network. The second approach described above is often used to support efficient mechanisms for removal of WME lists. When WME lists are removed, any corresponding production instances are de-activated and removed from the agenda. Existential quantification can be performed by combining two negation beta nodes. This represents the semantics of double negation (e.g., "If NOT NOT any matching WMEs, then..."). This is a common approach taken by several production systems. Memory indexing The Rete algorithm does not mandate any specific approach to indexing the working memory. However, most modern production systems provide indexing mechanisms. In some cases, only beta memories are indexed, whilst in others, indexing is used for both alpha and beta memories. A good indexing strategy is a major factor in deciding the overall performance of a production system, especially when executing rule sets that result in highly combinatorial pattern matching (i.e., intensive use of beta join nodes), or, for some engines, when executing rules sets that perform a significant number of WME retractions during multiple match-resolve-act cycles. Memories are often implemented using combinations of hash tables, and hash values are used to perform conditional joins on subsets of WME lists and WMEs, rather than on the entire contents of memories. This, in turn, often significantly reduces the number of evaluations performed by the Rete network. 
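The hash-based indexing just described can be illustrated with a small sketch (written here in Python purely for illustration; the class and attribute names are invented for this example and do not come from any particular Rete implementation). The idea is that a memory groups its WMEs by the value of the attribute used in an equality join, so that a probe from a join node only has to scan one bucket rather than the whole memory.

```python
from collections import defaultdict

class IndexedMemory:
    """A WME memory bucketed by the attribute used in an equality join."""
    def __init__(self, join_attribute):
        self.join_attribute = join_attribute
        self.buckets = defaultdict(list)

    def add(self, wme):
        # wme is assumed to be a dict-like fact, e.g. {"type": "order", "customer": 42, ...}
        self.buckets[wme[self.join_attribute]].append(wme)

    def matches(self, value):
        # A join node probing this memory touches only the relevant bucket,
        # which is what cuts the number of evaluations mentioned above.
        return self.buckets.get(value, [])
```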
Removal of WMEs and WME lists When a WME is retracted from working memory, it must be removed from every alpha memory in which it is stored. In addition, WME lists that contain the WME must be removed from beta memories, and activated production instances for these WME lists must be de-activated and removed from the agenda. Several implementation variations exist, including tree-based and rematch-based removal. Memory indexing may be used in some cases to optimise removal. Handling ORed conditions When defining productions in a rule set, it is common to allow conditions to be grouped using an OR connective. In many production systems, this is handled by interpreting a single production containing multiple ORed patterns as the equivalent of multiple productions. The resulting Rete network contains sets of terminal nodes which, together, represent single productions. This approach disallows any form of short-circuiting of the ORed conditions. It can also, in some cases, lead to duplicate production instances being activated on the agenda where the same set of WMEs match multiple internal productions. Some engines provide agenda de-duplication in order to handle this issue. Diagram The following diagram illustrates the basic Rete topography, and shows the associations between different node types and memories. Most implementations use type nodes to perform the first level of selection on tuple working memory elements. Type nodes can be considered as specialized select nodes. They discriminate between different tuple relation types. The diagram does not illustrate the use of specialized nodes types such as negated conjunction nodes. Some engines implement several different node specialisations in order to extend functionality and maximise optimisation. The diagram provides a logical view of the Rete. Implementations may differ in physical detail. In particular, the diagram shows dummy inputs providing right activations at the head of beta node branches. Engines may implement other approaches, such as adapters that allow alpha memories to perform right activations directly. The diagram does not illustrate all node-sharing possibilities. For a more detailed and complete description of the Rete algorithm, see chapter 2 of Production Matching for Large Learning Systems by Robert Doorenbos (see link below). Alternatives Alpha Network A possible variation is to introduce additional memories for each intermediate node in the discrimination network. This increases the overhead of the Rete, but may have advantages in situations where rules are dynamically added to or removed from the Rete, making it easier to vary the topology of the discrimination network dynamically. An alternative implementation is described by Doorenbos. In this case, the discrimination network is replaced by a set of memories and an index. The index may be implemented using a hash table. Each memory holds WMEs that match a single conditional pattern, and the index is used to reference memories by their pattern. This approach is only practical when WMEs represent fixed-length tuples, and the length of each tuple is short (e.g., 3-tuples). In addition, the approach only applies to conditional patterns that perform equality tests against constant values. When a WME enters the Rete, the index is used to locate a set of memories whose conditional pattern matches the WME attributes, and the WME is then added directly to each of these memories. In itself, this implementation contains no 1-input nodes. 
However, in order to implement non-equality tests, the Rete may contain additional 1-input node networks through which WMEs are passed before being placed in a memory. Alternatively, non-equality tests may be performed in the beta network described below. Beta Network A common variation is to build linked lists of tokens where each token holds a single WME. In this case, lists of WMEs for a partial match are represented by the linked list of tokens. This approach may be better because it eliminates the need to copy lists of WMEs from one token to another. Instead, a beta node needs only to create a new token to hold a WME it wishes to join to the partial match list, and then link the new token to a parent token stored in the input beta memory. The new token now forms the head of the token list, and is stored in the output beta memory. Beta nodes process tokens. A token is a unit of storage within a memory and also a unit of exchange between memories and nodes. In many implementations, tokens are introduced within alpha memories where they are used to hold single WMEs. These tokens are then passed to the beta network. Each beta node performs its work and, as a result, may create new tokens to hold a list of WMEs representing a partial match. These extended tokens are then stored in beta memories, and passed to subsequent beta nodes. In this case, the beta nodes typically pass lists of WMEs through the beta network by copying existing WME lists from each received token into new tokens and then adding further WMEs to the lists as a result of performing a join or some other action. The new tokens are then stored in the output memory. Miscellaneous considerations Although not defined by the Rete algorithm, some engines provide extended functionality to support greater control of truth maintenance. For example, when a match is found for one production, this may result in the assertion of new WMEs which, in turn, match the conditions for another production. If a subsequent change to working memory causes the first match to become invalid, it may be that this implies that the second match is also invalid. The Rete algorithm does not define any mechanism to define and handle these logical truth dependencies automatically. Some engines, however, support additional functionality in which truth dependencies can be automatically maintained. In this case, the retraction of one WME may lead to the automatic retraction of additional WMEs in order to maintain logical truth assertions. The Rete algorithm does not define any approach to justification. Justification refers to mechanisms commonly required in expert and decision systems in which, at its simplest, the system reports each of the inner decisions used to reach some final conclusion. For example, an expert system might justify a conclusion that an animal is an elephant by reporting that it is large, grey, has big ears, a trunk and tusks. Some engines provide built-in justification systems in conjunction with their implementation of the Rete algorithm. This article does not provide an exhaustive description of every possible variation or extension of the Rete algorithm. Other considerations and innovations exist. For example, engines may provide specialised support within the Rete network in order to apply pattern-matching rule processing to specific data types and sources such as programmatic objects, XML data or relational data tables. 
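The alpha-network alternative described above, in which the discrimination network is replaced by an index of pattern-keyed memories, can also be sketched briefly. This is only a toy illustration under the stated assumptions of that variant: WMEs are fixed-length 3-tuples and conditions perform only equality tests against constants (None is used here as the wildcard); the class and method names are invented for the example.

```python
from collections import defaultdict
from itertools import product

class AlphaIndex:
    def __init__(self):
        self.patterns = set()                 # constant-test patterns registered by rules
        self.memories = defaultdict(list)     # pattern -> WMEs matching it

    def add_pattern(self, pattern):
        # e.g. ("goal", None, "simplify"): constants must match, None matches anything
        self.patterns.add(pattern)

    def add_wme(self, wme):
        # Enumerate the 2**3 ways of keeping or wildcarding each field of the WME,
        # and file the WME in every registered memory whose key it produces.
        for mask in product((True, False), repeat=3):
            key = tuple(field if keep else None for field, keep in zip(wme, mask))
            if key in self.patterns:
                self.memories[key].append(wme)
```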
Another example concerns additional time-stamping facilities provided by many engines for each WME entering a Rete network, and the use of these time-stamps in conjunction with conflict resolution strategies. Engines exhibit significant variation in the way they allow programmatic access to the engine and its working memory, and may extend the basic Rete model to support forms of parallel and distributed processing. Optimization and performance Several optimizations for Rete have been identified and described in academic literature. Several of these, however, apply only in very specific scenarios, and therefore often have little or no application in a general-purpose rules engine. In addition, alternative algorithms such as TREAT (developed by Daniel P. Miranker), LEAPS, and Design Time Inferencing (DeTI) have been formulated that may provide additional performance improvements. The Rete algorithm is suited to scenarios where forward chaining and "inferencing" is used to calculate new facts from existing facts, or to filter and discard facts in order to arrive at some conclusion. It is also exploited as a reasonably efficient mechanism for performing highly combinatorial evaluations of facts where large numbers of joins must be performed between fact tuples. Other approaches to performing rule evaluation, such as the use of decision trees, or the implementation of sequential engines, may be more appropriate for simple scenarios, and should be considered as possible alternatives. Performance of Rete is also largely a matter of implementation choices (independent of the network topology), one of which (the use of hash tables) leads to major improvements. Most of the performance benchmarks and comparisons available on the web are biased in some way or another. To mention only a frequent bias and an unfair type of comparison: 1) the use of toy problems such as the Manners and Waltz examples; such examples are useful to estimate specific properties of the implementation, but they may not reflect real performance on complex applications; 2) the use of an old implementation; for instance, the references in the following two sections (Rete II and Rete-NT) compare some commercial products to totally outdated versions of CLIPS and they claim that the commercial products may be orders of magnitude faster than CLIPS; this is forgetting that CLIPS 6.30 (with the introduction of hash tables as in Rete II) is orders of magnitude faster than the version used for the comparisons (CLIPS 6.04). Variants Rete II In the 1980s, Charles Forgy developed a successor to the Rete algorithm named Rete II. Unlike the original Rete (which is public domain) this algorithm was not disclosed. Rete II claims better performance for more complex problems (even orders of magnitude), and is officially implemented in CLIPS/R2, a C/C++ implementation, and in OPSJ, a Java implementation, in 1998. Rete II gives about a 100 to 1 order of magnitude performance improvement in more complex problems as shown by KnowledgeBased Systems Corporation benchmarks. Rete II can be characterized by two areas of improvement: specific optimizations relating to the general performance of the Rete network (including the use of hashed memories in order to increase performance with larger sets of data), and the inclusion of a backward chaining algorithm tailored to run on top of the Rete network. Backward chaining alone can account for the most extreme changes in benchmarks relating to Rete vs. Rete II.
Rete II is implemented in the commercial product Advisor from FICO, formerly called Fair Isaac. Jess (at least versions 5.0 and later) also adds a commercial backward chaining algorithm on top of the Rete network, but it cannot be said to fully implement Rete II, in part due to the fact that no full specification is publicly available. Rete-III In the early 2000s, the Rete III engine was developed by Charles Forgy in cooperation with FICO engineers. The Rete III algorithm, which is not Rete-NT, is the FICO trademark for Rete II and is implemented as part of the FICO Advisor engine. It is basically the Rete II engine with an API that allows access to the Advisor engine because the Advisor engine can access other FICO products. Rete-NT In 2010, Forgy developed a new generation of the Rete algorithm. In an InfoWorld benchmark, the algorithm was deemed 500 times faster than the original Rete algorithm and 10 times faster than its predecessor, Rete II. This algorithm is now licensed to Sparkling Logic, the company that Forgy joined as investor and strategic advisor, as the inference engine of the SMARTS product. Rete-OO Considering that Rete aims to support first-order logic (basically if-then-else statements), Rete-OO aims to provide a rule-based system that supports uncertainty (where the information to make a decision is missing or is inaccurate). According to the author's proposal, the rule "if Danger then Alarm" would be improved to something such as "given the probability of Danger, there will be a certain probability of hearing an Alarm" or even "the greater the Danger, the louder the Alarm should be". For this it extends the Drools language (which already implements the Rete algorithm) to make it support probabilistic logic, like fuzzy logic and Bayesian networks. See also Action selection mechanism Inference engine References External links Rete Algorithm explained Bruce Schneier, Dr. Dobb's Journal Production Matching for Large Learning Systems – R Doorenbos Detailed and accessible description of Rete, also describes a variant named Rete/UL, optimised for large systems (PDF) According to the Rules (A short introduction from cut-the-knot) Expert systems Pattern matching
Rete algorithm
[ "Technology" ]
6,028
[ "Information systems", "Expert systems" ]
172,586
https://en.wikipedia.org/wiki/Laser%20cooling
Laser cooling, sometimes also referred to as Doppler cooling, includes several techniques where atoms, molecules, and small mechanical systems are cooled with laser light. The directed energy of lasers is often associated with heating materials, e.g. laser cutting, so it can be counterintuitive that laser cooling often results in sample temperatures approaching absolute zero. It is a routine step in many atomic physics experiments where the laser-cooled atoms are then subsequently manipulated and measured, or in technologies, such as atom-based quantum computing architectures. Laser cooling relies on the change in momentum when an object, such as an atom, absorbs and re-emits a photon (a particle of light). For example, if laser light illuminates a warm cloud of atoms from all directions and the laser's frequency is tuned below an atomic resonance, the atoms will be cooled. This common type of laser cooling relies on the Doppler effect where individual atoms will preferentially absorb laser light from the direction opposite to the atom's motion. The absorbed light is re-emitted by the atom in a random direction. After repeated emission and absorption of light the net effect on the cloud of atoms is that they will expand more slowly. The slower expansion reflects a decrease in the velocity distribution of the atoms, which corresponds to a lower temperature and therefore the atoms have been cooled. For an ensemble of particles, their thermodynamic temperature is proportional to the variance in their velocity, therefore the lower the distribution of velocities, the lower temperature of the particles. The 1997 Nobel Prize in Physics was awarded to Claude Cohen-Tannoudji, Steven Chu, and William Daniel Phillips "for development of methods to cool and trap atoms with laser light". History Radiation pressure Radiation pressure is the force that electromagnetic radiation exerts on matter. In 1873 Maxwell published his treatise on electromagnetism in which he predicted radiation pressure. The force was experimentally demonstrated for the first time by Lebedev and reported at a conference in Paris in 1900, and later published in more detail in 1901. Following Lebedev's measurements Nichols and Hull also demonstrated the force of radiation pressure in 1901, with a refined measurement reported in 1903. Atoms and molecules have bound states and transitions can occur between these states in the presence of light. Sodium is historically notable because it has a strong transition at 589 nm, a wavelength which is close to the peak sensitivity of the human eye. This made it easy to see the interaction of light with sodium atoms. In 1933, Otto Frisch deflected an atomic beam of sodium atoms with light. This was the first realization of radiation pressure acting on an atom or molecule. Laser cooling proposals The introduction of lasers in atomic physics experiments was the precursor to the laser cooling proposals in the mid 1970s. Laser cooling was proposed separately in 1975 by two different research groups: Hänsch and Schawlow, and Wineland and Dehmelt. Both proposals outlined the simplest laser cooling process, known as Doppler cooling, where laser light tuned below an atom's resonant frequency is preferentially absorbed by atoms moving towards the laser and after absorption a photon is emitted in a random direction. This process is repeated many times and in a configuration with counterpropagating laser cooling light the velocity distribution of the atoms is reduced. 
In 1977 Ashkin submitted a paper which describes how Doppler cooling could be used to provide the necessary damping to load atoms into an optical trap. In this work he emphasized how this could allow for long spectroscopic measurements which would increase precision because the atoms would be held in place. He also discussed overlapping optical traps to study interactions between different atoms. Initial realizations Following the laser cooling proposals, in 1978 two research groups, Wineland, Drullinger and Walls of NIST, and Neuhauser, Hohenstatt, Toschek and Dehmelt of the University of Washington, succeeded in laser cooling atoms. The NIST group wanted to reduce the effect of Doppler broadening on spectroscopy. They cooled magnesium ions in a Penning trap to below 40 K. The Washington group cooled barium ions. The research from both groups served to illustrate the mechanical properties of light. Influenced by Wineland's work on laser cooling ions, William Phillips applied the same principles to laser cool neutral atoms. In 1982, he published the first paper where neutral atoms were laser cooled. The process used is now known as the Zeeman slower and is a standard technique for slowing an atomic beam. Modern advances Atoms The Doppler cooling limit for electric dipole transitions is typically in the hundreds of microkelvins. In the 1980s this limit was seen as the lowest achievable temperature. It was therefore a surprise when sodium atoms were cooled to 43 microkelvins, even though their Doppler cooling limit is 240 microkelvins; this unforeseen low temperature was explained by considering the interaction of polarized laser light with more atomic states and transitions. Previous conceptions of laser cooling turned out to have been too simplistic. The major laser cooling breakthroughs in the 70s and 80s led to several improvements to preexisting technology and new discoveries with temperatures just above absolute zero. The cooling processes were utilized to make atomic clocks more accurate and to improve spectroscopic measurements, and led to the observation of a new state of matter at ultracold temperatures. The new state of matter, the Bose–Einstein condensate, was observed in 1995 by Eric Cornell, Carl Wieman, and Wolfgang Ketterle. Exotic Atoms Most laser cooling experiments bring the atoms close to rest in the laboratory frame, but cooling of relativistic atoms has also been achieved, where the effect of cooling manifests as a narrowing of the velocity distribution. In 1990, a group at JGU successfully laser-cooled a beam of 7Li+ ions in a storage ring, using two counter-propagating lasers addressing the same transition at two different frequencies to compensate for the large Doppler shift. Laser cooling of antimatter has also been demonstrated, first in 2021 by the ALPHA collaboration on antihydrogen atoms. Molecules Molecules are significantly more challenging to laser cool than atoms because molecules have vibrational and rotational degrees of freedom. These extra degrees of freedom result in more energy levels that can be populated from excited state decays, requiring more lasers compared to atoms to address the more complex level structure. Vibrational decays are particularly challenging because there are no symmetry rules that restrict the vibrational states that can be populated. In 2010, a team at Yale successfully laser-cooled a diatomic molecule. In 2016, a group at MPQ successfully cooled formaldehyde via optoelectric Sisyphus cooling.
In 2022, a group at Harvard successfully laser cooled and trapped CaOH in a magneto-optical trap. Mechanical systems Starting in the 2000s, laser cooling was applied to small mechanical systems, ranging from small cantilevers to the mirrors used in the LIGO observatory. These devices are connected to a larger substrate, such as a mechanical membrane attached to a frame, or they are held in optical traps; in both cases the mechanical system is a harmonic oscillator. Laser cooling reduces the random vibrations of the mechanical oscillator, removing thermal phonons from the system. In 2007, an MIT team successfully laser-cooled a macro-scale (1 gram) object to 0.8 K. In 2011, a team from the California Institute of Technology and the University of Vienna became the first to laser-cool a (10 μm × 1 μm) mechanical object to its quantum ground state. Methods The first example of laser cooling, and also still the most common method (so much so that it is still often referred to simply as 'laser cooling'), is Doppler cooling. Doppler cooling Doppler cooling, which is usually accompanied by a magnetic trapping force to give a magneto-optical trap, is by far the most common method of laser cooling. It is used to cool low density gases down to the Doppler cooling limit, which for rubidium-85 is around 150 microkelvins. In Doppler cooling, initially, the frequency of light is tuned slightly below an electronic transition in the atom. Because the light is detuned to the "red" (i.e., at lower frequency) of the transition, the atoms will absorb more photons if they move towards the light source, due to the Doppler effect. Thus if one applies light from two opposite directions, the atoms will always scatter more photons from the laser beam pointing opposite to their direction of motion. In each scattering event the atom loses a momentum equal to the momentum of the photon. If the atom, which is now in the excited state, then emits a photon spontaneously, it will be kicked by the same amount of momentum, but in a random direction. Since the initial momentum change is a pure loss (opposing the direction of motion), while the subsequent change is random, the probable result of the absorption and emission process is to reduce the momentum of the atom, and therefore its speed, provided its initial speed was larger than the recoil speed from scattering a single photon. If the absorption and emission are repeated many times, the average speed, and therefore the kinetic energy of the atom, will be reduced. Since the temperature of a group of atoms is a measure of the average random internal kinetic energy, this is equivalent to cooling the atoms. Other methods Other methods of laser cooling include Sisyphus cooling, resolved sideband cooling, Raman sideband cooling, velocity selective coherent population trapping (VSCPT), gray molasses, optical molasses, cavity-mediated cooling, the use of a Zeeman slower, electromagnetically induced transparency (EIT) cooling, anti-Stokes cooling in solids, and polarization gradient cooling. Applications Laser cooling is very common in the field of atomic physics. Reducing the random motion of atoms has several benefits, including the ability to trap atoms with optical or magnetic fields. Spectroscopic measurements of a cold atomic sample will also have reduced systematic uncertainties due to thermal motion. Often multiple laser cooling techniques are used in a single experiment to prepare a cold sample of atoms, which is then subsequently manipulated and measured.
In a representative experiment, a vapor of strontium atoms is generated in a hot oven, and the atoms exit the oven as an atomic beam. After leaving the oven the atoms are Doppler cooled in two dimensions transverse to their motion to reduce loss of atoms due to divergence of the atomic beam. The atomic beam is then slowed and cooled with a Zeeman slower to optimize the atom loading efficiency into a magneto-optical trap (MOT), which Doppler cools the atoms with lasers at 461 nm. The MOT then switches from light at 461 nm to light at 689 nm, which drives a much narrower transition, to realize even colder atoms. The atoms are then transferred into an optical dipole trap where evaporative cooling gets them to temperatures where they can be effectively loaded into an optical lattice. Laser cooling is important for quantum computing efforts based on neutral atoms and trapped atomic ions. In an ion trap Doppler cooling reduces the random motion of the ions so they form a well-ordered crystal structure in the trap. After Doppler cooling the ions are often cooled to their motional ground state to reduce decoherence during quantum gates between ions. Equipment Laser cooling atoms (and molecules especially) requires specialized experimental equipment that when assembled forms a cold atom machine. Such a machine generally consists of two parts: a vacuum chamber which houses the laser cooled atoms, and the laser systems used for cooling, as well as for preparing and manipulating atomic states and detecting the atoms. Vacuum system In order for atoms to be laser cooled, the atoms cannot collide with room temperature background gas particles. Such collisions will drastically heat the atoms, and knock them out of weak traps. Acceptable collision rates for cold atom machines typically require vacuum pressures of 10^−9 Torr, and very often hundreds or even thousands of times lower pressures are necessary. To achieve these low pressures, a vacuum chamber is needed. The vacuum chamber typically includes windows so that the atoms can be addressed with lasers (e.g. for laser cooling) and so that light emitted by the atoms, or the absorption of light by the atoms, can be detected. The vacuum chamber also requires an atomic source for the atom(s) to be laser cooled. The atomic source is generally heated to produce thermal atoms that can be laser cooled. For ion trapping experiments the vacuum system must also hold the ion trap, with the appropriate electric feedthroughs for the trap. Neutral atom systems very often employ a magneto-optical trap (MOT) as one of the early stages in collecting and cooling atoms. For a MOT, magnetic field coils are typically placed outside of the vacuum chamber to generate magnetic field gradients for the MOT. Lasers The lasers required for cold atom machines are entirely dependent on the choice of atom. Each atom has unique electronic transitions at very distinct wavelengths that must be driven for the atom to be laser cooled. Rubidium, for example, is a very commonly used atom which requires driving two transitions with laser light at 780 nm that are separated by a few GHz. The light for rubidium can be generated from a single laser at 780 nm and an electro-optic modulator. Generally tens of mW (and often hundreds of mW to cool significantly more atoms) are used to cool neutral atoms. Trapped ions on the other hand require microwatts of optical power, as they are generally tightly confined and the laser light can be focused to a small spot size.
The strontium ion, for example requires light at both 422 nm and 1092 nm in order to be Doppler cooled. Because of the small Doppler shifts involved with laser cooling, very narrow lasers, order of a few MHz, are required for laser cooling. Such lasers are generally stabilized to spectroscopy reference cells, optical cavities, or sometimes wavemeters so the laser light can be precisely tuned relative to the atomic transitions. See also Particle beam cooling References Additional sources Laser Cooling HyperPhysics PhysicsWorld series of articles by Chad Orzel: Cold: how physicists learned to manipulate and move particles with laser cooling Colder: how physicists beat the theoretical limit for laser cooling and laid the foundations for a quantum revolution Coldest: how a letter to Einstein and advances in laser-cooling technology led physicists to new quantum states of matter Thermodynamics Atomic physics Cooling technology Laser applications
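Referring back to the Doppler cooling limit quoted in the Methods section above, that figure follows from a standard textbook relation (not stated explicitly in the article text) between the limit temperature and the natural linewidth of the cooling transition:

```latex
T_{\mathrm{D}} = \frac{\hbar \Gamma}{2 k_{\mathrm{B}}}
```

Here Γ is the natural linewidth (in angular frequency) of the transition used for cooling. For an alkali D-line linewidth of roughly Γ ≈ 2π × 6 MHz, this evaluates to about 145 microkelvins, consistent with the figure of around 150 microkelvins given above for rubidium-85.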
Laser cooling
[ "Physics", "Chemistry", "Mathematics" ]
2,983
[ "Dynamical systems", "Quantum mechanics", "Atomic physics", "Thermodynamics", "Atomic, molecular, and optical physics" ]
172,592
https://en.wikipedia.org/wiki/Organic%20reaction
Organic reactions are chemical reactions involving organic compounds. The basic organic chemistry reaction types are addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions, photochemical reactions and redox reactions. In organic synthesis, organic reactions are used in the construction of new organic molecules. The production of many man-made chemicals such as drugs, plastics, food additives and fabrics depends on organic reactions. The oldest organic reactions are combustion of organic fuels and saponification of fats to make soap. Modern organic chemistry starts with the Wöhler synthesis in 1828. In the history of the Nobel Prize in Chemistry, awards have been given for the invention of specific organic reactions such as the Grignard reaction in 1912, the Diels–Alder reaction in 1950, the Wittig reaction in 1979 and olefin metathesis in 2005. Classifications Organic chemistry has a strong tradition of naming a specific reaction after its inventor or inventors, and a long list of so-called named reactions exists, conservatively estimated at 1000. A very old named reaction is the Claisen rearrangement (1912) and a recent named reaction is the Bingel reaction (1993). When the named reaction is difficult to pronounce or very long, as in the Corey–House–Posner–Whitesides reaction, it helps to use the abbreviation, as in the CBS reduction. The number of reactions hinting at the actual process taking place is much smaller, for example the ene reaction or aldol reaction. Another approach to organic reactions is by type of organic reagent, many of them inorganic, required in a specific transformation. The major types are oxidizing agents such as osmium tetroxide, reducing agents such as lithium aluminium hydride, bases such as lithium diisopropylamide and acids such as sulfuric acid. Finally, reactions are also classified by mechanistic class. Commonly these classes are (1) polar, (2) radical, and (3) pericyclic. Polar reactions are characterized by the movement of electron pairs from a well-defined source (a nucleophilic bond or lone pair) to a well-defined sink (an electrophilic center with a low-lying antibonding orbital). Participating atoms undergo changes in charge, both in the formal sense as well as in terms of the actual electron density. The vast majority of organic reactions fall under this category. Radical reactions are characterized by species with unpaired electrons (radicals) and the movement of single electrons. Radical reactions are further divided into chain and nonchain processes. Finally, pericyclic reactions involve the redistribution of chemical bonds along a cyclic transition state. Although electron pairs are formally involved, they move around in a cycle without a true source or sink. These reactions require the continuous overlap of participating orbitals and are governed by orbital symmetry considerations. Of course, some chemical processes may involve steps from two (or even all three) of these categories, so this classification scheme is not necessarily straightforward or clear in all cases. Beyond these classes, transition-metal mediated reactions are often considered to form a fourth category of reactions, although this category encompasses a broad range of elementary organometallic processes, many of which have little in common and are very specific. Fundamentals Factors governing organic reactions are essentially the same as those of any chemical reaction.
Factors specific to organic reactions are those that determine the stability of reactants and products such as conjugation, hyperconjugation and aromaticity and the presence and stability of reactive intermediates such as free radicals, carbocations and carbanions. An organic compound may consist of many isomers. Selectivity in terms of regioselectivity, diastereoselectivity and enantioselectivity is therefore an important criterion for many organic reactions. The stereochemistry of pericyclic reactions is governed by the Woodward–Hoffmann rules and that of many elimination reactions by Zaitsev's rule. Organic reactions are important in the production of pharmaceuticals. In a 2006 review, it was estimated that 20% of chemical conversions involved alkylations on nitrogen and oxygen atoms, another 20% involved placement and removal of protective groups, 11% involved formation of new carbon–carbon bond and 10% involved functional group interconversions. By mechanism There is no limit to the number of possible organic reactions and mechanisms. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens, although this detailed description of steps is not always clear from a list of reactants alone. Organic reactions can be organized into several basic types. Some reactions fit into more than one category. For example, some substitution reactions follow an addition-elimination pathway. This overview isn't intended to include every single organic reaction. Rather, it is intended to cover the basic reactions. In condensation reactions a small molecule, usually water, is split off when two reactants combine in a chemical reaction. The opposite reaction, when water is consumed in a reaction, is called hydrolysis. Many polymerization reactions are derived from organic reactions. They are divided into addition polymerizations and step-growth polymerizations. In general the stepwise progression of reaction mechanisms can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition to intermediates and products. By functional groups Organic reactions can be categorized based on the type of functional group involved in the reaction as a reactant and the functional group that is formed as a result of this reaction. For example, in the Fries rearrangement the reactant is an ester and the reaction product an alcohol. An overview of functional groups with their preparation and reactivity is presented below: Other classification In heterocyclic chemistry, organic reactions are classified by the type of heterocycle formed with respect to ring-size and type of heteroatom. See for instance the chemistry of indoles. Reactions are also categorized by the change in the carbon framework. Examples are ring expansion and ring contraction, homologation reactions, polymerization reactions, insertion reactions, ring-opening reactions and ring-closing reactions. Organic reactions can also be classified by the type of bond to carbon with respect to the element involved. More reactions are found in organosilicon chemistry, organosulfur chemistry, organophosphorus chemistry and organofluorine chemistry. With the introduction of carbon-metal bonds the field crosses over to organometallic chemistry. 
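Reaction classes of this kind can also be written down in machine-readable form and applied to molecules computationally. The following short Python sketch is purely illustrative and is not drawn from the article or its references; it uses the open-source RDKit toolkit to encode an amide-forming condensation as a reaction SMARTS and applies it to acetic acid and ethylamine (both chosen arbitrarily for the example):

from rdkit import Chem
from rdkit.Chem import AllChem

# Condensation of a carboxylic acid with an amine to give an amide (water is split off).
# Atom maps such as [C:1] track which reactant atoms end up in the product.
rxn = AllChem.ReactionFromSmarts("[C:1](=[O:2])O.[N:3]>>[C:1](=[O:2])[N:3]")
acid = Chem.MolFromSmiles("CC(=O)O")   # acetic acid
amine = Chem.MolFromSmiles("NCC")      # ethylamine
products = rxn.RunReactants((acid, amine))
print(Chem.MolToSmiles(products[0][0]))  # the amide product, e.g. CCNC(C)=O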
See also List of organic reactions Other chemical reactions: inorganic reactions, metabolism, organometallic reactions, polymerization reactions. Important publications in organic chemistry References External links Organic reactions @ Synarchive.com Organic reaction flashcards from OSU list of named reactions from UConn organic reactions Study-Organic-Chemistry.com Organic chemistry
Organic reaction
[ "Chemistry" ]
1,408
[ "Organic reactions" ]
172,601
https://en.wikipedia.org/wiki/AmigaOne
AmigaOne is a series of computers intended to run AmigaOS 4 developed by Hyperion Entertainment, as a successor to the Amiga series by Commodore International. Unlike the original Amiga computers which used Motorola 68k processors, the AmigaOne line uses PowerPC processors. Earlier models were produced by Eyetech; in September 2009, Hyperion secured an exclusive licence for the AmigaOne name and subsequently new AmigaOne computers were released by A-Eon Technology and Acube Systems. History AmigaOne by Eyetech (2000–05) Originally in 2000, AmigaOne was the name of a project for new computer hardware to run the Amiga Digital Environment (DE), later plans replaced by AmigaOS 4. Initially it was managed by Eyetech and designed by the German company Escena GmbH. The AmigaOne motherboard was to be available in two models, the AmigaOne-1200 and the AmigaOne-4000 as expansions for the Amiga 1200 and Amiga 4000 computers. This would probably not have been actually possible. This AmigaOne project was cancelled in the design stage in 2001, mostly due to the inability to find or design a suitable northbridge chip. Eyetech, who at this point had invested funds into the project, was forced instead to license the Teron CX board from Mai to form the basis of the new AmigaONE computer range. The first fruit of this partnership with Mai, AmigaOne SE, was announced with a connector for an optionally attached Amiga 1200, in order to use the old custom chips of an Amiga for backwards compatibility. However, no such solution was ever introduced. The main difference between the ATX-format AmigaOne SE and AmigaOne XE was that the SE had a soldered-on 600 MHz PowerPC 750CXe processor, whereas the XE used a CPU board attached to a MegArray connector on the motherboard. While the MegArray connector is physically similar to the Apple Power Mac G4 CPU daughtercard connector, it is not electrically compatible. There were G3 and G4 options with a maximum clock frequency of 800 MHz and 933 MHz. The G4 module originally used a Freescale 7451 processor which was later changed to a Freescale 7455, both without level 3 cache. The G4 CPU runs hotter and requires a better heatsink than that supplied on some machines. Consequently, the G4 was often supplied underclocked to run at 800 MHz. In 2007 Acube offered 1.267 GHz 7457. The Micro-A1 was announced in two configurations, under the Micro-A1 I (Industrial) and Micro-A1 C (Consumer) labels. Only the C configuration was produced. Both AmigaOneG3-XE and AmigaOneG4-XE has four 32-bit PCI-slots (3× 33 MHz, 1× 66 MHz) and one AGP-2x slot. The Micro-A1 has only one 32-bit PCI-slot and an integrated Radeon 7000 via AGP with dedicated 32 MB VRAM. AmigaOne (SE and XE) motherboards had several hardware issues including conflicts between the onboard IDE and Ethernet controllers, problems with USB device detection and initially no support for the on-board AC97 audio. Due to the mistaken belief that the on-board AC97 audio could not be supported, the AC97 codec was removed from later builds of the motherboard. The technical issues preventing AC97 audio support were later resolved. When the AmigaOne boards first became available, AmigaOS 4 was not ready: they were supplied with various Linux distributions. From April 2004 onwards, boards were shipped for developers with a pre-release version of OS4. The Final Update of OS4.0 was released in December 2006, for AmigaOne computers only, with the PowerUP version being released in December 2007. 
AmigaOS 4.1 for AmigaOne was released in September 2008. MAI Logic Inc. went bankrupt, and consequently the supply of Eyetech AmigaOnes dried up. Eyetech Group Ltd retired from the market in 2005, selling their remaining Amiga business to Amiga Kit. AmigaOne by Hyperion Entertainment (2009–present) In September 2009, as part of the resolution of a dispute over ownership of AmigaOS Hyperion was granted (among other provisions of the Settlement Agreement with Amiga, Inc.) an exclusive licence for the AmigaOne (or Amiga One) name. This Settlement Agreement thus created a legal basis for a new generation of AmigaOne computers. In February 2010, a new Belgian company A-Eon Technology CVBA, in co-operation with Hyperion Entertainment, officially announced a new AmigaOne model, the AmigaOne X1000, first presented at the Vintage Computer Fair at Bletchley Park in June 2010. The project was delayed but the new platform was launched in 2012 with AmigaOS 4.1.5. In September 2011, Acube Systems introduced the AmigaOne 500 based on a Sam460ex mainboard. In October 2011, Hyperion Entertainment announced that it was launching an AmigaOne netbook in mid-2012, but it was announced at Amiwest 2013 that the netbook project had been cancelled. Also at Amiwest 2013, A-Eon Technology Ltd, a British computer company, announced three new AmigaOne motherboards, with the project named Cyrus. A-Eon had a list of proposed names which could be voted for and in January 2014 A-Eon Technology announced names for new models as AmigaOne X5000/20, AmigaOne X5000/40 and AmigaOne X3500. The new motherboards were aimed as replacements for the AmigaOne X1000. The AmigaOne X5000/20 was released in October 2016 and - unlike the X1000 - sold via various distributors. ArsTechnica review of the AmigaOne X5000 commended its compatibility with old Amiga applications and games, but criticised the very high price and lack of new software. Lastly, A-Eon Technology Ltd announced at Amiwest 2013 that A-Eon had signed a 1.2 million-dollar investment contract with Ultra Varisys for the ongoing design, development and manufacture of PowerPC hardware for its AmigaOne line of desktop computers. In January 2015, Acube Systems started selling AmigaOne 500 computers based on the Sam460cr motherboard, a cost reduced version of original Sam460ex. Features that were removed included the Silicon Motion SM502 embedded MoC and 1× SATA2 port. In autumn 2015, A-Eon Technology Ltd announced a new motherboard with the project development name Tabor based on a P1022 1.2 GHz SoC. The motherboard design is a microATX form factor with single PCIe slot and SODIMM memory slots. The full system is to be designated as the AmigaOne A1222. The A1222 was released in early 2024. Models and variants Operating systems Linux for PowerPC. AmigaOS versions 4.0, 4.1. MorphOS support for AmigaOne 500 / SAM460 was announced in 2012 and introduced with MorphOS 3.8. Support for X5000 was introduced with MorphOS 3.10. FreeBSD. Other AmigaOS4 compatible models The Sam440 mainboard (complete with AMCC PowerPC 440EP SoC) is an embedded motherboard launched by Acube Systems in September 2007. AmigaOS 4 was released for the Sam440 in October 2008. The Sam460ex mainboard (complete with AMCC 460ex SoC, PowerPC 440 core) is an embedded motherboard launched by Acube Systems in April 2010. AmigaOS 4 was released for the Sam460ex in January 2011. A cost reduced version, the Sam460cr, was released with AmigaOS 4.1 Final Edition on January 8, 2015. 
The Pegasos II mainboard (complete with PPC G3 and G4 CPU) is a MicroATX motherboard launched by Genesi and discontinued in 2006. AmigaOS 4 was released for the Pegasos II in January 2009. See also Amiga models and variants Sam440 Sam460ex Pegasos AmigaOS MorphOS AROS Commodore International Commodore USA References External links Eyetech and Mai Logic - Mai Logic Incorporated And Eyetech Group Limited Partner to Capture New Amiga Territory The Register - Amiga returns with AmigaOne PPC hardware Eyetech - Archived page containing AmigaOne update and information on the AmigaOne partnership between Eyetech, Hyperion Entertainment and Amiga Inc. Amiga Inc - Amiga status announcement Amiga computers PowerPC mainboards AmigaOS
AmigaOne
[ "Technology" ]
1,759
[ "AmigaOS", "Computing platforms" ]
172,640
https://en.wikipedia.org/wiki/Iterator
In computer programming, an iterator is an object that progressively provides access to each item of a collection, in order. A collection may provide multiple iterators via its interface that provide items in different orders, such as forwards and backwards. An iterator is often implemented in terms of the structure underlying a collection implementation and is often tightly coupled to the collection to enable the operational semantics of the iterator. An iterator is behaviorally similar to a database cursor. Iterators date to the CLU programming language in 1974. Pattern An iterator provides access to an element of a collection (element access) and can change its internal state to provide access to the next element (element traversal). It also provides for creation and initialization to a first element and indicates whether all elements have been traversed. In some programming contexts, an iterator provides additional functionality. An iterator allows a consumer to process each element of a collection while isolating the consumer from the internal structure of the collection. The collection can store elements in any manner while the consumer can access them as a sequence. In object-oriented programming, an iterator class is usually designed in tight coordination with the corresponding collection class. Usually, the collection provides the methods for creating iterators. A loop counter is sometimes also referred to as a loop iterator. A loop counter, however, only provides the traversal functionality and not the element access functionality. Generator One way of implementing an iterator is via a restricted form of coroutine, known as a generator. By contrast with a subroutine, a generator coroutine can yield values to its caller multiple times, instead of returning just once. Most iterators are naturally expressible as generators, but because generators preserve their local state between invocations, they're particularly well-suited for complicated, stateful iterators, such as tree traversers. There are subtle differences and distinctions in the use of the terms "generator" and "iterator", which vary between authors and languages. In Python, a generator is an iterator constructor: a function that returns an iterator. An example of a Python generator returning an iterator for the Fibonacci numbers using Python's yield statement follows: def fibonacci(limit): a, b = 0, 1 for _ in range(limit): yield a a, b = b, a + b for number in fibonacci(100): # The generator constructs an iterator print(number) Internal Iterator An internal iterator is a higher order function (often taking anonymous functions) that traverses a collection while applying a function to each element. For example, Python's map function applies a caller-defined function to each element: digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] squared_digits = map(lambda x: x**2, digits) # Iterating over this iterator would result in 0, 1, 4, 9, 16, ..., 81. Implicit iterator Some object-oriented languages such as C#, C++ (later versions), Delphi (later versions), Go, Java (later versions), Lua, Perl, Python, Ruby provide an intrinsic way of iterating through the elements of a collection without an explicit iterator. An iterator object may exist, but is not represented in the source code. An implicit iterator is often manifest in language syntax as foreach. 
In Python, a collection object can be iterated directly: for value in iterable: print(value) In Ruby, iteration requires accessing an iterator property: iterable.each do |value| puts value end This iteration style is sometimes called "internal iteration" because its code fully executes within the context of the iterable object (that controls all aspects of iteration), and the programmer only provides the operation to execute at each step (using an anonymous function). Languages that support list comprehensions or similar constructs may also make use of implicit iterators during the construction of the result list, as in Python: names = [person.name for person in roster if person.male] Sometimes the implicit hidden nature is only partial. The C++ language has a few function templates for implicit iteration, such as for_each(). These functions still require explicit iterator objects as their initial input, but the subsequent iteration does not expose an iterator object to the user. Stream Iterators are a useful abstraction of input streams – they provide a potentially infinite iterable (but not necessarily indexable) object. Several languages, such as Perl and Python, implement streams as iterators. In Python, iterators are objects representing streams of data. Alternative implementations of stream include data-driven languages, such as AWK and sed. Contrast with indexing Instead of using an iterator, many languages allow the use of a subscript operator and a loop counter to access each element. Although indexing may be used with collections, the use of iterators may have advantages such as: Counting loops are not suitable to all data structures, in particular to data structures with no or slow random access, like lists or trees. Iterators can provide a consistent way to iterate on data structures of all kinds, and therefore make the code more readable, reusable, and less sensitive to a change in the data structure. An iterator can enforce additional restrictions on access, such as ensuring that elements cannot be skipped or that a previously visited element cannot be accessed a second time. An iterator may allow the collection object to be modified without invalidating the iterator. For instance, once an iterator has advanced beyond the first element it may be possible to insert additional elements into the beginning of the collection with predictable results. With indexing this is problematic since the index numbers must change. The ability of a collection to be modified while iterating through its elements has become necessary in modern object-oriented programming, where the interrelationships between objects and the effects of operations may not be obvious. By using an iterator one is isolated from these sorts of consequences. This assertion must however be taken with a grain of salt, because more often than not, for efficiency reasons, the iterator implementation is so tightly bound to the collection that it does preclude modification of the underlying collection without invalidating itself. For collections that may move around their data in memory, the only way to not invalidate the iterator is, for the collection, to somehow keep track of all the currently alive iterators and update them on the fly. Since the number of iterators at a given time may be arbitrarily large in comparison to the size of the tied collection, updating them all will drastically impair the complexity guarantee on the collection's operations. 
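Many languages resolve this trade-off in the opposite direction: rather than guaranteeing that iterators stay valid across structural changes, they invalidate them and try to detect misuse at run time. As an illustration only (this example describes CPython's dictionary behaviour and is not taken from the surrounding text), a Python dictionary iterator fails fast once the dictionary is resized during iteration:

d = {"a": 1, "b": 2}
it = iter(d)            # explicit iterator over the keys
print(next(it))         # fine: yields one key
d["c"] = 3              # structural modification of the underlying collection
try:
    print(next(it))     # the iterator detects the change instead of returning stale data
except RuntimeError as err:
    print(err)          # "dictionary changed size during iteration"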
An alternative way to keep the number of updates bounded relative to the collection size would be to use a kind of handle mechanism, that is, a collection of indirect pointers to the collection's elements that must be updated with the collection, and let the iterators point to these handles instead of directly to the data elements. But this approach will negatively impact the iterator performance, since it must follow two pointers to access the actual data element. This is usually not desirable, because many algorithms using the iterators invoke the iterator's data access operation more often than the advance method. It is therefore especially important to have iterators with very efficient data access. All in all, this is always a trade-off between safety (iterators always remain valid) and efficiency. Most of the time, the added safety is not worth the price in efficiency. Using an alternative collection (for example a singly linked list instead of a vector) would be a better choice (globally more efficient) if the stability of the iterators is needed. Classification Categories Iterators can be categorised according to their functionality. Here is a (non-exhaustive) list of iterator categories: Types Different languages or libraries used with these languages define iterator types. Some of them are In different programming languages .NET Iterators in the .NET Framework (i.e. C#) are called "enumerators" and represented by the IEnumerator interface. IEnumerator provides a MoveNext() method, which advances to the next element and indicates whether the end of the collection has been reached; a Current property, to obtain the value of the element currently being pointed at; and an optional Reset() method, to rewind the enumerator back to its initial position. The enumerator initially points to a special value before the first element, so a call to MoveNext() is required to begin iterating. Enumerators are typically obtained by calling the GetEnumerator() method of an object implementing the IEnumerable interface. Container classes typically implement this interface. However, the foreach statement in C# can operate on any object providing such a method, even if it does not implement IEnumerable (duck typing). Both interfaces were expanded into generic versions in .NET 2.0. The following shows a simple use of iterators in C# 2.0: // explicit version IEnumerator<MyType> iter = list.GetEnumerator(); while (iter.MoveNext()) Console.WriteLine(iter.Current); // implicit version foreach (MyType value in list) Console.WriteLine(value); C# 2.0 also supports generators: a method that is declared as returning IEnumerator (or IEnumerable), but uses the "yield return" statement to produce a sequence of elements instead of returning an object instance, will be transformed by the compiler into a new class implementing the appropriate interface. C++ The C++ language makes wide use of iterators in its Standard Library and describes several categories of iterators differing in the repertoire of operations they allow. These include forward iterators, bidirectional iterators, and random access iterators, in order of increasing possibilities. All of the standard container template types provide iterators of one of these categories.
Iterators generalize pointers to elements of an array (which indeed can be used as iterators), and their syntax is designed to resemble that of C pointer arithmetic, where the * and -> operators are used to reference the element to which the iterator points and pointer arithmetic operators like ++ are used to modify iterators in the traversal of a container. Traversal using iterators usually involves a single varying iterator, and two fixed iterators that serve to delimit a range to be traversed. The distance between the limiting iterators, in terms of the number of applications of the operator ++ needed to transform the lower limit into the upper one, equals the number of items in the designated range; the number of distinct iterator values involved is one more than that. By convention, the lower limiting iterator "points to" the first element in the range, while the upper limiting iterator does not point to any element in the range, but rather just beyond the end of the range. For traversal of an entire container, the begin() method provides the lower limit, and end() the upper limit. The latter does not reference any element of the container at all but is a valid iterator value that can be compared against. The following example shows a typical use of an iterator. std::vector<int> items; items.push_back(5); // Append integer value '5' to vector 'items'. items.push_back(2); // Append integer value '2' to vector 'items'. items.push_back(9); // Append integer value '9' to vector 'items'. for (auto it = items.begin(), end = items.end(); it != end; ++it) { // Iterate through 'items'. std::cout << *it; // And print value of 'items' for current index. } // In C++11, the same can be done without using any iterators: for (auto x : items) { std::cout << x; // Print value of each element 'x' of 'items'. } // Both of the for loops print "529". Iterator types are separate from the container types they are used with, though the two are often used in concert. The category of the iterator (and thus the operations defined for it) usually depends on the type of container, with for instance arrays or vectors providing random access iterators, but sets (which use a linked structure as implementation) only providing bidirectional iterators. One same container type can have more than one associated iterator type; for instance the std::vector<T> container type allows traversal either using (raw) pointers to its elements (of type *<T>), or values of a special type std::vector<T>::iterator, and yet another type is provided for "reverse iterators", whose operations are defined in such a way that an algorithm performing a usual (forward) traversal will actually do traversal in reverse order when called with reverse iterators. Most containers also provide a separate const_iterator type, for which operations that would allow changing the values pointed to are intentionally not defined. Simple traversal of a container object or a range of its elements (including modification of those elements unless a const_iterator is used) can be done using iterators alone. But container types may also provide methods like insert or erase that modify the structure of the container itself; these are methods of the container class, but in addition require one or more iterator values to specify the desired operation. 
While it is possible to have multiple iterators pointing into the same container simultaneously, structure-modifying operations may invalidate certain iterator values (the standard specifies for each case whether this may be so); using an invalidated iterator is an error that will lead to undefined behavior, and such errors need not be signaled by the run time system. Implicit iteration is also partially supported by C++ through the use of standard function templates, such as std::for_each(), std::copy() and std::accumulate(). When used, they must be initialized with existing iterators, usually begin and end, that define the range over which iteration occurs. But no explicit iterator object is subsequently exposed as the iteration proceeds. This example shows the use of for_each. ContainerType<ItemType> c; // Any standard container type of ItemType elements. void ProcessItem(const ItemType& i) { // Function that will process each item of the collection. std::cout << i << std::endl; } std::for_each(c.begin(), c.end(), ProcessItem); // A for-each iteration loop. The same can be achieved using std::copy, passing a std::ostream_iterator value as third iterator: std::copy(c.begin(), c.end(), std::ostream_iterator<ItemType>(std::cout, "\n")); Since C++11, lambda function syntax can be used to specify the operation to be applied inline, avoiding the need to define a named function. Here is an example of for-each iteration using a lambda function: ContainerType<ItemType> c; // Any standard container type of ItemType elements. // A for-each iteration loop with a lambda function. std::for_each(c.begin(), c.end(), [](const ItemType& i) { std::cout << i << std::endl; }); Java Introduced in the Java JDK 1.2 release, the java.util.Iterator interface allows the iteration of container classes. Each Iterator provides next() and hasNext() methods, and may optionally support a remove() method. Iterators are created by the corresponding container class, typically by a method named iterator(). The next() method advances the iterator and returns the value pointed to by the iterator. The first element is obtained upon the first call to next(). To determine when all the elements in the container have been visited, the hasNext() test method is used. The following example shows a simple use of iterators: Iterator iter = list.iterator(); // Iterator<MyType> iter = list.iterator(); // in J2SE 5.0 while (iter.hasNext()) { System.out.print(iter.next()); if (iter.hasNext()) System.out.print(", "); } To show that hasNext() can be called repeatedly, we use it to insert commas between the elements but not after the last element. This approach does not properly separate the advance operation from the actual data access. If the data element must be used more than once for each advance, it needs to be stored in a temporary variable. When an advance is needed without data access (i.e. to skip a given data element), the access is nonetheless performed, though the returned value is ignored in this case. For collection types that support it, the remove() method of the iterator removes the most recently visited element from the container while keeping the iterator usable. Adding or removing elements by calling the methods of the container (also from the same thread) makes the iterator unusable. An attempt to get the next element then throws a ConcurrentModificationException. A NoSuchElementException is also thrown if there are no more elements remaining (hasNext() has previously returned false).
Additionally, for lists (java.util.List) there is a ListIterator with a similar API but that also allows forward and backward iteration, provides its current index in the list and allows setting of the list element at its position. The J2SE 5.0 release of Java introduced the Iterable interface to support an enhanced for (foreach) loop for iterating over collections and arrays. Iterable defines the iterator() method that returns an Iterator. Using the enhanced for loop, the preceding example can be rewritten as for (MyType obj : list) { System.out.print(obj); } Some containers also use the older (since 1.0) Enumeration interface. It provides hasMoreElements() and nextElement() methods but has no methods to modify the container. Scala In Scala, iterators have a rich set of methods similar to collections, and can be used directly in for loops. Indeed, both iterators and collections inherit from a common base trait - scala.collection.TraversableOnce. However, because of the rich set of methods available in the Scala collections library, such as map, collect, filter etc., it is often not necessary to deal with iterators directly when programming in Scala. Java iterators and collections can be automatically converted into Scala iterators and collections, respectively, simply by adding the single line import scala.collection.JavaConversions._ to the file. The JavaConversions object provides implicit conversions to do this. Implicit conversions are a feature of Scala: methods that, when visible in the current scope, automatically insert calls to themselves into relevant expressions at the appropriate place to make them typecheck when they otherwise would not. MATLAB MATLAB supports both external and internal implicit iteration using either "native" arrays or cell arrays. In the case of external iteration, where the onus is on the user to advance the traversal and request next elements, one can define a set of elements within an array storage structure and traverse the elements using the for-loop construct. For example, % Define an array of integers myArray = [1,3,5,7,11,13]; for n = myArray % ... do something with n disp(n) % Echo integer to Command Window end traverses an array of integers using the for keyword. In the case of internal iteration, where the user can supply an operation to the iterator to perform over every element of a collection, many built-in operators and MATLAB functions are overloaded to execute over every element of an array and return a corresponding output array implicitly. Furthermore, the arrayfun and cellfun functions can be leveraged for performing custom or user-defined operations over "native" arrays and cell arrays respectively. For example, function simpleFun % Define an array of integers myArray = [1,3,5,7,11,13]; % Perform a custom operation over each element myNewArray = arrayfun(@(a)myCustomFun(a),myArray); % Echo resulting array to Command Window myNewArray function outScalar = myCustomFun(inScalar) % Simply multiply by 2 outScalar = 2*inScalar; defines a primary function simpleFun that implicitly applies custom subfunction myCustomFun to each element of an array using built-in function arrayfun. Alternatively, it may be desirable to abstract the mechanisms of the array storage container from the user by defining a custom object-oriented MATLAB implementation of the Iterator Pattern. Such an implementation supporting external iteration is demonstrated in MATLAB Central File Exchange item Design Pattern: Iterator (Behavioral).
This is written in the new class-definition syntax introduced with MATLAB software version 7.6 (R2008a) and features a one-dimensional cell array realization of the List Abstract Data Type (ADT) as the mechanism for storing a heterogeneous (in data type) set of elements. It provides the functionality for explicit forward List traversal with the hasNext(), next() and reset() methods for use in a while-loop. PHP PHP's foreach loop was introduced in version 4.0 and made compatible with objects as values in 4.0 Beta 4. However, support for iterators was added in PHP 5 through the introduction of the internal Traversable interface. The two main interfaces for implementation in PHP scripts that enable objects to be iterated via the foreach loop are Iterator and IteratorAggregate. The latter does not require the implementing class to declare all required methods, instead it implements an accessor method (getIterator) that returns an instance of Traversable. The Standard PHP Library provides several classes to work with special iterators. PHP also supports Generators since 5.5. The simplest implementation is by wrapping an array, this can be useful for type hinting and information hiding. namespace Wikipedia\Iterator; final class ArrayIterator extends \Iterator { private array $array; public function __construct(array $array) { $this->array = $array; } public function rewind(): void { echo 'rewinding' , PHP_EOL; reset($this->array); } public function current() { $value = current($this->array); echo "current: {$value}", PHP_EOL; return $value; } public function key() { $key = key($this->array); echo "key: {$key}", PHP_EOL; return $key; } public function next() { $value = next($this->array); echo "next: {$value}", PHP_EOL; return $value; } public function valid(): bool { $valid = $this->current() !== false; echo 'valid: ', ($valid ? 'true' : 'false'), PHP_EOL; return $valid; } } All methods of the example class are used during the execution of a complete foreach loop (foreach ($iterator as $key => $current) {}). The iterator's methods are executed in the following order: $iterator->rewind() ensures that the internal structure starts from the beginning. $iterator->valid() returns true in this example. $iterator->current() returned value is stored in $value. $iterator->key() returned value is stored in $key. $iterator->next() advances to the next element in the internal structure. $iterator->valid() returns false and the loop is aborted. The next example illustrates a PHP class that implements the Traversable interface, which could be wrapped in an IteratorIterator class to act upon the data before it is returned to the foreach loop. The usage together with the MYSQLI_USE_RESULT constant allows PHP scripts to iterate result sets with billions of rows with very little memory usage. These features are not exclusive to PHP nor to its MySQL class implementations (e.g. the PDOStatement class implements the Traversable interface as well). mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT); $mysqli = new \mysqli('host.example.com', 'username', 'password', 'database_name'); // The \mysqli_result class that is returned by the method call implements the internal Traversable interface. foreach ($mysqli->query('SELECT `a`, `b`, `c` FROM `table`', MYSQLI_USE_RESULT) as $row) { // Act on the returned row, which is an associative array. 
} Python Iterators in Python are a fundamental part of the language and in many cases go unseen as they are implicitly used in the for (foreach) statement, in list comprehensions, and in generator expressions. All of Python's standard built-in collection types support iteration, as well as many classes that are part of the standard library. The following example shows typical implicit iteration over a sequence: for value in sequence: print(value) Python dictionaries (a form of associative array) can also be directly iterated over, when the dictionary keys are returned; or the items() method of a dictionary can be iterated over where it yields corresponding key,value pairs as a tuple: for key in dictionary: value = dictionary[key] print(key, value) for key, value in dictionary.items(): print(key, value) Iterators however can be used and defined explicitly. For any iterable sequence type or class, the built-in function iter() is used to create an iterator object. The iterator object can then be iterated with the next() function, which uses the __next__() method internally, which returns the next element in the container. (The previous statement applies to Python 3.x. In Python 2.x, the next() method is equivalent.) A StopIteration exception will be raised when no more elements are left. The following example shows an equivalent iteration over a sequence using explicit iterators: it = iter(sequence) while True: try: value = it.next() # in Python 2.x value = next(it) # in Python 3.x except StopIteration: break print(value) Any user-defined class can support standard iteration (either implicit or explicit) by defining an __iter__() method that returns an iterator object. The iterator object then needs to define a __next__() method that returns the next element. Python's generators implement this iteration protocol. Raku Iterators in Raku are a fundamental part of the language, although usually users do not have to care about iterators. Their usage is hidden behind iteration APIs such as the for statement, map, grep, list indexing with .[$idx], etc. The following example shows typical implicit iteration over a collection of values: my @values = 1, 2, 3; for @values -> $value { say $value } # OUTPUT: # 1 # 2 # 3 Raku hashes can also be directly iterated over; this yields key-value Pair objects. The kv method can be invoked on the hash to iterate over the key and values; the keys method to iterate over the hash's keys; and the values method to iterate over the hash's values. my %word-to-number = 'one' => 1, 'two' => 2, 'three' => 3; for %word-to-number -> $pair { say $pair; } # OUTPUT: # three => 3 # one => 1 # two => 2 for %word-to-number.kv -> $key, $value { say "$key: $value" } # OUTPUT: # three: 3 # one: 1 # two: 2 for %word-to-number.keys -> $key { say "$key => " ~ %word-to-number{$key}; } # OUTPUT: # three => 3 # one => 1 # two => 2 Iterators however can be used and defined explicitly. For any iterable type, there are several methods that control different aspects of the iteration process. For example, the iterator method is supposed to return an Iterator object, and the pull-one method is supposed to produce and return the next value if possible, or return the sentinel value IterationEnd if no more values could be produced. 
The following example shows an equivalent iteration over a collection using explicit iterators: my @values = 1, 2, 3; my $it := @values.iterator; # grab iterator for @values loop { my $value := $it.pull-one; # grab iteration's next value last if $value =:= IterationEnd; # stop if we reached iteration's end say $value; } # OUTPUT: # 1 # 2 # 3 All iterable types in Raku compose the Iterable role, Iterator role, or both. The Iterable is quite simple and only requires the iterator to be implemented by the composing class. The Iterator is more complex and provides a series of methods such as pull-one, which allows for a finer operation of iteration in several contexts such as adding or eliminating items, or skipping over them to access other items. Thus, any user-defined class can support standard iteration by composing these roles and implementing the iterator and/or pull-one methods. The DNA class represents a DNA strand and implements the iterator by composing the Iterable role. The DNA strand is split into a group of trinucleotides when iterated over: subset Strand of Str where { .match(/^^ <[ACGT]>+ $$/) and .chars %% 3 }; class DNA does Iterable { has $.chain; method new(Strand:D $chain) { self.bless: :$chain } method iterator(DNA:D:){ $.chain.comb.rotor(3).iterator } }; for DNA.new('GATTACATA') { .say } # OUTPUT: # (G A T) # (T A C) # (A T A) say DNA.new('GATTACATA').map(*.join).join('-'); # OUTPUT: # GAT-TAC-ATA The Repeater class composes both the Iterable and Iterator roles: class Repeater does Iterable does Iterator { has Any $.item is required; has Int $.times is required; has Int $!count = 1; multi method new($item, $times) { self.bless: :$item, :$times; } method iterator { self } method pull-one(--> Mu){ if $!count <= $!times { $!count += 1; return $!item } else { return IterationEnd } } } for Repeater.new("Hello", 3) { .say } # OUTPUT: # Hello # Hello # Hello Ruby Ruby implements iterators quite differently; all iterations are done by means of passing callback closures to container methods - this way Ruby not only implements basic iteration but also several patterns of iteration like function mapping, filters and reducing. Ruby also supports an alternative syntax for the basic iterating method each, the following three examples are equivalent: (0...42).each do |n| puts n end ...and... for n in 0...42 puts n end or even shorter 42.times do |n| puts n end Ruby can also iterate over fixed lists by using Enumerators and either calling their #next method or doing a for each on them, as above. Rust Rust makes use of external iterators throughout the standard library, including in its for loop, which implicitly calls the next() method of an iterator until it is consumed. The most basic for loop for example iterates over a Range type: for i in 0..42 { println!("{}", i); } // Prints the numbers 0 to 41 Specifically, the for loop will call a value's into_iter() method, which returns an iterator that in turn yields the elements to the loop. The for loop (or indeed, any method that consumes the iterator), proceeds until the next() method returns a None value (iterations yielding elements return a Some(T) value, where T is the element type). All collections provided by the standard library implement the IntoIterator trait (meaning they define the into_iter() method). Iterators themselves implement the Iterator trait, which requires defining the next() method. Furthermore, any type implementing Iterator is automatically provided an implementation for IntoIterator that returns itself. 
Iterators support various adapters (map(), filter(), skip(), take(), etc.) as methods provided automatically by the Iterator trait. Users can create custom iterators by creating a type implementing the Iterator trait. Custom collections can implement the IntoIterator trait and return an associated iterator type for their elements, enabling their use directly in for loops. Below, the Fibonacci type implements a custom, unbounded iterator: struct Fibonacci(u64, u64); impl Fibonacci { pub fn new() -> Self { Self(0, 1) } } impl Iterator for Fibonacci { type Item = u64; fn next(&mut self) -> Option<Self::Item> { let next = self.0; self.0 = self.1; self.1 = self.0 + next; Some(next) } } let fib = Fibonacci::new(); for n in fib.skip(1).step_by(2).take(4) { println!("{n}"); } // Prints 1, 2, 5, and 13 See also References External links Java's Iterator, Iterable and ListIterator Explained .NET interface Article "Understanding and Using Iterators" by Joshua Gatcomb Article "A Technique for Generic Iteration and Its Optimization" (217 KB) by Stephen M. Watt Iterators Boost C++ Iterator Library Java interface PHP: Object Iteration STL Iterators What are iterators? - Reference description Articles with example C Sharp code Articles with example C++ code Articles with example Java code Articles with example PHP code Articles with example Python (programming language) code Articles with example Ruby code Iteration in programming Object (computer science) Abstract data types
Iterator
[ "Mathematics" ]
7,804
[ "Type theory", "Mathematical structures", "Abstract data types" ]
172,644
https://en.wikipedia.org/wiki/Handedness
In human biology, handedness is an individual's preferential use of one hand, known as the dominant hand, due to it being stronger, faster or more dextrous. The other hand, comparatively often the weaker, less dextrous or simply less subjectively preferred, is called the non-dominant hand. In a study from 1975 on 7,688 children in US grades 1–6, left handers comprised 9.6% of the sample, with 10.5% of male children and 8.7% of female children being left-handed. Overall, around 90% of people are right-handed. Handedness is often defined by one's writing hand, as it is fairly common for people to prefer to do a particular task with a particular hand. There are people with true ambidexterity (equal preference of either hand), but it is rare—most people prefer using one hand for most purposes. However, in some cultures, the use of the left hand can be considered disrespectful. Most of the current research suggests that left-handedness has an epigenetic marker—a combination of genetics, biology and the environment. Because the vast majority of the population is right-handed, many devices are designed for use by right-handed people, making their use by left-handed people more difficult. In many countries, left-handed people are or were required to write with their right hands. However, left-handed people have an advantage in sports that involve aiming at a target in an area of an opponent's control, as their opponents are more accustomed to the right-handed majority. As a result, they are over-represented in baseball, tennis, fencing, cricket, boxing, and mixed martial arts. Types Right-handedness is the most common type. Right-handed people are more skillful with their right hands. Studies suggest that approximately 90% of people are right-handed. Left-handedness is less common. Studies suggest that approximately 10% of people are left-handed. Ambidexterity refers to having equal ability in both hands. Those who learn it still tend to favor their originally dominant hand. This is uncommon, with about a 1% prevalence. Mixed-handedness or cross-dominance is the change of hand preference between different tasks. This is about as widespread as left-handedness. This is highly associated with the person's childhood brain development. Measurement Handedness may be measured behaviourally (performance measures) or through questionnaires (preference measures). The Edinburgh Handedness Inventory has been used since 1971 but contains some dated questions and is hard to score. Revisions have been published by Veale and by Williams. The longer Waterloo Handedness Questionnaire is not widely accessible. More recently, the Flinders Handedness Survey (FLANDERS) has been developed. Evolution Some non-human primates have a preferred hand for tasks, but they do not display a strong right-biased preference like modern humans, with individuals equally split between right-handed and left-handed preferences. When exactly a right handed preference developed in the human lineage is unknown, though it is known through various means that Neanderthals had a right-handedness bias like modern humans. Attempts to determine handedness of early humans by analysing the morphology of lithic artefacts have been found to be unreliable. Causes There are several theories of how handedness develops. Genetic factors Handedness displays a complex inheritance pattern. For example, if both parents of a child are left-handed, there is a 26% chance of that child being left-handed. 
A large study of twins from 25,732 families by Medland et al. (2006) indicates that the heritability of handedness is roughly 24%. Two theoretical single-gene models have been proposed to explain the patterns of inheritance of handedness, by Marian Annett of the University of Leicester, and by Chris McManus of UCL. However, growing evidence from linkage and genome-wide association studies suggests that genetic variance in handedness cannot be explained by a single genetic locus. From these studies, McManus et al. now conclude that handedness is polygenic and estimate that at least 40 loci contribute to the trait. Brandler et al. performed a genome-wide association study for a measure of relative hand skill and found that genes involved in the determination of left-right asymmetry in the body play a key role in handedness. Brandler and Paracchini suggest the same mechanisms that determine left-right asymmetry in the body (e.g. nodal signaling and ciliogenesis) also play a role in the development of brain asymmetry (handedness being a reflection of brain asymmetry for motor function). In 2019, Wiberg et al. performed a genome-wide association study and found that handedness was significantly associated with four loci, three of them in genes encoding proteins involved in brain development. Prenatal hormone exposure Four studies have indicated that individuals who have had in-utero exposure to diethylstilbestrol (a synthetic estrogen-based medication used between 1940 and 1971) were more likely to be left-handed over the clinical control group. Diethylstilbestrol animal studies "suggest that estrogen affects the developing brain, including the part that governs sexual behavior and right and left dominance". Ultrasound Another theory is that ultrasound may sometimes affect the brains of unborn children, causing higher rates of left-handedness in children whose mothers receive ultrasound during pregnancy. Research suggests there may be a weak association between ultrasound screening (sonography used to check the healthy development of the fetus and mother) and left-handedness. Epigenetic markers Twin studies indicate that genetic factors explain 25% of the variance in handedness, and environmental factors the remaining 75%. While the molecular basis of handedness epigenetics is largely unclear, Ocklenburg et al. (2017) found that asymmetric methylation of CpG sites plays a key role for gene expression asymmetries related to handedness. Language dominance One common handedness theory is the brain hemisphere division of labor. In most people, the left side of the brain controls speaking. The theory suggests it is more efficient for the brain to divide major tasks between the hemispheres—thus most people may use the non-speaking (right) hemisphere for perception and gross motor skills. As speech is a very complex motor control task, the specialised fine motor areas controlling speech are most efficiently used to also control fine motor movement in the dominant hand. As the right hand is controlled by the left hemisphere (and the left hand is controlled by the right hemisphere) most people are, therefore right-handed. The theory depends on left-handed people having a reversed organisation. However, the majority of left-handers have been found to have left-hemisphere language dominance—just like right-handers. Only around 30% of left-handers are not left-hemisphere dominant for language. 
Some of those have reversed brain organisation, where the verbal processing takes place in the right-hemisphere and visuospatial processing is dominant to the left hemisphere. Others have more ambiguous bilateral organisation, where both hemispheres do parts of typically lateralised functions. When tasks designed to investigate lateralisation (preference for handedness) are averaged across a group of left-handers, the overall effect is that left-handers show the same pattern of data as right-handers, but with a reduced asymmetry. This finding is likely due to the small proportion of left-handers who have atypical brain organisation. The majority of the evidence comes from literature assessing oral language production and comprehension. When it comes to writing, findings from recent studies were inconclusive for a difference in lateralization for writing between left-handers and right-handers. Developmental timeline Researchers studied fetuses in utero and determined that handedness in the womb was a very accurate predictor of handedness after birth. In a 2013 study, 39% of infants (6 to 14 months) and 97% of toddlers (18 to 24 months) demonstrated a hand preference. Infants have been observed to fluctuate heavily when choosing a hand to lead in grasping and object manipulation tasks, especially in one- versus two-handed grasping. Between 36 and 48 months, there is a significant decline in variability between handedness in one-handed grasping; it can be seen earlier in two-handed manipulation. Children of 18–36 months showed more hand preference when performing bi-manipulation tasks than with simple grasping. The decrease in handedness variability in children of 36–48 months may be attributable to preschool or kindergarten attendance due to increased single-hand activities such as writing and coloring. Scharoun and Bryden noted that right-handed preference increases with age up to the teenage years. Correlation with other factors The modern turn in handedness research has been towards emphasizing degree rather than direction of handedness as a critical variable. Intelligence In his book Right-Hand, Left-Hand, Chris McManus of University College London argues that the proportion of left-handers is increasing, and that an above-average quota of high achievers have been left-handed. He says that left-handers' brains are structured in a way that increases their range of abilities, and that the genes that determine left-handedness also govern development of the brain's language centers. Writing in Scientific American, he states: Studies in the U.K., U.S. and Australia have revealed that left-handed people differ from right-handers by only one IQ point, which is not noteworthy ... Left-handers' brains are structured differently from right-handers' in ways that can allow them to process language, spatial relations and emotions in more diverse and potentially creative ways. Also, a slightly larger number of left-handers than right-handers are especially gifted in music and math. A study of musicians in professional orchestras found a significantly greater proportion of talented left-handers, even among those who played instruments that seem designed for right-handers, such as violins. Similarly, studies of adolescents who took tests to assess mathematical giftedness found many more left-handers in the population. 
Left-handers are overrepresented among those with lower cognitive skills and mental impairments, with those with intellectual disability being roughly twice as likely to be left-handed, as well as generally lower cognitive and non-cognitive abilities amongst left-handed children. Left-handers are nevertheless also overrepresented in high IQ societies, such as Mensa. A 2005 study found that "approximately 20% of the members of Mensa are lefthanded, double the proportion in most general populations". Ghayas & Adil (2007) found that left-handers were significantly more likely to perform better on intelligence tests than right-handers and that right-handers also took more time to complete the tests. In a systematic review and meta-analysis, Ntolka & Papadatou-Pastou (2018) found that right-handers had higher IQ scores, but that difference was negligible (about 1.5 points). The prevalence of difficulties in left-right discrimination was investigated in a cohort of 2,720 adult members of Mensa and Intertel by Storfer. According to the study, 7.2% of the men and 18.8% of the women evaluated their left-right directional sense as poor or below average; moreover participants who were relatively ambidextrous experienced problems more frequently than did those who were more strongly left- or right-handed. The study also revealed an effect of age, with younger participants reporting more problems. Early childhood intelligence Nelson, Campbell, and Michel studied infants and whether developing handedness during infancy correlated with language abilities in toddlers. In the article they assessed 38 infants and followed them through to 12 months and then again once they became toddlers from 18 to 24 months. They discovered that when a child developed a consistent use of their right or left hand during infancy (such as using the right hand to put the pacifier back in, or grasping random objects with the left hand), they were more likely to have superior language skills as a toddler. Children who became lateral later than infancy (i.e., when they were toddlers) showed normal development of language and had typical language scores. The researchers used Bayley scales of infant and toddler development to assess the subjects. Music In two studies, Diana Deutsch found that left-handers, particularly those with mixed-hand preference, performed significantly better than right-handers in musical memory tasks. There are also handedness differences in perception of musical patterns. Left-handers as a group differ from right-handers, and are more heterogeneous than right-handers, in perception of certain stereo illusions, such as the octave illusion, the scale illusion, and the glissando illusion. Health Studies have found a positive correlation between left-handedness and several specific physical and mental disorders and health problems, including: Lower birth weight and complications at birth are positively correlated with left-handedness. A variety of neuropsychiatric and developmental disorders like autism spectrum, bipolar disorder, anxiety disorders, schizophrenia, and alcoholism have been associated with left- and mixed-handedness. A 2012 study showed that nearly 40% of children with cerebral palsy were left-handed, while another study demonstrated that left-handedness was associated with a 62% increased risk of Parkinson's disease in women, but not in men. Another study suggests that the risk of developing multiple sclerosis increases for left-handed women, but the effect is unknown for men at this point. 
Left-handed women may have a higher risk of breast cancer than right-handed women and the effect is greater in post-menopausal women. At least one study maintains that left-handers are more likely to suffer from heart disease, and are more likely to have reduced longevity from cardiovascular causes. Left-handers may be more likely to suffer bone fractures. Left-handers have a lower prevalence of arthritis and ulcer. One systematic review concluded: "Left-handers showed no systematic tendency to suffer from disorders of the immune system". As handedness is a highly heritable trait associated with various medical conditions, and because many of these conditions could have presented a Darwinian fitness challenge in ancestral populations, this indicates left-handedness may have previously been rarer than it currently is, due to natural selection. However, on average, left-handers have been found to have an advantage in fighting and competitive, interactive sports, which could have increased their reproductive success in ancestral populations. Income In 2006, researchers from Lafayette College and Johns Hopkins University concluded that there was no statistically significant correlation between handedness and earnings for the general population, but among college-educated people, left-handers earned 10 to 15% more than their right-handed counterparts. In a 2014 study published by the National Bureau of Economic Research, Harvard economist Joshua Goodman finds that left-handed people earn 10 to 12 percent less over the course of their lives than right-handed people. Goodman attributes this disparity to higher rates of emotional and behavioral problems in left-handed people. Sports Interactive sports such as table tennis, badminton and cricket have an overrepresentation of left-handedness, while non-interactive sports such as swimming show no overrepresentation. Smaller physical distance between participants increases the overrepresentation. In fencing, about half the participants are left-handed. In tennis, 40% of the seeded players are left-handed. The term southpaw is sometimes used to refer to a left-handed individual, especially in baseball and boxing. Some studies suggest that right handed male athletes tend to be statistically taller and heavier than left handed ones. Other, sports-specific factors may increase or decrease the advantage left-handers usually hold in one-on-one situations: In cricket, the overall advantage of a bowler's left-handedness exceeds that resulting from experience alone: even disregarding the experience factor (i.e., even for a batter whose experience against left-handed bowlers equals their experience against right-handed bowlers), a left-handed bowler challenges the average (i.e., right-handed) batter more than a right-handed bowler does, because the angle of a bowler's delivery to an opposite-handed batter is much more penetrating than that of a bowler to a same-handed batter (see Wasim Akram). In baseball, a right-handed pitcher's curve ball will break away from a right-handed batter and towards a left-handed batter (batting left or right does not indicate left or right handedness). While studies of handedness show that only 10% of the general population is left-handed, the proportion of left-handed MLB players is closer to 39% of hitters and 28% of pitchers, according to 2012 data. Historical batting averages show that left-handed batters have a slight advantage over right-handed batters when facing right-handed pitchers. 
Because there are fewer left-handed pitchers than right-handed pitchers, left-handed batters have more opportunities to face right-handed pitchers than their right-handed counterparts have against left-handed pitchers. Fifteen of the top twenty career batting average leaders in Major League Baseball history have been posted by left-handed batters. Left-handed batters have a slightly shorter run from the batter's box to first base than right-handers. This gives left-handers a slight advantage in beating throws to first base on infield ground balls. Perhaps more important, the follow-through of a left-handed swing provides momentum in the direction of first base, while the right-handed batter must overcome the swing momentum towards third base before beginning his run. Because a left-handed pitcher faces first base when he is in position to throw to the batter, whereas a right-handed pitcher has his back to first base, a left-handed pitcher has an advantage when attempting to pick off baserunners at first base. Defensively in baseball, left-handedness is considered an advantage for first basemen because they are better suited to fielding balls hit in the gap between first and second base, and because they do not have to pivot their body around before throwing the ball to another infielder. For the same reason, the other infield positions are seen as being advantageous to right-handed throwers. Historically, there have been few left-handed catchers because of the perceived disadvantage a left-handed catcher would have in making the throw to third base, especially with a right-handed hitter at the plate. A left-handed catcher would have a potentially more dangerous time tagging out a baserunner trying to score. With the ball in the glove on the right hand, a left-handed catcher would have to turn his body to the left to tag a runner. In doing so, he can lose the opportunity to brace himself for an impending collision. On the other hand, the Encyclopedia of Baseball Catchers states: In four-wall handball, the typical strategy is to play along the left wall, forcing the opponent to use their left hand to counter the attack and thus playing into the strength of a left-handed competitor. In handball, left-handed players have an advantage when attacking on the right side of the field, since they get a better angle and defenders may be less used to facing them. Since few people are left-handed, there is a demand for such players. In water polo, a left-handed player at the centre forward position has an advantage when turning to shoot on net, since rotating in the direction opposite to that expected by the centre of the opposition defence gains an improved position to score. Left-handed drivers are usually on the right side of the field, because they can get better angles to pass the ball or shoot for goal. Ice hockey typically uses a strategy in which a defence pairing includes one left-handed and one right-handed defender. A disproportionately large number of ice hockey players of all positions, 62 percent, shoot left, though this does not necessarily indicate left-handedness. In American football, the handedness of a quarterback affects blocking patterns on the offensive line. Tight ends, when only one is used, typically line up on the same side as the throwing hand of the quarterback, while the offensive tackle on the opposite hand, which protects the quarterback's "blind side", is typically the most valued member of the offensive line. Receivers also have to adapt to the opposite spin.
While uncommon, there have been several notable left-handed quarterbacks. In bowling, the oil pattern used on the bowling lane breaks down faster the more times a ball is rolled down the lane. Bowlers must continually adjust their shots to compensate for the ball's change in rotation as the game or series is played and the oil is altered from its original pattern. A left-handed bowler competes on the opposite side of the lane from the right-handed bowler and therefore deals with less breakdown of the original oil placement. This means left-handed bowlers have to adjust their shot less frequently than right-handed bowlers in team events or qualifying rounds where there are possibly 4-10 people per set of two lanes. This can allow them to stay more consistent. However, this advantage is not present in bracket rounds and tournament finals where matches are 1v1 on a pair of lanes. Sex According to a meta-analysis of 144 studies, totaling 1,787,629 participants, the best estimate for the male to female odds ratio was 1.23, indicating that men are 23% more likely to be left-handed. For example, if the incidence of female left-handedness was 10%, then the incidence of male left-handedness would be approximately 12% (a 10% incidence among women corresponds to odds of 0.10/0.90 ≈ 0.11; applying the women-to-men odds ratio of 1.23 gives male odds of about 0.14, which convert back to an incidence of roughly 12%; see the worked conversion below). Sexuality and gender identity Some studies examining the relationship between handedness and sexual orientation have reported that a disproportionate minority of homosexual people exhibit left-handedness, though findings are mixed. A 2001 study also found that people assigned male at birth whose gender identity did not align with their assigned sex were more than twice as likely to be left-handed as a clinical control group (19.5% vs. 8.3%, respectively). Paraphilias (atypical sexual interests) have also been linked to higher rates of left-handedness. A 2008 study analyzing the sexual fantasies of 200 males found "elevated paraphilic interests were correlated with elevated non-right handedness". Greater rates of left-handedness have also been documented among pedophiles. A 2014 study attempting to analyze the biological markers of asexuality asserts that non-sexual men and women were 2.4 and 2.5 times, respectively, more likely to be left-handed than their heterosexual counterparts. Mortality rates in combat A study at Durham University—which examined mortality data for cricketers whose handedness was a matter of public record—found that left-handed men were almost twice as likely to die in war as their right-handed contemporaries. The study theorised that this was because weapons and other equipment were designed for the right-handed. "I can sympathise with all those left-handed cricketers who have gone to an early grave trying desperately to shoot straight with a right-handed Lee Enfield .303", wrote a journalist reviewing the study in the cricket press. The findings echo those of previous American studies, which found that left-handed US sailors were 34% more likely to have a serious accident than their right-handed counterparts. Episodic memory A high level of handedness (whether strongly favoring right or left) is associated with poorer episodic memory, and with poorer communication between brain hemispheres, which may give poorer emotional processing, although bilateral stimulation may reduce such effects.
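The odds-ratio figure in the Sex section above is easily misread as a simple multiplier on incidence, so the conversion deserves spelling out. The following minimal Python sketch is an illustrative aid only (it uses just the 10% incidence and the 1.23 odds ratio quoted above, not data from the cited meta-analysis) and shows the exact odds-to-incidence conversion:

def incidence_from_odds_ratio(p_ref, odds_ratio):
    # Convert a reference-group incidence and an odds ratio into the
    # comparison group's incidence (exact, rather than the risk-ratio shortcut).
    odds_ref = p_ref / (1 - p_ref)       # 0.10 -> 0.111...
    odds_cmp = odds_ref * odds_ratio     # apply the odds ratio
    return odds_cmp / (1 + odds_cmp)     # convert the odds back to a proportion

# Figures quoted above: 10% incidence among women, men:women odds ratio of 1.23
print(round(incidence_from_odds_ratio(0.10, 1.23), 3))  # 0.12, i.e. about 12%

Running the sketch prints 0.12, in line with the approximately 12% male incidence stated above.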
Corpus callosum A high level of handedness is associated with a smaller corpus callosum, whereas low handedness is associated with a larger one. Divergent thinking Left-handedness is associated with better divergent thinking. Products for left-handed use Many tools and procedures are designed to facilitate use by right-handed people, often without regard for the difficulties they pose for the left-handed. John W. Santrock has written, "For centuries, left-handers have suffered unfair discrimination in a world designed for right-handers." Many products for left-handed use are made by specialist producers, although they are generally not available from mainstream suppliers. Items as simple as a knife ground for use with the right hand are less convenient for left-handers. There is a multitude of examples: kitchen tools such as knives, corkscrews and scissors, garden tools, and so on. While not requiring a purpose-designed product, there are more appropriate ways for left-handers to tie shoelaces. There are companies that supply products designed specifically for left-handed use. One such is Anything Left-Handed, which in 1967 opened a shop in Soho, London; the shop closed in 2006, but the company continues to supply left-handed products worldwide by mail order. Writing from left to right, as in many languages, with the left hand in particular covers and tends to smear (depending upon ink drying) what was just written. Left-handed writers have developed various ways of holding a pen for best results. For using a fountain pen, preferred by many left-handers, nibs ground to optimise left-handed use (pushing rather than pulling across the paper) without scratching are available. Bias against left-handers McManus noted that, as the Industrial Revolution spread across Western Europe and the United States in the 19th century, workers needed to operate complex machines that were designed with right-handers in mind. This would have made left-handers more visible and at the same time appear less capable and more clumsy. Writing left-handed with a dip pen, in particular, was prone to blots and smearing. Negative connotations and discrimination Moreover, apart from inconvenience, left-handed people have historically been considered unlucky or even malicious for their difference by the right-handed majority. In many languages, including English, the word for the direction "right" also means "correct" or "proper". Throughout history, being left-handed was considered negative, or evil. The Latin adjective sinister means "left" as well as "unlucky", and this double meaning survives in European derivatives of Latin, including the English words sinister (meaning both 'evil' and 'on the bearer's left on a coat of arms') and ambisinister, meaning 'awkward or clumsy with both or either hand'. There are many negative connotations associated with the phrase left-handed: clumsy, awkward, unlucky, insincere, sinister, malicious, and so on. A "left-handed compliment" is one that has two meanings, one of which is unflattering to the recipient. In French, gauche means both "left" and "awkward" or "clumsy", while droit (cognate to English direct and related to adroit) means both "right" and "straight", as well as "law" and the legal sense of "right". The name Dexter derives from the Latin for "right", as does the word dexterity, meaning manual skill. As these are all very old words, they would tend to support theories indicating that the predominance of right-handedness is an extremely old phenomenon. Black magic is sometimes referred to as the "left-hand path".
Discrimination in education Before the development of fountain pens and other writing instruments, children were taught to write with a dip pen. While a right-hander could smoothly drag the pen across paper from left to right, a dip pen could not easily be pushed across by the left hand without digging into the paper and making blots and stains. Even with more modern pens, writing from left to right, as in many languages, with the left hand covers and can smear what was just written when moving across the line. Into the 20th and even the 21st century, left-handed children in Uganda were beaten by schoolteachers or parents for writing with their left hand, or had their left hands tied behind their backs to force them to write with their right hand. As a child, the future British king George VI (1895–1952) was naturally left-handed. He was forced to write with his right hand, as was common practice at the time. He was not expected to become king, so that was not a factor. Until very recently in Taiwan, left-handed people were forced to switch to being right-handed, or at least switch to writing with the right hand. Due to the importance of stroke order, developed for the comfortable use of right-handed people, it is considered more difficult to write legible Chinese characters with the left hand than it is to write Latin letters, though difficulty is subjective and depends on the writer. Because moving the hand away from its own side of the body towards the other side can cause smudging if the outward side of the hand is allowed to drag across the writing, writing in the Latin alphabet may be less convenient with the left hand than the right under certain circumstances. Conversely, right-to-left alphabets, such as the Arabic and Hebrew, are generally considered easier to write with the left hand. Depending on the position and inclination of the writing paper, and the writing method, the left-handed writer can write as neatly and efficiently or as messily and slowly as right-handed writers. Usually the left-handed child needs to be taught how to write correctly with the left hand, since discovering a comfortable left-handed writing method on one's own may not be straightforward. In the Soviet school system, all left-handed children were forced to write with their right hand. International Left-Handers Day International Left-Handers Day is held annually on August 13. It was founded by the Left-Handers Club in 1992, with the club itself having been founded in 1990. International Left-Handers Day is, according to the club, "an annual event when left-handers everywhere can celebrate their sinistrality (left-handedness) and increase public awareness of the advantages and disadvantages of being left-handed." It celebrates the uniqueness and differences of left-handers, who make up seven to ten percent of the world's population. Thousands of left-handed people in today's society have to adapt to using right-handed tools and objects. Again according to the club, "in the U.K. alone there were over 20 regional events to mark the day in 2001 – including left-v-right sports matches, a left-handed tea party, pubs using left-handed corkscrews where patrons drank and played pub games with the left hand only, and nationwide 'Lefty Zones' where left-handers' creativity, adaptability and sporting prowess were celebrated, whilst right-handers were encouraged to try out everyday left-handed objects to see just how awkward it can feel using the wrong equipment."
In other animals Kangaroos and other macropod marsupials show a left-hand preference for everyday tasks in the wild. 'True' handedness is unexpected in marsupials, however, because, unlike placental mammals, they lack a corpus callosum. Left-handedness was particularly apparent in the red kangaroo (Macropus rufus) and the eastern gray kangaroo (Macropus giganteus). Red-necked (Bennett's) wallabies (Macropus rufogriseus) preferentially use their left hand for behaviours that involve fine manipulation, but the right for behaviours that require more physical strength. There was less evidence for handedness in arboreal species. Studies of dogs, horses, and domestic cats have shown that females of those species tend to be right-handed, while males tend to be left-handed. See also Lateralization of brain function General Cardinal direction Clockwise, which also discusses counterclockwise/anticlockwise, the two terms for the opposite sense of rotation Dexter and sinister Footedness Laterality Left- and right-hand traffic Left-right confusion Ocular dominance (eyedness) Proper right and proper left Handedness Edinburgh Handedness Inventory Geschwind–Galaburda hypothesis Neuroanatomy of handedness Situs inversus Twins and handedness References Further resources External links Lefties Have The Advantage In Adversarial Situations, ScienceDaily, April 14, 2006. Science Creative Quarterly's overview of some of the genetic underpinnings of left-handedness Hansard (1998). "Left-handed Children", Debate contribution by the Rt Hon. Mr. Peter Luff (MP for Mid-Worcestershire), House of Commons, 22 July. Handedness and Earnings / Higher paychecks: a left-handed compliment? Handedness & earnings, published in Journal of Human Resources 2007 Handedness Research Institute Study Reveals Why Lefties Are Rare Chirality Discrimination Mental processes Asymmetry
Handedness
[ "Physics", "Chemistry", "Biology" ]
6,830
[ "Pharmacology", "Behavior", "Origin of life", "Motor control", "Stereochemistry", "Chirality", "Aggression", "Discrimination", "Asymmetry", "Biochemistry", "Handedness", "Symmetry", "Biological hypotheses" ]
172,716
https://en.wikipedia.org/wiki/Caltrop
A caltrop (also known as caltrap, galtrop, cheval trap, galthrap, galtrap, calthrop, jackrock or crow's foot) is an area denial weapon made up of usually four, but possibly more, sharp nails or spines arranged in such a manner that one of them always points upward from a stable base (for example, a tetrahedron). Historically, caltrops were part of defences that served to slow the advance of troops, especially horses, chariots, and war elephants, and were particularly effective against the soft feet of camels. In modern times, caltrops are effective when used against wheeled vehicles with pneumatic tires. Name The modern name "caltrop" is derived from the Old English calcatrippe (heel-trap), such as in the French usage chausse-trape (shoe-trap). The Latin word tribulus originally referred to this and provides part of the modern scientific name of a plant commonly called the caltrop, Tribulus terrestris, whose spiked seed cases resemble caltrops and can injure feet and puncture bicycle tires. This plant can also be compared to Centaurea calcitrapa, which is also sometimes referred to as the "caltrop". Trapa natans, a water plant with similarly shaped spiked seeds and edible fruit, is called the "water caltrop". History The caltrop was called tribulus by the ancient Romans, or sometimes murex ferreus, the latter meaning "jagged iron" (literally "iron spiny snail-shell"). The former term derives from the ancient Greek word tribolos, meaning three spikes. The late Roman writer Vegetius, referring in his work De re militari to scythed chariots, wrote: Another example of the use of caltrops was found in Jamestown, Virginia, in the United States: The Japanese version of the caltrop is called makibishi. Makibishi were sharp spiked objects that were used in feudal Japan to slow pursuers and also were used in the defence of samurai fortifications. Makibishi forged from iron were distinguished from the natural type, made from the dried seed pod of the water caltrop, or water chestnut (genus Trapa). Both types of makibishi could penetrate the thin soles of shoes, such as the sandals commonly worn in feudal Japan. Modern uses World War I During service in World War I, Australian Light Horse troops collected caltrops as keepsakes. These caltrops were either made by welding two pieces of wire together to form a four-pointed star or pouring molten steel into a mould to form a solid, seven-pointed star. The purpose of these devices was to disable horses. They were exchanged with French troops for bullets. The Australian Light Horse troops referred to them as "Horse Chestnuts". World War II Caltrops were used extensively and effectively during World War II. The modifications and variants produced by the Special Operations Executive (SOE) and the Office of Strategic Services (OSS) of the United States are still in use today within special forces and law enforcement bodies. The Germans dropped crow's feet (Krähenfüße). These were made from two segments of sheet metal welded together into a tetrapod with four barbed points and then painted in camouflage colours. They came in two sizes, differing in side length. They were dropped from aircraft in containers the same size as bombs and were dispersed by a small explosive charge. Tire deflation device Inventors patented caltrop-like devices to deflate vehicle tires in a manner useful to law enforcement agencies or the military. They are currently used by the military and police. Labour disputes Caltrops have been used at times during labour strikes and other disputes.
Such devices were used by some to destroy the tires of management and replacement workers. Caltrops, referred to as "jack rocks" in news articles, were used during the Caterpillar strike in 1995, puncturing tires on vehicles crossing the picket line in Peoria, Illinois. Because of their small size and the difficulty proving their source, both the company and the United Auto Workers blamed each other. Collateral damage included a school bus and a walking mail carrier. In Illinois, the state legislature passed a law making the possession of such devices a misdemeanor. Via drones During the Russian invasion of Ukraine, Ukraine has used drones to drop caltrops on key roads to disrupt wheeled vehicles carrying Russian military materiel, and make them easier to target with loitering munitions. Symbol A caltrop has a variety of symbolic uses and is commonly found as a charge in heraldry. For instance, the Finnish noble family Fotangel (Swedish for 'caltrop') had arms gules, three caltrops argent. It has also been adopted by military units: the caltrop is the symbol of the US Army's III Corps, which is based at Fort Cavazos, Texas. III Corps traces its lineage to the days of horse cavalry, which used the caltrop as a defensive area denial weapon. The caltrop is also the symbol of the United States Marine Corps' 3rd Division, formed on 16 September 1942. Similar devices Punji sticks perform a similar role to caltrops. These are sharpened sticks placed vertically in the ground. Their use in modern times targets the body and limbs of a falling victim by means of a pit or tripwire. During the Second World War, large caltrop-shaped objects made from reinforced concrete were used as anti-tank devices, although it seems that these were rare. Much more common were concrete devices called dragon's teeth, which were designed to wedge into tank treads. Large ones weighing over are still used defensively to deny access to wheeled vehicles, especially in camp areas. As dragon's teeth are immobile, the analogy with the caltrop is inexact. Another caltrop-like defence during World War II was the massive steel, freestanding Czech hedgehog; the works were designed as anti-tank obstacles and could also damage landing craft and warships that came too close to shore. These were used by the Germans to defend beaches in Normandy and other coastal areas. Czech hedgehogs are heavily featured and plainly visible in the 1998 Steven Spielberg-directed American epic war film Saving Private Ryan, throughout the scenes early in the film depicting the June6, 1944 Omaha Beach assault (part of the Normandy landings during World War II). Tetrapods are concrete blocks shaped like caltrops, which interlock when piled up. They are used as riprap in the construction of breakwaters and other sea defences, as they have been found to let the water pass through them and interrupt natural processes less than some other defenses. See also Area denial weapon Booby trap Knucklebones, a game, with similar hazards Triskelion Footnotes References Clan Drummond, a brief history, at Scot Clans John L. Cotter and J. Paul Hudson, New Discoveries at Jamestown, Site of the First Successful English Settlement in America, 1957 Project Gutenberg Official documents Roman weapons Roman fortifications Area denial weapons Engineering barrages Fortification (obstacles) Medieval weapons Guerrilla warfare tactics Metallic objects Tetrahedra
Caltrop
[ "Physics", "Engineering" ]
1,488
[ "Metallic objects", "Engineering barrages", "Military engineering", "Physical objects", "Area denial weapons", "Matter" ]
172,732
https://en.wikipedia.org/wiki/Glycerol
Glycerol () is a simple triol compound. It is a colorless, odorless, viscous liquid that is sweet-tasting and non-toxic. The glycerol backbone is found in lipids known as glycerides. It is also widely used as a sweetener in the food industry and as a humectant in pharmaceutical formulations. Because of its three hydroxyl groups, glycerol is miscible with water and is hygroscopic in nature. Modern use of the word glycerine (alternatively spelled glycerin) refers to commercial preparations of less than 100% purity, typically 95% glycerol. Structure Although achiral, glycerol is prochiral with respect to reactions of one of the two primary alcohols. Thus, in substituted derivatives, the stereospecific numbering labels the molecule with a sn- prefix before the stem name of the molecule. Production Natural sources Glycerol is generally obtained from plant and animal sources where it occurs in triglycerides, esters of glycerol with long-chain carboxylic acids. The hydrolysis, saponification, or transesterification of these triglycerides produces glycerol as well as the fatty acid derivative: Triglycerides can be saponified with sodium hydroxide to give glycerol and fatty sodium salt or soap. Typical plant sources include soybeans or palm. Animal-derived tallow is another source. From 2000 to 2004, approximately 950,000 tons per year were produced in the United States and Europe; 350,000 tons of glycerol were produced in the U.S. alone. Since around 2010, there is a large surplus of glycerol as a byproduct of biofuel, enforced for example by EU directive 2003/30/EC that required 5.75% of petroleum fuels to be replaced with biofuel sources across all member states. Crude glycerol produced from triglycerides is of variable quality, with a low selling price of as low as US$0.02–0.05 per kilogram already in 2011. It can be purified in a rather expensive process by treatment with activated carbon to remove organic impurities, alkali to remove unreacted glycerol esters, and ion exchange to remove salts. High purity glycerol (greater than 99.5%) is obtained by multi-step distillation; a vacuum chamber is necessary due to its high boiling point (290 °C). Consequently, glycerol recycling is more of a challenge than its production, for instance by conversion to glycerol carbonate or to synthetic precursors, such as acrolein and epichlorohydrin. Synthetic glycerol Although usually not economical anymore, glycerol can be synthesized by various routes. During World War II, synthetic glycerol processes became a national defense priority because it is a precursor to nitroglycerine. Epichlorohydrin is the most important precursor. Chlorination of propylene gives allyl chloride, which is oxidized with hypochlorite to dichlorohydrin, which reacts with a strong base to give epichlorohydrin. Epichlorohydrin can be hydrolyzed to glycerol. Chlorine-free processes from propylene include the synthesis of glycerol from acrolein and propylene oxide. Applications Food industry In food and beverages, glycerol serves as a humectant, solvent, and sweetener, and may help preserve foods. It is also used as filler in commercially prepared low-fat foods (e.g., cookies), and as a thickening agent in liqueurs. Glycerol and water are used to preserve certain types of plant leaves. As a sugar substitute, it has approximately 27 kilocalories per teaspoon (sugar has 20) and is 60% as sweet as sucrose. It does not feed the bacteria that form a dental plaque and cause dental cavities. 
As a food additive, glycerol is labeled as E number E422. It is added to icing (frosting) to prevent it from setting too hard. As used in foods, glycerol is categorized by the U.S. Academy of Nutrition and Dietetics as a carbohydrate. The U.S. Food and Drug Administration (FDA) carbohydrate designation includes all caloric macronutrients excluding protein and fat. Glycerol has a caloric density similar to table sugar, but a lower glycemic index and different metabolic pathway within the body. It is also recommended as an additive when polyol sweeteners such as erythritol and xylitol are used, as its heating effect in the mouth will counteract these sweeteners' cooling effect. Medical Glycerol is used in medical, pharmaceutical and personal care preparations, often as a means of improving smoothness, providing lubrication, and as a humectant. Ichthyosis and xerosis have been relieved by the topical use of glycerin. It is found in allergen immunotherapies, cough syrups, elixirs and expectorants, toothpaste, mouthwashes, skin care products, shaving cream, hair care products, soaps, and water-based personal lubricants. In solid dosage forms like tablets, glycerol is used as a tablet holding agent. For human consumption, glycerol is classified by the FDA among the sugar alcohols as a caloric macronutrient. Glycerol is also used in blood banking to preserve red blood cells prior to freezing. Taken rectally, glycerol functions as a laxative by irritating the anal mucosa and inducing a hyperosmotic effect, expanding the colon by drawing water into it to induce peristalsis resulting in evacuation. It may be administered undiluted either as a suppository or as a small-volume (2–10 ml) enema. Alternatively, it may be administered in a dilute solution, such as 5%, as a high-volume enema. Taken orally (often mixed with fruit juice to reduce its sweet taste), glycerol can cause a rapid, temporary decrease in the internal pressure of the eye. This can be useful for the initial emergency treatment of severely elevated eye pressure. In 2017, researchers showed that the probiotic Limosilactobacillus reuteri bacteria can be supplemented with glycerol to enhance its production of antimicrobial substances in the human gut. This was confirmed to be as effective as the antibiotic vancomycin at inhibiting Clostridioides difficile infection without having a significant effect on the overall microbial composition of the gut. Glycerol has also been incorporated as a component of bio-ink formulations in the field of bioprinting. The glycerol content acts to add viscosity to the bio-ink without adding large protein, saccharide, or glycoprotein molecules. Botanical extracts When utilized in "tincture" method extractions, specifically as a 10% solution, glycerol prevents tannins from precipitating in ethanol extracts of plants (tinctures). It is also used as an "alcohol-free" alternative to ethanol as a solvent in preparing herbal extractions. It is less extractive when utilized in a standard tincture methodology. Alcohol-based tinctures can also have the alcohol removed and replaced with glycerol for its preserving properties. Such products are not "alcohol-free" in a scientific or FDA regulatory sense, as glycerol contains three hydroxyl groups. Fluid extract manufacturers often extract herbs in hot water before adding glycerol to make glycerites. 
When used as a primary "true" alcohol-free botanical extraction solvent in non-tincture based methodologies, glycerol has been shown to possess a high degree of extractive versatility for botanicals including removal of numerous constituents and complex compounds, with an extractive power that can rival that of alcohol and water–alcohol solutions. That glycerol possesses such high extractive power assumes it is utilized with dynamic (critical) methodologies as opposed to standard passive "tincturing" methodologies that are better suited to alcohol. Glycerol does not denature or render a botanical's constituents inert as alcohols (ethanol, methanol, and so on) do. Glycerol is a stable preserving agent for botanical extracts that, when utilized in proper concentrations in an extraction solvent base, does not allow inverting or reduction-oxidation of a finished extract's constituents, even over several years. Both glycerol and ethanol are viable preserving agents. Glycerol is bacteriostatic in its action, and ethanol is bactericidal in its action. Electronic cigarette liquid Glycerin, along with propylene glycol, is a common component of e-liquid, a solution used with electronic vaporizers (electronic cigarettes). This glycerol is heated with an atomizer (a heating coil often made of Kanthal wire), producing the aerosol that delivers nicotine to the user. Antifreeze Like ethylene glycol and propylene glycol, glycerol is a non-ionic kosmotrope that forms strong hydrogen bonds with water molecules, competing with water-water hydrogen bonds. This interaction disrupts the formation of ice. The minimum freezing point temperature is about corresponding to 70% glycerol in water. Glycerol was historically used as an anti-freeze for automotive applications before being replaced by ethylene glycol, which has a lower freezing point. While the minimum freezing point of a glycerol-water mixture is higher than an ethylene glycol-water mixture, glycerol is not toxic and is being re-examined for use in automotive applications. In the laboratory, glycerol is a common component of solvents for enzymatic reagents stored at temperatures below due to the depression of the freezing temperature. It is also used as a cryoprotectant where the glycerol is dissolved in water to reduce damage by ice crystals to laboratory organisms that are stored in frozen solutions, such as fungi, bacteria, nematodes, and mammalian embryos. Some organisms like the moor frog produce glycerol to survive freezing temperatures during hibernation. Chemical intermediate Glycerol is used to produce a variety of useful derivatives. Nitration gives nitroglycerin, an essential ingredient of various explosives such as dynamite, gelignite, and propellants like cordite. Nitroglycerin under the name glyceryl trinitrate (GTN) is commonly used to relieve angina pectoris, taken in the form of sub-lingual tablets, patches, or as an aerosol spray. Trifunctional polyether polyols are produced from glycerol and propylene oxide. Oxidation of glycerol affords mesoxalic acid. Dehydrating glycerol affords hydroxyacetone. Chlorination of glycerol gives the 1-chloropropane-2,3-diol: The same compound can be produced by hydrolysis of epichlorohydrin. Epoxidation by reaction with epichlorohydrin and a Lewis acid yields Glycerol triglycidyl ether. Vibration damping Glycerol is used as fill for pressure gauges to damp vibration. 
External vibrations, from compressors, engines, pumps, etc., produce harmonic vibrations within Bourdon gauges that can cause the needle to move excessively, giving inaccurate readings. The excessive swinging of the needle can also damage internal gears or other components, causing premature wear. Glycerol, when poured into a gauge to replace the air space, reduces the harmonic vibrations that are transmitted to the needle, increasing the lifetime and reliability of the gauge. Niche uses Entertainment industry Glycerol is used by set decorators when filming scenes involving water to prevent an area meant to look wet from drying out too quickly. Glycerine is also used in the generation of theatrical smoke and fog as a component of the fluid used in fog machines as a replacement for glycol, which has been shown to be an irritant if exposure is prolonged. Ultrasonic couplant Glycerol can sometimes be used as a replacement for water in ultrasonic testing, as it has a favourably higher acoustic impedance (2.42 MRayl versus 1.483 MRayl for water) while being relatively safe, non-toxic, non-corrosive and relatively low cost. Internal combustion fuel Glycerol is also used to power diesel generators supplying electricity for the FIA Formula E series of electric race cars. Research on additional uses Research continues into potential value-added products of glycerol obtained from biodiesel production. Examples (aside from combustion of waste glycerol): Hydrogen gas production. Glycerine acetate is a potential fuel additive. Additive for starch thermoplastic. Conversion to various other chemicals: Propylene glycol Acrolein Ethanol Epichlorohydrin, a raw material for epoxy resins Metabolism Glycerol is a precursor for synthesis of triacylglycerols and of phospholipids in the liver and adipose tissue. When the body uses stored fat as a source of energy, glycerol and fatty acids are released into the bloodstream. Glycerol is mainly metabolized in the liver. Glycerol injections can be used as a simple test for liver damage, as its rate of absorption by the liver is considered an accurate measure of liver health. Glycerol metabolism is reduced in both cirrhosis and fatty liver disease. Blood glycerol levels are highly elevated during diabetes, and this is believed to be a cause of reduced fertility in patients who suffer from diabetes and metabolic syndrome. Blood glycerol levels in diabetic patients average three times higher than those of healthy controls. Direct glycerol treatment of testes has been found to cause significant long-term reduction in sperm count. Further testing on this subject was abandoned due to the unexpected results, as this was not the goal of the experiment. Circulating glycerol does not glycate proteins as do glucose or fructose, and does not lead to the formation of advanced glycation endproducts (AGEs). In some organisms, the glycerol component can enter the glycolysis pathway directly and, thus, provide energy for cellular metabolism (or, potentially, be converted to glucose through gluconeogenesis). Before glycerol can enter the pathway of glycolysis or gluconeogenesis (depending on physiological conditions), it must be converted to their shared intermediate glyceraldehyde 3-phosphate in the following steps: The enzyme glycerol kinase is present mainly in the liver and kidneys, but also in other body tissues, including muscle and brain. In adipose tissue, glycerol 3-phosphate is obtained from dihydroxyacetone phosphate with the enzyme glycerol-3-phosphate dehydrogenase.
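In outline, the conversion of glycerol to glyceraldehyde 3-phosphate referred to above is generally described as three enzyme-catalysed steps; the summary below is a simplified orientation aid rather than the specific scheme cited:

glycerol + ATP → glycerol 3-phosphate + ADP   (glycerol kinase)
glycerol 3-phosphate + NAD+ → dihydroxyacetone phosphate + NADH + H+   (glycerol-3-phosphate dehydrogenase)
dihydroxyacetone phosphate ⇌ glyceraldehyde 3-phosphate   (triose-phosphate isomerase)

Glyceraldehyde 3-phosphate can then continue through glycolysis or, under gluconeogenic conditions, be built up towards glucose.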
Toxicity and safety Glycerol has very low toxicity when ingested; its oral LD50 for rats is 12,600 mg/kg and 8,700 mg/kg for mice. It does not appear to cause toxicity when inhaled, although changes in cell maturity occurred in small sections of lung in animals under the highest dose measured. A sub-chronic 90-day nose-only inhalation study in Sprague–Dawley (SD) rats exposed to 0.03, 0.16 and 0.66 mg/L glycerin (per liter of air) for 6-hour continuous sessions revealed no treatment-related toxicity other than minimal metaplasia of the epithelium lining at the base of the epiglottis in rats exposed to 0.66 mg/L glycerin. Glycerol intoxication Excessive consumption by children can lead to glycerol intoxication. Symptoms of intoxication include hypoglycemia, nausea and a loss of consciousness. While intoxication as a result of excessive glycerol consumption is rare and its symptoms generally mild, occasional reports of hospitalization have occurred. In the United Kingdom in August 2023, the Food Standards Agency advised manufacturers of syrup used in slush ice drinks to reduce the amount of glycerol in their formulations in order to reduce the risk of intoxication. Food Standards Scotland advises that slush ice drinks containing glycerol should not be given to children under the age of 4, owing to the risk of intoxication. It also recommends that businesses do not use free refill offers for the drinks in venues where children under the age of 10 are likely to consume them, and that products should be appropriately labelled to inform consumers of the presence of glycerol. Historical cases of contamination with diethylene glycol On 4 May 2007, the FDA advised all U.S. makers of medicines to test all batches of glycerol for diethylene glycol contamination. This followed an occurrence of hundreds of fatal poisonings in Panama resulting from a falsified import customs declaration by Panamanian import/export firm Aduanas Javier de Gracia Express, S. A. The cheaper diethylene glycol was relabeled as the more expensive glycerol. Between 1990 and 1998, incidents of DEG poisoning reportedly occurred in Argentina, Bangladesh, India, and Nigeria, and resulted in hundreds of deaths. In 1937, more than one hundred people died in the United States after ingesting DEG-contaminated elixir sulfanilamide, a drug used to treat infections. Etymology The gly- and glu- prefixes for glycols and sugars derive from Ancient Greek glukus, which means sweet. The name glycérine was coined ca. 1811 by Michel Eugène Chevreul to denote what its discoverer, Carl Wilhelm Scheele, had previously called the "sweet principle of fat". It was borrowed into English ca. 1838 and, in the 20th century, was displaced by the 1872 term glycerol, which features the alcohol suffix -ol.
Properties Table of thermal and physical properties of saturated liquid glycerin: {|class="wikitable mw-collapsible mw-collapsed" !Temperature (°C) !Density (kg/m3) !Specific heat (kJ/kg·K) !Kinematic viscosity (m2/s) !Conductivity (W/m·K) !Thermal diffusivity (m2/s) !Prandtl number !Bulk modulus (K−1) |- |0 |1276.03 |2.261 | |0.282 | |84700 | |- |10 |1270.11 |2.319 | |0.284 | |31000 | |- |20 |1264.02 |2.386 | |0.286 | |12500 | |- |30 |1258.09 |2.445 | |0.286 | |5380 | |- |40 |1252.01 |2.512 | |0.286 | |2450 | |- |50 |1244.96 |2.583 | |0.287 | |1630 | |} See also Dioxalin Epichlorohydrin Nitroglycerin Oleochemicals Saponification/Soapmaking Solketal Transesterification References External links Mass spectrum of glycerol CDC – NIOSH Pocket Guide to Chemical Hazards – Glycerin (mist) Alcohol solvents Biofuels Commodity chemicals Cosmetics chemicals Demulcents E-number additives Food additives Glassforming liquids and melts Household chemicals Laxatives Sugar alcohols Triols By-products
Glycerol
[ "Chemistry" ]
4,255
[ "Carbohydrates", "Sugar alcohols", "Commodity chemicals", "Products of chemical industry" ]
172,740
https://en.wikipedia.org/wiki/Elias%20James%20Corey
Elias James Corey (born July 12, 1928) is an American organic chemist. In 1990, he won the Nobel Prize in Chemistry "for his development of the theory and methodology of organic synthesis", specifically retrosynthetic analysis. Regarded by many as one of the greatest living chemists, he has developed numerous synthetic reagents, methodologies and total syntheses and has advanced the science of organic synthesis considerably. Biography E.J. Corey (the surname was anglicized from Levantine Arabic Khoury, meaning priest) was born to Lebanese Greek Orthodox Christian immigrants Fatima (née Hasham) and Elias Corey in Methuen, Massachusetts, north of Boston. His mother changed his name from William to "Elias" to honor his father, who died eighteen months after Corey's birth. His widowed mother, brother, two sisters, aunt and uncle all lived together in a spacious house, struggling through the Great Depression. As a young boy, Corey was independent and enjoyed sports such as baseball, football, and hiking. He attended a Catholic elementary school and Lawrence High School in Lawrence, Massachusetts. At the age of 16 Corey entered MIT, where he earned both a bachelor's degree in 1948 and a Ph.D. under Professor John C. Sheehan in 1951. Upon entering MIT, Corey's only experience with science was in mathematics, and he began his college career pursuing a degree in engineering. After his first chemistry class in his sophomore year he began rethinking his long-term career plans and graduated with a bachelor's degree in chemistry. Immediately thereafter, at the invitation of Professor John C. Sheehan, Corey remained at MIT for his Ph.D. After his graduate career he was offered an appointment at the University of Illinois at Urbana–Champaign, where he became a full professor of chemistry in 1956 at the age of 27. He was initiated as a member of the Zeta chapter of Alpha Chi Sigma at the University of Illinois in 1952. In 1959, he moved to Harvard University, where he is currently an emeritus professor of organic chemistry with an active Corey Group research program. He chose to work in organic chemistry because of "its intrinsic beauty and its great relevance to human health". He has also been an advisor to Pfizer for more than 50 years. Among numerous honors, Corey was awarded the National Medal of Science in 1988, the Nobel Prize in Chemistry in 1990, and the American Chemical Society's greatest honor, the Priestley Medal, in 2004. Major contributions Reagents Corey has developed several new synthetic reagents: PCC (pyridinium chlorochromate), also referred to as the Corey-Suggs reagent, is widely used for the oxidation of alcohols to corresponding ketones and aldehydes. PCC has several advantages over other commercial oxidants. An air-stable yellow solid, it is only slightly hygroscopic. Unlike other oxidizing agents, PCC requires only about 1.5 equivalents to complete a single oxidation (scheme 1). In the reaction, the alcohol nucleophilically displaces chlorine from the electropositive chromium(VI) metal. The chloride anion then acts as a base to afford the aldehyde product and chromium(IV). The slightly acidic character of PCC makes it useful for cyclization reactions with alcohols and alkenes (Scheme 2). The initial oxidation yields the corresponding aldehyde, which can then undergo a Prins reaction with the neighboring alkene. After elimination and further oxidation, the product is a cyclic ketone. Conversely, powdered sodium acetate co-reagent inhibits reaction after formation of the aldehyde. 
PCC's oxidatory robustness has also rendered it useful in the realm of total synthesis (Scheme 3). This example illustrates that PCC is capable of performing a Dauben oxidative rearrangement with tertiary alcohols through a [3,3]-sigmatropic rearrangement. t-Butyldimethylsilyl ether (TBS), triisopropylsilyl ether (TIPS), and methoxyethoxymethyl (MEM) are popular alcohol protecting groups. The development of these protecting groups allowed the synthesis of several natural products whose functional groups could not withstand standard chemical transformations. Although the synthetic community attempts to minimize the use of protecting groups, it is still rare that a published natural-product synthesis omits them entirely. Since 1972 the TBS group has become the most popular silicon protecting group (Scheme 4). TBS is stable to chromatography and labile enough to cleave under basic and acidic conditions. More importantly, TBS ethers are stable to such carbon nucleophiles as Grignard reagents and enolates. CSA (Camphorsulfonic acid) selectively removes a primary TBS ether in the presence of TIPS and tertiary TBS ethers. Other TBS deprotection methods include acids (also Lewis acids), and fluorides. TIPS protecting groups provide increased selectivity of primary over secondary and tertiary alcohol protection. Their ethers are more stable under acidic and basic conditions than TBS ethers, but less labile for deprotection. The most common cleavage reagents employ the same conditions as TBS ether, but longer reaction times. Usually TBAF severs TBS ethers, but the hindered TBS ether above survives primary TIPS removal (scheme 5). The MEM protecting group was first described by Corey in 1976. This protecting group is similar in reactivity and stability to other alkoxy methyl ethers under acidic conditions. Acidic conditions usually accomplish cleavage of MEM protecting groups, but coordination with metal halides greatly enhances lability (scheme 6). 1,3-Dithianes are a temporary modification of a carbonyl group that reverses their reactivity in displacement and addition reactions. Dithianation introduced umpolung chemistry, now a key concept in organic synthesis. The formations of dithianes can be accomplished with a Lewis acid (scheme 7) or directly from carbonyl compounds. The pKa of dithianes is approximately 30, allowing deprotonation with an alkyl lithium reagent, typically n-butyllithium. The reaction between dithianes and aldehydes is now known as the Corey-Seebach reaction. The dithiane, once deprotonated, serves as an acyl anion, attacking incoming electrophiles. Dithiane deprotection, usually with HgO, constructs a ketone product. Corey also commenced detailed studies on cationic polyolefin cyclizations utilized in enzymatic production of cholesterol from simpler plant terpenes. Corey established the details of the remarkable cyclization process by first studying the biological synthesis of sterols from squalene. Methodology Several reactions developed in Corey's lab have become commonplace in modern synthetic organic chemistry. At least 302 methods have been developed in the Corey group since 1950. Several reactions have been named after him: Corey-Itsuno reduction, also known as the Corey-Bakshi-Shibata reduction, is an enantioselective reduction of ketones to alcohols through an oxazaborolidine catalyst, with various boranes as the stoichiometric reductant. 
The Corey group first demonstrated the catalyst's synthesis using borane and the chiral amino acid proline (scheme 9) (Corey, E.J.; Kürti, L. Enantioselective Chemical Synthesis; Direct Book Publishing: Dallas, 2010). Later, Corey demonstrated that substituted boranes were easier to prepare and much more stable. The reduction mechanism begins with the oxazaborolidine, only slightly basic at nitrogen, coordinating to a borane reductant (scheme 10). Poor donation from the nitrogen to the boron leaves the Lewis acidity mostly intact, allowing coordination to the ketone substrate. The complexation of the substrate occurs from the most accessible lone pair of the oxygen, restricting rotation around the B-O bond due to the sterically neighboring phenyl group. Migration of the hydride from borane to the electrophilic ketone center occurs via a 6-membered ring transition state, leading to a four-membered ring intermediate, ultimately providing the chiral product and regeneration of the catalyst. The reaction has also been of great use to natural products chemists (scheme 11). The synthesis of dysidiolide by Corey and co-workers was achieved via an enantioselective CBS reduction using a borane-dimethylsulfide complex. Corey-Fuchs alkyne synthesis is the synthesis of terminal alkynes through a one-carbon homologation of aldehydes using triphenylphosphine and carbon tetrabromide (Corey, E.J.; Fuchs, P.L. Tetrahedron Lett. 1972, 3769). The mechanism is similar to that of a combined Wittig reaction and Appel reaction. Reacting a phosphorus ylide formed in situ with the aldehyde substrate yields a dibromoolefin. On treatment with two equivalents of n-butyllithium, lithium halogen exchange and deprotonation yield a lithium acetylide species that undergoes hydrolysis to yield the terminal alkyne product (scheme 12). More recent developments include a modified procedure for one-pot synthesis. This synthetic transformation has been proven successful in the total synthesis of (+)-taylorione by W.J. Kerr and co-workers (scheme 13). The Corey–Kim oxidation provided a new method for converting alcohols into the corresponding aldehydes and ketones. This combination of N-chlorosuccinimide (NCS), dimethyl sulfide (DMS), and triethylamine (TEA) offers a less toxic alternative to chromium-based oxidations. The Corey-Kim reagent is formed in situ when the succinimide and sulfide react to form a dimethylsuccinimidosulfonium chloride species (scheme 14). Triethylamine deprotonates the alkoxysulfonium salt at the α position to afford the oxidized product. The reaction accommodates a wide array of functional groups, but allylic and benzylic alcohols are typically transformed into chlorides instead. Its application in synthesis is based on the mild protocol conditions and functional and protecting group compatibility. In the total synthesis of ingenol, Kuwajima and co-workers exploited the Corey-Kim oxidation by selectively oxidizing the less hindered secondary alcohol (scheme 15). Corey-Winter olefination is a stereospecific transformation of 1,2-diols to alkenes involving the diol substrate, thiocarbonyldiimidazole, and excess trialkylphosphite. The exact mechanism is unknown, but has been narrowed down to two possible pathways. The thionocarbonate and trialkylphosphite form either a phosphorus ylide or a carbenoid intermediate.
The reaction is stereospecific for most substrates unless the product would lead to an exceedingly strained structure, as discovered when Corey et al. attempted to form sterically hindered trans alkenes in certain 7-membered rings. Stereodefined alkenes are present in several natural products, and the method continues to be exploited to prepare a series of complex targets. T. K. M. Shing et al. used the Corey-Winter olefination to synthesize (+)-Boesenoxide (scheme 16). A CBS-type enantioselective Diels–Alder reaction has been developed using a scaffold similar to that of the enantioselective CBS reduction. After the development of this reaction, the CBS reagent proved versatile for several powerful synthetic transformations. A chiral Lewis acid such as the CBS catalyst accommodates a broad range of unsaturated enone substrates. The reaction likely proceeds via a highly organized six-membered-ring pre-transition state to deliver highly enantio-enriched products (scheme 17). This transition state likely occurs because of favorable pi-stacking with the phenyl substituent. The enantioselectivity of the process arises from the diene approaching the dienophile from the face opposite the phenyl substituent. The Diels-Alder reaction is one of the most powerful transformations in synthetic chemistry. The synthesis of natural products using the Diels-Alder reaction as a transform has been applied especially to the formation of six-membered rings (scheme 18). The Corey-Nicolaou macrolactonization provided the first method for preparing medium- to large-ring lactones. Previously, intermolecular reaction outcompeted intramolecular lactonization even at low concentrations. One big advantage of this reaction is that it is performed under neutral conditions, allowing the presence of acid- and base-labile functional groups. As of 2016, rings of 7–44 members have been successfully synthesized using this method. The reaction occurs in the presence of 2,2'-dipyridyl disulfide and triphenylphosphine, at reflux in a nonpolar solvent such as benzene. The mechanism begins with formation of the 2-pyridinethiol ester (scheme 19). Proton transfer provides a dipolar intermediate in which the alkoxide nucleophile attacks the electrophilic carbonyl center, providing a tetrahedral intermediate that yields the macrolactone product. One of the first applications of this protocol was in the total synthesis of zearalenone (scheme 20). The Johnson-Corey-Chaykovsky reaction synthesizes epoxides and cyclopropanes. The reaction forms a sulfur ylide in situ that reacts with enones, ketones, aldehydes, and imines to form the corresponding epoxides, cyclopropanes, and aziridines. Two sulfur ylide variants have been employed that give different chemoselective products (scheme 21). Dimethylsulfoxonium methylide provides epoxides from ketones, but yields cyclopropanes when enones are employed. Dimethylsulfonium methylide transforms both ketones and enones to the corresponding epoxides. Dimethylsulfonium methylide is much more reactive and less stable than dimethylsulfoxonium methylide, so it is generated at low temperatures. Another distinct advantage of these two variants is that, owing to their different reactivities, they provide different kinetic diastereoselectivities. The reaction is very well established, and enantioselective variants (catalytic and stoichiometric) have also been achieved. 
From a retrosynthetic analysis standpoint, this reaction provides a reasonable alternative to conventional epoxidation reactions of alkenes (scheme 22). Danishefsky utilized this methodology in the synthesis of taxol. Diastereoselectivity is established by 1,3-interactions in the transition state required for epoxide closure. Total syntheses E. J. Corey and his research group have completed many total syntheses. At least 265 natural compounds have been synthesized in the Corey group since 1950. His 1969 total syntheses of several prostaglandins are considered classics. Specifically, the synthesis of Prostaglandin F2α presents several challenges. The presence of both cis and trans olefins as well as five asymmetric carbon atoms renders the molecule a desirable challenge for organic chemists. Corey's retrosynthetic analysis outlines a few key disconnections that lead to simplified precursors (scheme 23). Molecular simplification began by disconnecting both carbon chains with a Wittig reaction and a Horner-Wadsworth-Emmons modification. The Wittig reaction affords the cis product, while the Horner-Wadsworth-Emmons reaction produces the trans olefin. The published synthesis reports a 1:1 diastereomeric mixture from the carbonyl reduction using zinc borohydride. However, years later Corey and co-workers established the CBS reduction. One of the examples that exemplified this protocol was an intermediate in the prostaglandin synthesis, where the reduction gave a 9:1 mixture favoring the desired diastereomer (scheme 24). The iodolactonization transform affords an allylic alcohol leading to a key Baeyer-Villiger intermediate. This oxidation regioselectively inserts an oxygen atom between the ketone and the most electron-rich site. The pivotal intermediate leads to a straightforward conversion to the Diels-Alder structural goal, which provides the carbon framework for the functionalized cyclopentane ring. Later Corey developed an asymmetric Diels-Alder reaction employing a chiral oxazaborolidine, greatly simplifying the synthetic route to the prostaglandins. Other notable syntheses: Longifolene Ginkgolides A and B Lactacystin Miroestrol Ecteinascidin 743 Salinosporamide A Computer programs Corey and his research group created LHASA, a program that uses artificial intelligence to discover sequences of reactions which may lead to a total synthesis. The program was one of the first to use a graphical interface to input and display chemical structures. Publications E.J. Corey has more than 1100 publications. In 2002, the American Chemical Society (ACS) recognized him as the "Most Cited Author in Chemistry". In 2007, he received the first ACS Publications Division "Cycle of Excellence High Impact Contributor Award" and was ranked the number one chemist in terms of research impact by the Hirsch Index (h-index). He has also authored several books. Altom suicide Jason Altom, one of Corey's students, committed suicide in 1998. Altom's suicide caused controversy because he explicitly blamed Corey, his research advisor, for his death. Altom cited in his 1998 farewell note "abusive research supervisors" as one reason for taking his life. Altom's suicide note also contained explicit instructions on how to reform the relationship between students and their supervisors. Altom was the third member of Corey's lab to commit suicide since 1980. Corey was reportedly devastated and bewildered by his student's death. Corey said, "That letter doesn't make sense. At the end, Jason must have been delusional or irrational in the extreme." 
Corey also claimed he never questioned Altom's intellectual contributions. "I did my best to guide Jason as a mountain guide would to guide someone climbing a mountain. I did my best every step of the way," Corey states. "My conscience is clear. Everything Jason did came out of our partnership. We never had the slightest disagreement." The American Foundation for Suicide Prevention (AFSP) cited The New York Times article on Altom's suicide as an example of problematic reporting, arguing that Altom presented warning signs of depression and suicidal ideation and that the article had scapegoated Corey despite a lack of secondary evidence that the advisor's behavior had contributed to Altom's distress. According to The Boston Globe, students and professors said Altom actually retained Corey's support. Corey Group members As of 2010, approximately 700 people have been Corey Group members including notable students Eric Block, Dale L. Boger, Weston T. Borden, David E. Cane, Rick L. Danheiser, William L. Jorgensen, John Katzenellenbogen, Alan P. Kozikowski, Bruce H. Lipshutz, David R. Liu, Albert Meyers, K. C. Nicolaou, Ryōji Noyori, Gary H. Posner, Bengt I. Samuelsson, Dieter Seebach, Vinod K. Singh, Brian Stoltz, Alice Ting, Hisashi Yamamoto, Phil Baran and Jin-Quan Yu. A database of 580 former members and their current affiliation was developed for Corey's 80th birthday in July 2008. Woodward–Hoffmann rules When awarded the Priestley Medal in 2004, E. J. Corey created a controversy with his claim to have inspired Robert Burns Woodward prior to the development of the Woodward–Hoffmann rules. Corey wrote: "On May 4, 1964, I suggested to my colleague R. B. Woodward a simple explanation involving the symmetry of the perturbed (HOMO) molecular orbitals for the stereoselective cyclobutene → 1,3-butadiene and 1,3,5-hexatriene → cyclohexadiene conversions that provided the basis for the further development of these ideas into what became known as the Woodward–Hoffmann rules." This was Corey's first public statement on his claim that starting on May 5, 1964, Woodward put forth Corey's explanation as his own thought with no mention of Corey and the conversation of May 4. Corey had discussed his claim privately with Hoffmann and close colleagues since 1964. Corey mentions that he made the Priestley statement "so the historical record would be correct". Corey's claim and contribution were publicly rebutted by Roald Hoffmann in the journal Angewandte Chemie. In the rebuttal, Hoffmann states that he asked Corey over the course of their long discussion of the matter why Corey did not make the issue public. Corey responded that he thought such a public disagreement would hurt Harvard and that he would not "consider doing anything against Harvard, to which I was and am so devoted." Corey also hoped that Woodward himself would correct the historical record "as he grew older, more considerate, and more sensitive to his own conscience." Woodward died suddenly of a heart attack in his sleep in 1979. Awards and honors E.J. Corey has received more than 40 major awards including the Linus Pauling Award (1973), Franklin Medal (1978), Tetrahedron Prize (1983), Wolf Prize in Chemistry (1986), National Medal of Science (1988), Japan Prize (1989), Nobel Prize in Chemistry (1990), Golden Plate Award of the American Academy of Achievement (1991), Roger Adams Award (1993), and the Priestley Medal (2004). He was inducted into the Alpha Chi Sigma Hall of Fame in 1998. 
As of 2008, he has been awarded 19 honorary degrees from universities around the world including Oxford University (UK), Cambridge University (UK), and National Chung Cheng University. In 2013, the E.J. Corey Institute of Biomedical Research (CIBR) opened in Jiangyin, Jiangsu Province, China. Corey was elected a Foreign Member of the Royal Society (ForMemRS) in 1998. References External links Compiled Works of E.J. Corey Elias James Corey Nobel Lecture (PDF) Podcast interview with E.J. Corey about his Lifelong Pursuit of Learning – May 30, 2018 1928 births Living people Members of the United States National Academy of Sciences 20th-century American chemists 21st-century American chemists American organic chemists American people of Lebanese descent Nobel laureates in Chemistry American Nobel laureates Harvard University faculty Wolf Prize in Chemistry laureates Massachusetts Institute of Technology School of Science alumni National Medal of Science laureates Foreign members of the Royal Society University of Illinois Urbana-Champaign faculty People from Methuen, Massachusetts Lawrence High School (Massachusetts) alumni Members of the National Academy of Medicine Recipients of Franklin Medal
Elias James Corey
[ "Chemistry" ]
4,880
[ "Organic chemists", "American organic chemists" ]
172,761
https://en.wikipedia.org/wiki/Tricycle
A tricycle, sometimes abbreviated to trike, is a human-powered (or gasoline or electric motor powered or assisted, or gravity powered) three-wheeled vehicle. Some tricycles, such as cycle rickshaws (for passenger transport) and freight trikes, are used for commercial purposes, especially in the developing world, particularly Africa and Asia. In the West, adult-sized tricycles are used primarily for recreation, shopping, and exercise. Tricycles are favoured by children, the disabled, and senior adults for their apparent stability versus a bicycle; however a conventional trike may exhibit poor dynamic lateral stability, and the rider should exercise appropriate operating caution when cornering (e.g., with regard to speed, rate of turn, slope of surface) and operating technique (e.g., leaning the body 'into' the turn) to avoid tipping the trike over. Designs such as recumbents or others which place the rider lower relative to the wheel axles have a lower centre of gravity, and/or designs with canted wheels (tilted at the top towards the centerline) may be more resistant to lifting inner wheels or tipping during fast sharp turns, but still require operator awareness and technique. History A three-wheeled wheelchair was built in 1655 or 1680 by a disabled German man, Stephan Farffler, who wanted to be able to maintain his mobility. A watch-maker, Farffler created a vehicle that was powered by hand cranks. In 1789, two French inventors developed a three-wheeled vehicle, powered by pedals; they called it the tricycle. In 1818, British inventor Denis Johnson patented his approach to designing tricycles. In 1876, James Starley developed the Coventry Lever Tricycle, which used two small wheels on the right side and a large drive wheel on the left side; power was supplied by hand levers. In 1877, Starley developed a new vehicle he called the Coventry Rotary, which was "one of the first rotary chain drive tricycles." Starley's inventions started a tricycling craze in Britain; by 1879, there were "twenty types of tricycles and multi-wheel cycles ... produced in Coventry, England, and by 1884, there were over 120 different models produced by 20 manufacturers." The first front steering tricycle was manufactured in 1881 by The Leicester Safety Tricycle Company of Leicester, England, which was brought to the market in 1882 costing £18 (). They also developed a folding tricycle at the same time. Tricycles were used by riders who did not feel comfortable on the high wheelers, such as women who wore long, flowing dresses (see rational dress). In September, 1903 Edmund Payne, the popular comedian, started an attempt to beat the twenty-four hours' unpaced Tricycle record. At 100 miles Payne was inside his schedule time, but shortly afterwards had to desist at Wisbech, having encountered five hours of incessant rain. Associations In the UK, upright tricycles are sometimes referred to as "barrows". Many trike enthusiasts in the UK belong to the Tricycle Association, formed in 1929. They participate in day rides, tours, time trials, and a criterium (massed start racing) series. Wheel configurations Delta A delta tricycle has one front wheel and two rear wheels. Tadpole A tadpole tricycle has two front wheels and one rear wheel. Rear wheel steering is sometimes used, although this increases the turning circle and can affect handling (the geometry is similar to a regular tricycle operating in reverse, but with a steering damper added). 
Other Some early pedal tricycles from the late 19th century used two wheels in tandem on one side and a larger driving wheel on the other. An in-line three-wheeled vehicle has two steered wheels, one at the front and the other in the middle or at the rear. Types Upright Upright trikes resemble a two-wheeled bicycle, traditionally diamond frame, or open frame, but with either two widely spaced wheels at the back (called delta) or two wheels at the front (called tadpole). The rider straddles the frame in both delta and tadpole configurations. Steering is through a handlebar directly connected to the front wheel via a conventional bicycle fork in delta, or via a form of Ackermann steering geometry in the case of the upright tadpole. All non-tilting trikes have stability issues and great care must be used when riding a non-tilting upright trike. The center of gravity is quite high compared to recumbent trikes. Because of this, non-tilting trikes are more prone to tipping over in corners and on uneven or sloping terrain. Conversely, the rider enjoys better visibility than on a recumbent because their head is higher. Recumbent Recumbent trikes' advantages (over conventional trikes) include stability (through low centre of gravity) and low aerodynamic drag. Disadvantages (compared to bicycles) include greater cost, weight, and width. The very low seat may make entry difficult, and on the road they may be less visible to other traffic. Delta A recumbent delta is similar to an upright, with two wheels at the back and one at the front, but has a recumbent layout in which the rider is seated in a chair-like seat. One or both rear wheels can be driven, while the front is used for steering (the usual layout). Steering is either through a linkage, with the handlebars under the seat (under seat steering) or directly to the front wheel with a large handlebar (over seat steering). Some delta trikes can be stored upright by lifting the front wheel and resting the top of the seat on the ground. Delta trikes generally have higher seats and a tighter turning radius than tadpole trikes. The tight turning radius is useful if riding on trails with offset barriers, or navigating around closely placed obstacles. The higher seat makes mounting and dismounting easier. Even with the higher seat a delta trike can be quite stable provided most of the weight (including the rider) is shifted back towards the rear wheels. Many delta trikes place the seat too far forward and that takes weight off the two rear wheels and puts more weight onto the front wheel making the trike more unstable. The Hase Kettwiesel delta trike has an high seat that is placed to put most of the weight onto the cambered rear wheels making it more stable. Delta trikes are suitable to be used as manual scooters for mobility, rehabilitation and/or exercise. The Hase Lepus Comfort is an example of a rehabilitation delta trike designed mainly for comfort and ease of use. It has a lowered front boom and the seat can be adjusted to a height of , which aids in mounting and dismounting. It also has rear wheel suspension for comfort. The Lepus can be folded for easier storage and transportation. The weight of a delta trike can be quite close to the weight of a tadpole trike if they are both of a similar quality and similar materials are used. The Hase Kettwiesel Allround delta trike has an aluminium frame and weighs 39.4 lbs (17.9 kg). The Catrike Road tadpole trike has an aluminium frame and weighs 37.5 lbs (17 kg). 
Tadpole The recumbent tadpole or reverse trike is a recumbent design with two steered wheels at the front and one driven wheel at the back, though one model has the front wheels driven while the rear wheel steers. Steering is either through a single handlebar linked with tie rods to the front wheels' stub axle assemblies (indirect) or with two handlebars (rather, two half-handlebars) each bolted to a steerer tube, usually through a bicycle-type headset and connected to a stub axle assembly (direct). A single tie rod connects the left and right axle assemblies. The tadpole trike is often used by middle-aged or retiree-age former bicyclists who are tired of the associated pains from normal upright bikes. With its extremely low center of gravity, aerodynamic layout and light weight (for trikes), tadpoles are considered the highest performance trikes. Most velomobiles are built in a tadpole trike configuration since a wide front and narrow rear offer superior aerodynamics to a delta trike configuration. Hand-crank Hand-crank trikes use a hand-operated crank, either as a sole source of power or a double drive with footpower from pedals and hand-power from the hand crank. The hand-power only trikes can be used by individuals who do not have the use of their legs due to a disability or an injury. They are made by companies including Greenspeed, Invacare, Quickie and Druzin. In case of paralysis of the legs, more speed and range of distance can be obtained by adding functional electrical stimulation to the legs. The large leg muscles are activated by electrical impulses synchronized with the hand cranking movement. Tandem Recumbent tandem trikes allow two people to ride in a recumbent position with an extra-strong backbone frame to hold the extra weight. Some allow the "captain" (the rider who steers) and "stoker" (the rider who only pedals) to pedal at different speeds. They are often made with couplers so the frames can be broken down into pieces for easier transport. Manufacturers of recumbent trikes include Greenspeed, WhizWheelz and Inspired Cycle Engineering. Rickshaw Most cycle rickshaws, used for carrying passengers for hire, are tricycles with one steering wheel in the front and two wheels in the back supporting a seating area for one or two passengers. Cycle rickshaws often have a parasol or canopy to protect the passengers from sun and rain. These vehicles are widely used in South Asia and Southeast Asia, where rickshaw driving provides essential employment for recent immigrants from rural areas, generally impoverished men. In the 1990s and first decade of the 21st century, rickshaws became increasingly popular in big cities in Britain, Europe and the United States, where they provide urban transportation, novelty rides, and serve as advertising media. Spidertrike is a recumbent cycle rickshaw that is used in central London and operated by Eco Chariots. The trike pictured is called the SUV (Sensible Utility Vehicle) and is produced by the company Organic Engines, which operates in Florida in the United States. It is a front wheel drive tricycle, articulated behind the driver seat, and has hydraulic double disc brakes and internal hub gears. The passenger is protected from rain and sun with a canopy. Freight Urban delivery trikes are designed and constructed for transporting large loads. These trikes include a cargo area consisting of a steel tube carrier, an open or enclosed box, a flat platform, or a large, heavy-duty wire basket. 
These are usually mounted over one or both wheels, low behind the front wheel, or between parallel wheels at either the front or rear of the vehicle, to keep the center of gravity low. The frame and drivetrain must be constructed to handle loads several times that of an ordinary bicycle; as such, extra low gears may be added. Other specific design considerations include operator visibility and load suspension. Many, but not all, cycles used for the purpose of vending goods such as ice cream cart trikes or hot dog vending trikes are cargo bicycles. Many freight trikes are of the tadpole configuration, with the cargo box (platform, etc.) mounted between the front wheels. India and China are significant strongholds of the rear-loading "delta" carrier trike. Freight trikes are also designed for indoor use in large warehouses or industrial plants. The advantage of using freight trikes rather than a motor vehicle is that there is no exhaust, which means that the trike can be used inside warehouses. While another option is electric golf cart-style vehicles, freight trikes are human-powered, so they do not have the maintenance required to keep batteries on golf carts charged up. Common uses include: Delivery services in dense urban environments Food vending in high foot traffic areas (including specialist ice cream bikes) Transporting trade tools, including around large installations such as power stations and CERN Airport cargo handling Recycling collections Warehouse inventory transportation Mail Food collection Child transport (in Amsterdam, freight trikes are used primarily to carry children) Children's A tricycle is a typical toy for children between the ages of eighteen months and five years before a balance bike. Compared to adult models, children's trikes are simpler, without brakes or gears, and often with crude front-drive. Child trikes can be unstable, particularly if the wheelbase or track are insufficient. Some trikes have a push bar so adults can control the trike. Child trikes have frames made of metal, plastic, or wood. Children's trikes can have pedals directly driving the front wheels, allowing braking with the pedals, or they can use chain drive the rear wheels, often without a differential, so one rear wheel spins free. Children's trikes do not always have pneumatic tires, having instead wheels of solid rubber or hollow plastic. While this may add to the weight of the tricycle and reduces the shock-absorbing qualities, it eliminates the possibility of punctures. Pull brakes are rarely fitted to front-drive trikes, but the child can slow the trike down by resisting the forward motion through the pedals. Drift Drift trikes are a variety of tricycle with slick rear wheels, enabling them to drift, being countersteered round corners. They are commonly used for gravity-powered descents of paved roads with steep gradients. Hand and foot With hand and foot trikes, the rider makes a pair of front wheels change directions by shifting the center of weight and moves forward by rotating the rear wheel. The hand and foot trike can be also converted into a manual tricycle designed to be driven with both hands and both feet. There are also new hybrids between a handcycle, a recumbent bike and a tricycle, these bikes make it even possible to cycle with legs despite a spinal cord injury. Tilting Tricycles have been constructed that tilt in the direction of a turn, as a bicycle does, to avoid rolling over without a wide axle track. 
Examples have included upright, recumbent, delta, and tadpole configurations. Conversion sets Tricycle conversion sets or kits convert a bicycle to an upright tricycle. A tricycle kit removes the front wheel and mounts two wheels under the handlebars for a quick and easy conversion. The advantages of a trike conversion set include lower cost compared with new hand built tricycles and the freedom to choose almost any donor bicycle frame. Tricycle conversion sets tend to be heavier than a high quality, hand built, sports, touring or racing tricycle. Conversion sets can give the would-be serious tricyclist a taste of triking before making the final decision to purchase a complete tricycle. Conversion sets can also be supplied ready to be brazed onto a lightweight, steel bicycle frame to form a complete trike. Some trike conversion sets can also be used with recumbent bicycles to form recumbent trikes. Operation Adults may find upright tricycles difficult to ride because of familiarity with the counter-steering required to balance a bicycle. The variation in the camber of the road is the principal difficulty to be overcome once basic tricycle handling is mastered. Recumbent trikes are less affected by camber and, depending on track width and riding position, capable of very fast cornering. Some trikes are tilting three-wheelers, which lean into corners much as bicycles do. In the case of delta tricycles, the drive is often to just one of the rear wheels, though in some cases both wheels are driven through a differential. A double freewheel, preferably using no-backlash roller clutches, is considered superior. Trikes with a differential often use an internally geared hub as a gearbox in a 'mid drive' system. A jackshaft drive permits either single or two-wheel drive. Tadpoles generally use a bicycle's rear wheel drive and for that reason are usually lighter, cheaper and easier to replace and repair. Braking Some trikes use a geometry (also called center point steering) with the kingpin axis intersecting the ground directly ahead of the tire contact point, producing a normal amount of trail. This arrangement, elsewhere called "zero scrub radius", is used to mitigate the effects of one-sided braking on steering. While zero scrub can reduce steering feel and increase wandering, it can also protect novices from spinning out and/or flipping. Tadpole trikes tend also to use Ackermann steering geometry, perhaps with both front brakes operated by the stronger hand. While the KMX Kart stunt trike with this setup allows the rear brake to be operated separately, letting the rider do "bootlegger turns", the standard setup for most trikes has the front brake for each side operated by each hand. The center-of-mass of most tadpole trikes is close to the front wheels, making the rear brake less useful. The rear brake may instead be connected to a latching brake lever for use as a parking brake when stopped on a hill. Recumbent trikes often brake one wheel with each hand, allowing the rider to brake one side alone to pull the trike in that direction. Records On 1 July 2005, Sudhakar Yadav from India rode a tricycle in Hyderabad with a height of , a wheel diameter of and length of . This tricycle is exhibited at the Sudha Cars Museum and has been verified as the largest tricycle by the Guinness World Records. 
See also List of land vehicles types by number of wheels List of motorized trikes Puppet Bike, a Chicago street performer group that uses a tricycle as a stage References External links Tricycles Cycle types Human-powered vehicles Physical activity and dexterity toys Traditional toys Vehicle technology Wheeled vehicles
Tricycle
[ "Engineering" ]
3,721
[ "Vehicle technology", "Mechanical engineering by discipline" ]
172,763
https://en.wikipedia.org/wiki/Nroff
nroff (short for "new roff") is a text-formatting program on Unix and Unix-like operating systems. It produces output suitable for simple fixed-width printers and terminal windows. It is an integral part of the Unix help system, being used to format man pages for display. nroff and the related troff were both developed from the original roff. While nroff was intended to produce output on terminals and line printers, troff was intended to produce output on typesetting systems. Both used the same underlying markup and a single source file could normally be used by nroff or troff without change. History nroff was written by Joe Ossanna for Version 2 Unix, in Assembly language and then ported to C. It was a descendant of the RUNOFF program from CTSS, the first computerized text-formatting program, and is a predecessor of the Unix troff document processing system. There is also a free software version of nroff in the groff package. Variants The Minix operating system, among others, uses a clone of nroff called cawf by Vic Abell, based on awf, the Amazingly Workable Formatter designed in awk by Henry Spencer. These are not full replacements for the nroff/troff suite of tools, but are sufficient for display and printing of basic documents and manual pages. In addition, a simplified version of nroff is available in Ratfor source code form as an example in the book Software Tools by Brian Kernighan and P. J. Plauger. See also troff groff TeX LaTeX man page References External links source code for Henry Spencer's AWF troff/nroff quick reference nroff source code in Illumos. Explanation by Bryan Cantrill Troff Assembly language software Markup languages Unix text processing utilities
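A minimal sketch of the kind of line-oriented markup nroff consumes may make its man-page role concrete. The example below uses the standard man macro package (.TH, .SH, .B, .I, .RI); the command name, descriptive text, and file name are invented purely for illustration and are not taken from any real manual page.
.TH HELLO 1 "January 2024" "hello 1.0" "User Commands"
.SH NAME
hello \- print a friendly greeting
.SH SYNOPSIS
.B hello
.RI [ name ]
.SH DESCRIPTION
.B hello
writes a short greeting for
.I name
(or for "world" when no name is given) to standard output.
Saved as hello.1, such a file would typically be rendered for a terminal with a command along the lines of nroff -man hello.1 | less, which is essentially what the man command does behind the scenes.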
Nroff
[ "Mathematics" ]
376
[ "Troff", "Mathematical markup languages" ]
172,787
https://en.wikipedia.org/wiki/Joe%20Ossanna
Joseph Frank Ossanna, Jr. (December 10, 1928 – November 28, 1977) was an American electrical engineer and computer programmer who worked as a member of the technical staff at the Bell Telephone Laboratories in Murray Hill, New Jersey. He became actively engaged in the software design of Multics (Multiplexed Information and Computing Service), a general-purpose operating system used at Bell. Education and career Ossanna received his Bachelor of Engineering (B.S.E.E.) from Wayne State University in 1952. At Bell Telephone Labs, Ossanna was concerned with low-noise amplifier design, feedback amplifier design, satellite look-angle prediction, mobile radio fading theory, and statistical data processing. He was also concerned with the operation of the Murray Hill Computation Center and was actively engaged in the software design of Multics. After learning how to program the PDP-7 computer, Ken Thompson, Dennis Ritchie, Joe Ossanna, and Rudd Canaday began to program the operating system that was designed earlier by Thompson (Unics, later named Unix). After writing the file system and a set of basic utilities, and assembler, a core of the Unix operating system was established. Doug McIlroy later wrote, "Ossanna, with the instincts of a motor pool sergeant, equipped our first lab and attracted the first outside users." When the team got a Graphic Systems CAT phototypesetter for making camera-ready copy of professional articles for publication and patent applications, Ossanna wrote a version of nroff that would drive it. It was dubbed troff, for typesetter roff. So it was that in 1973 he authored the first version of troff for Unix entirely written in PDP-11 assembly language. However, two years later, Ossanna re-wrote the code in the C programming language. He had planned another rewrite which was supposed to improve its usability but this work was taken over by Brian Kernighan. Ossanna was a member of the Association for Computing Machinery, Sigma Xi, and Tau Beta Pi. Later life and death He died as a consequence of heart disease. Selected publications Bogert, Bruce P.; Ossanna, Joseph F., "The heuristics of cepstrum analysis of a stationary complex echoed Gaussian signal in stationary Gaussian noise", IEEE Transactions on Information Theory, v.12, issue 3, July 19, 1966, pp. 373 – 380 Ossanna, Joseph F.; Kernighan, Brian W., Troff user's manual, UNIX Vol. II, W. B. Saunders Company, March 1990 Kernighan, B W; Lesk, M E; Ossanna, J F, Jr., Document preparation, in UNIX:3E system readings and applications. Volume I: UNIX:3E time-sharing system, Prentice-Hall, Inc., December 1986 Ossanna, Joseph F., "The current state of minicomputer software", AFIPS '72 (Spring): Proceedings of the May 16–18, 1972, spring joint computer conference, Publisher: ACM, May 1972 Ossanna, Joseph F., "Identifying terminals in terminal-oriented systems", Proceedings of the ACM second symposium on Problems in the optimizations of data communications systems, Publisher: ACM, January 1971 Ossanna, J. F.; Saltzer, J. H., "Technical and human engineering problems in connecting terminals to a time-sharing system", AFIPS '70 (Fall): Proceedings of the November 17–19, 1970, fall joint computer conference, Publisher: ACM, November 1970 Ossanna, J. F.; Mikus, L. E.; Dunten, S. 
D., "Communications and input/output switching in a multiplex computing system", AFIPS '65 (Fall, part I): Proceedings of the November 30—December 1, 1965, fall joint computer conference, part I, Publisher: ACM, November 1965 References 1928 births 1977 deaths Unix people Troff Wayne State University alumni Multics people
Joe Ossanna
[ "Mathematics" ]
850
[ "Troff", "Mathematical markup languages" ]
172,796
https://en.wikipedia.org/wiki/TYPSET%20and%20RUNOFF
TYPSET is an early document editor that was used with the 1964-released RUNOFF program, one of the earliest text formatting programs to see significant use. Of two earlier print/formatting programs, DITTO and TJ-2, only the latter had, and introduced, text justification; RUNOFF also added pagination. The name RUNOFF, and similar names, led to other formatting program implementations. By 1982, Runoff (a name not possible before lowercase letters were introduced to filenames) largely became associated with Digital Equipment Corporation and Unix computers. DEC used the terms VAX DSR and DSR to refer to VAX DIGITAL Standard Runoff. History CTSS The original RUNOFF type-setting program for CTSS was written by Jerome H. Saltzer circa 1964. Bob Morris and Doug McIlroy translated that from MAD to BCPL. Morris and McIlroy then moved the BCPL version to Multics when the IBM 7094 on which CTSS ran was being shut down. Multics Documentation for the Multics version of RUNOFF described it as "types out text segments in manuscript form." Other versions and implementations A later version of runoff for Multics was written in PL/I by Dennis Capps, in 1974. This runoff code was the ancestor of roff that was written for the fledgling Unix in assembly language by Ken Thompson. Other versions of Runoff were developed for various computer systems including Digital Equipment Corporation's PDP-11 minicomputer systems running RT-11, RSTS/E, or RSX, on Digital's PDP-10, and for OpenVMS on VAX minicomputers, as well as UNIVAC Series 90 mainframes using the EDT text editor under the VS/9 operating system. These different releases of Runoff typically had little in common except the convention of indicating a command to Runoff by beginning the line with a period. The origin of IBM's SCRIPT software began in 1968 when IBM contracted Stuart Madnick of MIT to write a simple document preparation tool for CP/67, which he modelled on MIT's CTSS RUNOFF. Background RUNOFF was written in 1964 for the CTSS operating system by Jerome H. Saltzer in MAD and FAP. It actually consisted of a pair of programs, TYPSET (which was basically a document editor), and RUNOFF (the output processor). RUNOFF had support for pagination and headers, as well as text justification (TJ-2 appears to have been the earliest text justification system, but it did not have the other capabilities). RUNOFF is a direct predecessor of the runoff document formatting program of Multics, which in turn was the ancestor of the roff and nroff document formatting programs of Unix, and their descendants. It was also the ancestor of FORMAT for the IBM System/360, and of course indirectly of every computerized word processing system. Likewise, RUNOFF for CTSS was the predecessor of the various RUNOFFs for DEC's operating systems, via the RUNOFF developed by the University of California, Berkeley's Project Genie for the SDS 940 system. The name is alleged to have come from the phrase at the time, "I'll run off a copy." TYPSET contains features inspired by a variety of other programs including Colossal Typewriter and Expensive Typewriter. Example Input:
When you're ready to order, call us at our toll free number:
.BR
.CENTER
1-800-555-xxxx
.BR
Your order will be processed within two working days and shipped
Output:
When you're ready to order, call us at our toll free number:
                         1-800-555-xxxx
Your order will be processed within two working days and shipped
See also SCRIPT (markup) TECO TJ-2 Further reading References Word processors Troff History of software Digital typography
TYPSET and RUNOFF
[ "Mathematics", "Technology" ]
781
[ "Troff", "Mathematical markup languages", "History of software", "History of computing" ]
172,797
https://en.wikipedia.org/wiki/Roff%20%28software%29
roff is a typesetting markup language. As the first Unix text-formatting computer program, it is a predecessor of the nroff and troff document processing systems. Roff was a Unix version of the runoff text-formatting program from Multics, which was a descendant of RUNOFF for CTSS (the first computerized text-formatting application). History CTSS roff is a descendant of the RUNOFF program by Jerry Saltzer, which ran on CTSS. Douglas McIlroy and Robert Morris wrote runoff for Multics in BCPL based on Saltzer's program written in MAD assembler. Their program in turn was "transliterated" by Ken Thompson into PDP-7 assembler language for his early Unix operating system, circa 1970. When the first PDP-11 was acquired for Unix in late 1970, the justification cited to management for the funding required was that it was to be used as a word processing system, and so roff was quickly transliterated again, into PDP-11 assembly, in 1971. roff printed the man pages for Versions 1 through 3 of Unix, and when the Bell Labs patent department began using it, it became the first Unix application with an outside client. Dennis Ritchie noted that the ability to rapidly modify roff (because it was locally written software) to provide special features was an important factor in leading to the adoption of Unix by the patent department to fill its word processing needs. This in turn gave UNIX enough credibility inside Bell Labs to secure the funding to purchase one of the first PDP-11/45s produced. See also nroff troff groff References Sources D. M. Ritchie, The Evolution of the UNIX Time-sharing System (AT&T Bell Laboratories Technical Journal, Vol. 63, No. 8, October 1984) External links roff - Concepts and history of roff typesetting Typesetting software
Roff (software)
[ "Technology" ]
389
[ "Computing stubs", "Digital typography stubs" ]
172,825
https://en.wikipedia.org/wiki/Condensation%20reaction
In organic chemistry, a condensation reaction is a type of chemical reaction in which two molecules are combined to form a single molecule, usually with the loss of a small molecule such as water. If water is lost, the reaction is also known as a dehydration synthesis. However, other molecules can also be lost, such as ammonia, ethanol, acetic acid and hydrogen sulfide. The addition of the two molecules typically proceeds in a step-wise fashion to the addition product, usually in equilibrium, and with loss of a water molecule (hence the name condensation). The reaction may otherwise involve the functional groups of the molecule, and is a versatile class of reactions that can occur in acidic or basic conditions or in the presence of a catalyst. This class of reactions is a vital part of life as it is essential to the formation of peptide bonds between amino acids and to the biosynthesis of fatty acids. Many variations of condensation reactions exist. Common examples include the aldol condensation and the Knoevenagel condensation, which both form water as a by-product, as well as the Claisen condensation and the Dieckmann condensation (intramolecular Claisen condensation), which form alcohols as by-products. Synthesis of prebiotic molecules Condensation reactions likely played major roles in the synthesis of the first biotic molecules including early peptides and nucleic acids. In fact, condensation reactions would be required at multiple steps in RNA oligomerization: the condensation of nucleobases and sugars, nucleoside phosphorylation, and nucleotide polymerization. See also Anabolism Hydrolysis, the opposite of a condensation reaction Condensed tannins References
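As a generic worked illustration of the bookkeeping involved (the R and R' groups here are placeholders rather than a specific example drawn from the article), the peptide-bond-forming condensation of two amino acids can be written as
$$\mathrm{H_2N{-}CHR{-}COOH} + \mathrm{H_2N{-}CHR'{-}COOH} \longrightarrow \mathrm{H_2N{-}CHR{-}CO{-}NH{-}CHR'{-}COOH} + \mathrm{H_2O}$$
The carboxyl group of one amino acid and the amino group of the other combine into the amide (peptide) bond, with one molecule of water lost per bond formed; hydrolysis, noted above as the opposite of condensation, simply runs this equation in reverse.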
Condensation reaction
[ "Chemistry" ]
370
[ "Condensation reactions", "Organic reactions" ]
172,851
https://en.wikipedia.org/wiki/Radiosonde
A radiosonde is a battery-powered telemetry instrument carried into the atmosphere usually by a weather balloon that measures various atmospheric parameters and transmits them by radio to a ground receiver. Modern radiosondes measure or calculate the following variables: altitude, pressure, temperature, relative humidity, wind (both wind speed and wind direction), cosmic ray readings at high altitude and geographical position (latitude/longitude). Radiosondes measuring ozone concentration are known as ozonesondes. Radiosondes may operate at a radio frequency of 403 MHz or 1680 MHz. A radiosonde whose position is tracked as it ascends to give wind speed and direction information is called a rawinsonde ("radar wind-sonde"). Most radiosondes have radar reflectors and are technically rawinsondes. A radiosonde that is dropped from an airplane and falls, rather than being carried by a balloon, is called a dropsonde. Radiosondes are an essential source of meteorological data, and hundreds are launched all over the world daily. History The first flights of aerological instruments were done in the second half of the 19th century with kites and meteorographs, recording devices that measured pressure and temperature and were recovered after the experiment. This proved difficult because the kites were linked to the ground and were very difficult to manoeuvre in gusty conditions. Furthermore, the sounding was limited to low altitudes because of the link to the ground. Gustave Hermite and Georges Besançon, from France, were the first in 1892 to use a balloon to fly the meteorograph. In 1898, Léon Teisserenc de Bort organized at the Observatoire de Météorologie Dynamique de Trappes the first regular daily use of these balloons. Data from these launches showed that the temperature lowered with height up to a certain altitude, which varied with the season, and then stabilized above this altitude. De Bort's discovery of the tropopause and stratosphere was announced in 1902 at the French Academy of Sciences. Other researchers, like Richard Aßmann and William Henry Dines, were working at the same time with similar instruments. In 1924, Colonel William Blaire in the U.S. Signal Corps did the first primitive experiments with weather measurements from balloons, making use of the temperature dependence of radio circuits. The first true radiosonde that sent precise encoded telemetry from weather sensors was invented in France by Robert Bureau. Bureau coined the name "radiosonde" and flew the first instrument on January 7, 1929. A year later, on January 30, 1930, Pavel Molchanov flew an independently developed radiosonde. Molchanov's design became a popular standard because of its simplicity and because it converted sensor readings to Morse code, making it easy to use without special equipment or training. Working with a modified Molchanov sonde, Sergey Vernov was the first to use radiosondes to perform cosmic ray readings at high altitude. On April 1, 1935, he took measurements up to using a pair of Geiger counters in an anti-coincidence circuit to avoid counting secondary ray showers. This became an important technique in the field, and Vernov flew his radiosondes on land and sea over the next few years, measuring the radiation's latitude dependence caused by the Earth's magnetic field. In 1936, the U.S. Navy assigned the U.S. Bureau of Standards (NBS) to develop an official radiosonde for the Navy to use. 
The NBS gave the project to Harry Diamond, who had previously worked on radio navigation and invented a blind landing system for airplanes. The organization led by Diamond eventually (in 1992) became a part of the U.S. Army Research Laboratory. In 1937, Diamond, along with his associates Francis Dunmore and Wilbur Hinmann, Jr., created a radiosonde that employed audio-frequency subcarrier modulation with the help of a resistance-capacity relaxation oscillator. In addition, this NBS radiosonde was capable of measuring temperature and humidity at higher altitudes than conventional radiosondes at the time due to the use of electric sensors. In 1938, Diamond developed the first ground receiver for the radiosonde, which prompted the first service use of the NBS radiosondes in the Navy. Then in 1939, Diamond and his colleagues developed a ground-based radiosonde called the "remote weather station," which allowed them to automatically collect weather data in remote and inhospitable locations. By 1940, the NBS radiosonde system included a pressure drive, which measured temperature and humidity as functions of pressure. It also gathered data on cloud thickness and light intensity in the atmosphere. Due to this and other improvements in cost (about $25), weight (> 1 kilogram), and accuracy, hundreds of thousands of NBS-style radiosondes were produced nationwide for research purposes, and the apparatus was officially adopted by the U.S. Weather Bureau. Diamond was given the Washington Academy of Sciences Engineering Award in 1940 and the IRE Fellow Award (which was later renamed the Harry Diamond Memorial Award) in 1943 for his contributions to radio-meteorology. The expansion of economically important government weather forecasting services during the 1930s and their increasing need for data motivated many nations to begin regular radiosonde observation programs. In 1985, as part of the Soviet Union's Vega program, the two Venus probes, Vega 1 and Vega 2, each dropped a radiosonde into the atmosphere of Venus. The sondes were tracked for two days. Although modern remote sensing by satellites, aircraft and ground sensors is an increasing source of atmospheric data, none of these systems can match the vertical resolution ( or less) and altitude coverage () of radiosonde observations, so they remain essential to modern meteorology. Although hundreds of radiosondes are launched worldwide each day year-round, fatalities attributed to radiosondes are rare. The first known example was the electrocution of a lineman in the United States who was attempting to free a radiosonde from high-tension power lines in 1943. In 1970, an Antonov 24 operating Aeroflot Flight 1661 suffered a loss of control after striking a radiosonde in flight, resulting in the death of all 45 people on board. Operation A rubber or latex balloon filled with either helium or hydrogen lifts the device up through the atmosphere. The maximum altitude to which the balloon ascends is determined by the diameter and thickness of the balloon. Balloon sizes can range from . As the balloon ascends through the atmosphere, the pressure decreases, causing the balloon to expand. Eventually, the balloon will expand to the extent that its skin will break, terminating the ascent. An balloon will burst at about . After bursting, a small parachute on the radiosonde's support line may slow its descent to Earth, while some rely on the aerodynamic drag of the shredded remains of the balloon, and the very light weight of the package itself. 
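A rough back-of-the-envelope figure illustrates why the balloon expands so dramatically. Treating the lifting gas as ideal and, for simplicity, ignoring its cooling with height (both are simplifying assumptions, and the pressures used are representative values rather than measurements), Boyle's law gives
$$\frac{V_{\text{burst}}}{V_{\text{launch}}} \approx \frac{P_{\text{launch}}}{P_{\text{burst}}} \approx \frac{1000\ \mathrm{hPa}}{10\ \mathrm{hPa}} = 100,$$
so by the time the balloon reaches the roughly 10 hPa level its volume has grown by about two orders of magnitude and its diameter by a factor of about $100^{1/3} \approx 4.6$; in practice the expansion is somewhat smaller because the gas also cools as it rises.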
A typical radiosonde flight lasts 60 to 90 minutes. One radiosonde from Clark Air Base, Philippines, reached an altitude of . The modern radiosonde communicates via radio with a computer that stores all the variables in real time. The first radiosondes were observed from the ground with a theodolite, and gave only a wind estimation by the position. With the advent of radar by the Signal Corps it was possible to track a radar target carried by the balloons with the SCR-658 radar. Modern radiosondes can use a variety of mechanisms for determining wind speed and direction, such as a radio direction finder or GPS. The weight of a radiosonde is typically . Sometimes radiosondes are deployed by being dropped from an aircraft instead of being carried aloft by a balloon. Radiosondes deployed in this way are called dropsondes. Routine radiosonde launches Radiosondes weather balloons have conventionally been used as means of measuring atmospheric profiles of humidity, temperature, pressure, wind speed and direction. High-quality, spatially and temporally “continuous” data from upper-air monitoring along with surface observations are critical bases for understanding weather conditions and climate trends and providing weather and climate information for the welfare of societies. Reliable and timely information underpin society’s preparedness to extreme weather conditions and to changing climate patterns. Worldwide, there are about 1,300 radiosonde launch sites. Most countries share data with the rest of the world through international agreements. Nearly all routine radiosonde launches occur one hour before the official observation times of 0000 UTC and 1200 UTC to center the observation times during the roughly two-hour ascent. Radiosonde observations are important for weather forecasting, severe weather watches and warnings, and atmospheric research. The United States National Weather Service launches radiosondes twice daily from 92 stations, 69 in the conterminous United States, 13 in Alaska, nine in the Pacific, and one in Puerto Rico. It also supports the operation of 10 radiosonde sites in the Caribbean. A list of U.S. operated land based launch sites can be found in Appendix C, U.S. Land-based Rawinsonde Stations of the Federal Meteorological Handbook #3, titled Rawinsonde and Pibal Observations, dated May 1997. The UK launches Vaisala RS41 radiosondes four times daily (an hour before 00, 06, 12, and 18 UTC) from 6 launch sites (south to north): Camborne, (lat,lon)=(50.218, -5.327), SW tip of England; Herstmonceux (50.89, 0.318), near SE coast; Watnall, (53.005, -1.25), central England; Castor Bay, (54.50, -6.34), near the SE corner of Lough Neagh in Northern Ireland; Albemarle, (55.02, -1.88), NE England; and Lerwick, (60.139, -1.183), Shetland, Scotland. Uses of upper air observations Raw upper air data is routinely processed by supercomputers running numerical models. Forecasters often view the data in a graphical format, plotted on thermodynamic diagrams such as Skew-T log-P diagrams, Tephigrams, and or Stüve diagrams, all useful for the interpretation of the atmosphere's vertical thermodynamics profile of temperature and moisture as well as kinematics of vertical wind profile. Radiosonde data is a crucially important component of numerical weather prediction. Because a sonde may drift several hundred kilometers during the 90- to 120-minute flight, there may be concern that this could introduce problems into the model initialization. 
However, this appears not to be so except perhaps locally in jet stream regions in the stratosphere. This issue may in future be solved by weather drones, which have precise control over their location and can compensate for drift. Lamentably, in less developed parts of the globe such as Africa, which has high vulnerability to impacts of extreme weather events and climate change, there is paucity of surface- and upper-air observations. The alarming state of the issue was highlighted in 2020 by the World Meteorological Organisation which stated that "the situation in Africa shows a dramatic decrease of almost 50% from 2015 to 2020 in the number of radiosonde flights, the most important type of surface-based observations. Reporting now has poorer geographical coverage". Over the last two decades, some 82% of the countries in Africa have experienced severe (57%) and moderate (25%) radiosonde data gap. This dire situation has prompted call for urgent need to fill the data gap in Africa and globally. The vast data gap in such a large part the global landmass, home to some of the most vulnerable societies, the aforementioned call has galvanised a global effort to “plug the data gap” in the decade ahead and halt a further deterioration in the observation networks. International regulation According to the International Telecommunication Union, a meteorological aids service (also: meteorological aids radiocommunication service) is – according to Article 1.50 of the ITU Radio Regulations (RR) – defined as "A radiocommunication service used for meteorological, including hydrological, observations and exploration. Furthermore, according to article 1.109 of the ITU RR: Frequency allocation The allocation of radio frequencies is provided according to Article 5 of the ITU Radio Regulations (edition 2012). In order to improve harmonisation in spectrum utilisation, the majority of service-allocations stipulated in this document were incorporated in national Tables of Frequency Allocations and Utilisations which is with-in the responsibility of the appropriate national administration. The allocation might be primary, secondary, exclusive, and shared. primary allocation: is indicated by writing in capital letters (see example below) secondary allocation: is indicated by small letters exclusive or shared utilization: is within the responsibility of administrations However, military usage, in bands where there is civil usage, will be in accordance with the ITU Radio Regulations. Example of frequency allocation See also 6AK5 Aerography (meteorology) Atmospheric model Atmospheric thermodynamics CTD (instrument) Global horizontal sounding technique Rocketsonde Totex - a Japanese manufacturer of meteorological balloons Vaisala Vilho Väisälä Water-activated battery Cricketsonde References External links Upper air data for the world - past and present WMO spreadsheet of all Upper Air stations around the world Interpreting radiosonde data Tephigrams and Skew-T log P diagrams. Radiosonde Museum of North America Radiosonde Sounding System at webmet.com NOAA National Weather Service Radiosonde Factsheet Sergei Nikolaevich Vernov SCR-658 pics early pics Photo - Early Type Radiosonde Photo - Radiosonde, Transistor Type Telecommunications equipment Atmospheric thermodynamics French inventions Measuring instruments Meteorological instrumentation and equipment Russian inventions Science and technology in the Soviet Union Soviet inventions Sonde International Telecommunication Union Atmospheric sounding
Radiosonde
[ "Technology", "Engineering" ]
2,885
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
172,860
https://en.wikipedia.org/wiki/Meta-moderation%20system
Meta-moderation is a second level of comment moderation. A user is invited to rate a moderator's decision by being shown a post that was moderated up or down and marking whether the moderator acted fairly. This is used to improve the quality of moderation. Slashdot and Kuro5hin are two websites with meta-moderation. The GameFAQs message boards used to have it. See also Moderation system References Slashdot Metamoderation FAQ Meatball: MedaModeration Slash(dot) and Burn: Distributed Moderation in a Large Online Conversation Space Cliff Lampe, Paul Resnick Groupware Internet forum terminology Reputation management Content moderation
Meta-moderation system
[ "Technology" ]
139
[ "Computing stubs", "World Wide Web stubs" ]
172,896
https://en.wikipedia.org/wiki/L.%20E.%20J.%20Brouwer
Luitzen Egbertus Jan "Bertus" Brouwer (27 February 1881 – 2 December 1966) was a Dutch mathematician and philosopher who worked in topology, set theory, measure theory and complex analysis. Regarded as one of the greatest mathematicians of the 20th century, he is known as one of the founders of modern topology, particularly for establishing his fixed-point theorem and the topological invariance of dimension. Brouwer also became a major figure in the philosophy of intuitionism, a constructivist school of mathematics which argues that math is a cognitive construct rather than a type of objective truth. This position led to the Brouwer–Hilbert controversy, in which Brouwer sparred with his formalist colleague David Hilbert. Brouwer's ideas were subsequently taken up by his student Arend Heyting and Hilbert's former student Hermann Weyl. In addition to his mathematical work, Brouwer also published the short philosophical tract Life, Art, and Mysticism (1905). Biography Brouwer was born to Dutch Protestant parents. Early in his career, Brouwer proved a number of theorems in the emerging field of topology. The most important were his fixed point theorem, the topological invariance of degree, and the topological invariance of dimension. Among mathematicians generally, the best known is the first one, usually referred to now as the Brouwer fixed point theorem. It is a corollary to the second, concerning the topological invariance of degree, which is the best known among algebraic topologists. The third theorem is perhaps the hardest. Brouwer also proved the simplicial approximation theorem in the foundations of algebraic topology, which justifies the reduction to combinatorial terms, after sufficient subdivision of simplicial complexes, of the treatment of general continuous mappings. In 1912, at age 31, he was elected a member of the Royal Netherlands Academy of Arts and Sciences. He was an Invited Speaker of the ICM in 1908 at Rome and in 1912 at Cambridge, UK. He was elected to the American Philosophical Society in 1943. Brouwer founded intuitionism, a philosophy of mathematics that challenged the then-prevailing formalism of David Hilbert and his collaborators, who included Paul Bernays, Wilhelm Ackermann, and John von Neumann (cf. Kleene (1952), p. 46–59). A variety of constructive mathematics, intuitionism is a philosophy of the foundations of mathematics. It is sometimes (simplistically) characterized by saying that its adherents do not admit the law of excluded middle as a general axiom in mathematical reasoning, although it may be proven as a theorem in some special cases. Brouwer was a member of the Significs Group. It formed part of the early history of semiotics—the study of symbols—around Victoria, Lady Welby in particular. The original meaning of his intuitionism probably cannot be completely disentangled from the intellectual milieu of that group. In 1905, at the age of 24, Brouwer expressed his philosophy of life in a short tract Life, Art and Mysticism, which has been described by the mathematician Martin Davis as "drenched in romantic pessimism" (Davis (2002), p. 94). Arthur Schopenhauer had a formative influence on Brouwer, not least because he insisted that all concepts be fundamentally based on sense intuitions. Brouwer then "embarked on a self-righteous campaign to reconstruct mathematical practice from the ground up so as to satisfy his philosophical convictions"; indeed his thesis advisor refused to accept his Chapter II "as it stands, ... 
all interwoven with some kind of pessimism and mystical attitude to life which is not mathematics, nor has anything to do with the foundations of mathematics" (Davis, p. 94 quoting van Stigt, p. 41). Nevertheless, in 1908: "... Brouwer, in a paper titled 'The untrustworthiness of the principles of logic', challenged the belief that the rules of the classical logic, which have come down to us essentially from Aristotle (384--322 B.C.) have an absolute validity, independent of the subject matter to which they are applied" (Kleene (1952), p. 46). "After completing his dissertation, Brouwer made a conscious decision to temporarily keep his contentious ideas under wraps and to concentrate on demonstrating his mathematical prowess" (Davis (2000), p. 95); by 1910 he had published a number of important papers, in particular the Fixed Point Theorem. Hilbert—the formalist with whom the intuitionist Brouwer would ultimately spend years in conflict—admired the young man and helped him receive a regular academic appointment (1912) at the University of Amsterdam (Davis, p. 96). It was then that "Brouwer felt free to return to his revolutionary project which he was now calling intuitionism " (ibid). He was combative as a young man. According to Mark van Atten, this pugnacity reflected his combination of independence, brilliance, high moral standards and extreme sensitivity to issues of justice. He was involved in a very public and eventually demeaning controversy with Hilbert in the late 1920s over editorial policy at Mathematische Annalen, at the time a leading journal. According to Abraham Fraenkel, Brouwer espoused Germanic Aryanness and Hilbert removed him from the editorial board of Mathematische Annalen after Brouwer objected to contributions from Ostjuden. In later years Brouwer became relatively isolated; the development of intuitionism at its source was taken up by his student Arend Heyting. Dutch mathematician and historian of mathematics Bartel Leendert van der Waerden attended lectures given by Brouwer in later years, and commented: "Even though his most important research contributions were in topology, Brouwer never gave courses in topology, but always on — and only on — the foundations of his intuitionism. It seemed that he was no longer convinced of his results in topology because they were not correct from the point of view of intuitionism, and he judged everything he had done before, his greatest output, false according to his philosophy." About his last years, Davis (2002) remarks: "...he felt more and more isolated, and spent his last years under the spell of 'totally unfounded financial worries and a paranoid fear of bankruptcy, persecution and illness.' He was killed in 1966 at the age of 85, struck by a vehicle while crossing the street in front of his house." (Davis, p. 100 quoting van Stigt. p. 110.) Bibliography In English translation Jean van Heijenoort, 1967 3rd printing 1976 with corrections, A Source Book in Mathematical Logic, 1879-1931. Harvard University Press, Cambridge MA, pbk. The original papers are prefaced with valuable commentary. 1923. L. E. J. Brouwer: "On the significance of the principle of excluded middle in mathematics, especially in function theory." With two Addenda and corrigenda, 334-45. Brouwer gives brief synopsis of his belief that the law of excluded middle cannot be "applied without reservation even in the mathematics of infinite systems" and gives two examples of failures to illustrate his assertion. 1925. A. N. 
Kolmogorov: "On the principle of excluded middle", pp. 414–437. Kolmogorov supports most of Brouwer's results but disputes a few; he discusses the ramifications of intuitionism with respect to "transfinite judgements", e.g. transfinite induction. 1927. L. E. J. Brouwer: "On the domains of definition of functions". Brouwer's intuitionistic treatment of the continuum, with an extended commentary. 1927. David Hilbert: "The foundations of mathematics," 464-80 1927. L. E. J. Brouwer: "Intuitionistic reflections on formalism," 490-92. Brouwer lists four topics on which intuitionism and formalism might "enter into a dialogue." Three of the topics involve the law of excluded middle. 1927. Hermann Weyl: "Comments on Hilbert's second lecture on the foundations of mathematics," 480-484. In 1920 Weyl, Hilbert's prize pupil, sided with Brouwer against Hilbert. But in this address Weyl "while defending Brouwer against some of Hilbert's criticisms...attempts to bring out the significance of Hilbert's approach to the problems of the foundations of mathematics." Ewald, William B., ed., 1996. From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols. Oxford Univ. Press. 1928. "Mathematics, science, and language," 1170-85. 1928. "The structure of the continuum," 1186-96. 1952. "Historical background, principles, and methods of intuitionism," 1197-1207. Brouwer, L. E. J., Collected Works, Vol. I, Amsterdam: North-Holland, 1975. Brouwer, L. E. J., Collected Works, Vol. II, Amsterdam: North-Holland, 1976. Brouwer, L. E. J., "Life, Art, and Mysticism," Notre Dame Journal of Formal Logic, vol. 37 (1996), pp. 389–429. Translated by W. P. van Stigt with an introduction by the translator, pp. 381–87. Davis quotes from this work, "a short book... drenched in romantic pessimism" (p. 94). W. P. van Stigt, 1990, Brouwer's Intuitionism, Amsterdam: North-Holland, 1990 See also Gerrit Mannoury George F. C. Griss Bar induction Constructivist epistemology Notes References Further reading Dirk van Dalen, Mystic, Geometer, and Intuitionist: The Life of L. E. J. Brouwer. Oxford Univ. Press. 1999. Volume 1: The Dawning Revolution. 2005. Volume 2: Hope and Disillusion. 2013. L. E. J. Brouwer: Topologist, Intuitionist, Philosopher. How Mathematics is Rooted in Life. London: Springer (based on previous work). Martin Davis, 2000. The Engines of Logic, W. W. Norton, London, pbk. Cf. Chapter Five: "Hilbert to the Rescue" wherein Davis discusses Brouwer and his relationship with Hilbert and Weyl with brief biographical information of Brouwer. Davis's references include: Stephen Kleene, 1952 with corrections 1971, 10th reprint 1991, Introduction to Metamathematics, North-Holland Publishing Company, Amsterdam Netherlands, . Cf. in particular Chapter III: A Critique of Mathematical Reasoning, §13 "Intuitionism" and §14 "Formalism". Koetsier, Teun, Editor, Mathematics and the Divine: A Historical Study, Amsterdam: Elsevier Science and Technology, 2004, . Pambuccian, Victor, 2022, Brouwer’s Intuitionism: Mathematics in the Being Mode of Existence, Published in: Sriraman, B. (ed) Handbook of the History and Philosophy of Mathematical Practice. Springer, Cham. External links Life, Art and Mysticism written by L.E.J. 
Brouwer Luitzen Egbertus Jan Brouwer entry in Stanford Encyclopedia of Philosophy 1881 births 1966 deaths 20th-century Dutch mathematicians 20th-century Dutch philosophers 20th-century Dutch essayists Dutch logicians Dutch male writers Foreign members of the Royal Society Intuitionism Mathematical analysts Mathematical logicians Members of the Prussian Academy of Sciences Members of the Royal Netherlands Academy of Arts and Sciences Scientists from Rotterdam Philosophers of logic Philosophers of mathematics Road incident deaths in the Netherlands Set theorists Topologists University of Amsterdam alumni Academic staff of the University of Amsterdam Members of the American Philosophical Society
L. E. J. Brouwer
[ "Mathematics" ]
2,492
[ "Mathematical analysis", "Mathematical logicians", "Mathematical logic", "Mathematical analysts", "Topologists", "Topology", "Philosophers of mathematics" ]
172,902
https://en.wikipedia.org/wiki/OpenEXR
OpenEXR is a high-dynamic range, multi-channel raster file format, released as an open standard along with a set of software tools created by Industrial Light & Magic (ILM), under a free software license similar to the BSD license. It is notable for supporting multiple channels of potentially different pixel sizes, including 32-bit unsigned integer, 32-bit and 16-bit floating point values, as well as various compression techniques which include lossless and lossy compression algorithms. It also has arbitrary channels and encodes multiple points of view such as left- and right-camera images. Overview A full technical introduction to the format is available on the OpenEXR website. OpenEXR, or EXR for short, is a deep raster format developed by ILM and broadly used in the computer-graphics industry, both visual effects and animation. OpenEXR's multi-resolution and arbitrary channel format makes it appealing for compositing, as it alleviates several painful elements of the process. Since it can store arbitrary channels—specular, diffuse, alpha, RGB, normals, and various other types—in one file, it takes away the need to store this information in separate files. The multi-channel concept also reduces the necessity to "bake" the aforementioned data into the final image. If a compositor is not happy with the current level of specularity, they can adjust that specific channel. OpenEXR's API makes tools development relatively easy for developers. Since there are almost never two identical production pipelines, custom tools always need to be developed to address problems (e.g., image-manipulation issues). OpenEXR's library allows quick and easy access to the image's attributes such as tiles and channels. The OpenEXR library is developed in C++ and is available in source format as well as compiled format for Microsoft Windows, macOS and Linux. Python bindings for the library are also available for version 2.x. History OpenEXR was created by ILM in 1999 and released to the public in 2003 along with an open source software library. It soon received wide adoption by software used in computer graphics, particularly for film and television production. The format has been updated several times, adding support for tiles, mipmaps, new compression methods, and other features. In 2007, OpenEXR was honored with an Academy Award for Technical Achievement. OpenEXR 2.0 was released in April 2013, extending the format with support for deep image buffers and multiple images embedded in a single file. Version 2.2, released August 2014, added the lossy DWA compression format. Distribution The OpenEXR software distribution includes libraries and tools: Half, a C++ class for manipulating half values as if they were a built-in C++ data type; exrdisplay, a sample application for viewing OpenEXR images on a display at various exposure settings; and IlmImf, a library made by Industrial Light & Magic (ILM) for low-level operations on files in the OpenEXR image format (Imf), available on Linux and on Windows. Color depth OpenEXR has support for color depth using: 16-bit floating-point (half) 32-bit floating-point 32-bit unsigned integer Compression methods There are three general types of lossless compression built into OpenEXR, with two different methods of Zip compressing. For most images without a lot of grain, the two Zip compression methods seem to work best, while the PIZ compression algorithm is better suited to grainy images. The following options are available: None Disables all compression.
Run Length Encoding (RLE) This is a basic form of compression that is comparable to that used by standard Targa files. Zip (per scanline) deflate compression with zlib wrapper applied to individual scanlines (not based on the ZIP file format despite its name). Zip (16 scanline blocks) deflate compression applied to blocks of 16 scanlines at a time. This tends to be the most effective style of compression to use with rendered images that do not have film grain applied. PIZ (wavelet compression) This lossless method uses a new combined wavelet / Huffman compression. This form of compression is quite effective when dealing with grainy images, and will often surpass any of the other options under grainy conditions. PXR24 (24-bit data conversion then deflate compression) This form of compression from Pixar Animation Studios converts 32-bit floats to 24 bits then uses deflate compression. It is lossless for half and 32-bit integer data and slightly lossy for 32-bit float data. B44 This form of compression is lossy for half data and stores 32-bit data uncompressed. It maintains a fixed compression ratio of either 2.28:1 or 4.57:1 and is designed for realtime playback. B44 compresses uniformly regardless of image content. B44A An extension to B44 in which areas of flat color, such as alpha channels, are further compressed. DWAA JPEG-like lossy compression format contributed by DreamWorks Animation. Compresses 32 scanlines together. DWAB Same as DWAA, but compresses blocks of 256 scanlines. Credits From OpenEXR.org's Technical Introduction: The ILM OpenEXR file format was designed and implemented by Florian Kainz, Wojciech Jarosz, and Rod Bogart. The PIZ compression scheme is based on an algorithm by Christian Rouet. Josh Pines helped extend the PIZ algorithm for 16-bit and found optimizations for the float-to-half conversions. Drew Hess packaged and adapted ILM's internal source code for public release and maintains the OpenEXR software distribution. The PXR24 compression method is based on an algorithm written by Loren Carpenter at Pixar Animation Studios. See also High dynamic range References External links exrtools incl. exrtoppm exe Free graphics software High dynamic range High dynamic range file formats Lucasfilm Open formats Raster graphics file formats Software using the BSD license
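To illustrate the kind of programmatic access described in the article above, the following sketch reads one channel from an EXR file using the third-party Python bindings mentioned earlier. The file name is a placeholder, and the exact module and call names may differ between binding versions, so treat this as a sketch rather than authoritative API documentation.

```python
# Minimal sketch: reading one channel of an OpenEXR file via the Python bindings.
# Assumes the third-party "OpenEXR" and "Imath" modules are installed; the file
# name "render.exr" is a placeholder (an assumption, not taken from the article).
import array

import Imath
import OpenEXR

exr = OpenEXR.InputFile("render.exr")        # open an existing EXR file
header = exr.header()                        # header attributes (channels, windows, ...)
dw = header["dataWindow"]                    # box describing the pixel extents
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# Request the red channel as 32-bit floats; on disk it may be stored as 16-bit half.
pixel_type = Imath.PixelType(Imath.PixelType.FLOAT)
red_bytes = exr.channel("R", pixel_type)     # raw bytes for the "R" channel
red = array.array("f", red_bytes)            # unpack into floats

print(f"{width} x {height} image, R range {min(red):.4f} to {max(red):.4f}")
```

Writing follows the same pattern in reverse through the bindings' output-file class, with the compression method chosen via a header attribute; again, exact names vary by version.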
OpenEXR
[ "Engineering" ]
1,260
[ "Electrical engineering", "High dynamic range" ]
172,911
https://en.wikipedia.org/wiki/Nuclear%20weapon%20design
Nuclear weapon design comprises the physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three existing basic design types: Pure fission weapons are the simplest and least technically demanding; they were the first nuclear weapons built and are so far the only type ever used in warfare, by the United States on Japan in World War II. Boosted fission weapons increase yield beyond that of the implosion design, by using small quantities of fusion fuel to enhance the fission chain reaction. Boosting can more than double the weapon's fission energy yield. Staged thermonuclear weapons are arrangements of two or more "stages", most usually two. The first stage is normally a boosted fission weapon as above (except for the earliest thermonuclear weapons, which used a pure fission weapon instead). Its detonation causes it to shine intensely with X-rays, which illuminate and implode the second stage filled with a large quantity of fusion fuel. This sets in motion a sequence of events which results in a thermonuclear, or fusion, burn. This process affords potential yields up to hundreds of times those of fission weapons. Pure fission weapons have been the first type to be built by new nuclear powers. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost-effective option, once the necessary technical base and industrial infrastructure are built. Most known innovations in nuclear weapon design originated in the United States, though some were later developed independently by other states. In early news accounts, pure fission weapons were called atomic bombs or A-bombs and weapons involving fusion were called hydrogen bombs or H-bombs. Practitioners of nuclear policy, however, favor the terms nuclear and thermonuclear, respectively. Nuclear reactions Nuclear fission separates or splits heavier atoms to form lighter atoms. Nuclear fusion combines lighter atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, a claim made in a French patent in May 1939. In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations. Fission When a free neutron hits the nucleus of a fissile atom like uranium-235 (235U), the uranium nucleus splits into two smaller nuclei called fission fragments, plus more neutrons (for 235U three about as often as two; an average of just under 2.5 per fission). The fission chain reaction in a supercritical mass of fuel can be self-sustaining because it produces enough surplus neutrons to offset losses of neutrons escaping the supercritical assembly. Most of these have the speed (kinetic energy) required to cause new fissions in neighboring uranium nuclei. The uranium-235 nucleus can split in many ways, provided the atomic numbers add up to 92 and the mass numbers add up to 236 (uranium-235 plus the neutron that caused the split). The following equation shows one possible split, namely into strontium-95 (95Sr), xenon-139 (139Xe), and two neutrons (n), plus energy: 235U + n → 95Sr + 139Xe + 2 n + energy. The immediate energy release per atom is about 180 million electron volts (MeV); i.e., 74 TJ/kg.
Only 7% of this is gamma radiation and kinetic energy of fission neutrons. The remaining 93% is kinetic energy (or energy of motion) of the charged fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). This initial kinetic energy is 67 TJ/kg, imparting an initial speed of about 12,000 kilometers per second (i.e. 1.2 cm per nanosecond). The charged fragments' high electric charge causes many inelastic coulomb collisions with nearby nuclei, and these fragments remain trapped inside the bomb's fissile pit and tamper until their kinetic energy is converted into heat. Given the speed of the fragments and the mean free path between nuclei in the compressed fuel assembly (for the implosion design), this takes about a millionth of a second (a microsecond), by which time the core and tamper of the bomb have expanded to a ball of plasma several meters in diameter with a temperature of tens of millions of degrees Celsius. This is hot enough to emit black-body radiation in the X-ray spectrum. These X-rays are absorbed by the surrounding air, producing the fireball and blast of a nuclear explosion. Most fission products have too many neutrons to be stable so they are radioactive by beta decay, converting neutrons into protons by throwing off beta particles (electrons), neutrinos and gamma rays. Their half-lives range from milliseconds to about 200,000 years. Many decay into isotopes that are themselves radioactive, so from 1 to 6 (average 3) decays may be required to reach stability. In reactors, the radioactive products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global. Meanwhile, inside the exploding bomb, the free neutrons released by fission carry away about 3% of the initial fission energy. Neutron kinetic energy adds to the blast energy of a bomb, but not as effectively as the energy from charged fragments, since neutrons do not give up their kinetic energy as quickly in collisions with charged nuclei or electrons. The dominant contribution of fission neutrons to the bomb's power is the initiation of subsequent fissions. Over half of the neutrons escape the bomb core, but the rest strike 235U nuclei causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one atom, the number of fissions can theoretically double a hundred times in a microsecond, which could consume all uranium or plutonium up to hundreds of tons by the hundredth link in the chain. Typically in a modern weapon, the weapon's pit contains of plutonium and at detonation produces approximately yield, representing the fissioning of approximately of plutonium. Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: 235U, also known as highly enriched uranium (HEU), "oralloy" meaning "Oak Ridge alloy", or "25" (a combination of the last digit of the atomic number of uranium-235, which is 92, and the last digit of its mass number, which is 235); and 239Pu, also known as plutonium-239, or "49" (from "94" and "239"). Uranium's most common isotope, 238U, is fissionable but not fissile, meaning that it cannot sustain a chain reaction because its daughter fission neutrons are not (on average) energetic enough to cause follow-on 238U fissions. However, the neutrons released by fusion of the heavy hydrogen isotopes deuterium and tritium will fission 238U. 
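As a rough numerical cross-check of the figures quoted above (about 74 TJ/kg released in total, 67 TJ/kg carried by the charged fragments, fragment speeds near 12,000 km/s, and "hundreds of tons" of fuel consumed after a hundred doublings of the chain reaction), the following sketch reproduces them from standard physical constants. It is back-of-the-envelope arithmetic only, not a weapons model.

```python
# Back-of-the-envelope check of the fission energetics quoted in the text,
# using only standard physical constants and the article's rounded inputs.
MEV_TO_J = 1.602e-13          # joules per MeV
AVOGADRO = 6.022e23           # atoms per mole
U235_MOLAR_MASS = 0.235       # kg per mole

energy_per_fission = 180 * MEV_TO_J                     # ~180 MeV per fission
atoms_per_kg = AVOGADRO / U235_MOLAR_MASS
energy_per_kg = energy_per_fission * atoms_per_kg
print(f"Total fission energy:  {energy_per_kg / 1e12:.0f} TJ/kg")        # ~74 TJ/kg

fragment_energy_per_kg = 67e12                          # J/kg carried by the fragments
fragment_speed = (2 * fragment_energy_per_kg) ** 0.5    # treating the fuel mass as fragments
print(f"Fragment speed:        {fragment_speed / 1e3:,.0f} km/s")         # ~12,000 km/s

# "Doubling a hundred times": mass of U-235 consumed by 2**100 fissions.
mass_fissioned_kg = 2 ** 100 / atoms_per_kg
print(f"After 100 doublings:   {mass_fissioned_kg / 1e3:,.0f} tonnes")    # hundreds of tons
```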
This 238U fission reaction in the outer jacket of the secondary assembly of a two-stage thermonuclear bomb produces by far the greatest fraction of the bomb's energy yield, as well as most of its radioactive debris. For national powers engaged in a nuclear arms race, this fact of 238U's ability to fast-fission from thermonuclear neutron bombardment is of central importance. The plenitude and cheapness of both bulk dry fusion fuel (lithium deuteride) and 238U (a byproduct of uranium enrichment) permit the economical production of very large nuclear arsenals, in comparison to pure fission weapons requiring the expensive 235U or 239Pu fuels. Fusion Fusion produces neutrons which dissipate energy from the reaction. In weapons, the most important fusion reaction is called the D-T reaction. Using the heat and pressure of fission, hydrogen-2, or deuterium (2D), fuses with hydrogen-3, or tritium (3T), to form helium-4 (4He) plus one neutron (n) and energy: 2D + 3T → 4He + n + 17.6 MeV. The total energy output, 17.6 MeV, is one tenth of that with fission, but the ingredients are only one-fiftieth as massive, so the energy output per unit mass is approximately five times as great. In this fusion reaction, 14 MeV of the 17.6 MeV (80% of the energy released in the reaction) shows up as the kinetic energy of the neutron, which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate X-rays for blast and fire. The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (of either isotope; 14 MeV is high enough to fission both 235U and 238U) or plutonium, the result is fission and the release of 180 MeV of fission energy, multiplying the energy output tenfold. For weapon use, fission is necessary to start fusion, helps to sustain fusion, and captures and multiplies the energy carried by the fusion neutrons. In the case of a neutron bomb (see below), the last-mentioned factor does not apply, since the objective is to facilitate the escape of neutrons, rather than to use them to increase the weapon's raw power. Tritium production An essential nuclear reaction is the one that creates tritium, or hydrogen-3. Tritium is employed in two ways. First, pure tritium gas is produced for placement inside the cores of boosted fission devices in order to increase their energy yields. This is especially so for the fission primaries of thermonuclear weapons. The second way is indirect, and takes advantage of the fact that the neutrons emitted by a supercritical fission "spark plug" in the secondary assembly of a two-stage thermonuclear bomb will produce tritium in situ when these neutrons collide with the lithium nuclei in the bomb's lithium deuteride fuel supply. Elemental gaseous tritium for fission primaries is also made by bombarding lithium-6 (6Li) with neutrons (n), but only in a nuclear reactor. This neutron bombardment will cause the lithium-6 nucleus to split, producing an alpha particle, or helium-4 (4He), plus a triton (3T) and energy: 6Li + n → 4He + 3T + energy. But as was discovered in the first test of this type of device, Castle Bravo, when lithium-7 is present, one also has some amounts of the following two net reactions: 7Li + n → 3T + 4He + n, and 7Li + 2D → 2 4He + n + 15.123 MeV. Most lithium is 7Li, and this gave Castle Bravo a yield 2.5 times larger than expected.
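The "approximately five times" comparison above can likewise be checked with simple arithmetic. The sketch below compares the energy released per unit mass of reactants for the D-T reaction and for fission, and the fraction carried away by the D-T neutron; mass numbers stand in for exact atomic masses, which is adequate at this level of rounding.

```python
# Rough check of the D-T versus fission energy-per-mass comparison in the text.
# Mass numbers are used in place of exact atomic masses (adequate for rounding).
DT_ENERGY_MEV = 17.6          # per D-T fusion
DT_REACTANT_AMU = 2 + 3       # deuteron plus triton
FISSION_ENERGY_MEV = 180      # per U-235 fission
FISSION_REACTANT_AMU = 236    # U-235 plus the absorbed neutron

fusion_per_amu = DT_ENERGY_MEV / DT_REACTANT_AMU
fission_per_amu = FISSION_ENERGY_MEV / FISSION_REACTANT_AMU
print(f"Fusion vs fission energy per unit mass: {fusion_per_amu / fission_per_amu:.1f}x")  # ~4.6x

neutron_energy_mev = 14.1     # carried by the D-T neutron
print(f"Neutron share of D-T energy: {neutron_energy_mev / DT_ENERGY_MEV:.0%}")            # ~80%
```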
The neutrons are supplied by the nuclear reactor in a way similar to production of plutonium 239Pu from 238U feedstock: target rods of the 6Li feedstock are arranged around a uranium-fueled core, and are removed for processing once it has been calculated that most of the lithium nuclei have been transmuted to tritium. Of the four basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three. Pure fission weapons The first task of a nuclear weapon design is to rapidly assemble a supercritical mass of fissile (weapon grade) uranium or plutonium. A supercritical mass is one in which the percentage of fission-produced neutrons captured by other neighboring fissile nuclei is large enough that each fission event, on average, causes more than one follow-on fission event. Neutrons released by the first fission events induce subsequent fission events at an exponentially accelerating rate. Each follow-on fissioning continues a sequence of these reactions that works its way throughout the supercritical mass of fuel nuclei. This process is conceived and described colloquially as the nuclear chain reaction. To start the chain reaction in a supercritical assembly, at least one free neutron must be injected and collide with a fissile fuel nucleus. The neutron joins with the nucleus (technically a fusion event) and destabilizes the nucleus, which explodes into two middleweight nuclear fragments (from the severing of the strong nuclear force holding the mutually-repulsive protons together), plus two or three free neutrons. These race away and collide with neighboring fuel nuclei. This process repeats over and over until the fuel assembly goes sub-critical (from thermal expansion), after which the chain reaction shuts down because the daughter neutrons can no longer find new fuel nuclei to hit before escaping the less-dense fuel mass. Each following fission event in the chain approximately doubles the neutron population (net, after losses due to some neutrons escaping the fuel mass, and others that collide with any non-fuel impurity nuclei present). For the gun assembly method (see below) of supercritical mass formation, the fuel itself can be relied upon to initiate the chain reaction. This is because even the best weapon-grade uranium contains a significant number of 238U nuclei. These are susceptible to spontaneous fission events, which occur randomly (it is a quantum mechanical phenomenon). Because the fissile material in a gun-assembled critical mass is not compressed, the design need only ensure the two sub-critical masses remain close enough to each other long enough that a 238U spontaneous fission will occur while the weapon is in the vicinity of the target. This is not difficult to arrange as it takes but a second or two in a typical-size fuel mass for this to occur. (Still, many such bombs meant for delivery by air (gravity bomb, artillery shell or rocket) use injected neutrons to gain finer control over the exact detonation altitude, important for the destructive effectiveness of airbursts.) This condition of spontaneous fission highlights the necessity to assemble the supercritical mass of fuel very rapidly. The time required to accomplish this is called the weapon's critical insertion time. If spontaneous fission were to occur when the supercritical mass was only partially assembled, the chain reaction would begin prematurely. 
Neutron losses through the void between the two subcritical masses (gun assembly) or the voids between not-fully-compressed fuel nuclei (implosion assembly) would sap the bomb of the number of fission events needed to attain the full design yield. Additionally, heat resulting from the fissions that do occur would work against the continued assembly of the supercritical mass, from thermal expansion of the fuel. This failure is called predetonation. The resulting explosion would be called a "fizzle" by bomb engineers and weapon users. Plutonium's high rate of spontaneous fission makes uranium fuel a necessity for gun-assembled bombs, with their much greater insertion time and much greater mass of fuel required (because of the lack of fuel compression). There is another source of free neutrons that can spoil a fission explosion. All uranium and plutonium nuclei have a decay mode that results in energetic alpha particles. If the fuel mass contains impurity elements of low atomic number (Z), these charged alphas can penetrate the coulomb barrier of these impurity nuclei and undergo a reaction that yields a free neutron. The rate of alpha emission of fissile nuclei is one to two million times that of spontaneous fission, so weapon engineers are careful to use fuel of high purity. Fission weapons used in the vicinity of other nuclear explosions must be protected from the intrusion of free neutrons from outside. Such shielding material will almost always be penetrated, however, if the outside neutron flux is intense enough. When a weapon misfires or fizzles because of the effects of other nuclear detonations, it is called nuclear fratricide. For the implosion-assembled design, once the critical mass is assembled to maximum density, a burst of neutrons must be supplied to start the chain reaction. Early weapons used a modulated neutron generator code named "Urchin" inside the pit containing polonium-210 and beryllium separated by a thin barrier. Implosion of the pit crushes the neutron generator, mixing the two metals, thereby allowing alpha particles from the polonium to interact with beryllium to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better timing of the first fission events in the chain reaction, which optimally should occur at the point of maximum compression/supercriticality. Timing of the neutron injection is a more important parameter than the number of neutrons injected: the first generations of the chain reaction are vastly more effective due to the exponential function by which neutron multiplication evolves. The critical mass of an uncompressed sphere of bare metal is for uranium-235 and for delta-phase plutonium-239. In practical applications, the amount of material required for criticality is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons. To avoid a premature chain reaction during handling, the fissile material in the weapon must be kept subcritical. It may consist of one or more components containing less than one uncompressed critical mass each. 
A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching criticality. Another method of reducing criticality risk is to incorporate material with a large cross-section for neutron capture, such as boron (specifically 10B comprising 20% of natural boron). Naturally this neutron absorber must be removed before the weapon is detonated. This is easy for a gun-assembled bomb: the projectile mass simply shoves the absorber out of the void between the two subcritical masses by the force of its motion. The use of plutonium affects weapon design due to its high rate of alpha emission. This results in Pu metal spontaneously producing significant heat; a 5 kilogram mass produces 9.68 watts of thermal power. Such a piece would feel warm to the touch, which is no problem if that heat is dissipated promptly and not allowed to build up the temperature. But this is a problem inside a nuclear bomb. For this reason bombs using Pu fuel use aluminum parts to wick away the excess heat, and this complicates bomb design because Al plays no active role in the explosion processes. A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the thermal expansion of the fissioning fuel mass, keeping it supercritical for longer. Often the same layer serves both as tamper and as neutron reflector. Gun-type assembly Little Boy, the Hiroshima bomb, used of uranium with an average enrichment of around 80%, or of uranium-235, just about the bare-metal critical mass . When assembled inside its tamper/reflector of tungsten carbide, the was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the nuclear explosion. Analysis shows that less than 2% of the uranium mass underwent fission; the remainder, representing most of the entire wartime output of the giant Y-12 factories at Oak Ridge, scattered uselessly. The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the United States until 1992, accounting for a significant fraction of the 235U in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and grave safety issues associated with the gun-type design. Implosion-type For both the Trinity device and the Fat Man (Nagasaki) bomb, nearly identical plutonium fission through implosion designs were used. The Fat Man device specifically used , about in volume, of Pu-239, which is only 41% of bare-sphere critical mass . Surrounded by a U-238 reflector/tamper, the Fat Man's pit was brought close to critical mass by the neutron-reflecting properties of the U-238. During detonation, criticality was achieved by implosion. The plutonium pit was squeezed to increase its density by simultaneous detonation, as with the "Trinity" test detonation three weeks earlier, of the conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. 
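The figure of 9.68 watts for a 5-kilogram plutonium mass, quoted earlier in this section, can be approximated from the half-life and alpha-decay energy of 239Pu. The half-life (about 24,100 years) and decay energy (about 5.24 MeV) used below are standard nuclear-data values rather than figures taken from this article, so the result should be read as an order-of-magnitude illustration.

```python
# Approximate decay heat of 5 kg of Pu-239 from its half-life and alpha-decay energy.
# Half-life and decay energy are standard nuclear-data values (assumptions; see above).
import math

HALF_LIFE_S = 24_100 * 3.156e7        # ~24,100 years, in seconds
DECAY_ENERGY_J = 5.24e6 * 1.602e-19   # ~5.24 MeV per alpha decay
MASS_KG = 5.0
ATOMS = MASS_KG / 0.239 * 6.022e23    # atoms of Pu-239 in 5 kg

decay_rate = math.log(2) / HALF_LIFE_S * ATOMS    # decays per second
power_watts = decay_rate * DECAY_ENERGY_J
print(f"Decay heat of {MASS_KG:.0f} kg of Pu-239: about {power_watts:.1f} W")  # ~9.6 W
```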
It is estimated that only about 20% of the plutonium underwent fission; the rest, about , was scattered. An implosion shock wave might be of such short duration that only part of the pit is compressed at any instant as the wave passes through it. To prevent this, a pusher shell may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backward, thereby having the effect of lengthening its duration. It is made out of a low density metal – such as aluminium, beryllium, or an alloy of the two metals (aluminium is easier and safer to shape, and is two orders of magnitude cheaper; beryllium has high neutron-reflective capability). Fat Man used an aluminium pusher. The series of RaLa Experiment tests of implosion-type fission weapon design concepts, carried out from July 1944 through February 1945 at the Los Alamos Laboratory and a remote site east of it in Bayo Canyon, proved the practicality of the implosion design for a fission device, with the February 1945 tests positively determining its usability for the final Trinity/Fat Man plutonium implosion design. The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper. (The natural uranium tamper did not undergo fission from thermal neutrons, but did contribute perhaps 20% of the total yield from fission by fast neutrons). After the chain reaction started in the plutonium, it continued until the explosion reversed the momentum of the implosion and expanded enough to stop the chain reaction. By holding everything together for a few hundred nanoseconds more, the tamper increased the efficiency. Plutonium pit The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s. Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases. As plutonium cools, changes in phase result in distortion and cracking. This distortion is normally overcome by alloying it with 30–35 mMol (0.9–1.0% by weight) gallium, forming a plutonium-gallium alloy, which causes it to take up its delta phase over a wide temperature range. When cooling from molten it then has only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds are corrosive and so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium. Because plutonium is chemically reactive it is common to plate the completed pit with a thin layer of inert metal, which also reduces the toxic hazard. The gadget used galvanic silver plating; afterward, nickel deposited from nickel tetracarbonyl vapors was used, but thereafter and since, gold became the preferred material. Recent designs improve safety by plating pits with vanadium to make the pits more fire-resistant. Levitated-pit implosion The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. 
The pit, supported on a hollow cone inside the tamper cavity, was said to be "levitated". The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man. It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter. Fat Man was wide vs for Little Boy. The Pu-239 pit of Fat Man was only in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design. Two-point linear implosion In the two-point linear implosion, the nuclear fuel is cast into a solid shape and placed within the center of a cylinder of high explosive. Detonators are placed at either end of the explosive cylinder, and a plate-like insert, or shaper, is placed in the explosive just inside the detonators. When the detonators are fired, the initial detonation is trapped between the shaper and the end of the cylinder, causing it to travel out to the edges of the shaper where it is diffracted around the edges into the main mass of explosive. This causes the detonation to form into a ring that proceeds inward from the shaper. Due to the lack of a tamper or lenses to shape the progression, the detonation does not reach the pit in a spherical shape. To produce the desired spherical implosion, the fissile material itself is shaped to produce the same effect. Due to the physics of the shock wave propagation within the explosive mass, this requires the pit to be a prolate spheroid, that is, roughly egg shaped. The shock wave first reaches the pit at its tips, driving them inward and causing the mass to become spherical. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes such designs inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions – ADMs – also known as backpack or suitcase nukes; an example is the W48 artillery shell, the smallest nuclear weapon ever built or deployed. All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (15 and 25 cm). Hollow-pit implosion A more efficient implosion system uses a hollow pit. A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraints, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives. After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminium pusher, and less high explosive. Fusion-boosted fission The next step in miniaturization was to speed up the fissioning of the pit to reduce the minimum inertial confinement time. This would allow the efficient fission of the fuel with less mass in the form of tamper or the fuel itself. 
The key to achieving faster fission would be to introduce more neutrons, and among the many ways to do this, adding a fusion reaction was relatively easy in the case of a hollow pit. The easiest fusion reaction to achieve is found in a 50–50 mixture of tritium and deuterium. For fusion power experiments this mixture must be held at high temperatures for relatively lengthy times in order to have an efficient reaction. For explosive use, however, the goal is not to produce efficient fusion, but simply provide extra neutrons early in the process. Since a nuclear explosion is supercritical, any extra neutrons will be multiplied by the chain reaction, so even tiny quantities introduced early can have a large effect on the outcome. For this reason, even the relatively low compression pressures and times (in fusion terms) found in the center of a hollow pit warhead are enough to create the desired effect. In the boosted design, the fusion fuel in gas form is pumped into the pit during arming. This will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical or nearly critical. Once the hollow pit is perfected, there is little reason not to boost; deuterium and tritium are easily produced in the small quantities needed, and the technical aspects are trivial. The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons. Boosting reduces diameter in three ways, all the result of faster fission: Since the compressed pit does not need to be held together as long, the massive U-238 tamper can be replaced by a light-weight beryllium shell (to reflect escaping neutrons back into the pit). The diameter is reduced. The mass of the pit can be reduced by half, without reducing yield. Diameter is reduced again. Since the mass of the metal being imploded (tamper plus pit) is reduced, a smaller charge of high explosive is needed, reducing diameter even further. The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device. It had a cylindrical shape with a diameter of and a length of . It was first tested standalone and then as the primary of a two-stage thermonuclear device during Operation Redwing. It was weaponized as the Robin primary and became the first off-the-shelf, multi-use primary, and the prototype for all that followed. After the success of Swan, seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only long. One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile. It had a dimension of just , and is shown here in comparison to its Fat Man predecessor (). Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to predetonation. It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial predetonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate problem). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. 
Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect. Two-stage thermonuclear Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to add a second independent stage, called a secondary. In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquefied or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it would not work, even with large amounts of expensive tritium added. The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanislaw Ulam invented radiation implosion – for nearly three decades known publicly only as the Teller-Ulam H-bomb secret. The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons. In radiation implosion, the burst of X-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. The radiation quickly turns the plastic foam that had been filling the channel into a plasma which is mostly transparent to X-rays, and the radiation is absorbed in the outermost layers of the pusher/tamper surrounding the secondary, which ablates and applies a massive force (much like an inside out rocket engine) causing the fusion fuel capsule to implode much like the pit of the primary. As the secondary implodes a fissile "spark plug" at its center ignites and provides neutrons and heat which enable the lithium deuteride fusion fuel to produce tritium and ignite as well. The fission and fusion chain reactions exchange neutrons with each other and boost the efficiency of both reactions. The greater implosive force, enhanced efficiency of the fissile "spark plug" due to boosting via fusion neutrons, and the fusion explosion itself provide significantly greater explosive yield from the secondary despite often not being much larger than the primary. For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was in diameter and long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons vs 15 kilotons). Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile. A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. 
A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons. In engineering terms, radiation implosion allows for the exploitation of several known features of nuclear bomb materials which heretofore had eluded practical application. For example: The optimal way to store deuterium in a reasonably dense state is to chemically bond it with lithium, as lithium deuteride. But the lithium-6 isotope is also the raw material for tritium production, and an exploding bomb is a nuclear reactor. Radiation implosion will hold everything together long enough to permit the complete conversion of lithium-6 into tritium, while the bomb explodes. So the bonding agent for deuterium permits use of the D-T fusion reaction without any pre-manufactured tritium being stored in the secondary. The tritium production constraint disappears. For the secondary to be imploded by the hot, radiation-induced plasma surrounding it, it must remain cool for the first microsecond, i.e., it must be encased in a massive radiation (heat) shield. The shield's massiveness allows it to double as a tamper, adding momentum and duration to the implosion. No material is better suited for both of these jobs than ordinary, cheap uranium-238, which also happens to undergo fission when struck by the neutrons produced by D-T fusion. This casing, called the pusher, thus has three jobs: to keep the secondary cool; to hold it, inertially, in a highly compressed state; and, finally, to serve as the chief energy source for the entire bomb. The consumable pusher makes the bomb more a uranium fission bomb than a hydrogen fusion bomb. Insiders never used the term "hydrogen bomb". Finally, the heat for fusion ignition comes not from the primary but from a second fission bomb called the spark plug, embedded in the heart of the secondary. The implosion of the secondary implodes this spark plug, detonating it and igniting fusion in the material around it, but the spark plug then continues to fission in the neutron-rich environment until it is fully consumed, adding significantly to the yield. In the ensuing fifty years, no one has come up with a more efficient way to build a thermonuclear bomb. It is the design of choice for the United States, Russia, the United Kingdom, China, and France, the five thermonuclear powers. On 3 September 2017 North Korea carried out what it reported as its first "two-stage thermo-nuclear weapon" test. According to Dr. Theodore Taylor, after reviewing leaked photographs of disassembled weapons components taken before 1986, Israel possessed boosted weapons and would require supercomputers of that era to advance further toward full two-stage weapons in the megaton range without nuclear test detonations. The other nuclear-armed nations, India and Pakistan, probably have single-stage weapons, possibly boosted. Interstage In a two-stage thermonuclear weapon the energy from the primary impacts the secondary. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary's fusion fuel from heating too quickly, which could cause it to explode in a conventional (and small) heat explosion before the fusion and fission reactions get a chance to start. There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. 
government document formally released to the public appears to be a caption in a graphic promoting the Reliable Replacement Warhead Program in 2007. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner. There is also some speculation that this interstage material, which may be code-named Fogbank, might be an aerogel, possibly doped with beryllium and/or other substances. The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens", "reflector/neutron gun carriage", and "reflector wrap". The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation. Specific designs While every nuclear weapon design falls into one of the above categories, specific designs have occasionally become the subject of news accounts and public discussion, often with incorrect descriptions about how they work and what they do. Examples are listed below. Alarm Clock/Sloika The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon. The U.S. name, Alarm Clock, came from Teller: he called it that because it might "wake up the world" to the possibility of the potential of the Super. The Russian name for the same design was more descriptive: Sloika (), a layered pastry cake. A single-stage Soviet Sloika was tested as RDS-6s on August 12, 1953. No single-stage U.S. version was tested, but the code named Castle Union shot of Operation Castle, April 26, 1954, was a two-stage thermonuclear device code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons. Because the Soviet Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that the USSR won the H-bomb race, even though the United States tested and developed the first hydrogen bomb: the Ivy Mike H-bomb test. The 1952 U.S. Ivy Mike test used cryogenically cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction. However, the first Soviet test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike. In fact, real work on the implosion scheme in the Soviet Union only commenced in the very early part of 1953, several months after the successful testing of Sloika. Clean bombs On March 1, 1954, the largest-ever U.S. nuclear test explosion, the 15-megaton Castle Bravo shot of Operation Castle at Bikini Atoll, delivered a promptly lethal dose of fission-product fallout to more than of Pacific Ocean surface. 
Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs. In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. The energy produced by the fissioning of unenriched natural uranium, when used as the tamper material in the secondary and subsequent stages in the Teller-Ulam design, can far exceed the energy released by fusion, as was the case in the Castle Bravo test. Replacing the fissionable material in the tamper with another material is essential to producing a "clean" bomb. In such a device, the tamper no longer contributes energy, so for any given weight, a clean bomb will have less yield. The earliest known instance of a three-stage device being tested, with the third stage, called the tertiary, being ignited by the secondary, was May 27, 1956, in the Bassoon device. This device was tested in the Zuni shot of Operation Redwing. This shot used non-fissionable tampers; an inert substitute material such as tungsten or lead was used. Its yield was 3.5 megatons, 85% fusion and only 15% fission. The Ripple concept, which used ablation to achieve fusion using very little fission, was and still is by far the cleanest design. Unlike previous clean bombs, which were clean simply by replacing fission fuel with an inert substance, Ripple was clean by design. Ripple was also extremely efficient; plans for a device with a yield-to-weight ratio of 15 kt/kg were made during Operation Dominic. Shot Androscoggin featured a proof-of-concept Ripple design, resulting in a 63-kiloton fizzle (significantly lower than the predicted 15 megatons). It was repeated in shot Housatonic, which featured a 9.96-megaton explosion that was reportedly >99.9% fusion. The public records for devices that produced the highest proportion of their yield via fusion reactions are held by the peaceful nuclear explosions of the 1970s. Others include the 10-megaton Dominic Housatonic at over 99.9% fusion, the 50-megaton Tsar Bomba at 97% fusion, the 9.3-megaton Hardtack Poplar test at 95%, and the 4.5-megaton Redwing Navajo test at 95% fusion. The most ambitious peaceful application of nuclear explosions was pursued by the USSR with the aim of creating a long canal between the Pechora river basin and the Kama river basin, about half of which was to be constructed through a series of underground nuclear explosions. It was reported that about 250 nuclear devices might be used to achieve this goal. The Taiga test was to demonstrate the feasibility of the project. Three of these "clean" devices of 15 kiloton yield each were placed in separate boreholes spaced about apart at depths of . They were simultaneously detonated on March 23, 1971, catapulting a radioactive plume into the air that was carried eastward by wind. The resulting trench was around long and wide, with an unimpressive depth of just . Despite their "clean" nature, the area still exhibits a noticeably higher (albeit mostly harmless) concentration of fission products. In addition, the intense neutron bombardment of the soil, the devices themselves and the support structures activated their stable elements, creating a significant amount of man-made radioactive elements such as 60Co. 
The overall danger posed by the concentration of radioactive elements present at the site created by these three devices is still negligible, but a larger-scale project of the kind envisioned would have had significant consequences, both from the fallout of the radioactive plumes and from the radioactive elements created by the neutron bombardment. On July 19, 1956, AEC Chairman Lewis Strauss said that the Redwing Zuni shot clean bomb test "produced much of importance ... from a humanitarian aspect." However, less than two days after this announcement, the dirty version of Bassoon, called Bassoon Prime, with a uranium-238 tamper in place, was tested on a barge off the coast of Bikini Atoll as the Redwing Tewa shot. The Bassoon Prime produced a 5-megaton yield, of which 87% came from fission. Data obtained from this test, and others, culminated in the eventual deployment of the highest-yielding US nuclear weapon known, and the highest yield-to-weight weapon ever made, a three-stage thermonuclear weapon with a maximum "dirty" yield of 25 megatons, designated the B41 nuclear bomb, which was carried by U.S. Air Force bombers until it was decommissioned; this weapon was never fully tested. Third generation First and second generation nuclear weapons release energy as omnidirectional blasts. Third generation nuclear weapons are experimental special-effect warheads and devices that can release energy in a directed manner, some of which were tested during the Cold War but were never deployed. These include: Project Prometheus, also known as "Nuclear Shotgun", which would have used a nuclear explosion to accelerate kinetic penetrators against ICBMs. Project Excalibur, a nuclear-pumped X-ray laser to destroy ballistic missiles. Nuclear shaped charges that focus their energy in particular directions. Project Orion, which explored the use of nuclear explosives for rocket propulsion. Fourth generation The idea of "4th-generation" nuclear weapons has been proposed as a possible successor to the examples of weapon designs listed above. These methods tend to revolve around using non-nuclear primaries to set off further fission or fusion reactions. For example, if antimatter were usable and controllable in macroscopic quantities, a reaction between a small amount of antimatter and an equivalent amount of matter could release energy comparable to a small fission weapon, and could in turn be used as the first stage of a very compact thermonuclear weapon. Extremely powerful lasers could also potentially be used this way, if they could be made powerful enough and compact enough to be viable as a weapon. Most of these ideas are versions of pure fusion weapons, and share the common property that they involve hitherto unrealized technologies as their "primary" stages. While many nations have invested significantly in inertial confinement fusion research programs, since the 1970s it has not been considered promising for direct weapons use, but rather as a tool for weapons- and energy-related research that can be used in the absence of full-scale nuclear testing. Whether any nations are aggressively pursuing "4th-generation" weapons is not clear. 
In many cases (as with antimatter) the underlying technology is presently thought to be very far from viable; and even if it were viable, it would be a powerful weapon in and of itself, outside of a nuclear weapons context, without providing any significant advantages over existing nuclear weapon designs. Pure fusion weapons Since the 1950s, the United States and the Soviet Union have investigated the possibility of releasing significant amounts of nuclear fusion energy without the use of a fission primary. Such "pure fusion weapons" were primarily imagined as low-yield, tactical nuclear weapons whose advantage would be their ability to be used without producing fallout on the scale of weapons that release fission products. In 1998, the United States Department of Energy declassified statements confirming that it had made a substantial investment in the past to develop a pure fusion weapon, that the United States does not have and is not developing one, and that no credible design resulted from that investment. Red mercury, a likely hoax substance, has been hyped as a catalyst for a pure fusion weapon. Cobalt bombs A doomsday device made popular by Nevil Shute's 1957 novel On the Beach, and the subsequent 1959 movie, the cobalt bomb is a hydrogen bomb with a jacket of cobalt. The neutron-activated cobalt would have maximized the environmental damage from radioactive fallout. These bombs were popularized in the 1964 film Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb; the material added to the bombs is referred to in the film as 'cobalt-thorium G'. Such "salted" weapons were investigated by the U.S. Department of Defense. Fission products are as deadly as neutron-activated cobalt. Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from cobalt-60 (60Co): 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter fission drops off rapidly, so that 60Co fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the 60Co again after about 75 years. The triple "Taiga" nuclear salvo test, part of the preliminary March 1971 Pechora–Kama Canal project, produced only a small amount of fission products, so a comparatively large amount of case-material activation products is responsible for most of the residual activity at the site today, namely 60Co. This fusion-generated neutron activation was responsible for about half of the gamma dose at the test site. That dose is too small to cause deleterious effects, and normal green vegetation exists all around the lake that was formed. Arbitrarily large multi-staged devices The idea of a device which has an arbitrarily large number of Teller-Ulam stages, with each driving a larger radiation-driven implosion than the preceding stage, is frequently suggested, but technically disputed. There are "well-known sketches and some reasonable-looking calculations in the open literature about two-stage weapons, but no similarly accurate descriptions of true three stage concepts." From the mid-1950s through the early 1960s, scientists working in the weapons laboratories of the United States investigated weapons concepts as large as 1,000 megatons, and Edward Teller announced the design of a 10,000-megaton weapon code-named SUNDIAL at a meeting of the General Advisory Committee of the Atomic Energy Commission. Much of the information about these efforts remains classified, but such "gigaton"-range weapons do not appear to have made it beyond theoretical investigations. 
While both the US and Soviet Union investigated (and in the case of the Soviets, tested) "very high yield" (e.g. 50 to 100-megaton) weapons designs in the 1950s and early 1960s, these appear to represent the upper-limit of Cold War weapon yields pursued seriously, and were so physically heavy and massive that they could not be carried entirely within the bomb bays of the largest bombers. Cold War warhead development trends from the mid-1960s onward, and especially after the Limited Test Ban Treaty, instead resulted in highly-compact warheads with yields in the range from hundreds of kilotons to the low megatons that gave greater options for deliverability. Following the concern caused by the estimated gigaton scale of the 1994 Comet Shoemaker-Levy 9 impacts on the planet Jupiter, in a 1995 meeting at Lawrence Livermore National Laboratory (LLNL), Edward Teller proposed to a collective of U.S. and Russian ex-Cold War weapons designers that they collaborate on designing a 1,000-megaton nuclear explosive device for diverting extinction-class asteroids (10+ km in diameter), which would be employed in the event that one of these asteroids were on an impact trajectory with Earth. Neutron bombs A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase its overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated). ERWs are more accurately described as suppressed yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, , is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings. Blast pressures of upwards of are survivable, whereas most buildings will collapse with a pressure of only . Commonly misconceived as a weapon designed to kill populations and leave infrastructure intact, these bombs (as mentioned above) are still very capable of leveling buildings over a large radius. The intent of their design was to kill tank crews – tanks giving excellent protection against blast and heat, surviving (relatively) very close to a detonation. Given the Soviets' vast tank forces during the Cold War, this was the perfect weapon to counter them. The neutron radiation could instantly incapacitate a tank crew out to roughly the same distance that the heat and blast would incapacitate an unprotected human (depending on design). The tank chassis would also be rendered highly radioactive, temporarily preventing its re-use by a fresh crew. Neutron weapons were also intended for use in other applications, however. For example, they are effective in anti-nuclear defenses – the neutron flux being capable of neutralising an incoming warhead at a greater range than heat or blast. 
Nuclear warheads are very resistant to physical damage, but are very difficult to harden against extreme neutron flux. ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s. A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10 to 15 times greater than for a pure fission implosion weapon or for a strategic warhead like a W87 or W88. Weapon design laboratories All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage. Lawrence Berkeley The first systematic exploration of nuclear weapon design concepts took place in mid-1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 cyclotron-made production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference. By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the (classified but now fully declassified and widely available online as a PDF) Los Alamos Primer. The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods", the one approach that turned out to be a dead end. Los Alamos At Los Alamos, it was found in April 1944 by Emilio Segrè that the proposed Thin Man Gun assembly type bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford Site in Washington. In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946. 
All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved. Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do. Lawrence Livermore With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly. Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles. In 1957 and 1958, both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium-235 into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs. Starting with the Nova facility at Livermore in the mid-1980s, nuclear design activity pertaining to radiation-driven implosion was informed by research with indirect drive laser fusion. This work was part of the effort to investigate Inertial Confinement Fusion. Similar work continues at the more powerful National Ignition Facility. The Stockpile Stewardship and Management Program also benefited from research performed at NIF. Explosive testing Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype. 
In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate. It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass-produced. It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place. Light pipes An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded by instruments located at the far end of a long, very straight pipe. The picture below shows the Shrimp device, detonated on March 1, 1954, at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are actually diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when X-rays from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel. From the shot cab, the pipes turned horizontally and traveled along a causeway built on the Bikini reef to a remote-controlled data collection bunker on Namu Island. While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary creates a relatively opaque radiation front in the channel filler, which acts like a slow-moving logjam to retard the passage of radiant energy. While the secondary is being compressed via radiation-induced ablation, neutrons from the primary catch up with the x-rays, penetrate into the secondary, and start breeding tritium via the third reaction noted in the first section above. This 6Li + n reaction is exothermic, producing 5 MeV per event. The spark plug has not yet been compressed and thus remains subcritical, so no significant fission or fusion takes place as a result. If enough neutrons arrive before implosion of the secondary is complete, though, the crucial temperature differential between the outer and inner parts of the secondary can be degraded, potentially causing the secondary to fail to ignite. The first Livermore-designed thermonuclear weapon, the Morgenstern device, failed in this manner when it was tested as Castle Koon on April 7, 1954. 
The primary ignited, but the secondary, preheated by the primary's neutron wave, suffered what was termed as an inefficient detonation; thus, a weapon with a predicted one-megaton yield produced only 110 kilotons, of which merely 10 kt were attributed to fusion. These timing effects, and any problems they cause, are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications. It is not clear from the public record how successful the Shrimp light pipes were. The unmanned data bunker was far enough back to remain outside the mile-wide crater, but the 15-megaton blast, two and a half times as powerful as expected, breached the bunker by blowing its 20-ton door off the hinges and across the inside of the bunker. (The nearest people were farther away, in a bunker that survived intact.) Fallout analysis The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high. Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly. As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, Daigo Fukuryū Maru, sailed home with enough fallout on her decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons. Underground testing The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing literally underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the United States conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas. The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive caverns created by nuclear explosions (see photo). After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the tritium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation. 
Production facilities When two-stage weapons became standard in the early 1950s, weapon design determined the layout of the new, widely dispersed U.S. production facilities, and vice versa. Because primaries tend to be bulky, especially in diameter, plutonium is the fissile material of choice for pits, with beryllium reflectors. It has a smaller critical mass than uranium. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility. The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries. Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries. The Hanford Site near Richland, Washington, operated plutonium-production reactors and separations facilities during World War II and the Cold War. Nine plutonium-production reactors were built and operated there, the first being the B Reactor, which began operations in September 1944, and the last being the N Reactor, which ceased operations in January 1987. The Savannah River Site in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and converted lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride. Warhead design safety Because even low-yield nuclear warheads have astounding destructive power, weapon designers have always recognised the need to incorporate mechanisms and associated procedures intended to prevent accidental detonation. Gun-type It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the propellant in Little Boy (four bags of cordite) was inserted into the bomb in flight, shortly after takeoff on August 6, 1945. This was the first time a gun-type nuclear weapon had ever been fully assembled. If the weapon falls into water, the moderating effect of the water can also cause a criticality accident, even without the weapon being physically damaged. Similarly, a fire caused by an aircraft crashing could easily ignite the propellant, with catastrophic results. Gun-type weapons have always been inherently unsafe. In-flight pit insertion Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern. On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it was feasible to use in-flight pit insertion. The bomber would take off with no fissile material in the bomb. 
Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system. In-flight pit insertion will not work with a hollow pit in contact with its tamper. Steel ball safety method As shown in the diagram above, one method used to decrease the likelihood of accidental detonation employed metal balls. The balls were emptied into the pit: this prevented detonation by increasing the density of the hollow pit, thereby preventing symmetrical implosion in the event of an accident. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, which was used in the Violet Club and Yellow Sun Mk.1 bombs. Chain safety method Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings as has happened in several weapons accidents, it could not cause a nuclear explosion. One-point safety While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern. In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical. It is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests. Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes. Of particular concern was Livermore's W47, which generated unacceptably high yields in one-point testing. To prevent an accidental detonation, Livermore decided to use mechanical safing on the W47. The wire safety scheme described below was the result. When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing. Wire safety method In the last test before the 1958 moratorium the W47 warhead for the Polaris SLBM was found to not be one-point safe, producing an unacceptably high nuclear yield of of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. A solution was devised consisting of a boron-coated wire inserted into the weapon's hollow pit at manufacture. 
The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. Once withdrawn, the wire could not be re-inserted. The wire had a tendency to become brittle during storage, and break or get stuck during arming, preventing complete removal and rendering the warhead a dud. It was estimated that 50–75% of warheads would fail. This required a complete rebuild of all W47 primaries. The oil used for lubricating the wire also promoted corrosion of the pit. Strong link/weak link Under the strong link/weak link system, "weak links" are constructed between critical nuclear weapon components (the "hard links"). In the event of an accident the weak links are designed to fail first in a manner that precludes energy transfer between them. Then, if a hard link fails in a manner that transfers or releases energy, energy can't be transferred into other weapon systems, potentially starting a nuclear detonation. Hard links are usually critical weapon components that have been hardened to survive extreme environments, while weak links can be both components deliberately inserted into the system to act as a weak link and critical nuclear components that can fail predictably. An example of a weak link would be an electrical connector that contains electrical wires made from a low melting point alloy. During a fire, those wires would melt, breaking any electrical connection. Permissive action link A permissive action link is an access control device designed to prevent unauthorised use of nuclear weapons. Early PALs were simple electromechanical switches and have evolved into complex arming systems that include integrated yield control options, lockout devices and anti-tamper devices. References Notes Bibliography Cohen, Sam, The Truth About the Neutron Bomb: The Inventor of the Bomb Speaks Out, William Morrow & Co., 1983 Coster-Mullen, John, "Atom Bombs: The Top Secret Inside Story of Little Boy and Fat Man", Self-Published, 2011 Glasstone, Samuel and Dolan, Philip J., editors, The Effects of Nuclear Weapons (third edition) (PDF), U.S. Government Printing Office, 1977. Grace, S. Charles, Nuclear Weapons: Principles, Effects and Survivability (Land Warfare: Brassey's New Battlefield Weapons Systems and Technology, vol 10) Hansen, Chuck, "Swords of Armageddon: U.S. Nuclear Weapons Development since 1945 " (CD-ROM & download available). PDF. 2,600 pages, Sunnyvale, California, Chucklea Publications, 1995, 2007. (2nd Ed.) The Effects of Nuclear War , Office of Technology Assessment (May 1979). Rhodes, Richard. The Making of the Atomic Bomb. Simon and Schuster, New York, (1986 ) Rhodes, Richard. Dark Sun: The Making of the Hydrogen Bomb. Simon and Schuster, New York, (1995 ) Smyth, Henry DeWolf, Atomic Energy for Military Purposes , Princeton University Press, 1945. (see: Smyth Report) External links Carey Sublette's Nuclear Weapon Archive is a reliable source of information and has links to other sources. Nuclear Weapons Frequently Asked Questions: Section 4.0 Engineering and Design of Nuclear Weapons The Federation of American Scientists provides solid information on weapons of mass destruction, including nuclear weapons and their effects More information on the design of two-stage fusion bombs Militarily Critical Technologies List (MCTL), Part II (1998) (PDF) from the US Department of Defense at the Federation of American Scientists website. 
"Restricted Data Declassification Decisions from 1946 until Present", Department of Energy report series published from 1994 until January 2001 which lists all known declassification actions and their dates. Hosted by Federation of American Scientists. The Holocaust Bomb: A Question of Time is an update of the 1979 court case USA v. The Progressive, with links to supporting documents on nuclear weapon design. Annotated bibliography on nuclear weapons design from the Alsos Digital Library for Nuclear Issues The Woodrow Wilson Center's Nuclear Proliferation International History Project or NPIHP is a global network of individuals and institutions engaged in the study of international nuclear history through archival documents, oral history interviews and other empirical sources. Design Weapon design
Nuclear weapon design
[ "Engineering" ]
16,940
[ "Design", "Weapon design" ]
172,928
https://en.wikipedia.org/wiki/BoPET
BoPET (biaxially oriented polyethylene terephthalate) is a polyester film made from stretched polyethylene terephthalate (PET) and is used for its high tensile strength, chemical stability, dimensional stability, transparency reflectivity, and electrical insulation. When metallized, it has gas and moisture barrier properties. The film is "biaxially oriented", which means that the polymer chains are oriented parallel to the plane of the film, and therefore oriented in two axes. A variety of companies manufacture boPET and other polyester films under different brand names. In the UK and US, the best-known trade names are Mylar, Melinex, Lumirror and Hostaphan. It was the first biaxially oriented polymer to be manufactured on a mass commercial scale. History BoPET film was developed in the mid-1950s, originally by DuPont, Imperial Chemical Industries (ICI), and Hoechst. In 1953 Buckminster Fuller used Mylar as a skin for a geodesic dome, which he built with students at the University of Oregon. In 1955 Eastman Kodak used Mylar as a support for photographic film and called it "ESTAR Base". The very thin and tough film allowed reels to be exposed on long-range U-2 reconnaissance flights. In 1964, NASA launched Echo II, a diameter balloon constructed from a thick mylar film sandwiched between two layers of thick aluminium foil bonded together. Manufacture and properties The manufacturing process begins with a film of molten polyethylene terephthalate (PET) being extruded onto a chill roll, which quenches it into the amorphous state. It is then biaxially oriented by drawing. The most common way of doing this is the sequential process, in which the film is first drawn in the machine direction using heated rollers and subsequently drawn in the transverse direction, i.e., orthogonally to the direction of travel, in a heated oven. It is also possible to draw the film in both directions simultaneously, although the equipment required for this is somewhat more elaborate. Draw ratios are typically around 3 to 4 in each direction. Once the drawing is completed, the film is "heat set" and crystallized under tension in the oven at temperatures typically above . The heat setting step prevents the film from shrinking back to its original unstretched shape and locks in the molecular orientation in the film plane. The orientation of the polymer chains is responsible for the high strength and stiffness of biaxially oriented PET film, which has a typical Young's modulus of about . Another important consequence of the molecular orientation is that it induces the formation of many crystal nuclei. The crystallites that grow rapidly reach the boundary of the neighboring crystallite and remain smaller than the wavelength of visible light. As a result, biaxially oriented PET film has excellent clarity, despite its semicrystalline structure. If it were produced without any additives, the surface of the film would be so smooth that layers would adhere strongly to one another when the film is wound up, similar to the sticking of clean glass plates when stacked. To make handling possible, microscopic inert inorganic particles, such as silicon dioxide, are usually embedded in the PET to roughen the surface of the film. Biaxially oriented PET film can be metallized by vapor deposition of a thin film of evaporated aluminium, gold, or other metal onto it. The result is much less permeable to gases (important in food packaging) and reflects up to 99% of light, including much of the infrared spectrum. 
For some applications like food packaging, the aluminized boPET film can be laminated with a layer of polyethylene, which provides sealability and improves puncture resistance. The polyethylene side of such a laminate appears dull and the boPET side shiny. Other coatings, such as conductive indium tin oxide (ITO), can be applied to boPET film by sputter deposition. Applications Uses for boPET polyester films include, but are not limited to: Flexible packaging and food contact Laminates containing metallized boPET foil (in technical language called printin or laminate web substrate) protect food against oxidation and aroma loss, achieving long shelf life. Examples are coffee "foil" packaging and pouches for convenience foods. Pop-Tarts are sold in pairs wrapped in silver boPET. They were previously wrapped in foil. White boPET web substrate is used as lidding for dairy goods such as yogurt. Clear boPET web substrate is used as lidding for fresh or frozen ready meals. Due to its excellent heat resistance, it can remain on the package during microwave or oven heating. Roasting bags Metallised films Laminated sheet metal (aluminium or steel) used in the manufacture of cans (bisphenol A-free alternative to lacquers) Covering over paper A clear overlay on a map, on which notations, additional data, or copied data, can be drawn without damaging the map Metallized boPET is used as a mirror-like decorative surface on some book covers, T-shirts, and other flexible cloths. Protective covering over buttons/pins/badges The glossy top layer of a Polaroid SX-70 photographic print As a backing for very fine sandpaper boPET film is used in bagging comic books, in order to best protect them during storage from environmental conditions (moisture, heat, and cold) that would otherwise cause paper to slowly deteriorate over time. This material is used for archival quality storage of documents by the Library of Congress (Mylar type D, ICI Melinex 516 or equivalent) and several major library comic book research collections, including the Comic Art Collection at Michigan State University. While boPET is widely (and effectively) used in this archival sense, it is not immune to the effects of fire and heat and could potentially melt, depending on the intensity of the heat source, causing further damage to the encased item. Similarly, trading card decks (such as Pokémon, Magic: The Gathering, and Yu-Gi-Oh!) are packaged in pouches or sleeves made of metallized boPET. It can also be used to make the holographic artwork featured on some cards, typically known as "holos", "foils", "shinies", or "holofoils". For protecting the spine of important documents, such as medical records. Insulating material An electrical insulating material Insulation for houses and tents, reflecting thermal radiation Five layers of metallized boPET film in NASA's spacesuits make them radiation resistant and help regulate temperature. Metallized boPET film emergency blankets conserve a shock victim's body heat. As a thin strip to form an airtight seal between the control surfaces and adjacent structure of aircraft, especially gliders. Light insulation for indoor gardening. Aluminized proximity suits used by fire fighters for protection from the high amount of heat release from fuel fires. 
Used in sock and glove liners to lock in warmth Gasketing material in fuel cells and related devices Solar, marine, and aviation Metallized boPET is intended to be used for solar sails as an alternative means of propulsion for spacecraft such as Cosmos 1 Translucent Mylar film, as wide as 48" and in up to 12' in length, found widespread use as a non-dimensional engineering drawing media in the aerospace industry due to its dimensional stability (also see Printing Media section below). This allows production and engineering staff to lay manufactured parts directly over or under the drawing film in order to verify the fidelity of part profiles, hole locations and other part features. Metallized boPET solar curtains reflect sunlight and heat away from windows. Aluminized, as an inexpensive solar eclipse viewer, although care must be taken, because invisible fissures can form in the metal film, reducing its effectiveness. High performance sails for sailboats, hang gliders, paragliders and kites Use boPET films as the back face of the PV modules in solar panels Metallized boPET as a reflector material for solar cooking stoves To bridge control surface gaps on sailplanes (gliders), reducing profile drag Science Amateur and professional visual and telescopic solar filters. BoPET films are often annealed to a glass element to improve thermal conductivity, and guarantee the necessary flat surface needed for even telescopic solar observation. Manufacturers will typically use films with thicknesses of , in order to give the films better resilience. thickness films with a heavy aluminium coating are generally preferred for naked-eye Solar observation during eclipses. Films in annular ring mounts on gas-tight cells, will readily deform into spherical mirrors. Photomultiplier cosmic-ray observatories often make use of these mirrors for inexpensive large (1.0 m and above), lightweight mirror surfaces for sky-sector low and medium energy cosmic ray research. As a light diaphragm material separating gases in hypersonic shock and expansion tube facilities. As a beamsplitter in Fourier transform infrared spectroscopy, typically with laser applications. Film thicknesses are often in the 500 micrometre range. Coating around hematocrit tubes. Insulating material for a cryocooler radiation shield. As a window material to confine gas in detectors and targets in nuclear physics. In CT scanners it acts as a physical barrier between the X-ray tube, detector ring and the patient allowing negligible attenuation of the X-ray beam when active. Spacecraft are insulated with a metallized BoPET film. The descent stage of the Apollo Lunar Module was covered with BoPET to control the temperature of equipment for lunar exploration carried in the Modular Equipment Stowage Assembly. Electronic and acoustic Carrier for flexible printed circuits. BoPET film is often used as the diaphragm material in headphones, electrostatic loudspeakers and microphones. BoPET film has been used in the production of banjos and drumheads since 1958 due to its durability and acoustical properties when stretched over the bearing edge of the drum. They are made in single- and double-ply versions, with each ply being in thickness, with a transparent or opaque surface, originally used by the company Evans. BoPET film is used as the substrate in practically all magnetic recording tapes and floppy disks. Metallized boPET film, along with other plastic films, is used as a dielectric in foil capacitors. 
Clear boPET bags are used as packaging for audio media such as compact discs and vinyl records. Clear and white boPET films are used as core layers and overlays in smart cards. Printing media Before the widespread adoption of computer aided design (CAD), engineering drawings or architectural drawings were plotted onto sheets of boPET film, known as drafting film. The boPET sheets become legal documents from which copies or blueprints are made. boPET sheets are more durable and can withstand more handling than bond paper. Although "blueprint" duplication has fallen out of use, mylar is still used for its archival properties, typically as a record set of plans for building departments to keep on file. Overhead transparency film for photocopiers or laser printers (boPET film withstands the high heat). Modern lithography printing plates aka "Pronto Plates" (boPET resists oil) Other Balloons, metallic balloons Route information signs, called rollsigns or destination blinds, displayed by public transport vehicles For materials in kites Covering glass to decrease probability of shattering In theatre effects such as confetti As the adhesive strip to attach the string to a teabag One of the many materials used as windsavers or valves for valved harmonicas On farmland and domestic gardens, highly reflective aluminized PET film ribbons are used to keep birds away from plants Measuring tape Protecting pinball machine playfields from wear Used in dentistry when restoring teeth with composite In nail polish, as a coloured and finely shredded additive to create a glitter effect NumismaticsStoring coins for long periods of time. PVC was previously used for this, but over long periods of time PVC can release chlorine, which reacts with the silver and copper in coins. BoPET does not have this problem. In fishing fly tying, metallized Mylar strips are sometimes wound around the hook shank for reflective striping or shimmer in certain patterns. Military uniform accoutrements are often accented by gold mylar, such as shoulder epaulets or shoulder knots. For example: US Army Officer's Mess Dress Uniform. See also Kapton References External links History of Polymers & Plastics for Teachers. by The American Chemistry Council (HTML format) or (PDF format) - 1.9MB, which includes the "chasing arrow" recycling symbols (PET is #1) and a description of plastics. An interesting toy has been developed using boPET and a stick-shaped Van de Graaff generator. Dielectrics Plastics Polyesters Reflective building components Packaging materials Food packaging Terephthalate esters
BoPET
[ "Physics" ]
2,691
[ "Unsolved problems in physics", "Materials", "Dielectrics", "Amorphous solids", "Matter", "Plastics" ]
172,987
https://en.wikipedia.org/wiki/Solar%20mass
The solar mass () is a standard unit of mass in astronomy, equal to approximately (2 nonillion kilograms in US short scale). It is approximately equal to the mass of the Sun. It is often used to indicate the masses of other stars, as well as stellar clusters, nebulae, galaxies and black holes. More precisely, the mass of the Sun is The solar mass is about times the mass of Earth (), or times the mass of Jupiter (). History of measurement The value of the gravitational constant was first derived from measurements that were made by Henry Cavendish in 1798 with a torsion balance. The value he obtained differs by only 1% from the modern value, but was not as precise. The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769, yielding a value of (9 arcseconds, compared to the present value of ). From the value of the diurnal parallax, one can determine the distance to the Sun from the geometry of Earth. The first known estimate of the solar mass was by Isaac Newton. In his work Principia (1687), he estimated that the ratio of the mass of Earth to the Sun was about . Later he determined that his value was based upon a faulty value for the solar parallax, which he had used to estimate the distance to the Sun. He corrected his estimated ratio to in the third edition of the Principia. The current value for the solar parallax is smaller still, yielding an estimated mass ratio of . As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. This is because the relative mass of another planet in the Solar System or the combined mass of two binary stars can be calculated in units of Solar mass directly from the orbital radius and orbital period of the planet or stars using Kepler's third law. Calculation The mass of the Sun cannot be measured directly, and is instead calculated from other measurable factors, using the equation for the orbital period of a small body orbiting a central mass. Based on the length of the year, the distance from Earth to the Sun (an astronomical unit or AU), and the gravitational constant (), the mass of the Sun is given by solving Kepler's third law: The value of G is difficult to measure and is only known with limited accuracy (see Cavendish experiment). The value of G times the mass of an object, called the standard gravitational parameter, is known for the Sun and several planets to a much higher accuracy than G alone. As a result, the solar mass is used as the standard mass in the astronomical system of units. Variation The Sun is losing mass because of fusion reactions occurring within its core, leading to the emission of electromagnetic energy, neutrinos and by the ejection of matter with the solar wind. It is expelling about /year. The mass loss rate will increase when the Sun enters the red giant stage, climbing to /year when it reaches the tip of the red-giant branch. This will rise to /year on the asymptotic giant branch, before peaking at a rate of 10−5 to 10−4 /year as the Sun generates a planetary nebula. By the time the Sun becomes a degenerate white dwarf, it will have lost 46% of its starting mass. The mass of the Sun has been decreasing since the time it formed. This occurs through two processes in nearly equal amounts. First, in the Sun's core, hydrogen is converted into helium through nuclear fusion, in particular the p–p chain, and this reaction converts some mass into energy in the form of gamma ray photons. 
Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as the solar wind and coronal mass ejections. The original mass of the Sun at the time it reached the main sequence remains uncertain. The early Sun had much higher mass-loss rates than at present, and it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime. Related units One solar mass, , can be converted to related units: (Lunar mass) (Earth mass) (Jupiter mass) It is also frequently useful in general relativity to express mass in units of length or time. (half the Schwarzschild radius of the Sun) The solar mass parameter (G·), as listed by the IAU Division I Working Group, has the following estimates: (TCG-compatible) (TDB-compatible) See also Chandrasekhar limit Gaussian gravitational constant Orders of magnitude (mass) Stellar mass Sun References Mass Units of mass Mass
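As a rough illustration of the calculation described above, the following short Python sketch (not part of the original article) solves Kepler's third law, M = 4π²a³/(GT²), for the mass of the Sun using Earth's orbit. The numerical constants assumed here (the gravitational constant, the astronomical unit, the sidereal year, the speed of light, and the Earth and Jupiter masses) are standard published values rather than figures taken from this article, and the limited accuracy of G is what limits the accuracy of the result.

import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2 (known to only ~4 digits)
AU = 1.495978707e11      # astronomical unit (mean Earth-Sun distance), m
T = 365.25636 * 86400    # sidereal year, s

# Kepler's third law for a small body orbiting a central mass
M_sun = 4 * math.pi**2 * AU**3 / (G * T**2)
print(f"solar mass     ~ {M_sun:.3e} kg")                 # ~1.99e30 kg

# The same mass expressed as a length, GM/c^2 (half the Schwarzschild radius)
c = 2.99792458e8         # speed of light, m/s
print(f"GM/c^2         ~ {G * M_sun / c**2 / 1e3:.2f} km")  # ~1.48 km

# Rough ratios to planetary masses
print(f"Earth masses   ~ {M_sun / 5.972e24:,.0f}")        # ~333,000
print(f"Jupiter masses ~ {M_sun / 1.898e27:,.0f}")        # ~1,047

Because the product GM (the standard gravitational parameter) is known far more precisely than G alone, professional work uses GM directly, as noted in the article; the sketch above only reproduces the order of magnitude and the familiar ratios.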
Solar mass
[ "Physics", "Astronomy", "Mathematics" ]
984
[ "Units of measurement", "Quantity", "Units of mass", "Mass", "Units of measurement in astronomy", "Matter" ]
173,062
https://en.wikipedia.org/wiki/List%20of%20Arecaceae%20genera
This is a list of all the genera in the botanical family Arecaceae, the palm family, based on Baker & Dransfield (2016), which is a revised listing of genera given in the 2008 edition of Genera Palmarum. Taxonomy This is a list of all the genera in the botanical family Arecaceae, the palm family, arranged by tribes and subtribes within the family. Genera Palmarum (2008) lists 183 genera. Lanonia, Saribus, and the monotypic genera Jailoloa, Wallaceodoxa, Manjekia, and Sabinaria, which were described after 2008, have also been included below. Ceratolobus, Daemonorops, Pogonotium, Wallichia, Lytocaryum, and the monotypic genera Retispatha, Pritchardiopsis, and Solfia have since been removed from Genera Palmarum (2008) as obsolete genera. This brings the total number of genera to 181 as of 2016. Phylogenetic tree of Arecaceae. Subfamily Calamoideae Tribe Eugeissoneae Eugeissona – Borneo, Malay Peninsula Tribe Lepidocaryeae – Africa and South America Subtribe Ancistrophyllinae – Africa Oncocalamus – Central Africa Eremospatha – Africa Laccosperma – Africa Subtribe Raphiinae Raphia – Africa, Madagascar, parts of South America Subtribe Mauritiinae – northern South America Lepidocaryum – central Amazon basin; monotypic genus Mauritia – northern South America Mauritiella – northern South America Tribe Calameae – Africa and Asia Subtribe Korthalsiinae Korthalsia – Malesia, New Guinea, Indochina Subtribe Salaccinae – Malesia, Indochina Eleiodoxa – Malay Peninsula, Sumatra, Borneo; monotypic genus Salacca – Malesia, Indochina Subtribe Metroxylinae Metroxylon – New Guinea, Melanesia Subtribe Pigafettinae Pigafetta – Sulawesi, Moluccas and New Guinea Subtribe Plectocomiinae – Malesia, Indochina Plectocomia – Malesia, Indochina Myrialepis – Malesia, Malay Peninsula, Sumatra; monotypic genus Plectocomiopsis – Malesia, Indochina Subtribe Calaminae – Africa, Asia Calamus – Africa, Asia Obselete genera: Calospatha (synonym of Calamus) – Malay Peninsula Daemonorops (syn. of Calamus) – Malesia, Indochina Ceratolobus (syn. of Calamus) – Malay Peninsula, Sumatra, Borneo Pogonotium (syn. 
of Calamus) – northern Borneo Subfamily Nypoideae Nypa Subfamily Coryphoideae Tribe Sabaleae Sabal – Caribbean, Gulf of Mexico, Mexico Tribe Cryosophileae – Americas Schippia – Guatemala and Belize; monotypic genus Trithrinax – south-central South America Zombia – Hispaniola; monotypic genus Coccothrinax – Caribbean Hemithrinax – Cuba Leucothrinax – northern Caribbean; monotypic genus Thrinax – Caribbean and Central America Chelyocarpus – Peru and nearby Cryosophila – Central America Itaya – Amazon basin; monotypic genus Sabinaria – Colombia and Panama; monotypic genus Tribe Phoeniceae Phoenix – Africa and Asia Tribe Trachycarpeae Subtribe Rhapidinae Chamaerops – Mediterranean; monotypic genus Guihaia – Vietnam and China Trachycarpus – southern China, northern Indochina, Himalayas Rhapidophyllum – Florida; monotypic genus Maxburretia – Malay Peninsula Rhapis – Indochina, Aceh Subtribe Livistoninae Livistona – Indomalaya, Australasia, Gulf of Aden Licuala – Indochina, Malesia, Melanesia Johannesteijsmannia – Malay Peninsula and nearby parts of Sumatra and Borneo Pholidocarpus – Malaysia, northern Indonesia Saribus – Malesia, New Guinea, Island Melanesia Lanonia – southern China, Indochina, Java Unplaced members of Trachycarpeae Acoelorraphe – Central America, Cuba, Bahamas, Florida; monotypic genus Serenoa – Florida, US Gulf Coast except Texas; monotypic genus Brahea – Mexico, Central America Colpothrinax – Central America, Cuba Copernicia – Greater Antilles, South America Pritchardia – Polynesia Washingtonia – Baja California, Colorado River region Tribe Chuniophoeniceae Chuniophoenix – Vietnam, southern China Kerriodoxa – southern Thailand; monotypic genus Nannorrhops – SE Arabia, SE Iran, SW Pakistan; monotypic genus Tahina – NW Madagascar; monotypic genus Tribe Caryoteae – Indomalaya, Australasia Caryota – Indomalaya, Australasia Arenga – Indomalaya, Australasia Tribe Corypheae Corypha – Indomalaya, Australasia Tribe Borasseae – Africa and Asia Subtribe Hyphaenieae – Africa, Indian Ocean Bismarckia – W Madagascar; monotypic genus Satranala – NE Madagascar; monotypic genus Hyphaene – Africa, Indian Ocean Medemia – Upper Nile (Sudan, Egypt); monotypic genus Subtribe Lataniieae – Africa and Asia Latania – Mascarenes Lodoicea – Seychelles; monotypic genus Borassodendron – Malay Peninsula, Borneo Borassus – Africa and Asia Obsolete genera: Pritchardiopsis – New Caledonia Wallichia – Indochina, Himalayas; accepted by Plants of the World Online Subfamily Ceroxyloideae Tribe Cyclospatheae Pseudophoenix – Greater Antilles and nearby Tribe Ceroxyleae Ceroxylon – northern Andes Juania – Juan Fernández Islands; monotypic genus Oraniopsis – Queensland; monotypic genus Ravenea – Madagascar and Comoros Tribe Phytelepheae – NW South America, Panama, Costa Rica Ammandra – Colombia, Ecuador; monotypic genus Aphandra – western Amazon basin; monotypic genus Phytelephas – NW South America, Panama, Costa Rica Subfamily Arecoideae Tribe Iriarteeae – northern South America Iriartella – northern South America Dictyocaryum – northern Andes, parts of Amazons Iriartea – NW South America, Central America; monotypic genus Socratea – northern South America, Central America Wettinia – NW South America Tribe Chamaedoreeae – Americas, Mascarenes Hyophorbe – Mascarenes Wendlandiella – Peruvian Amazon Synechanthus – Central America, Colombia, Ecuador Chamaedorea – Central America, NW South America Gaussia – Mexico, Belize, Cuba Tribe Podococceae Podococcus – Nigeria, Cameroon, Equatorial Guinea Tribe Oranieae Orania – 
Malesia, New Guinea, Madagascar Tribe Sclerospermeae Sclerosperma – Central Africa Tribe Roystoneeae Roystonea – Caribbean Tribe Reinhardtieae Reinhardtia – Central America Tribe Cocoseae Subtribe Attaleinae Beccariophoenix – Madagascar Jubaeopsis – South Africa; monotypic genus Voanioala – NE Madagascar; monotypic genus Allagoptera – Central South America Attalea – Americas Butia – SE South America Cocos – cosmopolitan; monotypic genus Jubaea – Chile; monotypic genus Syagrus – South America Parajubaea – Andes Subtribe Bactridinae – Americas Acrocomia – Americas Astrocaryum – Americas Aiphanes – NW South America, Caribbean Bactris – South America, Central America, Caribbean Desmoncus – South America, Central America Subtribe Elaeidinae Barcella – northern Brazil; monotypic genus Elaeis – Africa, northern South America Tribe Manicarieae Manicaria – northern South America Tribe Euterpeae – South America, Central America, Caribbean Hyospathe – northern South America Euterpe – South America, Central America Prestoea – northern South America, Caribbean Neonicholsonia – Central America; monotypic genus Oenocarpus – South America Tribe Geonomateae – Americas Welfia – Central America, Colombia, Ecuador Pholidostachys – NW South America, Panama, Costa Rica Calyptrogyne – Central America Calyptronoma – Greater Antilles Asterogyne – northern South America, Central America Geonoma – Americas Tribe Leopoldinieae Leopoldinia – Central Amazon basin Tribe Pelagodoxeae – New Guinea, Oceania Pelagodoxa – Marquesas Islands; monotypic genus Sommieria – NW New Guinea; monotypic genus Tribe Areceae – Malesia, Australasia, Madagascar Subtribe Archontophoenicinae – New Guinea, Australia, New Caledonia Actinorhytis – New Guinea; monotypic genus Archontophoenix – eastern Australia Actinokentia – New Caledonia Chambeyronia – New Caledonia Kentiopsis – New Caledonia Subtribe Arecinae – Indochina, Malesia, New Guinea Areca – Indochina, Malesia, New Guinea Nenga – Malay Peninsula, Sumatra, Borneo, Java Pinanga – Indochina, Malesia, New Guinea Subtribe Basseliniinae – Island Melanesia Basselinia – New Caledonia Burretiokentia – New Caledonia Cyphophoenix – New Caledonia Cyphosperma – New Caledonia, Vanuatu, Fiji Lepidorrhachis – Lord Howe Island; monotypic genus Physokentia – Island Melanesia Subtribe Carpoxylinae – Island Melanesia, Ryukyu Islands Carpoxylon – northern Vanuatu; monotypic genus Satakentia – Ryukyu Islands; monotypic genus Neoveitchia – Vanuatu, Fiji Subtribe Clinospermatinae – New Caledonia Cyphokentia – New Caledonia Clinosperma – New Caledonia Subtribe Dypsidinae – Madagascar, Comoros Dypsis – Madagascar, Comoros Lemurophoenix – NE Madagascar; monotypic genus Marojejya – E Madagascar Masoala – E Madagascar Subtribe Laccospadicinae – New Guinea, E Australia Calyptrocalyx – New Guinea Linospadix – New Guinea, E Australia Howea – Lord Howe Island Laccospadix – northern Queensland; monotypic genus Subtribe Oncospermatinae – Malesia, Sri Lanka, Seychelles, Mascarenes Oncosperma – Malesia Deckenia – Seychelles; monotypic genus Acanthophoenix – Mascarenes Tectiphiala – Mauritius; monotypic genus Subtribe Ptychospermatinae – Australasia Ptychosperma – New Guinea, northern Australia Ponapea – Caroline Islands, Bismarcks Adonidia – Palawan Balaka – Fiji, Samoa Veitchia – Vanuatu, Fiji Carpentaria – Northern Territory; monotypic genus Wodyetia – northern Queensland; monotypic genus Drymophloeus – NW New Guinea, Bismarcks, Solomon Islands, Samoa Normanbya – northern Queensland; monotypic genus Brassiophoenix – 
Papua New Guinea Ptychococcus – New Guinea Jailoloa – Halmahera; monotypic genus Manjekia – Biak; monotypic genus Wallaceodoxa – Raja Ampat Islands; monotypic genus Subtribe Rhopalostylidinae – New Zealand Rhopalostylis – New Zealand, Chatham Islands, Norfolk Island, and Kermadec Islands Hedyscepe – Lord Howe Island; monotypic genus Subtribe Verschaffeltiinae – Seychelles Nephrosperma – Seychelles; monotypic genus Phoenicophorium – Seychelles; monotypic genus Roscheria – Seychelles; monotypic genus Verschaffeltia – Seychelles; monotypic genus Unplaced members of Areceae Bentinckia – South India, Nicobars Clinostigma – western Oceania Cyrtostachys – Malesia, New Guinea, Solomon Islands Dictyosperma – Mascarenes; monotypic genus Dransfieldia – West Papua; monotypic genus Heterospathe – Philippines, New Guinea, Island Melanesia Hydriastele – Australasia Iguanura – Malay Peninsula, Sumatra, Borneo Loxococcus – Sri Lanka; monotypic genus Rhopaloblaste – New Guinea, Solomon Islands, Malay Peninsula, Nicobar Islands Obsolete genera: Lytocaryum – Brazil Solfia – Samoa Geographical distributions Below are geographical distributions of all the genera in the botanical family Arecaceae, following the 2008 edition of Genera Palmarum (pp. 647-650). Islands and archipelagos with large numbers of endemic genera include New Caledonia, Lord Howe Island, New Guinea, Sri Lanka, Madagascar, Seychelles, and the Mascarenes. Old World Arabia Hyphaene Livistona Nannorrhops Phoenix Australia (see also Lord Howe Island) Archontophoenix Arenga Calamus Carpentaria Caryota Cocos Corypha Hydriastele Laccospadix Licuala Linospadix Livistona Normanbya Nypa Oraniopsis Ptychosperma Wodyetia Borneo Adonidia Areca Arenga Borassodendron Calamus Caryota Corypha Cyrtostachys Eleiodoxa Eugeissona Iguanura Johannesteijsmannia Korthalsia Licuala Livistona Nenga Nypa Oncosperma Orania Pholidocarpus Pinanga Plectocomia Plectocomiopsis Salacca China Arenga Borassus Calamus Caryota Chuniophoenix Corypha Guihaia Lanonia Licuala Livistona Nypa Phoenix Pinanga Plectocomia Rhapis Salacca Trachycarpus Wallichia Europe, North Africa, Egypt, Asia Minor Chamaerops Hyphaene Medemia Phoenix Fiji and Samoa Balaka Calamus Clinostigma Cyphosperma Drymophloeus Heterospathe Metroxylon Neoveitchia Physokentia Pritchardia Solfia Veitchia Hawaii Pritchardia India, including Andamans and Nicobars Areca Arenga Bentinckia Borassus Calamus Caryota Corypha Hyphaene Korthalsia Licuala Livistona Nannorrhops Nypa Phoenix Pinanga Plectocomia Rhopaloblaste Trachycarpus Wallichia Indochina Areca Arenga Borassus Calamus Caryota Corypha Chuniophoenix Guihaia Korthalsia Licuala Livistona Myrialepis Nenga Nypa Oncosperma Phoenix Pinanga Plectocomia Plectocomiopsis Rhapis Salacca Trachycarpus Wallichia Iran, Afghanistan, Pakistan Hyphaene Nannorrhops Phoenix Java Areca Arenga Borassus Calamus Caryota Corypha Korthalsia Lanonia Licuala Livistona Nenga Nypa Oncosperma Orania Pinanga Plectocomia Salacca Lesser Sunda Islands Borassus Calamus Caryota Corypha Licuala Nypa Lord Howe Island Hedyscepe Howea Lepidorrhachis Madagascar and Comoros Beccariophoenix Bismarckia Borassus Chrysalidocarpus Dypsis Elaeis Hyphaene Lemurophoenix Marojejya Masoala Orania Phoenix Raphia Ravenea Satranala Tahina Voanioala Malay Peninsula Areca Arenga Borassodendron Calamus Caryota Corypha Cyrtostachys Eleiodoxa Eugeissona Iguanura Johannesteijsmannia Korthalsia Licuala Livistona Maxburretia Myrialepis Nenga Nypa Oncosperma Orania Phoenix Pholidocarpus Pinanga Plectocomia Plectocomiopsis 
Rhopaloblaste Salacca Marquesas Pelagodoxa Mascarenes Acanthophoenix Corypha Dictyosperma Hyophorbe Latania Tectiphiala Micronesia Clinostigma Heterospathe Hydriastele Livistona Metroxylon Nypa Pinanga Ponapea Moluccas Areca Arenga Borassus Calamus Calyptrocalyx Caryota Corypha Drymophloeus Heterospathe Hydriastele Jailoloa Licuala Livistona Metroxylon Nypa Oncosperma Orania Pholidocarpus Pigafetta Pinanga Ptychosperma Rhopaloblaste Wallaceodoxa Myanmar Areca Arenga Borassus Calamus Caryota Corypha Korthalsia Licuala Livistona Myrialepis Nypa Oncosperma Phoenix Pinanga Plectocomia Salacca Trachycarpus Wallichia New Caledonia Actinokentia Basselinia Burretiokentia Chambeyronia Clinosperma Cyphokentia Cyphophoenix Cyphosperma Kentiopsis Pritchardiopsis New Guinea and the Bismarck Archipelago Actinorhytis Areca Arenga Borassus Brassiophoenix Calamus Calyptrocalyx Caryota Clinostigma Corypha Cyrtostachys Dransfieldia Drymophloeus Heterospathe Hydriastele Korthalsia Licuala Linospadix Livistona Metroxylon Nypa Orania Physokentia Pigafetta Pinanga Ptychococcus Ptychosperma Rhopaloblaste Sommieria New Hebrides Calamus Carpoxylon Caryota Clinostigma Cyphosperma Heterospathe Hydriastele Licuala Metroxylon Neoveitchia Physokentia Veitchia New Zealand Rhopalostylis Philippines Adonidia Areca Arenga Livistona Calamus Caryota Cocos Corypha Heterospathe Korthalsia Licuala Nypa Oncosperma Orania Phoenix Pinanga Plectocomia Salacca Ryukyu Islands Arenga Livistona Nypa Phoenix Satakentia Seychelles Deckenia Lodoicea Nephrosperma Phoenicophorium Roscheria Verschaffeltia Solomon Islands Actinorhytis Areca Calamus Caryota Clinostigma Cyrtostachys Drymophloeus Heterospathe Hydriastele Licuala Livistona Metroxylon Nypa Physokentia Ptychosperma Rhopaloblaste Sri Lanka (Ceylon) Areca Borassus Calamus Caryota Corypha Loxococcus Oncosperma Phoenix Sulawesi Areca Arenga Borassus Calamus Caryota Corypha Hydriastele Korthalsia Licuala Livistona Nypa Oncosperma Orania Pholidocarpus Pigafetta Pinanga Sumatra Areca Arenga Calamus Caryota Corypha Cyrtostachys Eleiodoxa Iguanura Johannesteijsmannia Korthalsia Licuala Livistona Myrialepis Nenga Nypa Oncosperma Orania Phoenix Pholidocarpus Pinanga Plectocomia Plectocomiopsis Rhapis Salacca Thailand Areca Arenga Borassodendron Borassus Calamus Caryota Corypha Cyrtostachys Eleiodoxa Eugeissona Iguanura Johannesteijsmannia Kerriodoxa Korthalsia Licuala Livistona Maxburretia Myrialepis Nenga Nypa Oncosperma Orania Phoenix Pinanga Plectocomia Plectocomiopsis Rhapis Salacca Trachycarpus Wallichia Africa Sub-Saharan Africa (i.e., Africa, but excluding North Africa) has 16 genera and 65 species. Tropical West Africa Borassus Elaeis Eremospatha Hyphaene Laccosperma Oncocalamus Phoenix Podococcus Raphia Sclerosperma Tropical East Africa Borassus Chrysalidocarpus Dictyosperma Dypsis Elaeis Eremospatha Hyphaene Hyophorbe Laccosperma Latania Livistona Medemia Phoenix Raphia Southern Africa Borassus Hyphaene Jubaeopsis Phoenix Raphia New World There are 65 genera and 730 species in the New World. 
Argentina Acrocomia Allagoptera Butia Copernicia Euterpe Syagrus Trithrinax Bolivia Acrocomia Aiphanes Astrocaryum Attalea Bactris Ceroxylon Chamaedorea Chelyocarpus Copernicia Desmoncus Dictyocaryum Euterpe Geonoma Hyospathe Iriartea Iriartella Mauritia Mauritiella Oenocarpus Parajubaea Phytelephas Prestoea Socratea Syagrus Trithrinax Wendlandiella Wettinia Brazil Acrocomia Aiphanes Aphandra Attalea Allagoptera Astrocaryum Bactris Barcella Butia Chamaedorea Chelyocarpus Copernicia Desmoncus Dictyocaryum Elaeis Euterpe Geonoma Hyospathe Iriartea Iriartella Itaya Leopoldinia Lepidocaryum Manicaria Mauritia Mauritiella Oenocarpus Pholidostachys Phytelephas Prestoea Raphia Socratea Syagrus Trithrinax Wendlandiella Wettinia Central America Acoelorraphe Acrocomia Aiphanes Asterogyne Astrocaryum Attalea Bactris Brahea Calyptrogyne Chamaedorea Colpothrinax Cryosophila Desmoncus Elaeis Euterpe Gaussia Geonoma Hyospathe Iriartea Manicaria Neonicholsonia Oenocarpus Pholidostachys Phytelephas Prestoea Pseudophoenix Raphia Reinhardtia Roystonea Sabal Schippia Socratea Synechanthus Welfia Chile (see also Juan Fernández Islands) Jubaea Colombia Acrocomia Aiphanes Ammandra Asterogyne Astrocaryum Attalea Bactris Calyptrogyne Ceroxylon Chamaedorea Chelyocarpus Copernicia Cryosophila Desmoncus Dictyocaryum Elaeis Euterpe Geonoma Iriartea Iriartella Itaya Hyospathe Leopoldinia Lepidocaryum Manicaria Mauritia Mauritiella Oenocarpus Pholidostachys Phytelephas Prestoea Raphia Reinhardtia Roystonea Sabal Sabinaria Socratea Syagrus Synechanthus Welfia Wettinia Ecuador Aiphanes Aphandra Astrocaryum Attalea Bactris Ceroxylon Chamaedorea Chelyocarpus Desmoncus Dictyocaryum Elaeis Euterpe Geonoma Hyospathe Iriartea Manicaria Mauritia Mauritiella Oenocarpus Parajubaea Pholidostachys Phytelephas Prestoea Socratea Syagrus Synechanthus Welfia Wettinia Greater Antilles (Cuba, Hispaniola, Jamaica, Puerto Rico) Acoelorraphe Acrocomia Attalea Bactris Calyptronoma Coccothrinax Colpothrinax Copernicia Gaussia Geonoma Hemithrinax Leucothrinax Prestoea Pseudophoenix Roystonea Sabal Thrinax Zombia Guianas Acrocomia Astrocaryum Attalea Bactris Desmoncus Dictyocaryum Elaeis Euterpe Geonoma Hyospathe Iriartella Lepidocaryum Manicaria Mauritia Mauritiella Oenocarpus Prestoea Roystonea Socratea Syagrus Juan Fernández Islands Juania Lesser Antilles Acrocomia Aiphanes Coccothrinax Desmoncus Euterpe Geonoma Leucothrinax Prestoea Pseudophoenix Roystonea Sabal Syagrus Mexico Acoelorraphe Acrocomia Astrocaryum Attalea Bactris Brahea Calyptrogyne Chamaedorea Coccothrinax Cryosophila Desmoncus Gaussia Geonoma Pseudophoenix Reinhardtia Roystonea Sabal Synechanthus Thrinax Washingtonia Paraguay Acrocomia Allagoptera Attalea Bactris Butia Copernicia Geonoma Syagrus Trithrinax Peru Aiphanes Aphandra Astrocaryum Attalea Bactris Ceroxylon Chamaedorea Chelyocarpus Desmoncus Dictyocaryum Euterpe Geonoma Hyospathe Iriartella Iriartea Itaya Lepidocaryum Mauritia Mauritiella Oenocarpus Pholidostachys Phytelephas Prestoea Socratea Syagrus Wendlandiella Wettinia Uruguay Butia Syagrus Trithrinax USA (continental) Acoelorraphe Coccothrinax Leucothrinax Pseudophoenix Rhapidophyllum Roystonea Sabal Serenoa Thrinax Washingtonia Venezuela, Trinidad and Tobago Acrocomia Aiphanes Asterogyne Astrocaryum Attalea Bactris Ceroxylon Chamaedorea Coccothrinax Copernicia Desmoncus Dictyocaryum Euterpe Geonoma Hyospathe Iriartea Iriartella Leopoldinia Lepidocaryum Manicaria Mauritia Mauritiella Oenocarpus Prestoea Roystonea Sabal Socratea Syagrus Wettinia Extinct genera 
Paschalococos – Extinct around AD 800 to 1600 Latanites – Middle Eocene to Early-Middle Oligocene Palaeoraphe – Miocene to around late Pliocene or early Pleistocene Palmoxylon – Late Cretaceous to Miocene Phoenicites – Cretaceous to Miocene Sabalites – Late Cretaceous to Miocene See also List of Arecaceae genera by alphabetical order Climbing palm Fan palm List of hardy palms References External links Classification on Palmweb Arecaceae Arecaceae
List of Arecaceae genera
[ "Biology" ]
5,673
[ "Lists of biota", "Lists of plants", "Plants" ]
173,072
https://en.wikipedia.org/wiki/Growth%20hormone
Growth hormone (GH) or somatotropin, also known as human growth hormone (hGH or HGH) in its human form, is a peptide hormone that stimulates growth, cell reproduction, and cell regeneration in humans and other animals. It is thus important in human development. GH also stimulates production of insulin-like growth factor 1 (IGF-1) and increases the concentration of glucose and free fatty acids. It is a type of mitogen which is specific only to the receptors on certain types of cells. GH is a 191-amino acid, single-chain polypeptide that is synthesized, stored and secreted by somatotropic cells within the lateral wings of the anterior pituitary gland. A recombinant form of HGH called somatropin (INN) is used as a prescription drug to treat children's growth disorders and adult growth hormone deficiency. In the United States, it is only available legally from pharmacies by prescription from a licensed health care provider. In recent years in the United States, some health care providers are prescribing growth hormone in the elderly to increase vitality. While legal, the efficacy and safety of this use for HGH has not been tested in a clinical trial. Many of the functions of HGH remain unknown. In its role as an anabolic agent, HGH has been used by competitors in sports since at least 1982 and has been banned by the IOC and NCAA. Traditional urine analysis does not detect doping with HGH, so the ban was not enforced until the early 2000s, when blood tests that could distinguish between natural and artificial HGH were starting to be developed. Blood tests conducted by WADA at the 2004 Olympic Games in Athens, Greece, targeted primarily HGH. Use of the drug for performance enhancement is not currently approved by the FDA. GH has been studied for use in raising livestock more efficiently in industrial agriculture and several efforts have been made to obtain governmental approval to use GH in livestock production. These uses have been controversial. In the United States, the only FDA-approved use of GH for livestock is the use of a cow-specific form of GH called bovine somatotropin for increasing milk production in dairy cows. Retailers are permitted to label containers of milk as produced with or without bovine somatotropin. Nomenclature The names somatotropin (STH) or somatotropic hormone refer to the growth hormone produced naturally in animals and extracted from carcasses. Hormone extracted from human cadavers is abbreviated hGH. The main growth hormone produced by recombinant DNA technology has the approved generic name (INN) somatropin and the brand name Humatrope and is properly abbreviated rhGH in the scientific literature. Since its introduction in 1992, Humatrope has been a banned sports doping agent and in this context is referred to as HGH. The term growth hormone has been incorrectly applied to refer to anabolic sex hormones in the European beef hormone controversy, which initially restricts the use of estradiol, progesterone, testosterone, zeranol, melengestrol acetate and trenbolone acetate. Biology Gene Genes for human growth hormone, known as growth hormone 1 (somatotropin; pituitary growth hormone) and growth hormone 2 (placental growth hormone; growth hormone variant), are localized in the q22-24 region of chromosome 17 and are closely related to human chorionic somatomammotropin (also known as placental lactogen) genes. GH, human chorionic somatomammotropin, and prolactin belong to a group of homologous hormones with growth-promoting and lactogenic activity. 
Structure The major isoform of the human growth hormone is a protein of 191 amino acids and a molecular weight of 22,124 daltons. The structure includes four helices necessary for functional interaction with the GH receptor. It appears that, in structure, GH is evolutionarily homologous to prolactin and chorionic somatomammotropin. Despite marked structural similarities between growth hormone from different species, only human and Old World monkey growth hormones have significant effects on the human growth hormone receptor. Several molecular isoforms of GH exist in the pituitary gland and are released to blood. In particular, a variant of approximately 20 kDa originated by an alternative splicing is present in a rather constant 1:9 ratio, while recently an additional variant of ~ 23-24 kDa has also been reported in post-exercise states at higher proportions. This variant has not been identified, but it has been suggested to coincide with a 22 kDa glycosylated variant of 23 kDa identified in the pituitary gland. Furthermore, these variants circulate partially bound to a protein (growth hormone-binding protein, GHBP), which is the truncated part of the growth hormone receptor, and an acid-labile subunit (ALS). Regulation Secretion of growth hormone (GH) in the pituitary is regulated by the neurosecretory nuclei of the hypothalamus. These cells release the peptides growth hormone-releasing hormone (GHRH or somatocrinin) and growth hormone-inhibiting hormone (GHIH or somatostatin) into the hypophyseal portal venous blood surrounding the pituitary. GH release in the pituitary is primarily determined by the balance of these two peptides, which in turn is affected by many physiological stimulators (e.g., exercise, nutrition, sleep) and inhibitors (e.g., free fatty acids) of GH secretion. Somatotropic cells in the anterior pituitary gland then synthesize and secrete GH in a pulsatile manner, in response to these stimuli by the hypothalamus. The largest and most predictable of these GH peaks occurs about an hour after onset of sleep with plasma levels of 13 to 72 ng/mL. Maximal secretion of GH may occur within minutes of the onset of slow-wave (SW) sleep (stage III or IV). Otherwise there is wide variation between days and individuals. Nearly fifty percent of GH secretion occurs during the third and fourth NREM sleep stages. Surges of secretion during the day occur at 3- to 5-hour intervals. The plasma concentration of GH during these peaks may range from 5 to even 45 ng/mL. Between the peaks, basal GH levels are low, usually less than 5 ng/mL for most of the day and night. Additional analysis of the pulsatile profile of GH described in all cases less than 1 ng/ml for basal levels while maximum peaks were situated around 10-20 ng/mL. A number of factors are known to affect GH secretion, such as age, sex, diet, exercise, stress, and other hormones. Young adolescents secrete GH at the rate of about 700 μg/day, while healthy adults secrete GH at the rate of about 400 μg/day. Sleep deprivation generally suppresses GH release, particularly after early adulthood. 
Stimulators of growth hormone (GH) secretion include: Peptide hormones GHRH (somatocrinin) through binding to the growth hormone-releasing hormone receptor (GHRHR) Ghrelin through binding to growth hormone secretagogue receptors (GHSR) Sex hormones Increased androgen secretion during puberty (in males from testes and in females from adrenal cortex) Testosterone and DHEA Estrogen Clonidine, moxonidine and L-DOPA by stimulating GHRH release α4β2 nicotinic agonists, including nicotine, which also act synergistically with clonidine or moxonidine. Hypoglycemia, arginine, pramipexole, ornitine, lysine, tryptophan, γ-Aminobutyric acid and propranolol by inhibiting somatostatin release Deep sleep Glucagon Sodium oxybate or γ-Hydroxybutyric acid Niacin as nicotinic acid (vitamin B3) Fasting Insulin Vigorous exercise Inhibitors of GH secretion include: GHIH (somatostatin) from the periventricular nucleus circulating concentrations of GH and IGF-1 (negative feedback on the pituitary and hypothalamus) Hyperglycemia Glucocorticoids Dihydrotestosterone Phenothiazines In addition to control by endogenous and stimulus processes, a number of foreign compounds (xenobiotics such as drugs and endocrine disruptors) are known to influence GH secretion and function. Function Effects of growth hormone on the tissues of the body can generally be described as anabolic (building up). Like most other peptide hormones, GH acts by interacting with a specific receptor on the surface of cells. Increased height during childhood is the most widely known effect of GH. Height appears to be stimulated by at least two mechanisms: Because polypeptide hormones are not fat-soluble, they cannot penetrate cell membranes. Thus, GH exerts some of its effects by binding to receptors on target cells, where it activates the MAPK/ERK pathway. Through this mechanism GH directly stimulates division and multiplication of chondrocytes of cartilage. GH also stimulates, through the JAK-STAT signaling pathway, the production of insulin-like growth factor 1 (IGF-1, formerly known as somatomedin C), a hormone homologous to proinsulin. The liver is a major target organ of GH for this process and is the principal site of IGF-1 production. IGF-1 has growth-stimulating effects on a wide variety of tissues. Additional IGF-1 is generated within target tissues, making it what appears to be both an endocrine and an autocrine/paracrine hormone. IGF-1 also has stimulatory effects on osteoblast and chondrocyte activity to promote bone growth. In addition to increasing height in children and adolescents, growth hormone has many other effects on the body: Increases calcium retention, and strengthens and increases the mineralization of bone Increases muscle mass through sarcomere hypertrophy Promotes lipolysis Increases protein synthesis Stimulates the growth of all internal organs excluding the brain Plays a role in homeostasis Reduces liver uptake of glucose Promotes gluconeogenesis in the liver Contributes to the maintenance and function of pancreatic islets Stimulates the immune system Increases deiodination of T4 to T3 Induces insulin resistance Biochemistry GH has a short biological half-life of about 10 to 20 minutes. Clinical significance Excess The most common disease of GH excess is a pituitary tumor composed of somatotroph cells of the anterior pituitary. These somatotroph adenomas are benign and grow slowly, gradually producing more and more GH. For years, the principal clinical problems are those of GH excess. 
Eventually, the adenoma may become large enough to cause headaches, impair vision by pressure on the optic nerves, or cause deficiency of other pituitary hormones by displacement. Prolonged GH excess thickens the bones of the jaw, fingers and toes, resulting in heaviness of the jaw and increased size of digits, referred to as acromegaly. Accompanying problems can include sweating, pressure on nerves (e.g., carpal tunnel syndrome), muscle weakness, excess sex hormone-binding globulin (SHBG), insulin resistance or even a rare form of type 2 diabetes, and reduced sexual function. GH-secreting tumors are typically recognized in the fifth decade of life. It is extremely rare for such a tumor to occur in childhood, but, when it does, the excessive GH can cause excessive growth, traditionally referred to as pituitary gigantism. Surgical removal is the usual treatment for GH-producing tumors. In some circumstances, focused radiation or a GH antagonist such as pegvisomant may be employed to shrink the tumor or block function. Other drugs like octreotide (somatostatin agonist) and bromocriptine (dopamine agonist) can be used to block GH secretion because both somatostatin and dopamine negatively inhibit GHRH-mediated GH release from the anterior pituitary. Deficiency The effects of growth hormone (GH) deficiency vary depending on the age at which they occur. Alterations in somatomedin can result in growth hormone deficiency with two known mechanisms; failure of tissues to respond to somatomedin, or failure of the liver to produce somatomedin. Major manifestations of GH deficiency in children are growth failure, the development of a short stature, and delayed sexual maturity. In adults, somatomedin alteration contributes to increased osteoclast activity, resulting in weaker bones that are more prone to pathologic fracture and osteoporosis. However, deficiency is rare in adults, with the most common cause being a pituitary adenoma. Other adult causes include a continuation of a childhood problem, other structural lesions or trauma, and very rarely idiopathic GHD. Adults with GHD "tend to have a relative increase in fat mass and a relative decrease in muscle mass and, in many instances, decreased energy and quality of life". Diagnosis of GH deficiency involves a multiple-step diagnostic process, usually culminating in GH stimulation tests to see if the patient's pituitary gland will release a pulse of GH when provoked by various stimuli. Psychological effects Quality of life Several studies, primarily involving patients with GH deficiency, have suggested a crucial role of GH in both mental and emotional well-being and maintaining a high energy level. Adults with GH deficiency often have higher rates of depression than those without. While GH replacement therapy has been proposed to treat depression as a result of GH deficiency, the long-term effects of such therapy are unknown. Cognitive function GH has also been studied in the context of cognitive function, including learning and memory. GH in humans appears to improve cognitive function and may be useful in the treatment of patients with cognitive impairment that is a result of GH deficiency. Medical uses Replacement therapy GH is used as replacement therapy in adults with GH deficiency of either childhood-onset or adult-onset (usually as a result of an acquired pituitary tumor). 
In these patients, benefits have variably included reduced fat mass, increased lean mass, increased bone density, improved lipid profile, reduced cardiovascular risk factors, and improved psychosocial well-being. Long acting growth hormone (LAGH) analogues are now available for treating growth hormone deficiency both in children and adults. These are once weekly injections as compared to conventional growth hormone which has to be taken as daily injections. LAGH injection 4 times a month has been found to be as safe and effective as daily growth hormone injections. Other approved uses GH can be used to treat conditions that produce short stature but are not related to deficiencies in GH. However, results are not as dramatic when compared to short stature that is solely attributable to deficiency of GH. Examples of other causes of shortness often treated with GH are Turner syndrome, Growth failure secondary to chronic kidney disease in children, Prader–Willi syndrome, intrauterine growth restriction, and severe idiopathic short stature. Higher ("pharmacologic") doses are required to produce significant acceleration of growth in these conditions, producing blood levels well above normal ("physiologic"). One version of rHGH has also been FDA approved for maintaining muscle mass in wasting due to AIDS. Off-label use Off-label prescription of HGH is controversial and may be illegal. Claims for GH as an anti-aging treatment date back to 1990 when the New England Journal of Medicine published a study wherein GH was used to treat 12 men over 60. At the conclusion of the study, all the men showed statistically significant increases in lean body mass and bone mineral density, while the control group did not. The authors of the study noted that these improvements were the opposite of the changes that would normally occur over a 10- to 20-year aging period. Despite the fact the authors at no time claimed that GH had reversed the aging process itself, their results were misinterpreted as indicating that GH is an effective anti-aging agent. This has led to organizations such as the controversial American Academy of Anti-Aging Medicine promoting the use of this hormone as an "anti-aging agent". A Stanford University School of Medicine meta-analysis of clinical studies on the subject published in early 2007 showed that the application of GH on healthy elderly patients increased muscle by about 2 kg and decreased body fat by the same amount. However, these were the only positive effects from taking GH. No other critical factors were affected, such as bone density, cholesterol levels, lipid measurements, maximal oxygen consumption, or any other factor that would indicate increased fitness. Researchers also did not discover any gain in muscle strength, which led them to believe that GH merely let the body store more water in the muscles rather than increase muscle growth. This would explain the increase in lean body mass. GH has also been used experimentally to treat multiple sclerosis, to enhance weight loss in obesity, as well as in fibromyalgia, heart failure, Crohn's disease and ulcerative colitis, and burns. GH has also been used experimentally in patients with short bowel syndrome to lessen the requirement for intravenous total parenteral nutrition. 
In 1990, the US Congress passed an omnibus crime bill, the Crime Control Act of 1990, that amended the Federal Food, Drug, and Cosmetic Act, that classified anabolic steroids as controlled substances and added a new section that stated that a person who "knowingly distributes, or possesses with intent to distribute, human growth hormone for any use in humans other than the treatment of a disease or other recognized medical condition, where such use has been authorized by the Secretary of Health and Human Services" has committed a felony. The Drug Enforcement Administration of the US Department of Justice considers off-label prescribing of HGH to be illegal, and to be a key path for illicit distribution of HGH. This section has also been interpreted by some doctors, most notably the authors of a commentary article published in the Journal of the American Medical Association in 2005, as meaning that prescribing HGH off-label may be considered illegal. And some articles in the popular press, such as those criticizing the pharmaceutical industry for marketing drugs for off-label use (with concern of ethics violations) have made strong statements about whether doctors can prescribe HGH off-label: "Unlike other prescription drugs, HGH may be prescribed only for specific uses. U.S. sales are limited by law to treat a rare growth defect in children and a handful of uncommon conditions like short bowel syndrome or Prader-Willi syndrome, a congenital disease that causes reduced muscle tone and a lack of hormones in sex glands." At the same time, anti-aging clinics where doctors prescribe, administer, and sell HGH to people are big business. In a 2012 article in Vanity Fair, when asked how HGH prescriptions far exceed the number of adult patients estimated to have HGH-deficiency, Dragos Roman, who leads a team at the FDA that reviews drugs in endocrinology, said "The F.D.A. doesn't regulate off-label uses of H.G.H. Sometimes it's used appropriately. Sometimes it's not." Side effects Injection site reactions are common. More rarely, patients can experience joint swelling, joint pain, carpal tunnel syndrome, and an increased risk of diabetes. In some cases, the patient can produce an immune response against GH. GH may also be a risk factor for Hodgkin's lymphoma. One survey of adults that had been treated with replacement cadaver GH (which has not been used anywhere in the world since 1985) during childhood showed a mildly increased incidence of colon cancer and prostate cancer, but linkage with the GH treatment was not established. Performance enhancement The first description of the use of GH as a doping agent was Dan Duchaine's "Underground Steroid handbook" which emerged from California in 1982; it is not known where and when GH was first used this way. Athletes in many sports have used human growth hormone in order to attempt to enhance their athletic performance. Some recent studies have not been able to support claims that human growth hormone can improve the athletic performance of professional male athletes. Many athletic societies ban the use of GH and will issue sanctions against athletes who are caught using it. However, because GH is a potent endogenous protein, it is very difficult to detect GH doping. In the United States, GH is legally available only by prescription from a medical doctor. 
Dietary supplements To capitalize on the idea that GH might be useful to combat aging, companies selling dietary supplements have websites selling products linked to GH in the advertising text, with medical-sounding names described as "HGH Releasers". Typical ingredients include amino acids, minerals, vitamins, and/or herbal extracts, the combination of which are described as causing the body to make more GH with corresponding beneficial effects. In the United States, because these products are marketed as dietary supplements, it is illegal for them to contain GH, which is a drug. Also, under United States law, products sold as dietary supplements cannot have claims that the supplement treats or prevents any disease or condition, and the advertising material must contain a statement that the health claims are not approved by the FDA. The FTC and the FDA do enforce the law when they become aware of violations. Agricultural use In the United States, it is legal to give a bovine GH to dairy cows to increase milk production, and is legal to use GH in raising cows for beef; see article on Bovine somatotropin, cattle feeding, dairy farming and the beef hormone controversy. The use of GH in poultry farming is illegal in the United States. Similarly, no chicken meat for sale in Australia is administered hormones. Several companies have attempted to have a version of GH for use in pigs (porcine somatotropin) approved by the FDA but all applications have been withdrawn. Drug development history Genentech pioneered the use of recombinant human growth hormone for human therapy, which was approved by the FDA in 1985. Prior to its production by recombinant DNA technology, growth hormone used to treat deficiencies was extracted from the pituitary glands of cadavers. Attempts to create a wholly synthetic HGH failed. Limited supplies of HGH resulted in the restriction of HGH therapy to the treatment of idiopathic short stature. Very limited clinical studies of growth hormone derived from an Old World monkey, the rhesus macaque, were conducted by John C. Beck and colleagues in Montreal, in the late 1950s. The study published in 1957, which was conducted on "a 13-year-old male with well-documented hypopituitarism secondary to a crainiophyaryngioma," found that: "Human and monkey growth hormone resulted in a significant enhancement of nitrogen storage ... (and) there was a retention of potassium, phosphorus, calcium, and sodium. ... There was a gain in body weight during both periods. ... There was a significant increase in urinary excretion of aldosterone during both periods of administration of growth hormone. This was most marked with the human growth hormone. ... Impairment of the glucose tolerance curve was evident after 10 days of administration of the human growth hormone. No change in glucose tolerance was demonstrable on the fifth day of administration of monkey growth hormone." The other study, published in 1958, was conducted on six people: the same subject as the Science paper; an 18-year-old male with statural and sexual retardation and a skeletal age of between 13 and 14 years; a 15-year-old female with well-documented hypopituitarism secondary to a craniopharyngioma; a 53-year-old female with carcinoma of the breast and widespread skeletal metastases; a 68-year-old female with advanced postmenopausal osteoporosis; and a healthy 24-year-old medical student without any clinical or laboratory evidence of systemic disease. 
In 1985, unusual cases of Creutzfeldt–Jakob disease were found in individuals that had received cadaver-derived HGH ten to fifteen years previously. Based on the assumption that infectious prions causing the disease were transferred along with the cadaver-derived HGH, cadaver-derived HGH was removed from the market. In 1985, biosynthetic human growth hormone replaced pituitary-derived human growth hormone for therapeutic use in the U.S. and elsewhere. As of 2005, recombinant growth hormones available in the United States (and their manufacturers) included Nutropin (Genentech), Humatrope (Lilly), Genotropin (Pfizer), Norditropin (Novo), and Saizen (Merck Serono). In 2006, the U.S. Food and Drug Administration (FDA) approved a version of rHGH called Omnitrope (Sandoz). A sustained-release form of growth hormone, Nutropin Depot (Genentech and Alkermes) was approved by the FDA in 1999, allowing for fewer injections (every 2 or 4 weeks instead of daily); however, the product was discontinued by Genentech/Alkermes in 2004 for financial reasons (Nutropin Depot required significantly more resources to produce than the rest of the Nutropin line). See also Acromegaly Somatopause Epigenetic clock References External links Anterior pituitary hormones Anti-aging substances Galactagogues Hormones of the somatotropic axis Peptide hormones Drugs developed by Pfizer Recombinant proteins World Anti-Doping Agency prohibited substances Stress hormones
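The article notes a biological half-life of roughly 10 to 20 minutes and a pulsatile secretion pattern whose peaks sit far above basal levels. As a rough illustration of what such a half-life implies, the sketch below models simple first-order (exponential) elimination of a single secretory pulse; the starting concentration and the single-compartment model are illustrative assumptions, not clinical values from the article.

```python
def remaining_fraction(t_minutes: float, half_life_minutes: float) -> float:
    """Fraction of an initial plasma concentration left after t minutes,
    assuming simple first-order elimination (a deliberately simplified model)."""
    return 0.5 ** (t_minutes / half_life_minutes)

peak_ng_per_ml = 20.0  # illustrative pulse height, not a clinical figure
for half_life in (10.0, 20.0):      # the 10-20 minute range cited in the article
    for t in (15, 30, 60, 120):     # minutes after the pulse
        level = peak_ng_per_ml * remaining_fraction(t, half_life)
        print(f"half-life {half_life:>4.0f} min, t={t:>3d} min: ~{level:5.2f} ng/mL")
```

With a 10 to 20 minute half-life, a secretory pulse decays to near-basal concentrations within an hour or two, which is one reason plasma GH is described in terms of pulses and troughs rather than a steady level; real clearance involves binding proteins (GHBP) and multi-compartment kinetics, so this first-order model is only a rough guide.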
Growth hormone
[ "Chemistry", "Biology" ]
5,513
[ "Senescence", "Anti-aging substances", "Recombinant proteins", "Biotechnology products" ]
173,077
https://en.wikipedia.org/wiki/Starwisp
Starwisp is a hypothetical unmanned interstellar probe design proposed by the late Robert L. Forward. It is propelled by a microwave sail, similar to a solar sail in concept, but powered by microwaves from a human-made source. It would fly through the target system without slowing down. Description "Starwisp" is a concept for an ultra-low-mass interstellar probe pushed by a microwave beam. It was proposed by scientist and author Robert L. Forward in 1985, and further work was published by Geoffrey A. Landis in 2000. The proposed device uses beam-powered propulsion in the form of a high-power microwave antenna pushing a sail. The probe itself would consist of a mesh of extremely fine carbon wires about 100 m across, with the wires spaced the same distance apart as the 3 mm wavelength of the microwaves that will be used to push it. Forward proposed that the wires would incorporate nanoscale computer circuitry, sensors, microwave power collection systems and microwave radio transmitters fabricated on the wire surfaces, giving the probe data collection and transmission capability. Being distributed across the entire sail, no "rigging" is needed, as would be the case if the mission electronics were placed in a separate probe that was pulled by the sail. The original Starwisp concept assumed that the microwaves would be efficiently reflected, with the wire mesh surface acting as a superconductor and nearly perfectly efficient mirror. This assumption is not valid. Landis showed that a grid will absorb a significant fraction of the power incident on it, and therefore cannot stay cool enough to be superconducting. The design is thermally limited, hence the use of carbon as the material in Landis's concept. Low mass was the key feature of the Starwisp probe. In Landis's calculations, the mesh has a density of only 100 kg/km2, for a total mass of 1 kg, plus a payload of 80 grams. Although the diffraction limit severely constrains the range of the transmitting antenna, the probe is designed to have an acceleration of 24 m/s2, so that it can reach a significant fraction of the speed of light within a very short distance, before passing out of range. The antenna uses a microwave lens 560 km in diameter, would transmit 56 GW of power, and would accelerate the probe to 10% of the speed of light. The probe would cruise without power for decades until it finally approached the target star, at which point the antenna which launched it would again target its beam on Starwisp. This would be done when the Starwisp was about 80% of the way to its destination, so that the beam and Starwisp would arrive there at the same time. At such extreme long range the antenna would be unable to provide any propulsion, but Starwisp would be able to use its wire sail to collect and convert some of the microwave energy into electricity to operate its sensors and transmit the data it collects back home. Starwisp would not slow down at the target star, performing a high-speed flyby mission instead. Since the antenna is only required for a few days at Starwisp's launch and again for another few days several decades later to power it while it passes its target, Starwisp probes might be mass-produced and launched by the maser every few days. In this manner, a continuous stream of data could be collected about distant solar systems even though any given Starwisp probe only spends a few days travelling through it. 
Alternatively, the launching transmitter could be used in the interim to transmit power to Earth for commercial use, as with a solar power satellite. Possible methods of fabrication Constructing such a delicate probe would be a significant challenge. One proposed method would be to "paint" the probe and its circuitry onto an enormous sheet of plastic which degrades when exposed to ultraviolet light, and then wait for the sheet to evaporate away under the assault of solar UV after it has been deployed in space. Another proposed method noted that the Starwisp probe wires were of the same physical scale as wires and circuit elements on modern computer microchips and could be produced by the same photolithographic fabrication technologies as those of computer chips. The probe would have to be built in sections the size of current chip fabrication silicon wafers and then connected together. Technical problems A major problem this design would face would be the radiation encountered en route. Travelling at 20% of light speed, ordinary interstellar hydrogen would become a significant radiation hazard, and the Starwisp would be without shielding and likely without active self-repair capability. Another problem would be keeping the acceleration of the Starwisp uniform enough across its sail area so that its delicate wires would not tear or be twisted out of shape. Distorting the shape of the Starwisp even slightly could result in a runaway catastrophe, since one portion of the Starwisp would be reflecting microwaves in a different direction than the other portion and be thrust even farther out of shape. Such delicate and finely-balanced control may prove impossible to realize. The possibility of using a dusty plasma sail in which a dusty substance that is maintained as a plasma within space is responsible for the reflection of electromagnetic radiation could circumvent problems associated with radiation damage to the medium responsible for the transfer of radiation pressure (the dusty plasma sail might not be as easy to damage as a thin film or the like). Dusty plasma sails can also adapt their three-dimensional structure in real time to ensure reflection perpendicular to any incident light/microwave beam. In fiction In a science fiction story Forward suggested that the beam from a solar power satellite could be used to push a Starwisp probe while the solar power satellite was being tested after construction. See also Breakthrough Starshot, a funded proposal for laser-propelled Starwisp-type spacecraft References External links Light Sails Small Laser-propelled Interstellar Probe Setting Sail for the Stars Beamed Power Propulsion To The Stars Hypothetical spacecraft Interstellar travel 1985 in science
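Several of the figures quoted above (a sail of about 1 kg plus an 80 g payload, an acceleration of 24 m/s², and a cruise speed of 10% of the speed of light) can be sanity-checked with simple kinematics. The sketch below does this non-relativistically, which is a reasonable approximation at 0.1 c; the choice of Alpha Centauri at about 4.37 light-years as the target is an assumption for illustration, since the article does not name a destination.

```python
C = 2.998e8            # speed of light, m/s
ACCEL = 24.0           # probe acceleration quoted in the article, m/s^2
V_CRUISE = 0.10 * C    # 10% of light speed
LY = 9.461e15          # one light-year, m
AU = 1.496e11          # one astronomical unit, m

# Non-relativistic kinematics during the boost phase (adequate at ~0.1 c).
boost_time_s = V_CRUISE / ACCEL
boost_dist_m = 0.5 * ACCEL * boost_time_s**2

# Unpowered cruise to an assumed target: Alpha Centauri, ~4.37 light-years away.
target_ly = 4.37
cruise_time_years = (target_ly * LY) / V_CRUISE / (365.25 * 86400)

print(f"boost lasts ~{boost_time_s / 86400:.1f} days")
print(f"boost covers ~{boost_dist_m / AU:.0f} AU (~{boost_dist_m / LY:.4f} light-years)")
print(f"cruise to {target_ly} ly takes ~{cruise_time_years:.0f} years")
```

These rough numbers agree with the article's description: the boost is over within roughly two weeks and about a hundred astronomical units, after which the probe coasts for several decades before the transmitter is re-aimed to power the flyby instruments.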
Starwisp
[ "Astronomy", "Technology" ]
1,218
[ "Hypothetical spacecraft", "Astronomical hypotheses", "Interstellar travel", "Exploratory engineering" ]
173,121
https://en.wikipedia.org/wiki/Claddagh%20ring
A Claddagh ring () is a traditional Irish ring in which a heart represents love, the crown stands for loyalty, and two clasped hands symbolize friendship. The design and customs associated with it originated in Claddagh, County Galway. Its modern form was first produced in the 17th century. Claddagh rings have been used as engagement and wedding rings in medieval and Renaissance Europe. The oldest surviving examples of the Claddagh ring have been forged by Bartholomew Fallon. Description The Claddagh ring belongs to a group of European finger rings called fede rings. The name derives from the Italian phrase ("hands [joined] in faith" or "hands [joined] in loyalty"). This group dates to Ancient Rome, where the gesture of clasping hands meant pledging vows. Cut or cast in bezels, they were used as engagement and wedding rings in medieval and Renaissance Europe to signify "plighted troth". In recent years it has been embellished with interlace designs and combined with other Celtic and Irish symbols, corresponding with its popularity as an emblem of Irish identity. Origins Galway has produced Claddagh rings continuously since at least 1700, but the name "Claddagh ring" was not used before the 1830s. Although there are various myths and legends around the origin of the Claddagh ring, it is almost certain that it originated in or close to the small fishing village of Claddagh in Galway. As an example of a maker, Bartholomew Fallon was a 17th-century Irish goldsmith, based in Galway, who made Claddagh rings until circa 1700. His name first appears in the will of one Dominick Martin, also a jeweller, dated 26 January 1676, in which Martin willed Fallon some of his tools. Fallon continued working as a goldsmith until 1700. His are among the oldest surviving examples of the Claddagh ring, in many cases bearing his signature. There are many legends about the origins of the ring, particularly concerning Richard Joyce, a silversmith from Galway circa 1700, who is said to have invented the Claddagh design. Legend has it that Joyce was captured and enslaved by Algerian Corsairs around 1675 while on a passage to the West Indies; he was sold into slavery to a Moorish goldsmith who taught him the craft. King William III sent an ambassador to Algeria to demand the release of any and all British subjects who were enslaved in that country, which at the time would have included Richard Joyce. After fourteen years, Joyce was released and returned to Galway and brought along with him the ring he had fashioned while in captivity: what we've come to know as the Claddagh. He gave the ring to his sweetheart, married, and became a goldsmith with "considerable success". His initials are in one of the earliest surviving Claddagh rings, but there are three other rings also made around that time bearing the mark of goldsmith Thomas Meade. The Victorian antiquarian Sir William Jones described the Claddagh, and gives Chambers' Book of Days as the source, in his book Finger-Ring Lore. Jones says: An account written in 1906 by William Dillon, a Galway jeweller, claimed that the "Claddagh" ring was worn in the Aran Isles, Connemara and beyond. Knowledge of the ring and its customs spread within Ireland and Britain during the Victorian period, and this is when its name became established. Galway jewellers began to market it beyond the local area in the 19th century. Further recognition came in the 20th century. 
Usage and symbolism The Claddagh's distinctive design features two hands clasping a heart and usually surmounted by a crown. These elements symbolize the qualities of love (the heart), friendship (the hands), and loyalty (the crown). A "Fenian" Claddagh ring, without a crown, is a slightly different take on the design but has not achieved the level of popularity of the crowned version. Claddagh rings are relatively popular among the Irish and those of Irish heritage, such as Irish Americans, as cultural symbols and as friendship, engagement, and wedding rings. While Claddagh rings are sometimes used as friendship rings, they are most commonly used as engagement and wedding rings. Mothers sometimes give these rings to their daughters when they come of age. There are several mottos and wishes associated with the ring, such as: "Let love and friendship reign." In Ireland, the United States, Canada, and other parts of the Irish diaspora, the Claddagh is sometimes handed down mother-to-eldest daughter or grandmother-to-granddaughter. According to Irish author Colin Murphy, a Claddagh ring is traditionally worn with the intention of conveying the wearer's relationship status: On the right hand with the point of the heart toward the fingertips: the wearer is single and might be looking for love. On the right hand with the point of the heart toward the wrist: the wearer is in a relationship; someone "has captured their heart" On the left ring finger with the point of the heart toward the fingertips: the wearer is engaged. On the left ring finger with the point of the heart toward the wrist: the wearer is married. There are other localized variations and oral traditions, in both Ireland and the Irish diaspora, involving the hand and the finger on which the Claddagh is worn. Folklore about the ring is relatively recent, not ancient, with the lore about them almost wholly based in oral tradition; there is "very little native Irish writing about the ring", hence, the difficulty today in finding any scholarly or non-commercial source that explains the traditional ways of wearing the ring. Modern usage The Claddagh ring can be seen on the fingers of political figures, Hollywood icons, and literary figures. Public figures including John F. Kennedy, Ronald Reagan and Bill Clinton have worn the Claddagh ring. Kennedy and his wife received theirs on a trip to Galway in 1963. Reagan and Clinton both received the rings as a gift from Ireland. Royalty, such as Queen Victoria, King Edward VII and Queen Alexandria were seen wearing the Claddagh ring after 1849 when they traveled to Ireland. After visiting Ireland with his wife, Walt Disney was seen wearing the Claddagh ring. It is also apparent on the Partners Statue in Disney World. On the statue his ring was facing outward, although he was married. The ring can be found on actors such as Maureen O'Hara and John Wayne, who received their rings during the movie "The Quiet Man". Peter O'Toole and Daniel Day-Lewis were frequently seen wearing the Claddagh ring, as well as Mia Farrow and Gabriel Byrne. Jim Morrison and Patricia Kennealy completed their Celtic wedding with Claddagh rings. In the television show, Buffy the Vampire Slayer, the Claddagh ring is seen when Angel presents the ring to Buffy as a birthday present. In the novel, Goldfinger, Jill Masterton is wearing the ring. However, without the crown to emphasize her lack of loyalty. Another book titled, "Unfinished Business", featured the ring, as the actor Joe Pistone stated his wife gifted it to him. 
The Claddagh ring was also seen in Days of Our Lives when Bo presented one to Carly. See also References External links 18th century Claddagh ring – Victoria and Albert museum 17th-century introductions Culture in Galway (city) Engagement Culture of Ireland Rings (jewellery) Wedding objects Heart symbols
Claddagh ring
[ "Mathematics" ]
1,520
[ "Heart symbols", "Symbols" ]
173,155
https://en.wikipedia.org/wiki/Public%20good%20%28economics%29
In economics, a public good (also referred to as a social good or collective good) is a good that is both non-excludable and non-rivalrous. Use by one person neither prevents access by other people, nor does it reduce availability to others. Therefore, the good can be used simultaneously by more than one person. This is in contrast to a common good, such as wild fish stocks in the ocean, which is non-excludable but rivalrous to a certain degree. If too many fish were harvested, the stocks would deplete, limiting access to fish for others. A public good must be valuable to more than one user; otherwise, its simultaneous availability to more than one person would be economically irrelevant.
Capital goods may be used to produce public goods or services that are "...typically provided on a large scale to many consumers." Similarly, using capital goods to produce public goods may result in the creation of new capital goods. In some cases, public goods or services are considered "...insufficiently profitable to be provided by the private sector.... (and), in the absence of government provision, these goods or services would be produced in relatively small quantities or, perhaps, not at all."
Public goods include knowledge, official statistics, national security, common languages, law enforcement, broadcast radio, flood control systems, aids to navigation, and street lighting. Collective goods that are spread all over the face of the Earth may be referred to as global public goods. This includes physical book literature, but also media, pictures and videos. For instance, knowledge is well shared globally. Information about men's, women's and youth health awareness, environmental issues, and maintaining biodiversity is common knowledge that every individual in the society can get without necessarily preventing others' access. Also, sharing and interpreting contemporary history with a cultural lexicon (particularly about protected cultural heritage sites and monuments) is another source of knowledge that people can freely access.
Public goods problems are often closely related to the "free-rider" problem, in which people not paying for the good may continue to access it. Thus, the good may be under-produced, overused or degraded. Public goods may also become subject to restrictions on access and may then be considered to be club goods; exclusion mechanisms include toll roads, congestion pricing, and pay television with an encoded signal that can be decrypted only by paid subscribers. There is a good deal of debate and literature on how to measure the significance of public goods problems in an economy, and to identify the best remedies.
Academic literature
Paul A. Samuelson is usually credited as the economist who articulated the modern theory of public goods in a mathematical formalism, building on earlier work of Wicksell and Lindahl. In his classic 1954 paper The Pure Theory of Public Expenditure, he defined a public good, or as he called it in the paper a "collective consumption good", as follows:
[goods] which all enjoy in common in the sense that each individual's consumption of such a good leads to no subtractions from any other individual's consumption of that good...
Many mechanisms have been proposed to achieve efficient public goods provision in various settings and under various assumptions.
Lindahl tax
A Lindahl tax is a type of taxation brought forward by Erik Lindahl, an economist from Sweden, in 1919.
His idea was to tax individuals for the provision of a public good according to the marginal benefit they receive. Public goods are costly and eventually someone needs to pay the cost, yet it is difficult to determine how much each person should pay. Lindahl therefore developed a theory of how the expense of public goods should be settled: people pay for public goods according to how they benefit from them, so the more a person benefits from these goods, the higher the amount they pay. People are more willing to pay for goods that they value, and so are willing to bear the burden of the taxes needed to fund them. The theory thus turns on people's willingness to pay for the public good. Because, under Lindahl's scheme, public goods are paid for through taxation, the government is the natural body to provide these goods and services.
Vickrey–Clarke–Groves mechanism
Vickrey–Clarke–Groves mechanisms (VCG) are one of the best-studied procedures for funding public goods. VCG encompasses a wide class of similar mechanisms, but most work focuses on the Clarke Pivot Rule, which ensures that all individuals pay into the public good and that the mechanism is individually rational. The main issue with the VCG mechanism is that it requires a very large amount of information from each user: participants may not have a detailed sense of their utility function with respect to different funding levels. Compare this with other mechanisms that only require users to provide a single contribution amount. This, among other issues, has prevented the use of VCG mechanisms in practice. However, it is still possible that VCG mechanisms could be adopted among a set of sophisticated actors.
Quadratic funding
Quadratic funding (QF) is one of the newest innovations in public goods funding mechanisms. The idea of quadratic voting was turned into a mechanism for public goods funding by Buterin, Hitzig, and Weyl and is now referred to as quadratic funding. Quadratic funding has a close theoretical link with the VCG mechanism, and like VCG, it requires a subsidy in order to induce incentive compatibility and efficiency. Both mechanisms also fall prone to collusion between players and sybil attacks. However, in contrast to VCG, contributors only have to submit a single contribution; the total funding directed to the public good is based on the square of the sum of the square roots of the individual contributions. It can be proved that there is always a deficit that the mechanism designer must pay. One technique to reduce collusion is to identify groups of contributors that will likely coordinate and lower the subsidy going to their preferred causes.
Assurance contracts
First proposed by Bagnoli and Lipman, assurance contracts have a simple and intuitive appeal. Each funder agrees to spend a certain amount towards a public good conditional on the total funding being sufficient to produce the good. If not everyone agrees to the terms, then no money is spent on the project. Donors can feel assured that their money will only be spent if there is sufficient support for the public good. Assurance contracts work particularly well with smaller groups of easily identifiable participants, especially when the game can be repeated. Several crowdfunding platforms such as Kickstarter and IndieGoGo have used assurance contracts to support various projects (though not all of them are public goods). A minimal sketch of this threshold logic is given below.
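The conditional pledge at the heart of an assurance contract can be sketched in a few lines of code. The following Python fragment is only an illustration under simplifying assumptions: the donor names, pledge amounts, and threshold are hypothetical, and a real platform would add escrow, deadlines, and payment handling.

```python
# Minimal sketch of a basic assurance contract (hypothetical names and figures).

def settle_assurance_contract(pledges, threshold):
    """Collect pledges only if their total reaches the production threshold.

    pledges   -- mapping of donor name to pledged amount
    threshold -- minimum total needed to produce the public good
    Returns (funded, amount collected from each donor).
    """
    total = sum(pledges.values())
    if total >= threshold:
        # The good is produced; every pledge is collected.
        return True, dict(pledges)
    # The contract fails; nobody pays anything.
    return False, {donor: 0 for donor in pledges}


pledges = {"alice": 40, "bob": 25, "carol": 50}            # hypothetical donors
print(settle_assurance_contract(pledges, threshold=100))   # funded, since 115 >= 100
```

Tabarrok's dominant assurance variant, discussed next, would change only the failure branch: instead of returning zero, each pledger would receive their money back plus a refund bonus.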
Assurance contracts can be used for non-monetary coordination as well; for example, the Free State Project obtained mutual commitments from 20,000 individuals to move to New Hampshire in a bid to influence the politics of the state. Alex Tabarrok suggested a modification called dominant assurance contracts, where the mechanism designer gives every contributor a refund bonus if the contract fails. For example, in addition to returning their contributions, the mechanism designer might give all contributors an additional $5 if the total donations aren't sufficient to support the project. If there's a chance that the contract will fail, a refund bonus incentivizes people to participate in the mechanism, making the all-pay equilibrium more likely. This comes with the drawback that the mechanism designer must pay the participants in some cases (e.g. when the contract fails), which is a common theme. Zubrickas proposed a simple modification of dominant assurance contracts in which people are given a refund bonus proportional to the amount they offered to donate; this incentivizes larger contributions than the fixed refund from Tabarrok's original proposal.
There have been many variations on the idea of conditional donations towards a public good. For example, the Conditional Contributions Mechanism allows donors to make variable-sized commitments to fund the project conditional on the total amount committed. Similarly, the Binary Conditional Contributions Mechanism allows users to condition their donation on the number of unique funders. Extensions such as the Street Performer Protocol consider time-limited spending commitments.
Lotteries
Lotteries have historically been used as a means to finance public goods. Morgan initiated the first formal study of lotteries as a public goods funding mechanism. Since then, lotteries have undergone extensive theoretical and experimental research. Combined with their historical success, lotteries are a promising crowdfunding mechanism. They work by using an external source of funding to provide a lottery prize. Individual "donors" buy lottery tickets for a chance to receive the cash prize, knowing that ticket sales will be spent towards the public good. A winner is selected randomly from one of the tickets and receives the entire lottery prize, while all lottery proceeds from ticket sales are spent towards the public good. Like the other mechanisms, this approach requires subsidies in the form of a lottery prize in order to function. It can be shown that altruistic donors can generate more funding for the good by donating towards the lottery prize rather than buying tickets directly. Lotteries are approximately efficient public goods funding mechanisms, and the level of funding approaches the optimal level as the prize grows. However, in the limit of large populations, contributions from the lottery mechanism converge to those of voluntary contributions and should fall to zero.
Role of non-profits
Public goods provision is in most cases part of governmental activities. In the introductory section of his book, Public Good Theories of the Nonprofit Sector, Bruce R. Kingma stated that:
In the Weisbrod model nonprofit organizations satisfy a demand for public goods, which is left unfilled by government provision. The government satisfies the demand of the median voters and therefore provides a level of the public good less than some citizens' - those with a level of demand greater than the median voter's - desire.
This unfilled demand for the public good is satisfied by nonprofit organizations. These nonprofit organizations are financed by the donations of citizens who want to increase the output of the public good.
Terminology and types of goods
Non-rivalrous: accessible by all, while one person's usage of the product does not affect its availability for subsequent use.
Non-excludability: it is impossible to exclude any individuals from consuming the good. Pay walls, memberships and gates are common ways to create excludability.
Pure public good: when a good exhibits the two traits, non-rivalry and non-excludability, it is referred to as a pure public good. Pure public goods are rare.
Impure public goods: goods that satisfy the two public good conditions (non-rivalry and non-excludability) only to a certain extent or only some of the time. For instance, some aspects of cybersecurity, such as threat intelligence and vulnerability information sharing, collective response to cyber-attacks, the integrity of elections, and critical infrastructure protection, have the characteristics of impure public goods.
Private good: the opposite of a public good, which does not possess these properties. A loaf of bread, for example, is a private good; its owner can exclude others from using it, and once it has been consumed, it cannot be used by others.
Common-pool resource: a good that is rivalrous but non-excludable. Such goods raise similar issues to public goods: the mirror to the public goods problem for this case is the "tragedy of the commons", where unfettered access to a good sometimes results in the overconsumption and thus depletion of that resource. For example, it is so difficult to enforce restrictions on deep-sea fishing that the world's fish stocks can be seen as a non-excludable resource, but one which is finite and diminishing.
Club goods: goods that are excludable but non-rivalrous, such as private parks.
Mixed good: final goods that are intrinsically private but that are produced by the individual consumer by means of private and public good inputs. The benefits enjoyed from such a good for any one individual may depend on the consumption of others, as in the cases of a crowded road or a congested national park.
Definition matrix
                Excludable        Non-excludable
Rivalrous       Private goods     Common-pool resources
Non-rivalrous   Club goods        Public goods
Challenges in identifying public goods
The definition of non-excludability states that it is impossible to exclude individuals from consumption. Technology now allows radio or TV broadcasts to be encrypted such that persons without a special decoder are excluded from the broadcast. Many forms of information goods have characteristics of public goods. For example, a poem can be read by many people without reducing the consumption of that good by others; in this sense, it is non-rivalrous. Similarly, the information in most patents can be used by any party without reducing consumption of that good by others. Official statistics provide a clear example of information goods that are public goods, since they are created to be non-excludable. Creative works may be excludable in some circumstances, however: the individual who wrote the poem may decline to share it with others by not publishing it. Copyrights and patents both encourage the creation of such non-rival goods by providing temporary monopolies, or, in the terminology of public goods, providing a legal mechanism to enforce excludability for a limited period of time.
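The fourfold classification in the definition matrix above amounts to answering two yes/no questions about a good. The short Python sketch below encodes that mapping; it is purely illustrative, the function name is my own, and the example goods in the comments echo the article's illustrations.

```python
def classify_good(excludable: bool, rivalrous: bool) -> str:
    """Classify a good using the standard excludability/rivalry matrix."""
    if excludable and rivalrous:
        return "private good"          # e.g. a loaf of bread
    if excludable and not rivalrous:
        return "club good"             # e.g. a private park
    if not excludable and rivalrous:
        return "common-pool resource"  # e.g. ocean fish stocks
    return "public good"               # e.g. national defense, street lighting

print(classify_good(excludable=False, rivalrous=False))  # public good
print(classify_good(excludable=True, rivalrous=True))    # private good
```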
For public goods, the "lost revenue" of the producer of the good is not part of the definition: a public good is a good whose consumption does not reduce any other's consumption of that good. Public goods also incorporate private goods, which makes it challenging to define what is private or public. For instance, you may think that the community soccer field is a public good. However, you need to bring your own cleats and ball to be able to play. There is also a rental fee that you would have to pay for you to be able to occupy that space. It is a mixed case of public and private goods. Debate has been generated among economists whether such a category of "public goods" exists. Steven Shavell has suggested the following: when professional economists talk about public goods they do not mean that there are a general category of goods that share the same economic characteristics, manifest the same dysfunctions, and that may thus benefit from pretty similar corrective solutions...there is merely an infinite series of particular problems (some of overproduction, some of underproduction, and so on), each with a particular solution that cannot be deduced from the theory, but that instead would depend on local empirical factors. There is a common misconception that public goods are goods provided by the public sector. Although it is often the case that government is involved in producing public goods, this is not always true. Public goods may be naturally available, or they may be produced by private individuals, by firms, or by non-state groups, called collective action. The theoretical concept of public goods does not distinguish geographic region in regards to how a good may be produced or consumed. However, some theorists, such as Inge Kaul, use the term "global public good" for a public good that is non-rivalrous and non-excludable throughout the whole world, as opposed to a public good that exists in just one national area. Knowledge has been argued as an example of a global public good, but also as a commons, the knowledge commons. Graphically, non-rivalry means that if each of several individuals has a demand curve for a public good, then the individual demand curves are summed vertically to get the aggregate demand curve for the public good. This is in contrast to the procedure for deriving the aggregate demand for a private good, where individual demands are summed horizontally. Some writers have used the term "public good" to refer only to non-excludable "pure public goods" and refer to excludable public goods as "club goods". Digital public goods Digital public goods include software, data sets, AI models, standards and content that are open source. Use of the term “digital public good” appears as early as April, 2017 when Nicholas Gruen wrote Building the Public Goods of the Twenty-First Century, and has gained popularity with the growing recognition of the potential for new technologies to be implemented at scale to effectively serve people. Digital technologies have also been identified by countries, NGOs and private sector entities as a means to achieve the Sustainable Development Goals (SDGs). 
A digital public good is defined by the UN Secretary-General's Roadmap for Digital Cooperation as: "open source software, open data, open AI models, open standards and open content that adhere to privacy and other applicable laws and best practices, do no harm, and help attain the SDGs."
Examples
Common examples of public goods include:
public fireworks
clean air and other environmental goods
information goods, such as official statistics
free and open-source software
authorship
public television
radio
invention
herd immunity
Wikipedia
national defense
fire service
flood defense
street lights
Misclassified public goods
Some goods, like orphan drugs, require special governmental incentives to be produced, but cannot be classified as public goods since they do not fulfill the above requirements (non-excludable and non-rivalrous). Law enforcement, streets, libraries, museums, and education are commonly misclassified as public goods, but they are technically classified in economic terms as quasi-public goods: excludability is possible, but they still fit some of the characteristics of public goods.
The provision of a lighthouse is a standard example of a public good, since it is difficult to exclude ships from using its services. No ship's use detracts from that of others, but since most of the benefit of a lighthouse accrues to ships using particular ports, lighthouse maintenance can be profitably bundled with port fees (Ronald Coase, The Lighthouse in Economics, 1974). This has been sufficient to fund actual lighthouses.
Technological progress can create new public goods. The simplest examples are street lights, which are relatively recent inventions (by historical standards). One person's enjoyment of them does not detract from other persons' enjoyment, and it currently would be prohibitively expensive to charge individuals separately for the amount of light they presumably use. Official statistics are another example: the government's ability to collect, process and provide high-quality information to guide decision-making at all levels has been strongly advanced by technological progress. On the other hand, a public good's status may change over time. Technological progress can significantly affect the excludability of traditional public goods: encryption allows broadcasters to sell individual access to their programming, and the costs of electronic road pricing have fallen dramatically, paving the way for detailed billing based on actual use.
Public goods are not restricted to human beings; their provision is one aspect of the study of cooperation in biology.
Free rider problem
The free rider problem is a primary issue in collective decision-making. An example is that some firms in a particular industry will choose not to participate in a lobby whose purpose is to affect government policies that could benefit the industry, under the assumption that there are enough participants to result in a favourable outcome without them. The free rider problem is also a form of market failure, in which market-like behavior of individual gain-seeking does not produce economically efficient results. The production of public goods results in positive externalities which are not remunerated. If private organizations do not reap all the benefits of a public good which they have produced, their incentives to produce it voluntarily might be insufficient. Consumers can take advantage of public goods without contributing sufficiently to their creation.
This is called the free rider problem, or occasionally, the "easy rider problem". If too many consumers decide to "free-ride", private costs exceed private benefits and the incentive to provide the good or service through the market disappears. The market thus fails to provide a good or service for which there is a need.
The free rider problem depends on a conception of the human being as Homo economicus: purely rational and also purely selfish, extremely individualistic, considering only those benefits and costs that directly affect him or her. Public goods give such a person an incentive to be a free rider. For example, consider national defence, a standard example of a pure public good. Suppose Homo economicus thinks about exerting some extra effort to defend the nation. The benefits to the individual of this effort would be very low, since the benefits would be distributed among all of the millions of other people in the country. There is also a very high possibility that he or she could get injured or killed during the course of his or her military service. On the other hand, the free rider knows that he or she cannot be excluded from the benefits of national defense, regardless of whether he or she contributes to it. There is also no way that these benefits can be split up and distributed as individual parcels to people. The free rider would not voluntarily exert any extra effort, unless there is some inherent pleasure or material reward for doing so (for example, money paid by the government, as with an all-volunteer army or mercenaries).
The free-riding problem is even more complicated than it was thought to be until recently. Any time non-excludability results in failure to pay the true marginal value (often called the "demand revelation problem"), it will also result in failure to generate proper income levels, since households will not give up valuable leisure if they cannot individually increment a good. This implies that, for public goods without strong special interest support, under-provision is likely, since cost–benefit analysis is being conducted at the wrong income levels, and all of the un-generated income would have been spent on the public good, apart from general equilibrium considerations. In the case of information goods, an inventor of a new product may benefit all of society, but hardly anyone is willing to pay for the invention if they can benefit from it for free. In the case of an information good, however, because of its characteristics of non-excludability and also because of almost zero reproduction costs, commoditization is difficult and not always efficient even from a neoclassical economic point of view.
Efficient production levels
The socially optimal provision of a public good in a society occurs when the sum of the marginal valuations of the public good (taken across all individuals) is equal to the marginal cost of providing that public good (the Samuelson condition). These marginal valuations are, formally, marginal rates of substitution relative to some reference private good, and the marginal cost is a marginal rate of transformation that describes how much of that private good it costs to produce an incremental unit of the public good. This contrasts with the social optimality condition for private goods, which equates each consumer's valuation of the private good to its marginal cost of production.
For example, consider a community of just two consumers, where the government is considering whether or not to build a public park.
One person is prepared to pay up to $200 for its use, while the other is willing to pay up to $100. The total value to the two individuals of having the park is $300. If it can be produced for $225, there is a $75 surplus from providing the park, since it delivers services that the community values at $300 at a cost of only $225.
The classical theory of public goods defines efficiency under idealized conditions of complete information, a situation already acknowledged in Wicksell (1896). Samuelson emphasized that this poses problems for the efficient provision of public goods in practice and for the assessment of an efficient Lindahl tax to finance public goods, because individuals have incentives to underreport how much they value public goods. Subsequent work, especially in mechanism design and the theory of public finance, developed how valuations and costs could actually be elicited in practical conditions of incomplete information, using devices such as the Vickrey–Clarke–Groves mechanism. Thus, deeper analysis of problems of public goods motivated much work that is at the heart of modern economic theory.
Local public goods
The basic theory of public goods as discussed above begins with situations where the level of a public good (e.g., quality of the air) is equally experienced by everyone. However, in many important situations of interest, the incidence of benefits and costs is not so simple. For example, when people at a workplace keep an office clean or residents monitor a neighborhood for signs of crime, the benefits of that effort accrue to some people (those in their neighborhoods) more than to others. The overlapping structure of these neighborhoods is often modeled as a network. (When neighborhoods are totally separate, i.e., non-overlapping, the standard model is the Tiebout model.)
An example of a local public good that can benefit everyone, even people from outside the neighborhood, is a bus service. A college student visiting a friend who studies in another city with a bus service benefits from that service just as the city's own residents and students do. The visitor also enters into the associated balance of benefits and costs: they benefit by taking the bus rather than walking to their destination, while others may prefer to walk and so avoid adding to the accompanying problems of congestion and pollution from automobile exhaust.
In 2019, economists developed the theory of local public goods with overlapping neighborhoods, or public goods in networks: both their efficient provision, and how much can be provided voluntarily in a non-cooperative equilibrium. When it comes to socially efficient provision, networks that are more dense or close-knit in terms of how much people can benefit each other have more scope for improving on an inefficient status quo. On the other hand, voluntary provision is typically below the efficient level, and equilibrium outcomes tend to involve strong specialization, with a few individuals contributing heavily and their neighbors free-riding on those contributions.
Ownership
Economic theorists such as Oliver Hart (1995) have emphasized that ownership matters for investment incentives when contracts are incomplete. The incomplete contracting paradigm has been applied to public goods by Besley and Ghatak (2001). They consider the government and a non-governmental organization (NGO) who can both make investments to provide a public good.
Besley and Ghatak argue that the party who has a larger valuation of the public good should be the owner, regardless of whether the government or the NGO has a better investment technology. This result contrasts with the case of private goods studied by Hart (1995), where the party with the better investment technology should be the owner. However, it has been shown that the investment technology may matter also in the public-good case when a party is indispensable or when there are bargaining frictions between the government and the NGO. Halonen-Akatwijuka and Pafilis (2020) have demonstrated that Besley and Ghatak's results are not robust when there is a long-term relationship, such that the parties interact repeatedly. Moreover, Schmitz (2021) has shown that when the parties have private information about their valuations of the public good, then the investment technology can be an important determinant of the optimal ownership structure.
See also
Anti-rival good
Excludability
Lindahl tax, a method proposed by Erik Lindahl for financing public goods
Private-collective model of innovation, which explains the creation of public goods by private investors
Public bad
Public trust doctrine
Public goods game, a standard of experimental economics
Public works, government-financed constructions
Privileged group
Tragedy of the commons
Tragedy of the anticommons
Rivalry (economics)
Quadratic funding, a mechanism to allocate funding for the production of public goods based on democratic principles
References
Bibliography
Further reading
Acoella, Nicola (2006), 'Distributive issues in the provision and use of global public goods', Studi economici, 88(1): 23–42.
Zittrain, Jonathan, The Future of the Internet: And How to Stop It, 2008.
Lessig, Lawrence, Code 2.0, Chapter 7, "What Things Regulate".
External links
Public Goods: A Brief Introduction, by The Linux Information Project (LINFO)
Global Public Goods – analysis from Global Policy Forum
The Nature of Public Goods
Hardin, Russell, "The Free Rider Problem", The Stanford Encyclopedia of Philosophy (Spring 2013 Edition), Edward N. Zalta (ed.)
Community building
Goods (economics)
Market failure
Public economics
Good
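To make the "Efficient production levels" discussion above concrete, the snippet below restates the two-consumer park example as a vertical summation of willingness to pay against cost. The dollar figures come from the example itself; the function is only an illustrative sketch, not a standard economics library routine.

```python
# Vertical summation for a public good: add every individual's willingness to
# pay for the same unit, then compare the total with the cost of provision.

def provision_surplus(valuations, cost):
    total_value = sum(valuations)   # $200 + $100 = $300 in the park example
    return total_value - cost       # positive surplus means provision is efficient

print(provision_surplus([200, 100], cost=225))   # 75 -> the park is worth building
```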
Public good (economics)
[ "Physics" ]
5,823
[ "Materials", "Goods (economics)", "Matter" ]
173,160
https://en.wikipedia.org/wiki/Lens%20flare
A lens flare happens when light is scattered, or flared, in a lens system, often in response to a bright light, producing a sometimes undesirable artifact in the image. This happens through light scattered by the imaging mechanism itself, for example through internal reflection and forward scatter from material imperfections in the lens. Lenses with large numbers of elements such as zooms tend to have more lens flare, as they contain a relatively large number of interfaces at which internal scattering may occur. These mechanisms differ from the focused image generation mechanism, which depends on rays from the refraction of light from the subject itself.
There are two types of flare: visible artifacts and glare across the image. The glare makes the image look "washed out" by reducing contrast and color saturation (adding light to dark image regions, and adding white to saturated regions, reducing their saturation). Visible artifacts, usually in the shape of the aperture made by the iris diaphragm, are formed when light follows a pathway through the lens that contains one or more reflections from the lens surfaces.
Flare is particularly caused by very bright light sources. Most commonly, this occurs when aiming toward the Sun (when the Sun is in frame or the lens is pointed sunward), and is reduced by using a lens hood or other shade. For good-quality optical systems, and for most images (which do not have a bright light shining into the lens), flare is a secondary effect that is widely distributed across the image and thus not visible, although it does reduce contrast.
Manifestation
The spatial distribution of the lens flare typically manifests as several starbursts, rings, or circles in a row across the image or view. Lens flare patterns typically spread widely across the scene and change location with the camera's movement relative to light sources, tracking with the light position and fading as the camera points away from the bright light until it causes no flare at all. The specific spatial distribution of the flare depends on the shape of the aperture of the image formation elements. For example, if the lens has a 6-bladed aperture, the flare may have a hexagonal pattern. Such internal scattering is also present in the human eye, and manifests in an unwanted veiling glare most obvious when viewing very bright lights or highly reflective surfaces. In some situations, eyelashes can also create flare-like irregularities, although these are technically diffraction artifacts.
When a bright light source is shining on the lens but not in its field of view, lens flare appears as a haze that washes out the image and reduces contrast. This can be avoided by shading the lens using a lens hood. In a studio, a gobo or set of barn doors can be attached to the lighting to keep it from shining on the camera. Filters can be attached to the camera lens which will also minimise lens flare, which is especially useful for outdoor photographers.
When using an anamorphic lens, as is common in analog cinematography, lens flare can manifest itself as horizontal lines. This is most commonly seen in car headlights in a dark scene, and may be desired as part of the "film look".
Deliberate use
A lens flare is often deliberately used to invoke a sense of drama. A lens flare is also useful when added to an artificial or modified image composition because it adds a sense of realism, implying that the image is an un-edited original photograph of a "real life" scene.
For both of these reasons (implying realism and/or drama), artificial lens flare is a common effect in various graphics editing programs, although its use can be a point of contention among professional graphic designers. Lens flare was one of the first special effects developed for computer graphics because it can be imitated using relatively simple means. Basic flare-like effects, for instance in video games, can be obtained by drawing starburst, ring, and disc textures over the image and moving them as the location of the light source changes. More sophisticated rendering techniques have been developed based on ray tracing or photon mapping.
Lens flare was typically avoided by Hollywood cinematographers, but the director J. J. Abrams deliberately added numerous lens flares to his films Star Trek (2009) and Super 8 (2011) by aiming powerful off-camera light sources at the lens. He explained in an interview about Star Trek: "I wanted a visual system that felt unique. I know there are certain shots where even I watch and think, 'Oh that's ridiculous, that was too many.' But I love the idea that the future was so bright it couldn't be contained in the frame." Many complained of the frequent use; Abrams conceded it was "overdone, in some places." In contrast, the low-budget independent film Easy Rider (1969) contains numerous incidental lens flares that resulted from Harrison Arnold's need to modify a camera car for his Arriflex as he shot motorcycle footage against landscapes of the Southwestern United States. David Boyd, the director of photography of the sci-fi Firefly series, desired this style's evocation of 1970s television so much that he sent back cutting-edge lenses that reduced lens flare in exchange for cheaper ones.
Other forms of photographic flare
Filter flare
The use of photographic filters can cause flare, particularly ghosts of bright lights (under central inversion). This can be eliminated by not using a filter, and reduced by using higher-quality filters or a narrower aperture.
Diffraction artifact in digital cameras
One form of flare is specific to digital cameras. With the sun shining on an unprotected lens, a group of small rainbows appears. This artifact is formed by internal diffraction on the image sensor, which acts like a diffraction grating. Unlike true lens flare, this artifact is not visible in the eyepiece of a digital SLR camera, making it more difficult to avoid.
Gallery
See also
Anti-reflective coating, used to reduce lens flare; it produces the red and green colors common in lens flare.
Bokeh, a source of circles around out-of-focus bright points, also due in part to the internals of the lens.
Diffraction spike, a type of lens flare seen in some telescopes.
References
Science of photography
Lenses
Optical phenomena
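The sprite-based technique mentioned under "Deliberate use" above (drawing starburst, ring, and disc textures that track the light source) can be sketched in a few lines. The Python fragment below shows only the placement and fading logic; the spacings and fade rule are hypothetical values of my own, not constants from any particular engine, and a real renderer would composite actual textures into the framebuffer at the computed positions.

```python
# Minimal sketch of sprite-based lens flare placement (illustrative only).
# Coordinates are normalized so that the image centre is (0.5, 0.5).

def flare_sprites(light_x, light_y, spacings=(0.0, 0.35, 0.7, 1.1, 1.6)):
    """Return (x, y, opacity) for each flare sprite.

    Sprites sit on the line from the light source through the image centre,
    which is how flare artifacts track a moving light. Opacity fades as the
    light moves away from the centre of the frame, approximating the flare
    disappearing when the camera points away from the light.
    """
    cx, cy = 0.5, 0.5
    dx, dy = cx - light_x, cy - light_y          # direction toward the centre
    dist = (dx * dx + dy * dy) ** 0.5
    brightness = max(0.0, 1.0 - dist / 0.7)      # hypothetical fade rule
    return [(light_x + dx * t, light_y + dy * t, brightness) for t in spacings]

for x, y, a in flare_sprites(0.8, 0.3):
    print(f"draw sprite at ({x:.2f}, {y:.2f}) with opacity {a:.2f}")
```

Placing the sprites along the line from the light source through the image centre is what makes the artifacts appear to slide across the frame as the camera or light moves, matching the behaviour described in the Manifestation section.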
Lens flare
[ "Physics" ]
1,267
[ "Optical phenomena", "Physical phenomena" ]