Glory (optical phenomenon)
https://en.wikipedia.org/wiki/Glory%20%28optical%20phenomenon%29
A glory is an optical phenomenon, resembling an iconic saint's halo around the shadow of the observer's head, caused by sunlight or (more rarely) moonlight interacting with the tiny water droplets that comprise mist or clouds. The glory consists of one or more concentric, successively dimmer rings, each of which is red on the outside and bluish towards the centre. Due to its appearance, the phenomenon is sometimes mistaken for a circular rainbow, but the latter has a much larger diameter and is caused by different physical processes.
Glories arise due to wave interference of light internally refracted within small droplets.
Appearance and observation
Depending on circumstances (such as the uniformity of droplet size in the clouds), one or more of the glory's rings can be visible. The rings are rarely complete, being interrupted by the shadow of the viewer. The angular size of the inner and brightest ring is much smaller than that of a rainbow, about 5° to 20°, depending on the size of the droplets. In the right conditions, a glory and a rainbow can occur simultaneously.
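The inverse relationship between droplet size and ring size can be illustrated with a rough back-of-envelope model. The sketch below assumes the inner ring's angular radius scales like a diffraction angle, θ ≈ λ/d; this is an illustrative toy scaling only, as a faithful prediction requires full Mie or complex-angular-momentum calculations (see the Theory section below).

```python
import math

def glory_ring_angle_deg(wavelength_m: float, droplet_diameter_m: float) -> float:
    """Toy estimate of the angular radius of the glory's inner ring,
    assuming a diffraction-like scaling: theta ~ lambda / d."""
    return math.degrees(wavelength_m / droplet_diameter_m)

# Red light (~650 nm) on cloud droplets of a few micrometres:
for d_um in (2, 5, 10):
    theta = glory_ring_angle_deg(650e-9, d_um * 1e-6)
    print(f"droplet {d_um:2d} um -> ring radius ~ {theta:4.1f} deg")
# Smaller droplets give wider rings; droplets of a few micrometres
# reproduce the 5-20 degree scale quoted above.
```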
"Glories can be seen on mountains and hillsides, from aircraft and in sea fog and even indoors."
Like a rainbow, outdoor glories are centred on the antisolar (or, in case of the moon, antilunar) point, which coincides with the shadow of the observer's head. Because this point is diametrically opposite to the sun's (or moon's) position in the sky, it usually lies below the observer's horizon except at sun (or moon) rise and set. Outdoor glories are commonly observed from aircraft. In the latter case, if the plane is flying sufficiently low for its shadow to be visible on the clouds, the glory always surrounds it. This is sometimes called The Glory of the Pilot.
In 2024, astronomers suggested that a glory might explain certain observations of the exoplanet WASP-76b. If this interpretation is confirmed, it would be the first glory-like phenomenon discovered beyond the Solar System.
Brocken spectre
When viewed from a mountain or tall building, glories are often seen in association with a "Brocken spectre": the apparently enormously magnified shadow of an observer, cast (when the sun is low) on clouds below the mountain on which the viewer is standing. The name derives from the Brocken, the tallest peak of the Harz mountain range in Germany. Because the peak is above the cloud level and the area is frequently misty, conditions conducive to casting a shadow on a cloud layer are common. Giant shadows that seemed to move by themselves due to movement of the cloud layer (this movement is another part of the definition of the Brocken spectre), and that were surrounded by glories, may have contributed to the reputation the Harz mountains hold as a refuge for witches and evil spirits. In Goethe's Faust, the Brocken is called the Blocksberg and is the site of the Witches' Sabbath on Walpurgis Night.
Ulloa's halo
In one of the first reports of the phenomenon by Europeans, two members of the French Geodesic Mission to the Equator, Antonio de Ulloa and Pierre Bouguer, described how, while walking near the summit of the Pambamarca mountain in the Ecuadorian Andes, they saw their shadows projected on a lower-lying cloud, with a circular "halo or glory" around the shadow of each observer's head.
This was then called "Ulloa's halo" or "Bouguer's halo". Ulloa reported that the glories were surrounded by a larger ring of white light, which would today be called a fog bow. On other occasions, he observed arches of white light formed by reflected moonlight, whose explanation is unknown but which may have been related to ice-crystal halos.
Theory
Modern theories of light, first described by Henri Poincaré in 1887, are able to explain the phenomenon of glories through the complex angular momentum (rotation) of the electromagnetic field of a light wave, and do not need quantum theories. A summary of rainbow-like phenomena was provided in Scientific American in 1977.
Most 20th-century work on the phenomenon of rainbows and glories has focused on determining the correct intensity of light at each point in the phenomenon, which does not require quantum theories. In 1947, the Dutch astronomer Hendrik van de Hulst suggested that surface waves are involved. He speculated that the brightness of the coloured rings of the glory is caused by two-ray interference between "short" and "long" path surface waves, which are generated by light rays entering the droplets at diametrically opposite points (both rays suffer one internal reflection). A theory by Brazilian physicist Herch Moysés Nussenzveig suggests that the light energy beamed back by a glory originates mostly from classical wave tunneling (synonymous in the paper with evanescent wave coupling), an interaction between an evanescent light wave travelling along the surface of the drop and the waves inside the drop.
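Van de Hulst's two-ray picture can be caricatured numerically: two equal-amplitude waves with an angle-dependent path difference interfere, producing alternating bright and dark rings. The sketch below is schematic only; the linear path-difference function is a placeholder assumption standing in for the true geometry of the short- and long-path surface waves.

```python
import math

WAVELENGTH = 650e-9  # red light, in metres

def two_ray_intensity(theta_deg: float, path_diff_per_deg: float = 2e-7) -> float:
    """Interference of two equal-amplitude rays whose path difference is
    assumed to grow linearly with angle from the antisolar point."""
    delta = path_diff_per_deg * theta_deg       # path difference, metres
    phase = 2 * math.pi * delta / WAVELENGTH    # phase difference, radians
    return (2 * math.cos(phase / 2)) ** 2       # |1 + e^(i*phase)|^2

# Scanning outward from the antisolar point, maxima (4.0) and minima (0.0)
# alternate, giving concentric rings like those of the glory.
for theta in range(0, 11):
    print(f"{theta:2d} deg  intensity {two_ray_intensity(theta):4.2f}")
```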
In culture
C. T. R. Wilson saw a glory while working as a temporary observer at the Ben Nevis weather station. Inspired by the impressive sight, he decided to build a device for creating clouds in the laboratory, so that he could make a synthetic, small-scale glory. His work led directly to the cloud chamber, a device for detecting ionizing radiation for which he and Arthur Compton received the Nobel Prize for Physics in 1927.
In China, the phenomenon is called Buddha's light (or halo). It is often observed on cloud-shrouded high mountains, such as Huangshan and Mount Emei. Records of the phenomenon at Mount Emei date back to A.D. 63. The colourful halo always surrounds the observer's own shadow, and thus was often taken to show the observer's personal enlightenment (associated with Buddha or divinity).
Stylized glories appear occasionally in Western heraldry. Two glories appear on the Great Seal of the United States: A glory breaking through clouds surrounding a cluster of 13 stars on the obverse, and a glory surrounding the Eye of Providence surmounting an unfinished pyramid on the reverse.
Landscape ecology
https://en.wikipedia.org/wiki/Landscape%20ecology
Landscape ecology is the science of studying and improving relationships between ecological processes in the environment and particular ecosystems. This is done within a variety of landscape scales, spatial patterns of development, and organizational levels of research and policy. Landscape ecology can be described as the science of "landscape diversity" as the synergetic result of biodiversity and geodiversity.
As a highly interdisciplinary field in systems science, landscape ecology integrates biophysical and analytical approaches with humanistic and holistic perspectives across the natural sciences and social sciences. Landscapes are spatially heterogeneous geographic areas characterized by diverse interacting patches or ecosystems, ranging from relatively natural terrestrial and aquatic systems such as forests, grasslands, and lakes to human-dominated environments including agricultural and urban settings.
The most salient characteristics of landscape ecology are its emphasis on the relationship among pattern, process and scales, and its focus on broad-scale ecological and environmental issues. These necessitate the coupling between biophysical and socioeconomic sciences. Key research topics in landscape ecology include ecological flows in landscape mosaics, land use and land cover change, scaling, relating landscape pattern analysis with ecological processes, and landscape conservation and sustainability. Landscape ecology also studies the role of human impacts on landscape diversity in the development and spreading of new human pathogens that could trigger epidemics.
Terminology
The German term Landschaftsökologie – thus landscape ecology – was coined by German geographer Carl Troll in 1939. He developed this terminology and many early concepts of landscape ecology as part of his early work, which consisted of applying aerial photograph interpretation to studies of interactions between environment and vegetation.
Explanation
Heterogeneity is the measure of how parts of a landscape differ from one another. Landscape ecology looks at how this spatial structure affects organism abundance at the landscape level, as well as the behavior and functioning of the landscape as a whole. This includes studying the influence of pattern, or the internal order of a landscape, on process, or the continuous operation of functions of organisms. Landscape ecology also includes geomorphology as applied to the design and architecture of landscapes. Geomorphology is the study of how geological formations are responsible for the structure of a landscape.
History
Evolution of theory
One central landscape ecology theory originated from MacArthur & Wilson's The Theory of Island Biogeography. This work considered the biodiversity on islands as the result of competing forces of colonization from a mainland stock and stochastic extinction. The concepts of island biogeography were generalized from physical islands to abstract patches of habitat by Levins' metapopulation model (which can be applied e.g. to forest islands in the agricultural landscape). This generalization spurred the growth of landscape ecology by providing conservation biologists a new tool to assess how habitat fragmentation affects population viability. Recent growth of landscape ecology owes much to the development of geographic information systems (GIS) and the availability of large-extent habitat data (e.g. remotely sensed datasets).
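Levins' generalization can be made concrete: the fraction p of habitat patches occupied evolves as dp/dt = c·p(1 - p) - e·p, with colonization rate c and extinction rate e, and the metapopulation persists at equilibrium p* = 1 - e/c only while c > e. A minimal sketch, with invented rate values, simply integrates that equation:

```python
def levins_step(p: float, c: float, e: float, dt: float = 0.01) -> float:
    """One Euler step of Levins' metapopulation model:
    dp/dt = c*p*(1 - p) - e*p, with p the occupied-patch fraction."""
    return p + dt * (c * p * (1 - p) - e * p)

# Illustrative rates only: colonization c, extinction e.
for c, e in [(0.5, 0.2), (0.2, 0.5)]:
    p = 0.1
    for _ in range(5000):
        p = levins_step(p, c, e)
    print(f"c={c}, e={e}: equilibrium occupancy ~ {p:.2f} "
          f"(theory: {max(0.0, 1 - e / c):.2f})")
# Fragmentation that lowers c or raises e past the c > e threshold
# drives the metapopulation to extinction (p -> 0).
```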
Development as a discipline
Landscape ecology developed in Europe from historical planning on human-dominated landscapes. Concepts from general ecology theory were integrated in North America. While general ecology theory and its sub-disciplines focused on the study of more homogenous, discrete community units organized in a hierarchical structure (typically as ecosystems, populations, species, and communities), landscape ecology built upon heterogeneity in space and time. It frequently included human-caused landscape changes in theory and application of concepts.
By 1980, landscape ecology was a discrete, established discipline. It was marked by the organization of the International Association for Landscape Ecology (IALE) in 1982. Landmark book publications defined the scope and goals of the discipline, including those by Naveh and Lieberman and by Forman and Godron. Forman wrote that although study of "the ecology of spatial configuration at the human scale" was barely a decade old, there was strong potential for theory development and application of the conceptual framework.
Today, theory and application of landscape ecology continues to develop through a need for innovative applications in a changing landscape and environment. Landscape ecology relies on advanced technologies such as remote sensing, GIS, and models. There has been associated development of powerful quantitative methods to examine the interactions of patterns and processes. An example would be determining the amount of carbon present in the soil based on landform over a landscape, derived from GIS maps, vegetation types, and rainfall data for a region. Remote sensing work has been used to extend landscape ecology to the field of predictive vegetation mapping, for instance by Janet Franklin.
Definitions/conceptions of landscape ecology
Nowadays, at least six different conceptions of landscape ecology can be identified: one group tending toward the more disciplinary concept of ecology (subdiscipline of biology; in conceptions 2, 3, and 4) and another group—characterized by the interdisciplinary study of relations between human societies and their environment—inclined toward the integrated view of geography (in conceptions 1, 5, and 6):
Interdisciplinary analysis of subjectively defined landscape units (e.g. Neef School): Landscapes are defined in terms of uniformity in land use. Landscape ecology explores the landscape's natural potential in terms of functional utility for human societies. To analyse this potential, it is necessary to draw on several natural sciences.
Topological ecology at the landscape scale (e.g. Forman & Godron): 'Landscape' is defined as a heterogeneous land area composed of a cluster of interacting ecosystems (woods, meadows, marshes, villages, etc.) that is repeated in similar form throughout. It is explicitly stated that landscapes are areas at a kilometres-wide human scale of perception, modification, etc. Landscape ecology describes and explains the landscapes' characteristic patterns of ecosystems and investigates the flux of energy, mineral nutrients, and species among their component ecosystems, providing important knowledge for addressing land-use issues.
Organism-centered, multi-scale topological ecology (e.g. John A. Wiens): Explicitly rejecting views expounded by Troll, Zonneveld, Naveh, Forman & Godron, etc., landscape and landscape ecology are defined independently of human perceptions, interests, and modifications of nature. 'Landscape' is defined – regardless of scale – as the 'template' on which spatial patterns influence ecological processes. Not humans, but rather the respective species being studied is the point of reference for what constitutes a landscape.
Topological ecology at the landscape level of biological organisation (e.g. Urban et al.): On the basis of ecological hierarchy theory, it is presupposed that nature is working at multiple scales and has different levels of organisation which are part of a rate-structured, nested hierarchy. Specifically, it is claimed that, above the ecosystem level, a landscape level exists which is generated and identifiable by high interaction intensity between ecosystems, a specific interaction frequency and, typically, a corresponding spatial scale. Landscape ecology is defined as ecology that focuses on the influence exerted by spatial and temporal patterns on the organisation of, and interaction among, functionally integrated multispecies ecosystems.
Analysis of social-ecological systems using the natural and social sciences and humanities (e.g. Leser; Naveh; Zonneveld): Landscape ecology is defined as an interdisciplinary super-science that explores the relationship between human societies and their specific environment, making use of not only various natural sciences, but also social sciences and humanities. This conception is grounded in the assumption that social systems are linked to their specific ambient ecological system in such a way that both systems together form a co-evolutionary, self-organising unity called 'landscape'. Societies' cultural, social and economic dimensions are regarded as an integral part of the global ecological hierarchy, and landscapes are claimed to be the manifest systems of the 'total human ecosystem' (Naveh) which encompasses both the physical ('geospheric') and mental ('noospheric') spheres.
Ecology guided by cultural meanings of lifeworldly landscapes (frequently pursued in practice but not defined, but see, e.g., Hard; Trepl): Landscape ecology is defined as ecology that is guided by an external aim, namely, to maintain and develop lifeworldly landscapes. It provides the ecological knowledge necessary to achieve these goals. It investigates how to sustain and develop those populations and ecosystems which (i) are the material 'vehicles' of lifeworldly, aesthetic and symbolic landscapes and, at the same time, (ii) meet societies' functional requirements, including provisioning, regulating, and supporting ecosystem services. Thus landscape ecology is concerned mainly with the populations and ecosystems which have resulted from traditional, regionally specific forms of land use.
Relationship to ecological theory
Some research programmes of landscape ecology theory, namely those standing in the European tradition, may be slightly outside of the "classical and preferred domain of scientific disciplines" because of the large, heterogeneous areas of study. However, general ecology theory is central to landscape ecology theory in many aspects. Landscape ecology consists of four main principles: the development and dynamics of spatial heterogeneity, interactions and exchanges across heterogeneous landscapes, influences of spatial heterogeneity on biotic and abiotic processes, and the management of spatial heterogeneity. The main difference from traditional ecological studies, which frequently assume that systems are spatially homogenous, is the consideration of spatial patterns.
Important terms
Landscape ecology not only created new terms, but also incorporated existing ecological terms in new ways. Many of the terms used in landscape ecology are as interconnected and interrelated as the discipline itself.
Landscape
Certainly, 'landscape' is a central concept in landscape ecology. It is, however, defined in quite different ways. For example: Carl Troll conceives of landscape not as a mental construct but as an objectively given 'organic entity', a harmonic individuum of space.
Ernst Neef defines landscapes as sections within the uninterrupted earth-wide interconnection of geofactors which are defined as such on the basis of their uniformity in terms of a specific land use, and are thus defined in an anthropocentric and relativistic way.
According to Richard Forman and Michel Godron, a landscape is a heterogeneous land area composed of a cluster of interacting ecosystems that is repeated in similar form throughout, whereby they list woods, meadows, marshes and villages as examples of a landscape's ecosystems, and state that a landscape is an area at least a few kilometres wide.
John A. Wiens opposes the traditional view expounded by Carl Troll, Isaak S. Zonneveld, Zev Naveh, Richard T. T. Forman/Michel Godron and others that landscapes are arenas in which humans interact with their environments on a kilometre-wide scale; instead, he defines 'landscape'—regardless of scale—as "the template on which spatial patterns influence ecological processes". Some define 'landscape' as an area containing two or more ecosystems in close proximity.
Scale and heterogeneity (incorporating composition, structure, and function)
A main concept in landscape ecology is scale. Scale represents the real world as translated onto a map, relating distance on a map image and the corresponding distance on earth. Scale is also the spatial or temporal measure of an object or a process, or amount of spatial resolution. Components of scale include composition, structure, and function, which are all important ecological concepts. Applied to landscape ecology, composition refers to the number of patch types (see below) represented on a landscape and their relative abundance. For example, the amount of forest or wetland, the length of forest edge, or the density of roads can be aspects of landscape composition. Structure is determined by the composition, the configuration, and the proportion of different patches across the landscape, while function refers to how each element in the landscape interacts based on its life cycle events. Pattern is the term for the contents and internal order of a heterogeneous area of land.
A landscape with structure and pattern implies that it has spatial heterogeneity, or the uneven distribution of objects across the landscape. Heterogeneity is a key element of landscape ecology that separates this discipline from other branches of ecology. Landscape heterogeneity can also be quantified with agent-based methods.
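A simple numerical illustration: the sketch below, using a made-up categorical land-cover raster, computes landscape composition (the relative abundance of each patch type) and a Shannon diversity index, one common heterogeneity measure. Real analyses typically apply dedicated packages such as FRAGSTATS to classified maps.

```python
import math
from collections import Counter

# Toy categorical raster: F = forest, G = grassland, W = wetland.
landscape = [
    "FFFFGG",
    "FFFGGG",
    "FFWWGG",
    "GGWWGG",
]

cells = Counter(c for row in landscape for c in row)
total = sum(cells.values())

# Composition: relative abundance of each patch type.
proportions = {t: n / total for t, n in cells.items()}
print("composition:", {t: round(p, 2) for t, p in proportions.items()})

# Shannon diversity: higher values indicate a more heterogeneous landscape.
shannon = -sum(p * math.log(p) for p in proportions.values())
print(f"Shannon diversity H' = {shannon:.2f}")
```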
Patch and mosaic
Patch, a term fundamental to landscape ecology, is defined as a relatively homogeneous area that differs from its surroundings. Patches are the basic unit of the landscape that change and fluctuate, a process called patch dynamics. Patches have a definite shape and spatial configuration, and can be described compositionally by internal variables such as number of trees, number of tree species, height of trees, or other similar measurements.
Matrix is the "background ecological system" of a landscape with a high degree of connectivity. Connectivity is the measure of how connected or spatially continuous a corridor, network, or matrix is. For example, a forested landscape (matrix) with fewer gaps in forest cover (open patches) will have higher connectivity. Corridors have important functions as strips of a particular type of landscape differing from adjacent land on both sides. A network is an interconnected system of corridors while mosaic describes the pattern of patches, corridors, and matrix that form a landscape in its entirety.
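Connectivity of a patch mosaic can be estimated by labelling connected clusters of like cells: fewer, larger clusters indicate higher connectivity. A minimal sketch, using an iterative flood fill over a toy binary forest map and treating 4-neighbour cells as connected:

```python
def count_components(grid: list[str], target: str = "F") -> int:
    """Count 4-connected clusters of `target` cells via flood fill."""
    rows, cols = len(grid), len(grid[0])
    seen: set[tuple[int, int]] = set()
    components = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != target or (r, c) in seen:
                continue
            components += 1
            stack = [(r, c)]            # flood-fill one cluster
            while stack:
                y, x = stack.pop()
                if (y, x) in seen:
                    continue
                seen.add((y, x))
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and grid[ny][nx] == target):
                        stack.append((ny, nx))
    return components

connected = ["FFFF", "FFFF", "FFFF"]    # one intact patch
fragmented = ["F.F.", ".F.F", "F.F."]   # isolated cells only
print(count_components(connected), count_components(fragmented))  # 1 6
```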
Boundary and edge
Landscape patches have a boundary between them which can be defined or fuzzy. The zone composed of the edges of adjacent ecosystems is the boundary. Edge means the portion of an ecosystem near its perimeter, where influences of the adjacent patches can cause an environmental difference between the interior of the patch and its edge. This edge effect includes a distinctive species composition or abundance. For example, when a landscape is a mosaic of perceptibly different types, such as a forest adjacent to a grassland, the edge is the location where the two types adjoin. In a continuous landscape, such as a forest giving way to open woodland, the exact edge location is fuzzy and is sometimes determined by a local gradient exceeding a threshold, such as the point where the tree cover falls below thirty-five percent.
Ecotones, ecoclines, and ecotopes
A type of boundary is the ecotone, or the transitional zone between two communities. Ecotones can arise naturally, such as a lakeshore, or can be human-created, such as a cleared agricultural field from a forest. The ecotonal community retains characteristics of each bordering community and often contains species not found in the adjacent communities. Classic examples of ecotones include fencerows, forest to marshlands transitions, forest to grassland transitions, or land-water interfaces such as riparian zones in forests. Characteristics of ecotones include vegetational sharpness, physiognomic change, occurrence of a spatial community mosaic, many exotic species, ecotonal species, spatial mass effect, and species richness higher or lower than either side of the ecotone.
An ecocline is another type of landscape boundary, but it is a gradual and continuous change in environmental conditions of an ecosystem or community. Ecoclines help explain the distribution and diversity of organisms within a landscape because certain organisms survive better under certain conditions, which change along the ecocline. They contain heterogeneous communities which are considered more environmentally stable than those of ecotones. An ecotope is a spatial term representing the smallest ecologically distinct unit in mapping and classification of landscapes. Relatively homogeneous, they are spatially explicit landscape units used to stratify landscapes into ecologically distinct features. They are useful for the measurement and mapping of landscape structure, function, and change over time, and to examine the effects of disturbance and fragmentation.
Disturbance and fragmentation
Disturbance is an event that significantly alters the pattern of variation in the structure or function of a system. Fragmentation is the breaking up of a habitat, ecosystem, or land-use type into smaller parcels. Disturbance is generally considered a natural process. Fragmentation causes land transformation, an important process in landscapes as development occurs.
An important consequence of repeated, random clearing (whether by natural disturbance or human activity) is that contiguous cover can break down into isolated patches. This happens when the area cleared exceeds a critical level, which means that landscapes exhibit two phases: connected and disconnected.
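This connected/disconnected behaviour is essentially a percolation phenomenon and is easy to demonstrate: randomly retain a fraction of habitat cells on a grid and track the largest remaining cluster, which collapses sharply once cover drops below the site-percolation threshold (about 0.59 for a 4-connected square lattice). A minimal simulation, reusing the flood-fill idea from the connectivity sketch above:

```python
import random

def largest_cluster_fraction(n: int, cover: float, seed: int = 0) -> float:
    """Fraction of an n x n grid occupied by the largest 4-connected
    cluster of remaining habitat cells, at a given habitat cover."""
    rng = random.Random(seed)
    habitat = {(r, c) for r in range(n) for c in range(n)
               if rng.random() < cover}
    seen: set[tuple[int, int]] = set()
    best = 0
    for cell in habitat:
        if cell in seen:
            continue
        stack, size = [cell], 0
        while stack:
            y, x = stack.pop()
            if (y, x) in seen:
                continue
            seen.add((y, x))
            size += 1
            for nb in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                if nb in habitat and nb not in seen:
                    stack.append(nb)
        best = max(best, size)
    return best / (n * n)

# The largest cluster collapses near the percolation threshold (~0.593).
for cover in (0.4, 0.55, 0.6, 0.65, 0.8):
    frac = largest_cluster_fraction(200, cover)
    print(f"cover {cover:.2f} -> largest cluster spans {frac:.2f} of grid")
```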
Theory
Landscape ecology theory stresses the role of human impacts on landscape structures and functions. It also proposes ways for restoring degraded landscapes. Landscape ecology explicitly includes humans as entities that cause functional changes on the landscape. Landscape ecology theory includes the landscape stability principle, which emphasizes the importance of landscape structural heterogeneity in developing resistance to disturbances, recovery from disturbances, and promoting total system stability. This principle is a major contribution to general ecological theories which highlight the importance of relationships among the various components of the landscape.
Integrity of landscape components helps maintain resistance to external threats, including development and land transformation by human activity. Analysis of land use change has included a strongly geographical approach which has led to the acceptance of the idea of multifunctional properties of landscapes. There are still calls for a more unified theory of landscape ecology due to differences in professional opinion among ecologists and its interdisciplinary approach (Bastian 2001).
An important related theory is hierarchy theory, which refers to how systems of discrete functional elements operate when linked at two or more scales. For example, a forested landscape might be hierarchically composed of drainage basins, which in turn are composed of local ecosystems, which are in turn composed of individual trees and gaps. Recent theoretical developments in landscape ecology have emphasized the relationship between pattern and process, as well as the effect that changes in spatial scale has on the potential to extrapolate information across scales. Several studies suggest that the landscape has critical thresholds at which ecological processes will show dramatic changes, such as the complete transformation of a landscape by an invasive species due to small changes in temperature characteristics which favor the invasive's habitat requirements.
Application
Research directions
Developments in landscape ecology illustrate the important relationships between spatial patterns and ecological processes. These developments incorporate quantitative methods that link spatial patterns and ecological processes at broad spatial and temporal scales. This linkage of time, space, and environmental change can assist managers in applying plans to solve environmental problems. The increased attention in recent years on spatial dynamics has highlighted the need for new quantitative methods that can analyze patterns, determine the importance of spatially explicit processes, and develop reliable models. Multivariate analysis techniques are frequently used to examine landscape level vegetation patterns. Studies use statistical techniques, such as cluster analysis, canonical correspondence analysis (CCA), or detrended correspondence analysis (DCA), for classifying vegetation. Gradient analysis is another way to determine the vegetation structure across a landscape or to help delineate critical wetland habitat for conservation or mitigation purposes (Choesin and Boerner 2002).
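As a toy illustration of the clustering step, the sketch below groups hypothetical vegetation plots by species abundances with k-means. The plot data and the choice of two clusters are invented for the example, and scikit-learn is assumed to be available; real studies would combine such clustering with ordination methods such as CCA or DCA on field samples.

```python
import numpy as np
from sklearn.cluster import KMeans

# Rows: vegetation plots; columns: abundances of three species (invented).
plots = np.array([
    [9.0, 1.0, 0.0],   # plots 0-2: dominated by species A
    [8.0, 2.0, 1.0],
    [7.5, 0.5, 0.5],
    [1.0, 6.0, 7.0],   # plots 3-5: dominated by species B and C
    [0.0, 7.0, 8.0],
    [2.0, 5.5, 6.5],
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(plots)
print("plot cluster assignments:", labels)  # e.g. [0 0 0 1 1 1]
```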
Climate change is another major component in structuring current research in landscape ecology. Ecotones, as a basic unit in landscape studies, may have significance for management under climate change scenarios, since change effects are likely to be seen at ecotones first because of the unstable nature of a fringe habitat. Research in northern regions has examined landscape ecological processes, such as the accumulation of snow, melting, freeze-thaw action, percolation, soil moisture variation, and temperature regimes through long-term measurements in Norway. The study analyzes gradients across space and time between ecosystems of the central high mountains to determine relationships between distribution patterns of animals in their environment. Looking at where animals live, and how vegetation shifts over time, may provide insight into changes in snow and ice over long periods of time across the landscape as a whole.
Other landscape-scale studies maintain that human impact is likely the main determinant of landscape pattern over much of the globe. Landscapes may become substitutes for biodiversity measures because plant and animal composition differs between samples taken from sites within different landscape categories. Taxa, or different species, can "leak" from one habitat into another, which has implications for landscape ecology. As human land use practices expand and continue to increase the proportion of edges in landscapes, the effects of this leakage across edges on assemblage integrity may become more significant in conservation. This is because taxa may be conserved across landscape levels, if not at local levels.
Land change modeling
Land change modeling is an application of landscape ecology designed to predict future changes in land use. Land change models are used in urban planning, geography, GIS, and other disciplines to gain a clear understanding of the course of a landscape. In recent years, much of the Earth's land cover has changed rapidly, whether from deforestation or the expansion of urban areas.
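Many land change models rest on a matrix of transition probabilities estimated from two land-cover snapshots; projection forward is then repeated matrix multiplication. A minimal Markov-chain sketch with invented probabilities (not a calibrated model):

```python
import numpy as np

# Annual transition probabilities between cover classes (rows sum to 1).
# Order: forest, agriculture, urban. Values invented for illustration.
T = np.array([
    [0.97, 0.02, 0.01],   # forest mostly persists, some conversion
    [0.01, 0.94, 0.05],   # agriculture slowly urbanizes
    [0.00, 0.00, 1.00],   # urban treated as an absorbing state
])

cover = np.array([0.60, 0.30, 0.10])  # current landscape shares

for years in (10, 50):
    projected = cover @ np.linalg.matrix_power(T, years)
    print(f"after {years} years: forest {projected[0]:.2f}, "
          f"agriculture {projected[1]:.2f}, urban {projected[2]:.2f}")
```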
Relationship to other disciplines
Landscape ecology has been incorporated into a variety of ecological subdisciplines. For example, it is closely linked to land change science, the interdisciplinary study of land use and land cover change and their effects on surrounding ecology. Another recent development has been the more explicit consideration of spatial concepts and principles applied to the study of lakes, streams, and wetlands in the field of landscape limnology. Seascape ecology is a marine and coastal application of landscape ecology. In addition, landscape ecology has important links to application-oriented disciplines such as agriculture and forestry. In agriculture, landscape ecology has introduced new options for the management of environmental threats brought about by the intensification of agricultural practices. Agriculture has always been a strong human impact on ecosystems.
In forestry, from structuring stands for fuelwood and timber to ordering stands across landscapes to enhance aesthetics, consumer needs have affected conservation and use of forested landscapes. Landscape forestry provides methods, concepts, and analytic procedures for managing forests at the landscape scale. Landscape ecology has been cited as a contributor to the development of fisheries biology as a distinct biological science discipline, and is frequently incorporated in study design for wetland delineation in hydrology. It has helped shape integrated landscape management. Lastly, landscape ecology has been very influential for progressing sustainability science and sustainable development planning. For example, a recent study assessed sustainable urbanization across Europe using evaluation indices, country-landscapes, and landscape ecology tools and methods.
Landscape ecology has also been combined with population genetics to form the field of landscape genetics, which addresses how landscape features influence the population structure and gene flow of plant and animal populations across space and time, and how the quality of the intervening landscape, known as the "matrix", influences spatial variation. After the term was coined in 2003, the field of landscape genetics had expanded to over 655 studies by 2010, and it continues to grow today. As genetic data have become more readily accessible, they are increasingly being used by ecologists to answer novel evolutionary and ecological questions, many concerning how landscapes affect evolutionary processes, especially in human-modified landscapes, which are experiencing biodiversity loss.
Black-headed gull
https://en.wikipedia.org/wiki/Black-headed%20gull
The black-headed gull (Chroicocephalus ridibundus) is a small gull that breeds in much of the Palearctic in Europe and Asia, and also locally in smaller numbers in coastal eastern Canada. Most of the population is migratory and winters further south, but many also remain in the milder areas of northwestern Europe. It was formerly sometimes known as the "common black-headed gull" to distinguish it from the "great black-headed gull" (an old name for Pallas's gull).
The black-headed gull was previously placed in the genus Larus, but genetic studies early in the 21st century showed that this genus in such a wide sense was paraphyletic with respect to other gull genera. Extensive changes to the taxonomy of gulls followed, with many species removed from Larus and transferred to other genera; the black-headed gull joined nine or ten other species in the resurrected genus Chroicocephalus. This arrangement was accepted by the IOC World Bird List and other ornithological authorities in 2008.
The genus name Chroicocephalus is from the Ancient Greek words khroizo, "to colour", and kephale, "head". The specific name ridibundus is Latin for "laughing", from ridere, "to laugh".
Description
The black-headed gull is a small gull; males (186–400 g) average heavier than females (166–350 g), but with considerable overlap.
In flight, the white leading edge of the outer wing (the outer primaries) is a good field mark, particularly combined with the dark underside of the inner primaries, which distinguishes it from the white underside of those feathers in its close relative the Bonaparte's gull. The summer adult has a chocolate-brown head (not black, although it does look black from a distance), white neck, underparts and tail, pale grey wings and back, black tips to the primary wing feathers, and red bill and legs. The hood is lost in winter, leaving just two dark spots above and behind the eye. Summer plumage occurs from March to July (rarely from late January, and into August), winter plumage from late July until March or April. Black-headed gulls take two years to reach maturity. Juvenile birds, for the first month or two after fledging, have a mottled pattern of brown spots over most of the body, and a black band on the tail; in late summer, they moult into first-winter plumage, with a grey back, but retaining a brown carpal bar on the inner wing, blackish secondary feathers, and the black band on the tail. In their first summer (one year old), they develop a partial brown hood, the extent of which is very variable between individuals from almost no brown on the head, to a hood like a full adult's. The second winter plumage is like adult plumage, except for occasional brown marks on the wings and tail tip in some individuals. There is no difference in plumage between the sexes.
It breeds in colonies in large reed beds or marshes, or on islands in lakes, nesting on the ground; colonies may range from a few (or even single) pairs up to several thousand pairs, exceptionally over 10,000 pairs. Like most gulls, it is highly gregarious in winter, both when feeding and in evening roosts. It is not a pelagic species and is rarely seen at sea far from coasts.
Like most gulls, black-headed gulls are long-lived birds, with a maximum age of at least 32.9 years recorded in the wild.
Subspecies
Some authorities treat it as a monotypic species with no subspecies, while others treat it as having two subspecies, C. r. ridibundus in the west and centre of the range, and C. r. sibiricus in the far east (eastern Siberia; wintering in Japan and eastern China). The latter is slightly larger and relatively longer-winged. The variation is likely clinal, with intergrades in central Siberia.
Distribution
Black-headed gulls breed across the Palearctic over much of Europe and northern Asia, from Iceland and Ireland east to Japan and eastern China. The species is abundant, with a global population of 2–3 million pairs; it is most numerous in Europe, where the largest concentrations are the up to 300,000 pairs in Great Britain and 250,000 pairs in Poland. The range is slowly expanding westwards, with Iceland first colonised in 1911, Greenland in 1969, and Newfoundland in Canada in 1977, with around 20 pairs breeding in northeastern Canada in the late 20th century. They migrate south and west away from regions which freeze hard in winter, reaching northern Africa and southern Asia (with small numbers south to the Equator). Areas within the breeding range with milder winters, such as Great Britain, receive large influxes of migrants from colder areas like Scandinavia, Poland, the Baltic States, and Russia; the British winter population is around 3 million. Small numbers occur in winter in northeast North America as far south as Virginia, often in flocks of the similar-looking Bonaparte's gull.
Vagrancy
South of its regular range in eastern North America, it is recorded as a very rare transient south to the Carolina coast, and also in some Caribbean islands and Mexico. There are a few records from Australia, the first on 19 October 1991, at Broome, Western Australia, and others at Darwin, Northern Territory in 1998, 2005, and 2006. It is also a vagrant south in Africa as far as the Pretoria area in South Africa.
Disease
Black-headed gulls were among the birds most heavily hit by the 2023 avian influenza outbreak, with over 4,000 birds killed in Great Britain by early May; similarly high mortality rates were also reported from France, the Netherlands, Italy and Germany.
Behaviour
The black-headed gull is a bold and opportunistic feeder. It eats insects, fish, seeds, worms, scraps, and carrion in towns, or invertebrates in ploughed fields with equal facility. It is a noisy species, especially in colonies, with a familiar "kree-ar" call.
The species displays a variety of behaviours and adaptations, including removing eggshells from the nest after hatching, begging coordination between siblings, differences between the sexes, conspecific brood parasitism, and extra-pair paternity. It is found in a variety of habitats.
Breeding
Eggshell removal
Eggshell removal is a behaviour seen in birds once the chicks have hatched, observed mostly to reduce the risk of predation. Removing the eggshell acts as a form of camouflage, preventing predators from spotting the nest; the further away eggshells are from the nest, the lower the predation risk. Black-headed gull eggs experience predation from different species of birds, foxes, stoats, and even other black-headed gulls. Although mothers show some aggression when a predator is near, in the first 30 minutes after hatching, wet chicks can easily be taken by other black-headed gulls while the parents are distracted.
Black-headed gulls also carry away other objects that do not belong in the nest. The removal of eggshells and other objects is important not only during the incubation period but also during the first few days after the eggs hatch, and the rate of removal seems to increase as time goes on. Removal is carried out by both the male and female parents, normally lasts a few seconds, and is done three times a year.
A black-headed gull is able to differentiate an eggshell from an egg by recognizing its thin, serrated, white edge; the weight of the egg or eggshell does not play a role in the determination.
Earlier hypotheses have attempted to explain the survival value of black-headed gulls removing their eggshells from the nest, including:
The sharp edges of the shells after hatching could harm the chicks
The eggshell could interfere with brooding
The eggshell could slip over the unhatched egg, creating a double shell
Some of the moist organic material left from the shell could lead to a production of bacteria and mould
Begging coordination between siblings
Black-headed gulls feed their young by regurgitating onto the ground, rather than into each chick's mouth one at a time. The parents adjust the amount they regurgitate to the intensity of begging at the nest, whether from an individual chick or a group of chicks begging together. Sibling chicks learn this and synchronise their begging signals, decreasing the cost to each individual while increasing the benefit to the brood as a whole. The rate of parental food regurgitation to chicks increases with begging intensity.
The amount of begging and the response to it differ throughout the nestling period. Usually there are 3–5 begging events per hour, each lasting around one minute. High-intensity begging behaviour appears at the end of the first week in the nest, but coordination between multiple chicks emerges during the last week of the nestling period. The more siblings present, the more they coordinate their begging while decreasing the number of begging events.
Sex differences
Male chicks have a lower chance of survival than female chicks. Black-headed gulls are a sexually size-dimorphic species, so the larger sex is at a disadvantage when food sources are scarce.
Male birds are more likely to hatch from the first egg and female birds from the third. The position of an egg in the laying order, together with the food available to the female at laying, can predict the offspring's characteristics.
Conspecific brood parasitism
Conspecific brood parasitism is a behaviour that occurs when a female lays her eggs in the nest of another female of the same species. It can reduce the costs of incubation and rearing young by passing them on to another bird. Black-headed gulls usually lay clutches of three eggs, of which the first two are normally larger than the third. The third egg normally has the lowest survival rate, while the first or second are usually the parasitic eggs.
Most of the egg dumping occurs near the beginning of the egg-laying period. Laying parasitic eggs in another conspecific's nest increases their chance of hatching, and the behaviour may occur because of nest desertion or a nest being taken over by another bird.
Multiple eggs in a nest from different mothers may also result from intra-specific nest parasitism, joint female nesting, and nest takeover. Intra-specific nest parasitism is a disadvantage to the hosts because the female could end up taking care of the parasitic chicks over her own and therefore neglecting them and reducing their fitness. Another disadvantage for the host is that incubating more chicks than their own takes up more energy.
Extra-pair paternity
The rate of extra-pair paternity (EPP) varies widely between populations of black-headed gulls. It is primarily a context-dependent strategy, meaning that not all black-headed gulls exhibit this behaviour. The variation between populations can be explained by variation in the advantages and disadvantages EPP confers on a female, as well as variation in the pressures on female choice.
The differences in the rate of EPP may be determined by multiple different factors: life history traits, ecological factors or different behavioural strategies of males.
Central–periphery gradient within colonies
Egg-laying can be earlier in black-headed gulls nesting in the centre of a colony, with central pairs tending to lay larger eggs, which have a higher hatching success, than pairs nesting at the periphery. Centrally nesting individuals have also been found to be in better condition and of higher genetic quality.
Walking displays
Black-headed gulls display both head-bobbing walking (HBW) and non-bobbing walking (NBW). Head-bobbing walking consists of a hold phase and a thrust phase. The hold phase occurs mainly during the single-support phase of the stride, when the bird holds its head stationary relative to the environment. Head-bobbing walking occurs during seeking-type foraging, such as walking through water, and offers benefits such as enhanced motion and pattern detection and depth information gathered from motion parallax during the thrust phase. Non-bobbing walking occurs when black-headed gulls display waiting behaviour while foraging on flat surfaces.
Synchronisation
Observations of the behaviour of black-headed gulls show that individuals synchronise their vigilance activity with neighbouring black-headed gulls. Synchronisation in groups of black-headed gulls depends on the distance between individuals.
Uses
The eggs of the black-headed gull were considered a delicacy by some in the UK and eaten hard boiled. The collection of black-headed gull eggs is heavily regulated by the UK government. Eggs may only be taken by a small number of licensed individuals at six sites between 1 April and 15 May each year and only a single egg may be taken from each nest. No eggs are permitted to be sold after 30 June. As the gulls tend to lay in late April and early May, the eggs are only available to purchase for 3 or 4 weeks per year.
In popular culture
The black-headed gull is the official bird of Tokyo, Japan, and the Yurikamome automated guideway transit in Tokyo Bay is named after it.
In Richard Adams' 1972 novel Watership Down, a black-headed gull named Kehaar (who claims his name is the onomatopoeia of waves breaking against the shore) plays a major part in the story. Injured by a farm cat and left behind during the seasonal migration, Kehaar finds himself stranded on the Downs and is taken in by a warren of rabbits. He later becomes their friend and ally, and helps to save the rabbits from danger many times; instincts eventually force him to return to his colony, but he promises to visit the rabbits each winter. True to Adams' stated intentions of trying to keep their behaviour close to reality, Kehaar is characterised as intelligent, gregarious, noisy, messy, and impatient, and with a guttural accent. Kehaar appears in all three screen adaptations of the novel; the character was voiced by Zero Mostel in the 1978 film, Rik Mayall in the 1999 TV series, and Peter Capaldi in the 2018 miniseries.
Colugo
https://en.wikipedia.org/wiki/Colugo
Colugos, also known as flying lemurs or cobegos, are arboreal gliding euarchontogliran mammals that are native to Southeast Asia. Their closest evolutionary relatives are primates. There are just two living species of colugos: the Sunda flying lemur (Galeopterus variegatus) and the Philippine flying lemur (Cynocephalus volans). These two species make up the entire family Cynocephalidae and order Dermoptera (from Ancient Greek δέρμα - dérma, "skin" and πτερόν - pterón, "wing").
Characteristics
Colugos are nocturnal, tree-dwelling mammals.
Appearance and anatomy
Colugos have long, slender front and rear limbs, a medium-length tail, and a relatively light build. The head is small, with large, front-focused eyes for excellent binocular vision, and small rounded ears.
The incisor teeth of colugos are highly distinctive; they are comb-like in shape with up to 20 tines on each tooth. The incisors are analogous in appearance and function to the incisor suite in strepsirrhines, which is used for grooming. The second upper incisors have two roots, another unique feature among mammals.
Movement
Colugos are proficient gliders, and are thought to be better adapted to gliding than any other gliding mammal.
They can glide from one tree to another without losing much altitude, and individual Malayan colugos (Galeopterus variegatus) have been observed making exceptionally long glides.
Their ability to glide is possible because of a large membrane of skin that extends between their paired limbs. This gliding membrane, or patagium, runs from the shoulder blades to the fore paws, from the tip of the rear-most fingers to the tip of the toes, and from the hind legs to the tip of the tail. The spaces between the colugo's fingers and toes are webbed. As a result, colugos were once considered to be close relatives of bats. Today, on account of genetic data, they are considered to be more closely related to primates.
Colugos are unskilled climbers; they lack opposable thumbs. They progress up trees in a series of slow hops, gripping onto the bark with their small, sharp claws. They spend most of the day resting. At night, colugos spend most of their time up in the trees foraging, with gliding being used to either find another foraging tree or to find possible mates and protect territory.
Behavior and diet
Colugos are shy, nocturnal, solitary animals found in the tropical forests of Southeast Asia. Consequently, very little is known about their behavior. They are herbivorous and eat leaves, shoots, flowers, sap, and fruit. They have well-developed stomachs and long intestines capable of extracting nutrients from leaves and other fibrous material.
As part of their evolution into a nocturnal species, colugos developed night vision. They spend their days resting in tree holes and are active at night, travelling around 1.7 km in a night. Colugos may also be territorial.
Life cycle
Although they are placentals, colugos raise their young in a manner similar to marsupials. Newborn colugos are underdeveloped and spend the first six months of life clinging to their mother's belly. The mother colugo curls her tail and folds her patagium into a warm, secure quasi-pouch to protect and transport her young. The young do not reach maturity until they are two to three years old. In captivity, they live up to 15 years, but their lifespan in the wild is unknown.
Status
Both species are threatened by habitat destruction. In 1996, the IUCN classified the Philippine flying lemur as vulnerable, owing to destruction of lowland forests and hunting. It was downlisted to least-concern status in 2008 but still faces the same threats. In addition to the ongoing clearing of its rainforest habitat, it is hunted for its meat and fur. It is also a favorite prey item for the critically endangered Philippine eagle; some studies suggest colugos account for 90% of the eagle's diet.
Taxonomy
Their family name Cynocephalidae comes from the Greek words kyōn "dog" and kephalē "head" because their heads are broad with short snouts like dogs.
Classification and evolution
It is estimated that the ancestors of the colugos split from other mammals about 80 million years ago. Although only two species are formally recognized today, genetic studies suggest the living colugos may comprise as many as 7 to 14 distinct species. The Mixodectidae and Plagiomenidae appear to be fossil Dermoptera. Although other Paleogene mammals have been interpreted as related to dermopterans, the evidence for this association is uncertain and many of the fossils are no longer interpreted as being gliding mammals. At present, the fossil record of definitive dermopterans is limited to two species of the Eocene and Oligocene cynocephalid genus Dermotherium.
Molecular phylogenetic studies have demonstrated that colugos emerged as a basal Primatomorpha clade – which, in turn, is a basal Euarchontoglires clade. Scandentia are widely considered to be the closest relatives of Primatomorpha, within Euarchonta. Some studies, however, place Scandentia as sister of Glires (lagomorphs and rodents), in an unnamed sister clade of the Primatomorpha.
Order Dermoptera
†Family Plagiomenidae?
†Planetetherium
†Planetetherium mirabile
†Plagiomene
†Plagiomene multicuspis
†Family Mixodectidae?
†Dracontolestes
†Dracontolestes aphantus
†Eudaemonema
†Eudaemonema cuspidata
†Mixodectes
†Mixodectes pungens
†Mixodectes malaris
Family Cynocephalidae
Cynocephalus
Philippine flying lemur, Cynocephalus volans
Galeopterus
Sunda flying lemur, Galeopterus variegatus
†Dermotherium
†Dermotherium major
†Dermotherium chimaera
Wheelbarrow
https://en.wikipedia.org/wiki/Wheelbarrow
A wheelbarrow is a small hand-propelled load-bearing vehicle, usually with just one wheel, designed to be pushed and guided by a single person using two handles at the rear. The term "wheelbarrow" is made of two words: "wheel" and "barrow". "Barrow" is a derivation of the Old English "barew", which was a device used for carrying loads.
The wheelbarrow is designed to distribute the weight of its load between the wheel and the operator, so enabling the convenient carriage of heavier and bulkier loads than would be possible were the weight carried entirely by the operator. As such it is a second-class lever. Traditional Chinese wheelbarrows, however, had a central wheel supporting the whole load. Use of wheelbarrows is common in the construction industry and in gardening.
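The load split follows from the torque balance of a second-class lever: with the wheel axle as fulcrum, F_hands × L_handles = W × L_load, so the operator bears only the fraction L_load / L_handles of the weight. A quick numeric sketch, with dimensions invented for illustration:

```python
def hand_force(load_kg: float, load_dist_m: float, handle_dist_m: float) -> float:
    """Force at the handles (in kg-equivalent) for a wheelbarrow modelled
    as a second-class lever with the wheel axle as the fulcrum:
    F_hands * handle_dist = load * load_dist."""
    return load_kg * load_dist_m / handle_dist_m

# A 100 kg load centred 0.5 m behind the axle, handles 1.5 m behind it:
print(hand_force(100, 0.5, 1.5))   # ~33.3: the operator lifts about a third

# A traditional Chinese barrow with the load centred almost over the
# central wheel puts nearly none of the weight on the operator:
print(hand_force(100, 0.05, 1.5))  # ~3.3
```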
A two-wheel type is more stable on level ground, while the almost universal one-wheel type has better maneuverability in small spaces, on planks, in water, or when tilted ground would throw the load off balance. The use of one wheel also permits greater control of the deposition of the load upon emptying.
History
China
The earliest wheelbarrows with archaeological evidence in the form of a one-wheel cart come from second-century Han dynasty tomb murals and brick tomb reliefs. The painted tomb mural of a man pushing a wheelbarrow was found in a tomb at Chengdu, Sichuan province, dated precisely to 118 AD. The stone carved relief of a man pushing a wheelbarrow was found in the tomb of Shen Fujun in Sichuan province, dated circa 150 AD. And then there is the story of the pious Dong Yuan pushing his father around in a single-wheel lu che barrow, depicted in a mural of the Wu Liang tomb-shrine of Shandong (dated to 147 AD). Earlier accounts dating to the 1st century BC and 1st century AD that mention a "deer cart" (luche) might also have been referencing a wheelbarrow.
The 5th-century Book of Later Han stated that the wife of the once poor and youthful imperial censor Bao Xuan helped him push a lu che back to his village during their feeble wedding ceremony, around 30 BC. Later, during the Red Eyebrows Rebellion (c. 20 AD) against Xin dynasty's Wang Mang (45 BC–23 AD), the official Zhao Xi saved his wife from danger by disguising himself and pushing her along in his lu che barrow, past a group of brigand rebels who questioned him, and allowed him to pass after he convinced them that his wife was terribly ill. The first recorded description of a wheelbarrow appears in Liu Xiang's work Lives of Famous Immortals. Liu describes the invention of the wheelbarrow by the legendary Chinese mythological figure Ko Yu, who builds a "Wooden ox".
Nevertheless, the Chinese historical text of the Sanguozhi (Records of the Three Kingdoms), compiled by the ancient historian Chen Shou (233–297 AD), credits the invention of the wheelbarrow to Prime Minister Zhuge Liang (181–234 AD) of Shu Han. It was written that in 231 AD, Zhuge Liang developed the vehicle of the wooden ox and used it as a transport for military supplies in a campaign against Cao Wei.
Centrally mounted wheel
Further annotations of the text by Pei Songzhi (430 AD) described the design in detail as a large single central wheel and axle around which a wooden frame was constructed in representation of an ox. Writing later in the 11th century, the Song dynasty (960–1279) scholar Gao Cheng wrote that the small wheelbarrow of his day, with shafts pointing forward (so that it was pulled), was the direct descendant of Zhuge Liang's wooden ox. Furthermore, he pointed out that the third century 'gliding horse' wheelbarrow featured the simple difference of the shaft pointing backwards (so that it was pushed instead).
Wheelbarrows in China came in two types. The more common type after the third century has a large, centrally mounted wheel. Prior types were universally front-wheeled wheelbarrows. The central-wheeled wheelbarrow could generally transport six human passengers at once, and instead of a laborious amount of energy exacted upon the animal or human driver pulling the wheelbarrow, the weight of the burden was distributed equally between the wheel and the puller. European visitors to China from the 17th century onwards had an appreciation for this, and it was given a considerable amount of attention by a member of the Dutch East India Company, Andreas Everardus van Braam Houckgeest, in his writings of 1797 (who accurately described its design and ability to hold large amounts of heavy baggage). These wheelbarrows continued in use into the twentieth century; a good example is the 'Piepkar', a wheelbarrow on rails found on Billiton Island near Sumatra. However, the lower carrying surface made the European wheelbarrow clearly more useful for short-haul work. As of the 1960s, traditional wheelbarrows in China were still in wide use.
Chinese sailing carriage
Although there are records of Chinese sailing carriages from the 6th century, these land-sailing vehicles were not wheelbarrows, and the date at which the sail-assisted wheelbarrow was invented is uncertain. Engravings are found in van Braam Houckgeest's 1797 book.
European interest in the Chinese sailing carriage is also seen in the writings of Andreas Everardus van Braam Houckgeest in 1797, who wrote:
Near the southern border of Shandong one finds a kind of wheelbarrow much larger than that which I have been describing, and drawn by a horse or a mule. But judge of my surprise when today I saw a whole fleet of wheelbarrows of the same size. I say, with deliberation, a fleet, for each of them had a sail, mounted on a small mast exactly fixed in a socket arranged at the forward end of the barrow. The sail, made of matting, or more often of cloth, is high, and broad, with stays, sheets, and halyards, just as on a Chinese ship. The sheets join the shafts of the wheelbarrow and can thus be manipulated by the man in charge.
Ancient Greece and Rome
M. J. T. Lewis surmised the wheelbarrow may have existed in ancient Greece in the form of a one-wheel cart. Two building-material inventories for 408/407 and 407/406 BC from the temple of Eleusis list, among other machines and tools, "1 body for a one-wheeler", although there is no further evidence to prove this hypothesis. Lewis argues:
Since [the Greek terms] mean nothing but "two-wheeler" and "four-wheeler," and since the [one-wheeler] body is sandwiched in the Eleusis inventory between a four-wheeler body and its four wheels, to take it as anything but a one-wheeler strains credulity far beyond breaking point. It can only be a wheelbarrow, necessarily guided and balanced by a man...what does now emerge as certainty is that the wheelbarrow did not, as is universally claimed, make its European debut in the Middle Ages. It was there some sixteen centuries before.
M. J. T. Lewis admits that the current consensus among technology historians, including Bertrand Gille, Andrea Matthies, and Joseph Needham, is that the wheelbarrow was invented in China around 100 AD and spread to the rest of the world. However, Lewis proposes that the wheelbarrow could also have existed in ancient Greece. Based on the Eleusis list, Lewis states that it is possible that wheelbarrows were used on Greek construction sites, but admits that evidence for the wheelbarrow in ancient farming and mining is absent. He surmised that wheelbarrows were not uncommon on Greek construction sites for carrying moderately light loads. He speculates on the possibility of wheelbarrows in the Roman Empire and the later Eastern Roman, or Byzantine, Empire, although Lewis concludes that the evidence is scarce and that "most of this scenario, perforce, is pure speculation." The 4th-century Historia Augusta reports that emperor Elagabalus used a wheelbarrow, described as a one-wheeled vehicle, to transport women in his frivolous games at court. While the present evidence does not indicate any use of wheelbarrows into medieval times, the question of continuity in the Byzantine Empire remains open due to a lack of research. Currently, there is no archaeological evidence for the wheelbarrow in ancient Greece and Rome.
Medieval Europe
The first wheelbarrows in medieval Europe appeared sometime between 1170 and 1250. In contrast to the Chinese type, which typically had a wheel in the center of the barrow, the types mostly used in Europe featured a wheel at or near the front, the arrangement of most wheelbarrows today.
Research on the early history of the wheelbarrow is made difficult by the marked absence of a common terminology. The historian of technology M.J.T. Lewis has identified in English and French sources four mentions of wheelbarrows between 1172 and 1222, three of them designated with a different term. According to the medieval art historian Andrea Matthies, the first archival reference to a wheelbarrow in medieval Europe is dated 1222, specifying the purchase of several wheelbarrows for the English king's works at Dover. The first depiction appears in an English manuscript, Matthew Paris's Vitae duorum Offarum, completed in 1250.
By the 13th century, the wheelbarrow proved useful in building construction, mining operations, and agriculture. However, judging by surviving documents and illustrations, the wheelbarrow remained a relative rarity until the 15th century. It also seems to have been limited to England, France, and the Low Countries.
The oldest wheelbarrows preserved from Central Europe were found in 2014 and 2017 during archaeological excavations in Ingolstadt, Germany. The felling dates of the trees that make up the wheelbarrow boards could be dendrochronologically dated to 1537 for one wheelbarrow and the 1530s for the other.
Modern wheelbarrows
Modern-day wheelbarrows are generally made from plastic or metal and come with either a pneumatic, semi-pneumatic, or solid tire. Modern wheelbarrows come in four standard shapes: the home gardener's shallow-tray variety, the builder's barrow, the square-tray utility barrow, and the brick barrow.
Plastic wheelbarrows can be beneficial as they are light in weight, reducing the physical demand on the user, but they are suited only to lighter loads.
Steel wheelbarrows can handle heavier loads, with carrying capacities of up to wet, or dry. Steel wheelbarrows more effectively transport heavy, jagged material without damage and are often better suited to construction applications.
Modern variations
In the 1970s, British inventor James Dyson introduced the Ballbarrow, an injection-molded plastic wheelbarrow with a spherical ball on the front end instead of a wheel. Compared to a conventional design, the larger surface area of the ball made the wheelbarrow easier to use in soft soil, and more laterally stable with heavy loads on uneven ground.
The Honda HPE60, an electric power-assisted wheelbarrow, was produced in 1998. Power-assisted wheelbarrows are now widely available from a number of different manufacturers. Powered wheelbarrows are used in a range of applications; the technology has improved to enable them to take much heavier loads, beyond weights that a human could transport alone without assistance. Motorized wheelbarrows are generally either diesel powered or electric battery powered. They are often used in small-scale construction applications where access for larger plant machinery might be restricted.
| Technology | Agricultural tools | null |
252740 | https://en.wikipedia.org/wiki/Long-tailed%20duck | Long-tailed duck | The long-tailed duck (Clangula hyemalis) or coween, formerly known as the oldsquaw, is a medium-sized sea duck that breeds in the tundra and taiga regions of the arctic and winters along the northern coastlines of the Atlantic and Pacific Oceans. It is the only member of the genus Clangula.
Taxonomy
The long-tailed duck was formally described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. He placed it with all the other ducks in the genus Anas and coined the binomial name Anas hyemalis. Linnaeus cited the English naturalist George Edwards's description and illustration of the "Long-tailed duck from Hudson's-Bay" that had been published in 1750 in the third volume of his A Natural History of Uncommon Birds.
This duck is now the only species placed in the genus Clangula; the genus was introduced in 1819 by the English zoologist William Leach to accommodate the long-tailed duck, in an appendix on species to John Ross's account of his voyage to look for the Northwest Passage. The genus name Clangula is a diminutive of the Latin clangere, meaning "to resound". The specific epithet hyemalis, also Latin, means "of winter". The species is considered to be monotypic – no subspecies are recognised.
In North American English it is sometimes called oldsquaw, though this name has fallen out of favour. In 2000, the American Ornithologists' Union (AOU) formally adopted the name long-tailed duck, in response to petitioning by a group of biologists who feared that the former name would be offensive to Native American tribes whose help was required for conservation efforts. The AOU stated that "political correctness" alone was not sufficient to justify changing a long-standing name, but in this case decided to make the change because doing so would "conform with English usage in other parts of the world".
An undescribed congener is known from the Middle Miocene Sajóvölgyi Formation (Late Badenian, 13–12 Mya) of Mátraszőlős, Hungary.
Distribution
Long-tailed ducks breed on tundra across northern Eurasia (in Russian Siberia, Kamchatka, and Karelia, for example), the Faroe Islands, Finland, parts of southern Greenland, Iceland, Norway, as well as across northern North America (Alaska and northern Canada).
In winter, they are found on and near large bodies of seawater, such as the Northern Pacific Ocean, the North Atlantic Ocean, Hudson Bay and the American Great Lakes. Small numbers are found on the Missouri River.
Description
Adults have white underparts, though the rest of the plumage goes through a complex moulting process. The male has a long pointed tail ( long) and a dark grey bill crossed by a pink band. In winter, the male has a dark cheek patch on a mainly white head and neck, a dark breast and mostly white body. In summer, the male is dark on the head, neck and back with a white cheek patch. The female has a brown back and a relatively short pointed tail. In winter, the female's head and neck are white with a dark crown. In summer, the head is dark. Juveniles resemble adult females in autumn plumage, though with a lighter, less distinct cheek patch.
The males are vocal and have a musical yodelling call ow, ow, owal-ow.
Behaviour
Breeding
Their breeding habitat is in tundra pools and marshes, but also along sea coasts and in large mountain lakes in the North Atlantic region, Alaska, northern Canada, northern Europe, and Russia. The nest is located on the ground near water; it is built using vegetation and lined with down. They are migratory and winter along the eastern and western coasts of North America, on the Great Lakes, coastal northern Europe and Asia, with stragglers to the Black Sea. The most important wintering area is the Baltic Sea, where a total of about 4.5 million gather. As of 2022 it has also been breeding in parts of Western Europe, such as on the Marker Wadden in the Netherlands.
Food and feeding
The long-tailed duck is gregarious, forming large flocks in winter and during migration. They feed by diving for mollusks, crustaceans and some small fish. Although they usually feed close to the surface, they are capable of diving to depths of . According to the Audubon Society Field Guide to North American Birds they can dive to 80 fathoms (146 metres or 480 feet). They use their wings, like velvet scoters, to dive, which gives them the ability to dive much deeper than other ducks.
Status
The long-tailed duck is still hunted across a large part of its range. There has been a significant decline in the number of birds wintering in the Baltic Sea, partly due to their susceptibility to being trapped in gillnets. For these reasons the International Union for Conservation of Nature (IUCN) has categorised the long-tailed duck as vulnerable. It is one of the species to which the Agreement on the Conservation of African-Eurasian Migratory Waterbirds (AEWA) applies.
| Biology and health sciences | Anseriformes | Animals |
23589344 | https://en.wikipedia.org/wiki/Agricultural%20pollution | Agricultural pollution | Agricultural pollution refers to biotic and abiotic byproducts of farming practices that result in contamination or degradation of the environment and surrounding ecosystems, and/or cause injury to humans and their economic interests. The pollution may come from a variety of sources, ranging from point source water pollution (from a single discharge point) to more diffuse, landscape-level causes, also known as non-point source pollution and air pollution. Once in the environment these pollutants can have both direct effects on surrounding ecosystems, i.e. killing local wildlife or contaminating drinking water, and downstream effects such as dead zones that form where agricultural runoff is concentrated in large water bodies.
Management practices, or ignorance of them, play a crucial role in the amount and impact of these pollutants. Management techniques range from animal management and housing to the spread of pesticides and fertilizers in global agricultural practices, which can have major environmental impacts. Bad management practices include poorly managed animal feeding operations, overgrazing, plowing, overuse of fertilizer, and improper, excessive, or badly timed use of pesticides.
Pollutants from agriculture greatly affect water quality and can be found in lakes, rivers, wetlands, estuaries, and groundwater. Pollutants from farming include sediments, nutrients, pathogens, pesticides, metals, and salts. Animal agriculture has an outsized impact on pollutants that enter the environment. Bacteria and pathogens in manure can make their way into streams and groundwater if grazing, storing manure in lagoons and applying manure to fields is not properly managed. Air pollution caused by agriculture through land use changes and animal agriculture practices has an outsized impact on climate change. Addressing these concerns was a central part of the IPCC Special Report on Climate Change and Land and of the 2024 UNEP Actions on Air Quality report. Mitigation of agricultural pollution is a key component in the development of a sustainable food system.
Abiotic sources
Pesticides
It has been approximated that, in the absence of pest control measures, crop losses before harvesting would typically amount to 40 percent. Persistence is a major issue: some pesticides, such as 2,4-D and atrazine, are comparatively short-lived, while others persist with lifetimes up to 20 years (such as DDT, aldrin, dieldrin, endrin, heptachlor, and toxaphene), or are even permanent (as seen in substances like lead, mercury, and arsenic). The extent to which pesticides and herbicides persist depends on the compound's unique chemistry, which affects sorption dynamics and the resulting fate and transport in the soil environment. Pesticides can also accumulate in animals that eat contaminated pests and soil organisms. The primary danger associated with pesticide application lies in its impact on non-target organisms. These encompass species we typically perceive as beneficial or desirable, such as pollinators, as well as natural enemies of pests (i.e. insects that prey on or parasitize pests).
In principle, biopesticides, derived from natural sources, could reduce overall agricultural pollution, but their utilization remains modest. Furthermore, biopesticides often suffer from the same negative impacts as synthetic pesticides. In the United States, biopesticides are subject to fewer environmental regulations. Many biopesticides are permitted under the National Organic Program, the United States Department of Agriculture's standards for organic crop production.
Pesticide leaching
Pesticide leaching occurs when pesticides dissolve in water and these solutions migrate to off-target sites. Leaching is a major source of groundwater pollution and is affected by the soil, the pesticide, and rainfall and irrigation. Leaching is most likely to happen when a water-soluble pesticide is used, when the soil is sandy in texture, when excessive watering occurs just after pesticide application, or when the pesticide adsorbs only weakly to the soil. Leaching may originate not only from treated fields, but also from pesticide mixing areas, pesticide application machinery washing sites, or disposal areas.
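These qualitative factors are often condensed into a screening metric; one widely used example is the Groundwater Ubiquity Score (GUS), which combines a pesticide's soil half-life with its sorption coefficient (Koc). The formula and thresholds below are the conventional ones, while the example half-life and Koc are approximate literature values used purely for illustration:

```python
import math

def gus(half_life_days: float, koc: float) -> float:
    """Groundwater Ubiquity Score (Gustafson, 1989)."""
    return math.log10(half_life_days) * (4.0 - math.log10(koc))

def leaching_class(score: float) -> str:
    # Conventional GUS screening thresholds.
    if score > 2.8:
        return "likely leacher"
    if score < 1.8:
        return "unlikely leacher"
    return "transitional"

# Approximate literature values for an atrazine-like compound (illustrative).
score = gus(half_life_days=60, koc=100)
print(f"GUS = {score:.2f} -> {leaching_class(score)}")
# GUS = 3.56 -> likely leacher
```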
Fertilizers
Fertilizers are used to provide crops with additional sources of nutrients, such as nitrogen, phosphorus, and potassium, that promote plant growth and increase crop yields. While they are beneficial for plant growth, they can also disrupt natural nutrient and mineral biogeochemical cycles and pose risks to human and ecological health.
Nitrogen
The most common nitrogen sources are NO3− (nitrate) and NH4+ (ammonium). These fertilizers have greatly increased the productivity of agricultural land.
Although they lead to increased crop yields, nitrogen fertilizers can also negatively affect groundwater and surface waters, pollute the atmosphere, and degrade soil health. Not all of the nutrients applied through fertilizer are taken up by the crops; the remainder accumulates in the soil or is lost as runoff. Nitrate fertilizers are much more likely to be lost from the soil profile through runoff because of nitrate's high solubility and the like charges between the nitrate ion and negatively charged clay particles. High application rates of nitrogen-containing fertilizers combined with the high water-solubility of nitrate lead to increased runoff into surface water as well as leaching into groundwater, thereby causing groundwater pollution. Nitrate levels above 10 mg/L (10 ppm) in groundwater can cause "blue baby syndrome" (acquired methemoglobinemia) in infants, and possibly thyroid disease and various types of cancer.

Nitrogen fixation, which converts atmospheric nitrogen (N2) to ammonia, and denitrification, which converts biologically available nitrogen compounds to N2 and N2O, are two of the most important metabolic processes involved in the nitrogen cycle, because they are the largest inputs and outputs of nitrogen to ecosystems. They allow nitrogen to flow between the atmosphere (which is around 78% nitrogen) and the biosphere. Other significant processes in the nitrogen cycle are nitrification and ammonification, which convert ammonium to nitrate or nitrite, and organic matter to ammonia, respectively. Because these processes keep nitrogen concentrations relatively stable in most ecosystems, a large influx of nitrogen from agricultural runoff can cause serious disruption. A common result of this in aquatic ecosystems is eutrophication, which in turn creates hypoxic and anoxic conditions – both of which are deadly and/or damaging to many species.

Nitrogen fertilization can also release NH3 gas into the atmosphere, which can then be converted into NOx compounds. A greater amount of NOx compounds in the atmosphere can result in the acidification of aquatic ecosystems and cause various respiratory issues in humans. Fertilization can also release N2O, which is a greenhouse gas and can facilitate the destruction of ozone (O3) in the stratosphere.

Soils that receive nitrogen fertilizers can also be damaged. An increase in plant-available nitrogen will increase a crop's net primary production, and eventually soil microbial activity will increase as a result of the larger inputs of nitrogen from fertilizers and of carbon compounds through decomposed biomass. Excess nitrogen can also disrupt mutualisms; for example, in the legume–rhizobia resource mutualism, nitrogen deposition results in the evolution of less-cooperative rhizobia. Because of the increase in decomposition in the soil, its organic-matter content becomes depleted, which results in lower overall soil health.
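The observation that applied nitrogen splits between crop uptake, soil accumulation, and losses to water can be made concrete with a toy single-season budget. This is a sketch only: each coefficient is an invented illustrative value, since real uptake and loss fractions vary widely with soil, crop, climate, and management:

```python
# Toy single-season nitrogen budget; every coefficient here is an invented
# illustrative value, since real fractions vary with soil, crop and weather.
def season_n_budget(applied_kg_per_ha: float,
                    uptake_frac: float = 0.5,
                    leach_frac_of_surplus: float = 0.3) -> dict:
    taken_up = applied_kg_per_ha * uptake_frac   # removed in the harvested crop
    surplus = applied_kg_per_ha - taken_up       # left behind after uptake
    leached = surplus * leach_frac_of_surplus    # lost to ground/surface water
    retained = surplus - leached                 # accumulates in the soil
    return {"crop uptake": taken_up, "leached or run off": leached,
            "retained in soil": retained}

print(season_n_budget(150.0))
# {'crop uptake': 75.0, 'leached or run off': 22.5, 'retained in soil': 52.5}
```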
Phosphorus
The most common form of phosphorus fertilizer used in agricultural practices is phosphate (PO43-), and it is applied in synthetic compounds that incorporate PO43- or in organic forms such as manure and compost. Phosphorus is an essential nutrient in all organisms because of the roles it plays in cell and metabolic functions such as nucleic acid production and metabolic energy transfers. However, most organisms, including agricultural crops, only require a small amount of phosphorus because they have evolved in ecosystems with relatively low amounts of it. Microbial populations in soils are able to convert organic forms of phosphorus to soluble plant available forms such as phosphate. This step is generally bypassed with inorganic fertilizers because it is applied as phosphate or other plant available forms. Any phosphorus that is not taken up by plants is adsorbed to soil particles which helps it remain in place. Because of this, it typically enters surface waters when the soil particles it is attached to are eroded as a result of precipitation or stormwater runoff. The amount that enters surface waters is relatively low in comparison to the amount that is applied as fertilizer, but because it acts as a limiting nutrient in most environments, even a small amount can disrupt an ecosystem's natural phosphorus biogeochemical cycles. Although nitrogen plays a role in harmful algae and cyanobacteria blooms that cause eutrophication, excess phosphorus is considered the largest contributing factor due to the fact that phosphorus is often the most limiting nutrient, especially in freshwaters. In addition to depleting oxygen levels in surface waters, algae and cyanobacteria blooms can produce cyanotoxins which are harmful to human and animal health as well as many aquatic organisms.
The concentration of cadmium in phosphorus-containing fertilizers varies considerably and can be problematic. For example, mono-ammonium phosphate fertilizer may have a cadmium content of as low as 0.14 mg/kg or as high as 50.9 mg/kg. This is because the phosphate rock used in their manufacture can contain as much as 188 mg/kg cadmium (examples are deposits on Nauru and the Christmas islands). Continuous use of high-cadmium fertilizer can contaminate soil and plants. Limits to the cadmium content of phosphate fertilizers have been considered by the European Commission. Producers of phosphorus-containing fertilizers now select phosphate rock based on the cadmium content.
Phosphate rocks contain high levels of fluoride. Consequently, the widespread use of phosphate fertilizers has increased soil fluoride concentrations. It has been found that food contamination from fertilizer is of little concern as plants accumulate little fluoride from the soil; of greater concern is the possibility of fluoride toxicity to livestock that ingest contaminated soils. Also of possible concern are the effects of fluoride on soil microorganisms.
Radioactive elements
The radioactive content of the fertilizers varies considerably and depends both on their concentrations in the parent mineral and on the fertilizer production process. Uranium-238 concentrations can range from 7 to 100 pCi/g in phosphate rock and from 1 to 67 pCi/g in phosphate fertilizers. Where high annual rates of phosphorus fertilizer are used, this can result in uranium-238 concentrations in soils and drainage waters that are several times greater than are normally present. However, the impact of these increases on the risk to human health from radionuclide contamination of foods is very small (less than 0.05 mSv/y).
From machinery
Farm machinery and equipment emit substantial quantities of harmful gases.
Land management
Soil erosion and sedimentation
Agriculture contributes greatly to soil erosion and sediment deposition through intensive management or inefficient land cover. It is estimated that agricultural land degradation is leading to an irreversible decline in fertility on about 6 million ha of fertile land each year. The accumulation of sediments (i.e. sedimentation) in runoff water affects water quality in various ways. Sedimentation can decrease the transport capacity of ditches, streams, rivers, and navigation channels. It can also limit the amount of light penetrating the water, which affects aquatic biota. The resulting turbidity from sedimentation can interfere with feeding habits of fishes, affecting population dynamics. Sedimentation also affects the transport and accumulation of pollutants, including phosphorus and various pesticides.
Tillage and nitrous oxide emissions
Natural soil biogeochemical processes result in the emission of various greenhouse gases, including nitrous oxide. Agricultural management practices can affect emission levels. For example, tillage levels have also been shown to affect nitrous oxide emissions.
Organic farming and conservation agriculture in mitigation
Organic farming
Conservation agriculture
Conservation agriculture relies on principles of minimal soil disturbance, the use of mulch and/or cover crops as soil cover, and crop species diversification. It enables a reduction in fertilizer use, which in turn reduces ammonia emissions and greenhouse gas emissions. It also stabilizes soil, which slows the release of carbon into the atmosphere.
Biotic sources
Organic contaminants
Manures and biosolids, although valuable as fertilizers, may also contain contaminants, including pharmaceuticals and personal care products (PPCPs). A wide variety and vast quantity of PPCPs are consumed by animals, and residues of these can be present in their waste.
Greenhouse gases from fecal waste
The United Nations Food and Agriculture Organization (FAO) estimated that 18% of anthropogenic greenhouse gases come directly or indirectly from the world's livestock. This report also suggested that the emissions from livestock were greater than those of the transportation sector. While livestock do currently play a role in producing greenhouse gas emissions, the estimates have been argued to be a misrepresentation. While the FAO used a life-cycle assessment of animal agriculture (i.e. all aspects including emissions from growing crops for feed, transportation to slaughter, etc.), they did not apply the same assessment to the transportation sector.
Alternate sources claim that FAO estimates are too low, stating that the global livestock industry could be responsible for up to 51% of emitted atmospheric greenhouse gases rather than 18%. Critics say the difference in estimates comes from the FAO's use of outdated data. Regardless, if the FAO's figure of 18% is accurate, that still makes livestock the second-largest greenhouse gas polluter.
A PNAS model showed that even if animals were completely removed from U.S. agriculture and diets, U.S. GHG emissions would decrease by only 2.6% (or 28% of agricultural GHG emissions). This is because of the need to replace animal manures with synthetic fertilizers and to replace other animal coproducts, and because livestock now use human-inedible food and fiber-processing byproducts. Moreover, people would suffer from a greater number of deficiencies in essential nutrients, although they would get a greater excess of energy, possibly leading to greater obesity.
Introduced species
Invasive species
The increasing globalization of agriculture has resulted in the accidental transport of pests, weeds, and diseases to novel ranges. If they become established, they become invasive species that can impact populations of native species and threaten agricultural production. For example, the transport of bumblebees reared in Europe and shipped to the United States and/or Canada for use as commercial pollinators has led to the introduction of an Old World parasite to the New World. This introduction may play a role in recent native bumble bee declines in North America. Agriculturally introduced species can also hybridize with native species, resulting in a decline in genetic biodiversity and threatening agricultural production.
Habitat disturbance associated with farming practices themselves can also facilitate the establishment of these introduced organisms. Contaminated machinery, livestock and fodder, and contaminated crop or pasture seed can also lead to the spread of weeds.
Quarantines (see biosecurity) are one way in which prevention of the spread of invasive species can be regulated at the policy level. A quarantine is a legal instrument that restricts the movement of infested material from areas where an invasive species is present to areas in which it is absent.
The World Trade Organization has international regulations concerning the quarantine of pests and diseases under the Agreement on the Application of Sanitary and Phytosanitary Measures. Individual countries often have their own quarantine regulations. In the United States, for example, the United States Department of Agriculture/Animal and Plant Health Inspection Service (USDA/APHIS) administers domestic (within the United States) and foreign (importations from outside the United States) quarantines. These quarantines are enforced by inspectors at state borders and ports of entry.
Biological control
The use of biological pest control agents, or using predators, parasitoids, parasites, and pathogens to control agricultural pests, has the potential to reduce agricultural pollution associated with other pest control techniques, such as pesticide use. The merits of introducing non-native biocontrol agents have been widely debated, however. Once released, the introduction of a biocontrol agent can be irreversible. Potential ecological issues could include the dispersal from agricultural habitats into natural environments, and host-switching or adapting to utilize a native species. In addition, predicting the interaction outcomes in complex ecosystems and potential ecological impacts prior to release can be difficult. One example of a biocontrol program that resulted in ecological damage occurred in North America, where a parasitoid of butterflies was introduced to control gypsy moth and browntail moth. This parasitoid is capable of utilizing many butterfly host species, and likely resulted in the decline and extirpation of several native silk moth species.
International exploration for potential biocontrol agents is aided by agencies such as the European Biological Control Laboratory, the United States Department of Agriculture/Agricultural Research Service (USDA/ARS), the Commonwealth Institute of Biological Control, and the International Organization for Biological Control of Noxious Plants and Animals. In order to prevent agricultural pollution, quarantine and extensive research on the organism's potential efficacy and ecological impacts are required prior to introduction. If approved, attempts are made to colonize and disperse the biocontrol agent in appropriate agricultural settings. Continual evaluations on their efficacy are conducted.
Genetically modified organisms (GMO)
Genetic contamination and ecological effects
GMO crops can result in genetic contamination of native plant species through hybridization. This could lead to increased weediness of the plant or the extinction of the native species. In addition, the transgenic plant itself may become a weed if the modification improves its fitness in a given environment.
There are also concerns that non-target organisms, such as pollinators and natural enemies, could be poisoned by accidental ingestion of Bt-producing plants. A study testing the effects of Bt corn pollen dusted onto nearby milkweed plants on the larval feeding of the monarch butterfly found that the threat to monarch populations was low.
The use of GMO crop plants engineered for herbicide resistance can also indirectly increase the amount of agricultural pollution associated with herbicide use. For example, the increased use of herbicide in herbicide-resistant corn fields in the mid-western United States is decreasing the amount of milkweeds available for monarch butterfly larvae.
Regulation of the release of genetically modified organisms varies based on the type of organism and the country concerned.
GMO as a tool of pollution reduction
While there may be some concerns regarding the use of GM products, it may also be the solution to some of the existing animal agriculture pollution issues. One of the main sources of pollution, particularly vitamin and mineral drift in soils, comes from a lack of digestive efficiency in animals. By improving digestive efficiency, it is possible to minimize both the cost of animal production and the environmental damage. One successful example of this technology and its potential application is the Enviropig.
The Enviropig is a genetically modified Yorkshire pig that expresses phytase in its saliva. Grains, such as corn and wheat, contain phosphorus that is bound in a naturally indigestible form known as phytic acid. Phosphorus, an essential nutrient for pigs, is therefore added to the diet, since the bound form cannot be broken down in the pig's digestive tract. As a result, nearly all of the phosphorus naturally found in the grain is wasted in the feces and can contribute to elevated levels in the soil. Phytase is an enzyme that is able to break down the otherwise indigestible phytic acid, making it available to the pig. The ability of the Enviropig to digest the phosphorus from the grains reduces the waste of that natural phosphorus (a 20–60% reduction), while also eliminating the need to supplement the nutrient in feed.
Animal management
Manure management
One of the main contributors to air, soil and water pollution is animal waste. According to a 2005 report by the USDA, more than 335 million tons of "dry matter" waste (the waste after water is removed) is produced annually on farms in the United States. Animal feeding operations produce about 100 times more manure than the amount of human sewage sludge processed in US municipal waste water plants each year. Diffuse-source pollution from agricultural fertilizers is more difficult to trace, monitor and control. High nitrate concentrations are found in groundwater and may reach 50 mg/litre (the EU Directive limit). In ditches and river courses, nutrient pollution from fertilizers causes eutrophication. This is worse in winter, after autumn ploughing has released a surge of nitrates; winter rainfall is heavier, increasing runoff and leaching, and plant uptake is lower. The EPA suggests that one dairy farm with 2,500 cows produces as much waste as a city with around 411,000 residents. The US National Research Council has identified odors as the most significant animal emission problem at the local level. Different animal systems have adopted several waste management procedures to deal with the large amount of waste produced annually.
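Taken at face value, the EPA comparison implies a per-animal figure that simple arithmetic recovers; this is a back-of-the-envelope reading of the numbers quoted above, not an independent estimate:

```python
# Back-of-the-envelope reading of the EPA comparison quoted above.
cows = 2_500
equivalent_residents = 411_000

people_per_cow = equivalent_residents / cows
print(f"one dairy cow ~ waste output of {people_per_cow:.0f} people")
# one dairy cow ~ waste output of 164 people
```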
The advantages of manure treatment are a reduction in the amount of manure that needs to be transported and applied to crops, as well as reduced soil compaction. Nutrients are reduced as well, meaning that less cropland is needed for the manure to be spread upon. Manure treatment can also reduce human health and biosecurity risks by reducing the amount of pathogens present in manure. Undiluted animal manure or slurry is one hundred times more concentrated than domestic sewage and can carry an intestinal parasite, Cryptosporidium, which is difficult to detect but can be passed to humans. Silage liquor (from fermented wet grass) is even stronger than slurry, with a low pH and very high biological oxygen demand. Because of its low pH, silage liquor can be highly corrosive; it can attack synthetic materials, causing damage to storage equipment and leading to accidental spillage. All of these advantages can be optimized by using the right manure management system on the right farm based on the resources that are available.
Manure treatment
Composting
Composting is a solid manure management system that relies on solid manure from bedded pack pens, or the solids from a liquid manure separator. There are two methods of composting: active and passive. Manure is churned periodically during active composting, whereas in passive composting it is not. Passive composting has been found to have lower greenhouse gas emissions due to incomplete decomposition and lower gas diffusion rates.
Solid-liquid separation
Manure can be mechanically separated into a solid and liquid portion for easier management. Liquids (4–8% dry matter) can be used easily in pump systems for convenient spread over crops and the solid fraction (15–30% dry matter) can be used as stall bedding, spread on crops, composted or exported.
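The dry-matter (DM) figures quoted above determine how a given mass of raw slurry splits across the separator via a two-stream mass balance. In the sketch below, the liquid and solid DM fractions are drawn from the quoted ranges, while the raw-slurry DM of 10% is an assumed illustrative value:

```python
# Two-stream dry-matter (DM) mass balance across a mechanical separator.
# Liquid and solid DM fractions come from the ranges quoted above; the raw
# slurry DM of 10% is an assumed illustrative value.
def separate(total_kg: float, dm_raw: float, dm_liquid: float, dm_solid: float):
    solids_kg = total_kg * (dm_raw - dm_liquid) / (dm_solid - dm_liquid)
    liquid_kg = total_kg - solids_kg
    return solids_kg, liquid_kg

solids, liquid = separate(total_kg=1000.0, dm_raw=0.10, dm_liquid=0.06, dm_solid=0.25)
print(f"solids: {solids:.0f} kg, liquid: {liquid:.0f} kg")
# solids: 211 kg, liquid: 789 kg
```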
Anaerobic digestion and lagoons
Anaerobic digestion is the biological treatment of liquid animal waste using bacteria in an area absent of air, which promotes the decomposition of organic solids. Hot water is used to heat the waste in order to increase the rate of biogas production. The remaining liquid is nutrient-rich and can be used on fields as a fertilizer, while the methane gas produced can be burned directly in a biogas stove or in an engine-generator to produce electricity and heat. Methane is about 20 times more potent as a greenhouse gas than carbon dioxide, so uncontrolled releases have significant negative environmental effects. Anaerobic treatment of waste is the best method for controlling the odor associated with manure management.
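The potency multiplier quoted above translates directly into a carbon-dioxide-equivalent figure for methane that escapes capture. A minimal sketch using the factor of 20 from the text; formal inventories use global-warming-potential values that vary with the time horizon chosen:

```python
GWP_CH4 = 20  # potency multiplier quoted above; formal values vary by horizon

def co2_equivalent_kg(methane_kg: float) -> float:
    return methane_kg * GWP_CH4

print(co2_equivalent_kg(100.0))  # 100 kg of escaped CH4 ~ 2000 kg CO2-equivalent
```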
Biological treatment lagoons also use anaerobic digestion to break down solids, but at a much slower rate. Lagoons are kept at ambient temperatures, as opposed to the heated digestion tanks. Lagoons require large land areas and high dilution volumes to work properly, so they do not work well in many climates in the northern United States. Lagoons also offer the benefit of reduced odor, and the biogas produced is available for heat and electric power.
Studies have demonstrated that GHG emissions are reduced using aerobic digestion systems. GHG emission reductions and credits can help compensate for the higher installation cost of cleaner aerobic technologies and facilitate producer adoption of environmentally superior technologies to replace current anaerobic lagoons.
| Technology | Agriculture and ecology | null |
6163834 | https://en.wikipedia.org/wiki/Juniperus%20phoenicea | Juniperus phoenicea | Juniperus phoenicea, the Phoenicean juniper or Arâr, is a juniper found throughout the Mediterranean region.
Description
Juniperus phoenicea is a large evergreen shrub or small tree reaching tall, with a trunk up to in diameter and a rounded or irregular crown. The bark, which can be peeled in strips, is dark grayish-brown. The leaves are of two forms, juvenile needle-like leaves long and 1 mm wide on seedlings, and adult scale-leaves 1–2 mm long on older plants with a green to blue-green color; they are arranged in opposite decussate pairs or whorls of three. It is largely monoecious, but some individual plants are dioecious. The female cones are berrylike, 6–14 mm in diameter, orange-brown, occasionally with a pinkish waxy bloom, and contain 3–8 seeds; they are mature in about 18 months, and are mainly dispersed by birds. The male cones are 2–4 mm long, and shed their pollen in early spring, which is then dispersed by wind.
Taxonomy
There are two varieties, treated as subspecies by some authors and as separate species by others:
Juniperus phoenicea var. phoenicea = J. phoenicea. Throughout the range of the species. Cones globose, about as wide as long. Leaves are small and obtuse. Sheds pollen in the spring.
Juniperus phoenicea var. turbinata (syn. Juniperus turbinata). Confined to coastal sand dune habitats. Cones oval, narrower than long. Leaves are long and thin. Sheds pollen in the autumn.
Distribution and habitat
The species is found throughout the Mediterranean region, from Morocco and Portugal east to Croatia, Italy, Turkey, and Lebanon. It also grows in Egypt, the Palestine region and in western Saudi Arabia near the Red Sea, and also on Madeira and the Canary Islands. It mostly grows at low altitudes close to the coast, but reaches an altitude of in the south of its range in the Atlas Mountains.
Ecology
The species prefers a hot, arid climate with a lot of light, and grows on rocky or sandy ground. Its preferred soil is calcareous with a pH between 7.7 and 7.9 (moderately basic), but could also be silicate. Despite having a shallow root system, it can survive with as little as of rain per year. It can often be found forming scrubs and thickets with other species. In its natural range of France and Spain, J. phoenicea has a generational life of 25 years, and is considered a stable species on the 2016 IUCN Red List of Threatened Species.
Its habitat in coastal areas is most threatened by the presence of humans, both settled and touring. Humans also plant species not naturally present, such as pines, black locust, French tamarisk, desert false indigo, American agave, tree of heaven, and some succulent plants from South Africa. The purpose of this is usually to stabilize the dunes, but these introduced plants interfere with the natural vegetation. The species is also easily threatened by fires, because it is quite flammable and does not regenerate well; this makes it necessary to plant anew after a fire has damaged existing stands.
Uses
Juniper berries are used as a seasoning in cooking or in alcoholic beverages, particularly to flavor gin. Juniper berries have also been used in traditional medicine for different conditions, although there is no high-quality clinical evidence that they have any effect. Although extracts of juniper berries or wood tar have been used as an aroma, particularly for cosmetics, the safety of using ointments manufactured from J. phoenicea and related species has not been adequately demonstrated, according to a 2001 review. Juniper extracts used topically may cause skin allergic reactions and should be avoided during pregnancy.
The tree's essential oil is especially rich in the tricyclic sesquiterpene thujopsene. The heartwood contains an estimated 2.2% thujopsene; this explains the superior natural durability of the wood itself. The biochemist Jarl Runeburg noted in 1960 that J. phoenicea "appears to be the most convenient source of thujopsene so far encountered." Juniper wood is used for small manufactured objects and inlay work in carpentry, and in building construction in Africa, where it is mainly used for fuel and producing charcoal.
Culture
It is the vegetable symbol of the island of El Hierro.
| Biology and health sciences | Cupressaceae | Plants |
6166502 | https://en.wikipedia.org/wiki/Common%20death%20adder | Common death adder | The common death adder (Acanthophis antarcticus) is a species of death adder native to Australia. It is one of the most venomous land snakes in Australia and globally. While it remains widespread (unlike related species), it is facing increased threat from the ongoing Australian cane toad invasion.
Taxonomy
The common death adder was first described in 1802. It feeds on frogs, lizards and birds and, unlike most Australian venomous snakes, which actively search for prey, it sits in one place and waits for prey to come to it.
Description
The common death adder has a broad, flattened, triangular head and a thick body with bands of red, brown and black and a grey, cream or pink belly. It is known to reach a maximum body length of . Unlike the common or European adder (Vipera berus), the common death adder is a member of the snake family Elapidae rather than the family Viperidae, which is not found in Australia.
Distribution and habitat
The common death adder occurs over much of eastern and coastal southern Australia – Queensland, New South Wales and South Australia. It is scarcer in the Northern Territory, Western Australia and the western parts of South Australia, and is no longer found in Victoria. It is also native to Papua.
Common death adders are found in forests, woodlands, grasslands and heaths of the eastern coast of Australia. Thanks to its banded pattern, the death adder is a master of camouflage, hiding beneath loose leaf litter and debris in woodland, shrubland and grassland.
Concerns
Habitat loss and the spread of invasive cane toads are a concern. The toads eat young death adders, and adult death adders that eat the toads are poisoned by the toxin glands in the toads' skin.
Diet
Common death adders primarily eat small mammals and birds. Unlike other elapids, a common death adder lies in wait for its prey (often for many days) until a meal passes. It covers itself with leaves—making itself inconspicuous—and lies coiled in ambush, twitching its grub-like tail close to its head as a lure. When an animal approaches to investigate the movement, the death adder quickly strikes, injecting its venom and then waiting for the victim to die before eating it. Death adders are not aggressive, yet their ambush hunting technique and reliance on camouflage rather than flight to avoid threats render them more dangerous than other elapids to humans who venture into bushland habitats.
Reproduction
Unlike most snakes, death adders produce litters of live young. In the late summer, a female death adder will produce a litter of live offspring, approximately 3–20; however, over 30 young have been recorded in a single litter.
| Biology and health sciences | Snakes | Animals |
18514764 | https://en.wikipedia.org/wiki/Stick%20mantis | Stick mantis | Stick mantis and twig mantis are common names applied to numerous species of mantis that mimic sticks or twigs as camouflage. Often the name serves to identify entire genera such as is the case with:
Brunneria (including Brunner's stick mantis, the Brazilian stick mantis and the small-winged stick mantis)
Hoplocorypha (the African stick mantises)
Paratoxodera (including the Borneo stick mantis and the giant Malaysian stick mantis)
Popa (African twig mantis)
In other cases, some but not all members of a genus are called by a variation of one of these names. For example:
Archimantis latistyla (Australian stick mantis)
Pseudovates peruviana (Peruvian stick mantis)
Similar insects
Stick mantises should not be confused with stick insects (Phasmatodea), although the latter were long considered close relatives of mantises, under a classification which is now often considered paraphyletic and outdated. Likewise, both mantises and stick insects are separate from the recently identified Mantophasmatodea.
| Biology and health sciences | Insects: General | Animals |
4713880 | https://en.wikipedia.org/wiki/Solar%20telescope | Solar telescope | A solar telescope or a solar observatory is a special-purpose telescope used to observe the Sun. Solar telescopes usually detect light with wavelengths in, or not far outside, the visible spectrum. Obsolete names for Sun telescopes include heliograph and photoheliograph.
Professional solar telescopes
Solar telescopes need optics large enough to achieve the best possible diffraction limit, but the light-collecting power that matters for other astronomical telescopes is less of a concern. Recently, however, narrower filters and higher frame rates have driven solar telescopes towards photon-starved operation. Both the Daniel K. Inouye Solar Telescope and the proposed European Solar Telescope (EST) have larger apertures not only to increase the resolution, but also to increase the light-collecting power.
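The aperture-resolution trade mentioned here follows the standard diffraction relation θ ≈ 1.22 λ/D for a circular aperture. A quick check for a 4 m-class telescope, with 500 nm assumed as a representative visible wavelength:

```python
WAVELENGTH = 500e-9      # m, an assumed representative visible wavelength
APERTURE = 4.0           # m, a 4 m-class aperture such as DKIST's
SUN_DISTANCE = 1.496e11  # m, one astronomical unit

theta = 1.22 * WAVELENGTH / APERTURE          # diffraction limit, radians
arcsec = theta * 206_265                      # radians to arcseconds
km_on_sun = theta * SUN_DISTANCE / 1000.0     # smallest resolvable feature

print(f"{arcsec:.3f} arcsec, about {km_on_sun:.0f} km on the solar surface")
# 0.031 arcsec, about 23 km on the solar surface
```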
Because solar telescopes operate during the day, seeing is generally worse than for night-time telescopes, because the ground around the telescope is heated, which causes turbulence and degrades the resolution. To alleviate this, solar telescopes are usually built on towers and the structures are painted white. The Dutch Open Telescope is built on an open framework to allow the wind to pass through the complete structure and provide cooling around the telescope's main mirror.
Another solar telescope-specific problem is the heat generated by the tightly-focused sunlight. For this reason, a heat stop is an integral part of the design of solar telescopes. For the Daniel K. Inouye Solar Telescope, the heat load is 2.5 MW/m2, with peak powers of 11.4 kW. The goal of such a heat stop is not only to survive this heat load, but also to remain cool enough not to induce any additional turbulence inside the telescope's dome.
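The quoted peak power is consistent with simple aperture arithmetic: clear-sky ground-level sunlight of roughly 1 kW/m² (an assumed typical value, not a figure from this article) intercepted by a 4 m aperture gives a number of the same order:

```python
import math

INSOLATION = 1000.0  # W/m^2, assumed typical clear-sky value at ground level
APERTURE_D = 4.0     # m

collected_w = INSOLATION * math.pi * (APERTURE_D / 2) ** 2
print(f"{collected_w / 1000:.1f} kW")  # 12.6 kW, the same order as the 11.4 kW quoted
```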
Professional solar observatories may have main optical elements with very long focal lengths (although not always; the Dutch Open Telescope is an exception) and light paths operating in a vacuum or in helium to eliminate air motion due to convection inside the telescope. However, this is not possible for apertures over 1 meter, at which the pressure difference at the entrance window of the vacuum tube becomes too large. Therefore, the Daniel K. Inouye Solar Telescope and the EST have active cooling of the dome to minimize the temperature difference between the air inside and outside the telescope.
Due to the Sun's narrow path across the sky, some solar telescopes are fixed in position (and are sometimes buried underground), with the only moving part being a heliostat to track the Sun. One example of this is the McMath-Pierce Solar Telescope.
The Sun, being the closest star to Earth, offers a unique chance to study stellar physics at high resolution. Until the 1990s, it was the only star whose surface had been resolved. General topics that interest a solar astronomer are its 11-year periodicity (i.e., the solar cycle), sunspots, magnetic field activity (see solar dynamo), solar flares, coronal mass ejections, differential rotation, and plasma physics.
Other types of observation
Most solar observatories observe optically at visible, UV, and near infrared wavelengths, but other solar phenomena can be observed — albeit not from the Earth's surface due to the absorption of the atmosphere:
Solar X-ray astronomy, observations of the Sun in x-rays
Multi-spectral solar telescope array (MSSTA), a rocket-launched payload of UV telescopes flown in the 1990s
Leoncito Astronomical Complex operated a submillimeter wavelength solar telescope.
The Radio Solar Telescope Network (RSTN) is a network of solar observatories maintained and operated by the U.S. Air Force Weather Agency.
CERN Axion Solar Telescope (CAST), which has searched for solar axions since the early 2000s
Amateur solar telescopes
In the field of amateur astronomy there are many methods used to observe the Sun. Amateurs use everything from simple systems to project the Sun on a piece of white paper, light blocking filters, Herschel wedges which redirect 95% of the light and heat away from the eyepiece, up to hydrogen-alpha filter systems and even home built spectrohelioscopes. In contrast to professional telescopes, amateur solar telescopes are usually much smaller.
With a conventional telescope, an extremely dark filter at the opening of the primary tube is used to reduce the light of the Sun to tolerable levels. Since the full available spectrum is observed, this is known as "white-light" viewing, and the opening filter is called a "white-light filter". The problem is that even reduced, the full spectrum of white light tends to obscure many of the specific features associated with solar activity, such as prominences and details of the chromosphere. Specialized solar telescopes facilitate clear observation of such features in hydrogen-alpha emission by using a narrow-bandwidth filter implemented with a Fabry-Perot etalon.
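An etalon passes narrow transmission peaks whose spacing (the free spectral range, FSR = λ²/2nd) and width (FWHM ≈ FSR/F, with finesse F = π√R/(1−R)) follow from its gap and reflectivity. A sketch with assumed, plausible parameters rather than the specification of any actual filter:

```python
import math

# Assumed, plausible parameters for an air-spaced hydrogen-alpha etalon;
# not the specification of any particular commercial filter.
wavelength = 656.28e-9   # m, hydrogen-alpha line
gap = 0.3e-3             # m, etalon plate spacing
n = 1.0                  # refractive index of the (air) gap
reflectivity = 0.90      # plate reflectivity

fsr = wavelength ** 2 / (2 * n * gap)                       # free spectral range
finesse = math.pi * math.sqrt(reflectivity) / (1 - reflectivity)
fwhm = fsr / finesse                                        # passband width

print(f"FSR = {fsr * 1e10:.1f} A, finesse = {finesse:.0f}, bandwidth = {fwhm * 1e10:.2f} A")
# FSR = 7.2 A, finesse = 30, bandwidth = 0.24 A
```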
Solar tower
A solar tower is a structure used to support equipment for studying the Sun, and is typically part of solar telescope designs. Solar tower observatories are also called vacuum tower telescopes. Solar towers are used to raise the observation equipment above atmospheric turbulence caused by solar heating of the ground and the radiation of the heat into the atmosphere. Traditional observatories do not have to be placed high above ground level, as they do most of their observation at night, when ground radiation is at a minimum.
The horizontal Snow solar observatory was built on Mount Wilson in 1904. It was soon found that heat radiation was disrupting observations. Almost as soon as the Snow Observatory opened, plans were started for a 60-foot tower that opened in 1908, followed by a 150-foot tower in 1912. The 60-foot tower is currently used to study helioseismology, while the 150-foot tower is active in UCLA's Solar Cycle Program.
The term has also been used to refer to other structures used for experimental purposes, such as the Solar Tower Atmospheric Cherenkov Effect Experiment (STACEE), which is being used to study Cherenkov radiation, and the Weizmann Institute solar power tower.
Other solar telescopes that have solar towers are Richard B. Dunn Solar Telescope, Solar Observatory Tower Meudon and others.
Selected heliophysics missions
Solar Terrestrial Relations Observatory (STEREO) mission was launched in October 2006. Two identical spacecraft were launched into orbits that caused them to (respectively) pull further ahead of and fall gradually behind Earth. This enables stereoscopic imaging of the Sun and solar phenomena, such as coronal mass ejections.
The Solar Dynamics Observatory was launched in 2010 and monitors the Sun from a geosynchronous orbit around Earth.
Parker Solar Probe was launched in 2018 aboard a Delta IV Heavy rocket and will reach a perihelion of in 2025, making it the closest-orbiting manmade satellite and the first spacecraft to fly low into the solar corona.
Solar Orbiter mission (SolO) was launched in 2020 and will reach a minimum perihelion of , making it the closest satellite with Sun-facing cameras.
CubeSat for Solar Particles (CuSP) was launched as a rideshare on Artemis 1 on 16 November 2022 to study particles and magnetic fields.
Indian Space Research Organisation has launched a satellite named Aditya-L1 on 2 September 2023. Its main instrument will be a coronagraph for studying the dynamics of the solar corona.
Selected solar telescopes
The Einstein Tower (Einsteinturm) became operational in 1924
McMath–Pierce Solar Telescope (1.6 m diameter, 1961–)
McMath–Hulbert Observatory (24"/61 cm diameter, 1941–1979)
Swedish Vacuum Solar Telescope (47.5 cm diameter, 1985–2000)
Swedish 1-m Solar Telescope (1 m diameter, 2002–)
Richard B. Dunn Solar Telescope (0.76 m diameter, 1969–)
Mount Wilson Observatory
Dutch Open Telescope (45 cm diameter, 1997–)
The Teide Observatory hosts multiple solar telescopes, including
the 70 cm Vacuum Tower Telescope (1989–) and
the 1.5 m GREGOR Solar Telescope (2012–).
Goode Solar Telescope (1.6 m, 2009–)
Daocheng Solar Radio Telescope, Chinese radio telescope with 313 parabolic antennas
Daniel K. Inouye Solar Telescope (DKIST), a telescope with 4 m aperture.
European Solar Telescope (EST), a proposed 4-meter class aperture telescope.
Chinese Giant Solar Telescope (CGST), a proposed 5- to 8-meter aperture telescope.
National Large Solar Telescope (NLST), a Gregorian multi-purpose open telescope proposed to be built and installed in India, which aims to study the Sun's microscopic structure.
| Technology | Telescope | null |
4715070 | https://en.wikipedia.org/wiki/Soil%20chemistry | Soil chemistry | Soil chemistry is the study of the chemical characteristics of soil. Soil chemistry is affected by mineral composition, organic matter and environmental factors. In the early 1850s a consulting chemist to the Royal Agricultural Society in England, named J. Thomas Way, performed many experiments on how soils exchange ions, and is considered the father of soil chemistry. Other scientists who contributed to this branch of ecology include Edmund Ruffin and Linus Pauling.
History
Until the late 1960s, soil chemistry focused primarily on chemical reactions in the soil that contribute to pedogenesis or that affect plant growth. Since then, concerns have grown about environmental pollution, organic and inorganic soil contamination, and potential ecological and environmental health risks. Consequently, the emphasis in soil chemistry has shifted from pedology and agricultural soil science to environmental soil science.
Environmental soil chemistry
A knowledge of environmental soil chemistry is paramount to predicting the fate of contaminants, as well as the processes by which they are initially released into the soil. Once a chemical is exposed to the soil environment, myriad chemical reactions can occur that may increase or decrease contaminant toxicity. These reactions include adsorption/desorption, precipitation, polymerization, dissolution, hydrolysis, hydration, complexation and oxidation/reduction. These reactions are often disregarded by scientists and engineers involved with environmental remediation. Understanding these processes enables us to better predict the fate and toxicity of contaminants and provides the knowledge to develop scientifically sound, cost-effective remediation strategies.
Key concepts
Soil structure
Soil structure refers to the manner in which individual soil particles are grouped together to form clusters of particles called aggregates. This is determined by the type of soil formation, the parent material, and the texture. Soil structure can be influenced by a wide variety of biota, as well as by human management methods.
Formation of aggregates
Aggregates form under varying soil-forming conditions and differ in structure as a result:
Natural aggregation results in soil peds.
Compaction produces hard dirt clods rather than soft soil peds. Clods result from tillage, excavation, and using heavy field equipment under poor (wet) soil conditions.
Microbial activity also influences the formation of aggregates.
Types of soil structure
The classification of soil structural forms is based largely on shape.
Spheroidal structure: sphere-like or rounded in shape. All the axes are approximately of the same dimensions, with curved and irregular faces. These are found commonly in cultivated fields.
Crumb structure: small, porous aggregates resembling crumbs of bread
Granular structure: less porous and more durable than crumb aggregates
Plate-like structure: aggregates aligned mainly along the horizontal axis, with thin units termed laminar and thick units termed platy. Platy structures are usually found in surface soils and sometimes in the lower subsoil.
Block-like structure: particles arranged around a central point and enclosed by surfaces that may be either flat or somewhat rounded. These types are generally found in the subsoil.
Sub-angular blocky: corners are more rounded than in angular blocky aggregates
Prism-like structure: particles that are longer than they are wide, with the vertical axis greater than the horizontal axis. They are commonly found in subsoil horizons of arid and semi-arid region soils.
Prismatic: more angular and hexagonal at the top of the aggregate
Columnar: particles that are rounded at the top of the aggregate
Minerals
The mineral components of the soil are derived from the parent rocks or regolith. Minerals make up about 90% of the total weight of the soil. Important elements found in compound form include oxygen, iron, silicon, aluminium, nitrogen, phosphorus, potassium, calcium, magnesium, carbon and hydrogen.
The distinction between primary and secondary minerals helps define which minerals a soil inherits from its parent rock and which form later through weathering.
Soil pores
The interactions of the soil's micropores and macropores are important to soil chemistry, as they allow for the provision of water and gases to the soil and the surrounding atmosphere. Macropores help transport molecules and substances into and out of the micropores, which lie within the aggregates themselves.
Soil water
Water is essential for organisms within the soil profile, and it partially fills up the macropores in an ideal soil.
Leaching of the soil occurs as water carries dissolved ions down into the lower soil horizons, which can leave the horizons above more oxidized.
Water also moves from higher to lower water potential; this can result in capillary action and gravitational flow, driven by adhesion of water to soil surfaces and cohesion among water molecules.
Air/Atmosphere
Soil air contains the same three main gases as the atmosphere, namely oxygen, carbon dioxide (CO2) and nitrogen, but not in the same proportions: in soil air, oxygen is roughly 20%, nitrogen 79% and CO2 from 0.15% to 0.65% by volume. CO2 increases with the depth of the soil because of the decomposition of accumulated organic matter and the abundance of plant roots. The presence of oxygen in the soil is important because it helps in breaking down insoluble rocky mass into soluble minerals and in organic humification. These gases facilitate the chemical reactions of microorganisms. Accumulation of soluble nutrients in the soil makes it more productive, whereas if the soil is deficient in oxygen, microbial activity is slowed or eliminated. Important factors controlling the soil atmosphere are temperature, atmospheric pressure, wind/aeration and rainfall.
Soil texture
Soil texture influences soil chemistry through the soil's ability to maintain its structure, the restriction of water flow, and the particle content of the soil. Soil texture considers all particle types, and a soil texture triangle is a chart used to determine the percentages of each particle type, which sum to 100% for the soil profile. These soil separates differ not only in their sizes but also in their bearing on some of the important factors affecting plant growth, such as soil aeration, workability, and the movement and availability of water and nutrients.
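To make the texture-triangle idea concrete, here is a minimal Python sketch of texture classification. The class boundaries below are simplified approximations of a few USDA-style classes, not the full twelve-class triangle, so treat it as an illustration rather than a definitive classifier:

```python
def classify_texture(sand: float, silt: float, clay: float) -> str:
    """Very simplified soil-texture lookup (approximate USDA-style rules).

    Percentages must sum to 100. Only a few of the twelve USDA classes
    are distinguished here; real classification uses the full triangle.
    """
    if abs(sand + silt + clay - 100.0) > 1e-6:
        raise ValueError("sand + silt + clay must total 100%")
    if clay >= 40:
        return "clay"
    if silt >= 80:
        return "silt"
    if sand >= 85:
        return "sand"
    # USDA loam: clay 7-27%, silt 28-50%, sand below 52%
    if 7 <= clay <= 27 and 28 <= silt <= 50 and sand <= 52:
        return "loam"
    return "other (consult the full texture triangle)"

print(classify_texture(40, 40, 20))  # -> loam
```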
Sand
Sand particles range in size from about 0.05 to 2 mm. Sand is the coarsest of the particle groups, with the largest particles and pore spaces, and it drains most easily. Sand particles become more involved in chemical reactions when coated with clay.
Silt
Silt particles range in size from about 0.002 to 0.05 mm, and silt pores are intermediate in size compared with the other particle groups. Silt has the texture of flour. Silt particles allow water and air to pass readily, yet retain moisture for crop growth. Silty soil contains sufficient quantities of nutrients, both organic and inorganic.
Clay
Clay has the smallest particles (below about 0.002 mm) of the particle groups. Clay also has the smallest pores, which give it a greater total porosity, and it does not drain well. Clay has a sticky texture when wet. Some kinds shrink and swell as they dry and wet.
Loam
Loam is a soil textural class combining sand, silt and clay in moderate proportions. Loams are named for their dominant separates, e.g. sandy loam, clay loam, silt loam.
Biota
Biota are organisms that, along with organic matter, comprise the biological system of the soil. The vast majority of biological activity takes place near the soil surface, usually in the A horizon of a soil profile. Biota rely on inputs of organic matter to sustain themselves and increase population sizes. In return, they contribute nutrients to the soil, typically after the organic matter has been cycled through the soil trophic food web.
With the many different interactions that take place, biota can largely impact their environment physically, chemically, and biologically (Pavao-Zuckerman, 2008). A prominent factor that helps to provide some degree of stability with these interactions is biodiversity, a key component of all ecological communities. Biodiversity allows for a consistent flow of energy through trophic levels and strongly influences the structure of ecological communities in the soil.
Soil organisms
Types of living soil biota can be divided into categories of plants (flora), animals (fauna), and microorganisms. Plants play a role in soil chemistry by exchanging nutrients with microorganisms and absorbing nutrients, creating concentration gradients of cations and anions. In addition to this, the differences in water potential created by plants influence water movement in soil, which affects the form and transportation of various particles. Vegetative cover on the soil surface greatly reduces erosion, which in turn prevents compaction and helps to maintain aeration in the soil pore space, providing oxygen and carbon to the biota and cation exchange sites that depend on it. Animals are essential to soil chemistry, as they regulate the cycling of nutrients and energy into different forms. This is primarily done through food webs. Some types of soil animals can be found below.
Detritivores
Examples include millipedes, woodlice, and dung beetles
Decomposers
Examples include fungi, earthworms, and bacteria
Protozoans
Examples include amoeba, euglena, and paramecium
Soil microbes play a major role in a multitude of biological and chemical activities that take place in soil. These microorganisms are estimated to make up around 1,000–10,000 kg of biomass per hectare in some soils (García-Sánchez, 2016). They are mostly recognized for their associations with plants. The best-known example is mycorrhizae, fungi that supply nitrogen to plant roots in exchange for carbon in a symbiotic relationship. Additionally, microbes are responsible for the majority of respiration that takes place in the soil, which has implications for the release of gases like methane and nitrous oxide from soil (giving it significance in discussions of climate change) (Frouz et al., 2020). Given the significance of the effects of microbes on their environment, the conservation and promotion of microbial life is often desired by plant growers, conservationists, and ecologists.
Soil organic matter
Soil organic matter is the largest source of nutrients and energy in a soil. Its inputs strongly influence key soil factors such as types of biota, pH, and even soil order. Soil organic matter is often strategically applied by plant growers because of its ability to improve soil structure, supply nutrients, manage pH, increase water retention, and regulate soil temperature (which directly affects water dynamics and biota).
The chief elements found in humus, the product of organic matter decomposition in soil, are carbon, hydrogen, oxygen, sulphur and nitrogen. The important compounds found in humus are carbohydrates, phosphoric acid, some organic acids, resins, urea, etc. Humus is a dynamic product, constantly changing through oxidation, reduction and hydrolysis; hence it is rich in carbon and relatively poor in nitrogen. This material can come from a variety of sources, but often derives from livestock manure and plant residues.
Though there are many other variables, such as texture, soils that lack sufficient organic matter content are susceptible to soil degradation and drying, as there is nothing supporting the soil structure. This often leads to a decline in soil fertility and an increase in erodibility.
Other associated concepts:
Anion and cation exchange capacity
Soil pH
Mineral formation and transformation processes and pedogenesis
Clay mineralogy
Sorption and precipitation reactions in soil
Chemistry of problem soils
C/N ratio
Erosion and soil degradation
Soil cycle
Many plant nutrients in soil undergo biogeochemical cycles throughout their environment. These cycles are influenced by water, gas exchange, biological activity, immobilization, and mineralization dynamics, but each element has its own course of flow (Deemy et al., 2022). For example, nitrogen moves from an isolated gaseous form to the compounds nitrate and nitrite as it moves through soil and becomes available to plants. In comparison, an element like phosphorus transfers in mineral form, as it is contained in rock material. The elements also vary greatly in mobility, solubility, and the rates at which they move through their natural cycles. Together, these cycles drive all of the processes of soil chemistry.
Elemental cycles
Carbon
Hydrogen
Oxygen
Nitrogen
Phosphorus
Potassium
Sulfur
Calcium
Magnesium
Iron
Boron
Manganese
Copper
Zinc
Nickel
Chlorine
Methods of investigation
New knowledge about the chemistry of soils often comes from studies in the laboratory, in which soil samples taken from undisturbed soil horizons in the field are used in experiments that include replicated treatments and controls. In many cases, the soil samples are air dried at ambient temperatures and sieved to a 2 mm size prior to storage for further study. Such drying and sieving markedly disrupts soil structure, microbial population diversity, and chemical properties related to pH, oxidation-reduction status, manganese oxidation state, and dissolved organic matter, among other properties. Renewed interest in recent decades has led many soil chemists to maintain soil samples in a field-moist condition, stored under aerobic conditions, before and during investigations.
Two approaches are frequently used in laboratory investigations in soil chemistry. The first is known as batch equilibration. The chemist adds a given volume of water or salt solution of known concentration of dissolved ions to a mass of soil (e.g., 25 mL of solution to 5 g of soil in a centrifuge tube or flask). The soil slurry is then shaken or swirled for a given amount of time (e.g., 15 minutes to many hours) to establish a steady-state or equilibrium condition prior to filtering or centrifuging at high speed to separate sand grains, silt particles, and clay colloids from the equilibrated solution. The filtrate or centrifugate is then analyzed using one of several methods, including ion-specific electrodes, atomic absorption spectrophotometry, inductively coupled plasma spectrometry, ion chromatography, and colorimetric methods. In each case, the analysis quantifies the concentration or activity of an ion or molecule in the solution phase, and by multiplying the measured concentration or activity (e.g., in mg ion/mL) by the solution-to-soil ratio (mL of extraction solution/g soil), the chemist obtains the result in mg ion/g soil. This result, based on the mass of soil, allows comparisons between different soils and treatments. A related approach uses a known volume of solution to leach (infiltrate) the extracting solution through a quantity of soil in small columns at a controlled rate, to simulate how rain, snow meltwater, and irrigation water pass through soils in the field. The filtrate is then analyzed using the same methods as used in batch equilibrations.
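As a worked example of the mass-basis conversion just described, the following sketch shows the arithmetic; the measurement values are hypothetical, chosen only to illustrate the unit bookkeeping:

```python
# Convert a measured solution concentration to a soil-mass basis, as above:
# (mg ion / mL) x (mL solution / g soil) = mg ion / g soil.
# All numbers below are hypothetical, for illustration only.

solution_volume_ml = 25.0        # volume of extracting solution added
soil_mass_g = 5.0                # mass of soil in the tube
measured_conc_mg_per_ml = 0.012  # analyte concentration in the filtrate

solution_to_soil_ratio = solution_volume_ml / soil_mass_g  # mL/g
result_mg_per_g = measured_conc_mg_per_ml * solution_to_soil_ratio
print(f"{result_mg_per_g:.3f} mg ion per g soil")  # -> 0.060 mg/g
```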
Another approach to quantifying soil processes and phenomena uses in situ methods that do not disrupt the soil, as occurs when the soil is shaken or leached with an extracting solution. These methods usually use surface spectroscopic techniques, such as Fourier transform infrared spectroscopy, nuclear magnetic resonance, Mössbauer spectroscopy, and X-ray spectroscopy. These approaches aim to obtain information on the chemical nature of the mineralogy and chemistry of particle and colloid surfaces, and on how ions and molecules are associated with such surfaces by adsorption, complexation, and precipitation.
These laboratory experiments and analyses have an advantage over field studies in that chemical mechanisms of how ions and molecules react in soils can be inferred from the data. One can then draw conclusions or frame new hypotheses about similar reactions in different soils with diverse textures, organic matter contents, types of clay minerals and oxides, pH, and drainage conditions. Laboratory studies have the disadvantage that they lose some of the realism and heterogeneity of undisturbed soil in the field, while gaining control and the power of extrapolation to unstudied soils. Mechanistic laboratory studies combined with more realistic, less controlled, observational field studies often yield accurate approximations of the behavior and chemistry of soils that may be spatially heterogeneous and temporally variable. Another challenge faced by soil chemists is how microbial populations and enzyme activity in field soils may be changed when the soil is disturbed, both in the field and the laboratory, particularly when soil samples are dried prior to laboratory studies and analysis.
| Physical sciences | Soil science | Earth science |
4715998 | https://en.wikipedia.org/wiki/Competition%20%28biology%29 | Competition (biology) | Competition is an interaction between organisms or species in which both require one or more resources that are in limited supply (such as food, water, or territory). Competition lowers the fitness of both organisms involved since the presence of one of the organisms always reduces the amount of the resource available to the other.
In the study of community ecology, competition within and between members of a species is an important biological interaction. Competition is one of many interacting biotic and abiotic factors that affect community structure, species diversity, and population dynamics (shifts in a population over time).
There are three major mechanisms of competition: interference, exploitation, and apparent competition (in order from most direct to least direct). Interference and exploitation competition can be classed as "real" forms of competition, while apparent competition is not, as organisms do not share a resource, but instead share a predator. Competition among members of the same species is known as intraspecific competition, while competition between individuals of different species is known as interspecific competition.
According to the competitive exclusion principle, species less suited to compete for resources must either adapt or die out, although competitive exclusion is rarely found in natural ecosystems. According to evolutionary theory, competition within and between species for resources is important in natural selection. More recently, however, researchers have suggested that evolutionary biodiversity for vertebrates has been driven not by competition between organisms, but by these animals adapting to colonize empty livable space; this is termed the 'Room to Roam' hypothesis.
Interference competition
During interference competition, also called contest competition, organisms interact directly by fighting for scarce resources. For example, large aphids defend feeding sites on cottonwood leaves by ejecting smaller aphids from better sites. Male-male competition in red deer during rut is an example of interference competition that occurs within a species.
Interference competition occurs directly between individuals via aggression when the individuals interfere with the foraging, survival, and reproduction of others, or by directly preventing their physical establishment in a portion of the habitat. An example of this can be seen between the ant Novomessor cockerelli and red harvester ants, where the former interferes with the ability of the latter to forage by plugging the entrances to their colonies with small rocks. Male bowerbirds, who create elaborate structures called bowers to attract potential mates, may reduce the fitness of their neighbors directly by stealing decorations from their structures.
In animals, interference competition is a strategy mainly adopted by larger and stronger organisms within a habitat. As such, populations with high interference competition have adult-driven generation cycles. At first, the growth of juveniles is stunted by larger adult competitors. However, once the juveniles reach adulthood, they experience a secondary growth cycle. Plants, on the other hand, primarily engage in interference competition with their neighbors through allelopathy, or the production of biochemicals.
Interference competition can be seen as a strategy with a clear cost (injury or death) and benefit (obtaining resources that would have gone to other organisms). In order to cope with strong interference competition, other organisms often either do the same or engage in exploitation competition. For example, depending on the season, larger red deer males are competitively dominant through interference competition. Does and fawns, however, cope with this through temporal resource partitioning, foraging for food only when adult males are not present.
Exploitation competition
Exploitation competition, or scramble competition, occurs indirectly when organisms both use a common limiting resource or shared food item. Instead of fighting or exhibiting aggressive behavior in order to win resources, exploitative competition occurs when resource use by one organism depletes the total amount available for other organisms. These organisms might never interact directly but compete by responding to changes in resource levels. Very obvious examples of this phenomenon include a diurnal species and a nocturnal species that nevertheless share the same resources or a plant that competes with neighboring plants for light, nutrients, and space for root growth.
This form of competition typically rewards those organisms that claim the resource first. As such, exploitation competition is often size-dependent, and smaller organisms are favored, since they typically have higher foraging rates. Where exploitative competition dominates an ecosystem, this mechanism of competition can lead to a juvenile-driven generation cycle: individual juveniles succeed and grow fast, but once they mature they are outcompeted by smaller organisms.
In plants, exploitative competition can occur both above- and belowground. Aboveground, plants reduce the fitness of their neighbors by vying for sunlight; belowground, plants consume nitrogen by absorbing it into their roots, making nitrogen unavailable to nearby plants. Plants that produce many roots typically reduce soil nitrogen to very low levels, eventually killing neighboring plants.
Exploitative competition has also been shown to occur both within species (intraspecific) and between different species (interspecific). Furthermore, many competitive interactions between organisms are some combination of exploitative and interference competition, meaning the two mechanisms are far from mutually exclusive. For example, a 2019 study found that the native thrip species Frankliniella intonsa was competitively dominant over the invasive thrip species Frankliniella occidentalis, because it not only spent more time feeding (exploitative competition) but also more time guarding its resources (interference competition). Plants may also exhibit both forms of competition, not only scrambling for space for root growth but also directly inhibiting other plants' development through allelopathy.
Apparent competition
Apparent competition occurs when two otherwise unrelated prey species indirectly compete for survival through a shared predator. This form of competition typically manifests in new equilibrium abundances of each prey species. For example, suppose there are two species (species A and species B), which are preyed upon by food-limited predator species C. Scientists observe an increase in the abundance of species A and a decline in the abundance of species B. In an apparent competition model, this relationship is found to be mediated through predator C; a population explosion of species A increases the abundance of predator species C due to a greater total food source. Since there are now more predators, species A and B would be hunted at higher rates than before. Thus, the success of species A was to the detriment of species B — not because they competed for resources, but because their increased numbers had indirect effects on the predator population.
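A minimal simulation makes the indirect coupling concrete. The sketch below uses logistic prey growth and a linear predator response; all parameter values are illustrative assumptions, not taken from any particular study:

```python
# Minimal sketch of apparent competition (after Holt 1977): two prey
# species A and B never interact directly; they are coupled only through
# a shared, food-limited predator C. Parameter values are illustrative.

def step(A, B, C, dt=0.01):
    rA, rB = 1.0, 0.8        # prey intrinsic growth rates
    KA, KB = 2.0, 2.0        # prey carrying capacities
    aA, aB = 0.5, 0.5        # predator attack rates on each prey
    eff, m = 0.4, 0.3        # conversion efficiency, predator mortality
    dA = rA * A * (1 - A / KA) - aA * A * C
    dB = rB * B * (1 - B / KB) - aB * B * C
    dC = eff * (aA * A + aB * B) * C - m * C
    return A + dA * dt, B + dB * dt, C + dC * dt

A, B, C = 1.0, 1.0, 0.5
for _ in range(50_000):      # crude Euler integration to t = 500
    A, B, C = step(A, B, C)
print(round(A, 2), round(B, 2), round(C, 2))
# Raising rA (more of prey A) increases C and depresses B at equilibrium,
# even though A and B share no resource at all.
```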
Ecologists have explored this one-predator/two-prey model since as early as 1925, but the term "apparent competition" was first coined by University of Florida ecologist Robert D. Holt in 1977. Holt found that field ecologists at the time were erroneously attributing negative interactions among prey species to niche partitioning and competitive exclusion, ignoring the role of food-limited predators.
Apparent competition and realized niche
Apparent competition can help shape a species' realized niche, the area and resources in which the species can actually persist given interspecific interactions. The effect on realized niches can be very strong, especially in the absence of more traditional interference or exploitative competition. A real-world example was studied in the late 1960s, when the introduction of snowshoe hares (Lepus americanus) to Newfoundland reduced the habitat range of native arctic hares (Lepus arcticus). While some ecologists hypothesized that this was due to an overlap in niche, others argued that the more plausible mechanism was that snowshoe hare populations led to an explosion in food-limited lynx populations, a shared predator of both prey species. Since the arctic hare has a relatively weaker defense tactic than the snowshoe hare, it was excluded from woodland areas on the basis of differential predation. However, both apparent competition and exploitation competition might help explain the situation to some degree. Support for the impact of competition on the breadth of the realized niche with respect to diet is becoming more common in a variety of systems based upon isotopic and spatial data, including both carnivores and small mammals.
Asymmetric apparent competition
Apparent competition can be symmetric or asymmetric. Symmetric apparent competition negatively impacts both species equally (-,-), from which it can be inferred that both species will persist. However, asymmetric apparent competition occurs when one species is affected less than the other. The most extreme scenario of asymmetric apparent competition is when one species is not affected at all by the increase in the predator, which can be seen as a form of amensalism (0, -). Human impacts on endangered prey species have been characterized by conservation scientists as an extreme form of asymmetric apparent competition, often through introducing predator species into ecosystems or resource subsidies. An example of fully asymmetric apparent competition which often occurs near urban centers is subsidies in the form of human garbage or waste. In the early 2000s, the common raven (Corvus corax) population in the Mojave Desert increased due to an influx of human garbage, leading to an indirect negative effect on juvenile desert tortoises (Gopherus agassizii). Asymmetry in apparent competition can also arise as a consequence of resource competition. An empirical example is provided by two small fish species in postglacial lakes in Western Canada, where resource competition between prickly sculpin and threespine stickleback fish leads to a spatial niche shift mainly in threespine stickleback. As a consequence of this shift, predation by a shared trout predator increases for stickleback but decreases for sculpin in lakes where the two species co-occur compared to lakes in which each species occurs on its own together with trout predators. Because sharing predators often comes together with competition for shared food resources, apparent competition and resource competition may often interplay in nature.
Apparent competition in the human microbiome
Apparent competition has also been observed in and on the human body. The human immune system can act as the generalist predator: a high abundance of a certain bacterium may induce an immune response that damages all pathogens in the body. Another example involves two populations of bacteria that can both support a predatory bacteriophage; in most situations, the population more resistant to infection by the shared predator will replace the other.
Apparent competition has also been suggested as an exploitable phenomenon for cancer treatments. Highly specialized viruses that are developed to target malignant cancer cells often go locally extinct prior to eradicating all cancer. However, if a virus were developed that targets both healthy and unhealthy host cells to some degree, the large number of healthy cells would support the predatory virus for long enough to eliminate all malignant cells.
Size-asymmetric competition
Competition can be completely symmetric (all individuals receive the same amount of resources, irrespective of their size), perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass), or absolutely size-asymmetric (the largest individuals exploit all the available resource).
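One common way this continuum is formalized in the competition literature is to let an individual's share of a contested resource scale with its size raised to a power theta. The sketch below, with hypothetical sizes, shows how theta = 0, theta = 1, and large theta reproduce the three cases just defined:

```python
# Individual i's share of a contested resource is B_i**theta / sum(B_j**theta),
# where B is size (biomass). theta = 0 gives complete symmetry (equal shares),
# theta = 1 perfect size symmetry (shares proportional to size), and
# theta -> infinity complete size asymmetry (the largest takes everything).
# A sketch, not a definitive model; parameterizations vary between studies.

def resource_shares(sizes, theta):
    weights = [b ** theta for b in sizes]
    total = sum(weights)
    return [w / total for w in weights]

sizes = [1.0, 2.0, 4.0]          # hypothetical individual biomasses
for theta in (0, 1, 8):
    shares = [round(s, 3) for s in resource_shares(sizes, theta)]
    print(f"theta={theta}: {shares}")
# theta=0: [0.333, 0.333, 0.333]  equal shares
# theta=1: [0.143, 0.286, 0.571]  proportional to size
# theta=8: [0.0, 0.004, 0.996]    largest individual dominates
```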
Among plants, size asymmetry is context-dependent and competition can be both asymmetric and symmetric depending on the most limiting resource. In forest stands, below-ground competition for nutrients and water is size-symmetric, because a tree's root system is typically proportionate to the biomass of the entire tree. Conversely, above-ground competition for light is size-asymmetric — since light has directionality, the forest canopy is dominated entirely by the largest trees. These trees disproportionately exploit most of the resource for their biomass, making the interaction size asymmetric. Whether above-ground or below-ground resources are more limiting can have major effects on the structure and diversity of ecological communities; in mixed beech stands, for example, size-asymmetric competition for light is a stronger predictor of growth compared with competition for soil resources.
Within and between species
Competition can occur between individuals of the same species, called intraspecific competition, or between different species, called interspecific competition. Studies show that intraspecific competition can regulate population dynamics (changes in population size over time). This occurs because individuals become crowded as the population grows. Since individuals within a population require the same resources, crowding causes resources to become more limited. Some individuals (typically small juveniles) eventually do not acquire enough resources and die or do not reproduce. This reduces population size and slows population growth.
Species also interact with other species that require the same resources. Consequently, interspecific competition can alter the sizes of many species populations at the same time. Experiments demonstrate that when species compete for a limited resource, one species eventually drives the populations of other species extinct. These experiments suggest that competing species cannot coexist (they cannot live together in the same area) because the best competitor will exclude all other competing species.
Intraspecific
Intraspecific competition occurs when members of the same species compete for the same resources in an ecosystem. A simple example is a stand of equally-spaced plants, which are all of the same age. The higher the density of plants, the more plants will be present per unit ground area, and the stronger the competition will be for resources such as light, water, or nutrients.
Interspecific
Interspecific competition may occur when individuals of two separate species share a limiting resource in the same area. If the resource cannot support both populations, then lowered fecundity, growth, or survival may result in at least one species. Interspecific competition has the potential to alter populations, communities, and the evolution of interacting species. An example among animals is the case of cheetahs and lions: since both species feed on similar prey, each is negatively impacted by the presence of the other because it will have less food. Nevertheless, they still persist together, despite the prediction that under competition one will displace the other. In fact, lions sometimes steal prey items killed by cheetahs. Potential competitors can also kill each other, in so-called 'intraguild predation'. For example, in southern California coyotes often kill and eat gray foxes and bobcats, all three carnivores sharing the same stable prey (small mammals).
An example among protozoa involves Paramecium aurelia and Paramecium caudatum. The Russian ecologist Georgy Gause studied the competition between these two species of Paramecium that occurred as a result of their coexistence. Through his studies, Gause proposed the competitive exclusion principle, observing the competition that occurred when their ecological niches overlapped.
Competition has been observed between individuals, populations, and species, but there is little evidence that competition has been the driving force in the evolution of large groups. For example, mammals lived beside reptiles for many millions of years but were unable to gain a competitive edge until dinosaurs were devastated by the Cretaceous–Paleogene extinction event.
Evolutionary strategies
In evolutionary contexts, competition is related to the concept of r/K selection theory, which relates to the selection of traits which promote success in particular environments. The theory originates from work on island biogeography by the ecologists Robert MacArthur and E. O. Wilson.
In r/K selection theory, selective pressures are hypothesized to drive evolution in one of two stereotyped directions: r- or K-selection. These terms, r and K, are derived from standard ecological algebra, as illustrated in the simple Verhulst equation of population dynamics:

$$\frac{dN}{dt} = rN\left(1 - \frac{N}{K}\right)$$

where r is the growth rate of the population (N), and K is the carrying capacity of its local environmental setting. Typically, r-selected species exploit empty niches and produce many offspring, each of which has a relatively low probability of surviving to adulthood. In contrast, K-selected species are strong competitors in crowded niches and invest more heavily in far fewer offspring, each with a relatively high probability of surviving to adulthood.
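A quick numerical sketch of the logistic model above, with illustrative values of r, K, and the starting population, shows the behavior underlying the r/K contrast: rapid early growth governed by r, then saturation at the carrying capacity K:

```python
# Euler integration of the Verhulst equation dN/dt = r*N*(1 - N/K).
# Parameter values are illustrative only.

r, K = 0.5, 1000.0
N, dt = 10.0, 0.1
for _ in range(1_200):             # integrate to t = 120
    N += r * N * (1 - N / K) * dt
print(round(N))                    # ~1000: the population settles at K
```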
Competitive exclusion principle
To explain how species coexist, in 1934 Georgy Gause proposed the competitive exclusion principle, also called the Gause principle: species cannot coexist if they have the same ecological niche. The word "niche" refers to a species' requirements for survival and reproduction. These requirements include both resources (like food) and proper habitat conditions (like temperature or pH). Gause reasoned that if two species had identical niches (required identical resources and habitats), they would attempt to live in exactly the same area and would compete for exactly the same resources. If this happened, the species that was the best competitor would always exclude its competitors from that area. Therefore, species must have at least slightly different niches in order to coexist.
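Gause's principle is often formalized with the Lotka–Volterra competition equations, a standard textbook treatment rather than anything specific to Gause's own experiments:

$$\frac{dN_1}{dt} = r_1 N_1\left(1 - \frac{N_1 + \alpha_{12} N_2}{K_1}\right), \qquad \frac{dN_2}{dt} = r_2 N_2\left(1 - \frac{N_2 + \alpha_{21} N_1}{K_2}\right)$$

where $\alpha_{12}$ measures how strongly species 2 depresses species 1 relative to species 1's effect on itself. Stable coexistence requires $\alpha_{12} < K_1/K_2$ and $\alpha_{21} < K_2/K_1$. For two species with effectively identical niches (both $\alpha$ values near 1 and similar $K$), the two conditions cannot both hold robustly, and the better competitor excludes the other, exactly as the principle predicts.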
Character displacement
Competition can cause species to evolve differences in traits. This occurs because individuals of a species with traits similar to those of a competing species always experience strong interspecific competition. These individuals have lower reproduction and survival than individuals with traits that differ from their competitors, and consequently they will not contribute many offspring to future generations. For example, two of Darwin's finches, Geospiza fortis and Geospiza fuliginosa, can be found alone or together on the Galapagos Islands. Each species' population has more individuals with intermediate-sized beaks when it lives on an island without the other species. However, when both species are present on the same island, competition is intense between individuals of both species that have intermediate-sized beaks, because they all require intermediate-sized seeds. Consequently, individuals with small and large beaks have greater survival and reproduction on these islands than individuals with intermediate-sized beaks. Different finch species can coexist if they have traits, for instance beak size, that allow them to specialize on particular resources. When G. fortis and G. fuliginosa are present on the same island, G. fuliginosa tends to evolve a small beak and G. fortis a large beak. The observation that competing species' traits are more different when they live in the same area than when they live in different areas is called character displacement. For the two finch species, beak size was displaced: beaks became smaller in one species and larger in the other. Studies of character displacement are important because they provide evidence that competition plays an important role in determining ecological and evolutionary patterns in nature.
| Biology and health sciences | Ecology | null |
2554507 | https://en.wikipedia.org/wiki/Scalding | Scalding | Scalding is a form of thermal burn resulting from heated fluids such as boiling water or steam. Most scalds are considered first- or second-degree burns, but third-degree burns can result, especially with prolonged contact. The term is from the Latin word calidus, meaning hot.
Causes
Most scalds result from exposure to high-temperature water, such as tap water in baths and showers, water heaters, or cooking water, or from spilled hot drinks, such as coffee.
Scalds can be more severe when steam impinges on bare skin, because steam can reach higher temperatures than liquid water and it transfers latent heat by condensation. However, when clothes are soaked with hot water, the heat transfer often lasts longer, since the affected body part cannot be removed from the heat source as quickly.
Temperatures
The temperature of tap water should be limited to prevent discomfort and scalding. However, stored warm water must be kept hot enough to inhibit the growth of Legionella bacteria.
The American Burn Association gives approximate times for a scalding injury to occur when skin is in contact with hot water:
155 °F (68 °C): about 1 second
148 °F (64 °C): about 2 seconds
140 °F (60 °C): about 5 seconds
133 °F (56 °C): about 15 seconds
125 °F (52 °C): about 90 seconds
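For illustration, the figures above can be packaged as a small lookup. This is a sketch using only the ABA numbers quoted in the text, not a general scald-risk model, and it is no substitute for safety guidance:

```python
# American Burn Association figures from the list above, as a simple lookup.
SCALD_TIMES = {        # water temperature (deg C) -> approx. seconds to injury
    68: 1,             # 155 deg F
    64: 2,             # 148 deg F
    60: 5,             # 140 deg F
    56: 15,            # 133 deg F
    52: 90,            # 125 deg F
}

def seconds_to_scald(temp_c: float):
    """Return the tabulated injury time for the warmest listed temperature
    not exceeding temp_c, or None if the water is cooler than 52 deg C."""
    eligible = [t for t in SCALD_TIMES if t <= temp_c]
    return SCALD_TIMES[max(eligible)] if eligible else None

print(seconds_to_scald(60))   # -> 5 (seconds)
print(seconds_to_scald(45))   # -> None (below the tabulated range)
```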
Scalds are more common in children, especially from the accidental spilling of hot liquids.
Treatment
Applying first aid for scalds is the same as for burns. First, the site of the injury should be removed from the source of heat, to prevent further scalding. If the burn is at least second degree, remove any jewelry or clothing from the site, unless it is already stuck to the skin. Cool the scald for about 20 minutes with cool or lukewarm (not cold) water, such as water from a tap.
With second-degree burns, blisters will form, but should never be popped, as it only increases chances of infection. With third-degree burns, it is best to wrap the injury very loosely to keep it clean, and seek expert medical attention.
Treatments to avoid
Ice should be avoided, as it can do further damage to the area around the injury; butter and toothpaste should likewise be avoided.
Food production
Beef, poultry and pork
The carcasses of beef, poultry and pork are commonly scalded after slaughter to facilitate the removal of feathers and hair. Methods include immersion in tanks of hot water or spraying with steam. Scalding may be hard or soft, depending on the temperature and duration used. A hard scald of 58 °C (136.4 °F) for 2.5 minutes will remove the epidermis of poultry; this is commonly used for carcasses that will be frozen, so that their appearance is white and attractive.
Scalding milk
Scalded milk is milk that has been heated to about 83 °C (181 °F). At this temperature, bacteria are killed, enzymes in the milk are destroyed, and many of the proteins are denatured.
In cooking, milk is typically scalded to raise its temperature, or to change its consistency or other cooking interactions through the denaturing of proteins.
Recipes that call for scalded milk include café au lait, baked milk, and ryazhenka. Scalded milk is used in yogurt to make the proteins unfold, and to make sure that all organisms that could out-compete the yogurt culture's bacteria are killed.
Milk is both scalded and then cooled in many recipes, such as for bread and other yeast doughs, because pasteurization does not kill all bacteria, and any wild yeasts present can alter the texture and flavor. In addition, scalding milk improves the rise, because certain milk proteins inhibit bread rise until they are denatured.
| Biology and health sciences | Types | Health |
2554979 | https://en.wikipedia.org/wiki/Geothermal%20gradient | Geothermal gradient | Geothermal gradient is the rate of change in temperature with respect to increasing depth in Earth's interior. As a general rule, the crust temperature rises with depth due to the heat flow from the much hotter mantle; away from tectonic plate boundaries, temperature rises at about 25–30 °C/km (72–87 °F/mi) of depth near the surface in the continental crust. However, in some cases the temperature may drop with increasing depth, especially near the surface, a phenomenon known as an inverse or negative geothermal gradient. The effects of weather, the Sun, and season only reach a depth of roughly 10–20 m.
Strictly speaking, geo-thermal necessarily refers to Earth, but the concept may be applied to other planets. In SI units, the geothermal gradient is expressed as °C/km, K/km, or mK/m. These are all equivalent.
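As a back-of-the-envelope illustration of the near-surface gradient quoted above; the surface temperature and the specific gradient value below are assumed for the example:

```python
# Shallow-crust temperature estimate using the 25-30 C/km figure above.
# Valid only in the conductive near-surface regime, away from plate
# boundaries; the gradient flattens dramatically in the convecting mantle.

surface_temp_c = 15.0            # assumed mean annual surface temperature
gradient_c_per_km = 25.0         # lower end of the typical range

for depth_km in (1, 2, 4):
    t = surface_temp_c + gradient_c_per_km * depth_km
    print(f"{depth_km} km: ~{t:.0f} °C")
# 1 km: ~40 °C, 2 km: ~65 °C, 4 km: ~115 °C
```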
Earth's internal heat comes from a combination of residual heat from planetary accretion, heat produced through radioactive decay, latent heat from core crystallization, and possibly heat from other sources. The major heat-producing nuclides in Earth are potassium-40, uranium-238, uranium-235, and thorium-232. The inner core is thought to have temperatures in the range of 4000 to 7000 K, and the pressure at the centre of the planet is thought to be about 360 GPa (3.6 million atm). (The exact value depends on the density profile in Earth.) Because much of the heat is provided by radioactive decay, scientists believe that early in Earth's history, before nuclides with short half-lives had been depleted, Earth's heat production would have been much higher. Approximately 3 billion years ago, heat production was twice that of the present day, resulting in larger temperature gradients within Earth and faster mantle convection and plate tectonics, allowing the production of igneous rocks such as komatiites that are no longer formed.
The top of the geothermal gradient is influenced by atmospheric temperature. The uppermost layers of the solid planet are at the temperature produced by the local weather, decaying to approximately the mean annual ground temperature (MAGT) at a shallow depth of about 10–20 metres, depending on the type of ground, rock, etc.;
it is this depth which is used for many ground-source heat pumps. The top hundreds of meters reflect past climate change; descending further, warmth increases steadily as interior heat sources begin to dominate.
Heat sources
Temperature within Earth increases with depth. Highly viscous or partially molten rock at temperatures of roughly 650 to 1,200 °C is found at the margins of tectonic plates, increasing the geothermal gradient in the vicinity, but only the outer core is postulated to exist in a molten or fluid state, and the temperature at Earth's inner core/outer core boundary, around 5,150 km deep, is estimated to be 5650 ± 600 kelvin. The heat content of Earth is about 10^31 joules.
Much of the heat is created by decay of naturally radioactive elements. An estimated 45 to 90 percent of the heat escaping from Earth originates from radioactive decay of elements, mainly located in the mantle.
Gravitational potential energy, which can be further divided into:
Release during the accretion of Earth.
Heat released during differentiation, as abundant heavy metals (iron, nickel, copper) descended to Earth's core.
Latent heat released as the liquid outer core crystallizes at the inner core boundary.
Heat may be generated by tidal forces on Earth as it rotates (conservation of angular momentum). The resulting earth tides dissipate energy in Earth's interior as heat.
In Earth's continental crust, the decay of natural radioactive nuclides makes a significant contribution to geothermal heat production. The continental crust is abundant in lower density minerals but also contains significant concentrations of heavier lithophilic elements such as uranium. Because of this, it holds the most concentrated global reservoir of radioactive elements found in Earth. Naturally occurring radioactive elements are enriched in the granite and basaltic rocks, especially in layers closer to Earth's surface. These high levels of radioactive elements are largely excluded from Earth's mantle due to their inability to substitute in mantle minerals and consequent enrichment in melts during mantle melting processes. The mantle is mostly made up of high density minerals with higher concentrations of elements that have relatively small atomic radii, such as magnesium (Mg), titanium (Ti), and calcium (Ca).
The geothermal gradient is steeper in the lithosphere than in the mantle because the mantle transports heat primarily by convection, leading to a geothermal gradient that is determined by the mantle adiabat, rather than by the conductive heat transfer processes that predominate in the lithosphere, which acts as a thermal boundary layer of the convecting mantle.
Heat flow
Heat flows constantly from its sources within Earth to the surface. Total heat loss from Earth is estimated at 44.2 TW. Mean heat flow is 65 mW/m2 over continental crust and 101 mW/m2 over oceanic crust. This averages 0.087 watt/square metre (0.03 percent of the solar power absorbed by Earth), but the flow is much more concentrated in areas where the lithosphere is thin, such as along mid-ocean ridges (where new oceanic lithosphere is created) and near mantle plumes.
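The quoted average flux can be checked directly from the total heat loss and Earth's surface area (about 5.1 × 10^14 m², a standard figure used here as an input):

```python
# Consistency check on the figures above: total heat loss divided by
# Earth's surface area should reproduce the quoted mean flux.

total_heat_loss_w = 44.2e12          # 44.2 TW
earth_surface_m2 = 5.1e14            # ~5.1 x 10^14 m^2

mean_flux = total_heat_loss_w / earth_surface_m2
print(f"{mean_flux:.3f} W/m^2")      # -> 0.087 W/m^2, matching the text
```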
Earth's crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other) in order to release the heat underneath. More of the heat in Earth is lost through plate tectonics, by mantle upwelling associated with mid-ocean ridges. Another major mode of heat loss is by conduction through the lithosphere, the majority of which occurs in the oceans due to the crust there being much thinner and younger than under the continents.
The heat of Earth is replenished by radioactive decay at a rate of 30 TW. The global geothermal flow rates are more than twice the rate of human energy consumption from all primary sources. Global data on heat-flow density are collected and compiled by the International Heat Flow Commission (IHFC) of the IASPEI/IUGG.
Direct application
Heat from Earth's interior can be used as an energy source, known as geothermal energy. The geothermal gradient has been used for space heating and bathing since ancient Roman times, and more recently for generating electricity. As the human population continues to grow, so does energy use and the correlating environmental impacts that are consistent with global primary sources of energy. This has caused a growing interest in finding sources of energy that are renewable and have reduced greenhouse gas emissions. In areas of high geothermal energy density, current technology allows for the generation of electrical power because of the corresponding high temperatures. Generating electrical power from geothermal resources requires no fuel while providing true baseload energy at a reliability rate that consistently exceeds 90%. In order to extract geothermal energy, it is necessary to efficiently transfer heat from a geothermal reservoir to a power plant, where electrical energy is converted from heat by passing steam through a turbine connected to a generator. The efficiency of converting geothermal heat into electricity depends on the temperature difference between the heated fluid (water or steam) and the environmental temperature, so it is advantageous to use deep, high-temperature heat sources. On a worldwide scale, the heat stored in Earth's interior provides energy that is still seen as an exotic source. About 10 GW of geothermal electric capacity was installed around the world as of 2007, generating 0.3% of global electricity demand. An additional 28 GW of direct geothermal heating capacity is installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications.
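The dependence on temperature difference can be illustrated with the ideal Carnot limit. Real geothermal plants convert far less than this ideal, so the numbers below are upper bounds for illustration only:

```python
# Ideal (Carnot) limit on converting heat to work: eta = 1 - T_cold/T_hot,
# with temperatures in kelvin. Shows why deeper, hotter reservoirs pay off.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

for t_hot in (120, 200, 300):        # reservoir fluid temperatures, deg C
    eff = carnot_efficiency(t_hot, 25.0)   # 25 deg C ambient assumed
    print(f"{t_hot} °C source: ideal limit {eff:.0%}")
# 120 °C -> ~24%, 200 °C -> ~37%, 300 °C -> ~48%
```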
Variations
The geothermal gradient varies with location and is typically measured by determining the bottom open-hole temperature after borehole drilling. Temperature logs obtained immediately after drilling are, however, affected by drilling-fluid circulation. To obtain accurate bottom-hole temperature estimates, it is necessary for the well to reach a stable temperature. This is not always achievable for practical reasons.
In stable tectonic areas in the tropics, a temperature-depth plot will converge to the annual average surface temperature. However, in areas where deep permafrost developed during the Pleistocene, a low temperature anomaly can be observed that persists down to several hundred metres. The Suwałki cold anomaly in Poland has led to the recognition that similar thermal disturbances related to Pleistocene-Holocene climatic changes are recorded in boreholes throughout Poland, as well as in Alaska, northern Canada, and Siberia.
In areas of Holocene uplift and erosion (Fig. 1) the shallow gradient will be high until it reaches a point (labeled "Inflection point" in the figure) where it reaches the stabilized heat-flow regime. If the gradient of the stabilized regime is projected above this point to its intersection with present-day annual average temperature, the height of this intersection above present-day surface level gives a measure of the extent of Holocene uplift and erosion. In areas of Holocene subsidence and deposition (Fig. 2) the initial gradient will be lower than the average until it reaches a point where it joins the stabilized heat-flow regime.
Variations in surface temperature, whether daily, seasonal, or induced by climate changes and the Milankovitch cycle, penetrate below Earth's surface and produce an oscillation in the geothermal gradient with periods varying from a day to tens of thousands of years, and an amplitude which decreases with depth. The longest-period variations have a scale depth of several kilometers. Melt water from the polar ice caps flowing along ocean bottoms tends to maintain a constant geothermal gradient throughout Earth's surface.
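For a periodic surface temperature oscillation conducted into the ground, amplitude decays as exp(−z/d) with scale depth d = sqrt(κP/π), where P is the period and κ the thermal diffusivity. A sketch with an assumed rock diffusivity of 10^−6 m²/s reproduces the depth scales described above:

```python
import math

# Scale depth d = sqrt(kappa * P / pi) for a periodic surface temperature
# signal. kappa ~1e-6 m^2/s is a typical rock value (an assumption here).
KAPPA = 1.0e-6                        # m^2/s, assumed thermal diffusivity

def scale_depth_m(period_s: float) -> float:
    return math.sqrt(KAPPA * period_s / math.pi)

for label, period in [("daily", 8.64e4), ("annual", 3.15e7),
                      ("100 kyr (Milankovitch)", 3.15e12)]:
    print(f"{label}: d ~ {scale_depth_m(period):,.2f} m")
# daily ~0.17 m, annual ~3.2 m, 100 kyr ~1,000 m -- consistent with the
# kilometre-scale depths quoted for the longest-period variations.
```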
If the rate of temperature increase with depth observed in shallow boreholes were to persist at greater depths, temperatures deep within Earth would soon reach the point where rocks would melt. We know, however, that Earth's mantle is solid because of the transmission of S-waves. The temperature gradient dramatically decreases with depth for two reasons. First, the mechanism of thermal transport changes from conduction, as within the rigid tectonic plates, to convection, in the portion of Earth's mantle that convects. Despite its solidity, most of Earth's mantle behaves over long time-scales as a fluid, and heat is transported by advection, or material transport. Second, radioactive heat production is concentrated within the crust of Earth, and particularly within the upper part of the crust, as concentrations of uranium, thorium, and potassium are highest there: these three elements are the main producers of radioactive heat within Earth. Thus, the geothermal gradient within the bulk of Earth's mantle is of the order of 0.5 kelvin per kilometer, and is determined by the adiabatic gradient associated with mantle material (peridotite in the upper mantle).
Negative geothermal gradient
Negative geothermal gradients occur where temperature decreases with depth. This occurs in the upper few hundred meters near the surface. Because of the low thermal diffusivity of rocks, deep underground temperatures are hardly affected by diurnal or even annual surface temperature variations. At depths of a few meters, underground temperatures are therefore similar to the annual average surface temperature. At greater depths, underground temperatures reflect a long-term average over past climate, so that temperatures at depths of dozens to hundreds of meters contain information about the climate of the last hundreds to thousands of years. Depending on the location, these may be colder than current temperatures, either because of the colder climate that prevailed close to the last ice age or because of more recent climate change.
Negative geothermal gradients may also occur due to deep aquifers, where heat transfer from deep water by convection and advection results in water at shallower levels heating adjacent rocks to a higher temperature than rocks at a somewhat deeper level.
Negative geothermal gradients are also found at large scales in subduction zones. A subduction zone is a tectonic plate boundary where oceanic crust sinks into the mantle due to the high density of the oceanic plate relative to the underlying mantle. Since the sinking plate enters the mantle at a rate of a few centimeters per year, heat conduction is unable to heat the plate as quickly as it sinks. Therefore, the sinking plate has a lower temperature than the surrounding mantle, resulting in a negative geothermal gradient.
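An order-of-magnitude comparison shows why conduction cannot keep up with subduction; the slab thickness, diffusivity, and sinking rate below are rough illustrative values, not measurements for any particular slab:

```python
# Compare the conductive heating timescale of a slab (t ~ L^2 / kappa)
# with the time it takes the slab to sink through the upper mantle.

slab_thickness_m = 1.0e5          # ~100 km
kappa = 1.0e-6                    # m^2/s, thermal diffusivity of rock
sink_rate_m_per_yr = 0.05         # ~5 cm/yr
seconds_per_year = 3.15e7

heatup_yr = slab_thickness_m ** 2 / kappa / seconds_per_year
sink_to_660km_yr = 660.0e3 / sink_rate_m_per_yr

print(f"conductive heat-up:  ~{heatup_yr:.1e} yr")       # ~3e8 yr
print(f"sinking to 660 km:   ~{sink_to_660km_yr:.1e} yr")  # ~1e7 yr
# Sinking is more than an order of magnitude faster than conductive
# warming, so the slab stays colder than the surrounding mantle.
```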
| Physical sciences | Geophysics | Earth science |
2555865 | https://en.wikipedia.org/wiki/Aura%20%28symptom%29 | Aura (symptom) | An aura is a perceptual disturbance experienced by some people with epilepsy or migraine. An epileptic aura is actually a minor seizure.
Epileptic and migraine auras are due to the involvement of specific areas of the brain, which determine the symptoms of the aura. Therefore, if the visual area is affected, the aura will consist of visual symptoms, while if a sensory area is affected, sensory symptoms will occur.
Epileptic auras are subjective sensory or psychic phenomena due to a focal seizure, i.e. a seizure that originates in the area of the brain responsible for the function that then expresses itself in the symptoms of the aura. They are important because they make clear where in the brain the alteration causing the seizure is located. An epileptic aura is in most cases followed by other manifestations of a seizure, for example a convulsion, as the epileptic discharge spreads to other parts of the brain; rarely does it remain isolated. Auras, when they occur, allow some people who have epilepsy time to prevent injury to themselves and/or others when they lose consciousness.
Migraine
The aura of migraine is visual in the vast majority of cases, because dysfunction starts from the visual cortex. The aura is usually followed, after a time varying from minutes to an hour, by the migraine headache. However, the migraine aura can manifest itself in isolation, that is, without being followed by headache. The aura can stay for the duration of the migraine; depending on the type of aura, it can leave the person disoriented and confused. It is common for people with migraines to experience more than one type of aura during the migraine. Some people who have auras have the same type of aura every time.
Auras can also be confused with sudden onset of panic, panic attacks or anxiety attacks, which creates difficulties in diagnosis. The differential diagnosis of patients who experience symptoms of paresthesias, derealization, dizziness, chest pain, tremors, and palpitations can be quite challenging.
Seizures
An epileptic aura is the consequence of the activation of functional cortex by abnormal neuronal discharge. In addition to being a warning sign for an impending seizure, the nature of an aura can give insight into the localization and lateralization of the seizure or migraine.
The most common auras include motor, somatosensory, visual, and auditory symptoms. The activation in the brain during an aura can spread through multiple regions continuously or discontinuously, on the same side or to both sides.
Auras are particularly common in focal seizures. If the motor cortex is involved in the overstimulation of neurons, motor auras can result. Likewise, somatosensory auras (such as tingling, numbness, and pain) can result if the somatosensory cortex is involved. Activation of the primary somatosensory cortex produces symptoms in discrete parts of the opposite side of the body, whereas activation of the secondary somatosensory areas produces symptoms ipsilateral to the seizure focus.
Visual auras can be simple or complex. Simple visual symptoms can include static, flashing, or moving lights, shapes, or colors, caused mostly by abnormal activity in the primary visual cortex. Complex visual auras can include people, scenes, and objects; these result from stimulation of the temporo-occipital junction and are lateralized to one hemifield. Auditory auras can also be simple (ringing, buzzing) or complex (voices, music). Simple symptoms can arise from activation of the primary auditory cortex, and complex symptoms from the temporo-occipital cortex at the location of the auditory association areas.
Examples
An aura sensation can include one or a combination of the following:
Visual changes
Bright lights and blobs
Zigzag lines
Distortions in the size or shape of objects
Vibrating visual field
Scintillating scotoma
Shimmering, pulsating patches, often curved
Tunnel vision
Scotoma
Blind or dark spots
Curtain like effect over one eye
Slowly spreading spots
Kaleidoscope effects
Temporary blindness in one or both eyes
Heightened sensitivity to light
Auditory changes
Hearing voices or sounds that do not exist: auditory hallucinations
Modification of voices or sounds in the environment: buzzing, tremolo, amplitude modulation or other modulations
Heightened sensitivity to hearing
Vestibular dysfunction causing vertigo
Other sensations
Strange smells (phantosmia) or tastes (gustatory hallucinations)
Heightened sensitivity to smell
Synesthesia
Déjà vu or jamais vu
Cephalic aura, a perception of movement of the head or inside the head
Abdominal aura, such as an epigastric rising sensation
Nausea
Numbness or tingling (paresthesia)
Weakness on one side of the body (hemiparesis)
Feelings of being separated from or floating above one's body (dissociation)
Feeling of overheating and sudden perspiration
Inability to speak (aphasia) or slurred speech
| Biology and health sciences | Symptoms and signs | Health |
2556665 | https://en.wikipedia.org/wiki/Hominini | Hominini | The Hominini (hominins) form a taxonomic tribe of the subfamily Homininae (hominines). They comprise two extant genera: Homo (humans) and Pan (chimpanzees and bonobos), and in standard usage exclude the genus Gorilla (gorillas), which is grouped separately within the subfamily Homininae.
The term Hominini was originally introduced by Camille Arambourg (1948), who combined the categories of Hominina and Simiina pursuant to Gray's classifications (1825).
Traditionally, chimpanzees, gorillas and orangutans were grouped together, excluding humans, as pongids. Since Gray's classifications, accumulating evidence from genetic phylogeny has confirmed that humans, chimpanzees, and gorillas are more closely related to each other than to the orangutan. The orangutans were reassigned to the family Hominidae (great apes), which already included humans, and the gorillas were grouped as a separate tribe (Gorillini) of the subfamily Homininae. Still, details of this reassignment remain contested, and among sources published since (on the tribe Hominini), not every source excludes gorillas and not every source includes chimpanzees.
Humans are the only extant species in the australopithecine branch (subtribe Australopithecina), which also contains many extinct close relatives of humans.
Terminology and definition
Concerning membership, when Hominini is taken to exclude Pan, Panini ("panins") may refer to the tribe containing Pan as its only genus. Alternatively, Pan may be grouped with other dryopithecine genera, together forming a tribe Panini or a subtribe Panina. Minority dissenting nomenclatures include Gorilla in Hominini and Pan in Homo (Goodman et al. 1998), or both Pan and Gorilla in Homo (Watson et al. 2001).
By convention, the adjectival term "hominin" (or nominalized "hominins") refers to the tribe Hominini, whereas the members of the subtribe Hominina (and thus all archaic human species) are referred to as "homininan" ("homininans"). This follows the proposal by Mann and Weiss (1996), which presents tribe Hominini as including both Pan and Homo, placed in separate subtribes. The genus Pan is assigned to the subtribe Panina, and genus Homo is included in the subtribe Hominina (see below).
The alternative convention uses "hominin" to exclude members of Panina: that is, for Homo alone, or for human and australopithecine species. This alternative convention is referenced in e.g. Coyne (2009) and in Dunbar (2014). Potts (2010) additionally uses the name Hominini in a different sense, excluding Pan, and uses "hominins" accordingly, while introducing a separate tribe (rather than subtribe) for chimpanzees under the name Panini. In this recent convention, contra Arambourg, the term "hominin" is applied to Homo, Australopithecus, Ardipithecus, and others that arose after the split from the line that led to chimpanzees (see cladogram below); that is, fossil members on the human side of the split are distinguished as "hominins" from those on the chimpanzee side, which are "not hominins" (or "non-hominin hominids").
Cladogram
This cladogram shows the clade of superfamily Hominoidea and its descendant clades, focused on the division of Hominini (omitting detail on clades not ancestral to Hominini). The family Hominidae ("hominids") comprises the subfamilies Ponginae (including orangutans) and Homininae, the latter containing the tribes Gorillini (including gorillas) and Hominini. Hominini is divided into the subtribes Panina (chimpanzees) and Australopithecina (australopithecines). The Hominina (humans) are usually held to have emerged within the Australopithecina (which would roughly correspond to the alternative definition of Hominini that excludes Pan).
Genetic analysis combined with fossil evidence indicates that hominoids diverged from the Old World monkeys about 25 million years ago (Mya), near the Oligocene–Miocene boundary. The most recent common ancestor (MRCA) of the subfamilies Homininae and Ponginae lived about 15 million years ago. The best-known fossil genus of Ponginae is Sivapithecus, consisting of several species from 12.5 million to 8.5 million years ago. It differs from orangutans in dentition and postcranial morphology. In the following cladogram, the approximate times at which clades radiated into newer clades are indicated in millions of years ago (Mya).
Evolutionary history
Both Sahelanthropus and Orrorin existed during the estimated duration of the ancestral chimpanzee–human speciation events, within the range of eight to four million years ago (Mya). Very few fossil specimens have been found that can be considered directly ancestral to genus Pan. News of the first fossil chimpanzee, found in Kenya, was published in 2005. However, it is dated to very recent times, between 545 and 284 thousand years ago. The divergence of a "proto-human" or "pre-human" lineage separate from Pan appears to have been a process of complex speciation-hybridization rather than a clean split, taking place over the period of anywhere between 13 Mya (close to the age of the tribe Hominini itself) and some 4 Mya. Different chromosomes appear to have split at different times, with broad-scale hybridization activity occurring between the two emerging lineages as late as 6.3 to 5.4 Mya, according to Patterson et al. (2006). This research group noted that one hypothetical late hybridization period was based in particular on the similarity of X chromosomes in the proto-humans and stem chimpanzees, suggesting that the final divergence was as recent as 4 Mya. Wakeley (2008) rejected these hypotheses; he suggested alternative explanations, including selection pressure on the X chromosome in the ancestral populations prior to the chimpanzee–human last common ancestor (CHLCA).
Most DNA studies find that humans and Pan are 99% identical, but one study found only 94% commonality, with some of the difference occurring in non-coding DNA. It is most likely that the australopithecines, dating from 4.4 to 3 Mya, evolved into the earliest members of genus Homo. In the year 2000, the discovery of Orrorin tugenensis, dated as early as 6.2 Mya, briefly challenged critical elements of that hypothesis, as it suggested that Homo did not in fact derive from australopithecine ancestors.
All the listed fossil genera are evaluated for two traits that could identify them as hominins:
probability of being ancestral to Homo, and
whether they are more closely related to Homo than to any other living primate.
Some, including Paranthropus, Ardipithecus, and Australopithecus, are broadly thought to be ancestral and closely related to Homo; others, especially earlier genera, including Sahelanthropus (and perhaps Orrorin), are supported by one community of scientists but doubted by another.
List of known hominin species
Extant species are in bold.
Sahelanthropus tchadensis
Orrorin tugenensis
Ardipithecus kadabba
Ardipithecus ramidus
Australopithecus anamensis
Australopithecus afarensis
Australopithecus deyiremeda
Australopithecus garhi
Kenyanthropus platyops
Australopithecus africanus
Australopithecus sediba
Paranthropus aethiopicus
Paranthropus boisei
Paranthropus robustus
Pan troglodytes
Pan paniscus
Homo habilis
Homo rudolfensis
Homo ergaster
Homo erectus
Homo antecessor
Homo heidelbergensis
Homo naledi
Homo neanderthalensis
Homo denisova
Homo sapiens
Homo floresiensis
Homo luzonensis
| Biology and health sciences | Human evolution: General | Biology |
2557067 | https://en.wikipedia.org/wiki/Stingless%20bee | Stingless bee | Stingless bees (SB), sometimes called stingless honey bees or simply meliponines, are a large group of bees (from about 462 to 552 described species), comprising the tribe Meliponini (or subtribe Meliponina according to other authors). They belong to the family Apidae (subfamily Apinae) and are closely related to common honey bees (HB, tribe Apini), orchid bees (tribe Euglossini), and bumblebees (tribe Bombini); these four tribes together form the monophyletic group of corbiculate bees. Meliponines have stingers, but they are highly reduced and cannot be used for defense, though these bees exhibit other defensive behaviors and mechanisms. Meliponines are not the only type of bee incapable of stinging: all male bees and many female bees of several other families, such as Andrenidae and Megachilidae (tribe Dioxyini), also cannot sting.
Some stingless bees have powerful mandibles and can inflict painful bites. Some species have large mandibular glands for secreting caustic defensive substances, emit unpleasant smells, or use sticky materials to immobilise enemies.
The main honey-producing bees of this group generally belong to the genera Scaptotrigona, Tetragonisca, Melipona and Austroplebeia, although there are other genera containing species that produce some usable honey. They are farmed in meliponiculture in the same way that European honey bees (genus Apis) are cultivated in apiculture.
Throughout Mesoamerica, the Maya have engaged in meliponiculture on a large scale since before the arrival of Columbus. Meliponiculture played a significant role in Maya society, influencing social, economic, and religious activities. The practice of maintaining stingless bees in man-made structures is prevalent across the Americas, with notable instances in countries such as Brazil, Peru, and Mexico.
Geographical distribution
Stingless bees can be found in most tropical or subtropical regions of the world, such as the African continent (Afrotropical region), Southeast Asia and Australia (Indo-Malayan and Australasian region), and tropical America (Neotropical region).
The majority of native eusocial bees of Central and South America are SB, although only a few of them produce honey on a scale such that they are farmed by humans. The Neotropics, with approximately 426 species, boast the highest abundance and species richness, ranging from Cuba and Mexico in the north to Argentina in the south.
They are also quite diverse in Africa, including Madagascar, and are farmed there also. Around 36 species exist on the continent. The equatorial regions harbor the greatest diversity, with the Sahara Desert acting as a natural barrier to the north. The range extends southward to South Africa and southern Madagascar, with most African species inhabiting tropical forests or both tropical forests and savannahs.
Meliponine honey is prized as a medicine in many African communities, as well as in South America. Some cultures use SB honey against digestive, respiratory, ocular and reproductive problems, although more research is needed to provide evidence supporting these practices.
In Asia and Australia, approximately 90 species of stingless bees span from India in the west to the Solomon Islands in the east, and from Nepal, China (Yunnan, Hainan), and Taiwan in the north to Australia in the south.
Origin and dispersion
Phylogenetic analyses reveal three distinct groups in the evolutionary history of Meliponini: the Afrotropical, the Indo-Malay/Australasia, and the Neotropical lineages. The evolutionary origin of the Meliponini is Neotropical. Studies observing contemporary species richness show that it remains highest in the Neotropics.
One hypothesis proposes that stingless bees dispersed from what is now North America. According to this scenario, these bees would have then traveled to Asia by crossing the Bering Strait (Beringia route) and reached Europe through Greenland (Thulean route).
Evolution and phylogeny
Meliponines form a clade within the corbiculate bees, characterized by unique pollen-carrying structures known as corbiculae (pollen baskets) located on their hind legs. This group also includes three other tribes: honey bees (Apini), bumble bees (Bombini), and orchid bees (Euglossini). The concept of higher eusociality, defined by the presence of distinct queen and worker castes and characterized by features such as perennial colony lifestyles and extensive food sharing among adults, is particularly relevant in understanding the social structure of these tribes. Both the Meliponini and Apini are considered higher eusocial, while the Bombini are considered primitively eusocial.
The phylogenetic relationships among the four tribes of corbiculate bees have been a topic of considerable debate within the scientific community. Two primary questions arise: the relationship of stingless bees to honey bees and bumble bees, and whether their eusocial behavior evolved independently or from a common ancestor. Morphological and behavioral studies have suggested that Meliponini and Apini are sister groups, indicating a single origin of higher eusociality. In contrast, molecular studies often support a relationship between Meliponini and Bombini, proposing independent origins of higher eusociality in both Apini and Meliponini.
A combined analysis of morphological, behavioral, and molecular data provided strong support for the latter hypothesis of dual origins of higher eusociality. Subsequent research has reinforced the idea that stingless bees and honey bees evolved their eusocial lifestyles independently, resulting in distinct adaptive strategies for colony reproduction, brood rearing, foraging communication, and colony defense. This divergence helps explain the varied ecological and social solutions developed by these two groups of bees.
Fossil history
The fossil record for stingless bees is notably robust compared to that of many other bee groups, with twelve extinct species currently identified. Fossils of these bees are primarily found in amber and copal, where excellent preservation typically occurs. This favorable fossilization process may be attributed to the behaviors of stingless bees, which collect significant amounts of tree resin for building nests and defense, increasing the likelihood of entrapment.
Despite this relatively good fossil record, the evolutionary history of stingless bees remains poorly understood, particularly regarding their widespread distribution across various ecological niches around the globe. The oldest known fossil stingless bee is Cretotrigona prisca, a small worker bee approximately 5 mm in body length, discovered in New Jersey amber. This species is believed to have existed during the Late Cretaceous period, around 65–70 million years ago, marking it as the oldest confirmed fossil of an apid bee and the earliest fossil evidence of a eusocial bee. C. prisca exhibits striking similarities to extant stingless bees, indicating that the evolutionary lineage of meliponines may date back to this period.
Some researchers suggest that stingless bees likely evolved in the Late Cretaceous, approximately 70–87 million years ago. According to recent studies, corbiculate bees, which include stingless bees, are thought to have appeared around 84–87 million years ago, further supporting the notion of their evolution during this dynamic period in Earth's history.
Behaviour, biology and ecology
Overview
Meliponines, considered highly eusocial insects, exhibit a remarkable caste division. Colonies typically consist of a queen, workers, and sometimes male drones. The queen is responsible for reproduction, while the workers perform various tasks such as foraging, nursing, and defending the colony. Individuals work together with a well-defined division of labor for the benefit of the colony.
Stingless bees are valuable pollinators that contribute to ecosystem health, and they produce several useful products. These insects collect and store honey, pollen, resin, propolis, and cerumen. Honey serves as their primary carbohydrate source, while pollen provides essential proteins. Resin, propolis, and cerumen are used in nest construction and maintenance.
Nesting behavior varies among species and may involve hollow tree trunks, external hives, the soil, termite nests or even urban structures. This adaptability underscores their resilience and ability to coexist with human activities.
Castes
Workers
In a SB colony, workers constitute the predominant segment of the population, serving as the colony's primary workforce. They undertake a multitude of responsibilities crucial for the colony's well-being, including defense, cleaning, handling building materials, and the collection and processing of food. Recognizable by the corbicula, a distinctive structure on their hind legs resembling a small basket, workers efficiently carry pollen, resin, clay, and other materials gathered from the environment. Given their abundance and this unique physical feature, workers play a central role in sustaining the colony.
Queens
The principal egg layer in SB colonies is the queen, distinguished from the workers by differences in both size and shape. Stingless bee queens - except in the case of the Melipona genus, where queens and workers receive similar amounts of food and thus exhibit similar sizes - are generally larger and weigh more than workers (approximately 2–6 times). Post-mating, meliponine queens undergo physogastry, developing a distended abdomen. This physical transformation sets them apart from honey bee queens, and even Melipona queens can be easily identified by their enlarged abdomen after mating.
Stingless bee colonies typically follow a monogynous structure, featuring a single egg-laying queen. An exception is noted in Melipona bicolor colonies, which are often polygynous (large populations may have as many as 5 physogastric queens simultaneously involved in oviposition). Depending on the species, queens can lay varying quantities of eggs daily, ranging from a dozen (e.g., Plebeia julianii) to several hundred (e.g., Trigona recursa). While information on queen lifespans is limited, available data suggest that queens generally outlive workers, with lifespans usually falling between 1 and 3 years, although some queens may live up to 7 years.
The laying queen assumes the crucial role of producing eggs that give rise to all castes within the colony. Additionally, she plays a pivotal role in organizing the colony, overseeing a complex communication system primarily reliant on the use of pheromones.
Males (drones)
The primary function of males, or drones, is to mate with queens; they perform limited tasks within the nest and leave at around 2–3 weeks old, never to return. The production of males can vary, occurring continuously, sparsely, or in large spurts when numerous drones emerge from brood combs for brief periods. Identifying a male can be challenging because its body size is similar to that of workers, but distinctive features such as the absence of a corbicula, larger eyes, slightly smaller mandibles, slightly longer and v-shaped antennae, and often a lighter face color distinguish them. Clusters of males, numbering in the hundreds, can be observed outside colonies, awaiting the opportunity to mate with virgin queens.
Males in a stingless bee colony, produced mainly by the laying queen in some species and primarily by the workers in others, play an important role in reproduction. Workers can produce males by laying unfertilized eggs, enabled by the haplodiploidy system, in which males are haploid, having only one set of chromosomes, while workers are diploid; unable to mate, workers cannot produce female eggs. This sex determination system is common to all hymenopterans.
Soldiers
While the existence of a soldier caste is well known in ants and termites, the phenomenon was unknown among bees until 2012, when some stingless bees were found to have a similar caste of defensive specialists that help guard the nest entrance against intruders. To date, at least 10 species have been documented to possess such "soldiers", including Tetragonisca angustula, T. fiebrigi, and Frieseomelitta longipes, with the guards not only larger, but also sometimes a different color from ordinary workers.
Division of labour
When the young worker bees emerge from their cells, they tend to initially remain inside the hive, performing different jobs. As workers age, they become guards or foragers. Unlike the larvae of honey bees and many social wasps, meliponine larvae are not actively fed by adults (progressive provisioning). Pollen and nectar are placed in a cell, within which an egg is laid, and the cell is sealed until the adult bee emerges after pupation (mass provisioning).
At any one time, hives can contain from 300 to more than 100,000 workers, depending on species (some authors claim more than 150,000 workers, but without explaining their methodology).
Products and materials
The industrious nature of stingless bees extends to their building activities. Unlike honey bees, they do not use pure wax for construction but combine it with resin to create cerumen, a material employed in constructing nest structures such as brood cells, food pots, and the protective involucrum. Wax is secreted by young bees through glands located on the top of the abdomen; the resulting mixture not only provides structural strength but also offers antimicrobial properties, inhibiting the growth of fungi and bacteria. The creation of batumen involves combining cerumen with additional resin, mud, plant material, and sometimes even animal feces. Batumen, a stronger material, forms protective layers covering the walls of the nesting space, ensuring the safety of the colony.
On the other hand, clay, sourced from the wild and exhibiting diverse colors based on its mineral origin, serves as another essential raw material for SB. While it can be used in its pure form, it is more common to combine clay with vegetable resins to produce geopropolis. The inclusion of clay in this mixture enhances the durability and structural integrity of the resulting substance.
Vegetable resin, gathered from a variety of plant species in the wild, is an essential raw material brought back to the hive. Stored in small, sticky clumps in peripheral areas of the colony, it is often mistakenly treated as a synonym for propolis. However, in beekeeping terminology, propolis refers to a mixture of resin, wax, enzymes, and possibly other substances. Stingless bees go beyond the classic propolis by producing various derivatives from resins and wax, sometimes using pure resins for sealing or defense, a behavior not observed in Apis bees. Understanding these distinctions is vital for effective production and value addition to the meliponiculture activity.
Honey, a prized product of bee colonies, is crafted through the processing of nectars, honeydews, and fruit juices by worker bees. They store these collected substances in an extension of their gut called a crop. Back at the hive, the bees ripen or dehydrate the nectar droplets by spinning them inside their mouthparts until honey is formed. Ripening concentrates the nectar and increases the sugar content, though it is not nearly as concentrated as the honey from Apis mellifera. Stored in food pots, meliponines' honey is often referred to as pot-honey due to its distinctive storage method. Stingless bee honeys differ from A. mellifera honey in terms of color, texture, and flavor, being more liquid with a higher water content. Rich in minerals, amino acids, and flavonoid compounds, the composition of honey varies among colonies of the same species, influenced by factors such as season, habitat, and collected resources.
Special methods are being developed to harvest moderate amounts of honey from stingless bees in these areas without causing harm. For honey production, the bees need to be kept in a box specially designed to make the honey stores accessible without damaging the rest of the nest structure. Some recent box designs for honey production provide a separate compartment for the honey stores so the honey pots can be removed without spilling honey into other areas of the nest. Unlike a hive of commercial honeybees, which can produce 75 kg (165 lbs) of honey a year, a hive of Australian stingless bees produces less than 1 kg (2 lbs). Stingless bee honey has a distinctive "bush" taste—a mix of sweet and sour with a hint of fruit. The taste comes from plant resins—which the bees use to build their hives and honey pots—and varies at different times of year depending on the flowers and trees visited.
In 2020, researchers at the University of Queensland found that some species of stingless bee in Australia, Malaysia, and Brazil produce honey that contains trehalulose, a sugar with an unusually low glycaemic index (GI) compared to glucose and fructose, the main sugars composing conventional honey. Such low-GI honey is beneficial for humans because its consumption does not cause blood sugar to spike, which would force the body to make more insulin in response. Honey with trehalulose is also beneficial because this sugar cannot nourish the lactic acid-producing bacteria that cause tooth decay. The university's findings supported the long-standing claims of Indigenous Australian people that native honey has therapeutic value and is beneficial to human health.
Nest
Stingless bees, as a collective group, display remarkable adaptability to diverse nesting sites. They can be found in exposed nests on trees, in ant and termite nests above and below ground, and in cavities in tree trunks, branches, rocks, or even human constructions.
Many beekeepers keep the bees in their original log hive or transfer them to a wooden box, as this makes controlling the hive easier. Some beekeepers house them in bamboo, flowerpots, coconut shells, and recycled containers such as water jugs or even a broken guitar, provided the container is safe and closed.
Exposed nests
Notably, certain species, such as the African Dactylurina, construct hanging nests from the undersides of large branches for protection against adverse weather conditions. Additionally, some American Trigona species, including T. corvina, T. spinipes, and T. nigerrima, as well as Tetragonisca weyrauchi, build fully exposed nests.
Ground nests
A significant minority of meliponine species, belonging to genera like Camargoia, Geotrigona, Melipona, Mourella, Nogueirapis, Paratrigona, Partamona, Schwarziana, and others, opt for ground nests. These species take advantage of cavities in the ground, often utilizing abandoned nests of ants, termites, or rodents. Unlike some other cavity-nesting bees, stingless bees in this category do not excavate their own cavities but may enlarge existing ones.
Termite and ant shared nests
Numerous stingless bee species have evolved to coexist with ants or termites, inhabiting parts of their nests both above and below ground. These nests are often associated with various ant species, such as Azteca, Camponotus, or Crematogaster, and termite species like Nasutitermes, Constrictotermes, Macrotermes, Microcerotermes, Odontotermes, or Pseudocanthotermes. This strategy allows SB to utilize pre-existing cavities without the need for extensive excavation.
Cavity nests
The majority of stingless bees favor nesting in pre-existing cavities within tree trunks or branches. Nesting heights vary, with some colonies positioned close to the ground, typically below 5 meters, while others, like Trigona and Oxytrigona, may nest at higher elevations, ranging from 10 to 25 meters. Some species, such as Melipona nigra, exhibit unique nesting habits at the foot of a tree in root cavities or between roots. The choice of nesting height has implications for predation pressure and the microclimate experienced by the colony.
The majority of stingless bee species exhibit a non-specific preference when it comes to selecting tree species for nesting. Instead, they opportunistically exploit whatever nesting sites are available. This adaptability underscores the versatility of SB in adapting to various arboreal environments. Furthermore, cavity-nesting species can opportunistically utilize human constructions, nesting under roofs, in hollow spaces in walls, electricity boxes, or even metal tubes. In a few cases, specific tree species, like Caryocar brasiliense, may be preferred by certain stingless bee species (Melipona quadrifasciata), illustrating a degree of selectivity in nesting choices among different groups.
Entrances
Entrance tubes showcase a spectrum of characteristics, from being hard and brittle to soft and flexible. In many situations, the portion near the opening remains soft and flexible, aiding workers in sealing the entrance during the night. The tubes may also feature perforations and a coating of resin droplets, adding to the complexity of their design.
The entrances serve as essential visual landmarks for returning bees, and they are often the first structures constructed at a new nest site. The diversity in entrance size influences foraging traffic, with larger entrances facilitating smoother traffic but potentially necessitating more entrance guards to ensure adequate defense.
Some Partamona species exhibit a distinctive entrance architecture, where workers of P. helleri construct a large outer mud entrance leading to a smaller adjacent entrance. This unique design enables foragers to enter with high speed, bouncing off the ceiling of the outer entrance towards the smaller inner entrance. The peculiar appearance of this entrance has led to local names such as "toad mouth", highlighting the intriguing adaptations found in stingless bee nest entrances.
Brood cell arrangement
Stingless bee colonies exhibit a diversity of construction patterns of brood cells, primarily composed of soft cerumen, a mixture of wax and resin. Each crafted cell is designed to rear a single individual bee, emphasizing the precision and efficiency of their nest architecture.
The quantity of brood cells within a nest displays significant variation across different stingless bee species. Nest size can range from a few brood cells, as observed in the Asian Lisotrigona carpenteri, to remarkably expansive colonies with over 80,000 brood cells, particularly in some American Trigona species.
Meliponine colonies exhibit diverse brood cell arrangements, primarily categorized into three main types: horizontal combs, vertical combs, and clustered cells. Despite these primary types, variations and intermediate forms are prevalent, contributing to the flexibility of nest structures.
The first type involves horizontal combs, often characterized by a spiral pattern or layers of cells. The presence of spirals may not be consistent within a species, varying among colonies or even within the same colony. Some genera, such as Melipona, Plebeia, Plebeina, Nannotrigona, Trigona, and Tetragona, may occasionally build spirals alongside other comb structures, as observed in Oxytrigona mellicolor. As space diminishes for upward construction, workers initiate the creation of a new comb at the bottom of the brood chamber. This approach optimizes the available space as emerging bees vacate older, lower brood combs.
Another prevalent brood cell arrangement involves clusters of cells held together with thin cerumen connections. This clustered style is observed in various distantly related genera, such as the American Trigonisca, Frieseomelitta, Leurotrigona, the Australian Austroplebeia, and the African Hypotrigona. This arrangement is particularly useful for colonies in irregular cavities unsuitable for traditional comb building.
The construction of vertical combs is a distinctive trait found in only two groups of stingless bees: the African genus Dactylurina and the American species Scaura longula. This vertical arrangement sets them apart from the more commonly observed horizontal comb structures in other stingless bee genera.
Brood rearing
Stingless bee brood rearing is a sophisticated and intricately coordinated process involving various tasks performed by worker bees, closely synchronized with the queen's activities. The sequence begins with the completion of a new brood cell, marking the initiation of mass provisioning.
Upon finishing a brood cell, several workers engage in mass provisioning, regurgitating larval food into the cell. This collective effort is swiftly followed by the queen laying her egg on top of the provided larval food. The cell is sealed immediately afterward, concluding this important phase of the brood rearing process.
The practice of mass provisioning, oviposition, and cell sealing is considered an ancestral trait, shared with solitary wasps and bees. However, in the context of stingless bees, these actions represent distinct stages of a highly integrated social process. Notably, the queen plays a central role in orchestrating these activities, acting as a pacemaker for the entire colony.
This process diverges significantly from brood rearing in Apis spp. In honeybee colonies, queens lay eggs into reusable empty cells, which are then progressively provisioned over several days before final sealing. The contrasting approaches in brood rearing highlight the unique social dynamics and adaptations within stingless bee colonies.
Swarming
Stingless bees and honey bees, despite encountering a common challenge in establishing daughter colonies, employ contrasting strategies. There are three key differences: reproductive status and age of the queen that leaves the nest, temporal aspects of colony foundation, and communication processes for nest site selection.
In HB (Apis mellifera), the mother queen, accompanied by a swarm of numerous workers, embarks on relocation to a new home once replacement queens have been reared. Conversely, in SB (meliponines), the departure is undertaken by an unmated ("virgin") queen, leaving the mother queen in the original nest. Mated stingless bee queens cannot leave the hive due to damaged wings and increased abdominal size post-mating (physogastry). In species like Scaptotrigona postica, for example, the queen's weight increases by about 250%.
Unlike honey bees, stingless bee colonies are unable to perform absconding, a term denoting the abandonment of the nest and migration to a new location, making them reliant on alternative strategies to cope with challenges. Meliponines found new colonies progressively, without abruptly abandoning the mother nest.
These are the stages of stingless bees swarming:
Reconnaissance and preparation: scouts inspect potential new nest sites for suitability, considering factors such as cavity size, entrance characteristics, and potential threats. The criteria for determining suitability remain largely unexplored. Some colonies engage in simultaneous preparation of multiple cavities before making a final decision, and others make the initial reconnaissance but do not move into the cavity.
Transport of building material and food: workers seal cracks in the chosen cavity using materials like resin, batumen, or mud. They construct an entrance tube, possibly serving as a visual beacon for nestmate workers. Early food pots are built and filled with honey, requiring a growing number of workers to transport cerumen and honey from the mother nest.
Progressive establishment and social link: the mother and daughter colony maintain a social link through workers traveling between the two nests. The duration of this link varies among species, ranging from a few days to several months. Stingless bee colonies display a preference for cavities previously used by other colonies, containing remnants of building material and nest structures.
Arrival of the queen: after initial preparations, an unmated queen, accompanied by additional workers, arrives at the new nest site.
Drone arrival: males (drones) aggregate outside the newly established nest. They often arrive shortly after swarming initiation, even before the completion of nest structures. Males can be observed near the entrance, awaiting further events.
Mating flight: males in aggregations do not enter the colony but await the queen's emergence for a mating flight. Although rarely observed, it is assumed that unmated stingless bee queens embark on a single mating flight, utilizing acquired sperm for the entirety of their reproductive life.
Natural enemies
In meliponiculture, beekeepers need to be aware of the presence of animals that can harm stingless bee colonies. There are several potential enemies, but the most damaging ones to meliponaries are listed below.
Invertebrates
Phorid flies in the genus Pseudohypocera pose a significant threat to stingless bee colonies, causing problems for beekeepers. These parasites lay eggs in open cells of pollen and honey, potentially leading to colony extinction if not addressed. Early detection is crucial for manual removal or the use of vinegar traps. It is important never to leave an infested box unattended, to prevent the cycle from restarting and to avoid contaminating other colonies. Careful handling of food jars, especially during swarm transfers, is essential. Prompt removal of broken jars, sealing gaps with wax or tape, and maintaining vigilance during the rainy season for heightened phorid activity are recommended. Combating these flies is usually a priority, particularly during periods of increased reproduction.
Termites usually do not attack bees or their food pots. However, they can cause damage to the structure of hive boxes as there are many xylophagous species. While termites do not usually pose major problems for beekeepers, they should still be monitored closely.
Ants are attracted to bee colonies by the smell of food. To prevent ant attacks, it is important to handle the hive boxes carefully and avoid exposing jars of pollen and honey. Although rare, when attacks do occur, there are intense conflicts between ants and bees. Stingless bees usually manage to defend themselves, but the damage to the bee population can be significant. To prevent ant infestations in meliponaries with individual supports, a useful strategy is to impregnate the box supports with burnt oil.
Another group of enemy flies are the black soldier flies (Hermetia illucens). They lay their eggs in crevices of boxes and can extend the tip of their abdomen during laying, facilitating access to the inside of the hive. Larvae of this species feed on pollen, feces, and other materials found in colonies. In general, healthy bee colonies can coexist peacefully with soldier flies. However, in areas where these insects are prevalent, beekeepers must remain vigilant and protect the gaps in the colonies to prevent potential issues.
Cleptobiosis, also known as cleptoparasitism, is a behaviour observed in various species of stingless bees, with over 30 identified species engaging in attacks on other nests, including those of honey bees. This behaviour serves either to steal resources or to usurp the nest by swarming into an already occupied cavity; bees that do this are called robber bees. The Neotropical genus Lestrimelitta and the African genus Cleptotrigona represent bees with an obligate cleptobiotic lifestyle, since they do not visit flowers for nectar or pollen.
Furthermore, other species such as Melipona fuliginosa, Oxytrigona tataira, Trigona hyalinata, T. spinipes, and Tetragona clavipes are reported to have comparable habits of pillaging and invading, which emphasises the variety of strategies employed by stingless bees in acquiring resources.
Other enemies include: jumping spiders (Salticidae), moths, assassin bugs (Reduviidae), beetles, parasitoid wasps, predatory mites (Amblyseius), mantises (Mantodea), robber flies (Asilidae), etc.
Vertebrates
Human activities pose the most significant threat to stingless bees, whether through honey and nest removal, habitat destruction, pesticide use or introduction of non-native competitors. Large-scale environmental alterations, particularly the conversion of natural habitats into urban or intensively farmed land, are the most dramatic threats leading to habitat loss, reduced nest densities, and species disappearance.
Primates, including chimpanzees, gorillas, baboons, and various monkey species, are known to threaten stingless bee colonies. Elephants, honey badgers, sun bears, spectacled bears, anteaters, hog-nosed skunks, armadillos, tayras, eyra cats, kinkajous, grisons, and coyotes are among the mammals that consume or destroy stingless bee nests. Some, like the tayra and eyra cat, have specific preferences for stealing honey. Geckos, lizards, and toads also pose threats by hunting adult bees or consuming workers at nest entrances. Woodpeckers and various bird species, including bee-eaters, woodcreepers, drongos, jacamars, herons, kingbirds, flycatchers, swifts, and honeyeaters, occasionally prey on stingless bees. African honeyguides have developed a mutualism with human honey-hunters, actively guiding them to bee nests for honey extraction and then consuming leftover wax and larvae.
Defense
Being tropical, stingless bees are active all year round, although they are less active in cooler weather, and some species enter diapause. Unlike other eusocial bees, they do not sting, but will defend themselves by biting if their nest is disturbed. In addition, a few (in the genus Oxytrigona) have mandibular secretions, including formic acid, that cause painful blisters. Despite their lack of a sting, stingless bees, being eusocial, may have very large colonies made formidable by the sheer number of defenders.
Stingless bees use other sophisticated defence tactics to protect their colonies and ensure their survival. One important strategy is to choose nesting habitats with fewer natural enemies to reduce the risk of attacks. In addition, they use camouflage and mimicry to blend into their surroundings or imitate other animals to avoid detection. An effective strategy is to nest near colonies that provide protection, using collective strength to defend against potential invaders.
Nest entrance guards play a vital role in colony defense, actively preventing unauthorized entry by attacking intruders and releasing alarm pheromones to recruit additional defenders. It is worth noting that nest guards often carry sticky substances, such as resins and wax, in their corbiculae or mandibles. Stingless bees apply these substances to attackers to immobilise them, thus thwarting potential threats to the colony. Some species (Tetragonisca angustula and Nannotrigona testaceicornis, for example) also close their nest entrances with a soft and porous layer of cerumen at night, further enhancing colony security during vulnerable periods. These intricate defence mechanisms demonstrate the adaptability and resilience of stingless bees in safeguarding their nests and resources.
Role differentiation
In a simplified sense, the sex of each bee depends on the number of chromosome sets it receives. Female bees have two sets of chromosomes (diploid): one set from the queen and another from one of the male bees or drones. Drones have only one set of chromosomes (haploid) and are the result of unfertilized eggs, though inbreeding can result in diploid drones.
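To make the haplodiploid rule concrete, the following minimal Python sketch (illustrative only, not part of the original article) models sex determination with a single complementary sex-determination (csd) locus, a mechanism documented in honey bees and consistent with the diploid drones from inbreeding mentioned above; the function name and allele labels are hypothetical.

from typing import Optional

def bee_sex(maternal_allele: str, paternal_allele: Optional[str]) -> str:
    """Return the sex of an egg under haplodiploidy with single-locus csd.

    paternal_allele is None for an unfertilized (haploid) egg.
    """
    if paternal_allele is None:
        # Unfertilized egg: one chromosome set -> haploid male (drone).
        return "male (haploid drone)"
    if maternal_allele == paternal_allele:
        # Homozygous at the sex locus (typical of inbreeding) -> diploid drone.
        return "male (diploid drone)"
    # Heterozygous diploid egg -> female (worker or queen).
    return "female (worker or queen)"

# Example: a queen carrying csd alleles A/B mated to a drone carrying A.
print(bee_sex("A", None))   # male (haploid drone)
print(bee_sex("B", "A"))    # female (worker or queen)
print(bee_sex("A", "A"))    # male (diploid drone), the inbreeding case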
Unlike true honey bees, whose female bees may become workers or queens strictly depending on the kind of food they receive as larvae (queens are fed royal jelly and workers are fed pollen), the caste system in meliponines is variable and commonly based simply on the amount of pollen consumed; larger amounts of pollen yield queens in the genus Melipona. A genetic component also occurs, however, and as much as 25% (typically 5–14%) of the female brood may be queens. Queen cells in the former case can be distinguished from others by their larger size, as they are stocked with more pollen, but in the latter case, the cells are identical to worker cells and scattered among the worker brood. When the new queens emerge, they typically leave to mate, and most die. New nests are not established by swarms, but by a procession of workers that gradually construct a new nest at a secondary location. The nest is then joined by a newly mated queen, at which point many workers take up permanent residence and help the new queen raise her own workers. If a ruling queen is weak or dying, a new queen can replace her. In Schwarziana quadripunctata, although fewer than 1% of worker-sized cells produce dwarf queens, these make up six of every seven queens, and one in five goes on to head a colony of her own. They are reproductively active, but less fecund than large queens.
Interaction with humans
Pollination
Bees play a critical role in the ecosystem, particularly in the pollination of natural vegetation. This activity is essential for the reproduction of various plant species, particularly in tropical forests where most tree species rely on insect pollination. Even in temperate climates, where wind pollination is prevalent among forest trees, many bushes and herbaceous plants rely on bees for pollination. The significance of bees extends to arid regions, such as desertic and xeric shrublands, where bee-pollinated plants are essential for preventing erosion, supporting wildlife, and ensuring ecosystem stability.
The impact of bee pollination on agriculture is substantial. In the late 1980s, certain plants were estimated to contribute between $4.6 billion and $18.9 billion to the U.S. economy, primarily through insect-pollinated crops. Although some bee-pollinated plants can self-pollinate in the absence of bees, the resulting crops often suffer from inbreeding depression. The quality and quantity of seeds or fruits are significantly enhanced when bees participate in the pollination process. Although estimates of the share of crop pollination attributable to honey bees are uncertain, it is undeniable that bee pollination is a vital and economically valuable activity.
Ramalho (2004) demonstrated that stingless bees account for approximately 70% of all bees foraging on flowers in the Brazilian Tropical Atlantic Rainforest, even though they represent only 7% of all bee species there. In a habitat in Costa Rica, stingless bees accounted for 50% of the observed foraging bees, despite representing only 16% of the recorded bee species. Following this pattern, Cairns et al. (2005) found that 52% of all bees visiting flowers in Mexican habitats were meliponines.
Meliponine bees play a crucial role in tropical environments due to their high population rate, morphological diversity, diverse foraging strategies, generalist foraging habits (polylecty), and flower constancy during foraging trips. Nest densities and colony sizes can result in over a million individual stingless bees inhabiting a square kilometre of tropical habitat. Due to their diverse morphology and behaviour, bees are capable of collecting pollen and nectar from a wide range of flowering plants. Key plant families are reported as most visited by meliponines: Fabaceae, Euphorbiaceae, Asteraceae and Myrtaceae.
Grüter compiled studies on some twenty crops that substantially benefit from SB pollination, and also lists seventy-four crops that are at least occasionally or potentially pollinated by stingless bees.
Worldwide overview
Africa
Stingless bees also play a vital ecological role across Sub-Saharan Africa and Madagascar. To understand these insects on the African continent, it is important to consider the prevailing socio-economic and cultural contexts. Despite their ecological significance, the diversity, conservation, and behavior of these bees remain underexplored, particularly compared to better-studied regions such as South America and Southeast Asia. Honey bees, moreover, have been extensively researched, in contrast to native meliponines.
Africa is home to seven biodiversity hotspots, yet the recorded bee fauna is moderate relative to the continent's size. Madagascar stands out with exceptionally high levels of endemic species, though much of the bee diversity remains undocumented. Africa is home to approximately 36 species of meliponines, including seven endemic to Madagascar. Most of these bees are found in equatorial regions (tropical forests and some savannahs).
Factors such as habitat destruction, pesticide use, and invasive species pose significant threats to these pollinators. Furthermore, high rates of nest mortality, driven by predation and human activity, exacerbate conservation challenges. Research indicates that stingless bees in Africa face greater pressures than their counterparts in the American and Asian tropics, underlining the urgency for targeted conservation measures.
Surveys in Uganda's Bwindi Impenetrable National Park have shown the presence of at least five stingless bee species, distributed across two genera: Meliponula and Hypotrigona. In Madagascar, there is only one genus of stingless bees: Liotrigona.
Meliponiculture is practised in Angola and Tanzania, for example, and interest in managing stingless bees is growing in other African countries as well.
Australia
Of the 1,600 species of wild bees native to Australia, about 14 are meliponines. "Coot-tha", which derives from "ku-ta", is one of the Aboriginal names for "wild stingless bee honey". These species bear a variety of names, including Australian native honey bees, native bees, sugar-bag bees, and sweat bees (because they land on people's skin to collect sweat). The various stingless species look quite similar, with the two most common species, Tetragonula carbonaria and Austroplebeia australis, displaying the greatest variation, as the latter is smaller and less active. Both of these inhabit the area around Brisbane.
As stingless bees are usually harmless to humans, they have become an increasingly attractive addition to the suburban backyard. Most meliponine beekeepers do not keep the bees for honey, but rather for the pleasure of conserving native species whose original habitat is declining due to human development. In return, the bees pollinate crops, garden flowers, and bushland during their search for nectar and pollen. While a number of beekeepers fill a small niche market for bush honey, native meliponines only produce small amounts, and the structure of their hives makes the honey difficult to extract. Only warm areas of Australia, such as Queensland and northern New South Wales, are favorable enough for these bees to produce more honey than they need for their own survival. Most bees only come out of the hive when the temperature is above about 18°C (64°F). Harvesting honey from a nest in a cooler area could weaken or even kill the nest.
Pollination
Australian farmers rely almost exclusively on the introduced western honey bee to pollinate their crops. However, native bees may be better pollinators for certain agricultural crops. Stingless bees have been shown to be valuable pollinators of tropical plants such as macadamias and mangos. Their foraging may also benefit strawberries, watermelons, citrus, avocados, lychees, and many others. Research into the use of stingless bees for crop pollination in Australia is still in its very early stages, but these bees show great potential. Studies at the University of Western Sydney have shown these bees are effective pollinators even in confined areas, such as glasshouses.
Brazil
Brazil is home to several species of bees belonging to Meliponini, with more than 300 species already identified and probably more yet to be discovered and described. They vary greatly in shape, size, and habits, and 20 to 30 of these species have good potential as honey producers. Although they are still largely unknown to most people, an increasing number of beekeepers (meliponicultores, in Portuguese) have dedicated themselves to these bees throughout the country. This activity has experienced significant growth since August 2004, when national laws were changed to allow native bee colonies to be freely marketed, which was previously forbidden in an unsuccessful attempt to protect these species. Nowadays the capture or destruction of colonies existing in nature is still forbidden, and only new colonies formed by the bees themselves in artificial traps can be collected from the wild. Most marketed colonies are artificially produced by authorized beekeepers through division of already existing captive colonies. Besides honey production, Brazilian stingless bees such as the jataí (Tetragonisca angustula), mandaguari (Scaptotrigona postica), and mandaçaia (Melipona quadrifasciata) serve as major pollinators of tropical plants and are considered the ecological equivalent of the honey bee.
Also, much practical and academic work is being done on the best ways of keeping such bees, multiplying their colonies, and harvesting the honey they produce. Among many others, species such as jandaíra (Melipona subnitida) and true uruçu (Melipona scutellaris) in the northeast of the country, mandaçaia (Melipona quadrifasciata) and yellow uruçu (Melipona rufiventris) in the south-southeast, tiúba or jupará (Melipona interrupta) and canudo (Scaptotrigona polysticta) in the north, and jataí (Tetragonisca angustula) throughout the country are increasingly kept by small, medium, and large producers. Many other species, such as the mandaguari (Scaptotrigona postica), the guaraipo (Melipona bicolor), the marmelada (Frieseomelitta varia) and the iraí (Nannotrigona testaceicornis), to mention a few, are also reared.
According to ICMBio and the Ministry of the Environment, there are presently four species of Meliponini listed in the National Red List of Threatened Species in Brazil: Melipona capixaba, Melipona rufiventris, Melipona scutellaris, and Partamona littoralis, all listed as Endangered (EN).
Honey production
Although the colony population of most of these bees is much smaller than that of European bees, the productivity per bee can be quite high. Honey production is more closely tied to body size than to colony size. The manduri (Melipona marginata), jandaíra (Melipona subnitida) and the guaraipo (M. bicolor) live in colonies of only around 300 individuals but can still produce up to 5 liters (1.3 US gallons) of honey a year under the right conditions. In large bee farms, only the availability of flowers limits the honey production per colony. However, much larger numbers of beehives are required to produce amounts of honey comparable to that of European bees. Also, because these bees store honey in cerumen pots instead of the standardized honeycombs used in honey bee rearing, extraction is far more difficult and laborious.
The honey from stingless bees has a higher water content, from 25% to 35%, compared to the honey from the genus Apis. This contributes to its less cloying taste but also causes it to spoil more easily. Thus, for marketing, this honey needs to be processed through desiccation, fermentation or pasteurization. In its natural state, it should be kept under refrigeration.
Bees as pets
Due to the lack of a functional stinger and characteristic nonaggressive behavior of many Brazilian species of stingless bees, they can be reared without problems in densely populated environments (residential buildings, schools, urban parks), provided enough flowers are at their disposal nearby. Some breeders (meliponicultores) can produce honey even in apartments up to the 12th floor.
The mandaçaias (Melipona quadrifasciata) are extremely tame, rarely attacking humans (only when their hives are opened for honey extraction or colony division). They form small, manageable colonies of only 400–600 individuals. They are fairly large bees, up to 11 mm (7/16") in length, and as a result have better body heat control, allowing them to live in regions where temperatures can drop slightly below 0 °C (32 °F). However, they are somewhat selective about which flowers they will visit, preferring the flora that occurs in their natural environment. They are thus difficult to keep outside their region of origin (the eastern coast of Brazil). Once very common, the mandaçaia is now rather rare in nature, mainly due to the destruction of its native forests along the eastern coast of Brazil.
Other groups of Brazilian stingless bees, genera Plebeia and Leurotrigona, are also very tame and much smaller, with one of them (Plebeia minima) reaching no more than 2.5 mm (3/32") in length, and the lambe-olhos ("lick-eyes" bee, Leurotrigona muelleri) being even smaller, at no more than 1.5 mm (1/16"). Many of these species are known as mirim (meaning 'small' in the Tupi-Guarani languages). As a result, they can be kept in very small artificial hives, making them of interest to keepers who want them as pollinators in small glasshouses or just for the pleasure of having a 'toy' bee colony at home. Being so tiny, these species produce only a very small amount of honey, typically less than 500 ml (1/2 US pint) a year, so they are not of interest for commercial honey production.
Belonging to the same group, the jataí (Tetragonisca angustula), the marmelada (Frieseomelitta varia), and the moça-branca (Frieseomelitta doederleini) are intermediate in size between those very small species and the European bee. They are very adaptable species; the jataí in particular can be reared in many different regions and environments, being quite common in most Brazilian cities. The jataí can bite when disturbed, but its jaws are weak and in practice they are harmless, while the marmelada and moça-branca usually deposit propolis on their aggressors. The jataí is one of the first species to be kept by home beekeepers. Their nests can be easily identified in trees or wall cavities by the yellow wax pipe they build at the entrance, usually guarded by some soldier bees, which are stronger than regular worker bees. The marmelada and moça-branca make a little less honey, but it is denser and sweeter than most from other stingless bees and is considered very tasty.
Central America
The stingless bees Melipona beecheii and M. yucatanica are the primary native bees cultured in Central America, though a few other species are reported as being occasionally managed (e.g., Trigona fulviventris and Scaptotrigona mexicana). They were extensively cultured by the Maya civilization for honey, and regarded as sacred. They continue to be cultivated by the modern Maya peoples, although these bees are endangered due to massive deforestation, altered agricultural practices (especially overuse of insecticides), and changing beekeeping practices with the arrival of the Africanized honey bee, which produces much greater honey crops.
History
Native meliponines (M. beecheii being the most common) have been kept by the lowland Maya for thousands of years. The Yucatec Maya language name for this bee is xunan kab, meaning "(royal, noble) lady bee". The bees were once the subject of religious ceremonies and were a symbol of the bee-god Ah-Muzen-Cab, known from the Madrid Codex.
The bees were, and still are, treated as pets. Families would have one or many log-hives hanging in and around their houses. Although they are stingless, the bees do bite and can leave welts similar to a mosquito bite. The traditional way to gather bees, still favored among the locals, is to find a wild hive and then cut the branch around it, creating a portable log that encloses the colony. With proper maintenance, hives have been recorded as lasting over 80 years, being passed down through generations. In the archaeological record of Mesoamerica, stone discs have been found that are generally considered to be the caps of long-disintegrated logs that once housed the beehives.
Tulum
Tulum, the site of a pre-Columbian Maya city on the Caribbean coast 130 km (81 mi) south of Cancun, has a god depicted repeatedly all over the site. Upside down, he appears as a small figure over many doorways and entrances. One of the temples, the Temple of the Descending God (Templo del Dios Descendente), stands just left of the central plaza. Speculation is that he may be the "Bee God", Ah Muzen Cab, as seen in the Madrid Codex. It is possible that this was a religious/trade center with emphasis on xunan kab, the "royal lady".
Economic uses
Balché, a traditional Mesoamerican alcoholic beverage similar to mead, was made from fermented honey and the bark of the leguminous balché tree (Lonchocarpus violaceus), hence its name. It was traditionally brewed in a canoe. The drink was known to have entheogenic properties, that is, to produce mystical experiences, and was consumed in medicinal and ritual practices. Beekeepers would place the nests near the psychoactive plant Turbina corymbosa and possibly near balché trees, forcing the bees to use nectar from these plants to make their honey. Additionally, brewers would add extracts of the bark of the balché tree to the honey mixture before fermentation. The resulting beverage is responsible for psychotropic effects when consumed, due to the ergoline compounds in the pollen of the T. corymbosa, the Melipona nectar gathered from the balché flowers, or the hallucinogenic compounds of the balché tree bark.
Lost-wax casting, a common metalworking method typically found where the inhabitants keep bees, was also used by the Maya. The wax from Melipona is soft and easy to work, especially in the humid Maya lowlands. This allowed the Maya to create small works of art, jewelry, and other metalwork that would be difficult to produce by forging. It also makes use of the leftovers from honey extraction: if a hive was damaged beyond repair, the whole of the comb could be used, thus using all of the hive. With experienced keepers, though, only the honey pot would be removed, the honey extracted, and the wax used for casting or other purposes.
Future
The outlook for meliponines in Mesoamerica is uncertain. The number of active Meliponini beekeepers is small in comparison with the number of Africanized Apis mellifera breeders. The high honey yield of the Africanized bees, 100 kg (220 lb) or more annually, along with the ease of hive care and the ability to create new hives from existing stock, commonly outweighs the drawbacks of "killer bee" hive maintenance.
An additional blow to the art of meliponine beekeeping is that many of the meliponicultores are now elderly, and their hives may not be cared for once they die. The hives are treated like an old family collection, to be parted out once the collector dies, or to be buried whole or in part with the beekeeper. In fact, a survey of a once-popular beekeeping area of the Maya lowlands shows a rapid decline: around 70 beekeepers remained in 2004, down from thousands in the late 1980s. Conservation efforts are underway in several parts of Mesoamerica.
| Biology and health sciences | Hymenoptera | Animals |
2559987 | https://en.wikipedia.org/wiki/Penaeus%20monodon | Penaeus monodon | Penaeus monodon, commonly known as the giant tiger prawn, Asian tiger shrimp, black tiger shrimp, and other names, is a marine crustacean that is widely reared for food.
Taxonomy
Penaeus monodon was first described by Johan Christian Fabricius in 1798. That name was overlooked until 1949, when Lipke Holthuis clarified to which species it referred. Holthuis also showed that P. monodon had to be the type species of the genus Penaeus.
Description
Females are slightly larger and heavier than males. The carapace and abdomen are transversely banded with alternating red and white. The antennae are grayish brown. The pereiopods and pleopods are brown with red fringing setae.
Distribution
Its natural distribution is the Indo-Pacific, ranging from the eastern coast of Africa and the Arabian Peninsula, as far as Southeast Asia, the Pacific Ocean, and northern Australia.
It is an invasive species in the northern waters of the Gulf of Mexico and the Atlantic Ocean off the Southern U.S.
Invasive species
The first occurrence of P. monodon in the U.S. was in November 1988. Close to 300 shrimp were captured off the southeastern U.S. coast after an accidental release from an aquaculture facility. This species can now be caught in waters from Texas to North Carolina. Although P. monodon has been an invasive species for many years, it has yet to form large, established populations. Escapes in other parts of the world, though, have led to established P. monodon populations, such as off West Africa, Brazil, and the Caribbean.
Habitat
P. monodon is suited to a wide range of environments. It occurs mainly in Southeast Asia, but is widely distributed. Juveniles are generally found in sandy estuaries and mangroves; upon adulthood, they move to deeper waters (0–110 m) and live on muddy or rocky bottoms. P. monodon has been shown to be nocturnal in the wild, burrowing into the substrate during the day and emerging at night to feed on detritus, polychaete worms, mollusks, small crustaceans, and algae. Their feeding appendages make them unable to consume living phytoplankton, although they can consume senescent phytoplankton. They also mate at night, and females can produce around 800,000 eggs.
Aquaculture
P. monodon is the second-most widely cultured prawn species in the world, after only whiteleg shrimp, Litopenaeus vannamei. In 2009, 770,000 tonnes were produced, with a total value of US$3,650,000,000. P. monodon alone makes up nearly 50% of cultured shrimp.
The prawn is popular to culture because of its tolerance to salinity and very quick growth rate, but it is very vulnerable to fungal, viral, and bacterial infections. Diseases such as white spot disease and yellowhead disease have had a great economic impact on shrimp industries around the globe. It can contract diseases transmitted by other crustaceans such as the Australian red claw crayfish (Cherax quadricarinatus), which is susceptible to yellowhead disease and has been shown to transmit it to P. monodon in Thailand.
The black tiger shrimp's susceptibility to many diseases imposes economic constraints on Australia's farm-raised black tiger shrimp industry. To confront such challenges, attempts have been made to selectively breed specific pathogen-resistant lines of the species.
P. monodon has been farmed throughout the world, including West Africa, Hawaii, Tahiti, and England. For optimal growth, P. monodon is raised in waters between 28 and 33 °C. Characteristically for the genus Penaeus, P. monodon can survive and grow in a wide range of salinities, though its optimal salinity is around 15–25 g/L. In farm settings, the shrimp are typically fed a compound diet in the form of dried pellets; supplementing compound feed with fresh feed has been shown to improve reproductive performance.
Sustainable consumption
In 2010, Greenpeace added P. monodon to its seafood red list – "a list of fish that are commonly sold in supermarkets around the world, and which have a very high risk of being sourced from unsustainable fisheries". The reasons given by Greenpeace were "destruction of vast areas of mangroves in several countries, overfishing of juvenile shrimp from the wild to supply farms, and significant human-rights abuses".
Genetic research
In an effort to understand whether DNA repair processes can protect crustaceans against infection, basic research was conducted to elucidate the repair mechanisms used by P. monodon. Repair of DNA double-strand breaks was found to be predominantly carried out by accurate homologous recombinational repair. Another, less accurate process, microhomology-mediated end joining, is also used to repair such breaks.
| Biology and health sciences | Shrimps and prawns | Animals |
1843447 | https://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson%20method | Crank–Nicolson method | In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the 1940s.
For diffusion equations (and many other equations), it can be shown the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step times the thermal diffusivity to the square of the space step, $a\,\Delta t/(\Delta x)^2$, is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
Principle
The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method—the simplest example of a Gauss–Legendre implicit Runge–Kutta method—which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is
$$\frac{\partial u}{\partial t} = F\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right).$$
Letting $u_i^n = u(i\,\Delta x,\, n\,\Delta t)$ and $F_i^n$ denote $F$ evaluated at $i$, $n$ and $u_i^n$, the equation for the Crank–Nicolson method is a combination of the forward Euler method at $n$ and the backward Euler method at $n+1$ (note, however, that the method itself is not simply the average of those two methods, as the backward Euler equation has an implicit dependence on the solution):
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = F_i^n \quad \text{(forward Euler)},$$
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = F_i^{n+1} \quad \text{(backward Euler)},$$
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{1}{2}\left(F_i^{n+1} + F_i^n\right) \quad \text{(Crank–Nicolson)}.$$
Note that this is an implicit method: to get the "next" value of $u$ in time, a system of algebraic equations must be solved. If the partial differential equation is nonlinear, the discretization will also be nonlinear, so that advancing in time will involve the solution of a system of nonlinear algebraic equations, though linearizations are possible. In many problems, especially linear diffusion, the algebraic problem is tridiagonal and may be efficiently solved with the tridiagonal matrix algorithm, which gives a fast $O(n)$ direct solution, as opposed to the usual $O(n^3)$ for a full matrix, in which $n$ indicates the matrix size.
Example: 1D diffusion
The Crank–Nicolson method is often applied to diffusion problems. As an example, for linear diffusion,
$$\frac{\partial u}{\partial t} = a\,\frac{\partial^2 u}{\partial x^2},$$
applying a finite difference spatial discretization for the right-hand side, the Crank–Nicolson discretization is then
$$\frac{u_i^{n+1} - u_i^n}{\Delta t} = \frac{a}{2\,(\Delta x)^2}\left[\left(u_{i+1}^{n+1} - 2u_i^{n+1} + u_{i-1}^{n+1}\right) + \left(u_{i+1}^{n} - 2u_i^{n} + u_{i-1}^{n}\right)\right]$$
or, letting $r = \dfrac{a\,\Delta t}{2\,(\Delta x)^2}$:
$$-r\,u_{i+1}^{n+1} + (1+2r)\,u_i^{n+1} - r\,u_{i-1}^{n+1} = r\,u_{i+1}^{n} + (1-2r)\,u_i^{n} + r\,u_{i-1}^{n}.$$
Given that the terms on the right-hand side of the equation are known, this is a tridiagonal problem, so that $u_i^{n+1}$ may be efficiently computed with the tridiagonal matrix algorithm rather than by the much more costly inversion of the full matrix.
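The scheme above maps directly onto a banded linear solve. The following is a minimal illustrative sketch (not from the source) in Python using SciPy's banded solver, assuming constant Dirichlet boundary values that are folded into the right-hand side:

```python
import numpy as np
from scipy.linalg import solve_banded

def crank_nicolson_1d(u0, a, dx, dt, steps):
    """Advance the 1D diffusion equation u_t = a u_xx with fixed
    (Dirichlet) boundary values, using the Crank-Nicolson scheme."""
    u = u0.copy()
    n = len(u) - 2                       # number of interior nodes
    r = a * dt / (2 * dx**2)

    # Left-hand side: tridiagonal, (1+2r) on the diagonal and -r off it,
    # stored in the banded layout expected by solve_banded.
    ab = np.zeros((3, n))
    ab[0, 1:] = -r                       # superdiagonal
    ab[1, :] = 1 + 2 * r                 # diagonal
    ab[2, :-1] = -r                      # subdiagonal

    for _ in range(steps):
        # Right-hand side: r*u[i+1] + (1-2r)*u[i] + r*u[i-1],
        # plus the (time-constant) boundary contributions.
        rhs = r * u[2:] + (1 - 2 * r) * u[1:-1] + r * u[:-2]
        rhs[0] += r * u[0]
        rhs[-1] += r * u[-1]
        u[1:-1] = solve_banded((1, 1), ab, rhs)
    return u

# Example: a heat pulse relaxing on the unit interval.
x = np.linspace(0.0, 1.0, 101)
u = crank_nicolson_1d(np.exp(-200 * (x - 0.5)**2), a=1.0,
                      dx=x[1] - x[0], dt=1e-3, steps=100)
```

Because the left-hand-side matrix does not change between steps, a production code could also factor it once; even re-solving from the banded form, as here, is already $O(n)$ per step.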
A quasilinear equation, such as (this is a minimalistic example and not general)
$$\frac{\partial u}{\partial t} = a(u)\,\frac{\partial^2 u}{\partial x^2}$$
would lead to a nonlinear system of algebraic equations, which could not be easily solved as above; however, it is possible in some cases to linearize the problem by using the old value for $a$, that is, $a(u_i^n)$ instead of $a(u_i^{n+1})$. Other times, it may be possible to estimate $a(u_i^{n+1})$ using an explicit method while maintaining stability.
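Below is a hedged sketch of the lagged-coefficient linearization just described, reusing the banded-solver idea from the previous example; `a_of_u` is an assumed vectorized callable returning the diffusivity at each interior node:

```python
import numpy as np
from scipy.linalg import solve_banded

def cn_step_lagged(u, a_of_u, dx, dt):
    """One Crank-Nicolson step for u_t = a(u) u_xx, linearized by
    evaluating the coefficient at the old time level, a(u^n)."""
    r = a_of_u(u[1:-1]) * dt / (2 * dx**2)   # one r_i per interior node
    n = len(r)
    ab = np.zeros((3, n))
    ab[0, 1:] = -r[:-1]           # superdiagonal (row i couples to i+1)
    ab[1, :] = 1 + 2 * r          # diagonal
    ab[2, :-1] = -r[1:]           # subdiagonal (row i couples to i-1)
    rhs = r * (u[2:] - 2 * u[1:-1] + u[:-2]) + u[1:-1]
    rhs[0] += r[0] * u[0]         # fixed boundary contributions
    rhs[-1] += r[-1] * u[-1]
    out = u.copy()
    out[1:-1] = solve_banded((1, 1), ab, rhs)
    return out
```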
Example: 1D diffusion with advection for steady flow, with multiple channel connections
This is a solution usually employed for many purposes when there is a contamination problem in streams or rivers under steady flow conditions, but information is given in one dimension only. Often the problem can be simplified into a 1-dimensional problem and still yield useful information.
Here we model the concentration of a solute contaminant in water. This problem is composed of three parts: the known diffusion equation ($D_x$ chosen as constant), an advective component (which means that the system is evolving in space due to a velocity field), which we choose to be a constant $U_x$, and a lateral interaction between longitudinal channels ($k$):
$$\frac{\partial C}{\partial t} = D_x\,\frac{\partial^2 C}{\partial x^2} - U_x\,\frac{\partial C}{\partial x} - k\,(C - C_N) - k\,(C - C_M),$$
where $C$ is the concentration of the contaminant, and subscripts $N$ and $M$ correspond to the previous and next channel.
The Crank–Nicolson method (where $j$ represents position and $n$ time) transforms each component of the PDE into the following:
$$\frac{\partial C}{\partial t} \;\Rightarrow\; \frac{C_j^{n+1} - C_j^{n}}{\Delta t},$$
$$\frac{\partial^2 C}{\partial x^2} \;\Rightarrow\; \frac{1}{2\,(\Delta x)^2}\left[\left(C_{j+1}^{n+1} - 2C_j^{n+1} + C_{j-1}^{n+1}\right) + \left(C_{j+1}^{n} - 2C_j^{n} + C_{j-1}^{n}\right)\right],$$
$$\frac{\partial C}{\partial x} \;\Rightarrow\; \frac{1}{2}\left[\frac{C_{j+1}^{n+1} - C_{j-1}^{n+1}}{2\,\Delta x} + \frac{C_{j+1}^{n} - C_{j-1}^{n}}{2\,\Delta x}\right],$$
$$C \;\Rightarrow\; \frac{1}{2}\left(C_j^{n+1} + C_j^{n}\right), \qquad C_N \;\Rightarrow\; \frac{1}{2}\left(C_{N,j}^{n+1} + C_{N,j}^{n}\right), \qquad C_M \;\Rightarrow\; \frac{1}{2}\left(C_{M,j}^{n+1} + C_{M,j}^{n}\right).$$
Now we create the following constants to simplify the algebra:
$$\lambda = \frac{D_x\,\Delta t}{2\,(\Delta x)^2}, \qquad \alpha = \frac{U_x\,\Delta t}{4\,\Delta x}, \qquad \beta = \frac{k\,\Delta t}{2},$$
and substitute these, together with the discretized components, into the PDE. We then put the new time terms on the left ($n+1$) and the present time terms on the right ($n$) to get
$$-\beta C_{N,j}^{n+1} - (\lambda+\alpha)\,C_{j-1}^{n+1} + (1 + 2\lambda + 2\beta)\,C_j^{n+1} - (\lambda-\alpha)\,C_{j+1}^{n+1} - \beta C_{M,j}^{n+1} = \beta C_{N,j}^{n} + (\lambda+\alpha)\,C_{j-1}^{n} + (1 - 2\lambda - 2\beta)\,C_j^{n} + (\lambda-\alpha)\,C_{j+1}^{n} + \beta C_{M,j}^{n}.$$
To model the first channel, we realize that it can only be in contact with the following channel ($M$), so the expression is simplified to
$$-\beta C_{M,j}^{n+1} - (\lambda+\alpha)\,C_{j-1}^{n+1} + (1 + 2\lambda + \beta)\,C_j^{n+1} - (\lambda-\alpha)\,C_{j+1}^{n+1} = \beta C_{M,j}^{n} + (\lambda+\alpha)\,C_{j-1}^{n} + (1 - 2\lambda - \beta)\,C_j^{n} + (\lambda-\alpha)\,C_{j+1}^{n}.$$
In the same way, to model the last channel, we realize that it can only be in contact with the previous channel ($N$), so the expression is simplified to
$$-\beta C_{N,j}^{n+1} - (\lambda+\alpha)\,C_{j-1}^{n+1} + (1 + 2\lambda + \beta)\,C_j^{n+1} - (\lambda-\alpha)\,C_{j+1}^{n+1} = \beta C_{N,j}^{n} + (\lambda+\alpha)\,C_{j-1}^{n} + (1 - 2\lambda - \beta)\,C_j^{n} + (\lambda-\alpha)\,C_{j+1}^{n}.$$
To solve this linear system of equations, boundary conditions must first be given at the beginning of the channels:
$C_{j=0}^{n}$: initial condition for the channel at the present time step,
$C_{j=0}^{n+1}$: initial condition for the channel at the next time step,
$C_{N,\,j=0}^{n}$: initial condition for the previous channel to the one analyzed at the present time step,
$C_{M,\,j=0}^{n}$: initial condition for the next channel to the one analyzed at the present time step.
For the last cell of the channels ($j = z$), the most convenient condition becomes an adiabatic one, so
$$\frac{\partial C}{\partial x}\bigg|_{x=z} = \frac{C_{j+1} - C_{j}}{\Delta x} = 0.$$
This condition is satisfied if and only if (regardless of a null value)
$$C_{j+1}^{n+1} = C_{j}^{n+1}.$$
Let us solve this problem (in a matrix form) for the case of 3 channels and 5 nodes (including the initial boundary condition). We express this as a linear system problem:
$$\mathbf{AA}\,\mathbf{C}^{n+1} = \mathbf{BB}\,\mathbf{C}^{n} + \mathbf{d},$$
where $\mathbf{C}^{n+1}$ is the vector of unknown concentrations at the next time step and $\mathbf{C}^{n}$ the vector of known concentrations at the present time step.
Now we must realize that AA and BB should be arrays made of four different subarrays (remember that only three channels are considered for this example, but it covers the main part discussed above):
where the elements mentioned above correspond to the channel and coupling subarrays, together with an additional 4×4 block of zeros. Note that AA and BB each have size 12×12 (three channels times four unknown nodes).
The d vector here is used to hold the boundary conditions. In this example it is a 12×1 vector.
To find the concentration at any time, one must iterate the following equation:
$$\mathbf{C}^{n+1} = \mathbf{AA}^{-1}\left(\mathbf{BB}\,\mathbf{C}^{n} + \mathbf{d}\right).$$
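In code, this iteration amounts to one factorization of AA followed by repeated solves. A small illustrative sketch (an assumed NumPy/SciPy implementation, with AA, BB, and d assembled as described above):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def iterate_channels(AA, BB, d, c0, steps):
    """Advance the multi-channel system by repeatedly solving
    AA @ c_new = BB @ c_old + d.  AA is factored once, since it
    does not change between time steps."""
    lu = lu_factor(AA)
    c = c0.copy()
    history = [c0.copy()]
    for _ in range(steps):
        c = lu_solve(lu, BB @ c + d)
        history.append(c.copy())
    return np.array(history)
```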
Example: 2D diffusion
When extending into two dimensions on a uniform Cartesian grid, the derivation is similar and the results may lead to a system of band-diagonal equations rather than tridiagonal ones. The two-dimensional heat equation
$$\frac{\partial u}{\partial t} = a\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)$$
can be solved with the Crank–Nicolson discretization of
$$\frac{u_{i,j}^{n+1} - u_{i,j}^{n}}{\Delta t} = \frac{a}{2\,(\Delta x)^2}\Big[\big(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1} - 4u_{i,j}^{n+1}\big) + \big(u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n} - 4u_{i,j}^{n}\big)\Big],$$
assuming that a square grid is used, so that $\Delta x = \Delta y$. This equation can be simplified somewhat by rearranging terms and using the CFL number
$$\mu = \frac{a\,\Delta t}{(\Delta x)^2}.$$
For the Crank–Nicolson numerical scheme, a low CFL number is not required for stability; however, it is required for numerical accuracy. We can now write the scheme as
$$(1 + 2\mu)\,u_{i,j}^{n+1} - \frac{\mu}{2}\left(u_{i+1,j}^{n+1} + u_{i-1,j}^{n+1} + u_{i,j+1}^{n+1} + u_{i,j-1}^{n+1}\right) = (1 - 2\mu)\,u_{i,j}^{n} + \frac{\mu}{2}\left(u_{i+1,j}^{n} + u_{i-1,j}^{n} + u_{i,j+1}^{n} + u_{i,j-1}^{n}\right).$$
Solving such a linear system is costly. Hence an alternating-direction implicit (ADI) method can be implemented to solve the numerical PDE, whereby one dimension is treated implicitly and the other dimension explicitly for half of the assigned time step, and conversely for the remaining half of the time step. The benefit of this strategy is that the implicit solver only requires a tridiagonal matrix algorithm. The difference between the true Crank–Nicolson solution and the ADI-approximated solution is of higher order in the time step and hence can be ignored with a sufficiently small time step.
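A compact sketch of one Peaceman–Rachford-style ADI step for the scheme above follows. It is illustrative only: it assumes the square grid of the text and fixed (Dirichlet) boundary values, and takes mu as the CFL number defined earlier; each half step solves only tridiagonal systems:

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, mu):
    """One Peaceman-Rachford ADI step for u_t = a (u_xx + u_yy) on a
    square grid with fixed boundary values; mu = a*dt/dx**2."""
    def implicit_sweep(v, w):
        # Solve (1 + mu) q_j - (mu/2)(q_{j-1} + q_{j+1}) = rhs along the
        # second axis; rhs applies the explicit operator to w along the
        # first axis.
        n = v.shape[1] - 2
        ab = np.zeros((3, n))
        ab[0, 1:] = -mu / 2
        ab[1, :] = 1 + mu
        ab[2, :-1] = -mu / 2
        out = v.copy()
        for i in range(1, v.shape[0] - 1):
            rhs = (w[i, 1:-1]
                   + (mu / 2) * (w[i - 1, 1:-1] - 2 * w[i, 1:-1]
                                 + w[i + 1, 1:-1]))
            rhs[0] += (mu / 2) * v[i, 0]      # fixed boundary values
            rhs[-1] += (mu / 2) * v[i, -1]
            out[i, 1:-1] = solve_banded((1, 1), ab, rhs)
        return out

    half = implicit_sweep(u, u)               # implicit in x, explicit in y
    return implicit_sweep(half.T, half.T).T   # implicit in y, explicit in x
```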
Crank–Nicolson for nonlinear problems
Because the Crank–Nicolson method is implicit, it is generally impossible to solve exactly. Instead, an iterative technique should be used to converge to the solution. One option is to use Newton's method to converge on the prediction, but this requires the computation of the Jacobian. For a high-dimensional system like those in computational fluid dynamics or numerical relativity, it may be infeasible to compute this Jacobian.
A Jacobian-free alternative is fixed-point iteration. If $f$ is the velocity of the system, then the Crank–Nicolson prediction will be a fixed point of the map
$$\Phi(x) = x^{n} + \frac{\Delta t}{2}\left[f(x^{n}) + f(x)\right].$$
If the map iteration does not converge, the parameterized map $\Theta(x, \alpha) = \alpha x + (1-\alpha)\,\Phi(x)$, with $\alpha \in (0,1)$, may be better behaved. In expanded form, the update formula is
$$x^{(i+1)} = \alpha\,x^{(i)} + (1-\alpha)\left[x^{n} + \frac{\Delta t}{2}\left(f(x^{n}) + f(x^{(i)})\right)\right],$$
where $x^{(i)}$ is the current guess and $x^{n}$ is the value at the previous time-step.
Even for high-dimensional systems, iteration of this map can converge surprisingly quickly.
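A minimal sketch of this damped fixed-point iteration (illustrative only; the velocity `f`, the step size, and the blending factor `alpha` below are assumptions, not from the source):

```python
import numpy as np

def cn_fixed_point(f, x_n, dt, alpha=0.5, tol=1e-10, max_iter=200):
    """Solve the Crank-Nicolson update x = x_n + dt/2 * (f(x_n) + f(x))
    by damped fixed-point iteration: each iterate is blended with the
    previous one by a factor alpha, as in the parameterized map above."""
    fx_n = f(x_n)                 # f at the previous time step (constant)
    x = x_n + dt * fx_n           # forward-Euler initial guess
    for _ in range(max_iter):
        x_new = alpha * x + (1 - alpha) * (x_n + 0.5 * dt * (fx_n + f(x)))
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("fixed-point iteration did not converge")

# Example: logistic growth x' = x (1 - x), one step from x = 0.1.
x1 = cn_fixed_point(lambda x: x * (1 - x), np.array([0.1]), dt=0.5)
```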
Application in financial mathematics
Because a number of other phenomena can be modeled with the heat equation (often called the diffusion equation in financial mathematics), the Crank–Nicolson method has been applied to those areas as well. Particularly, the Black–Scholes option pricing model's differential equation can be transformed into the heat equation, and thus numerical solutions for option pricing can be obtained with the Crank–Nicolson method.
The importance of this for finance is that option pricing problems, when extended beyond the standard assumptions (e.g. incorporating changing dividends), cannot be solved in closed form, but can be solved using this method. Note, however, that for non-smooth final conditions (which occur for most financial instruments), the Crank–Nicolson method is not satisfactory, as numerical oscillations are not damped. For vanilla options, this results in oscillation in the gamma value around the strike price. Therefore, special damping initialization steps are necessary (e.g., a few steps of the fully implicit finite difference method).
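As an illustrative sketch only (grid sizes, boundary handling, and parameter choices are assumptions, not from the source), the Black–Scholes PDE can be stepped backwards from the payoff with Crank–Nicolson on a grid in the underlying price; per the caveat above, a production pricer would add damping steps for the non-smooth payoff, which this sketch omits:

```python
import numpy as np
from scipy.linalg import solve_banded

def bs_call_cn(K, r, sigma, T, s_max=300.0, ns=300, nt=300):
    """Price a European call with Crank-Nicolson applied to the
    Black-Scholes PDE, stepping backwards from the payoff at expiry."""
    S = np.linspace(0.0, s_max, ns + 1)
    dt = T / nt
    V = np.maximum(S - K, 0.0)            # terminal (payoff) condition
    i = np.arange(1, ns)                  # interior node indices

    # Spatial operator L V = a V_{i-1} + b V_i + c V_{i+1}.
    a = 0.5 * (sigma**2 * i**2 - r * i)
    b = -(sigma**2 * i**2 + r)
    c = 0.5 * (sigma**2 * i**2 + r * i)

    # Crank-Nicolson: (I - dt/2 L) V_new = (I + dt/2 L) V_old.
    ab = np.zeros((3, ns - 1))
    ab[0, 1:] = -0.5 * dt * c[:-1]
    ab[1, :] = 1 - 0.5 * dt * b
    ab[2, :-1] = -0.5 * dt * a[1:]

    for m in range(nt):
        tau = (m + 1) * dt                # time to expiry after this step
        rhs = V[1:-1] + 0.5 * dt * (a * V[:-2] + b * V[1:-1] + c * V[2:])
        # Boundaries: V = 0 at S = 0; V ~ S - K e^{-r tau} at S = s_max.
        hi = s_max - K * np.exp(-r * tau)
        rhs[-1] += 0.5 * dt * c[-1] * hi
        V[1:-1] = solve_banded((1, 1), ab, rhs)
        V[0], V[-1] = 0.0, hi
    return S, V
```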
| Mathematics | Differential equations | null |
1846025 | https://en.wikipedia.org/wiki/Schmidt%E2%80%93Cassegrain%20telescope | Schmidt–Cassegrain telescope | The Schmidt–Cassegrain is a catadioptric telescope that combines a Cassegrain reflector's optical path with a Schmidt corrector plate to make a compact astronomical instrument that uses simple spherical surfaces.
Invention and design
The American astronomer and lens designer James Gilbert Baker first proposed a Cassegrain design for Bernhard Schmidt's Schmidt camera in 1940. The optical shop at Mount Wilson Observatory manufactured the first one during World War II as part of their research into optical designs for the military. As in the Schmidt camera, this design uses a spherical primary mirror and a Schmidt corrector plate to correct for spherical aberration. In this Cassegrain configuration the convex secondary mirror acts as a field flattener and relays the image through the perforated primary mirror to a final focal plane located behind the primary. Some designs include additional optical elements (such as field flatteners) near the focal plane. The first large telescope to use the design was the James Gregory Telescope of 1962 at the University of St Andrews.
As of 2021, the James Gregory Telescope is also recognized as the largest Schmidt–Cassegrain. The telescope is noted for its large field of view, up to 60 times that of a full moon.
Derivative designs
While there are many variations of the Schmidt–Cassegrain telescope design (both mirrors spherical, both mirrors aspherical, or one of each), they can be divided into two principal types: compact and non-compact. In the compact form, the corrector plate is located at or near the focus of the primary mirror. In the non-compact, the corrector plate remains at or near the center of curvature (twice the focal length) of the primary mirror.
Compact designs combine a fast primary mirror and a small, strongly curved secondary. This yields a very short tube length, at the expense of field curvature. Compact designs have a primary mirror with a focal ratio of around f/2 and a secondary with a focal ratio also around f/2, the separation of the two mirrors determining a typical system focal ratio around f/10.
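As a worked illustration with assumed round numbers (not taken from the source), the system focal length of a Cassegrain-type design is the primary focal length multiplied by the secondary magnification $m$, and the system focal ratio follows from the aperture $D$:
$$f_{\text{sys}} = m\,f_{\text{p}}, \qquad N_{\text{sys}} = \frac{f_{\text{sys}}}{D}.$$
For an assumed aperture of $D = 200$ mm with an f/2 primary ($f_{\text{p}} = 400$ mm) and a secondary magnification of $m = 5$, the system focal length is 2000 mm, giving the typical f/10 system of the compact form.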
One very well-corrected type of non-compact design is the concentric (or monocentric) Schmidt–Cassegrain, where all the mirror surfaces and the focal surface are concentric to a single point: the center of curvature of the primary. Optically, non-compact designs give better aberration correction and a flatter field than most compact designs, but at the expense of longer tube length.
Amateur astronomical applications
The Schmidt–Cassegrain design is very popular with consumer telescope manufacturers because it combines easy-to-manufacture spherical optical surfaces to create an instrument with the long focal length of a refracting telescope with the lower cost per aperture of a reflecting telescope. The compact design makes it very portable for its given aperture, which adds to its marketability. Their high f-ratio means they are not a wide-field telescope like their Schmidt camera predecessor, but they are good for more narrow-field deep sky and planetary viewing.
Consumer versions of this design typically achieve focus by adjusting the position of the primary mirror rather than by moving the eyepiece. This means that small changes in the position of the mirror are magnified by the focal length of the telescope. Because the mirror is not permanently fixed in place, it can shift by a small amount and cause the image to move, a problem known as "mirror flop". Some Schmidt–Cassegrain telescopes are equipped with mirror locks to fix the primary mirror in place once focus has been achieved.
| Technology | Telescope | null |
20745984 | https://en.wikipedia.org/wiki/Lecanorales | Lecanorales | The Lecanorales are an order of mostly lichen-forming fungi belonging to the class Lecanoromycetes in the division Ascomycota. The order contains 26 families, 269 genera, and 5695 species.
Families
Suborder Lecanorineae
Biatorellaceae M. Choisy ex Hafellner & Casares-Porcel, 1992
Brigantiaeaceae Hafellner & Bellem., 1982
Bruceomycetaceae Rikkinen & A.R.Schmidt in Rikkinen et al.
Byssolomataceae Zahlbr. 1926
Carbonicolaceae Bendiksby & Timdal (2013)
Catillariaceae Hafellner, 1984
Cetradoniaceae J.C. Wei & Ahti 2002
Cladoniaceae Zenker, J.C. 1827–1829
Dactylosporaceae Bellem. & Hafellner, 1982
Gypsoplacaceae Timdal, E. 1990
Haematommataceae Hafellner, 1984
Lecanoraceae Fée, A.L.A. 1824
Malmideaceae Kalb, K., Rivas Plata, E., Lücking, R. & Lumbsch, H.T. 2011
Pachyascaceae Poelt ex P.M.Kirk, P.F.Cannon & J.C.David, 2001
Parmeliaceae Berchtold, F.v. & Presl, J.S. 1820
Pilocarpaceae Zahlbr., 1905
Porpidiaceae Hertel & Hafellner (1984)
Psilolechiaceae S. Stenroos, Miądl. & Lutzoni, 2014
Psoraceae Zahlbr., 1898
Ramalinaceae C. Agardh, 1821
Ramboldiaceae S. Stenroos, Miądl. & Lutzoni, 2014
Scoliciosporaceae Hafellner, 1984
Sphaerophoraceae Fée, A.L.A. 1824
Tephromelataceae Hafellner, 1984
Vezdaeaceae Poelt & Vezda ex J.C. David & D. Hawksw., 1991
Incertae sedis (of uncertain placement)
There are several genera in the Lecanorales that have not been placed with certainty into any family. These are:
Coronoplectrum – 1 sp.
Ivanpisutia – 1 sp.
Joergensenia – 1 sp.
Myochroidea – 4 spp.
Neopsoromopsis – 1 sp.
Notolecidea – 1 sp.
Psoromella – 1 sp.
Puttea – 3 spp.
Ramalea – 4 spp.
| Biology and health sciences | Lichens | Plants |
23592304 | https://en.wikipedia.org/wiki/16-bit%20computing | 16-bit computing | 16-bit microcomputers are microcomputers that use 16-bit microprocessors.
A 16-bit register can store 2^16 different values. The range of integer values that can be stored in 16 bits depends on the integer representation used. With the two most common representations, the range is 0 through 65,535 (2^16 − 1) for representation as an (unsigned) binary number, and −32,768 (−1 × 2^15) through 32,767 (2^15 − 1) for representation as two's complement. Since 2^16 is 65,536, a processor with 16-bit memory addresses can directly access 64 KB (65,536 bytes) of byte-addressable memory. If a system uses segmentation with 16-bit segment offsets, more can be accessed.
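These ranges and the wrap-around behavior can be checked directly; the following small Python sketch (illustrative, not from the source) reinterprets the same 16 bits as unsigned and as two's complement:

```python
import struct

bits = 0xFFFF
unsigned = bits                                          # 65535
signed, = struct.unpack("<h", struct.pack("<H", bits))   # -1

assert (2**16 - 1) == 65535      # unsigned maximum
assert -(2**15) == -32768        # two's-complement minimum
assert (2**15 - 1) == 32767      # two's-complement maximum

# Overflow wraps modulo 2**16: 32767 + 1 reinterprets as -32768.
wrap, = struct.unpack("<h", struct.pack("<H", (32767 + 1) & 0xFFFF))
assert wrap == -32768
```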
16-bit architecture
The MIT Whirlwind (c. 1951) was quite possibly the first-ever 16-bit computer. It was an unusual word size for the era; most systems used a six-bit character code and a word length that was some multiple of 6 bits. This changed with the effort to introduce ASCII, which used a 7-bit code and naturally led to the use of an 8-bit multiple which could store a single ASCII character or two binary-coded decimal digits.
The 16-bit word length thus became more common in the 1960s, especially on minicomputer systems. Early 16-bit computers (c. 1965–70) include the IBM 1130, the HP 2100, the Data General Nova, and the DEC PDP-11. Early 16-bit microprocessors, often modeled on one of the mini platforms, began to appear in the 1970s. Examples (c. 1973–76) include the five-chip National Semiconductor IMP-16 (1973), the two-chip NEC μCOM-16 (1974), the three-chip Western Digital MCP-1600 (1975), and the five-chip Toshiba T-3412 (1976).
Early single-chip 16-bit microprocessors (c. 1975–76) include the Panafacom MN1610 (1975), National Semiconductor PACE (1975), General Instrument CP1600 (1975), Texas Instruments TMS9900 (1976), Ferranti F100-L, and the HP BPC. Other notable 16-bit processors include the Intel 8086, the Intel 80286, the WDC 65C816, and the Zilog Z8000. The Intel 8088 was binary compatible with the Intel 8086, and was 16-bit in that its registers were 16 bits wide, and arithmetic instructions could operate on 16-bit quantities, even though its external bus was 8 bits wide.
16-bit processors have been almost entirely supplanted in the personal computer industry, and are less widely used than 32-bit (or 8-bit) CPUs in embedded applications.
16/32-bit Motorola 68000 and Intel 386SX
The Motorola 68000 is sometimes called 16-bit because of the way it handles basic arithmetic. The instruction set was based on 32-bit numbers and the internal registers were 32 bits wide, so by common definitions, the 68000 is a 32-bit design. Internally, 32-bit arithmetic is performed using two 16-bit operations, and this leads to some descriptions of the system as 16-bit, or "16/32".
Such solutions have a long history in the computer field, with various designs performing math even one bit at a time, known as "serial arithmetic", while most designs by the 1970s processed at least a few bits at a time. A common example is the Data General Nova, which was a 16-bit design that performed 16-bit math as a series of four 4-bit operations. 4-bits was the word size of a widely available single-chip ALU and thus allowed for inexpensive implementation. Using the definition being applied to the 68000, the Nova would be a 4-bit computer, or 4/16. Not long after the introduction of the Nova, a second version was introduced, the SuperNova, which included four of the 4-bit ALUs running in parallel to perform math 16 bits at a time and therefore offer higher performance. This was invisible to the user and the programs, which always used 16-bit instructions and data. In a similar fashion, later 68000-family members, starting with the Motorola 68020, had 32-bit ALUs.
One may also see references to systems being, or not being, 16-bit based on some other measure. A common one is when the address space is not the same size in bits as the internal registers. Most 8-bit CPUs of the 1970s fall into this category; the MOS 6502, Intel 8080, Zilog Z80 and most others had a 16-bit address bus providing 64 KB of address space. This also meant address manipulation required two instruction cycles; for this reason, most processors had special 8-bit addressing modes, such as the zero page, to improve speed. This sort of difference between internal register size and external address size remained in the 1980s, although often reversed, as memory costs of the era made a machine with 32-bit addressing, 2 or 4 GB, a practical impossibility. For example, the 68000 exposed only 24 bits of addressing on the DIP, limiting it to a still huge (for the era) 16 MB.
A similar analysis applies to Intel's 80286 CPU replacement, called the 386SX, which is a 32-bit processor with 32-bit ALU and internal 32-bit data paths with a 16-bit external bus and 24-bit addressing of the processor it replaced.
16-bit application
In the context of IBM PC compatible and Wintel platforms, a 16-bit application is any software written for MS-DOS, OS/2 1.x or early versions of Microsoft Windows which originally ran on the 16-bit Intel 8088 and Intel 80286 microprocessors. Such applications used a 20-bit or 24-bit segment or selector-offset address representation to extend the range of addressable memory locations beyond what was possible using only 16-bit addresses. Programs containing more than 2^16 bytes (65,536 bytes) of instructions and data therefore required special instructions to switch between their 64-kilobyte segments, increasing the complexity of programming 16-bit applications.
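A small illustrative sketch (an assumed helper, not from the source) of the 8086-style real-mode address computation, in which the 16-bit segment is shifted left four bits before the 16-bit offset is added:

```python
def real_mode_address(segment: int, offset: int) -> int:
    """Linear address from a 16-bit segment:offset pair as on the 8086:
    the segment is shifted left 4 bits and added to the offset, giving
    a 20-bit (1 MB) address space."""
    return ((segment << 4) + offset) & 0xFFFFF

# Classic example: 0xF000:0xFFF0, the 8086 reset vector.
assert real_mode_address(0xF000, 0xFFF0) == 0xFFFF0
# Many different pairs alias the same linear address:
assert real_mode_address(0x1234, 0x0010) == real_mode_address(0x1235, 0x0000)
```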
List of 16-bit CPUs
Angstrem
1801 series CPU
Data General
Nova
Eclipse
Digital Equipment Corporation
PDP-11 (for LSI-11, see Western Digital, below)
DEC J-11
DEC T-11
EnSilica
eSi-1600
Fairchild Semiconductor
9440 MICROFLAME
Ferranti
Ferranti F100-L
Ferranti F200-L
General Instrument
CP1600
Hewlett-Packard
HP 21xx/2000/1000/98xx/BPC
HP 3000
Honeywell
Honeywell Level 6/DPS 6
IBM
1130/1800
System/7
Series/1
System/36
Infineon
XE166 family
C166/C167 family
XC2000
Intel
Intel 8086/Intel 8088
Intel 80186/Intel 80188
Intel 80286
Intel MCS-96
Lockheed
MAC-16
MIL-STD-1750A
Motorola
Motorola 68HC12
Motorola 68HC16
National Semiconductor
IMP-16
PACE/INS8900
NEC
μCOM-16
NEC V20 and V30
Panafacom
MN1610
Renesas
Renesas (16-bit registers, 24-bit address space)
Ricoh
Ricoh 5A22 (WDC 65816 clone used in SNES)
Texas Instruments
Texas Instruments TMS9900
TI MSP430
Toshiba
T-3412
Western Design Center
WDC 65816/65802
Western Digital
MCP-1600
used in the DEC LSI-11
used in the Pascal MicroEngine
used in the WD16
Xerox
Alto
Zilog
Zilog Z8000
Zilog Z280
| Technology | Computer architecture concepts | null |
10507997 | https://en.wikipedia.org/wiki/Berkshire%20pig | Berkshire pig | The Berkshire is a British breed of pig. It originated in the English county of Berkshire, for which it is named. It is normally black, with some white on the snout, on the lower legs, and on the tip of the tail.
It is a rare breed in the United Kingdom. It has been exported to a number of countries including Australia, Japan, New Zealand and the United States, and is numerous in some of them.
History
The Berkshire is a traditional breed of the county of the same name. Until the eighteenth century it was a large tawny-coloured pig with lop ears, often with darker patches. In the late eighteenth and early nineteenth centuries it was substantially modified by cross-breeding with small black pigs imported from Asia.
Herds are still maintained in England by the Rare Breeds Survival Trust at Aldenham Country Park, Hertfordshire, and by the South of England Rare Breeds Centre in Kent. The Berkshire was listed as vulnerable in 2008; fewer than 300 breeding sows were known to exist at that time, but with the revived popularity of the breed through Japanese marketing of its meat as a "wagyu of pork", numbers have increased.
The Berkshire has been exported to many countries, and has become numerous in some of them; it is reported to the DAD-IS database of the Food and Agriculture Organization of the United Nations by twenty-three countries, in the Americas, Asia, continental Europe and Oceania.
Exports to the United States began in the early nineteenth century. The American Berkshire Association, established in 1875, was the first breed society for a pig breed; the first pig registered was a boar named Ace of Spades, reportedly bred by Queen Victoria.
The pigs were exported to Japan in the 1860s, and became numerous there. The Japanese Kagoshima Berkshire, which apparently derives from two British Berkshire pigs imported to Japan in the 1930s, is considered a separate breed; the meat may be marketed as Kurobuta pork, and can command a premium price.
Characteristics
The Berkshire is of medium size. It is black with six white markings: four white socks, a white splash on the snout, and a white tip to the tail. It is prick-eared.
Use
The Berkshire is reared for pork. Although the meat has a relatively low pH, and high pH is normally correlated with consumer satisfaction, Berkshire pork was highly rated in taste tests in the United States.
| Biology and health sciences | Pigs | Animals |
10511615 | https://en.wikipedia.org/wiki/Gossypium%20arboreum | Gossypium arboreum | Gossypium arboreum, commonly called tree cotton, is a species of cotton native to the Indian subcontinent and other tropical and subtropical regions of the Old World. There is evidence of its cultivation along the Indus River as long ago as the Indus Valley Civilisation, for the production of cotton textiles. The shrub was included in Linnaeus's Species Plantarum published in 1753. The holotype, also supplied by Linnaeus, is now in the Linnean Herbarium at the Swedish Museum of Natural History.
Description
Tree cotton is a shrub that grows to about one to two meters tall. Its branches, which are purple in color, are covered with fine hairs. Stipules are present at the leaf base and they are linear to lanceolate in shape and sometimes falcate (i.e. sickle-shaped). The leaves are attached to the stem by a 1.5 to 10 cm petiole. The blades are ovate to orbicular in shape and have five to seven lobes, making them superficially resemble a maple leaf. The lobes are linear to lanceolate, and often a tooth is present in the sinus. Glands are present along the midrib or occasionally on the adjacent nerves. The leaves are glabrescent, meaning the pubescence is lost with age, but when it is present on young leaves, it is both stellate (i.e. star-shaped) and simple.
The flowers are set on short pedicels (i.e. flower stalks). An epicalyx is present, which is a series of subtending bracts that resemble sepals. Its large, ovate segments are dentate (i.e. toothed along the margins), though sometimes only very slightly so. They are cordate (i.e. heart-shaped) at the base and acute at the apex. The true calyx is small and cupular in shape, with five subtle dentations. The corolla is a pale yellow in colour, sometimes with a purple centre, and occasionally entirely purple. The staminal tube bears the anthers and is 1.5 to 2 cm in length. The fruit is a three- or four-celled capsule, ovoid or oblong in shape and glabrous (i.e. hairless). The surface is pitted and a beak is present at the terminal end. The seeds within are globular and are covered in long white cotton.
Gossypium arboreum var. neglecta, locally known as "Phuti karpas", was the variant used to make Dhaka muslin in Bengal, now Bangladesh. The variant could only be grown in an area south of Dhaka, along the banks of the Meghna River. It could be spun so that individual threads could maintain tensile strength at counts higher than any other variant of cotton.
| Biology and health sciences | Malvales | Plants |
12784278 | https://en.wikipedia.org/wiki/North%20African%20elephant | North African elephant | The North African elephant (Loxodonta africana pharaohensis) is an extinct subspecies of the African bush elephant (Loxodonta africana), or possibly a separate elephant species, that existed in North Africa, north of the Sahara, until it died out in Roman times. These were the famous war elephants used by Carthage in the Punic Wars, their conflict with the Roman Republic. Although the subspecies has been formally described, it has not been widely recognized by taxonomists. Other names for this animal include the North African forest elephant, Carthaginian elephant, and Atlas elephant. Its natural range probably extended along the coast of the Red Sea, in what is now Egypt, Sudan, and Eritrea, but it may have extended further across northern Africa.
Description
Carthaginian frescoes and coins minted by whoever controlled North Africa at various times show very small elephants with the large ears and concave back typical of modern African elephants. Contemporary writers noted that the North African elephant was smaller than the Indian elephant. This suggests that it was smaller than extant African bush elephants (L. a. africana), possibly similar in size to the modern African forest elephant (L. cyclotis).
History
The North African elephant was a significant animal in Nubian culture. It was depicted on the walls of temples and on Meroitic lamps. Kushite kings also utilized war elephants, which are believed to have been kept and trained in the "Great Enclosure" at Musawwarat es-Sufra. The Kingdom of Kush provided these war elephants to the Egyptians, Ptolemies and Syrians.
After they conquered Sicily in 242 BC, the Romans tried to capture some specimens that had been left behind in the middle of the island by the Carthaginians, but failed in the endeavor. The elephants with which Hannibal crossed the Pyrenees and the Alps in order to invade Italy during the Second Punic War (218–201 BC) belonged to this group, with the exception of Hannibal's personal animal, Surus (meaning "the Syrian," or possibly "One-Tusker"). This individual, according to his documented name and large size, may have been a Syrian elephant (Elephas maximus asurus), which was possibly a subspecies of the Asian elephant that became extinct shortly after Hannibal invaded Italy, but before the extinction of the North African elephant.
The North African elephant was also used by the Ptolemaic dynasty of Egypt. Writing in the 2nd century BC, Polybius (The Histories; 5.84) described their inferiority in battle against the larger Indian elephants used by the Seleucid kings. A surviving Ptolemaic inscription enumerates three types of war elephant: the "Troglodytic" (probably Libyan), the "Ethiopian", and the "Indian". The Ptolemaic king prides himself on being the first to tame the Ethiopian elephants, a stock which could be identical to one of the two extant African species.
During the reign of Augustus, about 3,500 elephants were killed in Roman circus games, and this prolonged use as a beast in games of baiting, along with hunting, drove the species to extinction by the 4th century AD.
Taxonomic uncertainty
Ansell (1971) classified L. a. pharaohensis as a distinct taxon of African elephant closely related to the African forest elephant. However, this has not been universally accepted. If not a distinct species or subspecies, the small size of the North African elephant could be explained by its being a small population of African bush elephants, or by the capture and use of young African bush elephant individuals.
Given the relatively recent date of its disappearance, the status of this population can probably be resolved through ancient DNA sequence analyses, if specimens of definite North African origin can be located and examined. Remains dating to the time of the Roman Republic from Tetouan, Morocco, identified as those of an elephant by collagen fingerprinting, likely belong to this taxon.
| Biology and health sciences | Proboscidea | Animals |
3506480 | https://en.wikipedia.org/wiki/Foliation%20%28geology%29 | Foliation (geology) | Foliation in geology refers to repetitive layering in metamorphic rocks. Each layer can be as thin as a sheet of paper, or over a meter in thickness. The word comes from the Latin , meaning "leaf", and refers to the sheet-like planar structure. It is caused by shearing forces (pressures pushing different sections of the rock in different directions), or differential pressure (higher pressure from one direction than in others). The layers form parallel to the direction of the shear, or perpendicular to the direction of higher pressure. Nonfoliated metamorphic rocks are typically formed in the absence of significant differential pressure or shear. Foliation is common in rocks affected by the regional metamorphic compression typical of areas of mountain belt formation (orogenic belts).
More technically, foliation is any penetrative planar fabric present in metamorphic rocks. Rocks exhibiting foliation include the standard sequence formed by the prograde metamorphism of mudrocks; slate, phyllite, schist and gneiss. The slaty cleavage typical of slate is due to the preferred orientation of microscopic phyllosilicate crystals. In gneiss, the foliation is more typically represented by compositional banding due to segregation of mineral phases. Foliated rock is also known as S-tectonite in sheared rock masses.
Examples include the bands in gneiss (gneissic banding), a preferred orientation of planar large mica flakes in schist (schistosity), the preferred orientation of small mica flakes in phyllite (with its planes having a silky sheen, called phyllitic luster – the Greek word, phyllon, also means "leaf"), the extremely fine grained preferred orientation of clay flakes in slate (called "slaty cleavage"), and the layers of flattened, smeared, pancake-like clasts in metaconglomerate.
Formation mechanisms
Foliation is usually formed by the preferred orientation of minerals within a rock.
Usually, this is the result of some physical force and its effect on the growth of minerals. The planar fabric of a foliation typically forms at right angles to the maximum principal stress direction. In sheared zones, however, planar fabric within a rock may not be directly perpendicular to the principal stress direction due to rotation, mass transport, and shortening.
Foliation may be formed by realignment of micas and clays via physical rotation of the minerals within the rock. Often this foliation is associated with diagenetic metamorphism and low-grade burial metamorphism. Foliation may parallel original sedimentary bedding, but more often is oriented at some angle to it.
The growth of platy minerals, typically of the mica group, is usually a result of prograde metamorphic reactions during deformation. Often, retrograde metamorphism will not form a foliation because the unroofing of a metamorphic belt is not accompanied by significant compressive stress. Thermal metamorphism in the aureole of a granite is also unlikely to result in the growth of mica in a foliation, although the growth of new minerals may overprint existing foliation(s).
Alignment of tabular minerals in metamorphic rocks, igneous rocks and intrusive rocks may form a foliation. Typical examples of metamorphic rocks include porphyroblastic schists where large, oblate minerals form an alignment either due to growth or rotation in the groundmass.
Igneous rocks can become foliated by alignment of cumulate crystals during convection in large magma chambers, especially ultramafic intrusions, and typically plagioclase laths. Granite may form foliation due to frictional drag on viscous magma by the wall rocks. Lavas may preserve a flow foliation, or even compressed eutaxitic texture, typically in highly viscous felsic agglomerate, welded tuff and pyroclastic surge deposits.
Metamorphic differentiation, typical of gneisses, is caused by chemical and compositional banding within the metamorphic rock mass. Usually, this represents the protolith chemistry, which forms distinct mineral assemblages. However, compositional banding can be the result of nucleation processes which cause chemical and mineralogical differentiation into bands. This typically follows the same principle as mica growth, perpendicular to the principal stress.
Metamorphic differentiation can be present at angles to protolith compositional banding.
Crenulation cleavage and oblique foliation are particular types of foliation.
Interpretation
Foliation, as it forms generally perpendicular to the direction of principal stress, records the direction of shortening. This is related to the axis of folds, which generally form an axial-planar foliation within their axial regions.
Measurement of the intersection between a fold's axial plane and a surface on the fold will provide the fold plunge. If a foliation does not match the observed plunge of a fold, it is likely associated with a different deformation event.
Foliation in areas of shearing, and within the plane of thrust faults, can provide information on the transport direction or sense of movement on the thrust or shear. Generally, the acute intersection angle shows the direction of transport. Foliations typically bend or curve into a shear, which provides the same information, if it is of a scale which can be observed.
Foliations, in a regional sense, will tend to curve around rigid, incompressible bodies such as granite. Thus, they are not always 'planar' in the strictest sense and may violate the rule of being perpendicular to the regional stress field, due to local influences. This is a megascopic version of what may occur around porphyroblasts. Often, fine observation of foliations on outcrop, hand specimen and on the microscopic scale complements observations on a map or regional scale.
Description
When describing a foliation it is useful to note
the mineralogy of the folia; this can provide information on the conditions of formation
the mineralogy in intrafolial areas
foliation spacing
any porphyroblasts or minerals associated with the foliation and whether they overprint it or are cut by it
whether it is planar, undulose, vague or well developed
its orientation in space, as strike and dip, or dip and dip direction
its relationship to other foliations, to bedding and any folding
measure intersection lineations
Following such a methodology allows eventual correlation of style, metamorphic grade, and intensity throughout a region, and of their relationship to faults, shears, structures and mineral assemblages.
Engineering considerations
In geotechnical engineering, a foliation plane may introduce anisotropy of stress, which is a vital consideration for geotechnical engineers. At some point, this foliation may form a discontinuity that may greatly influence the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction.
| Physical sciences | Structural geology | Earth science |
3507365 | https://en.wikipedia.org/wiki/Solar%20panel | Solar panel | A solar panel is a device that converts sunlight into electricity by using photovoltaic (PV) cells. PV cells are made of materials that produce excited electrons when exposed to light. These electrons flow through a circuit and produce direct current (DC) electricity, which can be used to power various devices or be stored in batteries. Solar panels are also known as solar cell panels, solar electric panels, or PV modules.
Solar panels are usually arranged in groups called arrays or systems. A photovoltaic system consists of one or more solar panels, an inverter that converts DC electricity to alternating current (AC) electricity, and sometimes other components such as controllers, meters, and trackers. Most panels are in solar farms or rooftop solar panels which supply the electricity grid.
Some advantages of solar panels are that they use a renewable and clean source of energy, reduce greenhouse gas emissions, and lower electricity bills. Some disadvantages are that they depend on the availability and intensity of sunlight, require cleaning, and have high initial costs. Solar panels are widely used for residential, commercial, and industrial purposes, as well as in space, often together with batteries.
History
In 1839, the ability of some materials to create an electrical charge from light exposure was first observed by the French physicist Edmond Becquerel. Though these initial solar panels were too inefficient for even simple electric devices, they were used as an instrument to measure light.
The observation by Becquerel was not replicated again until 1873, when the English electrical engineer Willoughby Smith discovered that the charge could be caused by light hitting selenium. After this discovery, William Grylls Adams and Richard Evans Day published "The action of light on selenium" in 1876, describing the experiment they used to replicate Smith's results.
In 1881, the American inventor Charles Fritts created the first commercial solar panel, which was reported by Fritts as "continuous, constant and of considerable force not only by exposure to sunlight but also to dim, diffused daylight". However, these solar panels were very inefficient, especially compared to coal-fired power plants.
In 1939, Russell Ohl created the solar cell design that is used in many modern solar panels. He patented his design in 1941. In 1954, this design was first used by Bell Labs to create the first commercially viable silicon solar cell.
Solar panel installers saw significant growth between 2008 and 2013. Due to that growth, many installers had projects on less-than-ideal rooftops and had to find solutions for shaded roofs and orientation difficulties. This challenge was initially addressed by the re-popularization of micro-inverters and later the invention of power optimizers.
Solar panel manufacturers partnered with micro-inverter companies to create AC modules and power optimizer companies partnered with module manufacturers to create smart modules. In 2013 many solar panel manufacturers announced and began shipping their smart module solutions.
Theory and construction
Photovoltaic modules consist of a large number of solar cells and use light energy (photons) from the Sun to generate electricity through the photovoltaic effect. Most modules use wafer-based crystalline silicon cells or thin-film cells. The structural (load carrying) member of a module can be either the top layer or the back layer. Cells must be protected from mechanical damage and moisture. Most modules are rigid, but semi-flexible ones based on thin-film cells are also available. The cells are usually connected electrically in series, one to another to the desired voltage, and then in parallel to increase current. The power (in watts) of the module is the voltage (in volts) multiplied by the current (in amperes), and depends both on the amount of light and on the electrical load connected to the module. The manufacturing specifications on solar panels are obtained under standard conditions, which are usually not the true operating conditions the solar panels are exposed to on the installation site.
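As a back-of-envelope illustration of the series/parallel arithmetic just described (all numbers below are assumptions, not from the source):

```python
# Cells in series add voltage; parallel strings add current.
cell_voltage = 0.6        # volts per cell at the operating point (assumed)
cell_current = 9.0        # amperes per string (assumed)
cells_in_series = 60
parallel_strings = 1

module_voltage = cell_voltage * cells_in_series       # 36 V
module_current = cell_current * parallel_strings      # 9 A
module_power = module_voltage * module_current        # 324 W
print(f"{module_power:.0f} W at {module_voltage:.0f} V, "
      f"{module_current:.0f} A")
```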
A PV junction box is attached to the back of the solar panel and functions as its output interface. External connections for most photovoltaic modules use MC4 connectors to facilitate easy weatherproof connections to the rest of the system. A USB power interface can also be used. Solar panels also use metal frames consisting of racking components, brackets, reflector shapes, and troughs to better support the panel structure.
Cell connection techniques
Solar modular cells need to be connected together to form the module, with front electrodes blocking the solar cell front optical surface area slightly. To maximize frontal surface area available for sunlight and improve solar cell efficiency, manufacturers use varying rear electrode solar cell connection techniques:
Passivated emitter rear contact (PERC) adds a polymer film to capture light
Tunnel oxide passivated contact (TOPCon) adds an oxidation layer to the PERC film to capture more light
Interdigitated back contact (IBC)
Arrays of PV modules
A single solar module can produce only a limited amount of power; most installations contain multiple modules adding their voltages or currents. A photovoltaic system typically includes an array of photovoltaic modules, an inverter, a battery pack for energy storage, a charge controller, interconnection wiring, circuit breakers, fuses, disconnect switches, voltage meters, and optionally a solar tracking mechanism. Equipment is carefully selected to optimize energy output and storage, reduce power transmission losses, and convert from direct current to alternating current.
Smart solar modules
Smart modules are different from traditional solar panels because the power electronics embedded in the module offers enhanced functionality such as panel-level maximum power point tracking, monitoring, and enhanced safety. Power electronics attached to the frame of a solar module, or connected to the photovoltaic circuit through a connector, are not properly considered smart modules.
Several companies have begun incorporating into each PV module various embedded power electronics such as:
Maximum power point tracking (MPPT) power optimizers, a DC-to-DC converter technology developed to maximize the power harvest from solar photovoltaic systems by compensating for shading effects, wherein a shadow falling on a section of a module causes the electrical output of one or more strings of cells in the module to fall to near zero, but without making the output of the entire module fall to zero (a minimal tracking-loop sketch follows this list)
Solar performance monitors for data and fault detection
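A minimal sketch of the perturb-and-observe idea behind such trackers (illustrative only; `measure_power` and `set_voltage` are hypothetical callbacks, and no vendor's actual algorithm is implied):

```python
def perturb_and_observe(measure_power, set_voltage, v0, dv=0.1, steps=100):
    """Nudge the operating voltage; keep the direction if power rose,
    reverse it if power fell.  A bare-bones MPPT loop."""
    v, direction = v0, +1
    set_voltage(v)
    last_power = measure_power()
    for _ in range(steps):
        v += direction * dv
        set_voltage(v)
        power = measure_power()
        if power < last_power:
            direction = -direction    # overshot the maximum; back up
        last_power = power
    return v
```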
Technology
Most solar modules are currently produced from crystalline silicon (c-Si) solar cells made of polycrystalline or monocrystalline silicon. In 2021, crystalline silicon accounted for 95% of worldwide PV production, while the rest of the overall market is made up of thin-film technologies using cadmium telluride (CdTe), copper indium gallium selenide (CIGS) and amorphous silicon .
Emerging, third-generation solar technologies use advanced thin-film cells. They produce a relatively high-efficiency conversion for a lower cost compared with other solar technologies. Also, high-cost, high-efficiency, and close-packed rectangular multi-junction (MJ) cells are usually used in solar panels on spacecraft, as they offer the highest ratio of generated power per kilogram lifted into space. MJ-cells are compound semiconductors and made of gallium arsenide (GaAs) and other semiconductor materials. Another emerging PV technology using MJ-cells is concentrator photovoltaics (CPV).
Thin film
Mounting and tracking
Ground
Large utility-scale solar power plants frequently use ground-mounted photovoltaic systems. Their solar modules are held in place by racks or frames that are attached to ground-based mounting supports. Ground based mounting supports include:
Pole mounts, which are driven directly into the ground or embedded in concrete.
Foundation mounts, such as concrete slabs or poured footings
Ballasted footing mounts, such as concrete or steel bases that use weight to secure the solar module system in position and do not require ground penetration. This type of mounting system is well suited for sites where excavation is not possible such as capped landfills and simplifies decommissioning or relocation of solar module systems.
Vertical bifacial solar array
Vertical bifacial solar cells are oriented towards east and west to catch the sun's irradiance more efficiently in the morning and evening. Applications include agrivoltaics, solar fencing, highway and railroad noise dampeners and barricades.
Roof
Roof-mounted solar power systems consist of solar modules held in place by racks or frames attached to roof-based mounting supports. Roof-based mounting supports include:
Rail mounts, which are attached directly to the roof structure and may use additional rails for attaching the module racking or frames.
Ballasted footing mounts, such as concrete or steel bases that use weight to secure the panel system in position and do not require through penetration. This mounting method allows for decommissioning or relocation of solar panel systems with no adverse effect on the roof structure.
All wiring connecting adjacent solar modules to the energy harvesting equipment must be installed according to local electrical codes and should be run in a conduit appropriate for the climate conditions.
Solar canopy
Solar canopies are solar arrays installed on top of a traditional canopy, such as a parking lot canopy, carport, gazebo, pergola, or patio cover.
Benefits include making the most of limited space in urban areas while also providing shade for cars; the energy produced can be used to power electric vehicle (EV) charging stations.
Portable
Portable solar panels can supply enough current to charge devices such as phones or radios via a USB port, or to charge a power bank, for example.
These panels are typically highly flexible, durable and waterproof, which makes them well suited for travel or camping.
Tracking
Solar trackers increase the energy produced per module at the cost of mechanical complexity and increased need for maintenance. They sense the direction of the Sun and tilt or rotate the modules as needed for maximum exposure to the light.
Alternatively, fixed racks can hold modules stationary throughout the day at a given tilt (zenith angle) and facing a given direction (azimuth angle). Tilt angles equivalent to an installation's latitude are common. Some systems may also adjust the tilt angle based on the time of year.
On the other hand, east- and west-facing arrays (covering an east–west facing roof, for example) are commonly deployed. Even though such installations will not produce the maximum possible average power from the individual solar panels, the panels are now usually cheaper than a tracking mechanism, and such arrays can provide more economically valuable power during morning and evening peak demand than north- or south-facing systems.
Concentrator
Some special solar PV modules include concentrators in which light is focused by lenses or mirrors onto smaller cells. This enables the cost-effective use of highly efficient, but expensive cells (such as gallium arsenide) with the trade-off of using a higher solar exposure area. Concentrating the sunlight can also raise the efficiency to around 45%.
Light capture
The amount of light absorbed by a solar cell depends on the angle of incidence of whatever direct sunlight hits it. This is partly because the amount falling on the panel is proportional to the cosine of the angle of incidence, and partly because at high angle of incidence more light is reflected. To maximize total energy output, modules are often oriented to face south (in the Northern Hemisphere) or north (in the Southern Hemisphere) and tilted to allow for the latitude. Solar tracking can be used to keep the angle of incidence small.
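As a rough illustration of the cosine dependence described above, the following sketch computes the direct-beam irradiance reaching a panel at several angles of incidence. The 1,000 W/m2 figure is an assumed direct normal irradiance, and the extra reflection losses at high incidence are deliberately ignored.

```python
import math

def direct_irradiance_on_panel(dni, incidence_deg):
    """Direct-beam power per m^2 reaching the panel surface.
    dni: direct normal irradiance in W/m^2; incidence is measured
    from the panel normal (0 deg = sun directly facing the panel)."""
    theta = math.radians(incidence_deg)
    return max(0.0, dni * math.cos(theta))

for angle in (0, 30, 60):
    print(angle, "deg ->", round(direct_irradiance_on_panel(1000, angle)), "W/m^2")
# 0 -> 1000, 30 -> 866, 60 -> 500 (cosine law only)
```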
Solar panels are often coated with an anti-reflective coating, which is one or more thin layers of substances with refractive indices intermediate between that of silicon and that of air. This causes destructive interference in the reflected light, diminishing the amount. Photovoltaic manufacturers have been working to decrease reflectance with improved anti-reflective coatings or with textured glass.
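For a single quarter-wave layer, the textbook design rule (stated here as general thin-film optics background, not a claim about any specific product) chooses the coating index as the geometric mean of the indices on either side, and the thickness as a quarter of the design wavelength inside the coating. The silicon index of about 3.9 and the 600 nm design wavelength below are illustrative values:

```latex
n_{\mathrm{AR}} = \sqrt{n_{\mathrm{air}}\, n_{\mathrm{Si}}} \approx \sqrt{1 \times 3.9} \approx 2.0,
\qquad
d = \frac{\lambda}{4\, n_{\mathrm{AR}}} \approx \frac{600\ \mathrm{nm}}{4 \times 2.0} = 75\ \mathrm{nm}
```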
Power curve
In general, with individual solar panels, if too little current is drawn the power is not maximised, and if too much current is drawn the voltage collapses. The optimum current draw is roughly proportional to the amount of sunlight striking the panel. Solar panel capacity is specified by the MPP (maximum power point) value of the panel in full sunlight (see the sketch below).
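The following sketch makes the trade-off concrete by sweeping the current drawn from a toy panel model. The inverse I–V curve shape and the 40 V / 8 A ratings are assumptions chosen only to show the shape of the power curve, not data for a real product.

```python
# Sweep the current drawn from a toy panel model: too little current
# wastes headroom, too much makes the voltage sag and power fall.
V_OC, I_SC = 40.0, 8.0   # hypothetical open-circuit voltage / short-circuit current

def voltage_at(i):
    """Toy inverse I-V curve: voltage sags as drawn current nears I_SC."""
    return V_OC * (1.0 - i / I_SC) ** 0.125

best = max((i / 100.0 for i in range(0, 800)), key=lambda i: i * voltage_at(i))
for i in (1.0, best, 7.9):
    print(f"draw {i:.2f} A -> {voltage_at(i):.1f} V, {i * voltage_at(i):.0f} W")
# Output shows ~39 W at 1 A, a peak of ~216 W near 7.1 A (the MPP),
# then falling power as the voltage sags toward short circuit.
```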
Inverters
Solar inverters convert the DC power provided by panels to AC power.
The MPP (maximum power point) of a solar panel consists of an MPP voltage and an MPP current. Performing maximum power point tracking (MPPT), a solar inverter samples the output (I–V curve) from the solar cells and applies the proper electrical load to obtain maximum power.
An AC (alternating current) solar panel has a small DC to AC microinverter on the back and produces AC power with no external DC connector. AC modules are defined by Underwriters Laboratories as the smallest and most complete system for harvesting solar energy.
Micro-inverters work independently to enable each panel to contribute its maximum possible output for a given amount of sunlight, but can be more expensive.
Module interconnection
Module electrical connections are made with conducting wires that take the current off the modules and are sized according to the current rating and fault conditions, and sometimes include in-line fuses.
Panels are typically connected in series of one or more panels to form strings to achieve a desired output voltage, and strings can be connected in parallel to provide the desired current capability (amperes) of the PV system.
In string connections the voltages of the modules add, but the current is determined by the lowest performing panel. This is known as the "Christmas light effect". In parallel connections the voltages will be the same, but the currents add. Arrays are connected up to meet the voltage requirements of the inverters and to not greatly exceed the current limits.
Blocking and bypass diodes may be incorporated within the module or used externally to deal with partial array shading, in order to maximize output. For series connections, bypass diodes are placed in parallel with modules to allow current to bypass shaded modules which would otherwise severely limit the current. For paralleled connections, a blocking diode may be placed in series with each module's string to prevent current flowing backwards through shaded strings thus short-circuiting other strings.
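A minimal numeric illustration of these rules, using hypothetical module ratings of 35 V and 9 A at the maximum power point; the shaded-module figures are invented for the example:

```python
# Illustrative string arithmetic: voltages add in series, currents add in
# parallel, and a series string is throttled to its weakest module's current.
module_vmpp, module_impp = 35.0, 9.0        # hypothetical module ratings

# Unshaded string of 10 modules in series:
print(10 * module_vmpp, "V at", module_impp, "A")      # 350 V at 9 A

# One module shaded to 3 A throttles the whole series string
# (the "Christmas light effect"):
currents = [9.0] * 9 + [3.0]
print(10 * module_vmpp, "V at", min(currents), "A")    # 350 V at 3 A

# A bypass diode lets the string skip the shaded module instead,
# trading its voltage for the full current of the remaining nine:
print(9 * module_vmpp, "V at", 9.0, "A")               # 315 V at 9 A

# Two healthy strings in parallel: same voltage, currents add.
print(10 * module_vmpp, "V at", 2 * module_impp, "A")  # 350 V at 18 A
```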
Connectors
Outdoor solar panels usually include MC4 connectors, automotive solar panels may include an auxiliary power outlet and/or USB adapter and indoor panels may have a microinverter.
Efficiency
Each module is rated by its DC output power under standard test conditions (STC), so field output power may vary. Power typically ranges from 100 to 365 watts (W). Given the same rated output, the efficiency of a module determines its area: an 8% efficient 230 W module will have twice the area of a 16% efficient 230 W module. Some commercially available solar modules exceed 24% efficiency; an earlier benchmark for new commercial products was around 21.5%. Module efficiencies are typically lower than the efficiencies of their cells in isolation. The most efficient mass-produced solar modules have power density values of up to 175 W/m2 (16.22 W/ft2).
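Under the 1,000 W/m2 STC irradiance, the relationship between rated power, efficiency and area can be made explicit; the arithmetic below simply reproduces the 230 W comparison from the text:

```latex
A = \frac{P_{\mathrm{rated}}}{\eta \times 1000\ \mathrm{W/m^2}},\qquad
A_{8\%} = \frac{230\ \mathrm{W}}{0.08 \times 1000\ \mathrm{W/m^2}} \approx 2.9\ \mathrm{m^2},\qquad
A_{16\%} = \frac{230\ \mathrm{W}}{0.16 \times 1000\ \mathrm{W/m^2}} \approx 1.4\ \mathrm{m^2}
```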
The current versus voltage curve of a module provides useful information about its electrical performance. Manufacturing processes often cause differences in the electrical parameters of different photovoltaic modules, even in cells of the same type. Therefore, only experimental measurement of the I–V curve allows the electrical parameters of a photovoltaic device to be established accurately. This measurement provides highly relevant information for the design, installation and maintenance of photovoltaic systems. Generally, the electrical parameters of photovoltaic modules are measured in indoor tests. However, outdoor testing has important advantages, such as requiring no expensive artificial light source, having no sample-size limitation, and providing more homogeneous sample illumination.
Capacity factor of solar panels is limited primarily by geographic latitude and varies significantly depending on cloud cover, dust, day length and other factors. In the United Kingdom, seasonal capacity factor ranges from 2% (December) to 20% (July), with average annual capacity factor of 10–11%, while in Spain the value reaches 18%.
Globally, capacity factor for utility-scale PV farms was 16.1% in 2019.
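As a worked illustration of what such capacity factors mean, consider a hypothetical 4 kW rooftop system operating at the UK average of about 10%:

```latex
\mathrm{CF} = \frac{E_{\mathrm{actual}}}{P_{\mathrm{rated}} \times 8760\ \mathrm{h}},\qquad
E_{\mathrm{annual}} \approx 4\ \mathrm{kW} \times 8760\ \mathrm{h} \times 0.10 \approx 3500\ \mathrm{kWh}
```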
Overheating is among the most important factors limiting the efficiency of a solar panel.
Radiation-dependent efficiency
Depending on construction, photovoltaic modules can produce electricity from a range of light frequencies, but usually cannot cover the entire solar radiation range (specifically, ultraviolet, infrared and low or diffused light). Hence, much of the incident sunlight energy is wasted by solar modules, and they can give far higher efficiencies if illuminated with monochromatic light. Another design concept is therefore to split the light into six to eight different wavelength ranges, each producing a different color of light, and direct each beam onto a cell tuned to that range.
Performance and degradation
Module performance is generally rated under standard test conditions (STC): irradiance of 1,000 W/m2, solar spectrum of AM 1.5 and module temperature at 25 °C. The actual voltage and current output of the module changes as lighting, temperature and load conditions change, so there is never one specific voltage at which the module operates. Performance varies depending on geographic location, time of day, the day of the year, amount of solar irradiance, direction and tilt of modules, cloud cover, shading, soiling, state of charge, and temperature. Performance of a module or panel can be measured at different time intervals with a DC clamp meter or shunt and logged, graphed, or charted with a chart recorder or data logger.
For optimum performance, a solar panel needs to be made of similar modules oriented in the same direction perpendicular to direct sunlight. Bypass diodes are used to circumvent broken or shaded panels and optimize output. These bypass diodes are usually placed along groups of solar cells to create a continuous flow.
Electrical characteristics include nominal power (PMAX, measured in W), open-circuit voltage (VOC), short-circuit current (ISC, measured in amperes), maximum power voltage (VMPP), maximum power current (IMPP), peak power, (watt-peak, Wp), and module efficiency (%).
Open-circuit voltage or VOC is the maximum voltage the module can produce when not connected to an electrical circuit or system. VOC can be measured with a voltmeter directly on an illuminated module's terminals or on its disconnected cable.
The peak power rating, Wp, is the maximum output under standard test conditions (not the maximum possible output). Typical modules are rated from as low as 75 W to as high as 600 W, depending on their size and efficiency. At the time of testing, modules are binned according to their test results; a typical manufacturer might rate modules in 5 W increments, at +/−3%, +/−5%, +3/−0% or +5/−0%.
Influence of temperature
The performance of a photovoltaic (PV) module depends on the environmental conditions, mainly on the global incident irradiance G in the plane of the module. However, the temperature T of the p–n junction also influences the main electrical parameters: the short-circuit current ISC, the open-circuit voltage VOC and the maximum power Pmax. In general, VOC shows a significant inverse correlation with T, while for ISC this correlation is direct but weaker, so the increase in ISC does not compensate for the decrease in VOC. As a consequence, Pmax decreases as T increases. This correlation between the power output of a solar cell and the working temperature of its junction depends on the semiconductor material, and is due to the influence of T on the concentration, lifetime, and mobility of the intrinsic carriers (electrons and holes) inside the photovoltaic cell.
Temperature sensitivity is usually described by temperature coefficients, each of which expresses the derivative of the parameter to which it refers with respect to the junction temperature. The values of these parameters can be found in the data sheet of any photovoltaic module; they are the following:
- β: VOC variation coefficient with respect to T, given by ∂VOC/∂T.
- α: Coefficient of variation of ISC with respect to T, given by ∂ISC/∂T.
- δ: Coefficient of variation of Pmax with respect to T, given by ∂Pmax/∂T.
Techniques for estimating these coefficients from experimental data can be found in the literature.
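A minimal sketch of how these data-sheet coefficients are typically applied, using an illustrative 300 W module and a power coefficient of −0.40%/°C (a plausible order of magnitude for crystalline silicon, assumed here rather than taken from any specific data sheet):

```python
# Estimate output power at an elevated junction temperature from the
# STC rating and the data-sheet power coefficient delta (illustrative values).
P_STC = 300.0        # W, rated power at 25 degC (hypothetical module)
DELTA = -0.0040      # 1/degC, assumed power coefficient (delta / P_STC)

def power_at(t_junction_c):
    """Linear first-order correction around the 25 degC STC reference."""
    return P_STC * (1.0 + DELTA * (t_junction_c - 25.0))

print(f"{power_at(25):.0f} W at STC")       # 300 W
print(f"{power_at(65):.0f} W at 65 degC")   # 252 W: a 40 degC rise costs ~16%
```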
Degradation
The ability of solar modules to withstand damage by rain, hail, heavy snow load, and cycles of heat and cold varies by manufacturer, although most solar panels on the U.S. market are UL listed, meaning they have gone through testing to withstand hail.
Potential-induced degradation (PID) is a performance degradation of crystalline photovoltaic modules caused by so-called stray currents. This effect may cause power losses of up to 30%.
Advancements in photovoltaic technology have brought about the process of "doping" the silicon substrate to lower the activation energy, thereby making the panel more efficient in converting photons to retrievable electrons.
Dopants such as boron (p-type) are introduced into the semiconductor crystal in order to create donor and acceptor energy levels substantially closer to the valence and conduction bands. The addition of boron impurity lowers the activation energy roughly twenty-fold, from 1.12 eV to 0.05 eV. Since the potential difference (EB) is so low, the boron ionizes thermally at room temperature. This frees charge carriers into the conduction and valence bands, thereby allowing greater conversion of photons to electrons.
The power output of a photovoltaic (PV) device decreases over time. This decrease is due to its exposure to solar radiation as well as other external conditions. The degradation index, defined as the annual percentage of output power loss, is a key factor in determining the long-term production of a photovoltaic plant. To estimate this degradation, the percentage decrease associated with each of the electrical parameters must be determined. The individual degradation of a photovoltaic module can significantly influence the performance of a complete string. Furthermore, not all modules in the same installation lose performance at exactly the same rate. Given a set of modules exposed to long-term outdoor conditions, the individual degradation of the main electrical parameters and the increase in their dispersion must both be considered. As each module tends to degrade differently, the modules' behavior diverges over time, negatively affecting the overall performance of the plant.
There are several studies in the literature dealing with the power degradation analysis of modules based on different photovoltaic technologies. According to a recent study, the degradation of crystalline silicon modules is very regular, oscillating between 0.8% and 1.0% per year.
Thin-film photovoltaic modules, by contrast, show an initial period of strong degradation (which can last several months and even up to two years), followed by a later stage in which degradation stabilizes at a rate comparable to that of crystalline silicon. Strong seasonal variations are also observed in such thin-film technologies because the influence of the solar spectrum is much greater. For modules of amorphous silicon, micromorph silicon or cadmium telluride, annual degradation rates in the first years are between 3% and 4%. Other technologies, such as CIGS, show much lower degradation rates, even in those early years. A sketch of how such annual rates compound is given below.
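Treating the quoted annual rates as simple geometric decay is a modelling assumption, but it shows how they compound over a module's life:

```python
# Project the remaining output fraction after n years of compound degradation,
# using the rate ranges quoted above.
def remaining_fraction(annual_rate, years):
    return (1.0 - annual_rate) ** years

print(remaining_fraction(0.008, 25))   # c-Si at 0.8 %/yr -> ~0.82 after 25 years
print(remaining_fraction(0.010, 25))   # c-Si at 1.0 %/yr -> ~0.78 after 25 years
print(remaining_fraction(0.035, 2))    # thin film, ~3.5 %/yr early years -> ~0.93
```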
Maintenance
Solar panel conversion efficiency, typically in the 20% range, is reduced by the accumulation of dust, grime, pollen, and other particulates on the solar panels, collectively referred to as soiling. "A dirty solar panel can reduce its power capabilities by up to 30% in high dust/pollen or desert areas", says Seamus Curran, associate professor of physics at the University of Houston and director of the Institute for NanoEnergy, which specializes in the design, engineering, and assembly of nanostructures.
The average soiling loss worldwide in 2018 was estimated to be at least 3–4%.
Paying to have solar panels cleaned is a good investment in many regions, as of 2019. However, in some regions cleaning is not cost-effective: in California, as of 2013, soiling-induced financial losses were rarely enough to warrant the cost of washing the panels. On average, panels in California lost a little less than 0.05% of their overall output per day.
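To see how such a small daily rate accumulates, the sketch below integrates a linearly growing soiling loss over a hypothetical 180-day dry spell for a system producing 20 kWh/day when clean; all figures except the 0.05%/day rate are invented for the example:

```python
# Cumulative energy lost to soiling at the ~0.05 %/day rate quoted above,
# assuming the loss grows linearly between washes (no rain).
daily_rate, clean_output = 0.0005, 20.0   # fraction/day, kWh/day (assumed system)
lost = sum(clean_output * daily_rate * day for day in range(1, 181))
print(f"~{lost:.0f} kWh lost over a 180-day dry season")   # ~163 kWh, ~4.5%
```

Whether that loss justifies a wash then reduces to comparing it against the local price of electricity and of cleaning.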
There are also occupational hazards associated with solar panel installation and maintenance. A 2015–2018 study in the UK investigated 80 PV-related fire incidents, with over 20 "serious fires" directly caused by PV installation, including 37 domestic buildings and 6 solar farms. In a portion of the incidents a root cause was not established, and a majority of the others were caused by poor installation, faulty product or design issues. The single element most frequently causing fires was the DC isolators.
A 2021 study by kWh Analytics determined the median annual degradation of PV systems to be 1.09% for residential and 0.8% for non-residential systems, almost twice that previously assumed. A 2021 module reliability study found an increasing trend in solar module failure rates, with 30% of manufacturers experiencing safety failures related to junction boxes (up from 20%) and 26% experiencing bill-of-materials failures (up from 20%).
Cleaning methods for solar panels can be divided into 5 groups: manual tools, mechanized tools (such as tractor mounted brushes), installed hydraulic systems (such as sprinklers), installed robotic systems, and deployable robots. Manual cleaning tools are by far the most prevalent method of cleaning, most likely because of the low purchase cost. However, in a Saudi Arabian study done in 2014, it was found that "installed robotic systems, mechanized systems, and installed hydraulic systems are likely the three most promising technologies for use in cleaning solar panels".
Waste and recycling
There were 30 thousand tonnes of PV waste in 2021, and the annual amount was estimated by Bloomberg NEF to rise to more than 1 million tonnes by 2035 and more than 10 million tonnes by 2050. For comparison, 750 million tonnes of fly ash waste was produced by coal power in 2022. In the United States, around 90% of decommissioned solar panels ended up in landfills as of 2023. Most parts of a solar module can be recycled, including up to 95% of certain semiconductor materials and the glass, as well as large amounts of ferrous and non-ferrous metals. Some private companies and non-profit organizations take back and recycle end-of-life modules. EU law requires manufacturers to ensure their solar panels are recycled properly. Similar legislation is underway in Japan, India, and Australia. A 2023 Australian report said that there is a market for quality used panels and made recommendations for increasing reuse.
Recycling possibilities depend on the kind of technology used in the modules:
Silicon based modules: aluminum frames and junction boxes are dismantled manually at the beginning of the process. The module is then crushed in a mill and the different fractions are separated – glass, plastics and metals. It is possible to recover more than 80% of the incoming weight. This process can be performed by flat glass recyclers, since the shape and composition of a PV module is similar to flat glass used in the building and automotive industry. The recovered glass, for example, is readily accepted by the glass foam and glass insulation industry.
Non-silicon based modules: they require specific recycling technologies such as the use of chemical baths in order to separate the different semiconductor materials. For cadmium telluride modules, the recycling process begins by crushing the module and subsequently separating the different fractions. This recycling process is designed to recover up to 90% of the glass and 95% of the semiconductor materials contained. Some commercial-scale recycling facilities have been created in recent years by private companies.
Since 2010, there is an annual European conference bringing together manufacturers, recyclers and researchers to look at the future of PV module recycling.
Production
The production of PV systems has followed a classic learning curve effect, with significant cost reduction occurring alongside large rises in efficiency and production output.
With over 100% year-on-year growth in PV system installation, PV module makers dramatically increased their shipments of solar modules in 2019. They actively expanded capacity and turned themselves into gigawatt (GW) players. According to Pulse Solar, five of the top ten PV module companies in 2019 experienced a rise in solar panel production of at least 25% compared with the previous year.
Most solar panels are produced using silicon cells, which are typically 10–20% efficient at converting sunlight into electricity, with newer production models exceeding 22%.
In 2018, the world's top five solar module producers in terms of shipped capacity were Jinko Solar, JA Solar, Trina Solar, LONGi Solar, and Canadian Solar.
Price
The price of solar electrical power has continued to fall, and in many countries it has been cheaper than fossil fuel electricity from the electricity grid since 2012, a phenomenon known as grid parity. With rising global awareness, institutions such as the IRS have adopted tax credit schemes that refund a portion of the cost of a solar panel array purchased for private use. The price of solar arrays continues to fall.
Average pricing information divides into three categories: those buying small quantities (modules of all sizes in the kilowatt range annually), mid-range buyers (typically up to 10 MWp annually), and large-quantity buyers (self-explanatory, and with access to the lowest prices). Over the long term there is clearly a systematic reduction in the price of cells and modules: for example, in 2012 the quantity cost per watt was estimated at about US$0.60, roughly 1/250 of the 1970 cost of US$150. A 2015 study showed the price per kWh dropping by 10% per year since 1980 (see the figures below), and predicted that solar could contribute 20% of total electricity consumption by 2030, whereas the International Energy Agency predicts 16% by 2050.
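As a quick check on what a sustained 10% annual decline implies, compounding it over a decade leaves roughly a third of the starting price (a purely arithmetic illustration, not a forecast):

```latex
P(t) = P_0 \times 0.9^{\,t}\quad (t\ \text{in years}),\qquad 0.9^{10} \approx 0.35
```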
Real-world energy production costs depend a great deal on local weather conditions. In a cloudy country such as the United Kingdom, the cost per produced kWh is higher than in sunnier countries like Spain.
According to RMI, balance-of-system (BoS) elements, that is, the non-module costs of non-microinverter solar installations (such as wiring, converters, racking systems and various other components), make up about half of the total cost of installations.
For merchant solar power stations, where the electricity is being sold into the electricity transmission network, the cost of solar energy will need to match the wholesale electricity price. This point is sometimes called 'wholesale grid parity' or 'busbar parity'.
Standards
Standards generally used in photovoltaic modules:
IEC 61215 (crystalline silicon performance), 61646 (thin film performance) and 61730 (all modules, safety), 61853 (Photovoltaic module performance testing & energy rating)
ISO 9488 Solar energy—Vocabulary.
UL 1703 from Underwriters Laboratories
UL 1741 from Underwriters Laboratories
UL 2703 from Underwriters Laboratories
CE mark
Electrical Safety Tester (EST) Series (EST-460, EST-22V, EST-22H, EST-110).
Applications
There are many practical applications for solar panels or photovoltaics. They can be used in agriculture as a power source for irrigation, in health care to refrigerate medical supplies, and in infrastructure. PV modules are used in photovoltaic systems and in a large variety of electric devices:
Agrivoltaics
Solar canals
Photovoltaic power stations
Rooftop solar PV systems
Standalone PV systems
Solar hybrid power systems
Concentrated photovoltaics
Floating solar; water-borne solar panels
Solar planes
Solar-powered water purification
Solar-pumped lasers
Solar vehicles
Solar water heating
Solar panels on spacecraft and space stations
Solar landfill
Limitations
Impact on electricity network
With increasing levels of rooftop photovoltaic systems, the energy flow becomes two-way: when there is more local generation than consumption, electricity is exported to the grid. However, electricity networks traditionally were not designed for two-way energy transfer, so technical issues may occur. For example, in Queensland, Australia, more than 30% of households used rooftop PV by the end of 2017, and the duck curve appeared frequently in many communities from 2015 onwards. An over-voltage issue may result as electricity flows from PV households back to the network. There are solutions to manage the over-voltage issue, such as regulating PV inverter power factor, new voltage and energy control equipment at the electricity-distributor level, re-conducting the electricity wires, and demand-side management. These solutions often come with limitations and costs.
For rooftop solar to be able to provide enough backup power during a power cut a battery is often also required.
Quality assurance
Solar module quality assurance involves testing and evaluating solar cells and solar panels to ensure that their quality requirements are met. Solar modules (or panels) are expected to have a long service life of between 20 and 40 years. They should continually and reliably deliver the power anticipated. Solar modules can be tested through a combination of physical tests, laboratory studies, and numerical analyses. Furthermore, solar modules need to be assessed throughout the different stages of their life cycle. Various companies such as Southern Research Energy & Environment, SGS Consumer Testing Services, TÜV Rheinland, Sinovoltaics, Clean Energy Associates (CEA), CSA Solar International and Enertis provide services in solar module quality assurance. "The implementation of consistent traceable and stable manufacturing processes becomes mandatory to safeguard and ensure the quality of the PV Modules."
Stages of testing
The lifecycle stages of testing solar modules can include: the conceptual phase, manufacturing phase, transportation and installation, commissioning phase, and the in-service phase. Depending on the test phase, different test principles may apply.
Conceptual phase
The first stage can involve design verification, where the expected output of the module is tested through computer simulation. Further, the module's ability to withstand natural environmental conditions such as temperature, rain, hail, snow, corrosion, dust, lightning, horizon and near-shadow effects is tested. The layout for design and construction of the module and the quality of components and installation can also be tested at this stage.
Manufacturing phase
Inspection of component manufacturers is carried out through visitation. The inspection can include assembly checks, material testing supervision and non-destructive testing (NDT). Certification is carried out according to ANSI/UL 1703, IEC 17025, IEC 61215, IEC 61646, IEC 61701 and IEC 61730-1/-2.
| Technology | Power generation | null |
3508478 | https://en.wikipedia.org/wiki/Polyester | Polyester | Polyester is a category of polymers that contain one or two ester linkages in every repeat unit of their main chain. As a specific material, it most commonly refers to a type called polyethylene terephthalate (PET). Polyesters include naturally occurring chemicals, such as in plants and insects, as well as synthetics such as polybutyrate. Natural polyesters and a few synthetic ones are biodegradable, but most synthetic polyesters are not. Synthetic polyesters are used extensively in clothing.
Polyester fibers are sometimes spun together with natural fibers to produce a cloth with blended properties. Cotton-polyester blends can be strong, wrinkle- and tear-resistant, and reduce shrinking. Synthetic fibers using polyester have high water, wind, and environmental resistance compared to plant-derived fibers. They are less fire-resistant and can melt when ignited.
Liquid crystalline polyesters are among the first industrially used liquid crystal polymers. They are used for their mechanical properties and heat-resistance. These traits are also important in their application as an abradable seal in jet engines.
Types
Polyesters can contain one ester linkage per repeat unit of the polymer, as in polyhydroxyalkanoates like polylactic acid, or they may have two ester linkages per repeat unit, as in polyethylene terephthalate (PET).
Polyesters are one of the most economically important classes of polymers, driven especially by PET, which is counted among the commodity plastics; in 2019 around 30.5 million metric tons were produced worldwide. There is a great variety of structures and properties in the polyester family, based on the varying nature of the R group (see first figure with blue ester group).
Natural
Polyesters occurring in nature include the cutin component of plant cuticles, which consists of omega hydroxy acids and their derivatives, interlinked via ester bonds, forming polyester polymers of indeterminate size. Polyesters are also produced by bees in the genus Colletes, which secrete a cellophane-like polyester lining for their underground brood cells earning them the nickname "polyester bees".
Synthetic
The family of synthetic polyesters comprises
Linear aliphatic high molecular weight polyesters (Mn >10,000) are low-melting (m. p. 40 – 80 °C) semicrystalline polymers and exhibit relatively poor mechanical properties. Their inherent degradability, resulting from their hydrolytic instability, makes them suitable for applications where a possible environmental impact is a concern, e.g. packaging, disposable items or agricultural mulch films or in biomedical and pharmaceutical applications.
Aliphatic linear low-molar-mass (Mn < 10,000) hydroxy-terminated polyesters are used as macromonomers for the production of polyurethanes.
Hyperbranched polyesters are used as rheology modifiers in thermoplastics or as crosslinkers in coatings due to their particularly low viscosity, good solubility and high functionality.
Aliphatic–aromatic polyesters, including poly(ethylene terephthalate) (PET) and poly(butylene terephthalate) (PBT), poly(hexamethylene terephthalate) (PHT), poly(propylene terephthalate) (PTT, Sorona), etc., are high-melting semicrystalline materials (m.p. 160–280 °C) used as engineering thermoplastics, fibers and films.
Wholly aromatic linear copolyesters present superior mechanical properties and heat resistance and are used in a number of high-performance applications.
Unsaturated polyesters are produced from multifunctional alcohols and unsaturated dibasic acids and are cross-linked thereafter; they are used as matrices in composite materials. Alkyd resins are made from polyfunctional alcohols and fatty acids and are used widely in the coating and composite industries, as they can be cross-linked in the presence of oxygen. Rubber-like polyesters also exist, called thermoplastic polyester elastomers (ester TPEs). Unsaturated polyesters (UPR) are thermosetting resins. They are used in the liquid state as casting materials, in sheet molding compounds, as fiberglass laminating resins and in non-metallic auto-body fillers. They are also used as the thermoset polymer matrix in pre-pregs. Fiberglass-reinforced unsaturated polyesters find wide application in the bodies of yachts and as body parts of cars.
Depending on the chemical structure, polyester can be a thermoplastic or a thermoset. There are also polyester resins cured by hardeners; however, the most common polyesters are thermoplastics. In two-component systems, the OH group is reacted with an isocyanate-functional compound to produce coatings which may optionally be pigmented. Polyesters as thermoplastics may change shape after the application of heat. While combustible at high temperatures, polyesters tend to shrink away from flames and self-extinguish upon ignition. Polyester fibers have high tenacity and E-modulus as well as low water absorption and minimal shrinkage in comparison with other industrial fibers.
Increasing the aromatic parts of polyesters increases their glass transition temperature, melting temperature, thermostability, chemical stability, and solvent resistance.
Polyesters can also be telechelic oligomers like the polycaprolactone diol (PCL) and the polyethylene adipate diol (PEA). They are then used as prepolymers.
Aliphatic vs. aromatic polymers
Thermally stable polymers, which generally have a high proportion of aromatic structures, are also called high-performance plastics. This application-oriented classification compares such polymers with engineering plastics and commodity plastics. The continuous service temperature of high-performance plastics is generally stated as being higher than 150 °C, whereas engineering plastics (such as polyamide or polycarbonate) are often defined as thermoplastics that retain their properties above 100 °C. Commodity plastics (such as polyethylene or polypropylene) have in this respect even greater limitations, but they are manufactured in great amounts at low cost.
Poly(ester imides) contain an aromatic imide group in the repeat unit; such imide-based polymers have a high proportion of aromatic structures in the main chain and belong to the class of thermally stable polymers. Such polymers contain structures that impart high melting temperatures, resistance to oxidative degradation and stability to radiation and chemical reagents. Among the thermally stable polymers with commercial relevance are polyimides, polysulfones, polyetherketones, and polybenzimidazoles. Of these, polyimides are the most widely applied. The polymers' structures also result in poor processing characteristics, in particular a high melting point and low solubility. These properties are based in particular on a high percentage of aromatic carbons in the polymer backbone, which produces a certain stiffness. Approaches to improving processability include the incorporation of flexible spacers into the backbone, the attachment of stable pendent groups, or the incorporation of non-symmetrical structures. Flexible spacers include, for example, ether or hexafluoroisopropylidene, carbonyl or aliphatic groups like isopropylidene; these groups allow bond rotation between aromatic rings. Less symmetrical structures, for example based on meta- or ortho-linked monomers, introduce structural disorder and thereby decrease crystallinity.
The generally poor processability of aromatic polymers (for example, a high melting point and a low solubility) also limits the available options for synthesis, and may require strongly interacting co-solvents like HFIP or TFA for analysis (e.g. 1H NMR spectroscopy), which themselves can introduce further practical limitations.
Uses and applications
Fabrics woven or knitted from polyester thread or yarn are used extensively in apparel and home furnishings, from shirts and pants to jackets and hats, bed sheets, blankets, upholstered furniture and computer mouse mats. Industrial polyester fibers, yarns and ropes are used in car tire reinforcements, fabrics for conveyor belts, safety belts, coated fabrics and plastic reinforcements with high energy absorption. Polyester fiber is used as cushioning and insulating material in pillows, comforters, stuffed animals and characters, and upholstery padding. Polyester fabrics are highly stain-resistant because polyester is a hydrophobic material that absorbs liquids poorly. The only class of dyes that can be used to alter the color of polyester fabric is the disperse dyes.
Polyesters are also used to make bottles, films, tarpaulin, sails (Dacron), canoes, liquid crystal displays, holograms, filters, dielectric film for capacitors, film insulation for wire and insulating tapes. Polyesters are widely used as a finish on high-quality wood products such as guitars, pianos, and vehicle/yacht interiors. Thixotropic properties of spray-applicable polyesters make them ideal for use on open-grain timbers, as they can quickly fill wood grain, with a high-build film thickness per coat.
Polyester can be used for fashionable dresses, but it is most valued for its ability to resist wrinkling and shrinking in the wash. Its toughness makes it a frequent choice for children's wear. Polyester is often blended with other fibres like cotton to combine the desirable properties of both materials.
Cured polyesters can be sanded and polished to a high-gloss, durable finish.
Production
Polyester is typically produced through a process known as polymerization. For polyethylene terephthalate (PET), the production process involves the chemical reaction between two primary raw materials: purified terephthalic acid (PTA) or dimethyl terephthalate (DMT) and monoethylene glycol (MEG).
The production process includes the following steps:
Polycondensation Reaction: The reaction between PTA or DMT and MEG creates polyester polymer chains through a process called polycondensation. This reaction takes place at high temperatures and involves the removal of water or methanol byproducts.
Extrusion: Once the polymerization is complete, the molten polyester is extruded into long strands. These strands are then cooled and cut into small pellets or chips.
Spinning: To form fibers, these polyester chips are melted and extruded through spinnerets, forming fine strands of polyester filament. These filaments can be processed further to create continuous fibers, which are then woven into textiles.
Recycling: The production of polyester has evolved to include the recycling of PET, especially from post-consumer plastic bottles. Recycled PET (rPET) is increasingly being used in textile production, reducing the environmental impact of polyester manufacturing.
Polyethylene terephthalate, the polyester with the greatest market share, is a synthetic polymer made of purified terephthalic acid (PTA) or its dimethyl ester dimethyl terephthalate (DMT) and monoethylene glycol (MEG). With 18% market share of all plastic materials produced, it ranges third after polyethylene (33.5%) and polypropylene (19.5%) and is counted as commodity plastic.
There are several reasons for the importance of polyethylene terephthalate:
The relatively easy accessible raw materials PTA or DMT and MEG
The very well understood and described simple chemical process of its synthesis
The low toxicity level of all raw materials and side products during production and processing
The possibility to produce PET in a closed loop at low emissions to the environment
The outstanding mechanical and chemical properties
The recyclability
The wide variety of intermediate and final products.
In the following table, the estimated world polyester production is shown. Main applications are textile polyester, bottle polyester resin, film polyester mainly for packaging and specialty polyesters for engineering plastics.
Polyester processing
After the first stage of polymer production in the melt phase, the product stream divides into two different application areas which are mainly textile applications and packaging applications. In the following table, the main applications of textile and packaging of polyester are listed.
Abbreviations:
PSF Polyester-staple fiber
POY Partially oriented yarn
DTY Drawn textured yarn
FDY Fully drawn yarn
CSD Carbonated soft drink
A-PET Amorphous polyethylene terephthalate film
BO-PET Biaxial-oriented polyethylene terephthalate film
A comparable small market segment (much less than 1 million tonnes/year) of polyester is used to produce engineering plastics and masterbatch.
In order to produce the polyester melt with high efficiency, high-output processing steps like staple fiber (50–300 tonnes/day per spinning line) or POY/FDY (up to 600 tonnes/day, split across about 10 spinning machines) are increasingly run as vertically integrated direct processes. This means the polymer melt is converted directly into the textile fibers or filaments without the common step of pelletizing. Full vertical integration exists when polyester is produced at one site starting from crude oil or distillation products, along the chain oil → benzene → PX → PTA → PET melt → fiber/filament or bottle-grade resin. Such integrated processes are by now established as more or less interrupted processes at single production sites. Eastman Chemicals was the first to introduce the idea of closing the chain from PX to PET resin with its so-called INTEGREX process. The capacity of such vertically integrated production sites is >1000 tonnes/day and can easily reach 2500 tonnes/day.
Besides the above-mentioned large processing units for staple fiber or yarns, there are tens of thousands of small and very small processing plants, so one can estimate that polyester is processed and recycled in more than 10,000 plants around the globe. This is without counting all the companies involved in the supply industry, from engineering and processing machines to special additives, stabilizers and colorants. It is a gigantic industry complex and it is still growing by 4–8% per year, depending on the world region.
Synthesis
Synthesis of polyesters is generally achieved by a polycondensation reaction. The general equation for the reaction of a diol with a diacid is:
(n+1) R(OH)2 + n R'(COOH)2 → HO[ROOCR'COO]nROH + 2n H2O.
Polyesters can be obtained by a wide range of reactions, of which the most important are the reaction of acids with alcohols, the alcoholysis and/or acidolysis of low-molecular-weight esters, and the alcoholysis of acyl chlorides. The following figure gives an overview of such typical polycondensation reactions for polyester production. Furthermore, polyesters are accessible via ring-opening polymerization.
Azeotrope esterification is a classical method for condensation. The water formed by the reaction of alcohol and a carboxylic acid is continually removed by azeotropic distillation. When melting points of the monomers are sufficiently low, a polyester can be formed via direct esterification while removing the reaction water via vacuum.
Direct bulk polyesterification at high temperatures (150 – 290 °C) is well-suited and used on the industrial scale for the production of aliphatic, unsaturated, and aromatic–aliphatic polyesters. Monomers containing phenolic or tertiary hydroxyl groups exhibit a low reactivity with carboxylic acids and cannot be polymerized via direct acid alcohol-based polyesterification. In the case of PET production, however, the direct process has several advantages, in particular a higher reaction rate, a higher attainable molecular weight, the release of water instead of methanol and lower storage costs of the acid when compared to the ester due to the lower weight.
Alcoholic transesterification
Transesterification: An alcohol-terminated oligomer and an ester-terminated oligomer condense to form an ester linkage, with loss of an alcohol. R and R' are the two oligomer chains, R'' is a sacrificial unit such as a methyl group (methanol is the byproduct of the esterification reaction).
The term "transesterification" is typically used to describe hydroxy–ester, carboxy–ester, and ester–ester exchange reactions. The hydroxy–ester exchange reaction possesses the highest rate of reaction and is used for the production of numerous aromatic–aliphatic and wholly aromatic polyesters. The transesterification based synthesis is particularly useful for when high melting and poorly soluble dicarboxylic acids are used. In addition, alcohols as condensation product are more volatile and thereby easier to remove than water.
The high-temperature melt synthesis between bisphenol diacetates and aromatic dicarboxylic acids or in reverse between bisphenols and aromatic dicarboxylic acid diphenyl esters (carried out at 220 to 320 °C upon the release of acetic acid) is, besides the acyl chloride based synthesis, the preferred route to wholly aromatic polyesters.
Acylation
In acylation, the acid begins as an acyl chloride, and thus the polycondensation proceeds with emission of hydrochloric acid (HCl) instead of water.
The reaction between diacyl chlorides and alcohols or phenolic compounds has been widely applied to polyester synthesis and has been the subject of numerous reviews and book chapters. The reaction is carried out at lower temperatures than the equilibrium methods; possible types are high-temperature solution condensation, amine-catalysed, and interfacial reactions. In addition, the use of activating agents is counted as a non-equilibrium method. The equilibrium constants for the acyl chloride-based condensation yielding arylates and polyarylates are very high indeed, reported to be 4.3 × 10^3 and 4.7 × 10^3, respectively. This reaction is thus often referred to as a 'non-equilibrium' polyesterification. Even though the acyl chloride-based synthesis is also the subject of reports in the patent literature, it is unlikely that the reaction is utilized on the production scale. The method is limited by the acid dichlorides' high cost, their sensitivity to hydrolysis, and the occurrence of side reactions.
The high-temperature reaction (100 to >300 °C) of a diacyl chloride with a dialcohol yields the polyester and hydrogen chloride. At these relatively high temperatures the reaction proceeds rapidly without a catalyst.
The conversion of the reaction can be followed by titration of the evolved hydrogen chloride. A wide variety of solvents has been described including chlorinated benzenes (e.g. dichlorobenzene), chlorinated naphthalenes or diphenyls, as well as non-chlorinated aromatics like terphenyls, benzophenones or dibenzylbenzenes. The reaction was also applied successfully to the preparation of highly crystalline and poorly soluble polymers which require high temperatures to be kept in solution (at least until a sufficiently high molecular weight was achieved).
In an interfacial acyl chloride-based reaction, the alcohol (generally in fact a phenol) is dissolved in the form of an alkoxide in an aqueous sodium hydroxide solution, and the acyl chloride in an organic solvent immiscible with water, such as dichloromethane, chlorobenzene or hexane; the reaction occurs at the interface under high-speed agitation near room temperature.
The procedure is used for the production of polyarylates (polyesters based on bisphenols), polyamides, polycarbonates, poly(thiocarbonate)s, and others. Since the molecular weight of a product obtained by high-temperature synthesis can be seriously limited by side reactions, this problem is circumvented by the mild temperatures of interfacial polycondensation. The procedure is applied to the commercial production of bisphenol-A-based polyarylates like Unitika's U-Polymer. In some cases, water can be replaced by an immiscible organic solvent (e.g. in the adiponitrile/carbon tetrachloride system). The procedure is of little use in the production of polyesters based on aliphatic diols, which have higher pKa values than phenols and therefore do not form alcoholate ions in aqueous solutions. The base-catalysed reaction of an acyl chloride with an alcohol may also be carried out in one phase using tertiary amines (e.g. triethylamine, Et3N) or pyridine as acid acceptors.
While acyl chloride-based polyesterifications proceed only very slowly at room temperature without a catalyst, the amine accelerates the reaction in several possible ways, although the mechanism is not fully understood. However, it is known that tertiary amines can cause side-reactions such as the formation of ketenes and ketene dimers.
Silyl method
In this variant of the HCl method, the carboxylic acid chloride is reacted with the trimethylsilyl ether of the alcohol component, yielding trimethylsilyl chloride as the condensation product.
Acetate method (esterification)
Silyl acetate method
Ring-opening polymerization
Aliphatic polyesters can be assembled from lactones under very mild conditions, catalyzed anionically, cationically, organometallically or enzymatically. A number of catalytic methods for the copolymerization of epoxides with cyclic anhydrides have also recently been shown to provide a wide array of functionalized polyesters, both saturated and unsaturated. Ring-opening polymerization of lactones and lactides is also applied on the industrial scale.
Other methods
Numerous other reactions have been reported for the synthesis of selected polyesters, but are limited to laboratory-scale syntheses using specific conditions, for example using dicarboxylic acid salts and dialkyl halides or reactions between bisketenes and diols.
Instead of acyl chlorides, so-called activating agents can be used, such as 1,1'-carbonyldiimidazole, dicyclohexylcarbodiimide, or trifluoroacetic anhydride. The polycondensation proceeds via the in-situ conversion of the carboxylic acid into a more reactive intermediate while the activating agents are consumed. The reaction proceeds, for example, via an intermediate N-acylimidazole which reacts with a catalytically acting sodium alkoxide.
The use of activating agents for the production of high-melting aromatic polyesters and polyamides under mild conditions has been subject of intensive academic research since the 1980s, but the reactions have not gained commercial acceptance as similar results can be achieved with cheaper reactants.
Thermodynamics of polycondensation reactions
Polyesterifications are grouped by some authors into two main categories: a) equilibrium polyesterifications (mainly alcohol-acid reaction, alcohol–ester and acid–ester interchange reactions, carried out in bulk at high temperatures), and b) non-equilibrium polyesterifications, using highly reactive monomers (for example acid chlorides or activated carboxylic acids, mostly carried out at lower temperatures in solution).
The acid-alcohol based polyesterification is one example of an equilibrium reaction. The ratio between the polymer-forming ester group (-C(O)O-) and the condensation product water (H2O) against the acid-based (-C(O)OH) and alcohol-based (-OH) monomers is described by the equilibrium constant KC.
The equilibrium constant of the acid–alcohol based polyesterification is typically KC ≤ 10, which is not high enough to obtain high-molecular-weight polymers (DPn ≥ 100), as the number-average degree of polymerization (DPn) can be calculated from the equilibrium constant KC (see below).
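For a stoichiometric equilibrium polycondensation with the condensation product retained in the reactor, the standard textbook relation (quoted here as background, with the KC values from the surrounding text) links the attainable degree of polymerization to KC:

```latex
K_C = \frac{[\mathrm{ester}]\,[\mathrm{H_2O}]}{[\mathrm{COOH}]\,[\mathrm{OH}]},\qquad
\overline{DP}_n = \frac{1}{1-p} \approx 1 + \sqrt{K_C}
\quad\Longrightarrow\quad
K_C = 10 \Rightarrow \overline{DP}_n \approx 4,\qquad
K_C = 10^4 \Rightarrow \overline{DP}_n \approx 100
```

Here p is the extent of reaction; this is why KC ≤ 10 cannot reach DPn ≥ 100 unless the condensation product is removed.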
In equilibrium reactions, it is therefore necessary to remove the condensation product continuously and efficiently from the reaction medium in order to drive the equilibrium towards polymer. The condensation product is therefore removed at reduced pressure and high temperatures (150–320 °C, depending on the monomers) to prevent the back reaction. With the progress of the reaction, the concentration of active chain ends is decreasing and the viscosity of the melt or solution increasing. For an increase of the reaction rate, the reaction is carried out at high end group concentration (preferably in the bulk), promoted by the elevated temperatures.
Equilibrium constants of magnitude KC ≥ 10^4 are achieved when using reactive reactants (acid chlorides or acid anhydrides) or activating agents like 1,1′-carbonyldiimidazole. Using these reactants, the molecular weights required for technical applications can be achieved even without active removal of the condensation product.
History
In 1926, United States–based DuPont began research on large molecules and synthetic fibers. This early research, headed by Wallace Carothers, centered on what became nylon, which was one of the first synthetic fibers. Carothers was working for DuPont at the time. Carothers' research was incomplete and had not advanced to investigating the polyester formed from mixing ethylene glycol and terephthalic acid. In 1928 polyester was patented in Britain by the International General Electric company. Carothers' project was revived by British scientists Whinfield and Dickson, who patented polyethylene terephthalate (PET) or PETE in 1941. Polyethylene terephthalate forms the basis for synthetic fibers like Dacron, Terylene and polyester. In 1946, DuPont bought all legal rights from Imperial Chemical Industries (ICI).
Biodegradation and environmental concerns
The Futuro houses were made of fibreglass-reinforced polyester plastic, polyester-polyurethane, and poly(methyl methacrylate). One house was found to be degraded by cyanobacteria and archaea.
Cross-linking
Unsaturated polyesters are thermosetting polymers. They are generally copolymers prepared by polymerizing one or more diols with saturated and unsaturated dicarboxylic acids (maleic acid, fumaric acid, etc.) or their anhydrides. The double bond of unsaturated polyesters reacts with a vinyl monomer, usually styrene, resulting in a 3-D cross-linked structure. This structure acts as a thermoset. The exothermic cross-linking reaction is initiated through a catalyst, usually an organic peroxide such as methyl ethyl ketone peroxide or benzoyl peroxide.
Pollution of freshwater and seawater habitats
A team at Plymouth University in the UK spent 12 months analysing what happened when a number of synthetic materials were washed at different temperatures in domestic washing machines, using different combinations of detergents, to quantify the microfibres shed. They found that an average washing load of 6 kg could release an estimated 137,951 fibres from polyester-cotton blend fabric, 496,030 fibres from polyester and 728,789 from acrylic. Those fibers add to the general microplastics pollution.
Safety
Fertility
Ahmed Shafik was a sexologist who won an Ig Nobel Prize for his research on how polyester can affect the fertility of rats, dogs, and men.
Bisphenol A, an endocrine-disrupting chemical, may be used in the synthesis of some polyesters.
Recycling
Recycling of polymers has become very important as the production and use of plastic rise continuously. Global plastic waste may almost triple by 2060 if this trend continues. Plastics can be recycled by various means, such as mechanical recycling and chemical recycling. Among the recyclable polymers, the polyester PET is one of the most recycled plastics. The ester bond present in polyesters is susceptible to hydrolysis (under acidic or basic conditions), methanolysis and glycolysis, which makes this class of polymers suitable for chemical recycling. Enzymatic or biological recycling of PET can be carried out using different enzymes such as PETase, cutinase, esterase and lipase. PETase has also been reported to degrade other synthetic polyesters (PBT, PHT, Akestra™, etc.) which contain aromatic ester bonds similar to those of PET.
| Physical sciences | Carbon–oxygen bond | null |
3510370 | https://en.wikipedia.org/wiki/Dissociative%20amnesia | Dissociative amnesia | Dissociative amnesia or psychogenic amnesia is a dissociative disorder "characterized by retrospectively reported memory gaps. These gaps involve an inability to recall personal information, usually of a traumatic or stressful nature." The concept is scientifically controversial and remains disputed.
Dissociative amnesia was previously known as psychogenic amnesia, a memory disorder, which was characterized by sudden retrograde episodic memory loss, said to occur for a period of time ranging from hours to years to decades.
The atypical clinical syndrome of the memory disorder (as opposed to organic amnesia) is that a person with psychogenic amnesia is profoundly unable to remember personal information about themselves; there is a lack of conscious self-knowledge which affects even simple self-knowledge, such as who they are. Psychogenic amnesia is distinguished from organic amnesia in that it is supposed to result from a nonorganic cause: no structural brain damage should be evident but some form of psychological stress should precipitate the amnesia. Psychogenic amnesia as a memory disorder is controversial.
Definition
Psychogenic amnesia is the presence of retrograde amnesia (the inability to retrieve stored memories leading up to the onset of amnesia) and the absence of anterograde amnesia (the inability to form new long-term memories). Access to episodic memory can be impeded, while the degree of impairment to short-term memory, semantic memory and procedural memory is thought to vary among cases. If other memory processes are affected, they are usually much less severely affected than retrograde autobiographical memory, which is taken as the hallmark of psychogenic amnesia. However, the wide variability of memory impairment among cases of psychogenic amnesia raises questions about its true neuropsychological criteria, as despite intense study of a wide range of cases there is little consensus as to which memory deficits are specific to psychogenic amnesia.
Past literature has suggested psychogenic amnesia can be 'situation-specific' or 'global-transient', the former referring to memory loss for a particular incident, and the latter relating to large retrograde amnesic gaps of up to many years in personal identity. The most commonly cited examples of global-transient psychogenic amnesia are 'fugue states', in which there is a sudden retrograde loss of autobiographical memory resulting in impairment of personal identity, usually accompanied by a period of wandering. Suspected cases of psychogenic amnesia have been heavily reported throughout the literature since 1935, when it was first reported by Abeles and Schilder. There are many clinical anecdotes of psychogenic or dissociative amnesia attributed to stressors ranging from cases of child sexual abuse to soldiers returning from combat.
Cause
The neurological cause of psychogenic amnesia is controversial. Even in cases of organic amnesia, where there is a lesion or structural damage to the brain, caution must still be taken in defining causation, as only damage to brain areas crucial to memory processing can result in memory impairment. Organic causes of amnesia can be difficult to detect, and organic causes and psychological triggers are often entangled. Failure to find an organic cause may result in a diagnosis that the amnesia is psychological; however, some organic causes may fall below a threshold of detection, while other neurological ailments (such as migraine) are thought to be unequivocally organic even though no functional damage is evident. Possible malingering must also be taken into account. Some researchers have cautioned against psychogenic amnesia becoming a "wastebasket" diagnosis when organic amnesia is not apparent. Other researchers have hastened to defend the notion of psychogenic amnesia and its right not to be dismissed as a clinical disorder. Diagnoses of psychogenic amnesia have dropped since the field reached agreement on transient global amnesia, suggesting at least some overdiagnosis. Speculation also surrounds psychogenic amnesia because of its similarities with 'pure retrograde amnesia', as both share a similar retrograde loss of memory. Moreover, although no functional damage or brain lesions are evident in pure retrograde amnesia, unlike psychogenic amnesia it is not thought that purely psychological or 'psychogenic' triggers are relevant to it. Psychological triggers such as emotional stress are common in everyday life, yet pure retrograde amnesia is considered very rare. Also, the potential for organic damage to fall below the threshold of identification does not mean it is absent, and it is highly likely that both psychological factors and organic causes coexist in pure retrograde amnesia.
Comparison with organic amnesia
Psychogenic amnesia is supposed to differ from organic amnesia in a number of ways; one being that, unlike organic amnesia, it is thought to occur when no structural damage to the brain or brain lesion is evident. Psychological triggers are instead considered to precede psychogenic amnesia, and indeed many anecdotal case studies cited as evidence of psychogenic amnesia stem from traumatic experiences such as World War II. As noted above, however, the etiology of psychogenic amnesia is controversial, as causation is not always clear, and elements of both psychological stress and organic amnesia may be present among cases. A premorbid history of psychiatric illness such as depression is often, but not necessarily, thought to be present in conjunction with triggers of psychological stress. A lack of evidence of a psychological precipitant does not mean there is none; trauma during childhood, for example, has been cited as triggering amnesia later in life, but such an argument runs the risk of psychogenic amnesia becoming an umbrella term for any amnesia without an apparent organic cause. Because organic amnesia is often difficult to detect, distinguishing between organic and psychogenic amnesia is not easy, and often the context of precipitating experiences is considered (for example, whether there has been drug abuse) as well as the symptomatology the patient presents with. Psychogenic amnesia is supposed to differ from organic amnesia qualitatively in that retrograde loss of autobiographical memory with intact semantic memory is said to be specific to psychogenic amnesia. Another cited difference is the temporal gradient of retrograde loss of autobiographical memory: in most cases of organic amnesia the gradient is said to be steepest for the most recent premorbid period, whereas in psychogenic amnesia the gradient of retrograde autobiographical memory loss is said to be quite consistently flat. Although there is much literature on psychogenic amnesia as dissimilar to organic amnesia, the distinction between neurological and psychological features is often difficult to discern and remains controversial.
Diagnosis
Brain activity can be assessed functionally for psychogenic amnesia using imaging techniques such as fMRI, PET and EEG, in accordance with clinical data. Some research has suggested that organic and psychogenic amnesia to some extent share the involvement of the same structures of the temporo-frontal region of the brain. It has been suggested that deficits in episodic memory may be attributable to dysfunction in the limbic system, while self-identity deficits have been suggested as attributable to functional changes related to the posterior parietal cortex. To reiterate, however, care must be taken when attempting to define causation, as only ad hoc reasoning about the aetiology of psychogenic amnesia is possible, which means cause and consequence can be infeasible to untangle.
Treatments
Because psychogenic amnesia is defined by its lack of physical damage to the brain, treatment by physical methods is difficult. Nonetheless, distinguishing between organic and dissociative memory loss has been described as an essential first step in effective treatment. Treatments in the past have attempted to alleviate psychogenic amnesia by treating the mind itself, guided by theories ranging from 'betrayal theory', which accounts for memory loss attributed to protracted abuse by caregivers, to a Freudian view of the amnesia as a form of self-punishment, with the obliteration of personal identity as an alternative to suicide.
Treatment attempts have often revolved around trying to discover what traumatic event caused the amnesia, and drugs such as intravenously administered barbiturates (often thought of as 'truth serum') were popular as treatment for psychogenic amnesia during World War II; benzodiazepines may have been substituted later. 'Truth serum' drugs were thought to work by making a painful memory more tolerable to express by relieving the strength of the emotion attached to it. Under the influence of these 'truth' drugs the patient would more readily talk about what had occurred to them. However, information elicited from patients under the influence of drugs such as barbiturates would be a mixture of truth and fantasy, and was thus not regarded as a scientifically reliable means of gathering accurate evidence of past events. Often treatment was aimed at treating the patient as a whole, and probably varied in practice in different places. Hypnosis was also popular as a means of gaining information from people about their past experiences, but like 'truth' drugs really only served to lower the threshold of suggestibility, so that the patient would speak easily but not necessarily truthfully. If no motive for the amnesia was immediately apparent, deeper motives were usually sought by questioning the patient more intensely, often in conjunction with hypnosis and 'truth' drugs. In many cases, however, patients were found to recover from their amnesia spontaneously of their own accord, so no treatment was required.
Controversy
The concept is scientifically controversial and remains disputed. Critics argue dissociative amnesia is merely a rebranding of the discredited repressed memory concept.
In popular culture
Dissociative amnesia is a common fictional plot device in many films, books and other media. Examples include William Shakespeare's King Lear, who experienced amnesia and madness following a betrayal by his daughters; and the title character Nina in Nicolas Dalayrac's 1786 opera. Sunny, the title character in Omocat's Omori, is suspected of having dissociative amnesia.
| Biology and health sciences | Mental disorders | Health |
117534 | https://en.wikipedia.org/wiki/Optical%20microscope | Optical microscope | The optical microscope, also referred to as a light microscope, is a type of microscope that commonly uses visible light and a system of lenses to generate magnified images of small objects. Optical microscopes are the oldest design of microscope and were possibly invented in their present compound form in the 17th century. Basic optical microscopes can be very simple, although many complex designs aim to improve resolution and sample contrast.
The object is placed on a stage and may be directly viewed through one or two eyepieces on the microscope. In high-power microscopes, both eyepieces typically show the same image, but with a stereo microscope, slightly different images are used to create a 3-D effect. A camera is typically used to capture the image (micrograph).
The sample can be lit in a variety of ways. Transparent objects can be lit from below and solid objects can be lit with light coming through (bright field) or around (dark field) the objective lens. Polarised light may be used to determine crystal orientation of metallic objects. Phase-contrast imaging can be used to increase image contrast by highlighting small details of differing refractive index.
A range of objective lenses with different magnifications are usually provided mounted on a turret, allowing them to be rotated into place so the user can step through magnifications. The maximum magnification power of optical microscopes is typically limited to around 1000x because of the limited resolving power of visible light. While larger magnifications are possible, no additional detail of the object is resolved.
Alternatives to optical microscopy which do not use visible light include scanning electron microscopy, transmission electron microscopy and scanning probe microscopy, which as a result can achieve much greater magnifications.
Types
There are two basic types of optical microscopes: simple microscopes and compound microscopes. A simple microscope uses the optical power of a single lens or group of lenses for magnification. A compound microscope uses a system of lenses (one set enlarging the image produced by another) to achieve a much higher magnification of an object. The vast majority of modern research microscopes are compound microscopes, while some cheaper commercial digital microscopes are simple single-lens microscopes. Compound microscopes can be further divided into a variety of other types of microscopes, which differ in their optical configurations, cost, and intended purposes.
Simple microscope
A simple microscope uses a lens or set of lenses to enlarge an object through angular magnification alone, giving the viewer an erect enlarged virtual image. The use of a single convex lens or group of lenses is found in simple magnification devices such as the magnifying glass, loupes, and eyepieces for telescopes and microscopes.
Compound microscope
A compound microscope uses a lens close to the object being viewed to collect light (called the objective lens), which focuses a real image of the object inside the microscope (image 1). That image is then magnified by a second lens or group of lenses (called the eyepiece) that gives the viewer an enlarged inverted virtual image of the object (image 2). The use of a compound objective/eyepiece combination allows for much higher magnification. Common compound microscopes often feature exchangeable objective lenses, allowing the user to quickly adjust the magnification. A compound microscope also enables more advanced illumination setups, such as phase contrast.
Other microscope variants
There are many variants of the compound optical microscope design for specialized purposes. Some of these are physical design differences allowing specialization for certain purposes:
Stereo microscope, a low-powered microscope which provides a stereoscopic view of the sample, commonly used for dissection.
Comparison microscope, with two separate light paths allowing direct comparison of two samples via one image in each eye.
Inverted microscope, for studying samples from below; useful for cell cultures in liquid or for metallography.
Fiber optic connector inspection microscope, designed for connector end-face inspection
Traveling microscope, for studying samples of high optical resolution.
Other microscope variants are designed for different illumination techniques:
Petrographic microscope, whose design usually includes a polarizing filter, rotating stage, and gypsum plate to facilitate the study of minerals or other crystalline materials whose optical properties can vary with orientation.
Polarizing microscope, similar to the petrographic microscope.
Phase-contrast microscope, which applies the phase contrast illumination method.
Epifluorescence microscope, designed for analysis of samples that include fluorophores.
Confocal microscope, a widely used variant of epifluorescent illumination that uses a scanning laser to illuminate a sample for fluorescence.
Two-photon microscope, used to image fluorescence deeper in scattering media and reduce photobleaching, especially in living samples.
Student microscope – an often low-power microscope with simplified controls and sometimes low-quality optics designed for school use or as a starter instrument for children.
Ultramicroscope, an adapted light microscope that uses light scattering to allow viewing of tiny particles whose diameter is below or near the wavelength of visible light (around 500 nanometers); mostly obsolete since the advent of electron microscopes
Tip-enhanced Raman microscope, a variant of the optical microscope based on tip-enhanced Raman spectroscopy, without traditional wavelength-based resolution limits; it is primarily realized on scanning-probe microscope platforms using all-optical tools.
Digital microscope
A digital microscope is a microscope equipped with a digital camera allowing observation of a sample via a computer. Microscopes can also be partly or wholly computer-controlled with various levels of automation. Digital microscopy allows greater analysis of a microscope image, for example, measurements of distances and areas and quantitation of a fluorescent or histological stain.
Low-powered digital microscopes, USB microscopes, are also commercially available. These are essentially webcams with a high-powered macro lens and generally do not use transillumination. The camera is attached directly to a computer's USB port to show the images directly on the monitor. They offer modest magnifications (up to about 200×) without the need to use eyepieces and at a very low cost. High-power illumination is usually provided by an LED source or sources adjacent to the camera lens.
Digital microscopy with very low light levels to avoid damage to vulnerable biological samples is available using sensitive photon-counting digital cameras. It has been demonstrated that a light source providing pairs of entangled photons may minimize the risk of damage to the most light-sensitive samples. In this application of ghost imaging to photon-sparse microscopy, the sample is illuminated with infrared photons, each spatially correlated with an entangled partner in the visible band for efficient imaging by a photon-counting camera.
History
Invention
The earliest microscopes were single lens magnifying glasses with limited magnification, which date at least as far back as the widespread use of lenses in eyeglasses in the 13th century.
Compound microscopes first appeared in Europe around 1620 including one demonstrated by Cornelis Drebbel in London (around 1621) and one exhibited in Rome in 1624.
The actual inventor of the compound microscope is unknown although many claims have been made over the years. These include a claim 35 years after they appeared by Dutch spectacle-maker Johannes Zachariassen that his father, Zacharias Janssen, invented the compound microscope and/or the telescope as early as 1590. Johannes' testimony, which some claim is dubious, pushes the invention date so far back that Zacharias would have been a child at the time, leading to speculation that, for Johannes' claim to be true, the compound microscope would have to have been invented by Johannes' grandfather, Hans Martens. Another claim is that Janssen's competitor, Hans Lippershey (who applied for the first telescope patent in 1608) also invented the compound microscope. Other historians point to the Dutch innovator Cornelis Drebbel with his 1621 compound microscope.
Galileo Galilei is sometimes cited as a compound microscope inventor. After 1610, he found that he could close focus his telescope to view small objects, such as flies, close up and/or could look through the wrong end in reverse to magnify small objects. The only drawback was that his 2 foot long telescope had to be extended out to 6 feet to view objects that close. After seeing the compound microscope built by Drebbel exhibited in Rome in 1624, Galileo built his own improved version. In 1625, Giovanni Faber coined the name microscope for the compound microscope Galileo submitted to the Accademia dei Lincei in 1624 (Galileo had called it the "occhiolino" or "little eye"). Faber coined the name from the Greek words μικρόν (micron) meaning "small", and σκοπεῖν (skopein) meaning "to look at", a name meant to be analogous with "telescope", another word coined by the Linceans.
Christiaan Huygens, another Dutchman, developed a simple 2-lens ocular system in the late 17th century that was achromatically corrected, and therefore a huge step forward in microscope development. The Huygens ocular is still being produced to this day, but suffers from a small field size, and other minor disadvantages.
Popularization
Antonie van Leeuwenhoek (1632–1723) is credited with bringing the microscope to the attention of biologists, even though simple magnifying lenses were already being produced in the 16th century. Van Leeuwenhoek's home-made microscopes were simple microscopes, with a single very small, yet strong lens. They were awkward to use, but enabled van Leeuwenhoek to see detailed images. It took about 150 years of optical development before the compound microscope was able to provide the same quality image as van Leeuwenhoek's simple microscopes, due to difficulties in configuring multiple lenses. In the 1850s, John Leonard Riddell, Professor of Chemistry at Tulane University, invented the first practical binocular microscope while carrying out one of the earliest and most extensive American microscopic investigations of cholera.
Lighting techniques
While basic microscope technology and optics have been available for over 400 years, it is much more recently that techniques in sample illumination were developed to generate the high-quality images seen today.
In August 1893, August Köhler developed Köhler illumination. This method of sample illumination gives rise to extremely even lighting and overcomes many limitations of older techniques of sample illumination. Before development of Köhler illumination the image of the light source, for example a lightbulb filament, was always visible in the image of the sample.
The Nobel Prize in Physics was awarded to Dutch physicist Frits Zernike in 1953 for his development of phase-contrast illumination, which allows imaging of transparent samples. By using interference rather than absorption of light, extremely transparent samples, such as live mammalian cells, can be imaged without having to use staining techniques. Just two years later, in 1955, Georges Nomarski published the theory for differential interference contrast microscopy, another interference-based imaging technique.
Fluorescence microscopy
Modern biological microscopy depends heavily on the development of fluorescent probes for specific structures within a cell. In contrast to normal transilluminated light microscopy, in fluorescence microscopy the sample is illuminated through the objective lens with a narrow set of wavelengths of light. This light interacts with fluorophores in the sample which then emit light of a longer wavelength. It is this emitted light which makes up the image.
Since the mid-20th century chemical fluorescent stains, such as DAPI which binds to DNA, have been used to label specific structures within the cell. More recent developments include immunofluorescence, which uses fluorescently labelled antibodies to recognise specific proteins within a sample, and fluorescent proteins like GFP which a live cell can express making it fluorescent.
Components
All modern optical microscopes designed for viewing samples by transmitted light share the same basic components of the light path. In addition, the vast majority of microscopes have the same 'structural' components:
Eyepiece (ocular lens)
Objective turret, revolver, or revolving nose piece (to hold multiple objective lenses)
Objective lenses
Focus knobs (to move the stage): coarse adjustment and fine adjustment
Stage (to hold the specimen)
Light source (a light or a mirror)
Diaphragm and condenser
Mechanical stage
Eyepiece (ocular lens)
The eyepiece, or ocular lens, is a cylinder containing two or more lenses; its function is to bring the image into focus for the eye. The eyepiece is inserted into the top end of the body tube. Eyepieces are interchangeable and many different eyepieces can be inserted with different degrees of magnification. Typical magnification values for eyepieces include 5×, 10× (the most common), 15× and 20×. In some high performance microscopes, the optical configuration of the objective lens and eyepiece are matched to give the best possible optical performance. This occurs most commonly with apochromatic objectives.
Objective turret (revolver or revolving nose piece)
Objective turret, revolver, or revolving nose piece is the part that holds the set of objective lenses. It allows the user to switch between objective lenses.
Objective lens
At the lower end of a typical compound optical microscope, there are one or more objective lenses that collect light from the sample. The objective is usually in a cylinder housing containing a glass single or multi-element compound lens. Typically there will be around three objective lenses screwed into a circular nose piece which may be rotated to select the required objective lens. These arrangements are designed to be parfocal, which means that when one changes from one lens to another on a microscope, the sample stays in focus. Microscope objectives are characterized by two parameters, namely, magnification and numerical aperture. The former typically ranges from 5× to 100× while the latter ranges from 0.14 to 0.7, corresponding to focal lengths of about 40 to 2 mm, respectively. Objective lenses with higher magnifications normally have a higher numerical aperture and a shorter depth of field in the resulting image. Some high performance objective lenses may require matched eyepieces to deliver the best optical performance.
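The quoted focal lengths can be reproduced if one assumes, as is common practice (though not stated in the source), that objective magnification is defined relative to a tube lens of about 200 mm focal length, so that focal length ≈ 200 mm / magnification. A minimal Python sketch under that assumption:

    # Approximate objective focal length from magnification, assuming a
    # 200 mm tube-lens reference (a common convention; varies by maker).
    TUBE_LENS_MM = 200
    for mag in (5, 10, 40, 100):
        print(f"{mag}x objective: ~{TUBE_LENS_MM / mag:g} mm focal length")

With these assumptions, 5× and 100× objectives come out at 40 mm and 2 mm respectively, matching the range quoted above.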
Oil immersion objective
Some microscopes make use of oil-immersion objectives or water-immersion objectives for greater resolution at high magnification. These are used with index-matching material such as immersion oil or water and a matched cover slip between the objective lens and the sample. The refractive index of the index-matching material is higher than air allowing the objective lens to have a larger numerical aperture (greater than 1) so that the light is transmitted from the specimen to the outer face of the objective lens with minimal refraction. Numerical apertures as high as 1.6 can be achieved. The larger numerical aperture allows collection of more light making detailed observation of smaller details possible. An oil immersion lens usually has a magnification of 40 to 100×.
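Numerical aperture is defined as NA = n·sin(θ), where n is the refractive index of the medium between the front lens and the cover slip and θ is the half-angle of the collected light cone. A minimal Python sketch of why immersion raises the ceiling (the indices and angle below are typical textbook values used for illustration, not figures from the source):

    import math

    # NA = n * sin(theta): a higher-index immersion medium raises the
    # achievable numerical aperture for the same collection half-angle.
    HALF_ANGLE_DEG = 72  # illustrative collection half-angle
    for medium, n in [("air", 1.00), ("water", 1.33), ("oil", 1.515)]:
        na = n * math.sin(math.radians(HALF_ANGLE_DEG))
        print(f"{medium}: NA ~ {na:.2f}")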
Focus knobs
Adjustment knobs move the stage up and down, with separate adjustment for coarse and fine focusing. The same controls enable the microscope to adjust to specimens of different thickness. In older designs of microscopes, the focus adjustment wheels moved the microscope tube up or down relative to the stand, which had a fixed stage.
Frame
The whole of the optical assembly is traditionally attached to a rigid arm, which in turn is attached to a robust U-shaped foot to provide the necessary rigidity. The arm angle may be adjustable to allow the viewing angle to be adjusted.
The frame provides a mounting point for various microscope controls. Normally this will include controls for focusing, typically a large knurled wheel to adjust coarse focus, together with a smaller knurled wheel to control fine focus. Other features may be lamp controls and/or controls for adjusting the condenser.
Stage
The stage is a platform below the objective lens which supports the specimen being viewed. In the center of the stage is a hole through which light passes to illuminate the specimen. The stage usually has arms to hold slides (rectangular glass plates with typical dimensions of 25×75 mm, on which the specimen is mounted).
At magnifications higher than 100× moving a slide by hand is not practical. A mechanical stage, typical of medium and higher priced microscopes, allows tiny movements of the slide via control knobs that reposition the sample/slide as desired. If a microscope did not originally have a mechanical stage it may be possible to add one.
All stages move up and down for focus. With a mechanical stage, slides move on two horizontal axes to position the specimen and examine its details.
Focusing starts at lower magnification so the user can center the specimen on the stage. Moving to a higher magnification requires the stage to be moved vertically to re-focus at the higher magnification, and may also require slight horizontal adjustment of the specimen position. Horizontal specimen position adjustments are the reason for having a mechanical stage.
Because of the difficulty of preparing specimens and mounting them on slides, it is best for children to begin with prepared slides that are centered and focus easily regardless of the focus level used.
Light source
Many sources of light can be used. At its simplest, daylight is directed via a mirror. Most microscopes, however, have their own adjustable and controllable light source – often a halogen lamp, although illumination using LEDs and lasers is becoming more common. Köhler illumination is often provided on more expensive instruments.
Condenser
The condenser is a lens designed to focus light from the illumination source onto the sample. The condenser may also include other features, such as a diaphragm and/or filters, to manage the quality and intensity of the illumination. For illumination techniques like dark field, phase contrast and differential interference contrast microscopy additional optical components must be precisely aligned in the light path.
Magnification
The actual power or magnification of a compound optical microscope is the product of the powers of the eyepiece and the objective lens. For example, a 10x eyepiece magnification and a 100x objective lens magnification give a total magnification of 1,000×. Modified environments, such as the use of oil or ultraviolet light, can increase the resolution and allow for resolved detail at magnifications larger than 1,000×.
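A minimal Python sketch of the product rule above (the values are illustrative):

    # Total magnification = eyepiece power x objective power.
    def total_magnification(eyepiece: float, objective: float) -> float:
        return eyepiece * objective

    print(total_magnification(10, 100))  # 1000x, the typical practical ceiling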
Operation
Illumination techniques
Many techniques are available which modify the light path to generate an improved contrast image from a sample. Major techniques for generating increased contrast from the sample include cross-polarized light, dark field, phase contrast and differential interference contrast illumination. A recent technique (Sarfus) combines cross-polarized light and specific contrast-enhanced slides for the visualization of nanometric samples.
Other techniques
Modern microscopes allow more than just observation of the transmitted light image of a sample; there are many techniques which can be used to extract other kinds of data. Most of these require equipment beyond a basic compound microscope.
Reflected light, or incident, illumination (for analysis of surface structures)
Fluorescence microscopy, both:
Epifluorescence microscopy
Confocal microscopy
Microspectroscopy (where a UV-visible spectrophotometer is integrated with an optical microscope)
Ultraviolet microscopy
Near-Infrared microscopy
Multiple transmission microscopy for contrast enhancement and aberration reduction.
Automation (for automatic scanning of a large sample or image capture)
Applications
Optical microscopy is used extensively in microelectronics, nanophysics, biotechnology, pharmaceutical research, mineralogy and microbiology.
Optical microscopy is used for medical diagnosis, the field being termed histopathology when dealing with tissues, or in smear tests on free cells or tissue fragments.
In industrial use, binocular microscopes are common. Aside from applications needing true depth perception, the use of dual eyepieces reduces eye strain associated with long workdays at a microscopy station. In certain applications, long-working-distance or long-focus microscopes are beneficial. An item may need to be examined behind a window, or industrial subjects may be a hazard to the objective. Such optics resemble telescopes with close-focus capabilities.
Measuring microscopes are used for precision measurement. There are two basic types.
One has a reticle graduated to allow measuring distances in the focal plane. The other (and older) type has simple crosshairs and a micrometer mechanism for moving the subject relative to the microscope.
Very small, portable microscopes have found some usage in places where a laboratory microscope would be a burden.
Limitations
At very high magnifications with transmitted light, point objects are seen as fuzzy discs surrounded by diffraction rings. These are called Airy disks. The resolving power of a microscope is taken as the ability to distinguish between two closely spaced Airy disks (or, in other words, the ability of the microscope to reveal adjacent structural detail as distinct and separate). It is these impacts of diffraction that limit the ability to resolve fine details. The extent and magnitude of the diffraction patterns are affected by the wavelength of light (λ), the refractive materials used to manufacture the objective lens and the numerical aperture (NA) of the objective lens. There is therefore a finite limit beyond which it is impossible to resolve separate points in the objective field, known as the diffraction limit. Assuming that optical aberrations in the whole optical set-up are negligible, the resolution d can be stated as:

d = λ / (2·NA)
Usually a wavelength of 550 nm is assumed, which corresponds to green light. With air as the external medium, the highest practical NA is 0.95, and with oil, up to 1.5. In practice the lowest value of d obtainable with conventional lenses is about 200 nm. A new type of lens using multiple scattering of light has made it possible to improve the resolution to below 100 nm.
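A minimal Python sketch reproducing these figures from the formula above (λ = 550 nm):

    # Diffraction-limited resolution: d = wavelength / (2 * NA)
    WAVELENGTH_NM = 550  # green light, as assumed above
    for medium, na in [("air", 0.95), ("oil", 1.5)]:
        print(f"{medium} (NA {na}): d ~ {WAVELENGTH_NM / (2 * na):.0f} nm")

This gives about 289 nm in air and 183 nm with oil immersion, consistent with the practical limit of roughly 200 nm quoted above.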
Surpassing the resolution limit
Multiple techniques are available for reaching resolutions higher than the transmitted light limit described above. Holographic techniques, as described by Courjon and Bulabois in 1979, are also capable of breaking this resolution limit, although resolution was restricted in their experimental analysis.
More techniques are available when using fluorescent samples. Examples include Vertico SMI, near-field scanning optical microscopy (which uses evanescent waves), and stimulated emission depletion. In 2005, a microscope capable of detecting a single molecule was described as a teaching tool.
Despite significant progress in the last decade, techniques for surpassing the diffraction limit remain limited and specialized.
While most techniques focus on increases in lateral resolution, there are also some techniques which aim to allow analysis of extremely thin samples. For example, sarfus methods place the thin sample on a contrast-enhancing surface and thereby allow films as thin as 0.3 nanometers to be directly visualized.
On 8 October 2014, the Nobel Prize in Chemistry was awarded to Eric Betzig, William Moerner and Stefan Hell for the development of super-resolved fluorescence microscopy.
Structured illumination SMI
SMI (spatially modulated illumination microscopy) is a light-optical process of so-called point spread function (PSF) engineering. These are processes which modify the PSF of a microscope in a suitable manner to either increase the optical resolution, to maximize the precision of distance measurements of fluorescent objects that are small relative to the wavelength of the illuminating light, or to extract other structural parameters in the nanometer range.
Localization microscopy SPDMphymod
SPDM (spectral precision distance microscopy), the basic localization microscopy technology, is a light-optical process of fluorescence microscopy which allows position, distance and angle measurements on "optically isolated" particles (e.g. molecules) well below the theoretical limit of resolution for light microscopy. "Optically isolated" means that at a given point in time, only a single particle/molecule within a region of a size determined by conventional optical resolution (typically approx. 200–250 nm diameter) is being registered. This is possible when molecules within such a region all carry different spectral markers (e.g. different colors or other usable differences in the light emission of different particles).
Many standard fluorescent dyes like GFP, Alexa dyes, Atto dyes, Cy2/Cy3 and fluorescein molecules can be used for localization microscopy, provided certain photo-physical conditions are present. Using this so-called SPDMphymod (physically modifiable fluorophores) technology a single laser wavelength of suitable intensity is sufficient for nanoimaging.
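The gain from optical isolation can be made concrete with a standard first-order result from the localization-microscopy literature (not stated in the source): the position of an isolated emitter can be estimated to roughly σ/√N, where σ is the standard deviation of its diffraction-limited spot and N the number of photons collected. A minimal Python sketch:

    import math

    # Localization precision ~ sigma / sqrt(N) for an optically isolated emitter.
    SIGMA_NM = 100  # a spot of roughly 200-250 nm diameter, as quoted above
    for photons in (100, 1_000, 10_000):
        print(f"{photons} photons: ~{SIGMA_NM / math.sqrt(photons):.1f} nm precision")

With 10,000 detected photons, a 200–250 nm spot can thus locate its emitter to about a nanometre, well below the diffraction limit.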
3D super resolution microscopy
3D super resolution microscopy with standard fluorescent dyes can be achieved by combination of localization microscopy for standard fluorescent dyes SPDMphymod and structured illumination SMI.
STED
Stimulated emission depletion is a simple example of how higher resolution surpassing the diffraction limit is possible, but it has major limitations. STED is a fluorescence microscopy technique which uses a combination of light pulses to induce fluorescence in a small sub-population of fluorescent molecules in a sample. Each molecule produces a diffraction-limited spot of light in the image, and the centre of each of these spots corresponds to the location of the molecule. As the number of fluorescing molecules is low, the spots of light are unlikely to overlap and can therefore be located accurately. This process is then repeated many times to generate the image. Stefan Hell of the Max Planck Institute for Biophysical Chemistry was awarded the 10th German Future Prize in 2006 and the Nobel Prize in Chemistry in 2014 for his development of the STED microscope and associated methodologies.
Alternatives
To overcome the limitations set by the diffraction limit of visible light, other microscopes have been designed which use other waves.
Atomic force microscope (AFM)
Scanning electron microscope (SEM)
Scanning ion-conductance microscopy (SICM)
Scanning tunneling microscope (STM)
Transmission electron microscopy (TEM)
Ultraviolet microscope
X-ray microscope
It is important to note that higher frequency waves have limited interaction with matter, for example soft tissues are relatively transparent to X-rays resulting in distinct sources of contrast and different target applications.
The use of electrons and X-rays in place of light allows much higher resolution – the wavelength of the radiation is shorter, so the diffraction limit is lower. To make the short-wavelength probe non-destructive, the atomic beam imaging system (atomic nanoscope) has been proposed and widely discussed in the literature, but it is not yet competitive with conventional imaging systems.
STM and AFM are scanning probe techniques using a small probe which is scanned over the sample surface. Resolution in these cases is limited by the size of the probe; micromachining techniques can produce probes with tip radii of 5–10 nm.
Additionally, methods such as electron or X-ray microscopy use a vacuum or partial vacuum, which limits their use for live and biological samples (with the exception of an environmental scanning electron microscope). The specimen chambers needed for all such instruments also limit sample size, and sample manipulation is more difficult. Color cannot be seen in images made by these methods, so some information is lost. They are, however, essential when investigating molecular or atomic effects, such as age hardening in aluminium alloys, or the microstructure of polymers.
| Technology | Optical instruments | null |
22210655 | https://en.wikipedia.org/wiki/Aquatic%20locomotion | Aquatic locomotion | Aquatic locomotion or swimming is biologically propelled motion through a liquid medium. The simplest propulsive systems are composed of cilia and flagella. Swimming has evolved a number of times in a range of organisms including arthropods, fish, molluscs, amphibians, reptiles, birds, and mammals.
Evolution of swimming
Swimming evolved a number of times in unrelated lineages. Supposed jellyfish fossils occur in the Ediacaran, but the first free-swimming animals appear in the Early to Middle Cambrian. These are mostly related to the arthropods, and include the Anomalocaridids, which swam by means of lateral lobes in a fashion reminiscent of today's cuttlefish. Cephalopods joined the ranks of the active swimmers (nekton) in the late Cambrian, and chordates were probably swimming from the Early Cambrian. Many terrestrial animals retain some capacity to swim; however, some have returned to the water and developed capacities for aquatic locomotion. Most apes (including humans), however, lost the swimming instinct.
In 2013 Pedro Renato Bender, a research fellow at the University of the Witwatersrand's Institute for Human Evolution, proposed a theory to explain the loss of that instinct. Termed the Saci last common ancestor hypothesis (after Saci, a Brazilian folklore character who cannot cross water barriers), it holds that the loss of instinctive swimming ability in apes is best explained as a consequence of constraints related to the adaptation to an arboreal life in the last common ancestor of apes. Bender hypothesized that the ancestral ape increasingly avoided deep-water bodies when the risks of being exposed to water were clearly higher than the advantages of crossing them. A decreasing contact with water bodies then could have led to the disappearance of the doggy paddle instinct.
Micro-organisms
Microbial swimmers, sometimes called microswimmers, are microscopic entities that can move through fluid or aquatic environments. Natural microswimmers are found everywhere in the natural world as biological microorganisms, such as bacteria, archaea, protists, sperm and microanimals.
Bacterial
Ciliates
Ciliates use small flagella called cilia to move through the water. One ciliate will generally have hundreds to thousands of cilia that are densely packed together in arrays. During movement, an individual cilium deforms using a high-friction power stroke followed by a low-friction recovery stroke. Since there are multiple cilia packed together on an individual organism, they display collective behavior in a metachronal rhythm. This means the deformation of one cilium is in phase with the deformation of its neighbor, causing deformation waves that propagate along the surface of the organism. These propagating waves of cilia are what allow the organism to use the cilia in a coordinated manner to move. A typical example of a ciliated microorganism is the Paramecium, a one-celled, ciliated protozoan covered by thousands of cilia. The cilia beating together allow the Paramecium to propel through the water at speeds of 500 micrometers per second.
Flagellates
Certain organisms such as bacteria and animal sperm have flagella, which have developed a way to move in liquid environments. A rotary motor model shows that bacteria use the protons of an electrochemical gradient to move their flagella. Torque in the flagella of bacteria is created by particles that conduct protons around the base of the flagellum. The direction of rotation of the flagella in bacteria comes from the occupancy of the proton channels along the perimeter of the flagellar motor.
Movement of sperm is called sperm motility. The middle of the mammalian spermatozoon contains mitochondria that power the movement of the flagellum of the sperm. The motor around the base produces torque, just like in bacteria for movement through the aqueous environment.
Pseudopodia
Movement using a pseudopod is accomplished through increases in pressure at one point on the cell membrane. This pressure increase is the result of actin polymerization between the cortex and the membrane. As the pressure increases the cell membrane is pushed outward creating the pseudopod. When the pseudopod moves outward, the rest of the body is pulled forward by cortical tension. The result is cell movement through the fluid medium. Furthermore, the direction of movement is determined by chemotaxis. When chemoattraction occurs in a particular area of the cell membrane, actin polymerization can begin and move the cell in that direction. An excellent example of an organism that utilizes pseudopods is Naegleria fowleri.
Invertebrates
Among the radiata, jellyfish and their kin, the main form of swimming is to flex their cup shaped bodies. All jellyfish are free-swimming, although many of these spend most of their time swimming passively. Passive swimming is akin to gliding; the organism floats, using currents where it can, and does not exert any energy into controlling its position or motion. Active swimming, in contrast, involves the expenditure of energy to travel to a desired location.
In bilateria, there are many methods of swimming. The arrow worms (Chaetognatha) undulate their finned bodies, not unlike fish. Nematodes swim by undulating their fin-less bodies. Some arthropod groups can swim – including many crustaceans. Most crustaceans, such as shrimp, will usually swim by paddling with special swimming legs (pleopods). Swimming crabs swim with modified walking legs (pereiopods). Daphnia, a crustacean, swims by beating its antennae instead.
There are also a number of forms of swimming molluscs. Many free-swimming sea slugs, such as sea angels, flap fin-like structures. Some shelled molluscs, such as scallops can briefly swim by clapping their two shells open and closed. The molluscs most evolved for swimming are the cephalopods. Violet sea-snails exploit a buoyant foam raft stabilized by amphiphilic mucins to float at the sea surface.
Among the Deuterostomia, there are a number of swimmers as well. Feather stars can swim by undulating their many arms. Salps move by pumping waters through their gelatinous bodies. The deuterostomes most evolved for swimming are found among the vertebrates, notably the fish.
Jet propulsion
Jet propulsion is a method of aquatic locomotion in which animals fill a muscular cavity and squirt out water to propel themselves in the direction opposite to the squirted water. Most organisms are equipped with one of two designs for jet propulsion: they can draw water in at the rear and expel it from the rear, as jellyfish do, or draw water in at the front and expel it from the rear, as salps do. Filling the cavity increases both the mass and the drag of the animal. Because the cavity repeatedly empties and refills, the animal's velocity fluctuates as it moves through the water, accelerating while expelling water and decelerating while drawing it in. Even though these fluctuations in drag and mass can be ignored if the frequency of the jet-propulsion cycles is high enough, jet propulsion is a relatively inefficient method of aquatic locomotion.
All cephalopods can move by jet propulsion, but this is a very energy-consuming way to travel compared to the tail propulsion used by fish. The relative efficiency of jet propulsion decreases further as animal size increases. Since the Paleozoic, as competition with fish produced an environment where efficient motion was crucial to survival, jet propulsion has taken a back role, with fins and tentacles used to maintain a steady velocity. The stop-start motion provided by the jets, however, continues to be useful for providing bursts of high speed – not least when capturing prey or avoiding predators. Indeed, it makes cephalopods the fastest marine invertebrates, and they can out-accelerate most fish. Oxygenated water is taken into the mantle cavity to the gills, and through muscular contraction of this cavity the spent water is expelled through the hyponome, created by a fold in the mantle. Motion of the cephalopods is usually backward as water is forced out anteriorly through the hyponome, but direction can be controlled somewhat by pointing it in different directions. Most cephalopods float (i.e. are neutrally buoyant), so they do not need to swim to remain afloat. Squid swim more slowly than fish, but use more power to generate their speed. The loss in efficiency is due to the amount of water the squid can accelerate out of its mantle cavity.
Jellyfish use a one-way water cavity design which generates a phase of continuous cycles of jet-propulsion followed by a rest phase. The Froude efficiency is about 0.09, which indicates a very costly method of locomotion. The metabolic cost of transport for jellyfish is high when compared to a fish of equal mass.
Other jet-propelled animals have similar problems in efficiency. Scallops, which use a similar design to jellyfish, swim by quickly opening and closing their shells, which draws in water and expels it from all sides. This locomotion is used as a means to escape predators such as starfish. Afterwards, the shell acts as a hydrofoil to counteract the scallop's tendency to sink. The Froude efficiency is low for this type of movement, about 0.3, which is why it's used as an emergency escape mechanism from predators. However, the amount of work the scallop has to do is mitigated by the elastic hinge that connects the two shells of the bivalve. Squids swim by drawing water into their mantle cavity and expelling it through their siphon. The Froude efficiency of their jet-propulsion system is around 0.29, which is much lower than a fish of the same mass.
Much of the work done by scallop muscles to close its shell is stored as elastic energy in abductin tissue, which acts as a spring to open the shell. The elasticity causes the work done against the water to be low because of the large openings the water has to enter and the small openings the water has to leave. The inertial work of scallop jet-propulsion is also low. Because of the low inertial work, the energy savings created by the elastic tissue is so small that it's negligible. Medusae can also use their elastic mesoglea to enlarge their bell. Their mantle contains a layer of muscle sandwiched between elastic fibers. The muscle fibers run around the bell circumferentially while the elastic fibers run through the muscle and along the sides of the bell to prevent lengthening. After making a single contraction, the bell vibrates passively at the resonant frequency to refill the bell. However, in contrast with scallops, the inertial work is similar to the hydrodynamic work due to how medusas expel water – through a large opening at low velocity. Because of this, the negative pressure created by the vibrating cavity is lower than the positive pressure of the jet, meaning that inertial work of the mantle is small. Thus, jet-propulsion is shown as an inefficient swimming technique.
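The efficiency figures above can be rationalized with the textbook Froude propulsion efficiency for a jet, η = 2U/(U + v_jet), where U is the swimming speed and v_jet the velocity of the expelled water relative to the animal: a narrow, fast jet wastes most of its work as kinetic energy left in the wake. A minimal Python sketch (the speeds are illustrative, not measured values from the source):

    # Froude propulsion efficiency: eta = 2U / (U + v_jet).
    def froude_efficiency(swim_speed: float, jet_speed: float) -> float:
        return 2 * swim_speed / (swim_speed + jet_speed)

    # A fast, narrow jet vs. a slower, broader push:
    print(f"{froude_efficiency(1.0, 6.0):.2f}")   # ~0.29, comparable to the squid figure above
    print(f"{froude_efficiency(1.0, 20.5):.2f}")  # ~0.09, comparable to the jellyfish figure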
Fish
Many fish swim through water by creating undulations with their bodies or oscillating their fins. The undulations create components of forward thrust complemented by a rearward force, side forces which are wasted portions of energy, and a normal force that is between the forward thrust and side force. Different fish swim by undulating different parts of their bodies. Eel-shaped fish undulate their entire body in rhythmic sequences. Streamlined fish, such as salmon, undulate the caudal portions of their bodies. Some fish, such as sharks, use stiff, strong fins to create dynamic lift and propel themselves. It is common for fish to use more than one form of propulsion, although they will display one dominant mode of swimming. Gait changes have even been observed in juvenile reef fish of various sizes. Depending on their needs, fish can rapidly alternate between synchronized fin beats and alternating fin beats.
According to Guinness World Records 2009, Hippocampus zosterae (the dwarf seahorse) is the slowest-moving fish, with a top speed of about 1.5 metres (5 ft) per hour. Dwarf seahorses swim very poorly, rapidly fluttering a dorsal fin and using pectoral fins (located behind their eyes) to steer. Seahorses have no caudal fin.
Body-caudal fin (BCF) propulsion
Anguilliform: Anguilliform swimmers are typically slow swimmers. They undulate the majority of their body and use their head as the fulcrum for the load they are moving. At any point during their undulation, their body exhibits between 0.5 and 1.0 wavelengths. The way they move their body through this amplitude allows them to swim backwards. Anguilliform locomotion is usually seen in fish with long, slender bodies like eels, lampreys, oarfish, and a number of catfish species.
Subcarangiform, Carangiform, Thunniform: These swimmers undulate the posterior half of their body and are much faster than anguilliform swimmers. At any point while they are swimming, a wavelength <1 can be seen in the undulation pattern of the body. Some Carangiform swimmers include nurse sharks, bamboo sharks, and reef sharks. Thunniform swimmers are very fast and some common Thunniform swimmers include tuna, white sharks, salmon, jacks, and mako sharks. Thunniform swimmers only undulate their high aspect ratio caudal fin, so they are usually very stiff to push more water out of the way.
Ostraciiform: Ostraciiform swimmers oscillate their caudal region, making them relatively slow swimmers. Boxfish, torpedo rays, and mormyrids employ ostraciiform locomotion. The cowfish uses ostraciiform locomotion to hover in the water column.
Median paired fin (MPF) propulsion
Tetraodontiform, Balistiform, Diodontiform: These swimmers oscillate their median or pectoral fins. They are typically slow swimmers, and some notable examples include the ocean sunfish (which has extremely modified anal and dorsal fins), pufferfish, and triggerfish.
Rajiform, Amiiform, Gymnotiform: This locomotory mode is accomplished by undulation of the pectoral and median fins. During their undulation pattern, a wavelength >1 can be seen in their fins. They are typically slow to moderate swimmers, and some examples include rays, bowfin, and knife fishes. The black ghost knife fish is a Gymnotiform swimmer that has a very long ventral ribbon fin. Thrust is produced by passing waves down the ribbon fin while the body remains rigid. This also allows the ghost knife fish to swim in reverse.
Labriform: Labriform swimmers are also slow swimmers. They oscillate their pectoral fins to create thrust. Oscillating fins create thrust when a starting vortex is shed from the trailing edge of the fin. As the foil departs from the starting vortex, the effect of that vortex diminishes, while the bound circulation remains, producing lift. Labriform swimming can be viewed as continuously starting and stopping. Wrasses and surf perch are common Labriform swimmers.
Hydrofoils
Hydrofoils, or fins, are used to push against the water to create a normal force to provide thrust, propelling the animal through water. Sea turtles and penguins beat their paired hydrofoils to create lift. Some paired fins, such as the pectoral fins of leopard sharks, can be angled at varying degrees to allow the animal to rise, fall, or maintain its level in the water column. The reduction of fin surface area helps to minimize drag, and therefore increase efficiency. Regardless of the size of the animal, at any particular speed, maximum possible lift is proportional to (wing area) × (speed)². Dolphins and whales have large, horizontal caudal hydrofoils, while many fish and sharks have vertical caudal hydrofoils. Porpoising (seen in cetaceans, penguins, and pinnipeds) may save energy if the animal is moving fast. Since drag increases with speed, the work required to swim a unit distance is greater at higher speeds, but the work needed to jump a unit distance is independent of speed. Seals propel themselves through the water with their caudal tail, while sea lions create thrust solely with their pectoral flippers.
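The proportionality quoted above is the standard hydrodynamic lift relation, L = ½ρ·C_L·A·v², with the fluid density ρ and lift coefficient C_L held fixed. A minimal Python sketch of the scaling (all numbers hypothetical):

    # Lift scales with (area) * (speed squared): L = 0.5 * rho * C_L * A * v**2
    def lift_newtons(area_m2: float, speed_ms: float,
                     rho: float = 1000.0, c_lift: float = 1.0) -> float:
        return 0.5 * rho * c_lift * area_m2 * speed_ms ** 2

    base = lift_newtons(0.1, 1.0)
    print(lift_newtons(0.1, 2.0) / base)  # 4.0: doubling speed quadruples lift
    print(lift_newtons(0.2, 1.0) / base)  # 2.0: doubling area only doubles it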
Drag powered swimming
As with moving through any fluid, friction is created when molecules of the fluid collide with the organism. The collision causes drag against the moving fish, which is why many fish have streamlined shapes. Streamlined shapes reduce drag by orienting the elongated body parallel to the flow, allowing the current to pass over and taper off at the end of the fish. This streamlined shape allows for more energy-efficient locomotion. Some flat-shaped fish can take advantage of pressure drag by having a flat bottom surface and curved top surface. The resulting pressure difference provides upward lift for the fish.
Appendages of aquatic organisms propel them in two main and biomechanically extreme mechanisms. Some use lift powered swimming, which can be compared to flying as appendages flap like wings, and reduce drag on the surface of the appendage. Others use drag powered swimming, which can be compared to oars rowing a boat, with movement in a horizontal plane, or paddling, with movement in the parasagittal plane.
Drag swimmers use a cyclic motion in which they push water back in a power stroke, and return their limb forward in the return or recovery stroke. When they push water directly backwards, this moves their body forward, but as they return their limbs to the starting position, they push water forward, which will thus pull them back to some degree, and so opposes the direction that the body is heading. This opposing force is called drag. The return-stroke drag causes drag swimmers to employ different strategies than lift swimmers. Reducing drag on the return stroke is essential for optimizing efficiency. For example, ducks paddle through the water spreading the webs of their feet as they move water back, and then when they return their feet to the front they pull their webs together to reduce the subsequent pull of water forward. The legs of water beetles have little hairs which spread out to catch and move water back in the power stroke, but lay flat as the appendage moves forward in the return stroke. Also, one side of a water beetle leg is wider than the other and is held perpendicular to the motion when pushing backward, but the leg rotates when the limb returns forward, so the thinner side catches less water.
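The duck and water-beetle strategies can be made quantitative with the standard drag relation F = ½ρ·C_d·A·v²: both strokes obey the same law, so shrinking the effective area (or drag coefficient) on the recovery stroke is what leaves a net forward impulse over the cycle. A minimal Python sketch (all areas and speeds hypothetical):

    # Drag-based paddling: power-stroke thrust minus recovery-stroke counter-thrust.
    def stroke_force(area_m2: float, speed_ms: float,
                     rho: float = 1000.0, c_d: float = 1.2) -> float:
        return 0.5 * rho * c_d * area_m2 * speed_ms ** 2

    power = stroke_force(area_m2=0.010, speed_ms=1.0)     # webs spread
    recovery = stroke_force(area_m2=0.002, speed_ms=1.0)  # webs folded
    print(f"net thrust over the cycle: ~{power - recovery:.1f} N forward")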
Drag swimmers experience a lessened efficiency in swimming due to resistance which affects their optimum speed. The less drag a fish experiences, the more it will be able to maintain higher speeds. Morphology of the fish can be designed to reduce drag, such as streamlining the body. The cost of transport is much higher for the drag swimmer, and when deviating from its optimum speed, the drag swimmer is energetically strained much more than the lift swimmer. There are natural processes in place to optimize energy use, and it is thought that adjustments of metabolic rates can compensate in part for mechanical disadvantages.
Compared with fully aquatic animals, semi-aquatic animals experience exacerbated drag. The design that allows them to function out of the water limits the efficiency they can reach in it. In water, swimming at the surface exposes them to resistive wave drag and carries a higher cost than submerged swimming. Swimming below the surface exposes them to resistance from return strokes and pressure, but primarily friction. Frictional drag is due to fluid viscosity and morphological characteristics. Pressure drag is due to the difference in water flow around the body and is also affected by body morphology. Semi-aquatic organisms encounter increased resistive forces both in and out of the water, as they are not specialized for either habitat. The morphology of otters and beavers, for example, must meet the needs of both environments. Their fur decreases streamlining and creates additional drag. The platypus may be a good example of an intermediate between drag and lift swimmers because it has been shown to have a rowing mechanism similar to lift-based pectoral oscillation. The limbs of semi-aquatic organisms are reserved for use on land, and using them in water not only increases the cost of locomotion but limits them to drag-based modes.
Although they are less efficient, drag swimmers are able to produce more thrust at low speeds than lift swimmers. They are also thought to be better for maneuverability due to the large thrust produced.
Amphibians
Most of the Amphibia have a larval state, which has inherited anguilliform motion, and the laterally compressed tail that goes with it, from fish ancestors. The corresponding tetrapod adult forms, even in the tail-retaining subclass Urodela, are sometimes aquatic to only a negligible extent (as in the genus Salamandra, whose tail has lost its suitability for aquatic propulsion), but the majority of urodeles, from the newts to the giant salamander Megalobatrachus, retain a laterally compressed tail for a life that is aquatic to a considerable degree, and can use it in a carangiform motion.
Of the tailless amphibians (the frogs and toads of the sub-class Anura) the majority are aquatic to an insignificant extent in adult life, but in that considerable minority that are mainly aquatic we encounter for the first time the problem of adapting the tailless-tetrapod structure for aquatic propulsion. The mode that they use is unrelated to any used by fish. With their flexible back legs and webbed feet they execute something close to the leg movements of a human 'breast stroke,' rather more efficiently because the legs are better streamlined.
Reptiles
From the point of view of aquatic propulsion, the descent of modern members of the class Reptilia from archaic tailed Amphibia is most obvious in the case of the order Crocodilia (crocodiles and alligators), which use their deep, laterally compressed tails in an essentially carangiform mode of propulsion (see Fish locomotion#Carangiform).
Terrestrial snakes, in spite of their 'bad' hydromechanical shape with roughly circular cross-section and gradual posterior taper, swim fairly readily when required, by an anguilliform propulsion (see Fish locomotion#Anguilliform).
Cheloniidae (sea turtles) have found a solution to the problem of tetrapod swimming through the development of their forelimbs into flippers of high-aspect-ratio wing shape, with which they imitate a bird's propulsive mode more accurately than do the eagle-rays themselves.
Fin and flipper locomotion
Aquatic reptiles such as sea turtles (see also turtles), and extinct species such as pliosauroids, predominantly use their pectoral flippers to propel themselves through the water and their pelvic flippers for maneuvering. During swimming, they move their pectoral flippers in a dorso-ventral motion, producing forward movement, and rotate their front flippers to decrease drag through the water column and increase efficiency. Newly hatched sea turtles exhibit several behavioral skills that help them orient towards the ocean and identify the transition from sand to water. If rotated in the pitch, yaw, or roll direction, hatchlings can counteract the forces acting upon them by correcting with either their pectoral or pelvic flippers, redirecting themselves towards the open ocean.
Among mammals, otariids (fur seals and sea lions) swim primarily with their front flippers and use the rear flippers for steering, while phocids (true seals) move their rear flippers laterally, pushing the animal through the water.
Escape reactions
Some arthropods, such as lobsters and shrimps, can propel themselves backwards quickly by flicking their tail, known as lobstering or the caridoid escape reaction.
Many fishes, such as teleosts, also use fast-starts to escape from predators. A fast-start begins with muscle contraction on one side of the fish that bends the body into a C-shape. Muscle contraction then occurs on the opposite side, allowing the fish to enter a steady swimming state with waves of undulation traveling along the body. The power of the bending motion comes from fast-twitch muscle fibers located in the central region of the fish. The signal to contract comes from a set of Mauthner cells, which simultaneously signal the muscles on one side of the fish. Mauthner cells fire when something startles the fish and can be activated by visual or sound-based stimuli.
Fast-starts are split into three stages. Stage one, called the preparatory stroke, is the initial bending into a C-shape, with a small delay caused by hydrodynamic resistance. Stage two, the propulsive stroke, involves the body bending rapidly to the other side, which may occur multiple times. Stage three, the rest phase, returns the fish to normal steady-state swimming as the body undulations cease. The large muscles closer to the central portion of the fish are stronger and generate more force than the muscles in the tail; this asymmetry in muscle composition drives the body undulations that occur in stage three. Once the fast-start is completed, the fish's final position has been shown to have a certain level of unpredictability, which helps fish survive against predators.
The rate at which the body can bend is limited by the inertia of each body part. However, this inertia assists the fish in creating propulsion through the momentum generated against the water. The forward propulsion created by C-starts, and by steady-state swimming in general, results from the body of the fish pushing against the water: waves of undulation impart rearward momentum to the water, providing the forward thrust that pushes the fish ahead.
Efficiency
The Froude propulsion efficiency is defined as the ratio of power output to power input, η = 2U1 / (U1 + U2), where U1 = free stream velocity and U2 = jet velocity. A good efficiency for carangiform propulsion is between 50 and 80%.
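As a hedged numerical illustration (the velocities below are invented for the example, not measured values), a jet velocity 50% above the free-stream velocity gives an efficiency at the top of the carangiform range:

```latex
\eta_F = \frac{2U_1}{U_1 + U_2}
       = \frac{2 \times 1.0\ \mathrm{m/s}}{1.0\ \mathrm{m/s} + 1.5\ \mathrm{m/s}}
       = 0.8 \quad (80\%)
```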
Minimizing drag
Pressure differences occur outside the boundary layer of swimming organisms because flow is disrupted around the body. The pressure difference between the upstream and downstream surfaces of the body constitutes pressure drag, which exerts a downstream force on the object. Frictional drag, on the other hand, results from fluid viscosity in the boundary layer; greater turbulence causes greater frictional drag.
The Reynolds number (Re) is the ratio of inertial to viscous forces in a flow, calculated as (animal's length × animal's velocity) / kinematic viscosity of the fluid. Turbulent flow occurs at higher Re values, where the boundary layer separates and creates a wake; laminar flow occurs at lower Re values, where boundary layer separation is delayed, reducing the wake and the kinetic energy lost to opposing water momentum.
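A minimal Python sketch of this calculation, assuming the kinematic viscosity of water at about 20 °C (roughly 1.0 × 10^-6 m²/s); the fish's length and speed are illustrative, and no fixed laminar-turbulent threshold is implied, since that boundary depends on geometry:

```python
def reynolds_number(length_m: float, velocity_m_s: float,
                    kinematic_viscosity_m2_s: float = 1.0e-6) -> float:
    """Re = (length * velocity) / kinematic viscosity, as defined above."""
    return length_m * velocity_m_s / kinematic_viscosity_m2_s

# A 0.3 m fish swimming at 1.0 m/s in water:
print(f"Re = {reynolds_number(0.3, 1.0):,.0f}")  # Re = 300,000 (inertia-dominated)
```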
The body shape of a swimming organism affects the resulting drag. Long, slender bodies reduce pressure drag by streamlining, while short, round bodies reduce frictional drag; therefore, the optimal shape of an organism depends on its niche. Swimming organisms with a fusiform shape are likely to experience the greatest reduction in both pressure and frictional drag.
Wing shape also affects the amount of drag an organism experiences: with some stroke methods, recovering the pre-stroke position causes drag to accumulate.
High-speed ram ventilation creates laminar flow of water from the gills along the body of an organism.
The secretion of mucus along the organism's body surface, or the addition of long-chain polymers to the velocity gradient, can reduce the frictional drag the organism experiences.
Buoyancy
Many aquatic and marine organisms have developed organs to compensate for their weight and control their buoyancy in the water. These structures make the density of their bodies very close to that of the surrounding water. Some hydrozoans, such as siphonophores, have gas-filled floats; Nautilus, Sepia, and Spirula (cephalopods) have chambers of gas within their shells; and most teleost fish and many lantern fish (Myctophidae) are equipped with swim bladders. Many aquatic and marine organisms are also composed of low-density materials: deep-water teleosts, which do not have a swim bladder, have few lipids and proteins, weakly ossified bones, and watery tissues that maintain their buoyancy. The livers of some sharks are composed of low-density lipids, such as the hydrocarbon squalene or wax esters (also found in Myctophidae without swim bladders), which provide buoyancy.
Swimming animals that are denser than water must generate lift or adopt a benthic lifestyle. Movement of the fish to generate hydrodynamic lift is necessary to prevent sinking; often, the body acts as a hydrofoil, a task that is more effective in flat-bodied fish. At a small tilt angle, the lift is greater for flat fish than for fish with narrow bodies. Narrow-bodied fish use their fins as hydrofoils while their bodies remain horizontal. In sharks, the heterocercal tail shape drives water downward, creating a counteracting upward force while thrusting the shark forward; the lift generated is assisted by the pectoral fins and an upward-angled body position. Tunas are thought to rely primarily on their pectoral fins for lift.
Buoyancy maintenance is metabolically expensive: growing and sustaining a buoyancy organ, adjusting the composition of biological makeup, and exerting physical effort to stay in motion all demand large amounts of energy. It has been proposed that lift may be generated at a lower energy cost by swimming upward and gliding downward, in a "climb and glide" motion, rather than by swimming constantly at one depth.
Temperature
Temperature can also greatly affect the ability of aquatic organisms to move through water. This is because temperature not only affects the properties of the water, but also the organisms in the water, as most have an ideal range specific to their body and metabolic needs.
Q10 (the temperature coefficient), the factor by which a rate increases for each 10 °C rise in temperature, is used to measure how strongly an organism's performance depends on temperature. Most organisms' rates increase as water warms, but some reach limits, and others find ways to offset such effects, for example by endothermy or by earlier recruitment of faster muscle.
For example, the swimming speed of Crocodylus porosus, the estuarine crocodile, was found to increase from 15 °C to 23 °C and to peak between 23 °C and 33 °C. Beyond that point, however, performance began to decline, showing a limit to the range of temperatures over which this species can ideally perform.
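A hedged Python sketch of the Q10 calculation defined above; the two swimming speeds are invented for illustration, not measurements:

```python
def q10(rate1: float, rate2: float, t1_c: float, t2_c: float) -> float:
    """Temperature coefficient: Q10 = (R2 / R1) ** (10 / (T2 - T1))."""
    return (rate2 / rate1) ** (10.0 / (t2_c - t1_c))

# Hypothetical speeds: 0.5 m/s at 15 degC rising to 1.0 m/s at 25 degC.
print(f"Q10 = {q10(0.5, 1.0, 15.0, 25.0):.1f}")  # Q10 = 2.0 (rate doubles per 10 degC)
```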
Submergence
The more of an animal's body that is submerged while swimming, the less energy it uses: swimming at the surface requires two to three times more energy than swimming completely submerged. This is because the bow wave that forms at the front as the animal pushes against the water's surface creates extra drag.
Secondary evolution
While tetrapods lost many of their natural adaptations to swimming when they evolved onto the land, many have re-evolved the ability to swim or have indeed returned to a completely aquatic lifestyle.
Primarily or exclusively aquatic animals have re-evolved from terrestrial tetrapods multiple times: examples include amphibians such as newts, reptiles such as crocodiles, sea turtles, ichthyosaurs, plesiosaurs and mosasaurs, marine mammals such as whales, seals and otters, and birds such as penguins. Many species of snakes are also aquatic and live their entire lives in the water. Among invertebrates, a number of insect species have adaptations for aquatic life and locomotion. Examples of aquatic insects include dragonfly larvae, water boatmen, and diving beetles. There are also aquatic spiders, although they tend to prefer other modes of locomotion under water than swimming proper.
Some breeds of dog swim recreationally. Umbra, a world record-holding dog, can swim 4 miles (6.4 km) in 73 minutes, placing her in the top 25% of human long-distance swimming competitions. The fishing cat is one wild cat species that has evolved a special adaptation for an aquatic or semi-aquatic lifestyle: webbed digits. Tigers and some individual jaguars are the only big cats known to enter water readily, though other big cats, including lions, have been observed swimming. A few domestic cat breeds also like swimming, such as the Turkish Van.
Horses, moose, and elk are very powerful swimmers, and can travel long distances in the water. Elephants are also capable of swimming, even in deep waters. Eyewitnesses have confirmed that camels, including dromedary and Bactrian camels, can swim, despite the fact that there is little deep water in their natural habitats.
Both domestic and wild rabbits can swim. Domestic rabbits are sometimes trained to swim as a circus attraction. A wild rabbit famously swam in an apparent attack on U.S. President Jimmy Carter's boat when it was threatened in its natural habitat.
The guinea pig (or cavy) is noted as having an excellent swimming ability. Mice can swim quite well. They do panic when placed in water, but many lab mice are used in the Morris water maze, a test to measure learning. When mice swim, they use their tails like flagella and kick with their legs.
Many snakes are excellent swimmers as well. Large adult anacondas spend the majority of their time in the water, and have difficulty moving on land.
Many monkeys can naturally swim and some, like the proboscis monkey, crab-eating macaque, and rhesus macaque swim regularly.
Human swimming
Swimming has been known amongst humans since prehistoric times; the earliest records of swimming are Stone Age paintings from around 7,000 years ago. Competitive swimming started in Europe around 1800 and was part of the first modern Summer Olympics, held in Athens in 1896, though not in a form comparable to the contemporary events. It was not until 1908 that the International Swimming Federation implemented regulations to produce competitive swimming.
| Biology and health sciences | Ethology | Biology |
19541428 | https://en.wikipedia.org/wiki/Barley | Barley | Barley, a member of the grass family, is a major cereal grain grown in temperate climates globally. It was one of the first cultivated grains; it was domesticated in the Fertile Crescent around 9000 BC, a process that gave it nonshattering spikelets and made it much easier to harvest. Its use then spread throughout Eurasia by 2000 BC. Barley prefers relatively low temperatures and well-drained soil to grow. It is relatively tolerant of drought and soil salinity but is less winter-hardy than wheat or rye.
In 2022, barley was fourth among grains in quantity produced, 155 million tonnes, behind maize, wheat, and rice. Globally, 70% of barley production is used as animal feed, while 30% is used as a source of fermentable material for beer, or further distilled into whisky, and as a component of various foods. It is used in soups and stews and in barley bread of various cultures. Barley grains are commonly made into malt using a traditional and ancient method of preparation. In English folklore, John Barleycorn personifies the grain and the alcoholic beverages made from it. English pub names such as The Barley Mow allude to its role in the production of beer.
Etymology
The Old English word for barley was bere. This survives in the north of Scotland as bere; it is used for a strain of six-row barley grown there. Modern English barley derives from the Old English adjective bærlic, meaning "of barley". The word barn derives from Old English bere-aern meaning "barley-store".
The name of the genus is from Latin hordeum, barley, likely related to Latin horrere, to bristle.
Description
Barley is a cereal, a member of the grass family with edible grains. Its flowers are clusters of spikelets arranged in a distinctive herringbone pattern. Each spikelet has a long, thin awn, making the ears look tufted. The spikelets are in clusters of three. In six-row barley, all three spikelets in each cluster are fertile; in two-row barley, only the central one is fertile. It is a self-pollinating, diploid species with 14 chromosomes.
The genome of barley was sequenced in 2012 by the International Barley Genome Sequencing Consortium and the UK Barley Sequencing Consortium. The genome is organised into seven pairs of nuclear chromosomes (recommended designations: 1H, 2H, 3H, 4H, 5H, 6H and 7H), and one mitochondrial and one chloroplast chromosome, with a total of 5000 Mbp. Details of the genome are freely available in several barley databases.
Origin
External phylogeny
The barley genus Hordeum is relatively closely related to wheat and rye within the Triticeae, and more distantly to rice within the BOP clade of grasses (Poaceae). The phylogeny of the Triticeae is complicated by hybridization between species, so there is a network of relationships rather than a simple inheritance-based tree.
Domestication
Barley was one of the first grains to be domesticated in the Fertile Crescent, an area of relatively abundant water in Western Asia, around 9,000 BC. Wild barley (H. vulgare ssp. spontaneum) ranges from North Africa and Crete in the west to Tibet in the east. A study of genome-wide diversity markers found Tibet to be an additional center of domestication of cultivated barley. The earliest archaeological evidence of the consumption of wild barley, Hordeum spontaneum, comes from the Epipaleolithic at Ohalo II at the southern end of the Sea of Galilee, where grinding stones with traces of starch were found. The remains were dated to about 23,000 BC. The earliest evidence for the domestication of barley, in the form of cultivars that cannot reproduce without human assistance, comes from Mesopotamia, specifically the Jarmo region of modern-day Iraq, around 9,000–7,000 BC.
Domestication changed the morphology of the barley grain substantially, from an elongated shape to a more rounded, spherical one. Wild barley has distinctive genes, alleles, and regulators with potential for resistance to abiotic or biotic stresses; these may help cultivated barley adapt to climatic changes. Wild barley has a brittle spike; upon maturity, the spikelets separate, facilitating seed dispersal. Domesticated barley has nonshattering spikelets, making it much easier to harvest the mature ears. The nonshattering condition is caused by a mutation in one of two tightly linked genes known as Bt1 and Bt2; many cultivars possess both mutations. The nonshattering condition is recessive, so varieties of barley that exhibit it are homozygous for the mutant allele. Domestication of barley was accompanied by changes in key phenotypic traits at the genetic level.
The wild barley found currently in the Fertile Crescent may not be the progenitor of the barley cultivated in Eritrea and Ethiopia, indicating that it may have been domesticated separately in eastern Africa.
Spread
Archaeobotanical evidence shows that barley had spread throughout Eurasia by 2,000 BC. Genetic analysis demonstrates that cultivated barley followed several different routes over time. By 4200 BC domesticated barley had reached Eastern Finland. Barley has been grown in the Korean Peninsula since the Early Mumun Pottery Period (circa 1500–850 BC). Barley (yava in Sanskrit) is mentioned many times in the Rigveda and other Indian scriptures as a principal grain in ancient India. Traces of barley cultivation have been found in the post-Neolithic Bronze Age Harappan civilization 5,700–3,300 years ago. Barley beer was probably one of the first alcoholic drinks developed by Neolithic humans; later it was used as currency. The Sumerian language had a word for barley, akiti. In ancient Mesopotamia, a stalk of barley was the primary symbol of the goddess Shala.
Rations of barley for workers appear in Linear B tablets in Mycenaean contexts at Knossos and at Mycenaean Pylos. In mainland Greece, the ritual significance of barley possibly dates back to the earliest stages of the Eleusinian Mysteries: the preparatory kykeon, or mixed drink of the initiates, prepared from barley and herbs, is mentioned in the Homeric Hymn to Demeter. The goddess's name may have meant "barley-mother", incorporating the ancient Cretan word δηαί (dēai), "barley". The practice was to dry the barley groats and roast them before preparing the porridge, according to Pliny the Elder's Natural History. Tibetan barley has been a staple food in Tibetan cuisine since the fifth century AD. This grain, along with a cool climate that permitted storage, produced a civilization that was able to raise great armies. It is made into a flour product called tsampa that is still a staple in Tibet. In medieval Europe, bread made from barley and rye was peasant food, while wheat products were consumed by the upper classes.
Taxonomy and varieties
Two-row and six-row barley
Spikelets are arranged in triplets that alternate along the rachis. In wild barley (and other Old World species of Hordeum), only the central spikelet is fertile, while the other two are reduced. This condition is retained in certain cultivars known as two-row barleys. A pair of mutations (one dominant, the other recessive) make the lateral spikelets fertile, producing six-row barleys. A mutation in one gene, vrs1, is responsible for the transition from two-row to six-row barley. Brewers in Europe tend to use two-row cultivars while breweries in North America use six-row barley (or a mix); there are important differences in enzyme content, kernel shape, and other factors that maltsters and brewers must take into consideration.
In traditional taxonomy, different forms of barley were classified as different species based on morphological differences. Two-row barley with shattering spikes (wild barley) was named Hordeum spontaneum. Two-row barley with nonshattering spikes was named H. distichon, six-row barley with nonshattering spikes H. vulgare (or H. hexastichum), and six-row barley with shattering spikes H. agriocrithon. Because these differences are driven by single-gene mutations, and in light of cytological and molecular evidence, most recent classifications treat these forms as a single species, H. vulgare.
Hulless barley
Hulless or "naked" barley (Hordeum vulgare var. nudum) is a form of domesticated barley with an easier-to-remove hull. Naked barley is an ancient food crop, but a new industry has developed around uses of selected hulless barley to increase the digestibility of the grain, especially for pigs and poultry. Hulless barley has been investigated for several potential new applications as whole grain, bran, and flour.
Production
In 2022, world production of barley was 155 million tonnes, led by Russia accounting for 15% of the world total (table). France, Germany, and Canada were secondary producers. Worldwide barley production was fourth among grains, following maize (1.2 billion tonnes), wheat (808 million tonnes), and rice (776 million tonnes).
Cultivation
Barley is a crop that prefers relatively low temperatures in the growing season; it is grown around the world in temperate areas. It grows best in well-drained soil in full sunshine. In the tropics and subtropics, it is grown for food and straw in South Asia, North and East Africa, and the Andes of South America. In dry regions it requires irrigation. It has a short growing season and is relatively drought-tolerant. Barley is more tolerant of soil salinity than other cereals, though tolerance varies among cultivars. It is less winter-hardy than winter wheat and far less so than rye.
Like other cereals, barley is typically planted on tilled land. Seed was traditionally scattered, but in developed countries is usually drilled. As it grows it requires soil nutrients (nitrogen, phosphorus, potassium), often supplied as fertilizers. It needs to be monitored for pests and diseases, and if necessary treated before these become serious. The stems and ears turn yellow when ripe, and the ears begin to droop. Traditional harvesting was by hand with sickles or scythes; in developed countries, harvesting is mechanised with combine harvesters.
Pests and diseases
Among the insect pests of barley are aphids such as the Russian wheat aphid, caterpillars such as those of the armyworm moth, the barley mealybug, and wireworm larvae of click beetle genera such as Aeolus. Aphid damage can often be tolerated, whereas armyworms can eat whole leaves. Wireworms kill seedlings and require seed treatment or pre-planting treatment.
Serious fungal diseases of barley include powdery mildew caused by Blumeria graminis, leaf scald caused by Rhynchosporium secalis, barley rust caused by Puccinia hordei, crown rust caused by Puccinia coronata, various diseases caused by Cochliobolus sativus, Fusarium ear blight, and stem rust (Puccinia graminis).
Bacterial diseases of barley include bacterial blight caused by Xanthomonas campestris pv. translucens.
Barley is susceptible to several viral diseases, such as barley mild mosaic bymovirus. Some viruses, such as barley yellow dwarf virus, vectored by the rice root aphid, can cause serious crop injury.
For durable disease resistance, quantitative resistance is more important than qualitative resistance. The most important foliar diseases have corresponding resistance gene regions on all chromosomes of barley.
A large number of molecular markers are available for breeding of resistance to leaf rust, powdery mildew, Rhynchosporium secalis, Pyrenophora teres f. teres, Barley yellow dwarf virus, and the Barley yellow mosaic virus complex.
Food
Preparation
Hulled barley (or covered barley) is eaten after removing the inedible, fibrous outer husk or hull. Once removed, it is called dehulled barley (or pot barley or scotch barley). Pearl barley (or pearled barley) is dehulled barley processed further to remove most of the bran, then polished. Barley meal, a wholemeal barley flour lighter than wheat meal but darker in colour, is used in gruel. This gruel is known as sawīq (سويق) in the Arab world.
With a long history of cultivation in the Middle East, barley is used in a wide range of traditional Arabic, Assyrian, Israelite, Kurdish, and Persian foodstuffs including keşkek, kashk, and murri. Barley soup is traditionally eaten during Ramadan in Saudi Arabia. Cholent or hamin (in Hebrew) is a traditional Jewish stew often eaten on the Sabbath, in numerous recipes by both Mizrachi and Ashkenazi Jews; its original form was a barley porridge.
In Eastern and Central Europe, barley is used in soups and stews such as ričet. In Africa, where it is a traditional food plant, it has the potential to improve nutrition, boost food security, foster rural development, and support sustainable landcare.
The six-row variety bere is cultivated in Orkney, Shetland, Caithness and the Western Isles of the Scottish Highlands and Islands. When milled into beremeal, it is used locally in bread, biscuits, and the traditional beremeal bannock.
In Japanese cuisine, barley is mixed with rice and steamed as mugimeshi. The naval surgeon Takaki Kanehiro introduced it into institutional cooking to combat beriberi, endemic in the armed forces in the 19th century. It became standard prison fare, and remains a staple in the Japan Self-Defense Forces.
Nutrition
Cooked barley is 69% water, 28% carbohydrates, 2% protein, and 0.4% fat (table). In a 100-gram (3.5 oz) reference serving, cooked barley provides food energy and is a good source (10% or more of the Daily Value, DV) of essential nutrients, including dietary fibre, the B vitamin niacin (14% DV), and the dietary minerals iron (10% DV) and manganese (12% DV) (table).
Health implications
According to Health Canada and the US Food and Drug Administration, consuming at least 3 grams per day of barley beta-glucan can lower levels of blood cholesterol, a risk factor for cardiovascular diseases.
Eating whole-grain barley, a high-fibre grain, improves regulation of blood sugar (i.e., reduces blood glucose response to a meal). Consuming breakfast cereals containing barley over weeks to months improves cholesterol levels and glucose regulation.
Barley contains gluten, which makes it unsuitable for consumption by people with gluten-related disorders such as coeliac disease, non-coeliac gluten sensitivity, and wheat allergy. Nevertheless, some wheat allergy patients can tolerate barley.
Uses
Beer, whisky, and soft drinks
Barley, made into malt, is a key ingredient in beer and whisky production. Two-row barley is traditionally used in German and English beers; six-row barley was traditionally used in US beers, but both varieties are now in common usage. Scottish and Irish whisky, distilled from green beer, are made primarily from barley. About 25% of American barley is used for malting, for which barley is the best-suited grain; accordingly, barley is often assessed by its malting enzyme content. Barley wine is a style of strong beer from the English brewing tradition. An 18th-century alcoholic drink of the same name was made by boiling barley in water, then mixing the barley water with white wine, borage, lemon and sugar. In the 19th century, a different barley wine was prepared from recipes of ancient Greek origin.
Nonalcoholic drinks such as barley water and roasted barley tea have been made by boiling barley in water. In Italy, roasted barley is sometimes used as coffee substitute, caffè d'orzo (barley coffee).
Animal feed
Some 70% of the world's barley production is used as livestock feed, for example for cattle feeding in western Canada. In 2014, an enzymatic process was devised to make a high-protein fish feed from barley, suitable for carnivorous fish such as trout and salmon.
Other uses
Barley straw has been placed in mesh bags and floated in fish ponds or water gardens to help prevent algal growth without harming pond plants and animals. The technique's effectiveness is at best mixed.
Barley grains were once used for measurement in England, there being nominally three or four barleycorns to the inch. By the 19th century, this had been superseded by standard inch measures. In ancient Mesopotamia, barley was used as a form of money, the standard unit of weight for barley, and hence of value, being the shekel.
Culture and folklore
In the Old English poem Beowulf, and in Norse mythology, Scyld Scefing (the second name meaning "with a sheaf") and his son Beow ("Barley") are associated with the grain, or are possibly corn-gods; J. R. R. Tolkien wrote a poem "King Sheave" about them, and based a major element of his legendarium, the Old Straight Road from Middle-earth to the earthly paradise of Valinor, on their story. William of Malmesbury's 12th century Chronicle tells the story of the related figure Sceafa as a sleeping child in a boat without oars with a sheaf of corn at his head. Axel Olrik identified Peko, a parallel "barley-figure" in Finnish culture, in turn connected by R.D. Fulk with the Eddaic Bergelmir.
In English folklore, the figure of John Barleycorn in the folksong of the same name is a personification of barley, and of the alcoholic beverages made from it: beer and whisky. In the song, John Barleycorn is represented as suffering attacks, death, and indignities that correspond to the various stages of barley cultivation, such as reaping and malting; but he is revenged by getting the men drunk: "And little Sir John and the nut-brown bowl / Proved the strongest man at last." The folksong "Elsie Marley" celebrates an alewife of County Durham with lines such as "And do you ken Elsie Marley, honey? / The wife that sells the barley, honey". The antiquary Cuthbert Sharp records that Elsie Marley was "a handsome, buxom, bustling landlady, and brought good custom to the [ale] house by her civility and attention."
English pub names such as The Barley Mow, John Barleycorn, Malt Shovel, and Mash Tun allude to barley's role in the production of beer.
| Biology and health sciences | Poales | null |
19541494 | https://en.wikipedia.org/wiki/Cloud%20computing | Cloud computing | Cloud computing is "a paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources with self-service provisioning and administration on-demand," according to ISO.
Essential Characteristics
In 2011, the National Institute of Standards and Technology (NIST) identified five "essential characteristics" for cloud systems. Below are the exact definitions according to NIST:
On-demand self-service: "A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider."
Broad network access: "Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations)."
Resource pooling: "The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand."
Rapid elasticity: "Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear unlimited and can be appropriated in any quantity at any time."
Measured service: "Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service."
By 2023, the International Organization for Standardization (ISO) had expanded and refined the list.
History
The history of cloud computing extends back to the 1960s, with the initial concepts of time-sharing becoming popularized via remote job entry (RJE). The "data center" model, where users submitted jobs to operators to run on mainframes, was predominantly used during this era. This was a time of exploration and experimentation with ways to make large-scale computing power available to more users through time-sharing, optimizing the infrastructure, platform, and applications, and increasing efficiency for end users.
The "cloud" metaphor for virtualized services dates to 1994, when it was used by General Magic for the universe of "places" that mobile agents in the Telescript environment could "go". The metaphor is credited to David Hoffman, a General Magic communications specialist, based on its long-standing use in networking and telecom. The expression cloud computing became more widely known in 1996 when Compaq Computer Corporation drew up a business plan for future computing and the Internet. The company's ambition was to supercharge sales with "cloud computing-enabled applications". The business plan foresaw that online consumer file storage would likely be commercially successful. As a result, Compaq decided to sell server hardware to internet service providers.
In the 2000s, the application of cloud computing began to take shape with the establishment of Amazon Web Services (AWS) in 2002, which allowed developers to build applications independently. In 2006, Amazon released the Simple Storage Service, known as Amazon S3, and the Amazon Elastic Compute Cloud (EC2). In 2008, NASA developed the first open-source software for deploying private and hybrid clouds.
The following decade saw the launch of various cloud services. In 2010, Microsoft launched Microsoft Azure, and Rackspace Hosting and NASA initiated an open-source cloud-software project, OpenStack. IBM introduced the IBM SmartCloud framework in 2011, and Oracle announced the Oracle Cloud in 2012. In December 2019, Amazon launched AWS Outposts, a service that extends AWS infrastructure, services, APIs, and tools to customer data centers, co-location spaces, or on-premises facilities.
Value proposition
Cloud computing can enable shorter time to market by providing pre-configured tools, scalable resources, and managed services, allowing users to focus on their core business value instead of maintaining infrastructure. Cloud platforms can enable organizations and individuals to reduce upfront capital expenditures on physical infrastructure by shifting to an operational expenditure model, where costs scale with usage. Cloud platforms also offer managed services and tools, such as artificial intelligence, data analytics, and machine learning, which might otherwise require significant in-house expertise and infrastructure investment.
While cloud computing can offer cost advantages through effective resource optimization, organizations often face challenges such as unused resources, inefficient configurations, and hidden costs without proper oversight and governance. Many cloud platforms provide cost management tools, such as AWS Cost Explorer and Azure Cost Management, and frameworks like FinOps have emerged to standardize financial operations in the cloud. Cloud computing also facilitates collaboration, remote work, and global service delivery by enabling secure access to data and applications from any location with an internet connection.
Cloud providers offer various redundancy options for core services, such as managed storage and managed databases, though redundancy configurations often vary by service tier. Advanced redundancy strategies, such as cross-region replication or failover systems, typically require explicit configuration and may incur additional costs or licensing fees.
Cloud environments operate under a shared responsibility model, where providers are typically responsible for infrastructure security, physical hardware, and software updates, while customers are accountable for data encryption, identity and access management (IAM), and application-level security. These responsibilities vary depending on the cloud service model—Infrastructure as a Service (IaaS), Platform as a Service (PaaS), or Software as a Service (SaaS)—with customers typically having more control and responsibility in IaaS environments and progressively less in PaaS and SaaS models, often trading control for convenience and managed services.
Factors Influencing Adoption and Suitability of Cloud Computing
The decision to adopt cloud computing or maintain on-premises infrastructure depends on factors such as scalability, cost structure, latency requirements, regulatory constraints, and infrastructure customization.
Organizations with variable or unpredictable workloads, limited capital for upfront investments, or a focus on rapid scalability benefit from cloud adoption. Startups, SaaS companies, and e-commerce platforms often prefer the pay-as-you-go operational expenditure (OpEx) model of cloud infrastructure. Additionally, companies prioritizing global accessibility, remote workforce enablement, disaster recovery, and leveraging advanced services such as AI/ML and analytics are well-suited for the cloud. In recent years, some cloud providers have started offering specialized services for high-performance computing and low-latency applications, addressing some use cases previously exclusive to on-premises setups.
On the other hand, organizations with strict regulatory requirements, highly predictable workloads, or reliance on deeply integrated legacy systems may find cloud infrastructure less suitable. Businesses in industries like defense, government, or those handling highly sensitive data often favor on-premises setups for greater control and data sovereignty. Additionally, companies with ultra-low latency requirements, such as high-frequency trading (HFT) firms, rely on custom hardware (e.g., FPGAs) and physical proximity to exchanges, which most cloud providers cannot fully replicate despite recent advancements. Similarly, tech giants like Google, Meta, and Amazon build their own data centers due to economies of scale, predictable workloads, and the ability to customize hardware and network infrastructure for optimal efficiency. However, these companies also use cloud services selectively for certain workloads and applications where it aligns with their operational needs.
In practice, many organizations are increasingly adopting hybrid cloud architectures, combining on-premises infrastructure with cloud services. This approach allows businesses to balance scalability, cost-effectiveness, and control, offering the benefits of both deployment models while mitigating their respective limitations.
Challenges and limitations
One of the main challenges of cloud computing, in comparison to more traditional on-premises computing, is data security and privacy. Cloud users entrust their sensitive data to third-party providers, who may not have adequate measures to protect it from unauthorized access, breaches, or leaks. Cloud users also face compliance risks if they have to adhere to certain regulations or standards regarding data protection, such as GDPR or HIPAA.
Another challenge of cloud computing is reduced visibility and control. Cloud users may not have full insight into how their cloud resources are managed, configured, or optimized by their providers. They may also have limited ability to customize or modify their cloud services according to their specific needs or preferences. Complete understanding of all technology may be impossible, especially given the scale, complexity, and deliberate opacity of contemporary systems; however, there is a need for understanding complex technologies and their interconnections to have power and agency within them. The metaphor of the cloud can be seen as problematic as cloud computing retains the aura of something noumenal and numinous; it is something experienced without precisely understanding what it is or how it works.
Additionally, cloud migration is a significant challenge. This process involves transferring data, applications, or workloads from one cloud environment to another, or from on-premises infrastructure to the cloud. Cloud migration can be complicated, time-consuming, and expensive, particularly when there are compatibility issues between different cloud platforms or architectures. If not carefully planned and executed, cloud migration can lead to downtime, reduced performance, or even data loss.
Cloud migration challenges
According to the 2024 State of the Cloud Report by Flexera, approximately 50% of respondents identified the following top challenges when migrating workloads to public clouds:
"Understanding application dependencies"
"Comparing on-premise and cloud costs"
"Assessing technical feasibility."
Implementation challenges
Applications hosted in the cloud are susceptible to the fallacies of distributed computing, a series of misconceptions that can lead to significant issues in software development and deployment.
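One of those misconceptions, "the network is reliable", can be guarded against with defensive timeouts and retries. The sketch below uses only the Python standard library; the URL, retry count, and backoff policy are illustrative assumptions, not a prescription:

```python
import time
import urllib.request
from urllib.error import URLError

def fetch_with_retries(url: str, attempts: int = 3, timeout_s: float = 2.0) -> bytes:
    """Retry transient network failures instead of assuming the call succeeds."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except URLError:
            if attempt == attempts:
                raise  # give up after the final attempt
            time.sleep(2 ** attempt)  # exponential backoff between retries

data = fetch_with_retries("https://example.com/")  # placeholder endpoint
```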
Cloud cost overruns
In a report by Gartner, a survey of 200 IT leaders revealed that 69% experienced budget overruns in their organizations' cloud expenditures during 2023. Conversely, 31% of IT leaders whose organizations stayed within budget attributed their success to accurate forecasting and budgeting, proactive monitoring of spending, and effective optimization.
The 2024 Flexera State of Cloud Report identifies the top cloud challenges as managing cloud spend, followed by security concerns and lack of expertise. Public cloud expenditures exceeded budgeted amounts by an average of 15%. The report also reveals that cost savings is the top cloud initiative for 60% of respondents. Furthermore, 65% measure cloud progress through cost savings, while 42% prioritize shorter time-to-market, indicating that cloud's promise of accelerated deployment is often overshadowed by cost concerns.
Service Level Agreements
Cloud providers' Service Level Agreements (SLAs) typically do not encompass all forms of service interruptions. Exclusions commonly include planned maintenance, downtime resulting from external factors such as network issues, human errors such as misconfigurations, natural disasters, force majeure events, and security breaches. Customers usually bear the responsibility of monitoring SLA compliance and must file claims for any unmet SLAs within a designated timeframe. They should also be aware of how deviations from SLAs are calculated, as these parameters may vary by service. These requirements can place a considerable burden on customers. Additionally, SLA percentages and conditions can differ across services within the same provider, and some services lack any SLA altogether. In cases of service interruptions due to hardware failures in the cloud provider, the company typically does not offer monetary compensation; instead, eligible users may receive credits as outlined in the corresponding SLA.
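To make the credit mechanics concrete, here is a hedged Python sketch of a tiered uptime-credit schedule; the tiers, the 30-day month, and the downtime figure are invented for illustration and do not correspond to any particular provider's SLA:

```python
# Hypothetical tiered credit schedule: (minimum monthly uptime %, credit %).
CREDIT_TIERS = [
    (99.99, 0),    # SLA met: no credit
    (99.0, 10),
    (95.0, 25),
    (0.0, 100),
]

def service_credit(uptime_percent: float) -> int:
    """Return the credit percentage owed for a measured monthly uptime."""
    for threshold, credit in CREDIT_TIERS:
        if uptime_percent >= threshold:
            return credit
    return 100

# 40 minutes of downtime in a 30-day month:
downtime_min = 40
uptime = 100 * (1 - downtime_min / (30 * 24 * 60))
print(f"uptime = {uptime:.3f}% -> credit = {service_credit(uptime)}%")  # ~99.907% -> 10%
```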
Leaky abstractions
Cloud computing abstractions aim to simplify resource management, but leaky abstractions can expose underlying complexities. Abstraction quality varies by cloud vendor, service, and architecture. Mitigating leaky abstractions requires users to understand the implementation details and limitations of the cloud services they use.
Service lock-in within the same vendor
Service lock-in within the same vendor occurs when a customer becomes dependent on specific services within a cloud vendor, making it challenging to switch to alternative services within the same vendor when their needs change.
Security and privacy
Cloud computing poses privacy concerns because the service provider can access the data that is in the cloud at any time. It could accidentally or deliberately alter or delete information. Many cloud providers can share information with third parties if necessary for purposes of law and order without a warrant. That is permitted in their privacy policies, which users must agree to before they start using cloud services. Solutions to privacy include policy and legislation as well as end-users' choices for how data is stored. Users can encrypt data that is processed or stored within the cloud to prevent unauthorized access. Identity management systems can also provide practical solutions to privacy concerns in cloud computing. These systems distinguish between authorized and unauthorized users and determine the amount of data that is accessible to each entity. The systems work by creating and describing identities, recording activities, and getting rid of unused identities.
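As a minimal sketch of the client-side encryption option mentioned above, the following uses the Fernet recipe from the third-party Python cryptography package (pip install cryptography); key custody and the actual upload and download steps are assumptions left out of the example:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this key outside the cloud provider's reach
fernet = Fernet(key)

plaintext = b"customer record: alice@example.com"   # illustrative data
ciphertext = fernet.encrypt(plaintext)              # safe to store with the provider

# Later, after retrieving the ciphertext from cloud storage:
assert fernet.decrypt(ciphertext) == plaintext
```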
According to the Cloud Security Alliance, the top three threats in the cloud are Insecure Interfaces and APIs, Data Loss & Leakage, and Hardware Failure, which accounted for 29%, 25% and 10% of all cloud security outages respectively. Together, these form shared technology vulnerabilities. In a cloud provider platform shared by different users, information belonging to different customers may reside on the same data server. Additionally, Eugene Schultz, chief technology officer at Emagined Security, said that hackers are spending substantial time and effort looking for ways to penetrate the cloud. "There are some real Achilles' heels in the cloud infrastructure that are making big holes for the bad guys to get into". Because data from hundreds or thousands of companies can be stored on large cloud servers, hackers can theoretically gain control of huge stores of information through a single attack, a process he called "hyperjacking". Some examples of this include the Dropbox security breach and the 2014 iCloud leak. Dropbox was breached in October 2014, with over seven million of its users' passwords stolen by hackers seeking to obtain monetary value in Bitcoin (BTC). With these passwords, attackers can read private data as well as have that data indexed by search engines, making the information public.
There is also the problem of legal ownership of the data: if a user stores some data in the cloud, can the cloud provider profit from it? Many Terms of Service agreements are silent on the question of ownership. Physical control of the computer equipment (private cloud) is more secure than having the equipment off-site and under someone else's control (public cloud). This gives public cloud service providers a strong incentive to prioritize building and maintaining robust management of secure services. Some small businesses that lack expertise in IT security may find that a public cloud is actually more secure for them. There is also the risk that end users do not understand the issues involved when signing on to a cloud service: people often do not read the many pages of the terms of service agreement and just click "Accept" without reading. This matters now that cloud computing is common and required for some services to work, for example for an intelligent personal assistant (Apple's Siri or Google Assistant). Fundamentally, private cloud is seen as more secure, with higher levels of control for the owner; public cloud, however, is seen as more flexible and requires less time and money investment from the user.
The attacks that can be made on cloud computing systems include man-in-the-middle attacks, phishing attacks, authentication attacks, and malware attacks. One of the largest threats is considered to be malware attacks, such as Trojan horses. Research conducted in 2022 revealed that the Trojan horse injection method is a serious problem with harmful impacts on cloud computing systems.
Service models
The National Institute of Standards and Technology recognized three cloud service models in 2011: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). The International Organization for Standardization (ISO) later identified additional models in 2023, including "Network as a Service", "Communications as a Service", "Compute as a Service", and "Data Storage as a Service".
Infrastructure as a service (IaaS)
Infrastructure as a service (IaaS) refers to online services that provide high-level APIs used to abstract various low-level details of underlying network infrastructure like physical computing resources, location, data partitioning, scaling, security, backup, etc. A hypervisor runs the virtual machines as guests. Pools of hypervisors within the cloud operational system can support large numbers of virtual machines and the ability to scale services up and down according to customers' varying requirements. Linux containers run in isolated partitions of a single Linux kernel running directly on the physical hardware. Linux cgroups and namespaces are the underlying Linux kernel technologies used to isolate, secure and manage the containers. The use of containers offers higher performance than virtualization because there is no hypervisor overhead. IaaS clouds often offer additional resources such as a virtual-machine disk-image library, raw block storage, file or object storage, firewalls, load balancers, IP addresses, virtual local area networks (VLANs), and software bundles.
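As an illustrative sketch of this on-demand provisioning, the following uses AWS's boto3 SDK (pip install boto3) to request a single virtual machine; the AMI ID is a placeholder, and the region and credentials are assumed to be configured in the environment, so treat this as a sketch rather than a definitive recipe:

```python
import boto3

# Create an EC2 client; credentials are assumed to come from the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image ID
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])  # ID of the newly provisioned VM
```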
The NIST's definition of cloud computing describes IaaS as "where the consumer is able to deploy and run arbitrary software, which can include operating systems and applications. The consumer does not manage or control the underlying cloud infrastructure but has control over operating systems, storage, and deployed applications; and possibly limited control of select networking components (e.g., host firewalls)."
IaaS-cloud providers supply these resources on-demand from their large pools of equipment installed in data centers. For wide-area connectivity, customers can use either the Internet or carrier clouds (dedicated virtual private networks). To deploy their applications, cloud users install operating-system images and their application software on the cloud infrastructure. In this model, the cloud user patches and maintains the operating systems and the application software. Cloud providers typically bill IaaS services on a utility computing basis: cost reflects the number of resources allocated and consumed.
Platform as a service (PaaS)
The NIST's definition of cloud computing defines Platform as a Service as:
PaaS vendors offer a development environment to application developers. The provider typically develops toolkits and standards for development, and channels for distribution and payment. In the PaaS model, cloud providers deliver a computing platform, typically including an operating system, a programming-language execution environment, a database, and a web server. Application developers develop and run their software on a cloud platform instead of directly buying and managing the underlying hardware and software layers. With some PaaS offerings, the underlying compute and storage resources scale automatically to match application demand, so that the cloud user does not have to allocate resources manually.
Some integration and data management providers also use specialized applications of PaaS as delivery models for data. Examples include iPaaS (Integration Platform as a Service) and dPaaS (Data Platform as a Service). iPaaS enables customers to develop, execute and govern integration flows. Under the iPaaS integration model, customers drive the development and deployment of integrations without installing or managing any hardware or middleware. dPaaS delivers integration—and data-management—products as a fully managed service. Under the dPaaS model, the PaaS provider, not the customer, manages the development and execution of programs by building data applications for the customer. dPaaS users access data through data-visualization tools.
Software as a service (SaaS)
The NIST's definition of cloud computing defines Software as a Service as:
In the software as a service (SaaS) model, users gain access to application software and databases. Cloud providers manage the infrastructure and platforms that run the applications. SaaS is sometimes referred to as "on-demand software" and is usually priced on a pay-per-use basis or using a subscription fee. In the SaaS model, cloud providers install and operate application software in the cloud and cloud users access the software from cloud clients. Cloud users do not manage the cloud infrastructure and platform where the application runs. This eliminates the need to install and run the application on the cloud user's own computers, which simplifies maintenance and support. Cloud applications differ from other applications in their scalability—which can be achieved by cloning tasks onto multiple virtual machines at run-time to meet changing work demand. Load balancers distribute the work over the set of virtual machines. This process is transparent to the cloud user, who sees only a single access-point. To accommodate a large number of cloud users, cloud applications can be multitenant, meaning that any machine may serve more than one cloud-user organization.
The pricing model for SaaS applications is typically a monthly or yearly flat fee per user, so prices become scalable and adjustable if users are added or removed at any point. It may also be free. Proponents claim that SaaS gives a business the potential to reduce IT operational costs by outsourcing hardware and software maintenance and support to the cloud provider. This enables the business to reallocate IT operations costs away from hardware/software spending and from personnel expenses, towards meeting other goals. In addition, with applications hosted centrally, updates can be released without the need for users to install new software. One drawback of SaaS comes with storing the users' data on the cloud provider's server. As a result, there could be unauthorized access to the data. Examples of applications offered as SaaS are games and productivity software like Google Docs and Office Online. SaaS applications may be integrated with cloud storage or File hosting services, which is the case with Google Docs being integrated with Google Drive, and Office Online being integrated with OneDrive.
Serverless computing
Serverless computing allows customers to use various cloud capabilities without the need to provision, deploy, or manage hardware or software resources, apart from providing their application code or data. ISO/IEC 22123-2:2023 classifies serverless alongside Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) under the broader category of cloud service categories. Notably, while ISO refers to these classifications as cloud service categories, the National Institute of Standards and Technology (NIST) refers to them as service models.
Deployment models
"A cloud deployment model represents the way in which cloud computing can be organized based on the control and sharing of physical or virtual resources." Cloud deployment models define the fundamental patterns of interaction between cloud customers and cloud providers. They do not detail implementation specifics or the configuration of resources.
Private
Private cloud is cloud infrastructure operated solely for a single organization, whether managed internally or by a third party, and hosted either internally or externally. Undertaking a private cloud project requires significant engagement to virtualize the business environment, and requires the organization to reevaluate decisions about existing resources. It can improve business, but every step in the project raises security issues that must be addressed to prevent serious vulnerabilities. Self-run data centers are generally capital intensive. They have a significant physical footprint, requiring allocations of space, hardware, and environmental controls. These assets have to be refreshed periodically, resulting in additional capital expenditures. They have attracted criticism because users "still have to buy, build, and manage them" and thus do not benefit from less hands-on management, essentially "[lacking] the economic model that makes cloud computing such an intriguing concept".
Public
Cloud services are considered "public" when they are delivered over the public Internet, and they may be offered as a paid subscription, or free of charge. Architecturally, there are few differences between public- and private-cloud services, but security concerns increase substantially when services (applications, storage, and other resources) are shared by multiple customers. Most public-cloud providers offer direct-connection services that allow customers to securely link their legacy data centers to their cloud-resident applications.
Several factors, such as the functionality of the solution, cost, integration and organizational aspects, and safety and security, influence the decision of enterprises and organizations to choose a public cloud or an on-premises solution.
Hybrid
Hybrid cloud is a composition of a public cloud and a private environment, such as a private cloud or on-premises resources, that remain distinct entities but are bound together, offering the benefits of multiple deployment models. Hybrid cloud can also mean the ability to connect collocation, managed and/or dedicated services with cloud resources. Gartner defines a hybrid cloud service as a cloud computing service that is composed of some combination of private, public and community cloud services, from different service providers. A hybrid cloud service crosses isolation and provider boundaries so that it cannot be simply put in one category of private, public, or community cloud service. It allows one to extend either the capacity or the capability of a cloud service, by aggregation, integration or customization with another cloud service.
Varied use cases for hybrid cloud composition exist. For example, an organization may store sensitive client data in house on a private cloud application, but interconnect that application to a business intelligence application provided on a public cloud as a software service. This example of hybrid cloud extends the capabilities of the enterprise to deliver a specific business service through the addition of externally available public cloud services. Hybrid cloud adoption depends on a number of factors such as data security and compliance requirements, level of control needed over data, and the applications an organization uses.
Another example of hybrid cloud is one where IT organizations use public cloud computing resources to meet temporary capacity needs that cannot be met by the private cloud. This capability enables hybrid clouds to employ cloud bursting for scaling across clouds. Cloud bursting is an application deployment model in which an application runs in a private cloud or data center and "bursts" to a public cloud when the demand for computing capacity increases. A primary advantage of cloud bursting and a hybrid cloud model is that an organization pays for extra compute resources only when they are needed. Cloud bursting enables data centers to create an in-house IT infrastructure that supports average workloads, and to use cloud resources from public or private clouds during spikes in processing demands.
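The bursting decision itself can be sketched as a simple threshold rule. The following C example is illustrative only: the capacity figure, demand values, and function names are hypothetical, not a real provider interface. Overflow above the private capacity is routed to a public cloud, and is paid for only when non-zero.

```c
#include <stdio.h>

#define PRIVATE_CAPACITY 100  /* units of work the private cloud can absorb (illustrative) */

/* Split a workload between private capacity and a public-cloud "burst". */
static void schedule(int demand) {
    int private_load = demand < PRIVATE_CAPACITY ? demand : PRIVATE_CAPACITY;
    int burst = demand - private_load;  /* extra resources, paid for only when needed */
    printf("demand %3d: private %3d, public burst %3d\n",
           demand, private_load, burst);
}

int main(void) {
    int demands[] = {60, 90, 140, 230, 80};  /* e.g., a traffic spike mid-sequence */
    for (int i = 0; i < 5; i++)
        schedule(demands[i]);
    return 0;
}
```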
Community
Community cloud shares infrastructure between several organizations from a specific community with common concerns (security, compliance, jurisdiction, etc.), whether managed internally or by a third party, and hosted internally or externally. The costs are spread over fewer users than a public cloud (but over more than a private cloud), so only a portion of the potential cost savings of cloud computing is achieved.
Multi cloud
According to ISO/IEC 22123-1: "multi-cloud is a cloud deployment model in which a customer uses public cloud services provided by two or more cloud service providers". Poly cloud refers to the use of multiple public clouds for the purpose of leveraging specific services that each provider offers. It differs from multi-cloud in that it is not designed to increase flexibility or mitigate failures, but rather to allow an organization to achieve more than could be done with a single provider.
Market
According to International Data Corporation (IDC), global spending on cloud computing services has reached $706 billion and is expected to reach $1.3 trillion by 2025. Gartner estimated that global public cloud services end-user spending would reach $600 billion by 2023. According to a McKinsey & Company report, cloud cost-optimization levers and value-oriented business use cases could put more than $1 trillion in run-rate EBITDA across Fortune 500 companies up for grabs in 2030. In 2022, more than $1.3 trillion in enterprise IT spending was at stake from the shift to the cloud, growing to almost $1.8 trillion in 2025, according to Gartner.
The European Commission's 2012 Communication identified several issues which were impeding the development of the cloud computing market:
fragmentation of the digital single market across the EU
concerns about contracts including reservations about data access and ownership, data portability, and change control
variations in standards applicable to cloud computing
The Communication set out a series of "digital agenda actions" which the Commission proposed to undertake in order to support the development of a fair and effective market for cloud computing services.
List of public clouds
Adobe Creative Cloud
Amazon Web Services
Google Cloud
IBM Cloud
Microsoft Azure
OpenStack
Oracle Cloud
Panorama9
Similar concepts
The goal of cloud computing is to allow users to benefit from all of these technologies without the need for deep knowledge about or expertise with each one of them. The cloud aims to cut costs and help users focus on their core business instead of being impeded by IT obstacles. The main enabling technology for cloud computing is virtualization. Virtualization software separates a physical computing device into one or more "virtual" devices, each of which can be easily used and managed to perform computing tasks. With operating system–level virtualization essentially creating a scalable system of multiple independent computing devices, idle computing resources can be allocated and used more efficiently. Virtualization provides the agility required to speed up IT operations and reduces cost by increasing infrastructure utilization. Autonomic computing automates the process through which the user can provision resources on-demand. By minimizing user involvement, automation speeds up the process, reduces labor costs and reduces the possibility of human errors.
Cloud computing uses concepts from utility computing to provide metrics for the services used. Cloud computing attempts to address QoS (quality of service) and reliability problems of other grid computing models.
Cloud computing shares characteristics with:
Client–server model: Client–server computing refers broadly to any distributed application that distinguishes between service providers (servers) and service requestors (clients).
Computer bureau: A service bureau providing computer services, particularly from the 1960s to 1980s.
Grid computing: A form of distributed and parallel computing, whereby a 'super and virtual computer' is composed of a cluster of networked, loosely coupled computers acting in concert to perform very large tasks.
Fog computing: A distributed computing paradigm that provides data, compute, storage and application services closer to the client or near-user edge devices, such as network routers. Furthermore, fog computing handles data at the network level, on smart devices and on the end-user client side (e.g. mobile devices), instead of sending data to a remote location for processing.
Utility computing: The "packaging of computing resources, such as computation and storage, as a metered service similar to a traditional public utility, such as electricity."
Peer-to-peer: A distributed architecture without the need for central coordination. Participants are both suppliers and consumers of resources (in contrast to the traditional client–server model).
Cloud sandbox: A live, isolated computer environment in which a program, code or file can run without affecting the application in which it runs.
| Technology | Computer architecture concepts | null |
19554533 | https://en.wikipedia.org/wiki/Homo%20erectus | Homo erectus | Homo erectus is an extinct species of archaic human from the Pleistocene, spanning nearly 2 million years. It is the first human species to evolve a humanlike body plan and gait, to leave Africa and colonize Asia and Europe, and to wield fire. H. erectus is the ancestor of later Homo species, including H. heidelbergensis, the last common ancestor of modern humans, Neanderthals, and Denisovans. As such a widely distributed species both geographically and temporally, H. erectus anatomy varies considerably. Subspecies are sometimes recognized: H. e. erectus, H. e. pekinensis, H. e. soloensis, H. e. ergaster, H. e. georgicus, and H. e. tautavelensis.
The species was first described by Eugène Dubois in 1893 as "Pithecanthropus erectus" using a skullcap, molar, and femur from Java, Indonesia. Further discoveries around East Asia were used to contend that humanity evolved out of Asia. Based on historical race concepts, it was argued that local H. erectus populations directly evolved into local modern human populations (polycentricism) rather than everyone sharing an anatomically modern ancestor (monogenism). As the fossil record improved over the mid-to-late 20th century, "Out of Africa" theory and monogenism became the consensus.
The skull usually has a pronounced brow ridge, a protruding jaw, and large teeth. The bones are extraordinarily thickened. East Asian H. erectus normally have much more robust skeletons and bigger brain volumes, averaging roughly 1,000 cc, within the range of variation for modern humans; that of H. e. georgicus, however, was as low as about 546 cc. H. erectus probably had a faster, apelike growth trajectory, lacking the extended childhood required for language acquisition. Reconstructed adult body dimensions range from about 146 to 185 cm in height and about 40 to 68 kg in weight.
H. erectus invented the Acheulean industry, a major innovation of large, heavy-duty stone tools, which may have been used in butchery, vegetable processing, and woodworking, perhaps of digging sticks and spears. H. erectus was a major predator of large herbivores on the expanding savannas of the Quaternary glaciation. The species is usually characterized as the first hunter-gatherer and the first to practice sexual division of labor. Evidence of fire and cave habitation by H. erectus is sparse; populations appear to have preferred warmer climates and usually ate meat raw. The last occurrence of H. erectus is 117,000 to 108,000 years ago (H. e. soloensis), when the last savannas in the region gave way to jungle.
Taxonomy
Research history
Despite what Charles Darwin had hypothesized in his 1871 Descent of Man, many late-19th century evolutionary naturalists postulated that Asia (instead of Africa) was the birthplace of humankind as it is midway between all continents via land routes or short sea crossings, providing optimal dispersal routes throughout the world. Among the major proponents of "Out of Asia" theory was Ernst Haeckel, who argued that the first human species (which he preemptively named "Homo primigenius") evolved on the now-disproven hypothetical continent "Lemuria" in what is now Southeast Asia, from a species he termed "Pithecanthropus alalus" ("speechless ape-man"). "Lemuria" had supposedly sunk below the Indian Ocean, so no fossils could be found to prove this.
Nevertheless, Haeckel's model inspired Dutch scientist Eugène Dubois to join the Royal Netherlands East Indies Army and search for his "missing link" in Java. At the Trinil site, his team found a skullcap and molar in 1891, and a femur in 1892 (Java Man), which he named "Pithecanthropus erectus" in 1893. He unsuccessfully attempted to convince the European scientific community that he had found an upright-walking ape-man dating to the late Pliocene or early Pleistocene; they dismissed his findings as some kind of malformed non-human ape.
Dubois continued to argue that "P. erectus" was a gibbon-like ape which was the precursor to a more familiar human body plan, but in the 1930s, Jewish-German anatomist Franz Weidenreich noticed a striking similarity with ancient human remains recently being unearthed in China (Peking Man, "Sinanthropus pekinensis"). This characterization became better supported as German-Dutch palaeontologist Gustav Heinrich Ralph von Koenigswald discovered more Indonesian ancient human remains over the decade at Mojokerto, Sangiran, and Ngandong. Weidenreich believed that they were the direct ancestors of the local modern human Homo sapiens subspecies in accord with historical race concepts — that is, Peking Man was the direct ancestor of specifically Chinese people, and Java Man of Aboriginal Australians (polycentricism). As the importance of racial distinction diminished with the development of modern evolutionary synthesis, many fossil human species and genera around Asia, Africa, and Europe (including "Pithecanthropus erectus" and "Sinanthropus pekinensis") were reclassified as subspecies of Homo erectus.
In the late 20th century, far older H. erectus fossils were discovered across Africa, the first being Kenyan archeologist Louis Leakey's Olduvai Hominin 9 in 1960. As the human fossil record expanded, the "Out of Africa" theory and monogenism became the consensus (that all modern humans share a fully anatomically modern common ancestor). H. erectus is now generally considered to be an African species which later dispersed across Eurasia.
Subspecies
By the middle of the 20th century, human taxonomy was in a state of turmoil, with many different species and genera defined across Europe, Asia, and Africa, which exaggerated how different these fossils actually are from each other. In 1940, Weidenreich was the first to suggest reclassifying "Sinanthropus pekinensis" and "Pithecanthropus erectus" as subspecies of H. erectus. In 1950, German-American evolutionary biologist Ernst Mayr entered the field of anthropology and, surveying a "bewildering diversity of names", decided to subsume human fossils into three species of Homo: "H. transvaalensis" (the australopithecines), H. erectus (including "Sinanthropus", "Pithecanthropus", and various other putative Asian, African, and European taxa), and H. sapiens (including anything younger than H. erectus, such as modern humans and Neanderthals), as various earlier authors had broadly recommended. Mayr defined these species as a sequential lineage, with each species evolving into the next (chronospecies). Though Mayr later changed his opinion on the australopithecines (recognizing Australopithecus), his more conservative view of archaic human diversity became widely adopted in the subsequent decades.
In the 1970s, as population genetics became better understood, the anatomical variation of H. erectus across its wide geographic and temporal range (the basis for the subspecies distinctions) became better understood as clines — different populations which attained some anatomical regionality but were not reproductively isolated. In general, subspecies names for H. erectus are now used for convenience to indicate time and region rather than specific anatomical trends.
If an author uses subspecies, the ones usually recognized can include:
H. e. erectus for earlier Indonesian fossils
H. e. pekinensis for Chinese fossils
H. e. soloensis for the latest-surviving Indonesian fossils
H. e. ergaster for African fossils
H. e. georgicus for an early group of fossils from Georgia
H. e. tautavelensis for Western European fossils
The ancient Georgia fossils have variably been classified as a population of H. e. ergaster (sometimes denoted by the quadrinomial H. e. ergaster georgicus), as its own subspecies H. e. georgicus, or elevated to its own species as H. georgicus. Some authors may also elevate H. ergaster, H. soloensis, and H. pekinensis. Material relegated to H. e. tautavelensis is traditionally assigned to H. heidelbergensis.
Evolution and dispersal
H. erectus is generally considered to have its origins in Africa, evolving from a population of H. habilis (anagenesis). The oldest identified H. erectus specimen is a 2.04 million year old skull, DNH 143, from Drimolen, South Africa, coexisting with the australopithecine Paranthropus robustus. H. erectus dispersed out of Africa soon after evolving, the earliest recorded instances being H. e. georgicus 1.85 to 1.78 million years ago in Georgia and the Indonesian Mojokerto and Sangiran sites 1.8 to 1.6 million years ago. Populations may have pushed into northwestern Europe at around the same time. Since the species was first defined in East Asia, those populations are sometimes distinguished as H. erectus sensu stricto ("in the strict sense"), and African and West Eurasian populations as H. erectus sensu lato ("in the broad sense"), but this may not reflect how these populations are actually related to each other.
Once established around the Old World, H. erectus evolved into the other later species in genus Homo, including: H. heidelbergensis, H. antecessor, H. floresiensis, and H. luzonensis. H. heidelbergensis, in turn, is usually placed as the last common ancestor of modern humans (H. sapiens), Neanderthals (H. neanderthalensis), and Denisovans. H. erectus is thus a non-natural, paraphyletic grouping of fossils and does not include all the descendants of a last common ancestor. Despite being designated as different species, H. erectus may have been interbreeding with some of its descendant species, namely the common ancestor of Neanderthals and Denisovans ("Neandersovans").
The dispersal of H. erectus is generally ascribed to the evolution of bipedalism, better technology, and a dietary switch to carnivory. Populations spread out via open grassland and woodland savannas, which were expanding due to a global aridification trend at the onset of the Quaternary glaciation. Most H. erectus sensu lato specimens date to 1.8 to 1 million years ago in the Early Pleistocene before giving way to descendant species. H. erectus sensu stricto persisted much longer than sensu lato, with the youngest population (H. e. soloensis) dating to 117,000 to 108,000 years ago in Late Pleistocene Java. This population appears to have died out when the savannah corridors closed in the Late Pleistocene, and tropical jungle took over.
A 2021 analysis used tip dating to reconstruct a phylogeny of some H. erectus fossils (cladogram not reproduced here).
Biology
As such a widely distributed species both geographically and through time, the anatomy of H. erectus can vary considerably. Among living primates, the degree of regionality achieved by H. erectus (phenotypic plasticity) is only demonstrated in modern humans.
Head
Dubois originally described the species using a skullcap, noting the traits of a low and thickened cranial vault and a continuous bar of bone forming the brow ridge (supraorbital torus), as well as several other traits now considered more typical of H. erectus sensu stricto, such as a strong crest on the mastoid part of the temporal bone, a sagittal keel running across the midline, and a bar of bone running across the back of the skull (occipital torus). Nonetheless, the latter traits can still be found in a few H. erectus sensu lato specimens, namely the 1.47 million year old Olduvai Hominin 9. Compared to H. erectus sensu lato, the skullcap of sensu stricto narrows considerably at the front, the face is bigger and presumably more prognathic (it juts out more, but the face is poorly documented), and the molars are larger, particularly in Indonesian fossils. H. erectus was the first human species with a fleshy nose, which is generally thought to have evolved in response to breathing dry air in order to retain moisture. Compared to earlier Homo, H. erectus has smaller teeth, thinner enamel, and weaker mandibles (jawbones), likely due to a greater reliance on tool use and food processing.
The brain size of H. erectus varies considerably, but is generally smaller in H. erectus sensu lato, as low as about 546 cc in Dmanisi skull 5. Asian H. erectus overall are rather big-brained, averaging roughly 1,000 cc, staying within the range of variation for modern humans. The late-surviving H. e. soloensis has the biggest brain volume, with one specimen measuring about 1,250 cc.
Body
The rest of the body is primarily understood by three partial skeletons from the Kenyan Lake Turkana site, notably Turkana Boy. Other postcranial (any bone aside from the skull) fossils attributed to H. erectus are not associated with a skull, making attribution unverifiable. Though the body plan of earlier Homo is poorly understood, H. erectus has typically been characterized as the first Homo species with a human body plan, distinct from non-human apes. Fossil tracks near Ileret, Kenya, similarly suggest a human gait. This adaptation is implicated in the spread of H. erectus across the Old World.
Body size and robusticity differ appreciably among populations. Height reconstructions range from approximately 146 to 185 cm, with tropical populations typically reconstructed on the higher end, as in modern human populations. Adult weight is harder to approximate, but it may have been about 40–68 kg. H. erectus is usually thought to be the first human species with little size-specific sexual dimorphism, but the variability of postcranial material makes this unclear.
It is largely unclear when human ancestors lost most of their body hair. Genetic analysis suggests that high activity in the melanocortin 1 receptor, which would produce dark skin, dates back to 1.2 million years ago. This could indicate the evolution of hairlessness around this time, as a lack of body hair would have left the skin exposed to harmful UV radiation. Populations in higher latitudes potentially developed lighter skin to prevent vitamin D deficiency, though a 500,000 to 300,000 year old Turkish H. erectus specimen presents the earliest case of tuberculous meningitis, which is typically exacerbated by vitamin D deficiency specifically in dark-skinned people living in higher latitudes. Hairlessness is generally thought to have facilitated sweating, but reducing parasite load and sexual selection have also been proposed.
Growth and development
The dimensions of a 1.8 million year old adult female H. e. ergaster pelvis from Gona, Ethiopia, suggest that she would have been capable of birthing children with a maximum prenatal brain size of about 30–50% of adult brain size, falling between chimpanzees (~40%) and modern humans (28%). Similarly, a 1.5 million year old infant skull from Mojokerto had a brain volume of about 72–84% the size of an adult's, which suggests a brain growth trajectory more similar to that of non-human apes. This suggests that the childhood growth and development of H. erectus was intermediate between that of chimpanzees and modern humans, and the faster development rate suggests that altriciality (an extended childhood) evolved at a later stage in human evolution. The faster development rate might also indicate a shorter expected lifespan compared to later Homo.
Bone thickness
The bones are extraordinarily thickened, particularly in Homo erectus sensu stricto, so much so that skull fragments have sometimes been confused for fossil turtle carapaces. The medullary canal in the long bones (where the bone marrow is stored, in the limbs) is extremely narrowed (medullary stenosis). This degree of thickening is usually exhibited in semi-aquatic animals that use their heavy (pachyosteosclerotic) bones as ballast to help them sink, a condition which can be induced by hypothyroidism.
It is largely unclear what function this could have served. Before more complete skeletons were discovered, Weidenreich suggested H. erectus was a gigantic species. Other explanations include a far more violent and impact-prone lifestyle than other Homo, or pathological nutrient deficiencies causing hyperparathyroidism (such as hypocalcemia).
Culture
Subsistence
H. erectus was portrayed early on as the earliest hunter-gatherer and a skilled predator of big game, relying on endurance running. The gradual shift to "top predator" may have led to its dispersal throughout Afro-Eurasia. Though scavenging may instead have played a bigger role in at least some populations, H. erectus fossils are often associated with the butchered remains of large herbivores, especially elephants, rhinos, hippos, bovines, and boars. The complexities of prey behaviors and the nutritional value of meat have been connected to brain volume growth.
H. erectus is usually assumed to have practiced sexual division of labor much like recent hunter-gatherer societies, with men hunting and women gathering. This idea is supported by a fossil trackway from Ileret, Kenya, made by a probably all-male band of over 20 H. erectus individuals, possibly a hunting party or (similar to chimpanzees) a border patrol group.
Since common modern human tapeworms began to diverge from those of other predators roughly 1.7 million years ago (specifically the pork tapeworm, beef tapeworm, and Asian tapeworm), not only was H. erectus consuming meat regularly enough for speciation to occur in these parasites, but meat was probably consumed raw more often than not. Some populations were collecting aquatic resources, including fish, shellfish, and turtles, such as at Lake Turkana and Trinil. Underground storage organs (roots, tubers, etc.) were likely also major dietary components, and traces of the edible plant Celtis have been documented at several H. erectus sites.
Possibly due to overhunting of the biggest game available, the dispersal of H. erectus and descendant species may be implicated in the extinctions of large herbivores and the gradual reduction of average herbivore size over the Pleistocene. H. erectus overhunting has been blamed by some authors for the decline of proboscidean species as well as competing carnivores, but their decline may be better attributed to the spread of grasslands. The giant tortoise genus Megalochelys may have been driven to extinction by H. erectus in Island Southeast Asia, as insular species of the genus tended to go extinct shortly after H. erectus arrived on the islands they inhabited.
Technology
Stone tools
H. erectus manufactured Lower Paleolithic technologies, and is credited with the invention of the Acheulean stone tool industry at latest 1.75 million years ago. This was a major technological breakthrough featuring large, symmetrical, heavy-duty tools, most iconically the handaxe. Over hundreds of thousands of years, the Acheulean eventually replaced its predecessor, the Oldowan (a chopper and flake industry), in Africa, and spread out across Western Eurasia. This sudden innovation was typically explained as a response to environmental instability in order to process more types of food and broaden the diet, which allowed H. erectus to colonize Eurasia. Despite this characterization of the Acheulean, H. e. georgicus was able to leave Africa despite only manufacturing Oldowan-style tools, and the handaxe does not seem to have been manufactured commonly in East Asia. This conspicuous pattern was first noted by American archaeologist Hallam L. Movius in 1948, who drew the "Movius Line", dividing the East into a "chopping-tool culture" and the West into a "hand axe culture".
H. erectus seems to have been using stone tools in butchery, vegetable processing, and woodworking (maybe manufacturing digging sticks and spears). In Africa, Oldowan sites are typically found alongside major fossil assemblages, but Acheulean sites normally feature more stone tools than fossils, so H. erectus could have been using choppers and handaxes for different activities.
Materials for stone tools were normally sourced locally, and it seems blanks were usually chosen based on size rather than material quality. H. erectus also produced tools from shells at Sangiran and Trinil.
Fire
H. erectus is credited as the first human species to wield fire. The earliest claimed fire site is Wonderwerk Cave, South Africa, at 1.7 million years old. While its dispersal far out of Africa has often been attributed to fire and cave dwelling, fire does not become common in the archaeological record until 300,000–400,000 years ago, and cave-dwelling about 600,000 years ago. Therefore, H. erectus may have only been scavenging fire opportunistically. Similarly, H. erectus sites usually stay within warmer tropical or subtropical latitudes, and the dating of northerly populations (namely Peking Man) could suggest that they were retreating to warmer refugia during glacial periods, but the precise age of the Peking Man fossils is poorly resolved.
Healthcare
Like other primates, H. erectus probably used medicinal plants and cared for sick group members. The earliest probable example of this is a 1.77 million year old H. e. georgicus specimen who had lost all but one tooth due to age or gum disease (the earliest example of severe chewing impairment), yet still survived for several years afterwards.
Seafaring
H. erectus made long sea crossings to arrive on the islands of Flores, Luzon, and some Mediterranean islands. Some authors have asserted that H. erectus intentionally made these crossings, inventing watercraft and seafaring at a remarkably early date, which would speak to advanced cognition and language skills. These populations could instead have been founded by natural rafting events.
Art and rituals
In East Asia, H. erectus is usually represented only by skullcaps, which used to be interpreted as widespread cannibalism and ritual headhunting. This had been reinforced by the historic practice of headhunting and cannibalism in some recent Indonesian, Australian, and Polynesian cultures, which were formerly believed to have directly descended from these H. erectus populations. The lack of the rest of the skeleton is now normally explained by natural phenomena.
Art-making could be evidence of symbolic thinking. An engraved Pseudodon shell DUB1006-fL from Trinil, Java, with geometric markings could possibly be the earliest example of art-making, dating to 546,000 to 436,000 years ago. H. erectus was also the earliest human to collect red-colored pigments, namely ochre. Ochre lumps at Olduvai Gorge, Tanzania, associated with the 1.4 million year old Olduvai Hominid 9 may have been purposefully shaped and trimmed by a hammerstone. Red ochre is normally recognized as bearing symbolic value when associated with modern humans.
Language
The hyoid bone supports the tongue and makes possible modulation of the vocal tract to control pitch and volume. A 400,000 year old H. erectus hyoid bone from Castel di Guido, Italy, is bar-shaped—more similar to that of other Homo than to that of non-human apes and Australopithecus—but is devoid of muscle impressions, has a shield-shaped body, and is implied to have had reduced greater horns, meaning H. erectus lacked a humanlike vocal apparatus and thus anatomical prerequisites for a modern human level of speech. Similarly, the spinal column of the 1.6 million year old Turkana boy would not have supported properly developed respiratory muscles required to produce speech; and a 1.5 million year old infant H. erectus skull from Mojokerto, Java, shows that this population did not have an extended childhood, which is a prerequisite for language acquisition. On the other hand, despite the cochlear (ear) anatomy of Sangiran 2 and 4 retaining several traits reminiscent of australopithecines, the hearing range may have included the higher frequencies used to discern speech.
Given expanding brain size and technological innovation, H. erectus may have been using some basic proto-language in combination with gesturing, and built the basic framework which fully-fledged languages would eventually be formed around.
| Biology and health sciences | Evolution | null |
19555586 | https://en.wikipedia.org/wiki/Classical%20mechanics | Classical mechanics | Classical mechanics is a physical theory describing the motion of objects such as projectiles, parts of machinery, spacecraft, planets, stars, and galaxies. The development of classical mechanics involved substantial change in the methods and philosophy of physics. The qualifier classical distinguishes this type of mechanics from physics developed after the revolutions in physics of the early 20th century, all of which revealed limitations in classical mechanics.
The earliest formulation of classical mechanics is often referred to as Newtonian mechanics. It consists of the physical concepts based on the 17th century foundational works of Sir Isaac Newton, and the mathematical methods invented by Newton, Gottfried Wilhelm Leibniz, Leonhard Euler and others to describe the motion of bodies under the influence of forces. Later, methods based on energy were developed by Euler, Joseph-Louis Lagrange, William Rowan Hamilton and others, leading to the development of analytical mechanics (which includes Lagrangian mechanics and Hamiltonian mechanics). These advances, made predominantly in the 18th and 19th centuries, extended beyond earlier works; they are, with some modification, used in all areas of modern physics.
If the present state of an object that obeys the laws of classical mechanics is known, it is possible to determine how it will move in the future, and how it has moved in the past. Chaos theory shows, however, that the long-term predictions of classical mechanics are not reliable. Classical mechanics provides accurate results when studying objects that are not extremely massive and have speeds not approaching the speed of light. With objects about the size of an atom's diameter, it becomes necessary to use quantum mechanics. To describe velocities approaching the speed of light, special relativity is needed. In cases where objects become extremely massive, general relativity becomes applicable. Some modern sources include relativistic mechanics in classical physics, as representing the field in its most developed and accurate form.
Branches
Traditional division
Classical mechanics was traditionally divided into three main branches.
Statics is the branch of classical mechanics that is concerned with the analysis of force and torque acting on a physical system that does not experience an acceleration, but rather is in equilibrium with its environment. Kinematics describes the motion of points, bodies (objects), and systems of bodies (groups of objects) without considering the forces that cause them to move. Kinematics, as a field of study, is often referred to as the "geometry of motion" and is occasionally seen as a branch of mathematics. Dynamics goes beyond merely describing objects' behavior and also considers the forces which explain it.
Some authors (for example, Taylor (2005) and Greenwood (1997)) include special relativity within classical dynamics.
Forces vs. energy
Another division is based on the choice of mathematical formalism. Classical mechanics can be mathematically presented in multiple different ways. The physical content of these different formulations is the same, but they provide different insights and facilitate different types of calculations. While the term "Newtonian mechanics" is sometimes used as a synonym for non-relativistic classical physics, it can also refer to a particular formalism based on Newton's laws of motion. Newtonian mechanics in this sense emphasizes force as a vector quantity.
In contrast, analytical mechanics uses scalar properties of motion representing the system as a whole—usually its kinetic energy and potential energy. The equations of motion are derived from the scalar quantity by some underlying principle about the scalar's variation. Two dominant branches of analytical mechanics are Lagrangian mechanics, which uses generalized coordinates and corresponding generalized velocities in configuration space, and Hamiltonian mechanics, which uses coordinates and corresponding momenta in phase space. Both formulations are equivalent by a Legendre transformation on the generalized coordinates, velocities and momenta; therefore, both contain the same information for describing the dynamics of a system. There are other formulations such as Hamilton–Jacobi theory, Routhian mechanics, and Appell's equation of motion. All equations of motion for particles and fields, in any formalism, can be derived from the widely applicable result called the principle of least action. One result is Noether's theorem, a statement which connects conservation laws to their associated symmetries.
By region of application
Alternatively, a division can be made by region of application:
Celestial mechanics, relating to stars, planets and other celestial bodies
Continuum mechanics, for materials modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases).
Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light.
Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.
Description of objects and their motion
For simplicity, classical mechanics often models real-world objects as point particles, that is, objects with negligible size. The motion of a point particle is determined by a small number of parameters: its position, mass, and the forces applied to it. Classical mechanics also describes the more complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass". (These generalizations/extensions are derived from Newton's laws, say, by decomposing a solid body into a collection of points.)
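For instance, integrating the rocket equation mentioned above yields the ideal (Tsiolkovsky) result Δv = vₑ ln(m₀/m₁). The following C sketch checks it numerically; the exhaust velocity and masses are illustrative values, not taken from any source in this article.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Illustrative values for a single-stage burn. */
    double ve = 3000.0;   /* effective exhaust velocity, m/s */
    double m0 = 10000.0;  /* initial mass (structure + propellant), kg */
    double m1 = 4000.0;   /* final mass after the propellant is spent, kg */

    /* Tsiolkovsky rocket equation: velocity gained from momentum conservation
     * as the body "loses mass". */
    double dv = ve * log(m0 / m1);
    printf("delta-v = %.1f m/s\n", dv);  /* ~2749 m/s for these values */
    return 0;
}
```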
In reality, the kind of objects that classical mechanics can describe always have a non-zero size. (The behavior of very small particles, such as the electron, is more accurately described by quantum mechanics.) Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom, e.g., a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made of a large number of collectively acting point particles. The center of mass of a composite object behaves like a point particle.
Classical mechanics assumes that matter and energy have definite, knowable attributes such as location in space and speed. Non-relativistic mechanics also assumes that forces act instantaneously (see also Action at a distance).
Kinematics
The position of a point particle is defined in relation to a coordinate system centered on an arbitrary fixed reference point in space called the origin O. A simple coordinate system might describe the position of a particle P with a vector notated by an arrow labeled r that points from the origin O to point P. In general, the point particle does not need to be stationary relative to O. In cases where P is moving relative to O, r is defined as a function of t, time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval that is observed to elapse between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space.
Velocity and speed
The velocity, or the rate of change of displacement with time, is defined as the derivative of the position with respect to time:

$$\mathbf{v} = \frac{\mathrm{d}\mathbf{r}}{\mathrm{d}t}.$$
In classical mechanics, velocities are directly additive and subtractive. For example, if one car travels east at 60 km/h and passes another car traveling in the same direction at 50 km/h, the slower car perceives the faster car as traveling east at 10 km/h. However, from the perspective of the faster car, the slower car is moving 10 km/h to the west, often denoted as −10 km/h where the sign implies opposite direction. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis.
Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector $\mathbf{u} = u\mathbf{d}$ and the velocity of the second object by the vector $\mathbf{v} = v\mathbf{e}$, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each object respectively, then the velocity of the first object as seen by the second object is:

$$\mathbf{u}' = \mathbf{u} - \mathbf{v}.$$

Similarly, the first object sees the velocity of the second object as:

$$\mathbf{v}' = \mathbf{v} - \mathbf{u}.$$

When both objects are moving in the same direction, this equation can be simplified to:

$$\mathbf{u}' = (u - v)\mathbf{d}.$$

Or, by ignoring direction, the difference can be given in terms of speed only:

$$u' = u - v.$$
Acceleration
The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time):

$$\mathbf{a} = \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t} = \frac{\mathrm{d}^2\mathbf{r}}{\mathrm{d}t^2}.$$
Acceleration represents the velocity's change over time. Velocity can change in magnitude, direction, or both. Occasionally, a decrease in the magnitude of velocity "v" is referred to as deceleration, but generally any change in the velocity over time, including deceleration, is referred to as acceleration.
Frames of reference
While the position, velocity and acceleration of a particle can be described with respect to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. An inertial frame is an idealized frame of reference within which an object with zero net force acting upon it moves with a constant velocity; that is, it is either at rest or moving uniformly in a straight line. In an inertial frame Newton's law of motion, $\mathbf{F} = m\mathbf{a}$, is valid.
Non-inertial reference frames accelerate in relation to another inertial frame. A body rotating with respect to an inertial frame is not an inertial frame. When viewed from an inertial frame, particles in the non-inertial frame appear to move in ways not explained by forces from existing fields in the reference frame. Hence, it appears that there are other forces that enter the equations of motion solely as a result of the relative acceleration. These forces are referred to as fictitious forces, inertia forces, or pseudo-forces.
Consider two reference frames S and S'. For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x',y',z',t') in frame S'. Assuming time is measured the same in all reference frames, if we require x' = x when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S' and S, which are moving at a relative velocity u in the x direction, is:

$$x' = x - ut, \qquad y' = y, \qquad z' = z, \qquad t' = t.$$
This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light.
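A minimal C sketch applying the transform to an event's coordinates for motion along the x axis (the event, the relative velocity, and the function name are all illustrative):

```c
#include <stdio.h>

/* Galilean transform: frame S' moves at velocity u along x relative to S,
 * with origins coinciding at t = 0. */
static void galilean(double x, double t, double u,
                     double *x_prime, double *t_prime) {
    *x_prime = x - u * t;  /* position shifts with the moving frame */
    *t_prime = t;          /* time is absolute in classical mechanics */
}

int main(void) {
    double xp, tp;
    /* Event at x = 100 m, t = 2 s, observed from a frame moving at 30 m/s. */
    galilean(100.0, 2.0, 30.0, &xp, &tp);
    printf("in S': x' = %.1f m, t' = %.1f s\n", xp, tp);  /* x' = 40 m */
    return 0;
}
```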
The transformations have the following consequences:
v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S)
a′ = a (the acceleration of a particle is the same in any inertial reference frame)
F′ = F (the force on a particle is the same in any inertial reference frame)
the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.
For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force.
Newtonian mechanics
A force in physics is any action that causes an object's velocity to change; that is, to accelerate. A force originates from within a field, such as an electro-static field (caused by static electrical charges), electro-magnetic field (caused by moving charges), or gravitational field (caused by mass), among others.
Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":

$$\mathbf{F} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = \frac{\mathrm{d}(m\mathbf{v})}{\mathrm{d}t}.$$

The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is $\mathbf{a} = \mathrm{d}\mathbf{v}/\mathrm{d}t$, the second law can be written in the simplified and more familiar form:

$$\mathbf{F} = m\mathbf{a}.$$
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:

$$\mathbf{F}_{\mathrm{R}} = -\lambda \mathbf{v},$$

where λ is a positive constant and the negative sign states that the force is opposite the sense of the velocity. Then the equation of motion is

$$-\lambda \mathbf{v} = m \frac{\mathrm{d}\mathbf{v}}{\mathrm{d}t}.$$

This can be integrated to obtain

$$\mathbf{v} = \mathbf{v}_0 \, e^{-\lambda t / m},$$
where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), and the particle is slowing down. This expression can be further integrated to obtain the position r of the particle as a function of time.
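This decay can be checked numerically. The following C sketch (all parameter values are illustrative) integrates the equation of motion with the Euler method and compares the result with the exact exponential solution:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double lambda = 0.5;  /* drag coefficient, kg/s (illustrative) */
    double m = 2.0;       /* mass, kg */
    double v = 10.0;      /* initial velocity v0, m/s */
    double dt = 0.001;    /* time step, s */

    /* Euler steps of m dv/dt = -lambda v, up to t = 5 s. */
    for (double t = 0.0; t < 5.0; t += dt)
        v += -(lambda / m) * v * dt;

    double exact = 10.0 * exp(-lambda * 5.0 / m);  /* closed-form solution */
    printf("numeric v(5) = %.4f m/s, exact = %.4f m/s\n", v, exact);
    return 0;
}
```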
Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces.
Work and energy
If a constant force F is applied to a particle that makes a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors:

$$W = \mathbf{F} \cdot \Delta\mathbf{r}.$$

More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral

$$W = \int_C \mathbf{F}(\mathbf{r}) \cdot \mathrm{d}\mathbf{r}.$$
If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.
The kinetic energy Ek of a particle of mass m travelling at speed v is given by

$$E_k = \tfrac{1}{2} m v^2.$$
For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.
The work–energy theorem states that for a particle of constant mass m, the total work W done on the particle as it moves from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:

$$W = \Delta E_k = \tfrac{1}{2} m \left( v_2^{\,2} - v_1^{\,2} \right).$$
Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:

$$\mathbf{F} = -\nabla E_p.$$

If all the forces acting on a particle are conservative, and Ep is the total potential energy (which is defined as a work of involved forces to rearrange mutual positions of bodies), obtained by summing the potential energies corresponding to each force, then

$$\mathbf{F} \cdot \Delta\mathbf{r} = -\nabla E_p \cdot \Delta\mathbf{r} = -\Delta E_p.$$

The decrease in the potential energy is equal to the increase in the kinetic energy:

$$-\Delta E_p = \Delta E_k.$$

This result is known as conservation of energy and states that the total energy,

$$\sum E = E_k + E_p,$$

is constant in time. It is often useful, because many commonly encountered forces are conservative.
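This bookkeeping can be verified numerically for a particle falling under gravity. In the C sketch below (drop height, mass, and step size are illustrative), the kinetic and potential terms trade off while their sum stays essentially constant, up to small integration error:

```c
#include <stdio.h>

int main(void) {
    double g = 9.81, m = 1.0;   /* gravity (m/s^2) and mass (kg), illustrative */
    double h = 100.0, v = 0.0;  /* dropped from rest at 100 m */
    double dt = 0.0001;         /* time step, s */

    for (long step = 0; h > 0.0; step++) {
        if (step % 20000 == 0) {            /* report every 2 simulated seconds */
            double ek = 0.5 * m * v * v;    /* kinetic energy */
            double ep = m * g * h;          /* potential energy */
            printf("t=%.1f s  Ek=%7.1f  Ep=%7.1f  E=%7.1f J\n",
                   step * dt, ek, ep, ek + ep);
        }
        v += g * dt;  /* speed up under gravity */
        h -= v * dt;  /* fall */
    }
    return 0;
}
```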
Lagrangian mechanics
Lagrangian mechanics is a formulation of classical mechanics founded on the stationary-action principle (also known as the principle of least action). It was introduced by the Italian-French mathematician and astronomer Joseph-Louis Lagrange in his presentation to the Turin Academy of Science in 1760, culminating in his 1788 grand opus, Mécanique analytique. Lagrangian mechanics describes a mechanical system as a pair (M, L) consisting of a configuration space M and a smooth function L within that space called a Lagrangian. For many systems, L = T − V, where T and V are the kinetic and potential energy of the system, respectively. The stationary action principle requires that the action functional of the system derived from L must remain at a stationary point (a maximum, minimum, or saddle) throughout the time evolution of the system. This constraint allows the calculation of the equations of motion of the system using Lagrange's equations.
Hamiltonian mechanics
Hamiltonian mechanics emerged in 1833 as a reformulation of Lagrangian mechanics. Introduced by Sir William Rowan Hamilton, Hamiltonian mechanics replaces (generalized) velocities used in Lagrangian mechanics with (generalized) momenta. Both theories provide interpretations of classical mechanics and describe the same physical phenomena. Hamiltonian mechanics has a close relationship with geometry (notably, symplectic geometry and Poisson structures) and serves as a link between classical and quantum mechanics.
In this formalism, the dynamics of a system are governed by Hamilton's equations, which express the time derivatives of position and momentum variables in terms of partial derivatives of a function called the Hamiltonian:

$$\frac{\mathrm{d}\mathbf{q}}{\mathrm{d}t} = \frac{\partial \mathcal{H}}{\partial \mathbf{p}}, \qquad \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}t} = -\frac{\partial \mathcal{H}}{\partial \mathbf{q}}.$$
The Hamiltonian is the Legendre transform of the Lagrangian, and in many situations of physical interest it is equal to the total energy of the system.
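As an illustrative sketch (not taken from any source in this article), the C program below integrates Hamilton's equations for a harmonic oscillator, H = p²/2m + kq²/2, using the symplectic Euler method; after roughly one period the trajectory returns near its starting point and the energy is preserved:

```c
#include <stdio.h>

int main(void) {
    double m = 1.0, k = 1.0;  /* mass and spring constant (arbitrary units) */
    double q = 1.0, p = 0.0;  /* initial position and momentum */
    double dt = 0.001;

    /* Hamilton's equations: dq/dt = dH/dp = p/m, dp/dt = -dH/dq = -k q.
     * Symplectic Euler: update p first, then q with the new p. */
    for (long step = 0; step < 6283; step++) {  /* ~one period, 2*pi seconds */
        p += -k * q * dt;
        q += (p / m) * dt;
    }
    double H = p * p / (2.0 * m) + k * q * q / 2.0;  /* total energy, ~0.5 */
    printf("after ~one period: q = %.4f, p = %.4f, H = %.4f\n", q, p, H);
    return 0;
}
```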
Limits of validity
Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form.
When neither quantum mechanics nor classical mechanics apply, such as at the quantum level with many degrees of freedom, quantum field theory (QFT) is of use. QFT deals with small distances and large speeds with many degrees of freedom, as well as the possibility of any change in the number of particles throughout the interaction. When treating large degrees of freedom at the macroscopic level, statistical mechanics becomes useful. Statistical mechanics describes the behavior of large (but countable) numbers of particles and their interactions as a whole at the macroscopic level. Statistical mechanics is mainly used in thermodynamics for systems that lie outside the bounds of the assumptions of classical thermodynamics. In the case of high velocity objects approaching the speed of light, classical mechanics is enhanced by special relativity. In cases where objects become extremely heavy (i.e., their Schwarzschild radius is not negligibly small for a given application), deviations from Newtonian mechanics become apparent and can be quantified by using the parameterized post-Newtonian formalism. In that case, general relativity (GR) becomes applicable. However, there is as yet no theory of quantum gravity unifying GR and QFT in the sense that it could be used when objects become extremely small and heavy.[4][5]
Newtonian approximation to special relativity
In special relativity, the momentum of a particle is given by

$$\mathbf{p} = \frac{m\mathbf{v}}{\sqrt{1 - v^2/c^2}},$$

where $m$ is the particle's rest mass, $\mathbf{v}$ its velocity, $v$ the modulus of $\mathbf{v}$, and $c$ the speed of light.

If v is very small compared to c, v²/c² is approximately zero, and so

$$\mathbf{p} \approx m\mathbf{v}.$$
Thus the Newtonian equation is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.
For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by

$$f = f_c \, \frac{1}{1 + \dfrac{T}{m_0 c^2}},$$
where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The rest mass energy of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage.
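Both the 1% correction and the quality of the Newtonian momentum approximation are easy to check numerically, as in this C sketch (using the 511 keV electron rest energy and 5.11 kV voltage quoted above; the v/c sample points are arbitrary):

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    /* Cyclotron correction f/fc = 1 / (1 + T/(m0 c^2)), energies in keV. */
    double T = 5.11;      /* kinetic energy from a 5.11 kV accelerating voltage */
    double rest = 511.0;  /* electron rest energy, keV */
    printf("f/fc = %.4f (about a 1%% correction)\n", 1.0 / (1.0 + T / rest));

    /* Ratio of relativistic to Newtonian momentum: 1/sqrt(1 - (v/c)^2). */
    double betas[] = {0.01, 0.1, 0.5};
    for (int i = 0; i < 3; i++)
        printf("v/c = %.2f: p_rel / p_Newton = %.4f\n",
               betas[i], 1.0 / sqrt(1.0 - betas[i] * betas[i]));
    return 0;
}
```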
Classical approximation to quantum mechanics
The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is

$$\lambda = \frac{h}{p},$$
where h is the Planck constant and p is the momentum.
Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 V, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory.
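The 0.167 nm figure can be reproduced from λ = h/p with the non-relativistic momentum p = √(2mₑeV), as in this short C check using standard SI constants:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double h = 6.626e-34;   /* Planck constant, J*s */
    double me = 9.109e-31;  /* electron mass, kg */
    double e = 1.602e-19;   /* elementary charge, C */
    double V = 54.0;        /* accelerating voltage, volts (Davisson-Germer) */

    /* Non-relativistic momentum from kinetic energy T = eV: p = sqrt(2 m T). */
    double p = sqrt(2.0 * me * e * V);
    double lambda = h / p;  /* de Broglie wavelength */
    printf("lambda = %.3f nm\n", lambda * 1e9);  /* ~0.167 nm */
    return 0;
}
```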
More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits.
Classical mechanics uses the same extreme high-frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum and therefore shorter de Broglie wavelengths than massless particles, such as light, with the same kinetic energies.
History
The study of the motion of bodies is an ancient one, making classical mechanics one of the oldest and largest subjects in science, engineering, and technology. The development of classical mechanics led to the development of many areas of mathematics.
Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While to a modern reader, many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment, as we know it. These later became decisive factors in forming modern science, and their early application came to be known as classical mechanics. In his Elementa super demonstrationem ponderum, medieval mathematician Jordanus de Nemore introduced the concept of "positional gravity" and the use of component forces.
The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations on the orbit of Mars, that planetary orbits are ellipses. This break with ancient thought was happening around the same time that Galileo was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannonballs of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of that particular experiment is disputed, but he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion was derived from the results of such experiments and forms a cornerstone of classical mechanics. In 1673 Christiaan Huygens described in his Horologium Oscillatorium the first two laws of motion. The work is also the first modern treatise in which a physical problem (the accelerated motion of a falling body) is idealized by a set of parameters then analyzed mathematically and constitutes one of the seminal works of applied mathematics.
Newton founded his principles of natural philosophy on three proposed laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction; and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given the proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica. Here they are distinguished from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets.
Newton had previously invented the calculus; however, the Principia was formulated entirely in terms of long-established geometric methods in emulation of Euclid. Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon) he maintained his own corpuscular theory of light.
After Newton, classical mechanics became a principal field of study in mathematics as well as physics. Mathematical formulations progressively allowed finding solutions to a far greater number of problems. The first notable mathematical treatment was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton.
Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson–Morley experiment. The resolution of these problems led to the special theory of relativity, often still considered a part of classical mechanics.
A second set of difficulties were related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect. The effort at resolving these problems led to the development of quantum mechanics.
Since the end of the 20th century, classical mechanics in physics has no longer been an independent theory. Instead, classical mechanics is now considered an approximate theory to the more general quantum mechanics. Emphasis has shifted to understanding the fundamental forces of nature as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory useful for the study of the motion of non-quantum mechanical, low-energy particles in weak gravitational fields.
| Physical sciences | Physics | null |
459018 | https://en.wikipedia.org/wiki/Pointer%20%28computer%20programming%29 | Pointer (computer programming) | In computer science, a pointer is an object in many programming languages that stores a memory address. This can be that of another value located in computer memory, or in some cases, that of memory-mapped computer hardware. A pointer references a location in memory, and obtaining the value stored at that location is known as dereferencing the pointer. As an analogy, a page number in a book's index could be considered a pointer to the corresponding page; dereferencing such a pointer would be done by flipping to the page with the given page number and reading the text found on that page. The actual format and content of a pointer variable is dependent on the underlying computer architecture.
Using pointers significantly improves performance for repetitive operations, like traversing iterable data structures (e.g. strings, lookup tables, control tables, linked lists, and tree structures). In particular, it is often much cheaper in time and space to copy and dereference pointers than it is to copy and access the data to which the pointers point.
Pointers are also used to hold the addresses of entry points for called subroutines in procedural programming and for run-time linking to dynamic link libraries (DLLs). In object-oriented programming, pointers to functions are used for binding methods, often using virtual method tables.
A pointer is a simple, more concrete implementation of the more abstract reference data type. Several languages, especially low-level languages, support some type of pointer, although some have more restrictions on their use than others. While "pointer" has been used to refer to references in general, it more properly applies to data structures whose interface explicitly allows the pointer to be manipulated (arithmetically, via pointer arithmetic) as a memory address, as opposed to a magic cookie or capability which does not allow such. Because pointers allow both protected and unprotected access to memory addresses, there are risks associated with using them, particularly in the latter case. Primitive pointers are often stored in a format similar to an integer; however, attempting to dereference or "look up" such a pointer whose value is not a valid memory address could cause a program to crash (or contain invalid data). To alleviate this potential problem, as a matter of type safety, pointers are considered a separate type parameterized by the type of data they point to, even if the underlying representation is an integer. Other measures may also be taken (such as validation and bounds checking) to verify that the pointer variable contains a value that is both a valid memory address and within the numerical range that the processor is capable of addressing.
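As an informal illustration of addresses, dereferencing, and typed pointer use, here is a minimal sketch in C (a language with explicit pointers; the variable names are invented for the example):

    #include <stdio.h>

    int main(void) {
        int value = 42;
        int *p = &value;  /* p stores the memory address of value */

        printf("address: %p\n", (void *)p); /* the pointer's own value */
        printf("datum:   %d\n", *p);        /* dereferencing yields 42 */

        *p = 7;                             /* writing through the pointer */
        printf("value:   %d\n", value);     /* now prints 7 */
        return 0;
    }

Because p is declared as a pointer to int, the compiler can reject dereferences at incompatible types even though the underlying representation is essentially an address.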
History
In 1955, Soviet Ukrainian computer scientist Kateryna Yushchenko created the Address programming language that made possible indirect addressing and addresses of the highest rank – analogous to pointers. This language was widely used on computers in the Soviet Union. However, it was unknown outside the Soviet Union, and Harold Lawson is usually credited with the invention of the pointer, in 1964. In 2000, Lawson was presented the Computer Pioneer Award by the IEEE "[f]or inventing the pointer variable and introducing this concept into PL/I, thus providing for the first time, the capability to flexibly treat linked lists in a general-purpose high-level language". His seminal paper on the concepts appeared in the June 1967 issue of CACM entitled: PL/I List Processing. According to the Oxford English Dictionary, the word pointer first appeared in print as a stack pointer in a technical memorandum by the System Development Corporation.
Formal description
In computer science, a pointer is a kind of reference.
A data primitive (or just primitive) is any datum that can be read from or written to computer memory using one memory access (for instance, both a byte and a word are primitives).
A data aggregate (or just aggregate) is a group of primitives that are logically contiguous in memory and that are viewed collectively as one datum (for instance, an aggregate could be 3 logically contiguous bytes, the values of which represent the 3 coordinates of a point in space). When an aggregate is entirely composed of the same type of primitive, the aggregate may be called an array; in a sense, a multi-byte word primitive is an array of bytes, and some programs use words in this way.
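For instance, the three-coordinate aggregate described above could be declared in C as a struct of three bytes and, since its primitives are logically contiguous in memory, it can also be viewed as an array of byte primitives; a short sketch (the type and field names are hypothetical):

    #include <stdio.h>

    /* an aggregate: three logically contiguous byte primitives
       viewed collectively as one datum */
    struct point3 {
        unsigned char x, y, z;
    };

    int main(void) {
        struct point3 p = {1, 2, 3};
        unsigned char *bytes = (unsigned char *)&p; /* same aggregate, viewed as an array */
        for (size_t i = 0; i < sizeof p; i++)       /* sizeof p is 3 here */
            printf("byte %zu = %u\n", i, bytes[i]);
        return 0;
    }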
A pointer is thus a variable that stores the memory address of another variable or data structure rather than storing the data itself. Pointers are commonly used in programming languages that support direct memory manipulation, such as C and C++. They allow programmers to work with memory directly, enabling efficient memory management and more complex data structures: through pointers, a program can access and modify data located in memory, pass data efficiently between functions, and create dynamic data structures like linked lists, trees, and graphs, as in the sketch below. In simpler terms, a pointer can be thought of as an arrow that points to a specific spot in a computer's memory, allowing interaction with the data stored at that location.
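The dynamic structures mentioned above are built by storing pointers inside the data itself. A minimal singly linked list in C, for example (illustrative only; error handling omitted):

    #include <stdio.h>
    #include <stdlib.h>

    struct node {
        int value;
        struct node *next; /* pointer to the following node, or NULL */
    };

    int main(void) {
        /* build a two-element list entirely through pointers */
        struct node *head = malloc(sizeof *head);
        head->value = 1;
        head->next = malloc(sizeof *head->next);
        head->next->value = 2;
        head->next->next = NULL;

        for (struct node *n = head; n != NULL; n = n->next)
            printf("%d\n", n->value); /* prints 1 then 2 */

        free(head->next);
        free(head);
        return 0;
    }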
A memory pointer (or just pointer) is a primitive, the value of which is intended to be used as a memory address; it is said that a pointer points to a memory address. It is also said that a pointer points to a datum [in memory] when the pointer's value is the datum's memory address.
More generally, a pointer is a kind of reference, and it is said that a pointer references a datum stored somewhere in memory; to obtain that datum is to dereference the pointer. The feature that separates pointers from other kinds of reference is that a pointer's value is meant to be interpreted as a memory address, which is a rather low-level concept.
| Technology | Software development: General | null |
459163 | https://en.wikipedia.org/wiki/Siphon | Siphon | A siphon (; also spelled syphon) is any of a wide variety of devices that involve the flow of liquids through tubes. In a narrower sense, the word refers particularly to a tube in an inverted "U" shape, which causes a liquid to flow upward, above the surface of a reservoir, with no pump, but powered by the fall of the liquid as it flows down the tube under the pull of gravity, then discharging at a level lower than the surface of the reservoir from which it came.
There are two leading theories about how siphons cause liquid to flow uphill, against gravity, without being pumped, and powered only by gravity. The traditional theory for centuries was that gravity pulling the liquid down on the exit side of the siphon resulted in reduced pressure at the top of the siphon. Then atmospheric pressure was able to push the liquid from the upper reservoir, up into the reduced pressure at the top of the siphon, like in a barometer or drinking straw, and then over. However, it has been demonstrated that siphons can operate in a vacuum and to heights exceeding the barometric height of the liquid. Consequently, the cohesion tension theory of siphon operation has been advocated, where the liquid is pulled over the siphon in a way similar to the chain fountain. It need not be one theory or the other that is correct, but rather both theories may be correct in different circumstances of ambient pressure. The atmospheric pressure with gravity theory cannot explain siphons in vacuum, where there is no significant atmospheric pressure. But the cohesion tension with gravity theory cannot explain gas siphons, siphons working despite bubbles, and the flying droplet siphon, where gases do not exert significant pulling forces, and liquids not in contact cannot exert a cohesive tension force.
All known published theories in modern times recognize Bernoulli's equation as a decent approximation to idealized, friction-free siphon operation.
History
Egyptian reliefs from 1500 BC depict siphons used to extract liquids from large storage jars.
Physical evidence for the use of siphons by the Greeks includes the Justice cup of Pythagoras in Samos in the 6th century BC; siphons were also used by Greek engineers in the 3rd century BC at Pergamon.
Hero of Alexandria wrote extensively about siphons in the treatise Pneumatica.
The Banu Musa brothers of 9th-century Baghdad invented a double-concentric siphon, which they described in their Book of Ingenious Devices. The edition edited by Hill includes an analysis of the double-concentric siphon.
Siphons were studied further in the 17th century, in the context of suction pumps (and the recently developed vacuum pumps), particularly with an eye to understanding the maximum height of pumps (and siphons) and the apparent vacuum at the top of early barometers. This was initially explained by Galileo Galilei via the theory of horror vacui ("nature abhors a vacuum"), which dates to Aristotle and which Galileo restated, but this was subsequently disproved by later workers, notably Evangelista Torricelli and Blaise Pascal – see barometer: history.
Theory
A practical siphon, operating at typical atmospheric pressures and tube heights, works because gravity pulling down on the taller column of liquid leaves reduced pressure at the top of the siphon (formally, hydrostatic pressure when the liquid is not moving). This reduced pressure at the top means gravity pulling down on the shorter column of liquid is not sufficient to keep the liquid stationary against the atmospheric pressure pushing it up into the reduced-pressure zone at the top of the siphon. So the liquid flows from the higher-pressure area of the upper reservoir up to the lower-pressure zone at the top of the siphon, over the top, and then, with the help of gravity and a taller column of liquid, down to the higher-pressure zone at the exit.
The chain model is a useful but not completely accurate conceptual model of a siphon. The chain model helps to understand how a siphon can cause liquid to flow uphill, powered only by the downward force of gravity. A siphon can sometimes be thought of like a chain hanging over a pulley, with one end of the chain piled on a higher surface than the other. Since the length of chain on the shorter side is lighter than the length of chain on the taller side, the heavier chain on the taller side will move down and pull up the chain on the lighter side. Similar to a siphon, the chain model is obviously just powered by gravity acting on the heavier side, and there is clearly no violation of conservation of energy, because the chain is ultimately just moving from a higher to a lower location, as the liquid does in a siphon.
There are a number of problems with the chain model of a siphon, and understanding these differences helps to explain the actual workings of siphons. First, unlike in the chain model of the siphon, it is not actually the weight on the taller side compared to the shorter side that matters. Rather it is the difference in height from the reservoir surfaces to the top of the siphon, that determines the balance of pressure. For example, if the tube from the upper reservoir to the top of the siphon has a much larger diameter than the taller section of tube from the lower reservoir to the top of the siphon, the shorter upper section of the siphon may have a much larger weight of liquid in it, and yet the lighter volume of liquid in the down tube can pull liquid up the fatter up tube, and the siphon can function normally.
Another difference is that under most practical circumstances, dissolved gases, vapor pressure, and (sometimes) lack of adhesion with tube walls, conspire to render the tensile strength within the liquid ineffective for siphoning. Thus, unlike a chain, which has significant tensile strength, liquids usually have little tensile strength under typical siphon conditions, and therefore the liquid on the rising side cannot be pulled up in the way the chain is pulled up on the rising side.
An occasional misunderstanding of siphons is that they rely on the tensile strength of the liquid to pull the liquid up and over the rise. While water has been found to have a significant tensile strength in some experiments (such as with the z-tube), and siphons in vacuum rely on such cohesion, common siphons can easily be demonstrated to need no liquid tensile strength at all to function. Furthermore, since common siphons operate at positive pressures throughout the siphon, there is no contribution from liquid tensile strength, because the molecules are actually repelling each other in order to resist the pressure, rather than pulling on each other.
To demonstrate, the longer lower leg of a common siphon can be plugged at the bottom and filled almost to the crest with liquid as in the figure, leaving the top and the shorter upper leg completely dry and containing only air. When the plug is removed and the liquid in the longer lower leg is allowed to fall, the liquid in the upper reservoir will then typically sweep the air bubble down and out of the tube. The apparatus will then continue to operate as a normal siphon. As there is no contact between the liquid on either side of the siphon at the beginning of this experiment, there can be no cohesion between the liquid molecules to pull the liquid over the rise. It has been suggested by advocates of the liquid tensile strength theory, that the air start siphon only demonstrates the effect as the siphon starts, but that the situation changes after the bubble is swept out and the siphon achieves steady flow. But a similar effect can be seen in the flying-droplet siphon (see above). The flying-droplet siphon works continuously without liquid tensile strength pulling the liquid up.
In one video demonstration, such a siphon operated steadily for more than 28 minutes until the upper reservoir was empty. Another simple demonstration that liquid tensile strength is not needed in the siphon is to simply introduce a bubble into the siphon during operation. The bubble can be large enough to entirely disconnect the liquids in the tube before and after the bubble, defeating any liquid tensile strength, and yet if the bubble is not too big, the siphon will continue to operate with little change as it sweeps the bubble out.
Another common misconception about siphons is that because the atmospheric pressure is virtually identical at the entrance and exit, the atmospheric pressure cancels, and therefore atmospheric pressure cannot be pushing the liquid up the siphon. But equal and opposite forces may not completely cancel if there is an intervening force that counters some or all of one of the forces. In the siphon, the atmospheric pressure at the entrance and exit are both lessened by the force of gravity pulling down the liquid in each tube, but the pressure on the down side is lessened more by the taller column of liquid on the down side. In effect, the atmospheric pressure coming up the down side does not entirely "make it" to the top to cancel all of the atmospheric pressure pushing up the up side. This effect can be seen more easily in the example of two carts being pushed up opposite sides of a hill. As shown in the diagram, even though the person on the left seems to have his push canceled entirely by the equal and opposite push from the person on the right, the person on the left's seemingly canceled push is still the source of the force to push the left cart up.
In some situations siphons do function in the absence of atmospheric pressure and due to tensile strength – see vacuum siphons – and in these situations the chain model can be instructive. Further, in other settings water transport does occur due to tension, most significantly in transpirational pull in the xylem of vascular plants. Water and other liquids may seem to have no tensile strength because when a handful is scooped up and pulled on, the liquids narrow and pull apart effortlessly. But liquid tensile strength in a siphon is possible when the liquid adheres to the tube walls and thereby resists narrowing. Any contamination on the tube walls, such as grease or air bubbles, or other minor influences such as turbulence or vibration, can cause the liquid to detach from the walls and lose all tensile strength.
In more detail, one can look at how the hydrostatic pressure varies through a static siphon, considering in turn the vertical tube from the top reservoir, the vertical tube from the bottom reservoir, and the horizontal tube connecting them (assuming a U-shape). At liquid level in the top reservoir, the liquid is under atmospheric pressure, and as one goes up the siphon, the hydrostatic pressure decreases (under vertical pressure variation), since the weight of atmospheric pressure pushing the water up is counterbalanced by the column of water in the siphon pushing down (until one reaches the maximal height of a barometer/siphon, at which point the liquid cannot be pushed higher) – the hydrostatic pressure at the top of the tube is then lower than atmospheric pressure by an amount proportional to the height of the tube. Doing the same analysis on the tube rising from the lower reservoir yields the pressure at the top of that (vertical) tube; this pressure is lower because the tube is longer (there is more water pushing down), and requires that the lower reservoir is lower than the upper reservoir, or more generally that the discharge outlet simply be lower than the surface of the upper reservoir. Considering now the horizontal tube connecting them, one sees that the pressure at the top of the tube from the top reservoir is higher (since less water is being lifted), while the pressure at the top of the tube from the bottom reservoir is lower (since more water is being lifted), and since liquids move from high pressure to low pressure, the liquid flows across the horizontal tube from the top basin to the bottom basin. The liquid is under positive pressure (compression) throughout the tube, not tension.
Bernoulli's equation is considered in the scientific literature to be a fair approximation to the operation of the siphon. In non-ideal fluids, compressibility, tensile strength and other characteristics of the working fluid (or multiple fluids) complicate Bernoulli's equation.
Once started, a siphon requires no additional energy to keep the liquid flowing up and out of the reservoir. The siphon will draw liquid out of the reservoir until the level falls below the intake, allowing air or other surrounding gas to break the siphon, or until the outlet of the siphon equals the level of the reservoir, whichever comes first.
In addition to atmospheric pressure, the density of the liquid, and gravity, the maximal height of the crest in practical siphons is limited by the vapour pressure of the liquid. When the pressure within the liquid drops to below the liquid's vapor pressure, tiny vapor bubbles can begin to form at the high point, and the siphon effect will end. This effect depends on how efficiently the liquid can nucleate bubbles; in the absence of impurities or rough surfaces to act as easy nucleation sites for bubbles, siphons can temporarily exceed their standard maximal height during the extended time it takes bubbles to nucleate. Siphons of degassed water have been demonstrated to operate above the barometric height for extended periods of time in controlled experiments. For water at standard atmospheric pressure, the maximal siphon height is approximately 10 m (33 ft); for mercury it is 76 cm (30 in), which is the definition of standard pressure. This equals the maximal height of a suction pump, which operates by the same principle. The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature), since the column of water (resp. mercury) is balancing with the column of air yielding atmospheric pressure, and indeed maximal height is (neglecting vapor pressure and velocity of liquid) inversely proportional to density of liquid.
Modern research into the operation of the siphon
In 1948, Malcolm Nokes investigated siphons working in both air pressure and in a partial vacuum; for siphons in vacuum he concluded: "The gravitational force on the column of liquid in the downtake tube less the gravitational force in the uptake tube causes the liquid to move. The liquid is therefore in tension and sustains a longitudinal strain which, in the absence of disturbing factors, is insufficient to break the column of liquid". But for siphons of small uptake height working at atmospheric pressure, he wrote: "... the tension of the liquid column is neutralized and reversed by the compressive effect of the atmosphere on the opposite ends of the liquid column."
Potter and Barnes at the University of Edinburgh revisited siphons in 1971. They re-examined the theories of the siphon and ran experiments on siphons in air pressure. They concluded: "By now it should be clear that, despite a wealth of tradition, the basic mechanism of a siphon does not depend upon atmospheric pressure."
Gravity, pressure and molecular cohesion were the focus of work in 2010 by Hughes at the Queensland University of Technology. He used siphons at air pressure and his conclusion was: "The flow of water out of the bottom of a siphon depends on the difference in height between the inflow and outflow, and therefore cannot be dependent on atmospheric pressure…"
Hughes did further work on siphons at air pressure in 2011 and concluded: "The experiments described above demonstrate that ordinary siphons at atmospheric pressure operate through gravity and not atmospheric pressure".
The father and son researchers Ramette and Ramette successfully siphoned carbon dioxide under air pressure in 2011 and concluded that molecular cohesion is not required for the operation of a siphon, but: "The basic explanation of siphon action is that, once the tube is filled, the flow is initiated by the greater pull of gravity on the fluid on the longer side compared with that on the short side. This creates a pressure drop throughout the siphon tube, in the same sense that 'sucking' on a straw reduces the pressure along its length all the way to the intake point. The ambient atmospheric pressure at the intake point responds to the reduced pressure by forcing the fluid upwards, sustaining the flow, just as in a steadily sucked straw in a milkshake."
Again in 2011, Richert and Binder (at the University of Hawaii) examined the siphon and concluded that molecular cohesion is not required for the operation of a siphon but relies upon gravity and a pressure differential, writing: "As the fluid initially primed on the long leg of the siphon rushes down due to gravity, it leaves behind a partial vacuum that allows pressure on the entrance point of the higher container to push fluid up the leg on that side".
The research team of Boatwright, Puttick, and Licence, all at the University of Nottingham, succeeded in running a siphon in high vacuum, also in 2011. They wrote: "It is widely believed that the siphon is principally driven by the force of atmospheric pressure. An experiment is described that shows that a siphon can function even under high-vacuum conditions. Molecular cohesion and gravity are shown to be contributing factors in the operation of a siphon; the presence of a positive atmospheric pressure is not required".
Writing in Physics Today in 2011, J. Dooley from Millersville University stated that both a pressure differential within the siphon tube and the tensile strength of the liquid are required for a siphon to operate.
A researcher at Humboldt State University, A. McGuire, examined flow in siphons in 2012. Using the advanced general-purpose multiphysics simulation software package LS-DYNA he examined pressure initialisation, flow, and pressure propagation within a siphon. He concluded: "Pressure, gravity and molecular cohesion can all be driving forces in the operation of siphons".
In 2014, Hughes and Gurung (at the Queensland University of Technology) ran a water siphon under varying air pressures ranging from sea level to 11.9 km altitude. They noted: "Flow remained more or less constant during ascension indicating that siphon flow is independent of ambient barometric pressure". They used Bernoulli's equation and the Poiseuille equation to examine pressure differentials and fluid flow within a siphon. Their conclusion was: "It follows from the above analysis that there must be a direct cohesive connection between water molecules flowing in and out of a siphon. This is true at all atmospheric pressures in which the pressure in the apex of the siphon is above the vapour pressure of water, an exception being ionic liquids".
Practical requirements
A plain tube can be used as a siphon. An external pump has to be applied to start the liquid flowing and prime the siphon (in home use this is often done by a person inhaling through the tube until enough of it has filled with liquid; this may pose danger to the user, depending on the liquid that is being siphoned). This is sometimes done with any leak-free hose to siphon gasoline from a motor vehicle's gasoline tank to an external tank. (Siphoning gasoline by mouth often results in the accidental swallowing of gasoline, or aspirating it into the lungs, which can cause death or lung damage.) If the tube is flooded with liquid before part of the tube is raised over the intermediate high point and care is taken to keep the tube flooded while it is being raised, no pump is required. Devices sold as siphons often come with a siphon pump to start the siphon process.
In some applications it can be helpful to use siphon tubing that is not much larger than necessary. Using piping of too great a diameter and then throttling the flow using valves or constrictive piping appears to increase the effect of previously cited concerns over gases or vapor collecting in the crest which serve to break the vacuum. If the vacuum is reduced too much, the siphon effect can be lost. Reducing the size of pipe used closer to requirements appears to reduce this effect and creates a more functional siphon that does not require constant re-priming and restarting. In this respect, where the requirement is to match a flow into a container with a flow out of said container (to maintain a constant level in a pond fed by a stream, for example) it would be preferable to utilize two or three smaller separate parallel pipes that can be started as required rather than attempting to use a single large pipe and attempting to throttle it.
Automatic intermittent siphon
Siphons are sometimes employed as automatic machines, in situations where it is desirable to turn a continuous trickling flow or an irregular small surge flow into a large surge volume. A common example of this is a public restroom with urinals regularly flushed by an automatic siphon in a small water tank overhead. When the container is filled, all the stored liquid is released, emerging as a large surge volume that then resets and fills again. One way to do this intermittent action involves complex machinery such as floats, chains, levers, and valves, but these can corrode, wear out, or jam over time. An alternate method is with rigid pipes and chambers, using only the water itself in a siphon as the operating mechanism.
A siphon used in an automatic unattended device needs to be able to function reliably without failure. Unlike the common demonstration self-starting siphons, an automatic siphon can fail in ways that require manual intervention to return it to normal surge flow operation.
The most common failure is for the liquid to dribble out slowly, matching the rate at which the container fills, so that the siphon enters an undesired steady-state condition. Preventing dribbling typically involves pneumatic principles to trap one or more large air bubbles in various pipes, which are sealed by water traps. This method can fail if it depends on water already being present in parts of the mechanism, water that will not be there if the mechanism starts from a dry state.
A second problem is that the trapped air pockets will shrink over time if the siphon is not operating because there is no inflow. The air in the pockets is absorbed by the liquid, which pulls liquid up into the piping until the air pocket disappears; this can activate water flow outside the normal operating range, when the storage tank is not full, leading to loss of the liquid seal in lower parts of the mechanism.
A third problem is where the lower end of the liquid seal is simply a U-trap bend in an outflow pipe. During vigorous emptying, the kinetic motion of the liquid out the outflow can propel too much liquid out, causing a loss of the sealing volume in the outflow trap and loss of the trapped air bubble to maintain intermittent operation.
A fourth problem involves seep holes in the mechanism, intended to slowly refill these various sealing chambers when the siphon is dry. The seep holes can be plugged by debris and corrosion, requiring manual cleaning and intervention. To prevent this, the siphon may be restricted to pure liquid sources, free of solids or precipitate.
Many automatic siphon mechanisms have been invented, going back to at least the 1850s, that attempt to overcome these problems using various pneumatic and hydrodynamic principles.
Applications and terminology
When certain liquids need to be purified, siphoning can help prevent either the bottom (dregs) or the top (foam and floaties) from being transferred out of one container into a new container. Siphoning is thus useful in the fermentation of wine and beer, since it can keep unwanted impurities out of the new container.
Self-constructed siphons, made of pipes or tubes, can be used to evacuate water from cellars after floods. A connection is built between the flooded cellar and a deeper place outside, using a tube or some pipes. They are filled with water through an intake valve (at the highest end of the construction). When the ends are opened, the water flows through the pipe into the sewer or the river.
Siphoning is common in irrigated fields to transfer a controlled amount of water from a ditch, over the ditch wall, into furrows.
Large siphons may be used in municipal waterworks and industry. Their size requires control via valves at the intake, outlet and crest of the siphon. The siphon may be primed by closing the intake and outlets and filling the siphon at the crest. If intakes and outlets are submerged, a vacuum pump may be applied at the crest to prime the siphon. Alternatively the siphon may be primed by a pump at either the intake or outlet. Gas in the liquid is a concern in large siphons. The gas tends to accumulate at the crest and if enough accumulates to break the flow of liquid, the siphon stops working. The siphon itself will exacerbate the problem because as the liquid is raised through the siphon, the pressure drops, causing dissolved gases within the liquid to come out of solution. Higher temperature accelerates the release of gas from liquids so maintaining a constant, low temperature helps. The longer the liquid is in the siphon, the more gas is released, so a shorter siphon overall helps. Local high points will trap gas so the intake and outlet legs should have continuous slopes without intermediate high points. The flow of the liquid moves bubbles thus the intake leg can have a shallow slope as the flow will push the gas bubbles to the crest. Conversely, the outlet leg needs to have a steep slope to allow the bubbles to move against the liquid flow; though other designs call for a shallow slope in the outlet leg as well to allow the bubbles to be carried out of the siphon. At the crest the gas can be trapped in a chamber above the crest. The chamber needs to be occasionally primed again with liquid to remove the gas.
Siphon rain gauge
A siphon rain gauge is a rain gauge that can record rainfall over an extended period. A siphon is used to automatically empty the gauge. It is often simply called a "siphon gauge" and is not to be confused with a siphon pressure gauge.
Siphon drainage
A siphon drainage method is being implemented in several expressways as of 2022. Recent studies found that it can reduce groundwater level behind expressway retaining walls, and there was no indication of clogging. This new drainage system is being pioneered as a long-term method to limit leakage hazard in the retaining wall. Siphon drainage is also used in draining unstable slopes, and siphon roof-water drainage systems have been in use since the 1960s.
Siphon spillway
A siphon spillway in a dam is usually not technically a siphon, as it is generally used to drain elevated water levels. However, a siphon spillway operates as an actual siphon if it raises the flow higher than the surface of the source reservoir, as sometimes is the case when used in irrigation. In operation, a siphon spillway is considered to be "pipe flow" or "closed-duct flow". A normal spillway flow is pressurized by the height of the reservoir above the spillway, whereas a siphon flow rate is governed by the difference in height of the inlet and outlet. Some designs make use of an automatic system that uses the flow of water in a spiral vortex to remove the air above to prime the siphon. Such a design includes the volute siphon.
Flush toilet
Flush toilets often have some siphon effect as the bowl empties.
Some toilets also use the siphon principle to obtain the actual flush from the cistern. The flush is triggered by a lever or handle that operates a simple diaphragm-like piston pump that lifts enough water to the crest of the siphon to start the flow of water, which then completely empties the contents of the cistern into the toilet bowl. The advantage of this system was that no water would leak from the cistern except when flushed. These were mandatory in the UK until 2011.
Early urinals incorporated a siphon in the cistern which would flush automatically on a regular cycle because there was a constant trickle of clean water being fed to the cistern by a slightly open valve.
Devices that are not true siphons
Siphon coffee
If both ends of a siphon are at atmospheric pressure, liquid flows from high to low; but if the bottom end of a siphon is pressurized, liquid can flow from low to high. If pressure is removed from the bottom end, the liquid flow will reverse, illustrating that it is pressure driving the siphon. An everyday illustration of this is the siphon coffee brewer, which works as follows (designs vary; this is a standard design, omitting coffee grounds):
a glass vessel is filled with water, then corked (so air-tight) with a siphon sticking vertically upwards
another glass vessel is placed on top, open to the atmosphere – the top vessel is empty, the bottom is filled with water
the bottom vessel is then heated; as the temperature increases, the vapor pressure of the water increases (it increasingly evaporates); when the water boils the vapor pressure equals atmospheric pressure, and as the temperature increases above boiling the pressure in the bottom vessel then exceeds atmospheric pressure, and pushes the water up the siphon tube into the upper vessel.
a small amount of still hot water and steam remain in the bottom vessel and are kept heated, with this pressure keeping the water in the upper vessel
when the heat is removed from the bottom vessel, the vapor pressure decreases, and can no longer support the column of water – gravity (acting on the water) and atmospheric pressure then push the water back into the bottom vessel.
In practice, the top vessel is filled with coffee grounds, and the heat is removed from the bottom vessel when the coffee has finished brewing. What vapor pressure means concretely is that the boiling water converts high-density water (a liquid) into low-density steam (a gas), which thus expands to take up more volume (in other words, the pressure increases). This pressure from the expanding steam then forces the liquid up the siphon; when the steam then condenses down to water the pressure decreases and the liquid flows back down.
Siphon pump
While a simple siphon cannot output liquid at a level higher than the source reservoir, a more complicated device utilizing an airtight metering chamber at the crest and a system of automatic valves, may discharge liquid on an ongoing basis, at a level higher than the source reservoir, without outside pumping energy being added. It can accomplish this despite what initially appears to be a violation of conservation of energy because it can take advantage of the energy of a large volume of liquid dropping some distance, to raise and discharge a small volume of liquid above the source reservoir. Thus it might be said to "require" a large quantity of falling liquid to power the dispensing of a small quantity. Such a system typically operates in a cyclical or start/stop but ongoing and self-powered manner. Ram pumps do not work in this way. These metering pumps are true siphon pumping devices which use siphons as their power source.
Inverted siphon
An inverted siphon is not a siphon but a term applied to pipes that must dip below an obstruction to form a U-shaped flow path.
Large inverted siphons are used to convey water being carried in canals or flumes across valleys, for irrigation or gold mining. The Romans used inverted siphons of lead pipes to cross valleys that were too big to construct an aqueduct.
Inverted siphons are commonly called traps for their function in preventing sewer gases from coming back out of sewers and sometimes making dense objects like rings and electronic components retrievable after falling into a drain. Liquid flowing in one end simply forces liquid up and out the other end, but solids like sand will accumulate. This is especially important in sewerage systems or culverts which must be routed under rivers or other deep obstructions where the better term is "depressed sewer".
Back siphonage
Back siphonage is a plumbing term applied to the reversal of normal water flow in a plumbing system due to sharply reduced or negative pressure on the water supply side, such as high demand on the water supply by fire-fighting; it is not an actual siphon, as it is suction. Back siphonage is rare, as it depends on submerged inlets at the outlet (home) end, and these are uncommon. Back siphonage is not to be confused with backflow, which is the reversed flow of water from the outlet end to the supply end caused by pressure occurring at the outlet end. Also, building codes usually demand a check valve where the water supply enters a building, to prevent backflow into the drinking water system.
Anti-siphon valve
Building codes often contain specific sections on back siphonage and especially for external faucets (See the sample building code quote, below). Backflow prevention devices such as anti-siphon valves are required in such designs. The reason is that external faucets may be attached to hoses which may be immersed in an external body of water, such as a garden pond, swimming pool, aquarium or washing machine. In these situations the unwanted flow is not actually the result of a siphon but suction due to reduced pressure on the water supply side. Should the pressure within the water supply system fall, the external water may be returned by back pressure into the drinking water system through the faucet. Another possible contamination point is the water intake in the toilet tank. An anti-siphon valve is also required here to prevent pressure drops in the water supply line from suctioning water out of the toilet tank (which may contain additives such as "toilet blue") and contaminating the water system. Anti-siphon valves function as a one-direction check valve.
Anti-siphon valves are also used medically. Hydrocephalus, or excess fluid in the brain, may be treated with a shunt which drains cerebrospinal fluid from the brain. All shunts have a valve to relieve excess pressure in the brain. The shunt may lead into the abdominal cavity such that the shunt outlet is significantly lower than the shunt intake when the patient is standing. Thus a siphon effect may take place and instead of simply relieving excess pressure, the shunt may act as a siphon, completely draining cerebrospinal fluid from the brain. The valve in the shunt may be designed to prevent this siphon action so that negative pressure on the drain of the shunt does not result in excess drainage. Only excess positive pressure from within the brain should result in drainage.
The anti-siphon valve in medical shunts prevents excess forward flow of liquid. In plumbing systems, the anti-siphon valve prevents backflow.
Sample building code regulations regarding "back siphonage" from the Canadian province of Ontario:
7.6.2.3.Back Siphonage
Every potable water system that supplies a fixture or tank that is not subject to pressures above atmospheric shall be protected against back-siphonage by a backflow preventer.
Where a potable water supply is connected to a boiler, tank, cooling jacket, lawn sprinkler system or other device where a non-potable fluid may be under pressure that is above atmospheric or the water outlet may be submerged in the non-potable fluid, the water supply shall be protected against backflow by a backflow preventer.
Where a hose bibb is installed outside a building, inside a garage, or where there is an identifiable risk of contamination, the potable water system shall be protected against backflow by a backflow preventer.
Other anti-siphoning devices
Along with anti-siphon valves, anti-siphoning devices also exist. The two are unrelated in application. Siphoning can be used to remove fuel from tanks. With the cost of fuel increasing, siphoning has been linked in several countries to a rise in fuel theft. Trucks, with their large fuel tanks, are most vulnerable. The anti-siphon device prevents thieves from inserting a tube into the fuel tank.
Siphon barometer
A siphon barometer is the term sometimes applied to the simplest of mercury barometers. A continuous U-shaped tube of the same diameter throughout is sealed on one end and filled with mercury. When placed into the upright, "U", position, mercury will flow away from the sealed end, forming a partial vacuum, until balanced by atmospheric pressure on the other end. The term "siphon" derives from the belief that air pressure is involved in the operation of a siphon. The difference in height of the fluid between the two arms of the U-shaped tube is the same as the maximum intermediate height of a siphon. When used to measure pressures other than atmospheric pressure, a siphon barometer is sometimes called a siphon gauge; these are not siphons but follow a standard U-shaped design, leading to the term. Siphon barometers are still produced as precision instruments. Siphon barometers should not be confused with a siphon rain gauge.
Siphon bottle
A siphon bottle (also called a soda syphon or, archaically, a siphoid) is a pressurized bottle with a vent and a valve. It is not a siphon as pressure within the bottle drives the liquid up and out a tube. A special form was the gasogene.
Siphon cup
A siphon cup is the (hanging) reservoir of paint attached to a spray gun; it is not a siphon, as a vacuum pump extracts the paint. The name distinguishes it from gravity-fed reservoirs. An archaic use of the term is a cup of oil in which the oil is transported out of the cup via a cotton wick or tube to a surface to be lubricated; this is not a siphon but an example of capillary action.
Heron's siphon
Heron's siphon is not a siphon, as it works as a gravity-driven pressure pump; at first glance it appears to be a perpetual motion machine, but it will stop when the air in the priming pump is depleted. In a slightly different configuration, it is also known as Heron's fountain.
Venturi siphon
A venturi siphon, also known as an eductor, is not a siphon but a form of vacuum pump using the Venturi effect of fast-flowing fluids (e.g. air) to produce low pressures to suction other fluids; a common example is the carburetor. See pressure head. The low pressure at the throat of the venturi is called a siphon when a second fluid is introduced, or an aspirator when the fluid is air; this is an example of the misconception that air pressure is the operating force for siphons.
Siphonic roof drainage
Despite the name, siphonic roof drainage does not work as a siphon; the technology makes use of gravity-induced vacuum pumping to carry water horizontally from multiple roof drains to a single downpipe and to increase flow velocity. Metal baffles at the roof drain inlets reduce the injection of air, which increases the efficiency of the system. One benefit of this drainage technique is reduced capital cost in construction compared to traditional roof drainage. Another benefit is the elimination of the pipe pitch or gradient required for conventional roof drainage piping. However, this system of gravity pumping is mainly suitable for large buildings and is not usually suitable for residential properties.
Self-siphons
The term self-siphon is used in a number of ways. Liquids that are composed of long polymers can "self-siphon" and these liquids do not depend on atmospheric pressure. Self-siphoning polymer liquids work the same as the siphon-chain model where the lower part of the chain pulls the rest of the chain up and over the crest. This phenomenon is also called a tubeless siphon.
"Self-siphon" is also often used in sales literature by siphon manufacturers to describe portable siphons that contain a pump. With the pump, no external suction (e.g. from a person's mouth/lungs) is required to start the siphon and thus the product is described as a "self-siphon".
If the upper reservoir is such that the liquid there can rise above the height of the siphon crest, the rising liquid in the reservoir can "self-prime" the siphon, and the whole apparatus may be described as a "self-siphon". Once primed, such a siphon will continue to operate until the level of the upper reservoir falls below the intake of the siphon. Such self-priming siphons are useful in some rain gauges and dams.
In nature
Anatomy
The term "siphon" is used for a number of structures in human and animal anatomy, either because flowing liquids are involved or because the structure is shaped like a siphon, but in which no actual siphon effect is occurring: see Siphon (disambiguation).
There has been a debate about whether the siphon mechanism plays a role in blood circulation. However, in the 'closed loop' of circulation this was discounted; "In contrast, in 'closed' systems, like the circulation, gravity does not hinder uphill flow nor does it cause downhill flow, because gravity acts equally on the ascending and descending limbs of the circuit", but for "historical reasons", the term is used. One hypothesis (in 1989) was that a siphon existed in the circulation of the giraffe. But further research in 2004 found that, "There is no hydrostatic gradient and since the 'fall' of fluid does not assist the ascending arm, there is no siphon. The giraffe's high arterial pressure, which is sufficient to raise the blood 2 m from heart to head with sufficient remaining pressure to perfuse the brain, supports this concept." However, a paper written in 2005 urged more research on the hypothesis:
The principle of the siphon is not species specific and should be a fundamental principle of closed circulatory systems. Therefore, the controversy surrounding the role of the siphon principle may best be resolved by a comparative approach. Analyses of blood pressure on a variety of long-necked and long-bodied animals, which take into account phylogenetic relatedness, will be important. In addition experimental studies that combined measurements of arterial and venous blood pressures, with cerebral blood flow, under a variety of gravitational stresses (different head positions), will ultimately resolve this controversy.
Species
Some species are named after siphons because they resemble siphons in whole or in part. Geosiphons are fungi. There are species of alga belonging to the family Siphonocladaceae in the phylum Chlorophyta which have tube-like structures. Ruellia villosa is a tropical plant in the family Acanthaceae that is also known by the botanical synonym Siphonacanthus villosus Nees.
Geology
In speleology, a siphon or a sump is that part of a cave passage that lies under water and through which cavers have to dive to progress further into the cave system, but it is not an actual siphon.
Rivers
A river siphon occurs when part of the water flow passes under a submerged object like a rock or tree trunk. The water flowing under the obstruction can be very powerful, and as such can be very dangerous for kayaking, canyoning, and other river-based watersports.
Explanation using Bernoulli's equation
Bernoulli's equation may be applied to a siphon to derive its ideal flow rate and theoretical maximum height.
Let the surface of the upper reservoir be the reference elevation.
Let point A be the start point of the siphon, immersed within the higher reservoir and at a depth −d below the surface of the upper reservoir.
Let point B be the intermediate high point on the siphon tube at height +hB above the surface of the upper reservoir.
Let point C be the drain point of the siphon at height −hC below the surface of the upper reservoir.
Bernoulli's equation:

v²/2 + gy + P/ρ = constant

where:
v = fluid velocity along the streamline
g = gravitational acceleration downwards
y = elevation in gravity field
P = pressure along the streamline
ρ = fluid density
Apply Bernoulli's equation to the surface of the upper reservoir. The surface is technically falling as the upper reservoir is being drained. However, for this example we will assume the reservoir to be infinite and the velocity of the surface may be set to zero. Furthermore, the pressure at both the surface and the exit point C is atmospheric pressure. Thus:

Patm/ρ = constant   (Equation 1)

Apply Bernoulli's equation to point A at the start of the siphon tube in the upper reservoir, where P = PA, v = vA and y = −d:

vA²/2 − gd + PA/ρ = constant   (Equation 2)

Apply Bernoulli's equation to point B at the intermediate high point of the siphon tube, where P = PB, v = vB and y = hB:

vB²/2 + ghB + PB/ρ = constant   (Equation 3)

Apply Bernoulli's equation to point C where the siphon empties, where v = vC and y = −hC. Furthermore, the pressure at the exit point is atmospheric pressure. Thus:

vC²/2 − ghC + Patm/ρ = constant   (Equation 4)
Velocity
As the siphon is a single system, the constant in all four equations is the same. Setting equations 1 and 4 equal to each other gives:

Patm/ρ = vC²/2 − ghC + Patm/ρ

Solving for vC:

Velocity of siphon:

vC = √(2ghC)
The velocity of the siphon is thus driven solely by the height difference between the surface of the upper reservoir and the drain point. The height of the intermediate high point, hB, does not affect the velocity of the siphon. However, as the siphon is a single system, vB = vC and the intermediate high point does limit the maximum velocity. The drain point cannot be lowered indefinitely to increase the velocity. Equation 3 will limit the velocity to retain a positive pressure at the intermediate high point to prevent cavitation. The maximum velocity may be calculated by combining equations 1 and 3:

Patm/ρ = vB²/2 + ghB + PB/ρ

Setting PB = 0 and solving for vmax:

Maximum velocity of siphon:

vmax = √(2(Patm/ρ − ghB))
The depth, −d, of the initial entry point of the siphon in the upper reservoir, does not affect the velocity of the siphon. No limit to the depth of the siphon start point is implied by Equation 2 as pressure PA increases with depth d. Both these facts imply the operator of the siphon may bottom skim or top skim the upper reservoir without impacting the siphon's performance.
This equation for the velocity is the same as that of any object falling through a height hC. It assumes PC is atmospheric pressure. If the end of the siphon is below the surface, the height to the end of the siphon cannot be used; rather the height difference between the reservoirs should be used.
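To make the result concrete, the ideal exit velocity can be evaluated numerically; the following C sketch uses an assumed (hypothetical) drop of 1.5 m from the source surface to the outlet:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double g  = 9.80665; /* standard gravity, m/s^2 */
        double hC = 1.5;           /* assumed drop to the outlet, m */

        /* ideal, friction-free siphon exit velocity: vC = sqrt(2*g*hC) */
        printf("vC = %.2f m/s\n", sqrt(2.0 * g * hC)); /* ~5.42 m/s */
        return 0;
    }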
Maximum height
Although siphons can exceed the barometric height of the liquid in special circumstances, e.g. when the liquid is degassed and the tube is clean and smooth, in general the practical maximum height can be found as follows.
Setting equations 1 and 3 equal to each other gives:

Patm/ρ = vB²/2 + ghB + PB/ρ

Maximum height of the intermediate high point occurs when it is so high that the pressure at the intermediate high point is zero; in typical scenarios this will cause the liquid to form bubbles and if the bubbles enlarge to fill the pipe then the siphon will "break". Setting PB = 0:

Patm/ρ = vB²/2 + ghB

Solving for hB:

General height of siphon:

hB = Patm/(ρg) − vB²/(2g)
This means that the height of the intermediate high point is limited by pressure along the streamline being always greater than zero.
Maximum height of siphon (attained as vB approaches zero):

hB = Patm/(ρg)

This is the maximum height at which a siphon will work. Substituting values gives approximately 10 m for water and, by definition of standard pressure, 0.76 m (760 mm) for mercury. The ratio of heights (about 13.6) equals the ratio of densities of water and mercury (at a given temperature). As long as this condition is satisfied (pressure greater than zero), the flow at the output of the siphon is still only governed by the height difference between the source surface and the outlet. Volume of fluid in the apparatus is not relevant as long as the pressure head remains above zero in every section. Because pressure drops when velocity is increased, a static siphon (or manometer) can have a slightly higher height than a flowing siphon.
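Both figures follow directly from hB = Patm/(ρg); a small C check using one standard atmosphere and nominal densities for water and mercury reproduces them:

    #include <stdio.h>

    int main(void) {
        const double g    = 9.80665;  /* standard gravity, m/s^2 */
        const double Patm = 101325.0; /* standard atmosphere, Pa */

        double h_water   = Patm / (1000.0  * g); /* ~10.3 m  */
        double h_mercury = Patm / (13595.0 * g); /* ~0.760 m */

        printf("water:   %.2f m\n", h_water);
        printf("mercury: %.3f m\n", h_mercury);
        return 0;
    }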
Operation in a vacuum
Experiments have shown that siphons can operate in a vacuum, via cohesion and tensile strength between molecules, provided that the liquids are pure and degassed and surfaces are very clean.
The Oxford English Dictionary (OED) entry on siphon, published in 1911, states that a siphon works by atmospheric pressure. Stephen Hughes of Queensland University of Technology criticized this in a 2010 article which was widely reported in the media. The OED editors stated, "there is continuing debate among scientists as to which view is correct. ... We would expect to reflect this debate in the fully updated entry for siphon, due to be published later this year." Hughes continued to defend his view of the siphon in a late September post at the Oxford blog. The 2015 definition by the OED is:
A tube used to convey liquid upwards from a reservoir and then down to a lower level of its own accord. Once the liquid has been forced into the tube, typically by suction or immersion, flow continues unaided.
The Encyclopædia Britannica currently describes a siphon as:
Siphon, also spelled syphon, instrument, usually in the form of a tube bent to form two legs of unequal length, for conveying liquid over the edge of a vessel and delivering it at a lower level. Siphons may be of any size. The action depends upon the influence of gravity (not, as sometimes thought, on the difference in atmospheric pressure; a siphon will work in a vacuum) and upon the cohesive forces that prevent the columns of liquid in the legs of the siphon from breaking under their own weight. At sea level, water can be lifted a little more than 10 metres (33 feet) by a siphon.
In civil engineering, pipelines called inverted siphons are used to carry sewage or stormwater under streams, highway cuts, or other depressions in the ground. In an inverted siphon the liquid completely fills the pipe and flows under pressure, as opposed to the open-channel gravity flow that occurs in most sanitary or storm sewers.
Standards in engineering or industry
The American Society of Mechanical Engineers (ASME) publishes the following Tri-Harmonized Standard:
ASSE 1002/ASME A112.1002/CSA B125.12 on Performance Requirements for Anti-Siphon Fill Valves (Ballcocks) for Gravity Water Closet Flush Tanks
| Technology | Basics_8 | null |
459402 | https://en.wikipedia.org/wiki/Pinus%20lambertiana | Pinus lambertiana | Pinus lambertiana (commonly known as the sugar pine or sugar cone pine) is the tallest and most massive pine tree and has the longest cones of any conifer. It is native to coastal and inland mountain areas along the Pacific coast of North America, as far north as Oregon and as far south as Baja California in Mexico.
Description
Growth
The sugar pine is the tallest and largest Pinus species, commonly growing 40–60 m (130–195 ft) tall, exceptionally to over 80 m (260 ft), with a trunk diameter of 1.5–2.5 m (5–8 ft), exceptionally more. The tallest recorded specimen, 83.45 m (273.8 ft) tall, is located in Yosemite National Park and was discovered in 2015. The second tallest recorded was "Yosemite Giant", an 82.05 m (269.2 ft) tall specimen in Yosemite National Park, which died from a bark beetle attack in 2007. Yosemite National Park also has the third tallest, measured in June 2013; the Rim Fire affected this specimen, but it survived. The next tallest known living specimens grow in southern Oregon: one in Umpqua National Forest and another in Siskiyou National Forest.
The bark of Pinus lambertiana ranges from brown to purple in color and is thick. The upper branches can reach out over a wide span. Like all members of the white pine group (Pinus subgenus Strobus), the leaves ("needles") grow in fascicles ("bundles") of five with a deciduous sheath; they are 6–10 cm (2.4–3.9 in) long. Sugar pine is notable for having the longest cones of any conifer, mostly 25–50 cm (10–20 in) long, exceptionally to 66 cm (26 in) (although the cones of the Coulter pine are more massive); their substantial unripe weight makes them perilous projectiles when chewed off by squirrels. The seeds are 10–12 mm long, with a 2–3 cm wing that aids their dispersal by wind. Sugar pine never grows in pure stands, always in a mixed forest, and is shade tolerant in its youth.
Distribution
The sugar pine occurs in Oregon and California in the Western United States southward to Baja California, specifically in the Cascade Range, Sierra Nevada, Coast Ranges, and Sierra San Pedro Martir. It is generally more abundant towards the south and can be found across a wide range of elevations above sea level.
Genome
The massive 31-gigabase mega-genome of sugar pine was sequenced in 2016 by the large PineRefSeq consortium, making it one of the largest genomes sequenced and assembled so far.
The transposable elements that make up the megagenome are linked to the evolutionary change of the sugar pine. The sugar pine contains extended regions of non-coding DNA, most of which is derived from transposable elements. The genome of the sugar pine represents one extreme in all plants, with a stable diploid genome that is expanded by the proliferation of transposable elements, in contrast to the frequent polyploidization events in angiosperms.
Embryonal growth
In the late stage of embryonal development, the sugar pine embryo changes from a smooth and narrow paraboloid to a less symmetric structure. This configuration is caused by a transverse orientation of division planes in the upper portion of the embryo axis. The root initial zone is established, and the epicotyl develops as an anlage flanked by regions that define the cotyledonary buttresses. At this stage, the embryo is composed of the suspensor, root initials and root cap region, hypocotyl-shoot axis, and the epicotyl. The upper (distal) portion of the embryo, which gives rise to the cotyledons and the epicotyl, is considered to be the shoot apex.
Shoot apex
The shoot apex has the following four zones:
The apical initials produce all cells of the shoot apex through cell division. They are located at the top of the meristem and are larger than the other cells of the surface layer.
The central mother cell zone generates the rib meristem and the inner layers of the peripheral tissue zone through cell division. It presents a typical gymnosperm appearance and is characterized by cell expansion and unusual mitoses in its central region; the rate of mitosis increases toward its outer edge.
The peripheral tissue zone consists of two layers of cells that are characterized by dense cytoplasm and mitosis of high frequency.
Lastly, the rib meristem is a regular arrangement of vertical files of cells which mature into the pith of the axis.
Etymology
Naturalist John Muir considered sugar pine to be the "king of the conifers". The common name comes from the sweet resin, which Native Americans used as a sweetener. John Muir found it preferable to maple sugar. It is also known as the great sugar pine. The scientific name was assigned by David Douglas in 1826 in honor of his friend Aylmer Bourke Lambert, a botanist from London.
Ecology
Wildlife
The large size and high nutritional value of the sugar pine seeds are appealing to many species. Yellow pine chipmunks (Neotamias amoenus) and Steller's jays (Cyanocitta stelleri) gather and hoard sugar pine seeds. Chipmunks gather wind-dispersed seeds from the ground and store them in large amounts. Jays collect seeds by pecking the cones with their beaks and catching the seeds as they fall out. Although wind is a main dispersal agent of sugar pine seeds, animals tend to collect and store them before the wind can blow them far.
Black bears (Ursus americanus) feed on sugar pine seeds in the fall months within the Sierra Nevada. Both sugar pine and oak species are currently in decline, directly affecting black bear food sources within the Sierra Nevada.
Threats
Sugar pine trees have been impacted by the mountain pine beetle (Dendroctonus ponderosae), which is native to western North America. The beetles lay their eggs inside of the tree and inhibit the tree's ability to defend itself against invading species. Beetle infestation can also cause nutrient deficiencies that slowly weaken the tree's overall health, making pines more susceptible to other threats such as fires and white pine blister rust. Blister rust can weaken the tree and enable further infestation by mountain pine beetles.
The sugar pine has been severely affected by the white pine blister rust (Cronartium ribicola), a fungal pathogen accidentally introduced from Europe in 1909. A high proportion of sugar pines have been killed by the blister rust, particularly in the northern part of the species' range where blister rust has been present longer. The rust has also destroyed much of the Western white pine and whitebark pine throughout their ranges. The U.S. Forest Service has a program for developing rust-resistant varieties of sugar pine and western white pine. Seedlings of these trees have been introduced into the wild. The Sugar Pine Foundation in the Lake Tahoe Basin has been successful in finding resistant sugar pine seed trees. Blister rust is much less common in California, where sugar, Western white and whitebark pines still survive in great numbers.
The species is generally resistant to fire because of its thick bark and because it clears away competing species. However, its mortality has been directly linked to drier conditions and higher temperatures. Climate change presents a threat to species health: higher temperatures can decrease resin levels within the trees, weakening defenses against pathogens. At the same time, warmer winters increase survival of pests and pathogens. The weakened or dying trees then provide fuel for forest fires, which may become more frequent and more intense with rising summer temperatures, particularly if coupled with drier conditions and stronger winds.
Protective efforts
Sugar pine trees are in slow decline due to several threats: white pine blister rust, mountain pine beetles, and climate change. Efforts to restore sugar pines and other white pines that have been impacted by invasive species, climate change, and fires have been undertaken by governmental and non-governmental entities. One nonprofit, the Sugar Pine Foundation, was created in 2004 to plant sugar pine seeds in the Sierra Nevada along the border of California and Nevada. It plants seedlings grown from seeds collected from tree strains resistant to blister rust. The foundation's aim is to build a wild sugar pine population that is resistant to white pine blister rust.
Uses
According to David Douglas, who was guided to the (exceptionally thick) tree specimen he was looking for by a Native American, some tribes ate the sweetish seeds. These were eaten raw and roasted, and also used to make flour or pulverized into a spread. Native Americans also ate the inner bark. The sweet sap or pitch was consumed, in small quantities due to its laxative properties, but could also be chewed as gum. Its flavor is thought largely to be derived from the pinitol it contains.
In the mid-19th century, the trees were used liberally as lumber during the California Gold Rush. In modern times they are used in much lower quantities, being spared for high-end products as with Western white pine.
The odorless wood is preferred for packing fruit, as well as for storing drugs and other goods. Its straight grain makes it a useful organ pipe material. The wood was also long used for piano keys; in 1907 or 1908 the Connecticut piano-action maker Pratt, Read & Co. purchased "950,000 feet of clear sugar pine" for that use in and around Placerville, CA.
Folklore
In the Achomawi creation myth, Annikadel, the creator, makes one of the 'First People' by intentionally dropping a sugar pine seed in a place where it can grow. One of the descendants in this ancestry is Sugarpine-Cone man, who has a handsome son named Ahsoballache.
After Ahsoballache marries the daughter of To'kis the Chipmunk-woman, his grandfather insists that the new couple have a child. To this end, the grandfather breaks open a scale from a sugar pine cone, and secretly instructs Ahsoballache to immerse the scale's contents in spring water, then hide them inside a covered basket. Ahsoballache performs the tasks that night; at the next dawn, he and his wife discover the infant Edechewe near their bed.
The Washo language has a word for the sugar pine, as well as a word for "sugar pine sugar".
| Biology and health sciences | Pinaceae | Plants |
459758 | https://en.wikipedia.org/wiki/Neuroptera | Neuroptera | The insect order Neuroptera, or net-winged insects, includes the lacewings, mantidflies, antlions, and their relatives. The order consists of some 6,000 species. Neuroptera is grouped together with the Megaloptera (alderflies, fishflies, and dobsonflies) and Raphidioptera (snakeflies) in the unranked taxon Neuropterida (once known as Planipennia).
Adult neuropterans have four membranous wings, all about the same size, with many veins. They have chewing mouthparts, and undergo complete metamorphosis.
Neuropterans first appeared during the Permian period, and continued to diversify through the Mesozoic era. During this time, several unusually large forms evolved, especially in the extinct family Kalligrammatidae, often called "the butterflies of the Jurassic" for their large, patterned wings.
Anatomy and biology
Neuropterans are soft-bodied insects with relatively few specialized features. They have large lateral compound eyes, and may or may not also have ocelli. Their mouthparts have strong mandibles suitable for chewing, and lack the various adaptations found in most other holometabolan insect groups.
They have four wings, usually similar in size and shape, and a generalised pattern of veins. Some neuropterans have specialised sense organs in their wings, or have bristles or other structures to link their wings together during flight.
The larvae are specialised predators, with elongated mandibles adapted for piercing and sucking. The larval body form varies between different families, depending on the nature of their prey. In general, however, they have three pairs of thoracic legs, each ending in two claws. The abdomen often has adhesive discs on the last two segments.
Life cycle and ecology
The larvae of most families are predators. Many chrysopids, hemerobiids and coniopterygids eat aphids and other pest insects, and some have been used for biological control; such lacewings are available from commercial distributors but are also abundant and widespread in nature.
Larvae in various families cover themselves in debris (including other insects, living and dead) as camouflage, taken to an extreme in the ant lions, which bury themselves completely out of sight and ambush prey from "pits" in the soil. Larvae of some Ithonidae are root feeders, and larvae of Sisyridae are aquatic, and feed on freshwater sponges. A few mantispids are parasites of spider egg sacs.
As in other holometabolic orders, the pupal stage is enclosed in some form of cocoon composed of silk and soil or other debris. The pupa eventually cuts its way out of the cocoon with its mandibles, and may even move about for a short while before undergoing the moult to the adult form.
Adults of many groups are also predatory, but some do not feed, or consume only nectar.
Beetles, wasps, and some lake flies parasitize neuropteran larvae.
Evolution
Neuropterans first appeared near the end of the Permian period, as shown by fossils of the Permithonidae from the Tunguska basin in Siberia and a similar fauna from Australia.
The osmylids are of Jurassic or Early Cretaceous origin and may be the most ancient of the Neuropteran groups. The extinct osmylid Protosmylus is fossilized in middle Eocene Baltic amber. The genus Burmaleon is described from two fossils of Cenomanian age Burmese amber, implying crown group radiation in the Early Cretaceous or earlier. The family Kalligrammatidae lived from the Jurassic to Aptian (Lower Cretaceous) periods.
Ithonidae are from the Jurassic to Recent, and the extinct lineages of the family were widespread geographically.
Following the end of the Cretaceous period, the diversity of neuropterans appears to have declined.
Phylogeny
Molecular analysis in 2018 using mitochondrial rRNA and mitogenomic data places the Megaloptera as sister to Neuroptera, and Raphidioptera as sister to this combined lineage, though these results were considered tentative. The fossil record has contributed to the understanding of the group's phylogeny. Relationships within the Myrmeleontiformia are still in flux.
A phylogenomic analysis published in 2023 confirmed the topology of the neuropterid orders and found the relationships between the families of Neuropterida as shown in the following phylogenetic tree.
Taxonomy
A review of the neuropterid orders by Engel, Winterton, and Breitkreuz (2018) grouped the neuropteran families into a nested set of clades, abandoning the paraphyletic suborder "Hemerobiiformia" and redefining Myrmeleontiformia as a clade.
Neuroptera
Superfamily Coniopterygoidea
Family Coniopterygidae: dustywings (Late Jurassic–Present)
Clade Euneuroptera
Superfamily Osmyloidea
Family Osmylidae: osmylids (Early Jurassic–Present)
Family Sisyridae: spongillaflies (Late Cretaceous–Present)
Family Nevrorthidae (Late Cretaceous–Present)
Family †Archeosmylidae (Permian–Triassic)
Family †Saucrosmylidae (Middle Jurassic)
Superfamily Dilaroidea
Family Dilaridae: pleasing lacewings (Late Cretaceous–Present)
Superfamily Mantispoidea
Family Berothidae: beaded lacewings (Late Jurassic–Present)
Family Mantispidae: mantidflies (including †Dipteromantispidae) (Jurassic–Present)
Family †Mesoberothidae (including †Mesithonidae) (Triassic)
Family Rhachiberothidae: thorny lacewings (Early Cretaceous–Present)
Clade Neoneuroptera
Superfamily Hemerobioidea (inc. Chrysopoidea)
Family †Ascalochrysidae
Family Chrysopidae: green lacewings (including †Mesochrysopidae) (Jurassic–Present)
Family Hemerobiidae: brown lacewings (Jurassic–Present)
Family †Osmylitidae
Family †Solenoptilidae
Clade Geoneuroptera
Superfamily Ithonioidea
Family Ithonidae: moth lacewings (includes Rapismatidae and Polystoechotidae) (Early Jurassic–Present)
Clade Myrmeleontiformia
Superfamily Myrmeleontoidea (syn Nemopteroidea)
Family Ascalaphidae: owlflies
Family †Babinskaiidae (Cretaceous)
Family Myrmeleontidae: antlions (includes Palaeoleontidae) (Cretaceous–Present)
Family Nemopteridae: spoonwings and allies (Cretaceous–Present)
Family Nymphidae: split-footed lacewings (includes Myiodactylidae) (Cretaceous–Present)
Family †Rafaelianidae
Superfamily Psychopsoidea
Family †Aetheogrammatidae
Family †Kalligrammatidae (Jurassic–Late Cretaceous)
Family †Osmylopsychopidae (syn †Brongniartiellidae)
Family †Panfiloviidae (syn †Grammosmylidae)
Family †Prohemerobiidae
Family Psychopsidae: silky lacewings (Late Triassic–Present)
The fossil genus †Mesohemerobius from the Late Jurassic–Early Cretaceous of China has been treated as incertae sedis within Neuroptera, while the fossil families †Permoberothidae and †Permithonidae are treated as a sister group to clade Eidoneuroptera formed by Neuroptera + Megaloptera.
In human culture
The use of Neuroptera in biological control of insect pests has been investigated, showing that it is difficult to establish and maintain populations in fields of crops.
Five species of Neuroptera are among 1681 insect species eaten by humans worldwide.
The New Guinea Highland people claim to be able to maintain a muscular build and great stamina despite their low protein intake as a result of eating insects including Neuroptera.
| Biology and health sciences | Insects: General | Animals |
460235 | https://en.wikipedia.org/wiki/Resultant%20force | Resultant force | In physics and engineering, a resultant force is the single force and associated torque obtained by combining a system of forces and torques acting on a rigid body via vector addition. The defining feature of a resultant force, or resultant force-torque, is that it has the same effect on the rigid body as the original system of forces. Calculating and visualizing the resultant force on a body is done through computational analysis, or (in the case of sufficiently simple systems) a free body diagram.
The point of application of the resultant force determines its associated torque. The term resultant force should be understood to refer to both the forces and torques acting on a rigid body, which is why some use the term resultant force–torque.
The force equal to the resultant force in magnitude, yet pointed in the opposite direction, is called an equilibrant force.
Illustration
The diagram illustrates simple graphical methods for finding the line of application of the resultant force of simple planar systems.
Lines of application of the actual forces in the leftmost illustration intersect. After vector addition is performed at the location of one of the forces, the net force obtained is translated so that its line of application passes through the common intersection point. With respect to that point all torques are zero, so the torque of the resultant force is equal to the sum of the torques of the actual forces.
The illustration in the middle of the diagram shows two parallel actual forces. After vector addition at the location of one of the forces, the net force is translated to the appropriate line of application, where it becomes the resultant force. The procedure is based on a decomposition of all forces into components for which the lines of application (pale dotted lines) intersect at one point (the so-called pole, arbitrarily set at the right side of the illustration). Then the arguments from the previous case are applied to the forces and their components to demonstrate the torque relationships.
The rightmost illustration shows a couple: two equal but opposite forces of magnitude $F$ for which the amount of the net force is zero, but which produce a net torque $\tau = F d$, where $d$ is the distance between their lines of application. This is "pure" torque, since there is no resultant force.
Bound vector
A force applied to a body has a point of application. The effect of the force is different for different points of application. For this reason a force is called a bound vector, which means that it is bound to its point of application.
Forces applied at the same point can be added together to obtain the same effect on the body. However, forces with different points of application cannot be added together and maintain the same effect on the body.
It is a simple matter to change the point of application of a force by introducing equal and opposite forces at two different points of application that produce a pure torque on the body. In this way, all of the forces acting on a body can be moved to the same point of application with associated torques.
A system of forces on a rigid body is combined by moving the forces to the same point of application and computing the associated torques. The sum of these forces and torques yields the resultant force-torque.
Associated torque
If a point R is selected as the point of application of the resultant force F of a system of n forces Fi, then the associated torque T is determined from the formulas

$\mathbf{F} = \sum_{i=1}^{n} \mathbf{F}_i$

and

$\mathbf{T} = \sum_{i=1}^{n} (\mathbf{R}_i - \mathbf{R}) \times \mathbf{F}_i,$

where Ri is the point of application of the force Fi.
It is useful to note that the point of application R of the resultant force may be anywhere along the line of action of F without changing the value of the associated torque. To see this, add the vector kF to the point of application R in the calculation of the associated torque:

$\mathbf{T} = \sum_{i=1}^{n} \left(\mathbf{R}_i - (\mathbf{R} + k\mathbf{F})\right) \times \mathbf{F}_i.$

The right side of this equation can be separated into the original formula for T plus the additional term including kF:

$\mathbf{T} = \sum_{i=1}^{n} (\mathbf{R}_i - \mathbf{R}) \times \mathbf{F}_i - k\mathbf{F} \times \sum_{i=1}^{n} \mathbf{F}_i,$

because the second term is zero. To see this, notice that F is the sum of the vectors Fi, which yields

$k\mathbf{F} \times \sum_{i=1}^{n} \mathbf{F}_i = k\,\mathbf{F} \times \mathbf{F} = 0;$
thus the value of the associated torque is unchanged.
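A minimal numerical sketch of these two formulas, using NumPy and a small hypothetical force system (the particular forces and points are invented for illustration):

```python
import numpy as np

def resultant(forces, points, R):
    """Resultant F = sum(F_i) and associated torque T = sum((R_i - R) x F_i)
    for forces F_i applied at points R_i, about application point R."""
    F = np.sum(forces, axis=0)
    T = np.sum(np.cross(points - R, forces), axis=0)
    return F, T

# Hypothetical planar system: two parallel forces at two points.
forces = np.array([[0.0, 2.0, 0.0], [0.0, 3.0, 0.0]])
points = np.array([[1.0, 0.0, 0.0], [3.0, 0.0, 0.0]])
R = np.zeros(3)

F, T = resultant(forces, points, R)   # F = (0, 5, 0), T = (0, 0, 11)

# Sliding R along the line of action of F (R -> R + k*F) leaves T unchanged.
F2, T2 = resultant(forces, points, R + 0.7 * F)
assert np.allclose(T, T2)
```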
Torque-free resultant
It is useful to consider whether there is a point of application R such that the associated torque is zero. This point is defined by the property

$\mathbf{R} \times \mathbf{F} = \sum_{i=1}^{n} \mathbf{R}_i \times \mathbf{F}_i,$

where F is the resultant force and Fi form the system of forces.
Notice that this equation for R has a solution only if the sum of the individual torques on the right side yields a vector that is perpendicular to F. Thus, the condition that a system of forces has a torque-free resultant can be written as

$\mathbf{F} \cdot \left( \sum_{i=1}^{n} \mathbf{R}_i \times \mathbf{F}_i \right) = 0.$
If this condition is satisfied then there is a point of application for the resultant which results in a pure force. If this condition is not satisfied, then the system of forces includes a pure torque for every point of application.
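Continuing the sketch above, the perpendicularity condition and a torque-free application point can be checked numerically for the same hypothetical system:

```python
# Torque-free condition: F . (sum of R_i x F_i) must vanish.
total_moment = np.sum(np.cross(points, forces), axis=0)  # (0, 0, 11)
assert np.isclose(np.dot(F, total_moment), 0.0)  # holds for this planar system

# A zero-torque application point for this particular system (F along y):
R0 = np.array([total_moment[2] / F[1], 0.0, 0.0])  # (2.2, 0, 0)
assert np.allclose(np.cross(R0, F), total_moment)  # R0 x F reproduces the moment
```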
Wrench
The forces and torques acting on a rigid body can be assembled into the pair of vectors called a wrench. If a system of forces and torques has a net resultant force F and a net resultant torque T, then the entire system can be replaced by a force F and an arbitrarily located couple that yields a torque of T. In general, if F and T are orthogonal, it is possible to derive a radial vector R such that $\mathbf{R} \times \mathbf{F} = \mathbf{T}$, meaning that the single force F, acting at displacement R, can replace the system. If the system is zero-force (torque only), it is termed a screw and is mathematically formulated as screw theory.
The resultant force and torque on a rigid body obtained from a system of forces Fi, i = 1, ..., n, is simply the sum of the individual wrenches Wi, that is

$\mathbf{W} = \sum_{i=1}^{n} \mathbf{W}_i = \sum_{i=1}^{n} \left( \mathbf{F}_i,\ \mathbf{R}_i \times \mathbf{F}_i \right).$
Notice that the case of two equal but opposite forces F and −F acting at points A and B respectively yields the resultant W = (F − F, A×F − B×F) = (0, (A − B)×F). This shows that wrenches of the form W = (0, T) can be interpreted as pure torques.
| Physical sciences | Classical mechanics | Physics |
460322 | https://en.wikipedia.org/wiki/Nuclear%20reaction | Nuclear reaction | In nuclear physics and nuclear chemistry, a nuclear reaction is a process in which two nuclei, or a nucleus and an external subatomic particle, collide to produce one or more new nuclides. Thus, a nuclear reaction must cause a transformation of at least one nuclide to another. If a nucleus interacts with another nucleus or particle and they then separate without changing the nature of any nuclide, the process is simply referred to as a type of nuclear scattering, rather than a nuclear reaction.
In principle, a reaction can involve more than two particles colliding, but because the probability of three or more nuclei meeting at the same time in the same place is much less than for two nuclei, such an event is exceptionally rare (see triple alpha process for an example very close to a three-body nuclear reaction). The term "nuclear reaction" may refer either to a change in a nuclide induced by collision with another particle or to a spontaneous change of a nuclide without collision.
Natural nuclear reactions occur in the interaction between cosmic rays and matter, and nuclear reactions can be employed artificially to obtain nuclear energy, at an adjustable rate, on-demand. Nuclear chain reactions in fissionable materials produce induced nuclear fission. Various nuclear fusion reactions of light elements power the energy production of the Sun and stars.
History
In 1919, Ernest Rutherford was able to accomplish transmutation of nitrogen into oxygen at the University of Manchester, using alpha particles directed at nitrogen 14N + α → 17O + p. This was the first observation of an induced nuclear reaction, that is, a reaction in which particles from one decay are used to transform another atomic nucleus. Eventually, in 1932 at Cambridge University, a fully artificial nuclear reaction and nuclear transmutation was achieved by Rutherford's colleagues John Cockcroft and Ernest Walton, who used artificially accelerated protons against lithium-7, to split the nucleus into two alpha particles. The feat was popularly known as "splitting the atom", although it was not the modern nuclear fission reaction later (in 1938) discovered in heavy elements by the German scientists Otto Hahn, Lise Meitner, and Fritz Strassmann.
Nuclear reaction equations
Nuclear reactions may be shown in a form similar to chemical equations, for which invariant mass must balance for each side of the equation, and in which transformations of particles must follow certain conservation laws, such as conservation of charge and baryon number (total atomic mass number). An example of this notation follows:

$^{6}_{3}\mathrm{Li} + {}^{2}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He} + \,?$

To balance the equation above for mass, charge and mass number, the second nucleus to the right must have atomic number 2 and mass number 4; it is therefore also helium-4. The complete equation therefore reads:

$^{6}_{3}\mathrm{Li} + {}^{2}_{1}\mathrm{H} \rightarrow {}^{4}_{2}\mathrm{He} + {}^{4}_{2}\mathrm{He}$

or more simply:

$^{6}_{3}\mathrm{Li} + {}^{2}_{1}\mathrm{H} \rightarrow 2\,{}^{4}_{2}\mathrm{He}$
Instead of using the full equations in the style above, in many situations a compact notation is used to describe nuclear reactions. This style of the form A(b,c)D is equivalent to A + b producing c + D. Common light particles are often abbreviated in this shorthand, typically p for proton, n for neutron, d for deuteron, α representing an alpha particle or helium-4, β for beta particle or electron, γ for gamma photon, etc. The reaction above would be written as 6Li(d,α)α.
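To make the compact notation concrete, a tiny illustrative Python helper is sketched below; the function and its string handling are hypothetical, not part of any standard library:

```python
def expand_compact(notation: str) -> str:
    """Expand compact form 'A(b,c)D' into the full form 'A + b -> c + D'."""
    target, rest = notation.split("(")
    particles, residual = rest.split(")")
    projectile, ejectile = particles.split(",")
    return f"{target} + {projectile} -> {ejectile} + {residual}"

print(expand_compact("6Li(d,α)α"))    # 6Li + d -> α + α
print(expand_compact("14N(α,p)17O"))  # 14N + α -> p + 17O
```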
Energy conservation
Kinetic energy may be released during the course of a reaction (exothermic reaction) or kinetic energy may have to be supplied for the reaction to take place (endothermic reaction). This can be calculated by reference to a table of very accurate particle rest masses, as follows: according to the reference tables, the lithium-6 nucleus has a rest mass of 6.015 atomic mass units (abbreviated u), the deuteron has 2.014 u, and the helium-4 nucleus has 4.0026 u. Thus:
the sum of the rest mass of the individual nuclei = 6.015 + 2.014 = 8.029 u;
the total rest mass of the two helium nuclei = 2 × 4.0026 = 8.0052 u;
missing rest mass = 8.029 – 8.0052 = 0.0238 atomic mass units.
In a nuclear reaction, the total (relativistic) energy is conserved. The "missing" rest mass must therefore reappear as kinetic energy released in the reaction; its source is the nuclear binding energy. Using Einstein's mass–energy equivalence formula E = mc2, the amount of energy released can be determined. We first need the energy equivalent of one atomic mass unit:

$1\ \mathrm{u} \times c^{2} = 931.5\ \mathrm{MeV}$
Hence, the energy released is 0.0238 × 931 MeV = 22.2 MeV.
Expressed differently: the mass is reduced by about 0.3%, and since $c^{2}$ corresponds to 90 PJ/kg, the energy released is 0.3% of 90 PJ/kg, i.e. about 270 TJ/kg.
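The bookkeeping above is easy to reproduce; a short Python sketch using the masses quoted in the text:

```python
U_TO_MEV = 931.5  # energy equivalent of one atomic mass unit, MeV

# Nuclide rest masses in atomic mass units, as quoted above.
m_li6, m_d, m_he4 = 6.015, 2.014, 4.0026

def q_value(reactants, products):
    """Q = (total reactant mass - total product mass) * c^2, in MeV."""
    return (sum(reactants) - sum(products)) * U_TO_MEV

q = q_value([m_li6, m_d], [m_he4, m_he4])
print(f"Q = {q:.1f} MeV")  # ~22.2 MeV; positive, so the reaction is exothermic
```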
This is a large amount of energy for a nuclear reaction; the amount is so high because the binding energy per nucleon of the helium-4 nucleus is unusually high because the He-4 nucleus is "doubly magic". (The He-4 nucleus is unusually stable and tightly bound for the same reason that the helium atom is inert: each pair of protons and neutrons in He-4 occupies a filled 1s nuclear orbital in the same way that the pair of electrons in the helium atom occupy a filled 1s electron orbital). Consequently, alpha particles appear frequently on the right-hand side of nuclear reactions.
The energy released in a nuclear reaction can appear mainly in one of three ways:
kinetic energy of the product particles (fraction of the kinetic energy of the charged nuclear reaction products can be directly converted into electrostatic energy);
emission of very high energy photons, called gamma rays;
some energy may remain in the nucleus, as a metastable energy level.
When the product nucleus is metastable, this is indicated by placing an asterisk ("*") next to its atomic number. This energy is eventually released through nuclear decay.
A small amount of energy may also emerge in the form of X-rays. Generally, the product nucleus has a different atomic number, and thus the configuration of its electron shells is wrong. As the electrons rearrange themselves and drop to lower energy levels, internal transition X-rays (X-rays with precisely defined emission lines) may be emitted.
Q-value and energy balance
In writing down the reaction equation, in a way analogous to a chemical equation, one may in addition give the reaction energy on the right side:

target nucleus + projectile → final nucleus + ejectile + Q

For the particular case discussed above, the reaction energy has already been calculated as Q = 22.2 MeV. Hence:

$^{6}_{3}\mathrm{Li} + {}^{2}_{1}\mathrm{H} \rightarrow 2\,{}^{4}_{2}\mathrm{He} + 22.2\ \mathrm{MeV}$
The reaction energy (the "Q-value") is positive for exothermal reactions and negative for endothermal reactions, opposite to the similar expression in chemistry. On the one hand, it is the difference between the sums of kinetic energies on the final side and on the initial side. But on the other hand, it is also the difference between the nuclear rest masses on the initial side and on the final side (in this way, we have calculated the Q-value above).
Reaction rates
If the reaction equation is balanced, that does not mean that the reaction really occurs. The rate at which reactions occur depends on the energy and the flux of the incident particles, and the reaction cross section. An example of a large repository of reaction rates is the REACLIB database, as maintained by the Joint Institute for Nuclear Astrophysics.
Charged vs. uncharged particles
In the initial collision which begins the reaction, the particles must approach closely enough so that the short-range strong force can affect them. As most common nuclear particles are positively charged, this means they must overcome considerable electrostatic repulsion before the reaction can begin. Even if the target nucleus is part of a neutral atom, the other particle must penetrate well beyond the electron cloud and closely approach the nucleus, which is positively charged. Thus, such particles must be first accelerated to high energy, for example by:
particle accelerators;
nuclear decay (alpha particles are the main type of interest here since beta and gamma rays are rarely involved in nuclear reactions);
very high temperatures, on the order of millions of degrees, producing thermonuclear reactions;
cosmic rays.
Also, since the force of repulsion is proportional to the product of the two charges, reactions between heavy nuclei are rarer, and require higher initiating energy, than those between a heavy and light nucleus; while reactions between two light nuclei are the most common ones.
Neutrons, on the other hand, have no electric charge to cause repulsion, and are able to initiate a nuclear reaction at very low energies. In fact, at extremely low particle energies (corresponding, say, to thermal equilibrium at room temperature), the neutron's de Broglie wavelength is greatly increased, possibly greatly increasing its capture cross-section, at energies close to resonances of the nuclei involved. Thus low-energy neutrons may be even more reactive than high-energy neutrons.
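As a rough, hedged illustration of this effect, the following sketch estimates the de Broglie wavelength of a thermal neutron, taking the mean thermal energy (3/2)kT at an assumed room temperature of 293 K:

```python
import math

H = 6.626e-34    # Planck constant, J*s
K_B = 1.381e-23  # Boltzmann constant, J/K
M_N = 1.675e-27  # neutron mass, kg

def thermal_neutron_wavelength(T):
    """lambda = h / sqrt(2 * m * E), with E = (3/2) * k_B * T."""
    E = 1.5 * K_B * T
    return H / math.sqrt(2 * M_N * E)

# ~1.5e-10 m at room temperature: atomic-scale, and enormous compared
# with the ~1e-15 m radius of the nucleus itself.
print(f"{thermal_neutron_wavelength(293.0):.2e} m")
```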
Notable types
While the number of possible nuclear reactions is immense, there are several types that are more common, or otherwise notable. Some examples include:
Fusion reactions – two light nuclei join to form a heavier one, with additional particles (usually protons or neutrons) emitted subsequently.
Spallation – a nucleus is hit by a particle with sufficient energy and momentum to knock out several small fragments or smash it into many fragments.
Induced gamma emission belongs to a class in which only photons were involved in creating and destroying states of nuclear excitation.
Fission reactions – a very heavy nucleus, after absorbing additional light particles (usually neutrons), splits into two or sometimes three pieces. This is an induced nuclear reaction. Spontaneous fission, which occurs without the assistance of a neutron, is usually not considered a nuclear reaction; in any case, it is not an induced nuclear reaction.
Direct reactions
An intermediate energy projectile transfers energy or picks up or loses nucleons to the nucleus in a single quick (10−21 second) event. Energy and momentum transfer are relatively small. These are particularly useful in experimental nuclear physics, because the reaction mechanisms are often simple enough to calculate with sufficient accuracy to probe the structure of the target nucleus.
Inelastic scattering
Only energy and momentum are transferred.
(p,p') tests differences between nuclear states.
(α,α') measures nuclear surface shapes and sizes. Since α particles that hit the nucleus react more violently, elastic and shallow inelastic α scattering are sensitive to the shapes and sizes of the targets, like light scattered from a small black object.
(e,e') is useful for probing the interior structure. Since electrons interact less strongly than do protons and neutrons, they reach to the centers of the targets and their wave functions are less distorted by passing through the nucleus.
Charge-exchange reactions
Energy and charge are transferred between projectile and target. Some examples of this kind of reactions are:
(p,n)
(3He,t)
Nucleon transfer reactions
Usually at moderately low energy, one or more nucleons are transferred between the projectile and target. These are useful in studying outer shell structure of nuclei. Transfer reactions can occur:
from the projectile to the target - stripping reactions
from the target to the projectile - pick-up reactions
Examples:
(α,n) and (α,p) reactions. Some of the earliest nuclear reactions studied involved an alpha particle produced by alpha decay, knocking a nucleon from a target nucleus.
(d,n) and (d,p) reactions. A deuteron beam impinges on a target; the target nuclei absorb either the neutron or proton from the deuteron. The deuteron is so loosely bound that this is almost the same as proton or neutron capture. A compound nucleus may be formed, leading to additional neutrons being emitted more slowly. (d,n) reactions are used to generate energetic neutrons.
The strangeness exchange reaction (K, π) has been used to study hypernuclei.
The reaction 14N(α,p)17O performed by Rutherford in 1917 (reported 1919), is generally regarded as the first nuclear transmutation experiment.
Reactions with neutrons
Reactions with neutrons are important in nuclear reactors and nuclear weapons. While the best-known neutron reactions are neutron scattering, neutron capture, and nuclear fission, for some light nuclei (especially odd-odd nuclei) the most probable reaction with a thermal neutron is a transfer reaction, for example:

3He + n → T + p
6Li + n → T + α
10B + n → 7Li + α
14N + n → 14C + p
Some reactions are only possible with fast neutrons:
(n,2n) reactions produce small amounts of protactinium-231 and uranium-232 in the thorium cycle which is otherwise relatively free of highly radioactive actinide products.
9Be + n → 2α + 2n can contribute some additional neutrons in the beryllium neutron reflector of a nuclear weapon.
7Li + n → T + α + n unexpectedly contributed additional yield in the Bravo, Romeo and Yankee shots of Operation Castle, the three highest-yield nuclear tests conducted by the U.S.
Compound nuclear reactions
Either a low-energy projectile is absorbed or a higher energy particle transfers energy to the nucleus, leaving it with too much energy to be fully bound together. On a time scale of about 10−19 seconds, particles, usually neutrons, are "boiled" off. That is, it remains together until enough energy happens to be concentrated in one neutron to escape the mutual attraction. The excited quasi-bound nucleus is called a compound nucleus.
Low energy (e, e' xn), (γ, xn) (the xn indicating one or more neutrons), where the gamma or virtual gamma energy is near the giant dipole resonance. These increase the need for radiation shielding around electron accelerators.
| Physical sciences | Nuclear physics | Physics |
460617 | https://en.wikipedia.org/wiki/Hornwort | Hornwort | Hornworts are a group of non-vascular Embryophytes (land plants) constituting the division Anthocerotophyta (). The common name refers to the elongated horn-like structure, which is the sporophyte. As in mosses and liverworts, hornworts have a gametophyte-dominant life cycle, in which cells of the plant carry only a single set of genetic information; the flattened, green plant body of a hornwort is the gametophyte stage of the plant.
Hornworts may be found worldwide, though they tend to grow only in places that are damp or humid. Some species grow in large numbers as tiny weeds in the soil of gardens and cultivated fields. Large tropical and sub-tropical species of Dendroceros may be found growing on the bark of trees.
The total number of species is still uncertain. While there are more than 300 published species names, the actual number could be as low as 100–150 species.
Description
Like all bryophytes, the dominant life phase of a hornwort is the haploid gametophyte. This stage usually grows as a thin rosette or ribbon-like thallus between one and five centimeters in diameter. Hornworts have lost two plastid division-associated genes, ARC3 and FtsZ2, and have just a single chloroplast per cell (monoplastidy), with the exception of the genus Megaceros and some species in the genera Nothoceros and Anthoceros, which have more than one chloroplast per cell (polyplastidy). In the polyplastidic species, and also some of the monoplastidic species, a cellular structure called a pyrenoid is absent. The pyrenoid is a liquid-like organelle which enables more efficient photosynthesis; it has evolved independently five to six times in hornworts and is present in half of the roughly 200 species. It is formed by the fusion of the chloroplast with other organelles and is composed predominantly of RuBisCO, the key enzyme in carbon fixation. By using inorganic carbon transporters and carbonic anhydrases, up to a 50-fold increase in CO2 levels can be achieved. This particular feature is very unusual in land plants, unique to hornworts, but is common among algae. They are also the only group of land plants where flavonoids are completely absent.
Many hornworts develop internal mucilage-filled cavities or canals when groups of cells break down. These cavities secrete hormogonium-inducing factors (HIF) that stimulate nearby, free-living photosynthetic cyanobacteria, especially species of Nostoc, to invade and colonize these cavities. Such colonies of bacteria growing inside the thallus give the hornwort a distinctive blue-green color. Symbiotic cyanobacteria have not been reported in Megaceros or Folioceros. There may also be small slime pores on the underside of the thallus. These pores superficially resemble the stomata of other plants.
The horn-shaped sporophyte grows from an archegonium embedded deep in the gametophyte. The growth of the hornwort sporophyte happens from a persistent basal meristem, in contrast to the sporophyte of moss (apical growth) and liverworts (intercalary growth). Unlike liverworts, hornworts have true stomata on their sporophyte as most mosses do. The exceptions are the species Folioceros incurvus, the genus Notothylas and the three closely related genera Megaceros, Nothoceros and Dendroceros, which do not have stomata. Notothylas also differ from other hornworts in having a reduced sporophyte only a few millimeters tall. The sporophyte in hornworts is unique among bryophytes in being long-lived with a persistent photosynthetic capacity. The sporophyte lacks an apical meristem; this auxin-sensitive point of divergence from other land plants dates to some time in the Late Silurian/Early Devonian.
When the sporophyte is mature, it has a multicellular outer layer, a central rod-like columella running up the center, and a layer of tissue in between that produces spores and pseudo-elaters. The pseudo-elaters are multi-cellular, unlike the elaters of liverworts. They have helical thickenings that change shape in response to drying out; they twist and thereby help to disperse the spores. Hornwort spores are relatively large for bryophytes, measuring between 30 and 80 μm in diameter or more. The spores are polar, usually with a distinctive Y-shaped tri-radiate ridge on the proximal surface, and with a distal surface ornamented with bumps or spines.
Life cycle
The life of a hornwort starts from a haploid spore. The spores can be yellow, brown or green. Yellow and brown spores have a thicker wall and contain oils that both protect against desiccation and function as a nutrient storage, allowing them to survive for years. The species Folioceros fuciformis and the genera Megaceros, Nothoceros and Dendroceros have short-lived spores with thin and colorless walls that appear green due to the presence of a chloroplast. In most species, there is a single cell inside the spore, and a slender extension of this cell called the germ tube germinates from the proximal side of the spore. The tip of the germ tube divides to form an octant of cells, and the first rhizoid grows as an extension of the original germ cell. The tip continues to divide new cells, which produces a thalloid protonema. By contrast, species of the family Dendrocerotaceae may begin dividing within the spore, becoming multicellular and even photosynthetic before the spore germinates. In either case, the protonema is a transitory stage in the life of a hornwort.
From the protonema grows the adult gametophyte, which is the persistent and independent stage in the life cycle. This stage usually grows as a thin rosette or ribbon-like thallus between one and five centimeters in diameter, and several layers of cells in thickness. It is green or yellow-green from the chlorophyll in its cells, or bluish-green when colonies of cyanobacteria grow inside the plant.
When the gametophyte has grown to its adult size, it produces the sex organs of the hornwort. Most plants are monoecious, with both sex organs on the same plant, but some plants (even within the same species) are dioecious, with separate male and female gametophytes. The female organs are known as archegonia (singular archegonium) and the male organs are known as antheridia (singular antheridium). Both kinds of organs develop just below the surface of the plant and are only later exposed by disintegration of the overlying cells.
The biflagellate sperm must swim from the antheridia, or else be splashed to the archegonia. When this happens, the sperm and egg cell fuse to form a zygote, the cell from which the sporophyte stage of the life cycle will develop. Unlike all other bryophytes, the first cell division of the zygote is longitudinal. Further divisions produce three basic regions of the sporophyte.
At the bottom of the sporophyte (closest to the interior of the gametophyte), is a foot. This is a globular group of cells that receives nutrients from the parent gametophyte, on which the sporophyte will spend its entire existence. In the middle of the sporophyte (just above the foot), is a meristem that will continue to divide and produce new cells for the third region. This third region is the capsule. Both the central and surface cells of the capsule are sterile, but between them is a layer of cells that will divide to produce pseudo-elaters and spores. These are released from the capsule when it splits lengthwise from the tip.
Evolutionary history
While the fossil record of crown group hornworts only begins in the upper Cretaceous, the lower Devonian Horneophyton may represent a stem group to the clade, as it possesses a sporangium with central columella not attached at the roof. However, the same form of columella is also characteristic of basal moss groups, such as the Sphagnopsida and Andreaeopsida, and has been interpreted as a character common to all early land plants with stomata. The divergence between hornworts and Setaphyta (mosses and liverworts) is estimated to have occurred 479–450 million years ago, and the last common ancestor of present-day hornworts lived in the middle Permian, about 275 million years ago. Chromosome-scale genome sequencing of three hornwort species corroborates that stomata evolved only once during land plant evolution. It also shows that the three groups of bryophytes share a common ancestor that branched off from the other land plants early in evolution, and that liverworts and mosses are more closely related to each other than to hornworts. Unlike other land plants, the hornwort genome has the low-CO2-inducible B gene (LCIB), which is also found in some species of algae. Because the diffusion rate of carbon dioxide is 10,000-fold higher in air than in water, aquatic algae require a mechanism to concentrate CO2 in chloroplasts so as to allow the photosynthetic RuBisCO protein to function efficiently. LCIB is one component of this CO2-concentrating mechanism.
Classification
Hornworts were traditionally considered a class within the division Bryophyta (bryophytes). Later on, the bryophytes were considered paraphyletic, and hence the hornworts were given their own division, Anthocerotophyta (sometimes misspelled Anthocerophyta). However, the most recent phylogenetic evidence leans strongly towards bryophyte monophyly, and it has been proposed that hornworts are de-ranked to the original class Anthocerotopsida.
Traditionally, there was a single class of hornworts, called Anthocerotopsida or, in older works, Anthocerotae. More recently, a second class, Leiosporocerotopsida, has been segregated for the singularly unusual species Leiosporoceros dussii. All other hornworts remain in the class Anthocerotopsida. These two classes are divided further into five orders, each containing a single family.
Among land plants, hornworts are one of the earliest-diverging lineages of the early land plant ancestors; cladistic analysis implies that the group originated prior to the Devonian, around the same time as the mosses and liverworts. There are about 200 species known, but new species are still being discovered. The number and names of genera are a current matter of investigation, and several competing classification schemes have been published since 1988.
Structural features that have been used in the classification of hornworts include: the anatomy of chloroplasts and their numbers within cells, the presence of a pyrenoid, the numbers of antheridia within androecia, and the arrangement of jacket cells of the antheridia.
Phylogeny
Recent studies of molecular, ultrastructural, and morphological data have yielded a new classification of hornworts.
| Biology and health sciences | Bryophytes | null |
460791 | https://en.wikipedia.org/wiki/Prowfish | Prowfish | The prowfish (Zaprora silenus) is a species of scorpaeniform marine fish found in the northern Pacific Ocean. It is the only extant member of the family Zaproridae. Two extinct species are known only from fossil records: Zaprora koreana, from the Middle Miocene-aged Duho Formation in Pohang, South Korea, and Araeosteus rothi, a member of a second genus, known from the Monterey Formation and the Modelo Formation in Southern California.
Prowfish range from the Aleutian Islands, Alaska west to Kamchatka, Russia; from Navarin Canyon in the Bering Sea south to Hokkaidō, Japan and Monterey, California. An otherwise little-known species, prowfish are important to subsistence fisheries in remote regions.
Growing to nearly a metre in length, prowfish have stout, laterally compressed and elongated bodies. They have a single, somewhat high dorsal fin running nearly the entire length of the back; it may contain 54–58 pliable spines. The anal fin is also fairly extensive. The tail fin is large, rounded and truncated; the pectoral fins are enlarged and pelvic fins are conspicuously absent. The mouth is slightly upturned with small, closely set, sharp teeth confined to the jaws. The head is convex, ending in a projecting snout. This explains the family name Zaproridae, from the Greek za, an intensifier, and prora meaning "prow". The species name silenus is a reference to Silenus, a figure in Greek mythology.
The distinctive head of the prowfish also features a number of sensory pores made all the more obvious by fringes of blue or white. Prowfish have small ctenoid scales and a variable coloration; typically, they are bluish-grey to olive brown with small dark spots, grading to lighter shades ventrally. The lateral line and swim bladder are absent.
Prowfish prefer rocky substrates and range from relatively shallow waters down to considerable depths. They are benthic animals, spending most of their time on or near the bottom. Their diet consists principally of scyphozoans and salps; prowfish use their large mouths to tear chunks from the bells of jellyfish and ctenophores. Prowfish may also eat smaller fish and amphipods; however, juveniles feed exclusively on jellyfish. Larger skates and Pacific halibut are known predators of prowfish.
Little is known of prowfish reproduction, but juveniles have been observed to be pelagic; unlike adults, they spend their time in the middle levels of the water column, closely associated with their jellyfish prey. Indeed, juvenile prowfish will seek shelter from predators within the bells of larger jellies. This behaviour has led to their confusion with the medusafish (Icichthys lockingtoni) of the family Centrolophidae. Most female prowfish are thought to reach maturity at around five years. There is little sexual dimorphism; females are slightly heavier for their length.
Timeline
| Biology and health sciences | Acanthomorpha | Animals |
461082 | https://en.wikipedia.org/wiki/Heath | Heath | A heath () is a shrubland habitat found mainly on free-draining infertile, acidic soils and is characterised by open, low-growing woody vegetation. Moorland is generally related to high-ground heaths with—especially in Great Britain—a cooler and damper climate.
Heaths are widespread worldwide but are fast disappearing and considered a rare habitat in Europe. They form extensive and highly diverse communities across Australia in humid and sub-humid areas where fire regimes with recurring burning are required for the maintenance of the heathlands. Even more diverse though less widespread heath communities occur in Southern Africa. Extensive heath communities can also be found in the Texas chaparral, New Caledonia, central Chile, and along the shores of the Mediterranean Sea. In addition to these extensive heath areas, the vegetation type is also found in scattered locations across all continents, except Antarctica.
Characteristics
Heathland is favoured where climatic conditions are typically hard and dry, particularly in summer, and soils acidic, of low fertility, and often sandy and very free-draining; a mire may occur where drainage is poor, but usually is only small in extent. Heaths are dominated by low shrubs, 20 centimetres to 2 metres tall.
Heath vegetation can be extremely plant-species rich, and heathlands of Australia are home to some 3,700 endemic or typical species in addition to numerous less restricted species. The fynbos heathlands of South Africa are second only to tropical rainforests in plant biodiversity with over 7,000 species. In marked contrast, the tiny pockets of heathland in Europe are extremely depauperate with a flora consisting primarily of heather (Calluna vulgaris), heath (Erica species) and gorse (Ulex species).
The bird fauna of heathlands are usually cosmopolitan species of the region. In the depauperate heathlands of Europe, bird species tend to be more characteristic of the community, and include Montagu's harrier and the tree pipit. In Australia the heathland avian fauna is dominated by nectar-feeding birds such as honey-eaters and lorikeets, although numerous other birds from emus to eagles are also common in Australian heathlands. The birds of the South African fynbos include sunbirds, warblers and siskins. Heathlands are also an excellent habitat for insects including ants, moths, butterflies and wasps; many species are restricted entirely to it. One such example of an organism restricted to heathland is the silver-studded blue butterfly, Plebejus argus.
Anthropogenic heaths
Anthropogenic heath habitats are a cultural landscape that can be found worldwide in locations as diverse as northern and western Europe, the Americas, Australia, New Zealand, Madagascar and New Guinea.
These heaths were originally made or expanded by centuries of human clearance of the natural forest and woodland vegetation, by grazing and burning. In some cases this clearance went so far that parts of the heathland have given way to open spots of pure sand and sand dunes, with a local surface climate that, even in Europe, can become very hot in summer, drying the sand bordering the heathland and further raising its vulnerability to wildfires. Referring to heathland in England, Oliver Rackham says, "Heaths are clearly the product of human activities and need to be managed as heathland; if neglected they turn into woodland".
The conservation value of these human-made heaths has become much more appreciated due to their historical cultural value as habitats; consequently, most heathlands are protected. However they are also threatened by tree incursion because of the discontinuation of traditional management techniques, such as grazing and burning, that mediated the landscapes. Some are also threatened by urban sprawl. Anthropogenic heathlands are maintained artificially by a combination of grazing and periodic burning (known as swailing), or (rarely) mowing; if not so maintained, they are rapidly recolonised by forest or woodland. The recolonising tree species will depend on what is available as the local seed source, and thus it may not reflect the natural vegetation before the heathland became established.
In literature
The heath features prominently in:
King Lear, by William Shakespeare
Wuthering Heights, by Emily Brontë
The Return of the Native, by Thomas Hardy
Ethan Frome, by Edith Wharton
The Letters of Vincent van Gogh, by Vincent van Gogh
| Physical sciences | Biomes: General | Earth science |
461263 | https://en.wikipedia.org/wiki/Mammutidae | Mammutidae | Mammutidae is an extinct family of proboscideans belonging to Elephantimorpha. It is best known for the mastodons (genus Mammut), which inhabited North America from the Late Miocene (around 8 million years ago) until their extinction at the beginning of the Holocene, around 11,000 years ago. The earliest fossils of the group are known from the Late Oligocene of Africa, around 24 million years ago, and fossils of the group have also been found across Eurasia. The name "mastodon" derives from the Greek words for "nipple" and "tooth", referring to their characteristic teeth.
Description
Mammutids are characterised by their zygodont molars, where pairs of parallel cusps are merged into sharp-sided ridges; these teeth are morphologically conservative and differ little between mammutid species. Like other members of Elephantimorpha, mammutids exhibited horizontal tooth replacement, as in modern elephants. Some authors have argued that horizontal tooth replacement evolved in parallel in mammutids and members of Elephantida (which includes gomphotheres and elephants), though this is uncertain. Compared to modern elephants, the bones of most mammutids were more robust, with the limb bones in particular being massive; the legs were proportionally shorter than those of living elephants, while the bodies were proportionally more elongate. Early members of the group like Eozygodon and Zygolophodon had an elongate mandibular symphysis (the front-most part of the lower jaw) bearing lower incisors/tusks (which tend to be flattened and narrow in shape), while in later representatives like Sinomammut and Mammut, the lower incisors/tusks were either lost or only vestigially present, and the lower jaws shortened (brevirostrine). This process happened convergently amongst other elephantimorph proboscideans, including gomphotheres, stegodontids, and elephantids. Mammutids are thought to have had prehensile trunks like those of living elephants, with those of Mammut suggested to have been possibly long enough to reach the ground. The upper tusks in primitive mammutids are relatively small as well as being downward (ventrally) and outward (laterally) curving, while those of mastodons (Mammut) are large and upward curving, often reaching great lengths. The mammutid "Mammut" borsoni is one of the largest of all proboscideans, with an estimated average male body mass that makes it one of the largest land mammals of all time, and the tusks of this species are the longest known of any animal.
Ecology
Members of Mammutidae are thought to have been primarily browsers on the foliage and twigs of trees and shrubs. The jaws of mammutids are adapted to powerful vertical biting (orthal movement) that served to crush food items and, to a considerably lesser extent, grind them with side-to-side movement. Analysis of American mastodon (Mammut americanum) remains suggests that mammutids had a similar social structure to modern elephants, with herds of adult females and juveniles, and with adult males living solitarily or in bonding groups with other males, periodically engaging in musth-like fighting behaviour against other males.
Evolution
Mammutids are the most basal group within Elephantimorpha, with gomphotheres and other members of Elephantida like amebelodonts being more closely related to elephants. Cladogram after Li et al. (2024).
Mammutids originated in Africa during the Late Oligocene, with the oldest genus Losodokodon dating to around 27.5-24 million years ago. Mammutids belonging to the genus Zygolophodon (as well as possibly other mammutid genera) entered Eurasia across the "Gomphotherium land bridge" during the early Miocene, around 18 million years ago. Mammutid remains are generally rare in Eurasia in comparison to contemporary gomphotheres and deinotheres. During the late early Miocene, around 16.5 million years ago, a population of Zygolophodon entered North America, giving rise to Mammut. The youngest confirmed records of mammutids in Africa date to around 13 million years ago, though possible Late Miocene fossils have been reported from North Africa. At the beginning of the Pleistocene, around 2 to 2.5 million years ago, the last of the Eurasian mammutids, "Mammut" borsoni, became extinct, with members of Mammut persisting in North America until the end of the Pleistocene, approximately 11,000 years ago.
| Biology and health sciences | Proboscidea | Animals |
461407 | https://en.wikipedia.org/wiki/Longleaf%20pine | Longleaf pine | The longleaf pine (Pinus palustris) is a pine species native to the Southeastern United States, found along the coastal plain from East Texas to southern Virginia, extending into northern and central Florida. In this area it is also known as "yellow pine" or "long leaf yellow pine", although it is properly just one of a number of species termed yellow pine. It is a large tree; in the past, before extensive logging, individuals reportedly grew even taller and thicker than those seen today. The tree is a cultural symbol of the Southern United States, being the official state tree of Alabama. It is also one of the eight pine species that fall under the "pine" designation as the state tree of North Carolina.
Description
The bark is thick, reddish-brown, and scaly. The leaves are dark green and needle-like, and occur in bundles of mainly three, sometimes two or four, especially in seedlings. They often are twisted and notably long. A local race of P. palustris in a cove near Rockingham, North Carolina, has needles up to 24 inches (61 centimeters) in length. It is one of the two Southeastern U.S. pines with long needles, the other being slash pine.
The cones, both female seed cones (ovulate strobili) and male pollen cones (staminate strobili), are initiated during the growing season before buds emerge. Pollen cones begin forming in their buds in July, while seed conelets are formed during a relatively short period of time in August. Pollination occurs early the following spring. The female (seed) cones mature in about 20 months from pollination; when mature, they are yellow-brown in color and have a small, but sharp, downward-pointing spine on the middle of each scale. The seeds are winged.
Longleaf pine takes 100 to 150 years to become full size and may live to be 500 years old. When young, trees grow a long taproot; by maturity, they have a wide spreading lateral root system with several deep 'sinker' roots. They grow on well-drained, usually sandy soil, characteristically in pure stands.
Longleaf pine also is known as being one of several species grouped as a southern yellow pine or longleaf yellow pine, and in the past as pitch pine (a name dropped as it caused confusion with pitch pine, Pinus rigida).
Etymology
The species epithet palustris is Latin for "of the marsh" and indicates its common habitat. The scientific name meaning "of marshes" is a misunderstanding on the part of Philip Miller, who described the species, after seeing longleaf pine forests with temporary winter flooding.
Ecology
Longleaf pine is highly pyrophytic (resistant to wildfire) and dependent on fire. Their thick bark and growth habits help to provide a tolerance to fire. Periodic natural wildfires and anthropogenic fires select for this species by removing competition and exposing bare soil for successful germination of seeds. The lack of medium-tall trees (called a midstory canopy) leads to open longleaf pine forests or savannas. New seedlings do not appear at all tree-like and resemble a dark-green fountain of needles. This form is called the grass stage. During this stage, which lasts for 5–12 years, vertical growth is very slow, and the tree may take a number of years simply to grow ankle high. After that, it has a growth spurt, especially if it is in a gap or no tree canopy is above it. In the grass stage, it is very resistant to low-intensity fires because the terminal bud is protected from lethal heating by the tightly packed needles. While relatively immune to fire at this stage, the plant is quite appealing to feral pigs; the early settlers' habit of releasing swine into the woodlands to feed may have been partly responsible for the decline of the species.
Longleaf pine forests are rich in biodiversity. They are well-documented for their high levels of plant diversity, in groups including sedges, grasses, carnivorous plants, and orchids. These forests also provide habitat for gopher tortoises, which as keystone species, dig burrows that provide habitat for hundreds of other species of animals. The red-cockaded woodpecker is dependent on mature pine forests and is now endangered as a result of this decline. Longleaf pine seeds are large and nutritious, forming a significant food source for birds (notably the brown-headed nuthatch) and other wildlife. Nine salamander species and 26 frog species are characteristic of pine savannas, along with 56 species of reptiles, 13 of which could be considered specialists on this habitat.
The Red Hills Region of Florida and Georgia is home to some of the best-preserved stands of longleaf pines. These forests have been burned regularly for many decades to encourage bobwhite quail habitat in private hunting plantations.
Native range, restoration, and protection
Before European settlement, longleaf pine forest dominated a vast area of the coastal plain, stretching from Virginia south to Florida and west to East Texas. Its range was defined by the frequent widespread fires that were lit by humans and occurred naturally throughout the southeast. In the late 19th century, these virgin timber stands were "among the most sought-after timber trees in the country." This rich ecosystem now has been relegated to less than 5% of its presettlement range due to fire suppression and clear-cutting practices:
As they stripped the woods of their trees, loggers left mounds of flammable debris that frequently fueled catastrophic fires, destroying both the remaining trees and seedlings. The exposed earth left behind by clear-cutting operations was highly susceptible to erosion, and nutrients were washed from the already porous soils. This further destroyed the natural seeding process. At the peak of the timber cutting in the 1890s and first decade of the new century, the longleaf pine forests of the Sandhills were providing millions of board feet of timber each year. The timber cutters gradually moved across the South; by the 1920s, most of the "limitless" virgin longleaf pine forests were gone.
— Jerry Simmons, "ASLC Large Operation from Beginnings"
In "pine barrens" most of the day. Low, level, sandy tracts; the pines wide apart; the sunny spaces between full of beautiful abounding grasses, liatris, long, wand-like solidago, saw palmettos, etc., covering the ground in garden style. Here I sauntered in delightful freedom, meeting none of the cat-clawed vines, or shrubs, of the alluvial bottoms.
– John Muir
Efforts are being made to restore longleaf pine ecosystems within its natural range. Some groups such as the Longleaf Alliance are actively promoting research, education, and management of the longleaf pine.
The USDA offers cost-sharing and technical assistance to private landowners for longleaf restoration through the NRCS Longleaf Pine Initiative. Similar programs are available through most state forestry agencies in the longleaf's native range. In August 2009, the Alabama Forestry Commission received $1.757 million in stimulus money to restore longleaf pines in state forests.
Four large core areas within the range of the species provide the opportunity to protect the biological diversity of the coastal plain and to restore wilderness areas east of the Mississippi River. Each of these four (Eglin Air Force Base: 187,000+ ha; Apalachicola National Forest: 228,000+ ha; Okefenokee-Osceola: 289,000+ ha; De Soto National Forest: 200,000+ ha) have nearby lands that offer the potential to expand the total protected territory for each area to well beyond 500,000 ha. These areas would provide the opportunity not only to restore forest stands, but also to restore populations of native plants and animals threatened by landscape fragmentation.
Notable eccentric populations exist within the Uwharrie National Forest in the central Piedmont region of North Carolina. These have survived owing to relative inaccessibility, and in one instance, intentional protection in the 20th century by a private landowner (a property now owned and conserved by the LandTrust for Central North Carolina).
The United States Forest Service is conducting prescribed burning programs in the 258,864-acre Francis Marion National Forest, located outside of Charleston, South Carolina. They hope to substantially increase the extent of the longleaf pine forest type by 2017 and further in the long term. In addition to longleaf restoration, prescribed burning will enhance the endangered red-cockaded woodpeckers' preferred habitat of open, park-like stands, provide habitat for wildlife dependent on grass-shrub habitat, which is very limited, and reduce the risk of damaging wildfires.
Since the 1960s, longleaf restoration has been ongoing on almost 95,000 acres of state and federal land in the sandhills region of South Carolina, between the piedmont and coastal plain. The region is characterized by deep, infertile sands deposited by a prehistoric sea, with generally arid conditions. By the 1930s, most of the native longleaf had been logged, and the land was heavily eroded. Between 1935 and 1939, the federal government purchased large portions of this area from local landowners as a relief measure under the Resettlement Administration. These landowners were resettled on more fertile land elsewhere. Today, the South Carolina Sand Hills State Forest comprises about half of the acreage, and half is owned by the United States Fish and Wildlife Service as the adjacent Carolina Sandhills National Wildlife Refuge. At first, restoration of forest cover was the goal. Fire suppression was practiced until the 1960s, when prescribed fire was introduced on both the state forest and the Sandhills NWR as part of the restoration of the longleaf/wiregrass ecosystem.
Nokuse Plantation is a 53,000-acre private nature preserve located around 100 miles east of Pensacola, Florida. The preserve was established by M.C. Davis, a wealthy philanthropist who made his fortune buying and selling land and mineral rights, and who has spent $90 million purchasing land for the preserve, primarily from timber companies. One of its main goals is the restoration of longleaf pine forest, to which end he has had 8 million longleaf pine seedlings planted on the land.
A 2009 study by the National Wildlife Federation says that longleaf pine forests will be particularly well adapted to environmental changes caused by climate disruption.
In 2023, the Virginia Department of Conservation and Recreation announced a plan to reintroduce longleaf pines to the Dendron Swamp Natural Area Preserve, with seedlings propagated from cones collected at South Quay Sandhills Natural Area Preserve.
Uses
Vast forests of longleaf pine once were present along the southeastern Atlantic coast and Gulf Coast of North America, as part of the eastern savannas. These forests were the source of naval stores (resin, turpentine, and timber) needed by merchants and the navy for their ships. They have since been cut over for timber and usually replaced with faster-growing loblolly pine and slash pine, for agriculture, and for urban and suburban development. Due to this deforestation and overharvesting, only about 3% of the original longleaf pine forest remains, and little new is planted. Longleaf pine is available, however, at many nurseries within its range; the southernmost known point of sale is in Lake Worth Beach, Florida.
The yellow, resinous wood is used for lumber and pulp. Boards cut years ago from virgin timber were very wide, and a thriving salvage business obtains these boards from demolition projects to be reused as flooring in upscale homes.
The extremely long needles are popular for use in the ancient craft of coiled basket making.
Annual sales of pine straw for use as mulch were estimated at $200M in 2021.
The stumps and taproots of old trees become saturated with resin and will not rot. Farmers sometimes find old buried stumps in fields, even in some that were cleared a century ago, and these usually are dug up and sold as fatwood, "fat lighter", or "lighter wood", which is in demand as kindling for fireplaces, wood stoves, and barbecue pits. In old-growth pine, the heartwood of the bole is often saturated in the same way. When boards are cut from the fat lighter wood, they are very heavy and will not rot, but buildings constructed of them are quite flammable and make extremely hot fires.
The seeds of the longleaf pine are edible raw or roasted.
Culture
The longleaf pine is the official state tree of Alabama. It is referenced by name in the first line of the official North Carolina State Toast. Also, the state's highest honor is named the "Order of the Long Leaf Pine". The state tree of North Carolina is officially designated as simply "pine", under which this and seven other species fall.
| Biology and health sciences | Pinaceae | Plants |
461454 | https://en.wikipedia.org/wiki/Conservative%20vector%20field | Conservative vector field | In vector calculus, a conservative vector field is a vector field that is the gradient of some function. A conservative vector field has the property that its line integral is path independent; the choice of path between two points does not change the value of the line integral. Path independence of the line integral is equivalent to the vector field under the line integral being conservative. A conservative vector field is also irrotational; in three dimensions, this means that it has vanishing curl. An irrotational vector field is necessarily conservative provided that the domain is simply connected.
Conservative vector fields appear naturally in mechanics: They are vector fields representing forces of physical systems in which energy is conserved. For a conservative system, the work done in moving along a path in a configuration space depends on only the endpoints of the path, so it is possible to define potential energy that is independent of the actual path taken.
Informal treatment
In two- and three-dimensional space, there is an ambiguity in taking an integral between two points as there are infinitely many paths between the two points; apart from the straight line formed between the two points, one could choose a curved path of greater length as shown in the figure. Therefore, in general, the value of the integral depends on the path taken. However, in the special case of a conservative vector field, the value of the integral is independent of the path taken, which can be thought of as a large-scale cancellation of all elements that do not have a component along the straight line between the two points. To visualize this, imagine two people climbing a cliff; one decides to scale the cliff by going vertically up it, and the second decides to walk along a winding path that is longer in length than the height of the cliff, but at only a small angle to the horizontal. Although the two hikers have taken different routes to get up to the top of the cliff, at the top, they will have both gained the same amount of gravitational potential energy. This is because a gravitational field is conservative.
Intuitive explanation
M. C. Escher's lithograph print Ascending and Descending illustrates a non-conservative vector field, impossibly made to appear to be the gradient of the varying height above ground (gravitational potential) as one moves along the staircase. The force field experienced by the one moving on the staircase is non-conservative in that one can return to the starting point while ascending more than one descends or vice versa, resulting in nonzero work done by gravity. On a real staircase, the height above the ground is a scalar potential field: one has to go upward exactly as much as one goes downward in order to return to the same place, in which case the work by gravity totals to zero. This suggests path-independence of work done on the staircase; equivalently, the force field experienced is conservative (see the later section: Path independence and conservative vector field). The situation depicted in the print is impossible.
Definition
A vector field $\mathbf{v}\colon U \to \mathbb{R}^n$, where $U$ is an open subset of $\mathbb{R}^n$, is said to be conservative if there exists a (continuously differentiable) scalar field $\varphi$ on $U$ such that
$$\mathbf{v} = \nabla \varphi.$$
Here, $\nabla \varphi$ denotes the gradient of $\varphi$. Since $\varphi$ is continuously differentiable, $\mathbf{v}$ is continuous. When the equation above holds, $\varphi$ is called a scalar potential for $\mathbf{v}$.
The fundamental theorem of vector calculus states that, under some regularity conditions, any vector field can be expressed as the sum of a conservative vector field and a solenoidal field.
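As a concrete numerical sketch of the definition above (not part of the original article; Python with NumPy is assumed, and the potential and paths are arbitrary examples), one can check that the line integral of $\mathbf{v} = \nabla \varphi$ between two fixed points is the same along different paths and equals $\varphi(B) - \varphi(A)$:

```python
# Illustrative sketch (not from the article): numerically check that a field
# defined as the gradient of a scalar potential is conservative, by comparing
# line integrals of v = grad(phi) along two different paths between the same
# endpoints. Assumes only NumPy; phi is an arbitrary example potential.
import numpy as np

def phi(p):
    x, y, z = p
    return x * y + np.sin(z)             # example scalar potential

def grad_phi(p):
    x, y, z = p
    return np.array([y, x, np.cos(z)])   # analytic gradient of phi

def line_integral(field, path, n=20001):
    """Approximate the line integral of `field` along `path(t)`, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    vals = np.array([field(p) for p in pts])
    dr = np.gradient(pts, t, axis=0)     # dr/dt along the path
    return np.trapz(np.sum(vals * dr, axis=1), t)

a, b = np.zeros(3), np.array([1.0, 2.0, 3.0])
straight = lambda t: a + t * (b - a)     # straight segment from a to b
wiggly = lambda t: a + t * (b - a) + np.array([np.sin(np.pi * t), 0.0, 0.0])

I1 = line_integral(grad_phi, straight)
I2 = line_integral(grad_phi, wiggly)
print(I1, I2, phi(b) - phi(a))           # all three agree to numerical precision
```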
Path independence and conservative vector field
Path independence
A line integral of a vector field $\mathbf{v}$ is said to be path-independent if it depends on only the two endpoints of the integral path, regardless of which path between them is chosen:
$$\int_{P_1} \mathbf{v} \cdot d\mathbf{r} = \int_{P_2} \mathbf{v} \cdot d\mathbf{r}$$
for any pair of integral paths $P_1$ and $P_2$ between a given pair of path endpoints in $U$.
The path independence is also equivalently expressed as
$$\oint_{P_c} \mathbf{v} \cdot d\mathbf{r} = 0$$
for any piecewise smooth closed path $P_c$ in $U$ where the two endpoints are coincident. The two expressions are equivalent since any closed path $P_c$ can be made from two paths: $P_1$ from an endpoint $A$ to another endpoint $B$, and $P_2$ from $B$ to $A$, so
$$\oint_{P_c} \mathbf{v} \cdot d\mathbf{r} = \int_{P_1} \mathbf{v} \cdot d\mathbf{r} + \int_{P_2} \mathbf{v} \cdot d\mathbf{r} = \int_{P_1} \mathbf{v} \cdot d\mathbf{r} - \int_{-P_2} \mathbf{v} \cdot d\mathbf{r} = 0,$$
where $-P_2$ is the reverse of $P_2$, and the last equality holds due to the path independence.
Conservative vector field
A key property of a conservative vector field $\mathbf{v}$ is that its integral along a path depends on only the endpoints of that path, not the particular route taken. In other words, if it is a conservative vector field, then its line integral is path-independent. Suppose that $\mathbf{v} = \nabla \varphi$ for some (continuously differentiable) scalar field $\varphi$ over $U$ as an open subset of $\mathbb{R}^n$ (so $\mathbf{v}$ is a conservative vector field that is continuous) and $P$ is a differentiable path (i.e., it can be parameterized by a differentiable function) in $U$ with an initial point $A$ and a terminal point $B$. Then the gradient theorem (also called fundamental theorem of calculus for line integrals) states that
$$\int_P \mathbf{v} \cdot d\mathbf{r} = \varphi(B) - \varphi(A).$$
This holds as a consequence of the definition of a line integral, the chain rule, and the second fundamental theorem of calculus. $\mathbf{v} \cdot d\mathbf{r} = \nabla \varphi \cdot d\mathbf{r}$ in the line integral is an exact differential for an orthogonal coordinate system (e.g., Cartesian, cylindrical, or spherical coordinates). Since the gradient theorem is applicable for a differentiable path, the path independence of a conservative vector field over piecewise-differentiable curves is also proved by the proof per differentiable curve component.
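The chain-rule step behind the gradient theorem can be checked symbolically. A minimal sketch, assuming SymPy and using an arbitrary example potential and path (neither taken from the article):

```python
# Sketch of the proof mechanics (assumes SymPy; phi and r are example choices,
# not from the article): the integrand v(r(t)) . r'(t) of the line integral is
# exactly d/dt [phi(r(t))] by the chain rule, so integrating it over [0, 1]
# telescopes to phi(B) - phi(A) by the fundamental theorem of calculus.
import sympy as sp

t, x, y = sp.symbols('t x y')
phi = x**2 * y + y**3                                  # example potential
r = sp.Matrix([sp.cos(sp.pi * t / 2), t**2])           # example path, t in [0, 1]

v = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])      # v = grad(phi)
integrand = (v.subs({x: r[0], y: r[1]}).T * sp.diff(r, t))[0]   # v(r(t)) . r'(t)
chain = sp.diff(phi.subs({x: r[0], y: r[1]}), t)       # d/dt phi(r(t))

assert sp.simplify(integrand - chain) == 0             # the chain-rule step
endpoints = phi.subs({x: 0, y: 1}) - phi.subs({x: 1, y: 0})  # phi(r(1)) - phi(r(0))
print(sp.integrate(chain, (t, 0, 1)) == endpoints)     # True: gradient theorem
```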
So far it has been proven that a conservative vector field $\mathbf{v}$ is line integral path-independent. Conversely, if a continuous vector field $\mathbf{v}$ is (line integral) path-independent, then it is a conservative vector field, so the following biconditional statement holds: a continuous vector field $\mathbf{v}\colon U \to \mathbb{R}^n$ is conservative if and only if its line integral is path-independent.
The proof of this converse statement is the following.
Suppose $\mathbf{v}$ is a continuous vector field whose line integral is path-independent. Then, let's define a function $\varphi$ as
$$\varphi(x, y) = \int_{(a, b)}^{(x, y)} \mathbf{v} \cdot d\mathbf{r}$$
over an arbitrary path between a chosen starting point $(a, b)$ and an arbitrary point $(x, y)$. Since it is path-independent, it depends on only $(x, y)$ and $(a, b)$, regardless of which path between these points is chosen.
Let's choose the path shown in the left of the right figure, where a 2-dimensional Cartesian coordinate system is used. The second segment of this path is parallel to the $x$ axis so there is no change along the $y$ axis. The line integral along this path is
$$\int_{(a, b)}^{(x, y)} \mathbf{v} \cdot d\mathbf{r} = \int_{(a, b)}^{(x_1, y)} \mathbf{v} \cdot d\mathbf{r} + \int_{(x_1, y)}^{(x, y)} \mathbf{v} \cdot d\mathbf{r}.$$
By the path independence, its partial derivative with respect to $x$ (for $\varphi$ to have partial derivatives, $\mathbf{v}$ needs to be continuous) is
$$\frac{\partial \varphi}{\partial x} = \frac{\partial}{\partial x} \int_{(x_1, y)}^{(x, y)} \mathbf{v} \cdot d\mathbf{r},$$
since the first integral does not depend on $x$. Let's express $\mathbf{v}$ as $\mathbf{v} = v_x \mathbf{i} + v_y \mathbf{j}$, where $\mathbf{i}$ and $\mathbf{j}$ are unit vectors along the $x$ and $y$ axes respectively. Then, since $d\mathbf{r} = dx\, \mathbf{i}$ along the second segment,
$$\frac{\partial \varphi}{\partial x} = \frac{\partial}{\partial x} \int_{x_1}^{x} v_x(t, y)\, dt = v_x(x, y),$$
where the last equality is from the second fundamental theorem of calculus.
A similar approach for the line integral path shown in the right of the right figure results in $\frac{\partial \varphi}{\partial y} = v_y(x, y)$, so
$$\mathbf{v} = v_x \mathbf{i} + v_y \mathbf{j} = \frac{\partial \varphi}{\partial x} \mathbf{i} + \frac{\partial \varphi}{\partial y} \mathbf{j} = \nabla \varphi$$
is proved for the 2-dimensional Cartesian coordinate system. This proof method can be straightforwardly expanded to a higher-dimensional orthogonal coordinate system (e.g., a 3-dimensional spherical coordinate system), so the converse statement is proved. Another proof is found here as the converse of the gradient theorem.
Irrotational vector fields
Let $U \subseteq \mathbb{R}^3$ (3-dimensional space), and let $\mathbf{v}\colon U \to \mathbb{R}^3$ be a (continuously differentiable) vector field, with $U$ an open subset of $\mathbb{R}^3$. Then $\mathbf{v}$ is called irrotational if its curl is $\mathbf{0}$ everywhere in $U$, i.e., if
$$\nabla \times \mathbf{v} \equiv \mathbf{0}.$$
For this reason, such vector fields are sometimes referred to as curl-free vector fields or curl-less vector fields. They are also referred to as longitudinal vector fields.
It is an identity of vector calculus that for any (continuously differentiable up to the 2nd derivative) scalar field $\varphi$ on $U$, we have
$$\nabla \times (\nabla \varphi) \equiv \mathbf{0}.$$
Therefore, every conservative vector field in $U$ is also an irrotational vector field in $U$. This result can be easily proved by expressing $\nabla \times (\nabla \varphi)$ in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials).
Provided that $U$ is a simply connected open space (roughly speaking, a single-piece open space without a hole within it), the converse of this is also true: every irrotational vector field in a simply connected open space $U$ is a conservative vector field in $U$.
The above statement is not true in general if $U$ is not simply connected. Let $U$ be $\mathbb{R}^3$ with the $z$-axis removed (so not a simply connected space), i.e., $U = \{(x, y, z) \in \mathbb{R}^3 \mid x^2 + y^2 \neq 0\}$. Now, define a vector field $\mathbf{v}$ on $U$ by
$$\mathbf{v}(x, y, z) = \left(-\frac{y}{x^2 + y^2}, \frac{x}{x^2 + y^2}, 0\right).$$
Then $\mathbf{v}$ has zero curl everywhere in $U$ ($\nabla \times \mathbf{v} \equiv \mathbf{0}$ everywhere in $U$), i.e., $\mathbf{v}$ is irrotational. However, the circulation of $\mathbf{v}$ around the unit circle in the $xy$-plane is $2\pi$; in polar coordinates, $\mathbf{v} = \mathbf{e}_\phi / r$, so the integral over the unit circle is
$$\oint_C \mathbf{v} \cdot \mathbf{e}_\phi \, ds = 2\pi.$$
Therefore, $\mathbf{v}$ does not have the path-independence property discussed above, so it is not conservative, even though $\nabla \times \mathbf{v} \equiv \mathbf{0}$, since $U$ where $\mathbf{v}$ is defined is not a simply connected open space.
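This counterexample is easy to verify numerically. A short sketch, assuming NumPy (the discretization details are arbitrary choices, not from the article):

```python
# Sketch (assumption: NumPy only): the classic counterexample above, checked
# numerically. The field v = (-y, x) / (x^2 + y^2) is curl-free away from the
# z-axis, yet its circulation around the unit circle is 2*pi, so it is not
# conservative on the punctured space.
import numpy as np

def v(x, y):
    r2 = x**2 + y**2
    return np.array([-y / r2, x / r2])

# Circulation around the unit circle, parameterized by the angle phi.
phi = np.linspace(0.0, 2.0 * np.pi, 100001)
x, y = np.cos(phi), np.sin(phi)
tangent = np.stack([-np.sin(phi), np.cos(phi)], axis=0)   # dr/dphi
field = v(x, y)
circulation = np.trapz(np.sum(field * tangent, axis=0), phi)
print(circulation, 2 * np.pi)   # both ~6.2832: nonzero, hence not conservative
```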
To restate, in a simply connected open region, an irrotational vector field has the path-independence property (so it is conservative). This can be proved directly by using Stokes' theorem,
$$\oint_{\partial A} \mathbf{v} \cdot d\mathbf{r} = \iint_A (\nabla \times \mathbf{v}) \cdot d\mathbf{a} = 0,$$
for any smooth oriented surface $A$ whose boundary is a simple closed path $\partial A$. So, it is concluded that in a simply connected open region, any vector field that has the path-independence property (so it is a conservative vector field) must also be irrotational, and vice versa.
Abstraction
More abstractly, in the presence of a Riemannian metric, vector fields correspond to differential 1-forms. The conservative vector fields correspond to the exact 1-forms, that is, to the forms which are the exterior derivative $d\varphi$ of a function (scalar field) $\varphi$ on $U$. The irrotational vector fields correspond to the closed 1-forms, that is, to the 1-forms $\omega$ such that $d\omega = 0$. As any exact form is closed, any conservative vector field is irrotational. Conversely, all closed 1-forms are exact if $U$ is simply connected.
Vorticity
The vorticity $\boldsymbol{\omega}$ of a vector field $\mathbf{v}$ can be defined by:
$$\boldsymbol{\omega} = \nabla \times \mathbf{v}.$$
The vorticity of an irrotational field is zero everywhere. Kelvin's circulation theorem states that a fluid that is irrotational in an inviscid flow will remain irrotational. This result can be derived from the vorticity transport equation, obtained by taking the curl of the Navier–Stokes equations.
For a two-dimensional field, the vorticity acts as a measure of the local rotation of fluid elements. The vorticity does not imply anything about the global behavior of a fluid. It is possible for a fluid that travels in a straight line to have vorticity, and it is possible for a fluid that moves in a circle to be irrotational.
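Both of these claims can be checked symbolically. A minimal sketch, assuming SymPy and using the standard shear-flow and free-vortex examples (illustrations, not taken from this article):

```python
# Sketch (assumes SymPy; the two velocity fields are standard textbook
# examples): a straight-line shear flow has nonzero vorticity, while the
# circular "free vortex" flow is irrotational away from its axis.
import sympy as sp

x, y = sp.symbols('x y')

def vorticity_2d(u, v):
    """z-component of the curl for a 2D field (u(x, y), v(x, y))."""
    return sp.simplify(sp.diff(v, x) - sp.diff(u, y))

# Shear flow: every parcel moves in a straight line, yet vorticity is -1.
print(vorticity_2d(y, 0))               # -1

# Free vortex: parcels move in circles, yet the flow is irrotational.
r2 = x**2 + y**2
print(vorticity_2d(-y / r2, x / r2))    # 0 (away from r = 0)
```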
Conservative forces
If the vector field associated to a force is conservative, then the force is said to be a conservative force.
The most prominent examples of conservative forces are gravitational force (associated with a gravitational field) and electric force (associated with an electrostatic field). According to Newton's law of gravitation, a gravitational force $\mathbf{F}_G$ acting on a mass $m$ due to a mass $M$ located at a distance $r$ from $m$ obeys the equation
$$\mathbf{F}_G = -\frac{G m M}{r^2} \hat{\mathbf{r}},$$
where $G$ is the gravitational constant and $\hat{\mathbf{r}}$ is a unit vector pointing from $M$ toward $m$. The force of gravity is conservative because $\mathbf{F}_G = -\nabla \Phi_G$, where
$$\Phi_G = -\frac{G m M}{r}$$
is the gravitational potential energy. In other words, the gravitation field $\mathbf{F}_G / m$ associated with the gravitational force is the gradient of the gravitation potential $\Phi_G / m$ associated with the gravitational potential energy $\Phi_G$. It can be shown that any vector field of the form $\mathbf{F} = F(r)\, \hat{\mathbf{r}}$ is conservative, provided that $F$ is integrable.
For conservative forces, path independence can be interpreted to mean that the work done in going from a point $A$ to a point $B$ is independent of the moving path chosen (dependent on only the points $A$ and $B$), and that the work $W$ done in going around a simple closed loop $C$ is $0$:
$$W = \oint_C \mathbf{F} \cdot d\mathbf{r} = 0.$$
The total energy of a particle moving under the influence of conservative forces is conserved, in the sense that a loss of potential energy is converted to the equal quantity of kinetic energy, or vice versa.
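As a numerical illustration of this path independence (Python with NumPy assumed; the masses, positions, and paths are arbitrary example values, not from the article), the work done by an inverse-square gravitational force can be computed along two different paths and compared to the change in potential energy:

```python
# Illustrative sketch (NumPy only): the work done by Newtonian gravity between
# two points is the same along different paths and equals the drop in
# potential energy U = -G m M / r. All values are arbitrary examples.
import numpy as np

G, M, m = 6.674e-11, 5.972e24, 1.0       # SI units; Earth-like central mass

def F(p):
    r = np.linalg.norm(p)
    return -G * m * M * p / r**3         # inverse-square force toward origin

def work(path, n=20001):
    t = np.linspace(0.0, 1.0, n)
    pts = np.array([path(ti) for ti in t])
    Fs = np.array([F(p) for p in pts])
    dr = np.gradient(pts, t, axis=0)
    return np.trapz(np.sum(Fs * dr, axis=1), t)

A = np.array([7.0e6, 0.0, 0.0])          # two positions a few thousand km apart
B = np.array([0.0, 8.0e6, 0.0])
straight = lambda t: A + t * (B - A)
detour = lambda t: A + t * (B - A) + np.array([0.0, 0.0, 1.0e6]) * np.sin(np.pi * t)

U = lambda p: -G * m * M / np.linalg.norm(p)
print(work(straight), work(detour), U(A) - U(B))   # all three agree
```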
| Mathematics | Multivariable and vector calculus | null |
461477 | https://en.wikipedia.org/wiki/Incompressible%20flow | Incompressible flow | In fluid mechanics, or more generally continuum mechanics, incompressible flow (isochoric flow) refers to a flow in which the material density of each fluid parcel — an infinitesimal volume that moves with the flow velocity — is time-invariant. An equivalent statement that implies incompressible flow is that the divergence of the flow velocity is zero (see the derivation below, which illustrates why these conditions are equivalent).
Incompressible flow does not imply that the fluid itself is incompressible. It is shown in the derivation below that under the right conditions even the flow of compressible fluids can, to a good approximation, be modelled as incompressible flow.
Derivation
The fundamental requirement for incompressible flow is that the density, $\rho$, is constant within a small element volume, $dV$, which moves at the flow velocity $\mathbf{u}$. Mathematically, this constraint implies that the material derivative (discussed below) of the density must vanish to ensure incompressible flow. Before introducing this constraint, we must apply the conservation of mass to generate the necessary relations. The mass is calculated by a volume integral of the density, $\rho$:
$$M = \iiint_V \rho \, dV.$$
The conservation of mass requires that the time derivative of the mass inside a control volume be equal to the mass flux, $\mathbf{J}$, across its boundaries. Mathematically, we can represent this constraint in terms of a surface integral:
$$\frac{dM}{dt} = -\oint_S \mathbf{J} \cdot d\mathbf{S}.$$
The negative sign in the above expression ensures that outward flow results in a decrease in the mass with respect to time, using the convention that the surface area vector points outward. Now, using the divergence theorem we can derive the relationship between the flux and the partial time derivative of the density:
$$\iiint_V \frac{\partial \rho}{\partial t} \, dV = -\iiint_V \nabla \cdot \mathbf{J} \, dV,$$
therefore:
$$\frac{\partial \rho}{\partial t} = -\nabla \cdot \mathbf{J}.$$
The partial derivative of the density with respect to time need not vanish to ensure incompressible flow. When we speak of the partial derivative of the density with respect to time, we refer to this rate of change within a control volume of fixed position. By letting the partial time derivative of the density be non-zero, we are not restricting ourselves to incompressible fluids, because the density can change as observed from a fixed position as fluid flows through the control volume. This approach maintains generality, and not requiring that the partial time derivative of the density vanish illustrates that compressible fluids can still undergo incompressible flow. What interests us is the change in density of a control volume that moves along with the flow velocity, $\mathbf{u}$. The flux is related to the flow velocity through the following function:
$$\mathbf{J} = \rho \mathbf{u}.$$
So that the conservation of mass implies that:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0.$$
The previous relation (where we have used the appropriate product rule) is known as the continuity equation. Now, we need the following relation about the total derivative of the density (where we apply the chain rule):
$$\frac{d\rho}{dt} = \frac{\partial \rho}{\partial t} + \frac{\partial \rho}{\partial x}\frac{dx}{dt} + \frac{\partial \rho}{\partial y}\frac{dy}{dt} + \frac{\partial \rho}{\partial z}\frac{dz}{dt}.$$
So if we choose a control volume that is moving at the same rate as the fluid (i.e. $(dx/dt, dy/dt, dz/dt) = \mathbf{u}$), then this expression simplifies to the material derivative:
$$\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho.$$
And so using the continuity equation derived above, we see that:
$$\frac{D\rho}{Dt} = -\rho \left(\nabla \cdot \mathbf{u}\right).$$
A change in the density over time would imply that the fluid had either compressed or expanded (or that the mass contained in our constant volume, $dV$, had changed), which we have prohibited. We must then require that the material derivative of the density vanishes, and equivalently (for non-zero density) so must the divergence of the flow velocity:
$$\frac{D\rho}{Dt} = 0 \quad \Longleftrightarrow \quad \nabla \cdot \mathbf{u} = 0.$$
And so beginning with the conservation of mass and the constraint that the density within a moving volume of fluid remains constant, it has been shown that an equivalent condition required for incompressible flow is that the divergence of the flow velocity vanishes.
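A small symbolic sketch of this incompressibility condition (assuming SymPy; the two velocity fields are standard textbook examples, not from this derivation):

```python
# Sketch (assumes SymPy): checking the incompressibility condition
# div(u) = 0 symbolically for two example velocity fields.
import sympy as sp

x, y = sp.symbols('x y')

def div_2d(u, v):
    return sp.simplify(sp.diff(u, x) + sp.diff(v, y))

# Stagnation-point flow u = (x, -y): divergence is zero -> incompressible.
print(div_2d(x, -y))    # 0

# Uniformly expanding flow u = (x, y): divergence is 2 -> not incompressible.
print(div_2d(x, y))     # 2
```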
Relation to compressibility
In some fields, a measure of the incompressibility of a flow is the change in density as a result of the pressure variations. This is best expressed in terms of the compressibility
$$\beta = \frac{1}{\rho} \frac{d\rho}{dp}.$$
If the compressibility is acceptably small, the flow is considered incompressible.
Relation to solenoidal field
An incompressible flow is described by a solenoidal flow velocity field. But a solenoidal field, besides having a zero divergence, also has the additional connotation of having non-zero curl (i.e., rotational component).
Otherwise, if an incompressible flow also has a curl of zero, so that it is also irrotational, then the flow velocity field is actually Laplacian.
Difference from material
As defined earlier, an incompressible (isochoric) flow is one in which
$$\nabla \cdot \mathbf{u} = 0.$$
This is equivalent to saying that
$$\frac{D\rho}{Dt} = \frac{\partial \rho}{\partial t} + \mathbf{u} \cdot \nabla \rho = 0,$$
i.e. the material derivative of the density is zero. Thus if one follows a material element, its mass density remains constant. Note that the material derivative consists of two terms. The first term, $\partial \rho / \partial t$, describes how the density of the material element changes with time. This term is also known as the unsteady term. The second term, $\mathbf{u} \cdot \nabla \rho$, describes the changes in the density as the material element moves from one point to another. This is the advection term (convection term for scalar field). For a flow to be incompressible, the sum of these terms should vanish.
On the other hand, a homogeneous, incompressible material is one that has constant density throughout. For such a material, $\rho = \text{constant}$. This implies that
$$\nabla \rho = 0$$
and
$$\frac{\partial \rho}{\partial t} = 0$$
independently.
From the continuity equation it follows that
$$\frac{D\rho}{Dt} = 0 \quad \Rightarrow \quad \nabla \cdot \mathbf{u} = 0.$$
Thus homogeneous materials always undergo flow that is incompressible, but the converse is not true. That is, compressible materials might not experience compression in the flow.
Related flow constraints
In fluid dynamics, a flow is considered incompressible if the divergence of the flow velocity is zero. However, related formulations can sometimes be used, depending on the flow system being modelled. Some versions are described below:
Incompressible flow: $\nabla \cdot \mathbf{u} = 0$. This can assume either constant density (strict incompressible) or varying density flow. The varying density set accepts solutions involving small perturbations in density, pressure and/or temperature fields, and can allow for pressure stratification in the domain.
Anelastic flow: $\nabla \cdot (\rho_0 \mathbf{u}) = 0$. Principally used in the field of atmospheric sciences, the anelastic constraint extends incompressible flow validity to stratified density and/or temperature as well as pressure. This allows the thermodynamic variables to relax to an 'atmospheric' base state seen in the lower atmosphere when used in the field of meteorology, for example. This condition can also be used for various astrophysical systems.
Low Mach-number flow, or pseudo-incompressibility: $\nabla \cdot (\alpha \mathbf{u}) = \beta$. The low Mach-number constraint can be derived from the compressible Euler equations using scale analysis of non-dimensional quantities. The restraint, like the previous in this section, allows for the removal of acoustic waves, but also allows for large perturbations in density and/or temperature. The assumption is that the flow remains within a Mach number limit (normally less than 0.3) for any solution using such a constraint to be valid. Again, in accordance with all incompressible flows, the pressure deviation must be small in comparison to the pressure base state.
These methods make differing assumptions about the flow, but all take into account the general form of the constraint $\nabla \cdot (\alpha \mathbf{u}) = \beta$ for general flow-dependent functions $\alpha$ and $\beta$.
Numerical approximations
The stringent nature of incompressible flow equations means that specific mathematical techniques have been devised to solve them. Some of these methods include:
The projection method (both approximate and exact)
Artificial compressibility technique (approximate)
Compressibility pre-conditioning
| Physical sciences | Fluid mechanics | Physics |
461493 | https://en.wikipedia.org/wiki/Howler%20monkey | Howler monkey | Howler monkeys (genus Alouatta, monotypic in subfamily Alouattinae) are the most widespread primate genus in the Neotropics and are among the largest of the platyrrhines along with the muriquis (Brachyteles), the spider monkeys (Ateles) and woolly monkeys (Lagothrix). The monkeys are native to South and Central American forests. They are famous for their loud howls, which can be heard up to three miles away through dense rain forest. Fifteen species are recognized. Previously classified in the family Cebidae, they are now placed in the family Atelidae. They are primarily folivores but also significant frugivores, acting as seed dispersal agents through their digestive system and their locomotion. Threats include human predation, habitat destruction, illegal wildlife trade, and capture for pets or zoo animals.
Classification
Anatomy and physiology
Howler monkeys have short snouts and wide-set, round nostrils. Their noses are very keen, and they can smell out food (primarily fruit and nuts) up to 2 km away. Their noses are usually roundish snout-type, and the nostrils have many sensory hairs growing from the interior. They are large-bodied monkeys, and their tails can be as long as their bodies; in some cases the tail has been found to be almost five times the body length. This is a prime characteristic. Like many New World monkeys, they have prehensile tails, which they use while picking fruit and nuts from trees. Unlike other New World monkeys, both male and female howler monkeys have trichromatic color vision. This has evolved independently from other New World monkeys due to gene duplication. They have lifespans of 15 to 20 years. Howler species are dimorphic and can also be dichromatic (e.g. Alouatta caraya). Males are typically 1.5 to 2.0 kg heavier than females.
Males experience an evolutionary trade-off between investments in precopulatory traits (larger hyoids but smaller testes) and post-copulatory traits (larger testes and smaller hyoids). The hyoid of Alouatta is pneumatized, one of the few cases of postcranial pneumaticity outside the Saurischia. The volume of the hyoid of male howler monkeys is negatively correlated with the dimensions of their testes and with the number of males per group. Larger hyoids decrease the spacing between formants, giving the impression of larger body size. They have a flat cranial shape due to a folivorous diet and an advanced vocal system. Their brain growth is posterior rather than superior or inferior as in other platyrrhines.
Locomotion
Howler monkeys generally move quadrupedally on the tops of branches, usually grasping a branch with at least two hands or one hand and the tail at all times. Their strong prehensile tails are able to support their entire body weight. Fully grown adult howler monkeys do not often rely on their tails for full-body support, but juveniles do so more frequently. A significant amount of their travel is done through the canopy, with sitting and resting being their most frequent postures.
Behaviour
Social systems
Most howler species live in groups of six to 15 animals, with one to three adult males and multiple females. Mantled howler monkeys are an exception, commonly living in groups of 15 to 20 individuals with more than three adult males. The number of males in a given group is inversely correlated with the size of their hyoids and is positively correlated with testes size. This results in two distinct groups, wherein one male with a larger hyoid and smaller testes copulates exclusively with a group of females, suggesting precopulatory vocal competition. The other group has more males, which have smaller hyoids and larger testes. The larger the number of males, the smaller the hyoid, and the larger the testes. Female howler monkeys breed with multiple males within their group, with males in neighboring groups, and with solitary males. Central males form consortships with cycling females. Unlike most New World monkeys, in which one sex remains in natal groups, juveniles of both sexes emigrate from their natal groups, so howler monkeys may spend the majority of their adult lives in association with unrelated monkeys.
Physical fighting among group members is infrequent and generally of short duration, but serious injuries can result. Fights within each sex are rare, and physical aggression between the sexes is rarer still. Group size varies by species and by location, with an approximate ratio of one male to four females.
Communication
As their name suggests, vocal communication forms an important part of their social behavior. They each have an enlarged basihyal or hyoid bone, which helps them make their loud vocalizations. Group males generally call at dawn and dusk, as well as at interspersed times throughout the day. Their main vocals consist of loud, deep, guttural growls or "howls". Howler monkeys are widely considered to be the loudest land animals. According to the Guinness Book of World Records, their vocalizations can be heard clearly for three miles (4.8 km). The function of howling is thought to relate to intergroup spacing and territory protection, as well as possibly to mate-guarding. Howlers usually call when they are in areas with major feeding sites, which may serve to advertise these sites and a willingness to defend locally available fruit trees. Black howler monkeys incorporate information on resource availability along with neighbors' current locations; the abundance of flowers has been found to be an important factor influencing this behavior. Neighbors are more likely to move towards these calls when resources are scarce, and the reverse is true.
Diet and feeding
These large and slow-moving monkeys are the only folivores of the New World monkeys. Howlers eat mainly top canopy leaves, together with fruit, buds, flowers, and nuts. They need to be careful not to eat too many leaves of certain species in one sitting, as some contain toxins that can poison them. Howler monkeys are also known to occasionally raid birds' nests and chicken coops and consume the eggs. In smaller groups (up to twelve individuals) and in areas of lower rainfall, they are more frugivorous. In larger groups and with increased rainfall, frugivory decreases as a result of competition and fast food depletion. As they digest fruit, more than 90% of the fruits' seeds are excreted without damage, which results in seed dispersal and distribution in tropical forests.
Sleeping
Howlers use the upper-middle part of their sleeping tree and use large branches on the 70% of nights that potentially allow for grouped sleeping or resistance to weather conditions and risk of branch breaking. Their sleeping sites are usually close to morning feeding sites.
Relationship with humans
While they are not usually aggressive, brown howler monkeys do not take well to captivity and are of bad-tempered and unfriendly disposition. However, the black howler monkey (Alouatta caraya) is a relatively common pet in contemporary Argentina due to its gentle nature (in comparison to the capuchin monkey's aggressive tendencies), in spite of its lesser intelligence, as well as the liabilities of the size of its droppings and the male monkey’s loud vocalizations.
John Lloyd Stephens described the howler monkeys at the Maya ruins of Copán as "grave and solemn, almost emotionally wounded, as if officiating as the guardians of consecrated ground". To the Mayas of the Classic period, they were the divine patrons of the artisans, especially scribes and sculptors. They were seen as gods in some tribes, and the long, sleek tail was worshipped for its beauty. Copán, in particular, is famous for its representations of howler monkey gods. Two howler monkey brothers play a role in the myth of the Maya Hero Twins included in the Popol Vuh.
| Biology and health sciences | New World monkeys | Animals |
461654 | https://en.wikipedia.org/wiki/Scalar%20potential | Scalar potential | In mathematical physics, scalar potential describes the situation where the difference in the potential energies of an object in two different positions depends only on the positions, not upon the path taken by the object in traveling from one position to the other. It is a scalar field in three-space: a directionless value (scalar) that depends only on its location. A familiar example is potential energy due to gravity.
A scalar potential is a fundamental concept in vector analysis and physics (the adjective scalar is frequently omitted if there is no danger of confusion with vector potential). The scalar potential is an example of a scalar field. Given a vector field $\mathbf{F}$, the scalar potential $P$ is defined such that:
$$\mathbf{F} = -\nabla P = -\left(\frac{\partial P}{\partial x}, \frac{\partial P}{\partial y}, \frac{\partial P}{\partial z}\right),$$
where $\nabla P$ is the gradient of $P$, and the second part of the equation is minus the gradient for a function of the Cartesian coordinates $x, y, z$. In some cases, mathematicians may use a positive sign in front of the gradient to define the potential. Because of this definition of $P$ in terms of the gradient, the direction of $\mathbf{F}$ at any point is the direction of the steepest decrease of $P$ at that point, and its magnitude is the rate of that decrease per unit length.
In order for $\mathbf{F}$ to be described in terms of a scalar potential only, any of the following equivalent statements have to be true:
$-\int_a^b \mathbf{F} \cdot d\mathbf{l} = P(\mathbf{b}) - P(\mathbf{a})$, where the integration is over a Jordan arc passing from location $\mathbf{a}$ to location $\mathbf{b}$ and $P(\mathbf{b})$ is $P$ evaluated at location $\mathbf{b}$.
$\oint \mathbf{F} \cdot d\mathbf{l} = 0$, where the integral is over any simple closed path, otherwise known as a Jordan curve.
$\nabla \times \mathbf{F} = \mathbf{0}$.
The first of these conditions represents the fundamental theorem of the gradient and is true for any vector field that is a gradient of a differentiable single-valued scalar field $P$. The second condition is a requirement on $\mathbf{F}$ so that it can be expressed as the gradient of a scalar function. The third condition re-expresses the second condition in terms of the curl of $\mathbf{F}$ using the fundamental theorem of the curl. A vector field $\mathbf{F}$ that satisfies these conditions is said to be irrotational (conservative).
Scalar potentials play a prominent role in many areas of physics and engineering. The gravity potential is the scalar potential associated with the gravity per unit mass, i.e., the acceleration due to the field, as a function of position. The gravity potential is the gravitational potential energy per unit mass. In electrostatics the electric potential is the scalar potential associated with the electric field, i.e., with the electrostatic force per unit charge. The electric potential is in this case the electrostatic potential energy per unit charge. In fluid dynamics, irrotational lamellar fields have a scalar potential only in the special case when it is a Laplacian field. Certain aspects of the nuclear force can be described by a Yukawa potential. Potentials play a prominent role in the Lagrangian and Hamiltonian formulations of classical mechanics. Further, the scalar potential is the fundamental quantity in quantum mechanics.
Not every vector field has a scalar potential. Those that do are called conservative, corresponding to the notion of conservative force in physics. Examples of non-conservative forces include frictional forces, magnetic forces, and in fluid mechanics a solenoidal velocity field. By the Helmholtz decomposition theorem, however, all vector fields can be described in terms of a scalar potential and a corresponding vector potential. In electrodynamics, the electromagnetic scalar and vector potentials are known together as the electromagnetic four-potential.
Integrability conditions
If $\mathbf{F}$ is a conservative vector field (also called irrotational, curl-free, or potential), and its components have continuous partial derivatives, the potential of $\mathbf{F}$ with respect to a reference point $\mathbf{r}_0$ is defined in terms of the line integral:
$$V(\mathbf{r}) = -\int_C \mathbf{F}(\mathbf{r}') \cdot d\mathbf{r}' = -\int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\, dt,$$
where $C$ is a parametrized path from $\mathbf{r}_0$ to $\mathbf{r}$,
$$\mathbf{r}(t), \quad a \leq t \leq b, \quad \mathbf{r}(a) = \mathbf{r}_0, \quad \mathbf{r}(b) = \mathbf{r}.$$
The fact that the line integral depends on the path only through its terminal points $\mathbf{r}_0$ and $\mathbf{r}$ is, in essence, the path independence property of a conservative vector field. The fundamental theorem of line integrals implies that if $V$ is defined in this way, then $\mathbf{F} = -\nabla V$, so that $V$ is a scalar potential of the conservative vector field $\mathbf{F}$. Scalar potential is not determined by the vector field alone: indeed, the gradient of a function is unaffected if a constant is added to it. If $V$ is defined in terms of the line integral, the ambiguity of $V$ reflects the freedom in the choice of the reference point $\mathbf{r}_0$.
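As a numerical sketch of this construction (Python with NumPy assumed; the field, reference point, and test point are arbitrary examples, not from the article), one can recover a potential by a line integral and check that its negative gradient reproduces the field:

```python
# Sketch (NumPy only): recovering a scalar potential from a conservative
# field by a line integral along a straight segment from the reference point
# r0, then checking F = -grad(V) by central finite differences.
import numpy as np

def F(p):
    x, y, z = p
    return -np.array([y, x, 2.0 * z])    # F = -grad(x*y + z^2), an example field

r0 = np.zeros(3)                          # reference point: V(r0) = 0

def V(r, n=20001):
    t = np.linspace(0.0, 1.0, n)[:, None]
    pts = r0 + t * (r - r0)               # straight path from r0 to r
    Fs = np.array([F(p) for p in pts])
    return -np.trapz(Fs @ (r - r0), t[:, 0])

p = np.array([1.0, 2.0, 0.5])
h = 1e-5
grad_V = np.array([(V(p + h * e) - V(p - h * e)) / (2 * h) for e in np.eye(3)])
print(F(p), -grad_V)                      # agree: F = -grad(V)
```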
Altitude as gravitational potential energy
An example is the (nearly) uniform gravitational field near the Earth's surface. It has a potential energy
$$U = mgh,$$
where $U$ is the gravitational potential energy and $h$ is the height above the surface. This means that gravitational potential energy on a contour map is proportional to altitude. On a contour map, the two-dimensional negative gradient of the altitude is a two-dimensional vector field, whose vectors are always perpendicular to the contours and also perpendicular to the direction of gravity. But on the hilly region represented by the contour map, the three-dimensional negative gradient of $U$ always points straight downwards in the direction of gravity; $\mathbf{F} = -\nabla U$. However, a ball rolling down a hill cannot move directly downwards due to the normal force of the hill's surface, which cancels out the component of gravity perpendicular to the hill's surface. The component of gravity that remains to move the ball is parallel to the surface:
$$F_S = -mg \sin\theta,$$
where $\theta$ is the angle of inclination, and the component of $F_S$ perpendicular to gravity is
$$F_P = -mg \sin\theta \cos\theta = -\frac{1}{2} mg \sin 2\theta.$$
This force $F_P$, parallel to the ground, is greatest when $\theta$ is 45 degrees.
Let $\Delta h$ be the uniform interval of altitude between contours on the contour map, and let $\Delta x$ be the distance between two contours. Then
$$\theta = \tan^{-1} \frac{\Delta h}{\Delta x},$$
so that
$$F_P = -mg \frac{\Delta x \, \Delta h}{\Delta x^2 + \Delta h^2}.$$
However, on a contour map, the gradient is inversely proportional to $\Delta x$, which is not similar to force $F_P$: altitude on a contour map is not exactly a two-dimensional potential field. The magnitudes of forces are different, but the directions of the forces are the same on a contour map as well as on the hilly region of the Earth's surface represented by the contour map.
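A quick symbolic check of the 45-degree claim above (SymPy assumed; a small illustration, not from the article):

```python
# Sketch (assumes SymPy): the surface-parallel component
# F_P = -m g sin(theta) cos(theta) = -(1/2) m g sin(2 theta)
# has its extremum where the derivative vanishes, i.e. at theta = pi/4.
import sympy as sp

theta, m, g = sp.symbols('theta m g', positive=True)
F_P = -m * g * sp.sin(theta) * sp.cos(theta)

crit = sp.solve(sp.diff(F_P, theta), theta)
print(crit)   # [pi/4, 3*pi/4]; within 0..90 degrees the extremum is pi/4
```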
Pressure as buoyant potential
In fluid mechanics, a fluid in equilibrium, but in the presence of a uniform gravitational field, is permeated by a uniform buoyant force that cancels out the gravitational force: that is how the fluid maintains its equilibrium. This buoyant force is the negative gradient of pressure:
$$\mathbf{f}_B = -\nabla p.$$
Since buoyant force points upwards, in the direction opposite to gravity, then pressure in the fluid increases downwards. Pressure in a static body of water increases proportionally to the depth below the surface of the water. The surfaces of constant pressure are planes parallel to the surface, which can be characterized as the plane of zero pressure.
If the liquid has a vertical vortex (whose axis of rotation is perpendicular to the surface), then the vortex causes a depression in the pressure field. The surface of the liquid inside the vortex is pulled downwards, as are any surfaces of equal pressure, which still remain parallel to the liquid's surface. The effect is strongest inside the vortex and decreases rapidly with the distance from the vortex axis.
The buoyant force due to a fluid on a solid object immersed and surrounded by that fluid can be obtained by integrating the negative pressure gradient over the volume of the object (equivalently, by the divergence theorem, by integrating the pressure over its surface):
$$\mathbf{F}_B = -\iiint_V \nabla p \, dV = -\oint_S p \, d\mathbf{S}.$$
Scalar potential in Euclidean space
In 3-dimensional Euclidean space $\mathbb{R}^3$, the scalar potential of an irrotational vector field $\mathbf{F}$ is given by
$$\Phi(\mathbf{r}) = \frac{1}{4\pi} \iiint_{\mathbb{R}^3} \frac{\nabla' \cdot \mathbf{F}(\mathbf{r}')}{|\mathbf{r} - \mathbf{r}'|} \, dV',$$
where $dV'$ is an infinitesimal volume element with respect to $\mathbf{r}'$. Then
$$\mathbf{F} = -\nabla \Phi.$$
This holds provided $\mathbf{F}$ is continuous and vanishes asymptotically to zero towards infinity, decaying faster than $1/r$, and if the divergence of $\mathbf{F}$ likewise vanishes towards infinity, decaying faster than $1/r^2$.
Written another way, let
$$\Gamma(\mathbf{r}) = \frac{1}{4\pi |\mathbf{r}|}$$
be the Newtonian potential. This is the fundamental solution of the Laplace equation, meaning that the Laplacian of $\Gamma$ is equal to the negative of the Dirac delta function:
$$\nabla^2 \Gamma = -\delta.$$
Then the scalar potential is the divergence of the convolution of $\mathbf{F}$ with $\Gamma$:
$$\Phi = \nabla \cdot (\mathbf{F} * \Gamma).$$
Indeed, convolution of an irrotational vector field with a rotationally invariant potential is also irrotational. For an irrotational vector field $\mathbf{G}$, it can be shown that
$$\nabla^2 \mathbf{G} = \nabla(\nabla \cdot \mathbf{G}).$$
Hence
$$\nabla \Phi = \nabla(\nabla \cdot (\mathbf{F} * \Gamma)) = \nabla^2 (\mathbf{F} * \Gamma) = \mathbf{F} * \nabla^2 \Gamma = -\mathbf{F} * \delta = -\mathbf{F},$$
as required.
More generally, the formula
$$\Phi = \nabla \cdot (\mathbf{F} * \Gamma)$$
holds in $n$-dimensional Euclidean space ($n > 2$) with the Newtonian potential given then by
$$\Gamma(\mathbf{r}) = \frac{1}{n(n-2)\, \omega_n\, |\mathbf{r}|^{n-2}},$$
where $\omega_n$ is the volume of the unit $n$-ball. The proof is identical. Alternatively, integration by parts (or, more rigorously, the properties of convolution) gives
$$\Phi(\mathbf{r}) = \frac{1}{n \omega_n} \int_{\mathbb{R}^n} \frac{\mathbf{F}(\mathbf{r}') \cdot (\mathbf{r}' - \mathbf{r})}{|\mathbf{r} - \mathbf{r}'|^n} \, dV'.$$
| Physical sciences | Classical mechanics | Physics |
15627440 | https://en.wikipedia.org/wiki/Phragmites%20australis | Phragmites australis | Phragmites australis, known as the common reed, is a species of flowering plant in the grass family Poaceae. It is a tall wetland grass with a cosmopolitan distribution worldwide.
Description
Phragmites australis commonly forms extensive stands (known as reed beds), which may cover large areas. Where conditions are suitable it can also spread rapidly by horizontal runners, which put down roots at regular intervals. It can grow in damp ground, in shallow standing water, or even as a floating mat. The erect stems are tall, with the tallest plants growing in areas with hot summers and fertile growing conditions.
The leaves are long and broad. The flowers are produced in late summer in a dense, dark purple panicle. Later the numerous long, narrow, sharp-pointed spikelets appear greyer due to the growth of long, silky hairs. These eventually help disperse the minute seeds.
Taxonomy
Recent studies have characterized morphological distinctions between the introduced and native stands of Phragmites australis in North America. The Eurasian phenotype can be distinguished from the North American phenotype by its shorter ligules (not over 1 mm, as opposed to over 1 mm), shorter glumes (under 3.2 mm against over 3.2 mm, although there is some overlap in this character), and in culm characteristics.
Phragmites australis subsp. americanus – the North American genotype has been described as a distinct species, Phragmites americanus
Phragmites australis subsp. australis – the Eurasian genotype
Phragmites australis subsp. berlandieri (E.Fourn.) Saltonst. & Hauber
Phragmites australis subsp. isiacus (Arcang.) ined.
Ecology
It is a helophyte (aquatic plant), especially common in alkaline habitats, and it also tolerates brackish water, and so is often found at the upper edges of estuaries and on other wetlands (such as grazing marsh) which are occasionally inundated by the sea. A study demonstrated that P. australis has similar greenhouse gas emissions to native Spartina alterniflora. However, other studies have demonstrated that it is associated with larger methane emissions and greater carbon dioxide uptake than native New England salt marsh vegetation that occurs at higher marsh elevations.
Common reed is suppressed where it is grazed regularly by livestock. Under these conditions it either grows as small shoots within the grassland sward, or it disappears altogether. In Europe, common reed is rarely invasive, except in damp grasslands where traditional grazing has been abandoned.
Invasive status
In North America, the status of Phragmites australis subsp. australis is a source of confusion and debate. It is commonly considered a non-native and often invasive species, introduced from Europe in the 1800s. However, there is evidence of the existence of Phragmites as a native plant in North America long before European colonization of the continent. The North American native subspecies, P. a. subsp. americanus (sometimes considered a separate species, P. americanus), is markedly less vigorous than European forms. The expansion of Phragmites in North America is due to the more vigorous, but similar-looking European subsp. australis.
Phragmites australis subsp. australis outcompetes native vegetation and lowers the local plant biodiversity. It forms dense thickets of vegetation that are unsuitable habitat for native fauna. It displaces native plant species such as wild rice, cattails, and native orchids. Phragmites has a high above-ground biomass that blocks light to other plants, allowing areas to turn into a Phragmites monoculture very quickly. Decomposing Phragmites also raises the rate of marsh accretion above what would occur with native marsh vegetation.
Phragmites australis subsp. australis is causing serious problems for many other North American hydrophyte wetland plants, including the native P. australis subsp. americanus. Gallic acid released by Phragmites is degraded by ultraviolet light to produce mesoxalic acid, effectively hitting susceptible plants and seedlings with two harmful toxins. Phragmites is so difficult to control that one of the most effective methods of eradicating the plant is to burn it over 2–3 seasons, because the roots grow so deep and strong that a single burn is not enough. Ongoing research suggests that goats could be used effectively to control the species.
Natural enemies
Since 2017, over 80% of the beds of Phragmites in the Pass a Loutre Wildlife Management Area have been damaged by the invasive roseau cane scale (Nipponaclerda biwakoensis), threatening wildlife habitat throughout the affected regions of the area. While typically considered a noxious weed, in Louisiana the reed beds are considered critical to the stability of the shorelines of wetland areas and waterways of the Mississippi River Delta, and the die-off of reed beds is believed to accelerate coastal erosion.
Uses
The entire plant is edible raw or cooked. The young stems can be boiled, or later in the season used to make flour. The underground stems can also be used but are tough, as can the seeds, though they are hard to find.
Stems can be made into eco-friendly drinking straws. Many parts of the plant can be eaten. The young shoots can be consumed raw or cooked. The hardened sap from damaged stems can be eaten fresh or toasted. The stems can be dried, ground, sifted, hydrated, and toasted like marshmallows. The seeds can be crushed, mixed with berries and water, and cooked to make a gruel. The roots can be prepared similar to those of cattails.
Common reed is the primary source of thatch for traditional thatch housing in Europe and beyond. The plant is extensively used in phytodepuration, or natural water treatment systems, since the root hairs are excellent at filtering out impurities in waste water. It also shows excellent potential as a source of biomass.
| Biology and health sciences | Poales | Plants |
6171868 | https://en.wikipedia.org/wiki/Regurgitation%20%28digestion%29 | Regurgitation (digestion) | Regurgitation is the expulsion of material from the pharynx, or esophagus, usually characterized by the presence of undigested food or blood.
Regurgitation is used by a number of species to feed their young. This is typically in circumstances where the young are at a fixed location and a parent must forage or hunt for food, especially under circumstances where the carriage of small prey would be subject to robbing by other predators or the whole prey is larger than can be carried to a den or nest. Some bird species also occasionally regurgitate pellets of indigestible matter such as bones and feathers.
In most animals it is a normal and voluntary process, unlike the complex vomiting reflex that occurs in response to toxins.
Humans
In humans it can be voluntary or involuntary, the latter being due to a small number of disorders. Regurgitation of a person's meals following ingestion is known as rumination syndrome, an uncommon and often misdiagnosed motility disorder that affects eating. It may be a symptom of gastroesophageal reflux disease (GERD).
In infants, regurgitation – or spitting up – is quite common, with 67% of 4-month-old infants spitting up more than once per day.
Some people are able to regurgitate without using any external stimulation or drug, by means of muscle control. Practitioners of yoga have also been known to do this. Professional regurgitators perfect the ability to such a degree as to be able to exploit it as entertainment.
Birds
For birds that transport food to their mates and/or their young over long distances — especially seabirds — it is impractical to carry food in their bills because of the risk that it would be stolen by other birds, such as frigatebirds, skuas and gulls. Such birds often employ a regurgitative feeding strategy. Many species of gulls have an orange to red spot near the end of the bill (called a "subterminal spot") that the chicks peck in order to stimulate regurgitation.
All of the Suliformes employ a regurgitative strategy to feed their young. In some species, such as the blue-footed booby, masked booby, and the Nazca booby, a brood hierarchy exists, in which the older chick is fed before the younger, subordinate chick. In times when food is scarce, siblicide may occur, where the dominant chick kills its younger sibling in order to sequester all of the resources of the parents. Penguin chicks are fed regurgitated food by both parents. Researchers have found that the practice may cause metabolic alkalosis in certain penguins.
Some birds, such as fulmars, employ regurgitation as a defense when threatened.
Other animals
Ruminants regurgitate their food as a normal part of digestion. During their idle time, they chew the regurgitated food (cud) and swallow it again, which increases digestibility by reducing particle size.
Honey is produced by a process of regurgitation by honey bees, which is stored in the beehive as a primary food source.
| Biology and health sciences | Basics | Biology |
1231425 | https://en.wikipedia.org/wiki/Talwar | Talwar | The talwar, also spelled talwaar and tulwar, is a type of curved sword or sabre from the Indian subcontinent.
Etymology and classification
The word talwar originated from the Sanskrit word taravāri, which means "one-edged sword". It is the word for sword in several related languages, such as Hindustani (Hindi and Urdu), Nepali, Marathi, Gujarati and Punjabi, and a cognate form is used in Bengali.
Like many swords from around the world with an etymology derived from a term meaning simply 'sword', the talwar has in scholarship, and in museum and collector usage, acquired a more specific meaning. However, South Asian swords, while showing a rich diversity of forms, suffer from relatively poor dating (so developmental history is obscure) and a lack of precise nomenclature and classification. The typical talwar is a type of sabre, characterised by a curved blade (without the radical curve of some Persian swords), possessing an all-metal hilt with integral quillons and a disc-shaped pommel. This type of hilt is sometimes called the 'Indo-Muslim hilt', or 'standard Indian hilt'. Talwars possessing only slightly curved blades can be called sirohi. However, many other variations exist. Swords with straight blades and the disc-pommel hilt are usually referred to as 'straight-bladed talwars' (though the word dhup is also used), while those with the same hilt but yatagan-type forward-curved blades are termed 'sosun patta'. Swords with sabre-blades and all metal Indo-Muslim hilts, but having the pommel in the shape of the head of an animal or bird, instead of the disc, are termed talwar, without being differentiated by name.
History
The talwar belongs to the same family of curved swords as the Persian shamshir, the Turkish kilij, Arabian saif and the Afghan pulwar, all such swords being originally derived from earlier curved swords developed in Turkic Central Asia. The talwar typically does not have as radical a curve as the shamshir and only a very small minority have the expanded, stepped yelman (a sharp back edge on the distal third of the blade) typical of the kilij.
The talwar has a distinctive, all-metal, Indo-Muslim hilt, developed in Medieval western India. The increasing influence in India of Turco-Afghan, and later Turco-Mongol, dynasties (employing Persian and Central Asian arms) in the Late Medieval and subsequent eras led to ever greater use of sabre-like, curved swords. By Mughal times, the talwar had become the most popular form of sword in the Subcontinent. The talwar was the product of the marriage of the curved blade derived from Turco-Mongol and Persian swords and the native all-metal Indo-Muslim hilt.
Characteristics
The talwar was produced in many varieties, with different types of blades. Some blades are very unusual, from those with double-pointed tips (zulfiqar) to those with massive blades (sometimes called tegha – often deemed to be executioner's swords but on little evidence). However, all such blades are curved, and the vast majority of talwars have blades more typical of a generalised sabre. As noted above, swords with blades other than curved sabre-blades, or possessing hilts radically different from the Indo-Muslim type are usually differentiated by name, though usage is not entirely consistent.
Many examples of the talwar exhibit an increased curvature in the distal half of the blade, compared to the curvature nearer the hilt. Also relatively common is a widening of the blade near the tip (often without the distinct step [latchet] to the back of the blade, characteristic of the yelman of the kilij). The blade profile of the British Pattern 1796 light cavalry sabre is similar to some examples of the talwar, and it has been suggested that the talwar may have contributed to the design of the British sabre.
A typical talwar has a wider blade than the shamshir. Late examples often had European-made blades, set into distinctive Indian-made hilts. The hilt of the typical talwar is of the Indo-Muslim type, and is often termed a "disc hilt" from the prominent disc-shaped flange surrounding the pommel. The pommel often has a short spike projecting from its centre, sometimes pierced for a cord to secure the sword to the wrist. The hilt incorporates a simple cross-guard which frequently has a slender knucklebow attached. The hilt is usually entirely of iron, though brass and silver hilts are found, and is connected to the tang of the blade by a very powerful adhesive resin. This resin, or lac, is derived from the peepal tree; it is softened by heating and, when cooled, it sets solidly. More ornate examples of the talwar often show silver or gold plated decoration in a form called koftgari. Talwars of princely status can have hilts of gold, profusely set with precious stones; one such, preserved at the Baroda Palace Armoury, is decorated with 275 diamonds and an emerald.
Use
The talwar was used by both cavalry and infantry. The grip of the talwar is cramped and the prominent disc of the pommel presses into the wrist if attempts are made to use it to cut like a conventional sabre. These features of the talwar hilt result in the hand having a very secure and rather inflexible hold on the weapon, enforcing the use of variations on the very effective "draw cut". The fact that the talwar does not have the kind of radical curve of the shamshir indicates that it could be used for thrusting as well as cutting purposes. The blades of some examples of the talwar widen towards the tip. This increases the momentum of the distal portion of the blade when used to cut; when a blow was struck by a skilled warrior, limbs could be amputated and persons decapitated. Because of this attribute, the talwar was also used for executions in some regions. The spike attached to the pommel could be used for striking the opponent in extreme close quarter circumstances when it was not always possible to use the blade. Due to the presence of a blunted ricasso the talwar can be held with the fore-finger wrapped around the lower quillon of the cross guard.
Gallery
| Technology | Swords | null |
1231733 | https://en.wikipedia.org/wiki/Airglow | Airglow | Airglow (also called nightglow) is a faint emission of light by a planetary atmosphere. In the case of Earth's atmosphere, this optical phenomenon causes the night sky never to be completely dark, even after the effects of starlight and diffused sunlight from the far side are removed. This phenomenon originates with self-illuminated gases and has no relationship with Earth's magnetism or sunspot activity.
History
The airglow phenomenon was first identified in 1868 by Swedish physicist Anders Ångström. Since then, it has been studied in the laboratory, and various chemical reactions have been observed to emit electromagnetic energy as part of the process. Scientists have identified some of those processes that would be present in Earth's atmosphere, and astronomers have verified that such emissions are present. Simon Newcomb was the first person to scientifically study and describe airglow, in 1901.
Airglow existed in pre-industrial society and was known to the ancient Greeks. "Aristotle and Pliny described the phenomena of Chasmata, which can be identified in part as auroras, and in part as bright airglow nights."
Description
Airglow is caused by various processes in the upper atmosphere of Earth, such as the recombination of atoms which were photoionized by the Sun during the day, luminescence caused by cosmic rays striking the upper atmosphere, and chemiluminescence caused mainly by oxygen and nitrogen reacting with hydroxyl free radicals at heights of a few hundred kilometres. It is not noticeable during the daytime due to the glare and scattering of sunlight.
Even at the best ground-based observatories, airglow limits the photosensitivity of optical telescopes. Partly for this reason, space telescopes like Hubble can observe much fainter objects than current ground-based telescopes at visible wavelengths.
Airglow at night may be bright enough for a ground observer to notice and appears generally bluish. Although airglow emission is fairly uniform across the atmosphere, it appears brightest at about 10° above the observer's horizon, since the lower one looks, the greater the mass of atmosphere one is looking through. Very low down, however, atmospheric extinction reduces the apparent brightness of the airglow.
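This geometric brightening is often described with the van Rhijn function; the sketch below assumes an emitting layer at 90 km (a typical airglow height, not a figure from this article) and ignores extinction:

```python
import math

R_EARTH = 6371.0  # mean Earth radius in km
H_LAYER = 90.0    # assumed height of the emitting layer in km

def van_rhijn(zenith_angle_deg, h=H_LAYER):
    """Airglow brightness relative to the zenith, ignoring extinction."""
    s = (R_EARTH / (R_EARTH + h)) * math.sin(math.radians(zenith_angle_deg))
    return 1.0 / math.sqrt(1.0 - s * s)

# The slant path through the layer lengthens toward the horizon, so the
# airglow brightens; atmospheric extinction then dims it again very low down.
for altitude in (90, 45, 10):
    print(altitude, round(van_rhijn(90 - altitude), 2))  # 1.0, ~1.4, ~4.2
```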
One airglow mechanism is when an atom of nitrogen combines with an atom of oxygen to form a molecule of nitric oxide (NO). In the process, a photon is emitted. This photon may have any of several different wavelengths characteristic of nitric oxide molecules. The free atoms are available for this process, because molecules of nitrogen (N2) and oxygen (O2) are dissociated by solar energy in the upper reaches of the atmosphere and may encounter each other to form NO. Other chemicals that can create airglow in the atmosphere are hydroxyl (OH), atomic oxygen (O), sodium (Na), and lithium (Li).
The sky brightness is typically measured in units of apparent magnitude per square arcsecond of sky.
Calculation
In order to calculate the relative intensity of airglow, we need to convert apparent magnitudes into fluxes of photons; this clearly depends on the spectrum of the source, but we will ignore that initially. At visible wavelengths, we need the parameter S0(V), the power per square centimetre of aperture and per micrometre of wavelength produced by a zeroth-magnitude star, to convert apparent magnitudes into fluxes: S0(V) ≈ 4.0 × 10^−12 W cm^−2 µm^−1. If we take the example of a star of apparent magnitude m observed through a normal V band filter (bandpass Δλ ≈ 0.09 µm, frequency ν ≈ 5.5 × 10^14 Hz), the number of photons we receive per square centimetre of telescope aperture per second from the source is Ns:

Ns = 10^(−0.4 m) S0(V) Δλ / (hν)
(where h is the Planck constant; hν is the energy of a single photon of frequency ν).
At V band, the emission from airglow is about V = 22 per square arc-second at a high-altitude observatory on a moonless night; in excellent seeing conditions, the image of a star will be about 0.7 arc-second across with an area of 0.4 square arc-second, and so the emission from airglow over the area of the image corresponds to about V = 23. This gives the number of photons from airglow, Na:

Na = 10^(−0.4 × 23) S0(V) Δλ / (hν)
The signal-to-noise for an ideal ground-based observation with a telescope of area A and integration time t (ignoring losses and detector noise), arising from Poisson statistics, is only:

S/N = Ns A t / √((Ns + Na) A t)
If we assume a 10 m diameter ideal ground-based telescope and an unresolved star: every second, over a patch the size of the seeing-enlarged image of the star, 35 photons arrive from the star and 3500 from airglow. So, over an hour, roughly 1.3 × 10^7 photons arrive from the airglow, and approximately 1.3 × 10^5 arrive from the source; so the S/N ratio is about:

1.3 × 10^5 / √(1.3 × 10^7) ≈ 36
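The arithmetic can be checked directly; a minimal sketch in Python, using only the photon rates quoted above and a one-hour exposure:

```python
import math

star_rate = 35       # photons per second from the star over the seeing disc (from the text)
airglow_rate = 3500  # photons per second from airglow over the same patch (from the text)
t = 3600             # one-hour exposure, in seconds

signal = star_rate * t                             # total photons from the source
noise = math.sqrt((star_rate + airglow_rate) * t)  # Poisson noise on all collected photons
print(f"S/N = {signal / noise:.1f}")               # about 35, in line with the rough estimate above
```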
We can compare this with "real" answers from exposure time calculators. For an 8 m unit Very Large Telescope telescope, according to the FORS exposure time calculator, 40 hours of observing time are needed to reach , while the 2.4 m Hubble only takes 4 hours according to the ACS exposure time calculator. A hypothetical 8 m Hubble telescope would take about 30 minutes.
It should be clear from this calculation that reducing the view field size can make fainter objects more detectable against the airglow; unfortunately, adaptive optics techniques that reduce the diameter of the view field of an Earth-based telescope by an order of magnitude as yet only work in the infrared, where the sky is much brighter. A space telescope is not restricted by the view field, since it is not affected by airglow.
Induced airglow
Scientific experiments have been conducted to induce airglow by directing high-power radio emissions at the Earth's ionosphere. These radiowaves interact with the ionosphere to induce faint but visible optical light at specific wavelengths under certain conditions.
The effect is also observable in the radio frequency band, using ionosondes.
Experimental observation
SwissCube-1 is a Swiss satellite operated by Ecole Polytechnique Fédérale de Lausanne. The spacecraft is a single unit CubeSat, which was designed to conduct research into airglow within the Earth's atmosphere and to develop technology for future spacecraft. Though SwissCube-1 is rather small (10 cm × 10 cm × 10 cm) and weighs less than 1 kg, it carries a small telescope for obtaining images of the airglow. The first SwissCube-1 image came down on 18 February 2011 and was quite black with some thermal noise on it. The first airglow image came down on 3 March 2011. This image has been converted to the human optical range (green) from its near-infrared measurement. This image provides a measurement of the intensity of the airglow phenomenon in the near-infrared. The range measured is from 500 to 61400 photons, with a resolution of 500 photons.
Observation of airglow on other planets
The Venus Express spacecraft contains an infrared sensor which has detected near-IR emissions from the upper atmosphere of Venus. The emissions come from nitric oxide (NO) and from molecular oxygen.
Scientists had previously determined in laboratory testing that during NO production, ultraviolet emissions and near-IR emissions were produced. The UV radiation had been detected in the atmosphere, but until this mission, the atmosphere-produced near-IR emissions were only theoretical.
Gallery
| Physical sciences | Basics | Astronomy |
1231792 | https://en.wikipedia.org/wiki/Hornfels | Hornfels | Hornfels is the group name for a set of contact metamorphic rocks that have been baked and hardened by the heat of intrusive igneous masses and have been rendered massive, hard, splintery, and in some cases exceedingly tough and durable. These properties are caused by fine grained non-aligned crystals with platy or prismatic habits, characteristic of metamorphism at high temperature but without accompanying deformation. The term is derived from the German word Hornfels, meaning "hornstone", because of its exceptional toughness and texture both reminiscent of animal horns. These rocks were referred to by miners in northern England as whetstones.
Most hornfels are fine-grained, and while the original rocks (such as sandstone, shale, slate and limestone) may have been more or less fissile owing to the presence of bedding or cleavage planes, this structure is effaced or rendered inoperative in the hornfels. Though many hornfels show vestiges of the original bedding, they break across this as readily as along it; in fact, they tend to separate into cubical fragments rather than into thin plates. Sheet minerals may be abundant but are aligned at random.
Hornfels most commonly form in the aureole of granitic intrusions in the upper or middle crust. Hornfels formed from contact metamorphism by volcanic activity very close to the surface can produce unusual and distinctive minerals. Changes in composition caused by fluids given off by the magmatic body (metasomatism) sometimes take place. The hornfels facies is the metamorphic facies which occupies the lowest pressure portion of the metamorphic pressure-temperature space.
The most common hornfels (the biotite hornfels) are dark-brown to black with a somewhat velvety luster owing to the abundance of small crystals of shining black mica. Also, most common hornfels have a black streak. The lime hornfels are often white, yellow, pale-green, brown and other colors. Green and dark-green are the prevalent tints of the hornfels produced by the alteration of igneous rocks. Although for the most part the constituent grains are too small to be determined by the unaided eye, there are often larger crystals (porphyroblasts) of cordierite, garnet or andalusite scattered through the fine matrix, and these may become very prominent on the weathered faces of the rock.
Structure
The structure of the hornfels is very characteristic. Very rarely do any of the minerals show crystalline form, but the small grains fit closely together like the fragments of a mosaic; they are usually of nearly equal dimensions. This has been called pflaster or pavement structure from the resemblance to rough pavement work. Each mineral may also enclose particles of the others; in quartz, for example, small crystals of graphite, biotite, iron oxides, sillimanite or feldspar may appear in great numbers. Often the whole of the grains are rendered semi-opaque in this way. The minutest crystals may show traces of crystalline outlines; undoubtedly they are of new formation and have originated in situ. This leads us to believe that the whole rock has been recrystallized at a high temperature and in the solid state so that there was little freedom for the mineral molecules to build up well-individualized crystals. The regeneration of the rock has been sufficient to efface most of the original structures and to replace the former minerals more-or-less completely by new ones. But crystallization has been hampered by the solid condition of the mass and the new minerals are formless and have been unable to reject impurities, but have grown around them.
Compositions
Pelitic
Clays, sedimentary slates and shales yield biotite hornfels in which the most conspicuous mineral is biotite mica, the small scales of which are transparent under the microscope and have a dark reddish-brown color and strong dichroism. There is also quartz, and often a considerable amount of feldspar, while graphite, tourmaline and iron oxides frequently occur in lesser quantity. In these biotite hornfels, minerals consisting of aluminium silicates are commonly found; they are usually andalusite and sillimanite, but kyanite appears also in hornfels, especially in those that have a schistose character. The andalusite may be pink and is then often pleochroic in thin sections, or it may be white with the cross-shaped dark enclosures of the matrix that are characteristic of chiastolite. Sillimanite usually forms exceedingly minute needles embedded in quartz.
In the rocks of this group cordierite also occurs, not rarely, and may have the outlines of imperfect hexagonal prisms that are divided up into six sectors when seen in polarized light. In biotite hornfels, a faint striping may indicate the original bedding of the unaltered rock and corresponds to small changes in the nature of the sediment deposited. More commonly there is a distinct spotting, visible on the surfaces of the hand specimens. The spots are round or elliptical, and may be paler or darker than the rest of the rock. In some cases they are rich in graphite or carbonaceous matter; in others they are full of brown mica; some spots consist of rather coarser grains of quartz than occur in the matrix. The frequency with which this feature reappears in the less altered slates and hornfels is rather remarkable, especially as it seems certain that the spots are not always of the same nature or origin. Tourmaline hornfels are found sometimes near the margins of tourmaline granites; they are black with small needles of schorl that under the microscope are dark brown and richly pleochroic. As the tourmaline contains boron, there must have been some permeation of vapors from the granite into the sediments. Rocks of this group are often seen in the Cornish tin-mining districts, especially near the lodes.
Carbonate
A second great group of hornfels are the calc–silicate hornfels that arise from the thermal alteration of impure limestone. The purer beds recrystallize as marbles, but where there has been originally an admixture of sand or clay, lime-bearing silicates are formed, such as diopside, epidote, garnet, sphene, vesuvianite and scapolite; with these phlogopite, various feldspars, pyrites, quartz and actinolite often occur. These rocks are fine-grained, and though often banded, are tough and much harder than the original limestones. They are excessively variable in their mineralogical composition, and very often alternate in thin seams with biotite hornfels and indurated quartzites. When perfused with boric and fluoric vapors from the granite they may contain much axinite, fluorite and datolite, but the aluminous silicates are absent from these rocks.
Mafic
From diabases, basalts, andesites and other igneous rocks a third type of hornfels is produced. They consist essentially of feldspar with hornblende (generally of brown color) and pale pyroxene. Sphene, biotite and iron oxides are the other common constituents, but these rocks show much variety of composition and structure. Where the original mass was decomposed and contained calcite, zeolites, chlorite and other secondary minerals either in veins or in cavities, there are usually rounded areas or irregular streaks containing a suite of new minerals, which may resemble those of the calcium-silicate hornfelses above described. The original porphyritic, fluidal, vesicular or fragmental structures of the igneous rock are clearly visible in the less advanced stages of hornfelsing, but become less evident as the alteration progresses.
In some districts hornfelsed rocks occur that have acquired a schistose structure through shearing, and these form transitions to schists and gneisses that contain the same minerals as the hornfels, but have a schistose instead of a hornfels structure. Among these may be mentioned cordierite and sillimanite gneisses, andalusite and kyanite mica-schists, and those schistose calc-silicate rocks that are known as cipolins. That these are sediments that have undergone thermal alteration is generally admitted, but the exact conditions under which they were formed are not always clear. The essential features of hornfelsing are ascribed to the action of heat, pressure and permeating vapors, regenerating a rock mass without the production of fusion (at least on a large scale). It has been argued, however, that often there is extensive chemical change owing to the introduction of matter from the granite into the rocks surrounding it. The formation of new feldspar in the hornfelses is pointed out as evidence of this. While this feldspathization may have occurred in a few localities, it seems conspicuously absent from others. Most authorities at the present time regard the changes as being purely of a physical and not of a chemical nature.
Facies
The hornfels facies occupies the portion of the metamorphic pressure-temperature space of lowest pressure and low to high temperature. It is subdivided into a low-temperature regime of albite-epidote hornfels, a medium-temperature regime of hornblende hornfels, a high-temperature regime of pyroxene hornfels, and an ultra-high-temperature sanidinite regime. The latter is sometimes regarded as a separate facies. Maximum pressures are around 2 kbar, and temperatures are around 300–500 °C for the albite-epidote hornfels facies, 500–650 °C for the hornblende hornfels facies, 650–800 °C for the pyroxene hornfels facies, and above 800 °C for the sanidinite facies.
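Read as a rough decision rule at contact-aureole pressures, those temperature ranges can be sketched as follows (the boundaries are approximate and gradational in reality):

```python
def hornfels_facies(temperature_c):
    """Approximate hornfels facies for a temperature in degrees Celsius,
    assuming the low pressures (up to about 2 kbar) typical of contact aureoles."""
    if temperature_c < 300:
        return "below hornfels facies conditions"
    elif temperature_c < 500:
        return "albite-epidote hornfels"
    elif temperature_c < 650:
        return "hornblende hornfels"
    elif temperature_c < 800:
        return "pyroxene hornfels"
    else:
        return "sanidinite"

print(hornfels_facies(600))  # hornblende hornfels
```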
The actual minerals present in each facies depends on the composition of the protolith. For a mafic protolith, the albite-epidote hornfels facies is characterized by albite and epidote or zoisite with minor actinolite and chlorite. This gives way to hornblende, plagioclase, pyroxene, and garnet in the hornblende hornfels facies, which in turn gives way to orthopyroxene, augite, plagioclase, and characteristic trace garnet in the pyroxene hornfels facies and sanidinite facies, the latter two being indistinguishable for this composition of protolith.
For an ultramafic protolith, the albite-epidote facies is characterized by serpentine, talc, tremolite, and chlorite, giving way to forsterite, orthopyroxene, hornblende, chlorite, and characteristic minor aluminum spinel and magnetite in the hornblende facies, which in turn gives way to forsterite, orthopyroxene, augite, plagioclase, and aluminum spinel in the pyroxene hornfels facies. The sanidinite facies for this composition differs from the pyroxene hornfels facies only in the disappearance of aluminum spinel.
For a pelitic protolith, the sequence is quartz, plagioclase, muscovite, chlorite, and cordierite in the albite-epidote facies; quartz, plagioclase, muscovite, biotite, cordierite, and andalusite in the hornblende hornfels facies; and quartz, plagioclase, orthoclase, andalusite, sillimanite, cordierite, and orthopyroxene in the pyroxene hornfels facies. The sanidinite facies features quartz, plagioclase, sillimanite, cordierite, orthopyroxene, sapphirine, and aluminum spinel.
For a calcareous protolith, the sequence is calcite, dolomite, quartz, tremolite, talc, and forsterite for the albite-epidote hornfels facies; calcite, dolomite, quartz, tremolite, diopside, and forsterite for the hornblende hornfels facies; calcite, quartz, diopside, forsterite, and wollastonite for the pyroxene hornfels facies; and calcite, quartz, diopside, forsterite, wollastonite, monticellite, and akermanite for the sanidinite facies.
Acoustic properties
Hornfels have the ability to resonate when struck. Michael Tellinger has described such stones in South Africa, also known as "ring-stones" due to their ability to ring like a bell. The Musical Stones of Skiddaw are an example of a lithophone made from hornfels.
| Physical sciences | Metamorphic rocks | Earth science |
1231797 | https://en.wikipedia.org/wiki/Extinction%20%28astronomy%29 | Extinction (astronomy) | In astronomy, extinction is the absorption and scattering of electromagnetic radiation by dust and gas between an emitting astronomical object and the observer. Interstellar extinction was first documented as such in 1930 by Robert Julius Trumpler. However, its effects had been noted in 1847 by Friedrich Georg Wilhelm von Struve, and its effect on the colors of stars had been observed by a number of individuals who did not connect it with the general presence of galactic dust. For stars lying near the plane of the Milky Way which are within a few thousand parsecs of the Earth, extinction in the visual (V) band of the photometric system is roughly 1.8 magnitudes per kiloparsec.
For Earth-bound observers, extinction arises both from the interstellar medium and the Earth's atmosphere; it may also arise from circumstellar dust around an observed object. Strong extinction in Earth's atmosphere of some wavelength regions (such as X-ray, ultraviolet, and infrared) is overcome by the use of space-based observatories. Since blue light is much more strongly attenuated than red light, extinction causes objects to appear redder than expected; this phenomenon is called interstellar reddening.
Interstellar reddening
Interstellar reddening is a phenomenon associated with interstellar extinction where the spectrum of electromagnetic radiation from a radiation source changes characteristics from that which the object originally emitted. Reddening occurs due to the light scattering off dust and other matter in the interstellar medium. Interstellar reddening is a different phenomenon from redshift, which is the proportional frequency shifts of spectra without distortion. Reddening preferentially removes shorter wavelength photons from a radiated spectrum while leaving behind the longer wavelength photons, leaving the spectroscopic lines unchanged.
In most photometric systems, filters (passbands) are used from which readings of magnitude of light may take account of latitude and humidity among terrestrial factors. Interstellar reddening equates to the "color excess", defined as the difference between an object's observed color index and its intrinsic color index (sometimes referred to as its normal color index). The latter is the theoretical value which it would have if unaffected by extinction. In the first system, the UBV photometric system devised in the 1950s and its most closely related successors, the object's color excess is related to the object's B−V color (calibrated blue minus calibrated visible) by:

E(B−V) = (B−V)observed − (B−V)intrinsic
For an A0-type main sequence star (these have median wavelength and heat among the main sequence) the color indices are calibrated at 0, based on an intrinsic reading of such a star (to within exactly ±0.02, depending on which spectral point, i.e. which precise passband within the abbreviated color name, is in question; see color index). At least two and up to five measured passbands in magnitude are then compared by subtraction: U, B, V, I, or R, during which the color excess from extinction is calculated and deducted. The names of the four sub-indices (R minus I, etc.) follow the order of the subtraction of recalibrated magnitudes, from right to immediate left within this sequence.
General characteristics
Interstellar reddening occurs because interstellar dust absorbs and scatters blue light waves more than red light waves, making stars appear redder than they are. This is similar to the effect seen when dust particles in the atmosphere of Earth contribute to red sunsets.
Broadly speaking, interstellar extinction is strongest at short wavelengths, generally observed by using techniques from spectroscopy. Extinction results in a change in the shape of an observed spectrum. Superimposed on this general shape are absorption features (wavelength bands where the intensity is lowered) that have a variety of origins and can give clues as to the chemical composition of the interstellar material, e.g. dust grains. Known absorption features include the 2175 Å bump, the diffuse interstellar bands, the 3.1 μm water ice feature, and the 10 and 18 μm silicate features.
In the solar neighborhood, the rate of interstellar extinction in the Johnson–Cousins V-band (visual filter), averaged at a wavelength of 540 nm, is usually taken to be 0.7–1.0 mag/kpc: simply an average, due to the clumpiness of interstellar dust. In general, however, this means that a star will have its brightness reduced by about a factor of 2 in the V-band, viewed from a good night sky vantage point on Earth, for every kiloparsec (3,260 light years) it is farther away from us.
The amount of extinction can be significantly higher than this in specific directions. For example, some regions of the Galactic Center are awash with obvious intervening dark dust from our spiral arm (and perhaps others), and are themselves embedded in a bulge of dense matter, causing more than 30 magnitudes of extinction in the optical, meaning that less than 1 optical photon in 10^12 passes through. This results in the zone of avoidance, where our view of the extra-galactic sky is severely hampered, and background galaxies, such as Dwingeloo 1, were only discovered recently through observations in radio and infrared.
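Both of those figures follow from the logarithmic definition of the magnitude scale; a minimal sketch of the conversion from extinction in magnitudes to the fraction of light transmitted:

```python
def transmitted_fraction(extinction_mag):
    """Fraction of photons surviving a given extinction, in magnitudes."""
    return 10 ** (-0.4 * extinction_mag)

print(transmitted_fraction(0.75))  # ~0.5, a factor-of-2 loss per kiloparsec in V
print(transmitted_fraction(30))    # 1e-12, the Galactic Center case above
```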
The general shape of the ultraviolet through near-infrared (0.125 to 3.5 μm) extinction curve (plotting extinction in magnitude against wavelength, often inverted) looking from our vantage point at other objects in the Milky Way, is fairly well characterized by the stand-alone parameter of relative visibility (of such visible light) R(V) (which is different along different lines of sight), but there are known deviations from this characterization. Extending the extinction law into the mid-infrared wavelength range is difficult due to the lack of suitable targets and various contributions by absorption features.
R(V) compares aggregate and particular extinctions. It is R(V) = A(V)/E(B−V), where the color excess E(B−V) = A(B) − A(V). Restated, it is the total extinction, A(V), divided by the selective total extinction (A(B)−A(V)) of those two wavelengths (bands). A(B) and A(V) are the total extinction at the B and V filter bands. Another measure used in the literature is the absolute extinction A(λ)/A(V) at wavelength λ, comparing the total extinction at that wavelength to that at the V band.
R(V) is known to be correlated with the average size of the dust grains causing the extinction. For the Milky Way Galaxy, the typical value for R(V) is 3.1, but is found to vary considerably across different lines of sight. As a result, when computing cosmic distances it can be advantageous to move to star data from the near-infrared (of which the filter or passband Ks is quite standard) where the variations and amount of extinction are significantly less, and similar ratios as to R(Ks): 0.49±0.02 and 0.528±0.015 were found respectively by independent groups. Those two more modern findings differ substantially relative to the commonly referenced historical value ≈0.7.
The relationship between the total extinction, A(V) (measured in magnitudes), and the column density of neutral hydrogen atoms, NH (usually measured in cm^−2), shows how the gas and dust in the interstellar medium are related. From studies using ultraviolet spectroscopy of reddened stars and X-ray scattering halos in the Milky Way, Predehl and Schmitt found the relationship between NH and A(V) to be approximately:

NH ≈ 1.8 × 10^21 A(V) atoms cm^−2 mag^−1
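Chaining these relations together gives a quick way to move between colour excess, total extinction, and gas column; a sketch assuming the Galactic average R(V) = 3.1 and the approximate gas-to-dust ratio above:

```python
R_V = 3.1            # typical Milky Way value; varies between sightlines
NH_PER_AV = 1.8e21   # hydrogen atoms cm^-2 per magnitude of A(V), approximate

def total_extinction(e_bv, r_v=R_V):
    """A(V) implied by a colour excess E(B-V)."""
    return r_v * e_bv

def hydrogen_column(a_v):
    """Neutral-hydrogen column density implied by A(V)."""
    return NH_PER_AV * a_v

a_v = total_extinction(0.3)           # an example colour excess of 0.3 mag
print(round(a_v, 2))                  # ~0.93 mag of visual extinction
print(f"{hydrogen_column(a_v):.1e}")  # ~1.7e21 cm^-2
```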
Astronomers have determined the three-dimensional distribution of extinction in the "solar circle" (our region of our galaxy), using visible and near-infrared stellar observations and a model of distribution of stars. The dust causing extinction mainly lies along the spiral arms, as observed in other spiral galaxies.
Measuring extinction towards an object
To measure the extinction curve for a star, the star's spectrum is compared to the observed spectrum of a similar star known not to be affected by extinction (unreddened). It is also possible to use a theoretical spectrum instead of the observed spectrum for the comparison, but this is less common. In the case of emission nebulae, it is common to look at the ratio of two emission lines which should not be affected by the temperature and density in the nebula. For example, the ratio of hydrogen-alpha to hydrogen-beta emission is always around 2.85 under a wide range of conditions prevailing in nebulae. A ratio other than 2.85 must therefore be due to extinction, and the amount of extinction can thus be calculated.
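The nebular method reduces to simple arithmetic on the Balmer decrement; in the sketch below the reddening-curve values k(Hα) and k(Hβ) are assumed typical Galactic numbers, not values given in this article:

```python
import math

K_HALPHA = 2.53  # assumed extinction-curve value at H-alpha
K_HBETA = 3.61   # assumed extinction-curve value at H-beta

def color_excess_from_balmer(observed_ratio, intrinsic_ratio=2.85):
    """E(B-V) implied by an observed H-alpha/H-beta flux ratio."""
    return 2.5 / (K_HBETA - K_HALPHA) * math.log10(observed_ratio / intrinsic_ratio)

# An observed ratio above 2.85 implies reddening between the nebula and us.
print(round(color_excess_from_balmer(3.5), 2))  # ~0.21 mag
```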
The 2175-angstrom feature
One prominent feature in measured extinction curves of many objects within the Milky Way is a broad 'bump' at about 2175 Å, well into the ultraviolet region of the electromagnetic spectrum. This feature was first observed in the 1960s, but its origin is still not well understood. Several models have been presented to account for this bump which include graphitic grains with a mixture of PAH molecules. Investigations of interstellar grains embedded in interplanetary dust particles (IDP) observed this feature and identified the carrier with organic carbon and amorphous silicates present in the grains.
Extinction curves of other galaxies
The form of the standard extinction curve depends on the composition of the ISM, which varies from galaxy to galaxy. In the Local Group, the best-determined extinction curves are those of the Milky Way, the Small Magellanic Cloud (SMC) and the Large Magellanic Cloud (LMC).
In the LMC, there is significant variation in the characteristics of the ultraviolet extinction with a weaker 2175 Å bump and stronger far-UV extinction in the region associated with the LMC2 supershell (near the 30 Doradus starbursting region) than seen elsewhere in the LMC and in the Milky Way. In the SMC, more extreme variation is seen with no 2175 Å bump and very strong far-UV extinction in the star forming Bar and fairly normal ultraviolet extinction seen in the more quiescent Wing.
This gives clues as to the composition of the ISM in the various galaxies. Previously, the different average extinction curves in the Milky Way, LMC, and SMC were thought to be the result of the different metallicities of the three galaxies: the LMC's metallicity is about 40% of that of the Milky Way, while the SMC's is about 10%. Finding extinction curves in both the LMC and SMC which are similar to those found in the Milky Way and finding extinction curves in the Milky Way that look more like those found in the LMC2 supershell of the LMC and in the SMC Bar has given rise to a new interpretation. The variations in the curves seen in the Magellanic Clouds and Milky Way may instead be caused by processing of the dust grains by nearby star formation. This interpretation is supported by work in starburst galaxies (which are undergoing intense star formation episodes) which shows that their dust lacks the 2175 Å bump.
Atmospheric extinction
Atmospheric extinction gives the rising or setting Sun an orange hue and varies with location and altitude. Astronomical observatories generally are able to characterise the local extinction curve very accurately, to allow observations to be corrected for the effect. Nevertheless, the atmosphere is completely opaque to many wavelengths requiring the use of satellites to make observations.
This extinction has three main components: Rayleigh scattering by air molecules, scattering by particulates, and molecular absorption. Molecular absorption is often referred to as telluric absorption, as it is caused by the Earth (telluric is a synonym for terrestrial). The most important sources of telluric absorption are molecular oxygen and ozone, which strongly absorb radiation near ultraviolet, and water, which strongly absorbs infrared.
The amount of such extinction is lowest at the observer's zenith and highest near the horizon. A given star, preferably at solar opposition, reaches its greatest celestial altitude and its optimal time for observation when it is near the local meridian around solar midnight, and if it also has a favorable declination (i.e., similar to the observer's latitude); thus, the seasonal timing that follows from the axial tilt is key. Extinction is approximated by multiplying the standard atmospheric extinction curve (plotted against each wavelength) by the mean air mass calculated over the duration of the observation. A dry atmosphere reduces infrared extinction significantly.
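A minimal sketch of that correction, using the plane-parallel approximation airmass ≈ sec(z), which is adequate away from the horizon; the extinction coefficient here is an assumed typical V-band value, not one from this article:

```python
import math

K_V = 0.15  # assumed V-band extinction coefficient, magnitudes per airmass

def corrected_magnitude(observed_mag, zenith_angle_deg, k=K_V):
    """Magnitude above the atmosphere, from the observed magnitude and zenith angle."""
    airmass = 1.0 / math.cos(math.radians(zenith_angle_deg))  # sec(z) approximation
    return observed_mag - k * airmass

print(corrected_magnitude(10.5, 60))  # airmass 2, so 0.3 mag of extinction removed
```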
| Physical sciences | Observational astronomy | null |
1232575 | https://en.wikipedia.org/wiki/Suicide%20methods | Suicide methods | A suicide method is any means by which a person may choose to end their life. Suicide attempts do not always result in death, and a non-fatal suicide attempt can leave the person with serious physical injuries, long-term health problems, or brain damage.
Worldwide, three suicide methods predominate, with the pattern varying in different countries: these are hanging, pesticides, and firearms. Some suicides may be preventable by removing the means. Making common suicide methods less accessible leads to an overall reduction in the number of suicides.
Method-specific ways to do this might include restricting access to pesticides, firearms, and commonly used drugs. Other important measures are the introduction of policies that address the misuse of alcohol and the treatment of mental disorders. Gun-control measures in a number of countries have seen a reduction in suicides and other gun-related deaths. Other preventive measures are not method-specific; these include support, access to treatment, and calling a crisis hotline. There are multiple talk therapies that reduce suicidal thoughts and behaviors regardless of method, including dialectical behavior therapy (DBT).
Purpose of study
The study of suicide methods aims to identify those commonly used, and the groups at risk of suicide; making methods less accessible may be useful in suicide prevention. Limiting the availability of means such as pesticides and firearms is recommended by a World Health Report on suicide and its prevention. The early identification of mental disorders and substance abuse disorders, follow-up care for those who have attempted suicide, and responsible reporting by the media are all seen to be key in reducing the number of deaths by suicide. National suicide prevention strategies are also advocated using a comprehensive and coordinated response to suicide prevention. This needs to include the registration and monitoring of suicides and attempted suicide, breaking figures down by age, sex, and method.
Such information allows public health resources to focus on the problems that are relevant in a particular place, or for a given population or subpopulation. For instance, if firearms are used in a significant number of suicides in one place, then public health policies there could focus on gun safety, such as keeping guns locked away, and the key inaccessible to at-risk family members. If young people are found to be at increased risk of suicide by overdosing on particular medications, then an alternative class of medication may be prescribed instead, a safety plan and monitoring of medication can be put in place, and parents can be educated about how to prevent the hoarding of medication for a future suicide attempt.
Media reporting
Media reporting of the methods used in suicides is "strongly discouraged" by the World Health Organization, government health agencies, universities, and the Associated Press among others. Detailed descriptions of suicides or the personal characteristics of the person who died contribute to copycat suicides (suicide contagion). Dramatic or inappropriate descriptions of individual suicides by mass media has been linked specifically to copycat suicides among teenagers. Writing for the New Yorker about celebrity suicides, Andrew Solomon wrote that "You who are reading this are at statistically increased risk of suicide right now." In one study, changes in how news outlets reported suicide reduced suicides by a particular method.
Media reporting guidelines also apply to "online content including citizen-generated media coverage". The Recommendations for Reporting on Suicide, created by journalists, suicide prevention groups, and internet safety non-profit organizations, encourage linking to resources such as a list of suicide crisis lines and information about risk factors for suicide, and reporting on suicide as a multi-faceted, treatable health issue.
Method restriction
Method restriction, also called lethal means reduction, is an effective way to reduce the number of suicide deaths in the short and medium term. Method restriction is considered a best practice supported by "compelling" evidence. Some of these actions, such as installing barriers on bridges and reducing the toxicity in gas, require action by governments, industries, or public utilities. At the individual level, method restriction can be as simple as asking a trusted friend or family member to store firearms until the crisis has passed. According to Danuta Wasserman, professor in psychiatry and suicidology at Karolinska Institute, choosing not to restrict access to suicide methods is unethical.
Method restriction is effective and prevents suicides. It has the largest effect on overall suicide rates when the method being restricted is common and no direct substitution is available. If the method being restricted is uncommon, or if a substitute is readily available, then it may be effective in individual cases but not produce a large-scale reduction in the number of deaths in a country.
Method substitution is the process of choosing a different suicide method when the first-choice method is inaccessible. In many cases, when the first-choice method is restricted, the person does not attempt to find a substitute. Method substitution has been measured over the course of decades, so when a common method is restricted (for example, by making domestic gas less toxic), overall suicide rates may be suppressed for many years. If the first-choice suicide method is inaccessible, a method substitution may be made which may be less lethal, tending to result in fewer fatal suicide attempts.
In an example of the curb cut effect, changes unrelated to suicide have also functioned as suicide method restrictions. Examples of this include changes to align train doors with platforms, switching from coal gas to natural gas in homes, and gun control laws, all of which have reduced suicides despite being intended for a different purpose.
List
Suffocation
Suffocation, as a classification of suicide method, includes strangulation and hanging.
Suicide by suffocation involves restricting breathing or the amount of oxygen taken in, causing asphyxia and eventually hypoxia. It is not possible to die simply by holding the breath, since a reflex causes the respiratory muscles to contract, forcing an in-breath and the re-establishment of a normal breathing rhythm. Therefore, inhaling an inert gas such as helium or nitrogen, or a toxic gas such as carbon monoxide, is used to bring about unconsciousness. Certain devices such as exit bags are designed to be used with this method, and provide a way for the carbon dioxide to passively escape, which prevents the panic, sense of suffocation and struggling before unconsciousness, known as the hypercapnic alarm response, caused by the presence of high carbon dioxide concentrations in the blood. Organizations supporting a right to die have promoted death by helium inhalation, although most cases using this method in the US were people with psychiatric conditions.
Hanging
Hanging is a common method of suicide. Hanging involves the use of a ligature such as a rope or cord attached to an anchor point with the other end used to form a noose placed around the neck. The cause of death will either be due to strangulation or a broken neck. About half of attempted suicides by hanging result in death. People who favor this method are usually unaware that it is often a "slow, painful, and messy method that [needs] technical knowledge".
Hanging is the prevalent means of suicide in impoverished pre-industrial societies, and is more common in rural areas than in urban areas.
Hanging was the most common method in traditional Chinese culture, as it was believed that the rage involved in such a death permitted the person's spirit to haunt and torment survivors. In the Chinese culture, suicide by hanging was used as an act of revenge by women and of defiance by powerless officials, who used it as a "final, but unequivocal, way of standing still against and above oppressive authorities". Chinese people would often approach the act ceremonially, including the use of proper attire.
Drowning
Suicide by drowning is the act of deliberately submerging oneself in water or other liquid to prevent breathing. It accounts for less than 2% of all suicides in the United States. People with dementia and schizophrenia have a higher risk of dying by drowning. Of those who attempt suicide by drowning in the US, about half die.
About 2% to 3% of suicides by drowning involve driving a vehicle into a body of water.
Poisoning
Suicide by poisoning, also called self-poisoning, is usually classed as a drug overdose when drugs such as painkillers or recreational drugs are used. The use of pesticides to self-poison is the most common method used in some countries. Poisoning through the means of toxic plants is usually slow and painful.
Pesticide
Worldwide, around 30% of suicides have been from pesticide poisonings. It was the leading suicide method in developing countries, with about half of suicide deaths in India involving poisoning, and most of those involving pesticides. The use of this method varies markedly in different areas of the world, from 0.9% in Europe to about 50% in the Pacific region. In the US, pesticide poisoning is used in about 12 suicides per year. The overall case fatality rate for suicide attempts using pesticide is about 10–20%; the risk of death increases if the person is also drunk at the time.
Method restriction is an effective way to reduce suicide by pesticide poisoning. In Finland, limiting access to parathion in the 1960s resulted in a rapid decline in both poisoning-related suicides and total suicide deaths for several years, and a slower decline in subsequent years. In Sri Lanka, both suicide by pesticide and total suicides declined after first toxicity class I and later class II endosulfan were banned. Overall suicide deaths were cut by 70%, with 93,000 lives saved over 20 years as a result of banning these pesticides. In Korea, banning a single pesticide, paraquat, halved the number of suicides by pesticide poisoning and reduced the total number of suicides in that country.
Drug overdose
A drug overdose involves taking a dose of a drug that exceeds safe levels. In the UK (England and Wales) until 2013, a drug overdose was the most common suicide method in females; in 2019, the corresponding figure for males was 16%. Self-poisoning accounts for the highest number of non-fatal suicide attempts. In the United States about 60% of suicide attempts and 14% of suicide deaths involve drug overdoses. The risk of death in suicide attempts involving overdose is about 2%.
Overdose attempts using painkillers are among the most common, due to their easy availability over-the-counter. Paracetamol (also called acetaminophen) is the most widely used analgesic worldwide and is commonly used in overdose attempts. Paracetamol poisoning is a common cause of acute liver failure. If not treated, the overdose produces a long and painful illness, with symptoms of nausea, vomiting, sweating, and abdominal pain appearing several hours after ingestion and continuing for several days. People who take overdoses of paracetamol do not fall asleep or lose consciousness, although most people who attempt suicide with paracetamol wrongly believe that they will be rendered unconscious by the drug. Method-specific restriction through reducing package size in the UK and Ireland has reduced suicide deaths by drug overdose.
Carbon monoxide
A particular type of poisoning involves the inhalation of high levels of carbon monoxide (CO). Death usually occurs through hypoxia. A nonfatal attempt can result in memory loss and other symptoms.
Carbon monoxide is a colorless and odorless gas, so its presence cannot be detected by sight or smell. It acts by binding preferentially to the hemoglobin in the bloodstream, displacing oxygen molecules and progressively deoxygenating the blood, eventually resulting in the failure of cellular respiration and death. Carbon monoxide is extremely dangerous to bystanders and people who may discover the body; right-to-die advocate Philip Nitschke has therefore recommended against this method.
Before air quality regulations and catalytic converters, suicide by carbon monoxide poisoning was often achieved by running a car's engine in an enclosed space such as a garage, or by redirecting a running car's exhaust back inside the cabin with a hose. Motor car exhaust may have contained up to 25% carbon monoxide. Catalytic converters found on all modern automobiles eliminate over 99% of carbon monoxide produced. As a further complication, the amount of unburned gasoline in emissions can make exhaust unbearable to breathe well before a person loses consciousness.
Charcoal-burning suicide induces death from carbon monoxide poisoning. Originally used in Hong Kong, it spread to Japan, where small charcoal-burning heaters (hibachi) or stoves (shichirin) have been used in a sealed room. By 2001, this method accounted for 25% of deaths from suicide in Japan. Nonfatal attempts can result in severe brain damage due to cerebral anoxia.
Other toxins
Gas-oven suicide was a common method of suicide in the early to mid-20th centuries in some North American and European countries. Household gas was originally coal gas, also called illuminating gas or town gas, which was composed of methane, hydrogen and carbon monoxide. Stoves of this era required one to manually ignite a pilot light with a match; if left unlit, the gas cloud would spread unimpeded. Carbon monoxide poisoning was the proximate cause of death. Natural gas, introduced in the 1960s, is composed of methane, ethane and an odorant added for safety. Suicide rates by domestic gas fell from 1960 to 1980, as changes were made to the gas supply to make it less lethal.
Shooting
In the United States, suicide by firearm is the most lethal method of suicide, resulting in a fatality 90% of the time, and is thus the leading cause of death by suicide as of 2017. Worldwide, firearm prevalence in suicides varies widely, depending on the acceptance and availability of firearms in a culture. The use of firearms in suicides ranges from less than 10% in Australia to 50.5% in the U.S., where it is the most common method of suicide.
Generally, the gun is fired at point-blank range. Surviving a self-inflicted gunshot may result in severe chronic pain as well as reduced cognitive abilities and motor function, subdural hematoma, foreign bodies in the head, pneumocephalus and cerebrospinal fluid leaks. For temporal bone directed bullets, temporal lobe abscess, meningitis, aphasia, hemianopsia, and hemiplegia are common late intracranial complications. As many as 50% of people who survive gunshot wounds directed at the temporal bone suffer facial nerve damage, usually due to a severed nerve.
Gun control
Reducing access to guns at a population level decreases the risk of suicide by firearms.
Fewer people die from suicide overall in places with stricter laws regulating the use, purchase, and trading of firearms. Suicide risk goes up when firearms are more available.
Controlling access to guns is a primary method of reducing suicide among people who live in a home with guns. Prevention measures include simple actions such as locking all firearms in a gun safe or installing gun locks. Some people self-impose a barrier to using the keys to unlock their guns, such as by asking a friend to keep the keys in a different place, or by freezing them in an ice cube. This prevents spur-of-the-moment access to their own guns. Some stores that sell guns provide temporary storage as a service; in other cases, a trusted friend or family member will offer to store the guns until the crisis has passed. When a person is going through a crisis, red flag laws in some places allow family members to petition the courts to have firearms temporarily removed and stored elsewhere.
More firearms are involved in suicides than in homicides in the United States. A 1999 study of California gun mortality found that a person is more likely to die by suicide if they have purchased a firearm, with a measurable increase in suicide by firearm beginning within a week of the purchase and continuing for six years or more.
The United States has both the highest number of suicides and the most firearms in circulation of any developed country, and when gun ownership rises, so too does suicide involving the use of a firearm. A 2004 report by the National Academy of Sciences found an association between estimated household firearm ownership and gun suicide rates, though a study by two Harvard researchers did not find a statistically significant association between household firearms and gun suicide rates, except in the suicides of children aged 5–14. Another study found that gun prevalence rates were positively associated with suicide rates among people aged 15 to 24 and 65 to 84, but not among those aged 25 to 64. Access to firearms is associated with a higher risk of suicide, especially for people keeping loaded guns in the home. Numerous ecological and time-series studies have also shown a positive association between gun ownership rates and suicide rates. This association tends to exist only for firearm-related and overall suicides, not for non-firearm suicides. Studies consistently find a relationship between gun ownership and gun-related suicides, with few exceptions. A 2016 study found a positive association between gun ownership and both gun-related and overall suicides among men; among women, gun ownership was strongly associated only with gun-related suicides. During the 1980s and early 1990s, there was a strong upward trend in adolescent suicides with a gun, as well as a sharp overall increase in suicides among those age 75 and over.
Firearm-related suicides declined in Australia after the introduction of nationwide gun control. The same study found no evidence of substitution to other methods. In Canada, gun suicides declined after gun control, but other methods rose, leading to no change in the overall rates. Similarly, in New Zealand, gun suicides declined after more legislation, but overall suicide rates did not change; this might be due to the highly stringent firearm storage laws and very low prevalence of handgun ownership in New Zealand. A study about Canada found no significant correlations between provincial firearm ownership and overall provincial suicide rates.
Jumping
Jumping is the most common method of suicide in Hong Kong, accounting for 52.1% of all reported suicide cases in 2006, with similar rates in the years before that. The Centre for Suicide Research and Prevention of the University of Hong Kong believes that this may be due to the abundance of easily accessible high-rise buildings in Hong Kong. In the United States, jumping is among the least common methods of suicide (less than 2% of all reported suicides in 2005). In the 75-year period to 2012, there were around 2,000 suicides at the Golden Gate Bridge. Jumping deaths are often impulsive, and one study of the Golden Gate Bridge demonstrated that more than 90% of people interrupted in a suicide attempt ultimately died of natural or accidental causes, with only 6% dying in a subsequent suicide attempt.
Many jumping deaths could be prevented through the construction of fencing or other safety equipment. Suicide by jumping into a volcanic crater, for example, is rare: Mount Mihara in Japan briefly became a notorious suicide site during the Great Depression following media reports of a suicide there, and copycat suicides in the ensuing years prompted the erection of a protective fence around the crater. Similarly, in New Zealand, secure fencing at the Grafton Bridge substantially reduced the rate of suicides. Chest-high barriers are more effective than waist-high barriers because they require more time and effort to climb over.
Constructing barriers is not the only option, and it can be expensive. Other method-specific prevention actions include making staff members visible in high-risk areas, using closed-circuit television cameras to identify people in inappropriate places or behaving abnormally (e.g., lingering in a place where people normally spend little time), and installing awnings and soft-looking landscaping, which deter suicide attempts by making the site appear ineffective.
Another factor in reducing jumping deaths is to avoid suggesting in news articles, signs, or other communication that a high-risk place is a common, appropriate, or effective place to die by jumping. The efficacy of signage is uncertain, and may depend on whether the wording is simple and appropriate.
Cutting and stabbing
A fatal self-inflicted wound to the wrist is termed a deep wrist injury, and is often preceded by several tentative surface-breaking attempts known as hesitation wounds, indicating indecision or a self-harm tactic. For every suicide by wrist cutting, there are many more nonfatal attempts, so that the number of actual deaths using this method is very low.
Wounds from suicide attempts typically involve the non-dominant hand, with damage often done to the median nerve, ulnar nerve, radial artery, palmaris longus muscle, and flexor carpi radialis muscle. Such injuries can severely affect the function of the hand, and the resulting inability to carry out work or interests increases the risk of further attempts.
Seppuku is a form of Japanese ritual suicide by disembowelment. While seppuku was reserved for samurai under their code of honour, a female counterpart also exists (sometimes incorrectly referred to in Western sources as jigai), which involves cutting the jugular vein. While seppuku requires the assistance of another samurai, jigai can be performed alone. Seppuku is painful and slow; neither method is common in the modern day.
Starvation and dehydration
Voluntarily Stopping Eating and Drinking (VSED) is a recognised classification, often resorted to by those with a terminal illness. It involves fasting and dehydration, and has also been referred to as autoeuthanasia. It has been used by assisted-dying activists, such as Wendy Mitchell, as a means of death in places where assisted suicide is not available.
Fasting to death has been used by Hindu, Buddhist, and Jain ascetics and householders, as a ritual method of suicide known as Prayopavesa in Hinduism; Sokushinbutsu historically in Buddhism; and as Sallekhana in Jainism. Cathars also fasted to death after receiving the consolamentum sacrament, in order to die while in a morally perfect state. The method is also used in passive senicide and associated with the political protest of the hunger strike such as the 1981 Irish hunger strike in which ten prisoners died.
Death from dehydration can take from several days to a few weeks. This means that unlike many other suicide methods, it cannot be accomplished impulsively. Those who die by terminal dehydration typically lapse into unconsciousness before death, and may also experience delirium and deranged serum sodium.
Terminal dehydration has been described as having substantial advantages over physician-assisted suicide with respect to self-determination, access, professional integrity, and social implications. Specifically, a patient has a right to refuse treatment and it would be a personal assault for someone to force water on a patient, but such is not the case if a doctor merely refuses to provide lethal medication. But it also has distinctive drawbacks as a humane means of voluntary death. One survey of hospice nurses found that nearly twice as many had cared for patients who chose voluntary refusal of food and fluids to hasten death as had cared for patients who chose physician-assisted suicide. They also rated fasting and dehydration as causing less suffering and pain and being more peaceful than physician-assisted suicide. Other sources note very painful side effects of dehydration, including seizures, skin cracking and bleeding, blindness, nausea, vomiting, cramping and severe headaches.
Collision with or of a vehicle
Another suicide method is to lie down, or throw oneself, in the path of a fast-moving vehicle, either on the road or onto railway tracks. Nonfatal attempts may result in profound injuries, such as multiple bone fractures, amputations, concussion and severe mental and physical handicapping.
Road
Some people use intentional car crashes as a suicide method. This especially applies to single-occupant, single-vehicle wrecks, although some suicidal drivers cause head-on collisions. Even single-vehicle collisions may harm other road users; for example, a driver who brakes abruptly or swerves to avoid a suicidal person may collide with something else on the road, resulting in harm to the driver or others. Both the innocent driver and bystanders may be traumatized by the experience, even if everyone survives. Being victimized by a suicidal pedestrian is recognized as an occupational hazard for professional drivers, especially if they operate heavy vehicles.
The real percentage of suicides among motor vehicle fatalities is not reliably known and likely varies with the ease of accessing a car and the ease of accessing other methods. Suicidal intent is often inferred from the circumstances, such as the driver being alone in the vehicle, driving at high speed, without normal use of a seat belt, under circumstances that do not normally result in fatal wrecks (e.g., a straight road and good weather conditions). Somewhere between 1% and 10% of all crashes (fatal and non-fatal combined) likely result from suicidal intent. In addition to a vehicle being used as a method (e.g., deliberately causing a wreck), a vehicle may be the location of a suicide attempt using another method (e.g., while the suicidal person is inside a parked car).
People who attempt vehicular suicide or murder–suicides tend to be adult men who recently experienced a stressful event. They tend to be impulsive, to have previously attempted suicide, and to have a history of reckless driving. Suicidal drivers are unlikely to be drunk at the time, though in the case of vehicle–pedestrian collisions, it may be difficult to determine whether an intoxicated pedestrian had suicidal intent or was non-suicidal but was so drunk as to be unable to recognize and respond to a dangerous situation.
Rail
Air
Toward the end of the 20th century, one or two pilots in the US died by suicide by aircraft each year. The pilot was usually flying alone at the time, and about half were using alcohol or drugs. In the rare case of a pilot engaging in murder–suicide, the number of innocent people killed can be very high. On 24 March 2015, a Germanwings co-pilot deliberately crashed Germanwings Flight 9525 into the French Alps, killing all 150 people aboard, including himself. Suicide by pilot has also been proposed as a potential cause of the disappearance and subsequent destruction of Malaysia Airlines Flight 370 in 2014, with supporting evidence found in a flight simulator application used by the flight's pilot.
Disease
There have been documented cases of gay men deliberately trying to contract a disease such as HIV/AIDS as a means of suicide.
Electrocution
Suicide by electrocution involves using a lethal electric shock and is a rarely used method. The shock causes cardiac arrhythmias, meaning that the chambers of the heart no longer contract in synchrony, effectively eliminating blood flow. Depending on the current, burns may also occur.
Fire
Self-immolation is suicide, usually by fire. This method is rare because death is prolonged and painful. If the attempt is interrupted, severe burns and scarring result, with subsequent emotional impact.
It has been used as a protest tactic, by Thích Quảng Đức in 1963 to protest South Vietnam's anti-Buddhist policies; by Malachi Ritscher in 2006 to protest the United States' involvement in the Iraq War; by Mohamed Bouazizi in 2011 in Tunisia, which gave rise to the Tunisian Revolution; by Aaron Bushnell in 2024 to protest the United States' support for Israel in the Israel–Hamas war; and historically as a ritual known as sati, in which a Hindu widow would immolate herself on her husband's funeral pyre.
Hypothermia
Hypothermia is a rare method of suicide. Between 1991 and 2014 in the United States, there were eight cases in the scientific literature, and they usually involved some other factor like drugs.
Assisted suicide
Indirect
Indirect suicide is the act of setting out on an obviously fatal course without directly carrying out the act upon oneself. Indirect suicide is differentiated from legally defined suicide by the fact that the person does not directly cause the action meant to kill them, but rather expects and allows the action to happen to them. Examples of indirect suicide include a soldier enlisting in the army with the intention and expectation of being killed in combat, or provoking an armed law enforcement officer into using lethal force against them. The latter is generally called "suicide by cop".
Evidence exists for suicide by capital crime in colonial Australia. Convicts seeking to escape their brutal treatment would murder another individual. This was felt necessary due to a religious taboo against direct suicide. A person completing suicide was believed to be destined for hell, whereas a person committing murder could be absolved of their sins before execution. In its most extreme form, groups of prisoners on the extremely brutal penal colony of Norfolk Island would form suicide lotteries. Prisoners would draw straws with one prisoner murdering another. The remaining participants would witness the crime, and would be sent away to Sydney, as capital trials could not be held on Norfolk Island, thus earning a break from the Island. There is uncertainty as to the extent of suicide lotteries. While surviving contemporary accounts claim that the practice was common, such claims are probably exaggerated.
Rituals
Ritual suicide is performed in a specifically prescribed way, often as part of a cultural or religious practice. Suicide by hanging was traditionally practiced in China and the Sinosphere as a means of ensuring that one's ghost would be able to haunt and torment the powerful but unjust. Self-immolation was practiced similarly in India and spread with Dharmic religions. Some forms of suicide involve or are understood as martyrdom and are undertaken ritualistically. Sallekhana is the practice of ritualized starvation following Jain practices. Romans who considered themselves dishonored would "fall on their sword", ritualistically transfixing themselves on their swords; the similar medieval Japanese practice became known as seppuku or harakiri for samurai. Female ritual suicide (incorrectly referred to in some English sources as jigai) was carried out in Japan by wives of samurai who had committed seppuku or otherwise brought dishonour.
Crab Pulsar
The Crab Pulsar (PSR B0531+21 or Baade's Star) is a relatively young neutron star. The star is the central star in the Crab Nebula, a remnant of the supernova SN 1054, which was widely observed on Earth in the year 1054. Discovered in 1968, the pulsar was the first to be connected with a supernova remnant.
The Crab Pulsar is one of very few pulsars to be identified optically. The optical pulsar is roughly 20 km in diameter and has a rotational period of about 33 milliseconds; that is, the pulsar "beams" perform about 30 revolutions per second. The outflowing relativistic wind from the neutron star generates synchrotron emission, which produces the bulk of the emission from the nebula, seen from radio waves through to gamma rays. The most dynamic feature in the inner part of the nebula is the point where the pulsar's equatorial wind slams into the surrounding nebula, forming a termination shock. The shape and position of this feature shift rapidly, with the equatorial wind appearing as a series of wisp-like features that steepen, brighten, then fade as they move away from the pulsar into the main body of the nebula. The period of the pulsar's rotation is increasing by 38 nanoseconds per day due to the large amounts of energy carried away in the pulsar wind.
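As a quick consistency check, the 33 ms period and the 38 ns/day slowdown quoted above can be converted into a spin frequency and a spin-down rate. The sketch below simply restates the article's figures in code; the rounded period makes the result approximate.

```python
# Spin-down of the Crab Pulsar from the figures quoted in this article.
P = 0.033               # rotation period in seconds (~33 ms)
dP_dt = 38e-9 / 86400   # period increase of 38 ns per day, as s/s

f = 1 / P               # spin frequency: ~30 revolutions per second
df_dt = -dP_dt / P**2   # frequency derivative, since f = 1/P

print(f"spin frequency: {f:.1f} Hz")        # ~30 Hz
print(f"spin-down rate: {df_dt:.1e} Hz/s")  # ~ -4e-10 Hz/s
```

The magnitude, a few times 10^-10 Hz/s, agrees with the slow-down rate given in the discussion of the spin-down limit below.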
The Crab Nebula is often used as a calibration source in X-ray astronomy. It is very bright in X-rays, and the flux density and spectrum are known to be constant, with the exception of the pulsar itself. The pulsar provides a strong periodic signal that is used to check the timing of the X-ray detectors. In X-ray astronomy, "crab" and "millicrab" are sometimes used as units of flux density. A millicrab corresponds to a flux density of about () in the 2–10 keV X-ray band, for a "crab-like" X-ray spectrum, which is roughly a power law in photon energy: I(E) ∝ E^(−1.1).
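To make the unit concrete, here is a small conversion sketch. The reference flux for one crab is an assumption on my part, a commonly quoted round figure, since the exact value is elided in the text above.

```python
# Express a 2-10 keV X-ray flux density in crab/millicrab units.
# ASSUMPTION: 1 crab ~ 2.4e-8 erg s^-1 cm^-2 in the 2-10 keV band
# (a commonly quoted figure, not taken from this article).
CRAB_FLUX = 2.4e-8  # erg s^-1 cm^-2

def flux_in_millicrab(flux_erg_s_cm2: float) -> float:
    """Convert a 2-10 keV flux density into millicrab."""
    return 1000.0 * flux_erg_s_cm2 / CRAB_FLUX

# Example: a source detected at 1.2e-10 erg s^-1 cm^-2
print(f"{flux_in_millicrab(1.2e-10):.1f} mCrab")  # -> 5.0 mCrab
```

Because the unit is defined as a ratio to the Crab's flux, the conversion itself is trivial; the assumed reference value is the only physical input.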
Very few X-ray sources ever exceed one crab in brightness.
Pulsed emission up to 1.5 TeV has been detected from the Crab Pulsar. The only other pulsar known to emit at such high energies is the Vela Pulsar, with pulsed emission detected up to 20 TeV.
History of observation
The Crab Nebula was identified as the remnant of SN 1054 by 1939. Astronomers then searched for the nebula's central star.
There were two candidates, referred to in the literature as the "north following" and "south preceding" stars. In September 1942, Walter Baade ruled out the "north following" star but found the evidence inconclusive for the "south preceding" star.
Rudolf Minkowski, in the same issue of The Astrophysical Journal as Baade, advanced spectral arguments claiming that the "evidence admits, but does not prove, the conclusion that the south preceding star is the central star of the nebula".
In late 1968, David H. Staelin and Edward C. Reifenstein III reported the discovery of two rapidly varying radio sources "near the crab nebula that could be coincident with it" using the Green Bank radio antenna. They were given the designations NP 0527 and NP 0532. The 33-millisecond period and the location of the Crab Nebula pulsar NP 0532 were discovered by Richard V. E. Lovelace and collaborators on 10 November 1968, at the Arecibo Radio Observatory. The discovery of a pulsar with such a short period proved that pulsars are rotating neutron stars, not pulsating white dwarfs as many scientists had suggested. Soon after the discovery of the Crab Pulsar, David Richards discovered (using the Arecibo Telescope) that it spins down and, therefore, loses rotational energy. Thomas Gold showed that the pulsar's spin-down power is sufficient to power the Crab Nebula.
A subsequent study by them, including William D. Brundage, also found that the NP 0532 source is located at the Crab Nebula. A radio source was also reported coincident with the Crab Nebula in late 1968 by L. I. Matveenko in Soviet Astronomy.
Optical pulsations were first reported by Cocke, Disney and Taylor using the telescope on Kitt Peak of the Steward Observatory of the University of Arizona. This observation had an audio tape recording the pulses and this tape also recorded the voices of John Cocke, Michael Disney and Bob McCallister (the night assistant) at the time of the discovery. Their discovery was confirmed by Nather, Warner and Macfarlane.
Jocelyn Bell Burnell, who co-discovered the first pulsar PSR B1919+21 in 1967, relates that in the late 1950s a woman viewed the Crab Nebula source at the University of Chicago's telescope, then open to the public, and noted that it appeared to be flashing. The astronomer she spoke to, Elliot Moore, disregarded the effect as scintillation, despite the woman's protestation that as a qualified pilot she understood scintillation and this was something else. Bell Burnell notes that the 30 Hz frequency of the Crab Nebula optical pulsar is difficult for many people to see.
In 2007, it was reported that Charles Schisler detected a celestial source of radio emission in 1967 at the location of the Crab Nebula, using a United States Air Force radar system in Alaska designed as an early warning system to detect intercontinental ballistic missiles. This source was later understood by Schisler to be the Crab Pulsar, after the news of Bell Burnell's initial pulsar discoveries was reported. However, Schisler's detection was not reported publicly for four decades due to the classified nature of the radar observations.
The Crab Pulsar was the first pulsar for which the spin-down limit was broken, using several months of data from the LIGO observatory. Most pulsars do not rotate at a constant rotation frequency but can be observed to slow down at a very slow rate (3.7×10⁻¹⁰ Hz/s in the case of the Crab). This spin-down can be explained as a loss of rotational energy due to various mechanisms. The spin-down limit is a theoretical upper limit on the amplitude of the gravitational waves that a pulsar can emit, assuming that all the energy losses are converted to gravitational waves. The absence of gravitational waves at the expected amplitude and frequency (after correcting for the expected Doppler shift) proves that other mechanisms must be responsible for the loss of energy. The non-observation so far is not totally unexpected, since physical models of the rotational symmetry of pulsars put a more realistic upper limit on the amplitude of gravitational waves several orders of magnitude below the spin-down limit. It is hoped that, with improvements in the sensitivity of gravitational-wave instruments and the use of longer stretches of data, gravitational waves emitted by pulsars will be observed in the future. The only other pulsar for which the spin-down limit has so far been broken is the Vela Pulsar.
In 2019 the Crab Nebula, and presumably therefore the Crab Pulsar, was observed to emit gamma rays in excess of 100 TeV, making it the first identified source of ultra-high-energy gamma rays.
In 2023, very-long-baseline interferometry (VLBI) was used to conduct precision astrometry using the radio giant-pulse emission of the Crab Pulsar, thus measuring a precise distance to the Crab Pulsar.
Ground hornbill
The ground hornbills (Bucorvidae) are a family of the order Bucerotiformes, with a single genus Bucorvus and two extant species. The family is endemic to sub-Saharan Africa: the Abyssinian ground hornbill occurs in a belt from Senegal east to Ethiopia, and the southern ground hornbill occurs in southern and East Africa.
Ground hornbills are large, with adults around a metre tall. Both species are ground-dwelling, unlike other hornbills. Also unlike most other hornbills, they are carnivorous and feed on insects, snakes, other birds, amphibians and even tortoises. They are among the longest-lived of all birds, and the larger southern species is possibly the slowest-breeding (triennially) and longest-lived of all birds.
Taxonomy
The genus Bucorvus was introduced, originally as a subgenus, by the French naturalist René Lesson in 1830 with the Abyssinian ground hornbill Bucorvus abyssinicus as the type species. The generic name is derived from the name of the genus Buceros introduced by Carl Linnaeus in 1758 for the Asian hornbills where corvus is the Latin word for a "raven".
A molecular phylogenetic study published in 2013 found that the genus Bucorvus was sister to the rest of the hornbills.
The genus Bucorvus contains two species: the Abyssinian ground hornbill (Bucorvus abyssinicus) and the southern ground hornbill (Bucorvus leadbeateri).
A prehistoric ground hornbill, Bucorvus brailloni, has been described from fossil bones in Morocco, suggesting that prior to Quaternary glaciations the genus was either much more widespread or differently distributed.
It is currently thought that the ground hornbills, along with Tockus and Tropicranus, are almost exclusively carnivorous and lack the gular pouch that allows other, less closely related hornbill genera to store fruit.
Black swallower
The black swallower (Chiasmodon niger) is a species of deep sea fish in the family Chiasmodontidae. It is known for its ability to swallow fish larger than itself.
It has a worldwide distribution in tropical and subtropical waters, in the mesopelagic and bathypelagic zones at a depth of . It is a very common and widespread ocean fish; of its genus, it is the most common species in the North Atlantic.
Description
The black swallower is a small fish, averaging between , with a maximum known length of . The body is elongated and compressed, without scales, and is a uniform brownish-black in color. Its head is long, with a blunt snout, moderately sized eyes, and a large mouth. The lower jaw protrudes past the upper; both jaws are lined with a single row of sharp, depressible teeth, which interlock when the mouth is closed. The first three teeth in each jaw are enlarged into canines.
A small lower spine occurs on the preoperculum. The pectoral fins are long, with 12–14 (usually 13) rays; the pelvic fins are small and contain five rays. Of the two dorsal fins, the first is spiny with 10–12 spines, and the second is longer with one spine and 26–29 soft rays. The anal fin contains one spine and 26–29 soft rays. The caudal fin is forked with 9 rays. The lateral line is continuous with two pores per body segment.
Dentition
The black swallower has a unique set of teeth. Specifically, it has fangs that are incurved and almost fully straight. Usually, the fish has multiple fangs, of which the second tends to be the largest. The black swallower also has mobile fangs, meaning the fangs are loose in their sockets, which can help in damaging and chewing prey.
Feeding
The black swallower feeds on bony fish and cephalopods, which are swallowed whole. With its greatly distensible stomach, it is capable of swallowing prey over twice its length and 10 times its mass. Its upper jaws are articulated with the skull at the front via the suspensorium, which allows the jaws to swing down and encompass objects larger than the swallower's head. Theodore Gill speculated that the swallower seizes prey fish by the tail, and then "walks" its jaws over the prey until it is fully coiled inside the stomach.
Black swallowers have been found to have swallowed fish so large that they could not be digested before decomposition set in, and the resulting release of gases forced the swallower to the ocean surface. This is, in fact, how most known specimens came to be collected. In 2007, a black swallower measuring long was found dead off Grand Cayman. Its stomach contained a snake mackerel (Gempylus serpens) long, or four and a half times its own length.
Reproduction
Reproduction is oviparous; the eggs are pelagic and measure in diameter and contain a clear oil globule and six dark pigment patches, which become distributed along the newly hatched larva from in front of the eyes to the tip of the notochord. These patches eventually disappear and the body darkens overall to black. The eggs are mostly found in winter off South Africa; juveniles have been found from April to August off Bermuda.
The larvae and juveniles are covered in small, projecting spinules.
Ji (polearm)
The ji was a Chinese polearm, sometimes translated into English as spear or halberd, though they are conceptually different weapons. They were used in one form or another for over 3000 years, from at least as early as the Zhou dynasty until the end of the Qing dynasty. They are still used for training in many Chinese martial arts.
History
The ji was initially a hybrid between a spear and a dagger-axe. It was a relatively common infantry weapon in Ancient China, and was also used by cavalry and charioteers.
In the Song dynasty, several weapons were referred to as ji but developed from spears, not from ancient ji. One variety was called the qinglong ji (), and had a spear tip with a crescent blade on one side. Another type was the fangtian ji (), which had a spear tip with crescent blades on both sides. They had multiple means of attack: the side blade or blades, the spear tip, plus often a rear counterweight that could be used to strike the opponent. The way the side blades were fixed to the shaft differs, but usually there were empty spaces between the pole and the side blade. The wielder could strike with the shaft, pulling the weapon back to hook with a side blade; or, he could slap his opponent with the flat side of the blade to knock him off his horse.
Popular legend
One of the earliest known appearances of the ji in the historical record is the fangtian huaji ("painted heavenly halberd") attributed to the warrior Lü Bu in the Romance of the Three Kingdoms. It is unknown whether the peculiarity of his weapon was a literary device used by Luo Guanzhong, the author. Since this novel was based on earlier depictions of the era during the Song dynasty, the Song-era polearm may have been named based on its similarity to, or in honor of, the weapon attributed to Lü Bu in this famous novel. This would be comparable to the famous semi-mythological origin story of the yanyuedao (lit. "reclining moon blade"), the weapon wielded by Guan Yu, another character from the novel and a real historical person. The first historical or archaeological evidence of this polearm comes from an 11th-century illustration in the military manual Wujing Zongyao. The yanyuedao came to be known as the guandao after its invention was anachronistically attributed to Guan Yu himself, due to his wielding the weapon throughout the Romance.
Gallery
Spontaneous magnetization
Spontaneous magnetization is the appearance of an ordered spin state (magnetization) at zero applied magnetic field in a ferromagnetic or ferrimagnetic material below a critical point called the Curie temperature, T_C.
Overview
Heated to temperatures above T_C, ferromagnetic materials become paramagnetic and their magnetic behavior is dominated by spin waves or magnons, which are bosonic collective excitations with energies in the meV range. The magnetization that occurs below T_C is an example of the "spontaneous" breaking of a global symmetry, a phenomenon described by Goldstone's theorem. The term "symmetry breaking" refers to the choice of a magnetization direction by the spins, which have spherical symmetry above T_C but a preferred axis (the magnetization direction) below T_C.
Temperature dependence
To a first-order approximation, the temperature dependence of spontaneous magnetization at low temperatures is given by the Bloch T^(3/2) law:

$$M(T) = M(0)\left[1 - \left(\frac{T}{T_C}\right)^{3/2}\right],$$

where M(0) is the spontaneous magnetization at absolute zero. The decrease in spontaneous magnetization at higher temperatures is caused by the increasing excitation of spin waves. In a particle description, the spin waves correspond to magnons, which are the massless Goldstone bosons corresponding to the broken symmetry. This is exactly true for an isotropic magnet.
Magnetic anisotropy, that is, the existence of an easy direction along which the moments align spontaneously in the crystal, corresponds however to "massive" magnons. This is a way of saying that they cost a minimum amount of energy to excite; hence they are very unlikely to be excited as T → 0. Hence the magnetization of an anisotropic magnet is harder to destroy at low temperature, and the temperature dependence of the magnetization deviates accordingly from the Bloch T^(3/2) law. All real magnets are anisotropic to some extent.
Near the Curie temperature,

$$M(T) \propto (T_C - T)^{\beta},$$

where β is a critical exponent that depends on the universality class of the magnetic interaction. Experimentally, the exponent is 0.34 for iron and 0.51 for nickel.
An empirical interpolation of the two regimes is given by

$$\frac{M(T)}{M(0)} = \left[1 - \left(\frac{T}{T_C}\right)^{3/2}\right]^{\beta};$$

it is easy to check the two limits of this interpolation: for T ≪ T_C it follows a law similar to the Bloch law, and for T → T_C it follows the critical behavior.
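As a numerical illustration of the two limits, the sketch below evaluates the interpolation formula. The Curie temperature and exponent are illustrative, roughly iron-like inputs chosen for the example, not values derived from this article.

```python
import numpy as np

def magnetization_ratio(T, Tc=1043.0, beta=0.34):
    """Empirical interpolation M(T)/M(0) = [1 - (T/Tc)^(3/2)]^beta.

    Tc and beta are illustrative, roughly iron-like example inputs.
    """
    reduced = 1.0 - (np.asarray(T, dtype=float) / Tc) ** 1.5
    return np.clip(reduced, 0.0, None) ** beta  # M = 0 at and above Tc

# Low-temperature limit: expands to 1 - beta*(T/Tc)**1.5, a Bloch-like law.
print(magnetization_ratio(10.0))    # ~0.9997, a tiny spin-wave correction
# Near Tc the ratio vanishes as (Tc - T)**beta, the critical behavior.
print(magnetization_ratio(1040.0))  # ~0.16, falling steeply toward zero
```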
El Ferdan Railway Bridge
The El Ferdan Railway Bridge is a swing bridge that spans the western shipping lane of the Suez Canal near Ismailia, Egypt. It is the longest swing bridge in the world, with a span of .
The bridge was out of service for several years because the expansion of the Suez Canal, completed in 2015, added a parallel shipping lane just east of the existing bridge, cutting off the railway into Sinai. A second swing bridge spanning the new eastern shipping lane was under construction as of 2021 and began operating in 2024.
History
The first El Ferdan Railway Bridge over the Suez Canal was completed in April 1918 for the Sinai Military Railway. It was considered a hindrance to shipping so after the First World War it was removed. A steel swing bridge was built in 1942 (during the Second World War), but this was damaged by a steamship and removed in 1947. A double swing bridge was completed in 1954 but the 1956 Anglo-Franco-Israeli war with Egypt severed rail traffic across the canal for a third time. A replacement bridge was completed in 1963 which was destroyed in 1967 in the Six-Day War by the Egyptian engineering General Ahmed Hamdy.
In July 1996, a consortium led by the German firm Krupp was awarded a US$70 million contract to design and build the bridge, later raised to $80 million to increase the main span from .
The current bridge was constructed in 2001.
Significant developments in the region
The El Ferdan Railway Bridge was part of a major drive to develop the areas surrounding the Suez Canal, including other projects such as the Ahmed Hamdi Tunnel under the Suez Canal (completed in 1983), the Suez Canal overhead powerline crossing, and the Suez Canal Bridge (completed in 2001, roughly 12 miles north of the El Ferdan Railway Bridge).
The bridge today: plans for upgrading tracks and building a new bridge
The parallel New Suez Canal was excavated in 2014/2015 a short distance to the east but without a bridge spanning it. Without a second bridge, the railway across El Ferdan bridge is a dead end.
Initially, there was a plan to construct a new railway tunnel in the Ismailia region (with another planned near Port Said) in order to reconnect the Sinai to the rest of Egypt's rail network. However, Kamel ElWazir, then head of the Egyptian Armed Forces' Engineering Corps, announced that due to high costs the plans for a new tunnel would be scrapped and that the Engineering Corps would seek other alternatives, including moving the existing bridge to a narrow section of the canal at El-Qantara.
However, the final decision was made in 2017 to keep the existing bridge at its current location and build a new double-track railway bridge (based on the current El Ferdan bridge design) across the new, eastern shipping lane several hundred meters to the east. The existing bridge over the western shipping lane would be converted from a single-track railway to a double-track railway. This was near completion in October 2023.
Shifter (bicycle part)
A bicycle shifter, gear control, or gear lever is a component used to control the gearing mechanism and select the desired gear ratio. Typically, shifters operate either a derailleur mechanism or an internal hub gear mechanism. In either case, the control is operated by moving a cable that connects the shifter to the gear mechanism.
Location
Traditionally, shifters were mounted on the down tube of the frame or on the stem. For ergonomic reasons, they tend to be located somewhere on the handlebars on modern bicycles.
Mechanisms
There are various types of shifter:
Grip shifter - a collar with click stops surrounding the handlebar is twisted until the desired gear is reached, typically one gear at a time
Trigger shifter - a lever is pulled or pushed to change gears one at a time
Thumb shifter
Road bike shifter - integrated with brake levers, sometimes known as a "brifter".
In 1990, Shimano introduced its Shimano Total Integration (STI) shifting levers for road bicycles; this was an indexed shifting system and the first to integrate shifting with the brake levers. Campagnolo soon followed with its ErgoPower system. The SRAM Double Tap was introduced in 2005.
Seedling
A seedling is a young sporophyte developing out of a plant embryo from a seed. Seedling development starts with germination of the seed. A typical young seedling consists of three main parts: the radicle (embryonic root), the hypocotyl (embryonic shoot), and the cotyledons (seed leaves). The two classes of flowering plants (angiosperms) are distinguished by their numbers of seed leaves: monocotyledons (monocots) have one blade-shaped cotyledon, whereas dicotyledons (dicots) possess two round cotyledons. Gymnosperms are more varied. For example, pine seedlings have up to eight cotyledons. The seedlings of some flowering plants have no cotyledons at all. These are said to be acotyledons.
The plumule is the part of a seed embryo that develops into the shoot bearing the first true leaves of a plant. In most seeds, for example the sunflower, the plumule is a small conical structure without any leaf structure. Growth of the plumule does not occur until the cotyledons have grown above ground. This is epigeal germination. However, in seeds such as the broad bean, a leaf structure is visible on the plumule in the seed. These seeds develop by the plumule growing up through the soil with the cotyledons remaining below the surface. This is known as hypogeal germination.
Photomorphogenesis and etiolation
Dicot seedlings grown in the light develop short hypocotyls and open cotyledons exposing the epicotyl. This is also referred to as photomorphogenesis. In contrast, seedlings grown in the dark develop long hypocotyls and their cotyledons remain closed around the epicotyl in an apical hook. This is referred to as skotomorphogenesis or etiolation. Etiolated seedlings are yellowish in color as chlorophyll synthesis and chloroplast development depend on light. They will open their cotyledons and turn green when treated with light.
In a natural situation, seedling development starts with skotomorphogenesis while the seedling is growing through the soil and attempting to reach the light as fast as possible. During this phase, the cotyledons are tightly closed and form the apical hook to protect the shoot apical meristem from damage while pushing through the soil. In many plants, the seed coat still covers the cotyledons for extra protection.
Upon breaking the surface and reaching the light, the seedling's developmental program is switched to photomorphogenesis. The cotyledons open upon contact with light (splitting the seed coat open, if still present) and become green, forming the first photosynthetic organs of the young plant. Until this stage, the seedling lives off the energy reserves stored in the seed. The opening of the cotyledons exposes the shoot apical meristem and the plumule consisting of the first true leaves of the young plant.
The seedlings sense light through the light receptors phytochrome (red and far-red light) and cryptochrome (blue light). Mutations in these photoreceptors and their signal transduction components lead to seedling development that is at odds with light conditions, for example seedlings that show photomorphogenesis when grown in the dark.
Seedling growth and maturation
Once the seedling starts to photosynthesize, it is no longer dependent on the seed's energy reserves. The apical meristems start growing and give rise to the root and shoot. The first "true" leaves expand and can often be distinguished from the round cotyledons through their species-dependent distinct shapes. While the plant is growing and developing additional leaves, the cotyledons eventually senesce and fall off. Seedling growth is also affected by mechanical stimulation, such as by wind or other forms of physical contact, through a process called thigmomorphogenesis.
Temperature and light intensity interact as they affect seedling growth; at low light levels of about 40 lumens/m², a day/night temperature regime of 28 °C/13 °C is effective (Brix 1972). A photoperiod shorter than 14 hours causes growth to stop, whereas a photoperiod extended with low light intensities to 16 hours or more brings about continuous (free) growth. Little is gained by using more than 16 hours of low light intensity once seedlings are in the free growth mode. Long photoperiods using high light intensities from 10,000 to 20,000 lumens/m² increase dry matter production, and increasing the photoperiod from 15 to 24 hours may double dry matter growth (Pollard and Logan 1976, Carlson 1979).
The effects of carbon dioxide enrichment and nitrogen supply on the growth of white spruce and trembling aspen were investigated by Brown and Higginbotham (1986). Seedlings were grown in controlled environments with ambient or enriched atmospheric CO2 (350 or 750 μL/L, respectively) and with nutrient solutions of high, medium, and low N content (15.5, 1.55, and 0.16 mM). Seedlings were harvested, weighed, and measured at intervals of less than 100 days. N supply strongly affected biomass accumulation, height, and leaf area of both species. In white spruce only, the root weight ratio (RWR) was significantly increased under the low-nitrogen regime. CO2 enrichment for 100 days significantly increased the leaf and total biomass of white spruce seedlings in the high-N regime, RWR of seedlings in the medium-N regime, and root biomass of seedlings in the low-N regime.
First-year seedlings typically have high mortality rates, drought being the principal cause, with roots having been unable to develop enough to maintain contact with soil sufficiently moist to prevent the development of lethal seedling water stress. Somewhat paradoxically, however, Eis (1967a) observed that on both mineral and litter seedbeds, seedling mortality was greater in moist habitats (alluvium and Aralia–Dryopteris) than in dry habitats (Cornus–Moss). He commented that in dry habitats after the first growing season surviving seedlings appeared to have a much better chance of continued survival than those in moist or wet habitats, in which frost heave and competition from lesser vegetation became major factors in later years. The annual mortality documented by Eis (1967a) is instructive.
Pests and diseases
Seedlings are particularly vulnerable to attack by pests and diseases and can consequently experience high mortality rates. Diseases which are especially damaging to seedlings include damping off. Pests which are especially damaging to seedlings include cutworms, pillbugs, slugs and snails.
Transplanting
Seedlings are generally transplanted when the first pair of true leaves appears. This is often known as pricking out in the UK. A shade may be provided if the area is arid or hot. A commercially available vitamin hormone concentrate, which may contain thiamine hydrochloride, 1-naphthaleneacetic acid, and indole butyric acid, may be used to avoid transplant shock.
Images
Rhynchocephalia
Rhynchocephalia is an order of lizard-like reptiles that includes only one living species, the tuatara (Sphenodon punctatus) of New Zealand. Despite its current lack of diversity, during the Mesozoic rhynchocephalians were a speciose group with high morphological and ecological diversity. The oldest record of the group is dated to the Middle Triassic around 238 to 240 million years ago, and they had achieved global distribution by the Early Jurassic. Most rhynchocephalians belong to the group Sphenodontia ('wedge-teeth'). Their closest living relatives are lizards and snakes in the order Squamata, with the two orders being grouped together in the superorder Lepidosauria.
Once representing the world's dominant group of small reptiles, many of the niches occupied by lizards today were held by rhynchocephalians during the Triassic and Jurassic. Rhynchocephalians underwent a great decline during the Cretaceous, and they had disappeared almost entirely by the beginning of the Cenozoic. While the modern tuatara is primarily insectivorous and carnivorous, the diversity of the group also included the herbivorous eilenodontines, and there were other rhynchocephalians with specialised ecologies like the durophagous sapheosaurs. There were even successful groups of aquatic sphenodontians, such as the pleurosaurs.
History
Tuatara were originally classified as agamid lizards when they were first described by John Edward Gray in 1831. They remained misclassified until 1867, when Albert Günther of the British Museum noted features similar to birds, turtles, and crocodiles. He proposed the order Rhynchocephalia (from Ancient Greek rhynchos 'beak' and kephalē 'head', meaning "beak head") for the tuatara and its fossil relatives. In 1925, Samuel Wendell Williston proposed the Sphenodontia to include only tuatara and their closest fossil relatives. Sphenodon is derived from Ancient Greek sphēn 'wedge' and odōn 'tooth'. Many disparately related species were subsequently added to the Rhynchocephalia, resulting in what taxonomists call a "wastebasket taxon". These include the superficially similar (both in shape and name) but unrelated rhynchosaurs, which lived in the Triassic. Studies in the 1970s and 1980s demonstrated that rhynchosaurs were unrelated, with computer-based cladistic analysis conducted in the 1980s providing a robust diagnosis for the definition of the group.
Anatomy
Rhynchocephalia and their sister group Squamata (which includes lizards, snakes and amphisbaenians) belong to the superorder Lepidosauria, the only surviving taxon within Lepidosauromorpha.
Squamates and rhynchocephalians have a number of shared traits (synapomorphies), including fracture planes within the tail vertebrae allowing caudal autotomy (loss of the tail when threatened), transverse cloacal slits, an opening in the pelvis known as the thyroid fenestra, the presence of extra ossification centres in the limb bone epiphyses, a knee joint where a lateral recess on the femur allows the articulation of the fibula, the development of a sexual segment of the kidney, and a number of traits of the foot bones, including a fused astragalocalcaneum and an enlarged fourth distal tarsal, which creates a new joint, along with a hooked fifth metatarsal.
Like some lizards, the tuatara possesses a parietal eye (also called a pineal eye or a third eye) covered by scales at the top of the head formed by the parapineal organ, with an accompanying hole in the skull roof enclosed by the parietal bones, dubbed the "pineal foramen", which is also present in fossil rhynchocephalians. The parietal eye detects light (though it is probably not capable of detecting movement or forming images), monitoring the day-night and seasonal cycles, helping to regulate the circadian rhythm, among other functions. While parietal eyes were widespread among early vertebrates, including early reptiles, they have been lost among most living groups.
Rhynchocephalians are distinguished from squamates by a number of traits, including the retention of gastralia (rib-like bones present in the belly of the body, ancestrally present in tetrapods and also present in living crocodilians). Unlike squamates, but similar to the majority of birds, the tuatara lacks a penis. This is a secondary loss, as a penis or squamate-like hemipenes were probably present in the last common ancestor of rhynchocephalians and squamates.
The complete lower temporal bar (formed by the fusion of the jugal and quadrate/quadratojugal bones of the skull) of the tuatara, often historically asserted to be a primitive feature retained from earlier reptiles, is actually a derived feature among sphenodontians, with primitive lepidosauromorphs and many rhynchocephalians, including the most primitive ones, having an open lower temporal fenestra without a temporal bar. While often lacking a complete temporal bar, the vast majority of rhynchocephalians have a posteriorly directed process (extension) of the jugal bone. All known rhynchocephalians lack the splenial bone present in the lower jaw of more primitive reptiles, and the skulls of all members of Sphenodontia lack lacrimal bones. The majority of rhynchocephalians also have fused frontal bones of the skull. While early rhynchocephalians possessed a tympanic membrane in the ear and a corresponding quadrate conch, similar to those found in lizards, these have been lost in the tuatara and likely other derived rhynchocephalians. This loss may be connected to the development of back-and-forth motion of the lower jaw.
The dentition of most rhynchocephalians, including the tuatara, is described as acrodont, which is associated with the condition of the teeth being attached to the crest of the jaw bone, lacking tooth replacement and having extensive bone growth fusing the teeth to the jaws resulting in the boundary between the teeth and bone being difficult to discern. This differs from the condition found in most lizards (except acrodontans), which have pleurodont teeth which are attached to the shelf on the inward-facing side of the jaw, and are replaced throughout life. The teeth of the tuatara have no roots, though the teeth of some other rhynchocephalians possess roots. The acrodont dentition appears to be a derived character of rhynchocephalians not found in more primitive lepidosauromorphs. The most primitive rhynchocephalians have either pleurodont teeth or a combination of both pleurodont front and acrodont posterior teeth. Some rhynchocephalians differ from these conditions, with Ankylosphenodon having superficially acrodont teeth that continue deeply into the jaw bone, and are fused to the bone at the base of the socket (ankylothecodont). In many derived sphenodontians, the premaxillary teeth at the front of the upper jaw are merged into a large chisel-like structure.
Rhynchocephalians possess palatal dentition (teeth present on the bones of the roof of the mouth). Palatal teeth are ancestrally present in tetrapods, but have been lost in many groups. The earliest rhynchocephalians had teeth present on the palatine, vomer and pterygoid bones, though the vomer and/or the pterygoid teeth are lost in some groups, including the living tuatara, which only has palatine teeth. A distinctive character found in all rhynchocephalians is the enlargement of the tooth row present on the palatine bones. While in other rhynchocephalians the palatine tooth row is oblique to the teeth of the maxilla, in members of Sphenodontinae (including the tuatara) and Eilenodontinae it is orientated parallel to the maxilla. In these groups, during biting, the teeth of the dentary in the lower jaw slot between the maxillary and palatine tooth rows. This arrangement, which is unique among amniotes, permits three point bending of food items, and in combination with propalinal movement (back and forward motion of the lower jaw) allows for a shearing bite.
The body size of rhynchocephalians is highly variable. The tuatara has an average total length of for females and males respectively. Clevosaurus sectumsemper has an estimated total length of , while large individuals of the largest known terrestrial sphenodontian, Priosphenodon avelasi reached total lengths of just over . The aquatic pleurosaurs reached lengths of up to .
The tuatara has among the highest known ages of sexual maturity among reptiles, at around 9 to 13 years of age, and has a high longevity in comparison to lizards of similar size, with wild individuals likely reaching 70 years, and possibly over 100 years, in age. Such a late onset of sexual maturity and such longevity may or may not have been typical of extinct rhynchocephalians.
Classification
While the grouping of Rhynchocephalia is well supported, the relationships of many taxa to each other are uncertain, varying substantially between studies. In modern cladistics, the clade Sphenodontia includes all rhynchocephalians other than Wirtembergia, as well as Gephyrosaurus and other gephyrosaurids. Gephyrosaurids have been found to be more closely related to squamates in some analyses. In 2018, two major clades within Sphenodontia were defined. The infraorder Eusphenodontia is defined as the least inclusive clade containing Polysphenodon, Clevosaurus hudsoni and Sphenodon, and is supported by three synapomorphies: clearly visible wear facets on the teeth of the dentary or maxilla, premaxillary teeth merged into a chisel-like structure, and palatine teeth reduced to a single tooth row with an additional isolated tooth. The unranked clade Neosphenodontia is defined as the most inclusive clade containing Sphenodon but not Clevosaurus hudsoni, and is supported by six synapomorphies: an increased relative length of the antorbital region of the skull (the part of the skull forward of the eye socket), reaching 1/4 to 1/3 of the total skull length; a posterior (hind) edge of the parietal bone that is only slightly curved inward; a parietal foramen at the same level as, or forward of, the anterior border of the supratemporal fenestra (an opening of the skull); palatine teeth further reduced from the eusphenodontian condition to a single lateral tooth row; pterygoid tooth rows reduced to one or none; and a posterior border of the ischium characterised by a distinctive process. In 2021 the clade Acrosphenodontia was defined, which is less inclusive than Sphenodontia and more inclusive than Eusphenodontia, and includes all sphenodontians with fully acrodont dentition, excluding basal, partially acrodont sphenodontians. In 2022 the extinct clade Leptorhynchia was defined, including a variety of neosphenodontians, at least some of which were aquatically adapted, characterised by an elongated fourth metacarpal, the presence of a posterior process on the ischium, and an antorbital region between a quarter and a third of the total skull length. The clade Opisthodontia has been used for the grouping of all sphenodontians more closely related to Priosphenodon (a member of Eilenodontinae) than to Sphenodon. Not all studies use this clade, as some have found its scope to be identical to Eilenodontinae.
The family Sphenodontidae has been used to include the tuatara and its closest relatives within Rhynchocephalia. However, the grouping has lacked a formal definition, with the included taxa varying substantially between analyses. The closest relatives of the tuatara are placed in the clade Sphenodontinae, which is characterised by a completely closed temporal bar.
The following is a cladogram of Rhynchocephalia after DeMar et al. 2022 (based on maximum parsimony; note that the cladogram collapses into a polytomy under Bayesian analysis):
Cladogram after Simoes et al. 2022 (based on Bayesian inference analysis):
Clades and genera
†Wirtembergia
†Gephyrosauridae?
†Deltadectes
†Gephyrosaurus
†Penegephyrosaurus
†Bharatagama?
Sphenodontia
†Diphydontosaurus
†Micromenodon
†Paleollanosaurus?
†Pelecymala
†Whitakersaurus
†Parvosaurus
Acrosphenodontia
†Godavarisaurus
†Planocephalosaurus
†Theretairus
†Sphenocondor
†Rebbanasaurus
Eusphenodontia
†Opisthiamimus
†Brachyrhinodon
†Colobops?
†Lanceirosphenodon
†Polysphenodon
Clevosauridae
†Clevosaurus
†Brachyrhinodon?
†Polysphenodon?
†Sigmala
†Microsphenodon
†Trullidens
Neosphenodontia
†Lamarquesaurus
†Pamizinsaurus
†Tingitana
†Ankylosphenodon
†Derasmosaurus
†Opisthodontia
†Eilenodontinae
Sphenodontidae
†Eilenodontinae?
Sphenodontinae
†Leptorhynchia
†Leptosaurus
†Kallimodon
†Homoeosaurus
†Vadasaurus
†Sapheosauridae
†Pleurosauridae
Ecology
The fossil record of rhynchocephalians demonstrates that they were a diverse group that exploited a wide array of ecological niches. Early rhynchocephalians possessed small ovoid teeth suited for piercing, and were probably insectivores. Like the modern tuatara, extinct members of Sphenodontinae were likely generalists with a carnivorous/insectivorous diet. Amongst the most distinctive rhynchocephalians are the pleurosaurs, known from the Jurassic of Europe, which were adapted to marine life, with elongated snake-like bodies and reduced limbs; the specialised Late Jurassic genus Pleurosaurus had an elongated triangular skull highly modified from those of other rhynchocephalians. Pleurosaurs are thought to have been piscivores (fish-eaters). Several other lineages of rhynchocephalians have been suggested to have had semi-aquatic habits. Eilenodontines are thought to have been herbivorous, with batteries of wide teeth with thick enamel used to process plant material. The sapheosaurids, such as Oenosaurus and Sapheosaurus from the Late Jurassic of Europe, possess broad tooth plates unique amongst tetrapods, and are thought to have been durophagous, using the tooth plates to crush hard-shelled organisms. Sphenovipera from the Jurassic of Mexico has been suggested to have been venomous, based on the presence of grooves on two enlarged teeth at the front of the lower jaw, though this interpretation has been questioned by other authors. The body of Pamizinsaurus from the Early Cretaceous of Mexico was covered in osteoscutes similar to those of helodermatid lizards like the Gila monster, a feature unique among known sphenodontians that probably served to protect it against predators.
Evolutionary history
The timing of the divergence between Rhynchocephalia and Squamata is disputed. Older estimates place the divergence between the Middle Permian and earliest Triassic, around 270 to 252 million years ago, while other authors posit a younger date of around 242 million years ago. The oldest and most primitive known rhynchocephalian is Wirtembergia, from the Erfurt Formation near Vellberg in southern Germany, dating to the Ladinian stage of the Middle Triassic, around 238–240 million years ago. Rhynchocephalians underwent considerable diversification during the Late Triassic and reached a worldwide distribution across Pangaea by the end of the period; the Late Triassic–Early Jurassic genus Clevosaurus had 10 species across Asia, Africa, Europe, and North and South America. The earliest rhynchocephalians were small animals, but by the Late Triassic the group had evolved a wide range of body sizes. During the Jurassic, rhynchocephalians were the dominant group of small reptiles globally, reaching their apex of morphological diversity during this period, including specialised herbivorous and aquatic forms. The only record of rhynchocephalians from Asia (excluding the Indian subcontinent, which was not part of Asia during the Mesozoic) consists of indeterminate remains of Clevosaurus from the Early Jurassic (Sinemurian) Lufeng Formation of Yunnan, China. Rhynchocephalians are noticeably absent from younger localities in the region, despite the presence of favourable preservation conditions. Rhynchocephalians remained diverse into the Late Jurassic, and were more abundant than lizards in North America during this time.
Rhynchocephalian diversity declined during the Early Cretaceous; the group disappeared from North America and Europe after the end of the epoch, and was absent from North Africa and northern South America by the early Late Cretaceous. The cause of the decline of Rhynchocephalia remains unclear, but it has often been suggested to be due to competition with advanced lizards and mammals. Rhynchocephalians appear to have remained prevalent in southern South America during the Late Cretaceous, where lizards remained rare, with their remains outnumbering those of terrestrial lizards in this region by a factor of 200. Late Cretaceous South American sphenodontians are represented by both Eilenodontinae and Sphenodontidae (including Sphenodontinae). An indeterminate rhynchocephalian, known from the partial lower jaw of a hatchling from the latest Cretaceous or possibly earliest Paleocene Intertrappean Beds of what was then the isolated landmass of Insular India, appears to be an acrosphenodontian, possibly related to Godavarisaurus from the Jurassic of India. The youngest undisputed remains of rhynchocephalians outside New Zealand are those of the sphenodontid Kawasphenodon peligrensis from the early Paleocene (Danian) of Patagonia, shortly after the Cretaceous–Paleogene extinction event. Indeterminate sphenodontine jaw fragments bearing teeth, indistinguishable from those of the living tuatara, are known from the early Miocene (19–16 million years ago) St Bathans fauna of New Zealand. It is unlikely that the ancestors of the tuatara arrived in New Zealand via oceanic dispersal; they were probably already present when New Zealand separated from Antarctica between 80 and 66 million years ago.
| Biology and health sciences | Rhynchocephalia | Animals |
768413 | https://en.wikipedia.org/wiki/Ear | Ear | In vertebrates, an ear is the organ that enables hearing and (in mammals) body balance using the vestibular system. In humans, the ear is described as having three parts: the outer ear, the middle ear and the inner ear. The outer ear consists of the auricle and the ear canal. Since the outer ear is the only visible portion of the ear, the word "ear" often refers to the external part (auricle) alone. The middle ear includes the tympanic cavity and the three ossicles. The inner ear sits in the bony labyrinth, and contains structures which are key to several senses: the semicircular canals, which enable balance and eye tracking when moving; the utricle and saccule, which enable balance when stationary; and the cochlea, which enables hearing. The ear canal is cleaned via earwax, which naturally migrates to the auricle.
The ear develops from the first pharyngeal pouch, from six small swellings that develop in the early embryo (the auricular hillocks), and from the otic placodes, which are derived from the ectoderm.
The ear may be affected by disease, including infection and traumatic damage. Diseases of the ear may lead to hearing loss, tinnitus and balance disorders such as vertigo, although many of these conditions may also be affected by damage to the brain or neural pathways leading from the ear.
The ear has been adorned by earrings and other jewelry in numerous cultures for thousands of years, and has been subjected to surgical and cosmetic alterations.
Structure
The human ear consists of three parts—the outer ear, middle ear and inner ear. The ear canal of the outer ear is separated from the air-filled tympanic cavity of the middle ear by the eardrum. The middle ear contains the three small bones—the ossicles—involved in the transmission of sound, and is connected to the throat at the nasopharynx, via the pharyngeal opening of the Eustachian tube. The inner ear contains the otolith organs—the utricle and saccule—and the semicircular canals belonging to the vestibular system, as well as the cochlea of the auditory system.
Outer ear
The outer ear is the external portion of the ear and includes the fleshy visible auricle, the ear canal, and the outer layer of the eardrum (also called the tympanic membrane).
The auricle consists of the curving outer rim, called the helix, and the inner curved rim, called the antihelix, and opens into the ear canal. The tragus protrudes and partially obscures the ear canal, as does the facing antitragus. The hollow region in front of the ear canal is called the concha. The ear canal stretches for about 1 inch (2.5 cm). The first part of the canal is surrounded by cartilage, while the second part, near the eardrum, is surrounded by bone. This bony part is known as the auditory bulla and is formed by the tympanic part of the temporal bone. The ear canal ends at the external surface of the eardrum, while the surrounding skin contains ceruminous and sebaceous glands that produce protective earwax. Earwax naturally migrates outward through the ear canal, constituting a self-cleaning system.
Two sets of muscles are associated with the outer ear: the intrinsic and extrinsic muscles. In some mammals, these muscles can adjust the direction of the pinna. In humans, these muscles have little or no effect. The ear muscles are supplied by the facial nerve, which also supplies sensation to the skin of the ear itself, as well as to the external ear cavity. The great auricular nerve, auricular nerve, auriculotemporal nerve, and lesser and greater occipital nerves of the cervical plexus all supply sensation to parts of the outer ear and the surrounding skin.
The auricle consists of a single piece of elastic cartilage with a complicated relief on its inner surface and a fairly smooth configuration on its posterior surface. A tubercle, known as Darwin's tubercle, is sometimes present, lying in the descending part of the helix and corresponding to the ear-tip of mammals. The earlobe consists of areolar and adipose tissue. The symmetrical arrangement of the two ears allows for the localisation of sound. The brain accomplishes this by comparing arrival times and intensities from each ear, in circuits located in the superior olivary complex and the trapezoid bodies, which are connected via pathways to both ears.
Middle ear
The middle ear lies between the outer ear and the inner ear. It consists of an air-filled cavity called the tympanic cavity and includes the three ossicles and their attaching ligaments; the auditory tube; and the round and oval windows. The ossicles are three small bones that function together to receive, amplify, and transmit the sound from the eardrum to the inner ear. The ossicles are the malleus (hammer), incus (anvil), and the stapes (stirrup). The stapes is the smallest named bone in the body. The middle ear also connects to the upper throat at the nasopharynx via the pharyngeal opening of the Eustachian tube.
The three ossicles transmit sound from the outer ear to the inner ear. The malleus receives vibrations from sound pressure on the eardrum, where it is connected at its longest part (the manubrium or handle) by a ligament. It transmits vibrations to the incus, which in turn transmits the vibrations to the small stapes bone. The wide base of the stapes rests on the oval window. As the stapes vibrates, vibrations are transmitted through the oval window, causing movement of fluid within the cochlea.
The round window allows the fluid within the inner ear to move. As the stapes pushes against the oval window, fluid in the inner ear moves and pushes the membrane of the round window (the secondary tympanic membrane) out by a corresponding amount into the middle ear. The ossicles help amplify sound waves by nearly 15–20 times.
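As a rough, hedged illustration of where a figure of that order comes from: most of the gain is usually attributed to the ratio between the eardrum's effective area and the much smaller stapes footplate on the oval window, multiplied by the lever advantage of the ossicular chain. The sketch below uses common textbook approximations rather than values from this article, and should be read as an order-of-magnitude estimate only.

```python
# Back-of-envelope estimate of middle-ear pressure gain.
# All three numbers are assumed textbook approximations, not measurements
# from this article; real values vary between individuals and frequencies.

eardrum_area_mm2 = 55.0       # effective vibrating area of the tympanic membrane
oval_window_area_mm2 = 3.2    # area of the stapes footplate on the oval window
ossicular_lever_ratio = 1.3   # mechanical advantage of the malleus-incus lever

# The same force concentrated on a smaller area gives a higher pressure,
# further multiplied by the lever action of the ossicles.
pressure_gain = (eardrum_area_mm2 / oval_window_area_mm2) * ossicular_lever_ratio
print(f"Estimated pressure gain: ~{pressure_gain:.0f}x")  # ~22x, same order as 15-20x
```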
Inner ear
The inner ear sits within the temporal bone in a complex cavity called the bony labyrinth. A central area known as the vestibule contains two small fluid-filled recesses, the utricle and saccule. These connect to the semicircular canals and the cochlea. There are three semicircular canals angled at right angles to each other which are responsible for dynamic balance. The cochlea is a spiral shell-shaped organ responsible for the sense of hearing. These structures together create the membranous labyrinth.
The bony labyrinth refers to the bony compartment that contains the membranous labyrinth, contained within the temporal bone. The inner ear structurally begins at the oval window, which receives vibrations from the stapes of the middle ear. Vibrations are transmitted into the inner ear through a fluid called endolymph, which fills the membranous labyrinth. The endolymph is situated in two vestibules, the utricle and saccule, and eventually transmits to the cochlea, a spiral-shaped structure. The cochlea consists of three fluid-filled spaces: the vestibular duct, the cochlear duct, and the tympanic duct. Hair cells responsible for transduction (the conversion of mechanical movement into electrical stimuli) are present in the organ of Corti in the cochlea.
Blood supply
The blood supply of the ear differs according to each part of the ear.
The outer ear is supplied by a number of arteries. The posterior auricular artery provides the majority of the blood supply. The anterior auricular arteries provide some supply to the outer rim of the ear and scalp behind it. The posterior auricular artery is a direct branch of the external carotid artery, and the anterior auricular arteries are branches from the superficial temporal artery. The occipital artery also plays a role.
The middle ear is supplied by the mastoid branch of either the occipital or posterior auricular arteries and the deep auricular artery, a branch of the maxillary artery. Other arteries which are present but play a smaller role include branches of the middle meningeal artery, ascending pharyngeal artery, internal carotid artery, and the artery of the pterygoid canal.
The inner ear is supplied by the anterior tympanic branch of the maxillary artery; the stylomastoid branch of the posterior auricular artery; the petrosal branch of middle meningeal artery; and the labyrinthine artery, arising from either the anterior inferior cerebellar artery or the basilar artery.
Functions
Hearing
Sound waves travel through the outer ear, are modulated by the middle ear, and are transmitted to the vestibulocochlear nerve in the inner ear. This nerve transmits information to the temporal lobe of the brain, where it is registered as sound.
Sound that travels through the outer ear impacts on the eardrum, and causes it to vibrate. The three ossicles transmit this sound to a second window (the oval window), which protects the fluid-filled inner ear. In detail, the pinna of the outer ear helps to focus a sound, which impacts on the eardrum. The malleus rests on the membrane, and receives the vibration. This vibration is transmitted along the incus and stapes to the oval window. Two small muscles, the tensor tympani and stapedius, also help modulate noise. The two muscles reflexively contract to dampen excessive vibrations. Vibration of the oval window causes vibration of the endolymph within the vestibule and the cochlea.
The inner ear houses the apparatus necessary to change the vibrations transmitted from the outside world via the middle ear into signals passed along the vestibulocochlear nerve to the brain. The hollow channels of the inner ear are filled with liquid, and contain a sensory epithelium that is studded with hair cells. The microscopic "hairs" of these cells are structural protein filaments that project out into the fluid. The hair cells are mechanoreceptors that release a chemical neurotransmitter when stimulated. Sound waves moving through the fluid push against the receptor cells of the organ of Corti. The fluid pushes the filaments of individual cells; movement of the filaments causes receptor cells to become open to the potassium-rich endolymph. This causes the cells to depolarise, creating an action potential that is transmitted along the spiral ganglion, which sends information through the auditory portion of the vestibulocochlear nerve to the temporal lobe of the brain.
The human ear can generally hear sounds with frequencies between 20 Hz and 20 kHz (the audio range). Sounds outside this range are considered infrasound (below 20 Hz) or ultrasound (above 20 kHz). Although hearing requires an intact and functioning auditory portion of the central nervous system as well as a working ear, human deafness (extreme insensitivity to sound) most commonly occurs because of abnormalities of the inner ear, rather than in the nerves or tracts of the central auditory system.
Balance
Providing balance, when moving or stationary, is also a central function of the ear. The ear facilitates two types of balance: static balance, which allows a person to feel the effects of gravity, and dynamic balance, which allows a person to sense acceleration.
Static balance is provided by the utricle and the saccule, the two fluid-filled recesses of the vestibule. Cells lining the walls of these recesses contain fine filaments, and the cells are covered with a fine gelatinous layer. Each cell has 50–70 small filaments, and one large filament, the kinocilium. Within the gelatinous layer lie otoliths, tiny formations of calcium carbonate. When a person moves, these otoliths shift position. This shift alters the positions of the filaments, which opens ion channels within the cell membranes, creating depolarisation and an action potential that is transmitted to the brain along the vestibulocochlear nerve.
Dynamic balance is provided through the three semicircular canals. These three canals are orthogonal (at right angles) to each other. At the end of each canal is a slight enlargement, known as the ampulla, which contains numerous cells with filaments in a central area called the cupula. When the head accelerates, the fluid in the canals lags behind because of its inertia, altering the pressure on the cupula and resulting in the opening of ion channels. This causes depolarisation, which is passed as a signal to the brain along the vestibulocochlear nerve. Dynamic balance also helps maintain eye tracking when moving, via the vestibulo-ocular reflex.
Development
During embryogenesis, the ear develops as three distinct structures: the inner ear, the middle ear and the outer ear. Each structure originates from a different embryonic tissue: the ectoderm, the endoderm and the mesenchyme.
Inner ear
Around the second to third week of development, the embryo consists of three layers: ectoderm, mesoderm, and endoderm. The first part of the ear to develop is the inner ear, which begins to form from the ectoderm around the 22nd day, derived from two thickenings called otic placodes on either side of the head. Each otic placode recedes below the ectoderm, forms an otic pit and then an otic vesicle. This entire mass is eventually surrounded by mesenchyme to form the bony labyrinth.
Around the 28th day, parts of the otic vesicle begin to form the vestibulocochlear nerve. These form bipolar neurons, which supply sensation to parts of the inner ear (namely the sensory parts of the semicircular canals, macular of the utricle and saccule, and organ of Corti).
Around the 33rd day, the vesicles begin to differentiate. Posteriorly, they form what will become the utricle and semicircular canals. Anteriorly, the vesicles differentiate into a rudimentary saccule, which eventually becomes the saccule and cochlea. Part of the saccule gives rise to the cochlear duct, which appears during approximately the sixth week and connects to the saccule through the ductus reuniens.
As the cochlear duct's mesenchyme begins to differentiate, three cavities are formed: the scala vestibuli, the scala tympani and the scala media. Both the scala vestibuli and the scala tympani contain an extracellular fluid called perilymph, while the scala media contains endolymph. The vestibular membrane and the basilar membrane develop to separate the cochlear duct from the vestibular duct and the tympanic duct, respectively.
Molecular regulation
Most of the genes responsible for the regulation of inner ear formation and its morphogenesis are members of the homeobox gene family such as Pax, Msx and Otx homeobox genes. The development of inner ear structures such as the cochlea is regulated by Dlx5/Dlx6, Otx1/Otx2 and Pax2, which in turn are controlled by the master gene Shh. Shh is secreted by the notochord.
Middle ear
The middle ear and its components develop from the first and second pharyngeal arches. The tympanic cavity and auditory tube develop from the first part of the pharyngeal pouch between the first two arches, in an area which will also go on to develop the pharynx. This develops as a structure called the tubotympanic recess. The ossicles (malleus, incus and stapes) normally appear during the first half of fetal development. The first two (malleus and incus) derive from the first pharyngeal arch and the stapes derives from the second. All three ossicles develop from the neural crest. Eventually, cells from the tissue surrounding the ossicles undergo apoptosis, and a new layer of endodermal epithelium forms the wall of the tympanic cavity.
Outer ear
Unlike the middle ear, which develops from the first pharyngeal pouch, the ear canal originates from the dorsal portion of the first pharyngeal cleft. It is fully expanded by the end of the 18th week of development. The eardrum is made up of three layers (ectoderm, endoderm and connective tissue). The auricle originates as a fusion of six hillocks. The first three hillocks are derived from the lower part of the first pharyngeal arch and form the tragus, crus of the helix, and helix, respectively. The final three hillocks are derived from the upper part of the second pharyngeal arch and form the antihelix, antitragus, and earlobe. The outer ears develop in the lower neck. As the mandible forms, they move towards their final position level with the eyes.
Growth
The ears of newborn humans are proportionally very large, even relative to the newborn's head, which is itself large compared with the body. Ears grow quickly until about the age of nine, then continue to grow steadily in circumference (about 0.5 millimeters a year) throughout life, with the increase in length more pronounced in males.
Uniqueness
Ears are highly distinctive to each individual, with the odds of two people having matching ears very low. Additionally, the ear's proportions are normally retained for life, and ears have thus been employed for forensic identification since the 1950s.
Clinical significance
Hearing loss
Hearing loss may be either partial or total. This may be a result of injury or damage, congenital disease, or physiological causes. When hearing loss is a result of injury or damage to the outer ear or middle ear, it is known as conductive hearing loss. When deafness is a result of injury or damage to the inner ear, vestibulocochlear nerve, or brain, it is known as sensorineural hearing loss.
Causes of conductive hearing loss include an ear canal blocked by earwax, ossicles that are fixed together or absent, or holes in the eardrum. Conductive hearing loss may also result from middle ear inflammation causing fluid build-up in the normally air-filled space, such as by otitis media. Tympanoplasty is the general name of the operation to repair the middle ear's eardrum and ossicles. Grafts from muscle fascia are ordinarily used to rebuild an intact eardrum. Sometimes artificial ear bones are placed to substitute for damaged ones, or a disrupted ossicular chain is rebuilt in order to conduct sound effectively.
Hearing aids or cochlear implants may be used if the hearing loss is severe or prolonged. Hearing aids work by amplifying the sound of the local environment and are best suited to conductive hearing loss. Cochlear implants transmit the sound that is heard as if it were a nervous signal, bypassing the cochlea. Active middle ear implants send sound vibrations to the ossicles in the middle ear, bypassing any non-functioning parts of the outer and middle ear.
Congenital abnormalities
Anomalies and malformations of the auricle are common. These anomalies include chromosome syndromes such as ring 18. Children may also present with abnormal ear canals and low-set ears. In rare cases, no auricle is formed (atresia), or the auricle is extremely small (microtia). Small auricles can develop when the auricular hillocks do not develop properly. The ear canal can fail to develop if it does not channelise properly or if there is an obstruction. Reconstructive surgery to treat hearing loss is considered an option for children older than five; a cosmetic surgical procedure to reduce the size or change the shape of the ear is called an otoplasty. The initial medical intervention is aimed at assessing the baby's hearing and the condition of the ear canal, as well as the middle and inner ear. Depending on the results of tests, reconstruction of the outer ear is done in stages, with planning for any possible repairs of the rest of the ear.
Approximately one in a thousand children suffers some type of congenital deafness related to the development of the inner ear. Inner ear congenital anomalies are related to sensorineural hearing loss and are generally diagnosed with a computed tomography (CT) scan or a magnetic resonance imaging (MRI) scan. Hearing loss can also derive from inner ear anomalies because the inner ear's development is separate from that of the middle and external ear. Middle ear anomalies can occur because of errors during head and neck development. The first pharyngeal pouch syndrome associates middle ear anomalies with the malleus and incus structures, as well as with the non-differentiation of the annular stapedial ligament. Temporal bone and ear canal anomalies are also related to this part of the ear and are known to be associated with sensorineural hearing loss and conductive hearing loss.
Vertigo
Vertigo refers to the inappropriate perception of motion, due to dysfunction of the vestibular system. One common type of vertigo is benign paroxysmal positional vertigo, in which an otolith is displaced from the recesses of the vestibule into a semicircular canal. The displaced otolith rests on the cupula, causing a sensation of movement when there is none. Ménière's disease, labyrinthitis, strokes, and other infective and congenital diseases may also result in the perception of vertigo.
Injury
Outer ear
Injuries to the external ear occur fairly frequently, and can leave anything from minor to major deformity. Injuries include lacerations, avulsion injuries, burns, and repeated twisting or pulling of an ear, whether for discipline or torture. Chronic damage to the ears can cause cauliflower ear, a common condition in boxers and wrestlers in which the cartilage around the ears becomes lumpy and distorted owing to persistence of a haematoma around the perichondrium, which can impair blood supply and healing. Owing to its exposed position, the external ear is susceptible to frostbite as well as skin cancers, including squamous-cell carcinoma and basal-cell carcinomas.
Middle ear
The eardrum may become perforated by a loud sound or explosion, by pressure changes when diving or flying (barotrauma), or by objects inserted into the ear. Another common cause of injury is infection, such as otitis media. These may cause a discharge from the ear called otorrhea, and are often investigated by otoscopy and audiometry. Treatment may include watchful waiting, antibiotics and possibly surgery, if the injury is prolonged or the position of the ossicles is affected. Skull fractures that pass through the part of the skull containing the ear structures (the temporal bone) can also damage the middle ear. A cholesteatoma is a cyst of squamous skin cells that may develop from birth or secondary to other causes such as chronic ear infections. It may impair hearing or cause dizziness or vertigo, and is usually investigated by otoscopy and may require a CT scan. The treatment for cholesteatoma is surgery.
Inner ear
There are two principal damage mechanisms to the inner ear in industrialised society, and both injure hair cells. The first is exposure to elevated sound levels (noise trauma), and the second is exposure to drugs and other substances (ototoxicity). A large number of people are exposed to sound levels on a daily basis that are likely to lead to significant hearing loss. The National Institute for Occupational Safety and Health has recently published research on the estimated numbers of persons with hearing difficulty (11%) and the percentage of those cases that can be attributed to occupational noise exposure (24%). Furthermore, according to the National Health and Nutrition Examination Survey (NHANES), approximately twenty-two million (17%) US workers reported exposure to hazardous workplace noise. Workers exposed to hazardous noise who do not wear hearing protection are at even greater risk of developing noise-induced hearing loss.
Tinnitus
Tinnitus is the hearing of sound when no external sound is present. While often described as a ringing, it may also sound like a clicking, hiss or roaring. Rarely, unclear voices or music are heard. The sound may be soft or loud, low pitched or high pitched and appear to be coming from one ear or both. Most of the time, it comes on gradually. In some people, the sound causes depression, anxiety, or concentration difficulties.
Tinnitus is not a disease but a symptom that can result from a number of underlying causes. One of the most common causes is noise-induced hearing loss. Other causes include: ear infections, disease of the heart or blood vessels, Ménière's disease, brain tumors, emotional stress, exposure to certain medications, a previous head injury, and earwax. It is more common in those with depression and anxiety.
Society and culture
The ears have been ornamented with jewelry for thousands of years, traditionally by piercing of the earlobe. In ancient and modern cultures, ornaments have been placed to stretch and enlarge the earlobes, allowing for larger plugs to be slid into a large fleshy gap in the lobe. Tearing of the earlobe from the weight of heavy earrings, or from traumatic pull of an earring (for example, by snagging on a sweater), is fairly common.
Injury to the ears has been used since Roman times as a method of reprimand or punishment – "In Roman times, when a dispute arose that could not be settled amicably, the injured party cited the name of the person thought to be responsible before the Praetor; if the offender did not appear within the specified time limit, the complainant summoned witnesses to make statements. If they refused, as often happened, the injured party was allowed to drag them by the ear and to pinch them hard if they resisted. Hence the French expression "", of which the literal meaning is "to have one's ear pulled" and the figurative meaning "to take a lot of persuading". We use the expression "to tweak (or pull) someone's ears" to mean "inflict a punishment"."
The auricles have an effect on facial appearance. In Western societies, protruding ears (present in about 5% of ethnic Europeans) have been considered unattractive, particularly if asymmetric. The first surgery to reduce the projection of prominent ears was published in the medical literature by Ernst Dieffenbach in 1845, and the first case report in 1881.
Pointy ears are a characteristic of some creatures in folklore such as the French croquemitaine, Brazilian curupira or Japanese earth spider. It has been a feature of characters on art as old as that of Ancient Greece and medieval Europe. Pointy ears are a common characteristic of many creatures in the fantasy genre, including elves, faeries, pixies, hobbits, or orcs. They are a characteristic of creatures in the horror genre, such as vampires. Pointy ears are also found in the science fiction genre; for example among the Vulcan and Romulan races of the Star Trek universe and the Nightcrawler character from the X-Men universe.
Georg von Békésy was a Hungarian biophysicist born in Budapest, Hungary. In 1961, he was awarded the Nobel Prize in Physiology or Medicine for his research on the function of the cochlea in the mammalian hearing organ.
The Vacanti mouse was a laboratory mouse that had what looked like a human ear grown on its back. The "ear" was actually an ear-shaped cartilage structure grown by seeding cow cartilage cells into a biodegradable ear-shaped mold and then implanted under the skin of the mouse; then the cartilage naturally grew by itself. It was developed as an alternative to ear repair or grafting procedures and the results met with much publicity and controversy in 1997.
Other animals
The ears of vertebrates are placed somewhat symmetrically on either side of the head, an arrangement that aids sound localization.
All mammals have three auditory ossicles. The external pinna in therian mammals helps direct sound through the ear canal to the eardrum. The complex geometry of ridges on the inner surface of some mammalian ears helps to sharply focus sounds produced by prey, using echolocation signals. These ridges can be regarded as the acoustic equivalent of a Fresnel lens, and may be seen in a wide range of animals, including the bat, aye-aye, lesser galago, bat-eared fox, mouse lemur and others.
Some large primates such as gorillas and orangutans (and also humans) have undeveloped ear muscles that are non-functional vestigial structures, yet are still large enough to be easily identified. An ear muscle that cannot move the ear, for whatever reason, has lost that biological function. This serves as evidence of homology between related species. In humans, there is variability in these muscles, such that some people are able to move their ears in various directions, and it has been said that it may be possible for others to gain such movement by repeated trials. In such primates, the inability to move the ear is compensated for mainly by the ability to easily turn the head on a horizontal plane, an ability which is not common to most monkeys—a function once provided by one structure is now replaced by another.
In some animals with mobile pinnae (like the horse), each pinna can be aimed independently to better receive the sound. For these animals, the pinnae help localise the direction of the sound source.
The ear, with its blood vessels close to the surface, is an essential thermoregulator in some land mammals, including the elephant, the fox, and the rabbit. There are five types of ear carriage in domestic rabbits, some of which have been bred for exaggerated ear length—a potential health risk that is controlled in some countries. Abnormalities in the skull of a half-lop rabbit were studied by Charles Darwin in 1868. In marine mammals, earless seals are one of three groups of Pinnipedia.
Invertebrates
Only vertebrate animals have ears, though many invertebrates detect sound using other kinds of sense organs. In insects, tympanal organs are used to hear distant sounds. They are located either on the head or elsewhere, depending on the insect family. The tympanal organs of some insects are extremely sensitive, offering acute hearing beyond that of most other animals. The female cricket fly Ormia ochracea has tympanal organs on each side of her abdomen. They are connected by a thin bridge of exoskeleton and they function like a tiny pair of eardrums, but, because they are linked, they provide acute directional information. The fly uses her "ears" to detect the call of her host, a male cricket. Depending on where the song of the cricket is coming from, the fly's hearing organs will reverberate at slightly different frequencies. This difference may be as little as 50 billionths of a second, but it is enough to allow the fly to home in directly on a singing male cricket and parasitise it.
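To put "50 billionths of a second" in perspective, the maximum possible arrival-time difference between two hearing organs scales with the distance separating them. The sketch below applies the standard far-field formula ITD = d·sin(θ)/c; the separations used (roughly 0.5 mm for the fly, 15 cm for a human head) are assumed illustrative values, not figures from this article.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def interaural_time_difference(separation_m: float, angle_deg: float) -> float:
    """Far-field arrival-time difference (seconds) between two ears for a
    source at angle_deg from straight ahead: ITD = d * sin(theta) / c."""
    return separation_m * math.sin(math.radians(angle_deg)) / SPEED_OF_SOUND

# Separations are assumed illustrative values, not from this article.
for label, d in [("Ormia ochracea (~0.5 mm)", 0.0005), ("human head (~15 cm)", 0.15)]:
    itd_us = interaural_time_difference(d, 45.0) * 1e6
    print(f"{label}: ITD at 45 degrees = {itd_us:.2f} microseconds")
```

Under these assumptions the fly's raw time difference is only on the order of a microsecond even at its maximum, which suggests why the mechanical coupling of its linked eardrums, described above, is needed to make so small a cue usable.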
Simpler structures allow other arthropods to detect near-field sounds. Spiders and cockroaches, for example, have hairs on their legs, which are used for detecting sound. Caterpillars may also have hairs on their body that perceive vibrations and allow them to respond to sound.
| Biology and health sciences | Biology | null |
768511 | https://en.wikipedia.org/wiki/Automatic%20firearm | Automatic firearm | An automatic firearm or fully automatic firearm (to avoid confusion with semi-automatic firearms) is a self-loading firearm that continuously chambers and fires rounds when the trigger mechanism is actuated. The action of an automatic firearm is capable of harvesting the excess energy released from a previous discharge to feed a new ammunition round into the chamber, and then igniting the propellant and discharging the projectile (either bullet, shot, or slug) by delivering a hammer or striker impact on the primer.
If both the feeding and ignition procedures are automatically cycled, the weapon will be considered "fully automatic" and will fire continuously as long as the trigger is kept depressed and the ammunition feeding (either from a magazine or a belt) remains available. In contrast, a firearm is considered "semi-automatic" if it only automatically cycles to chamber new rounds (i.e. self-loading) but does not automatically fire off the shot unless the user manually resets (usually by releasing) and re-actuates the trigger, so only one round gets discharged with each individual trigger-pull. A burst-fire firearm is an "in-between" of fully and semi-automatic firearms, firing a brief continuous "burst" of multiple rounds with each trigger-pull, but then will require a manual re-actuation of the trigger to fire another burst.
Automatic firearms are further defined by the type of cycling principles used, such as recoil operation, blowback, blow forward, or gas operation.
Rates of fire
Cyclic rate
Self-loading firearms are designed with varying rates of fire to suit their different purposes. The speed with which a self-loading firearm can cycle through the functions of:
Fire
Eject
Load
Cock
is referred to as its cyclic rate. In fully automatic firearms, the cyclic rate is tailored to the purpose the firearm is intended to serve. Anti-aircraft machine guns often have extremely high rates of fire to maximize the probability of a hit. In infantry support weapons, rates of fire are often much lower and, in some cases, vary with the design of the particular firearm. The MG 34 is a WWII-era machine gun which falls under the category of a "general purpose machine gun". It was manufactured in several variants, one with a cyclic rate as high as 1200 rounds per minute, as well as an infantry model which fired at 900 rounds per minute.
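To make these figures concrete, a cyclic rate fixes the time budget for one complete fire-eject-load-cock cycle. A minimal sketch of the conversion, where the MG 34 rates quoted above are the only figures taken from the text:

```python
# Convert a cyclic rate (rounds per minute) into the time available
# for one complete fire-eject-load-cock cycle, in milliseconds.
def cycle_time_ms(rounds_per_minute: float) -> float:
    return 60_000.0 / rounds_per_minute

print(cycle_time_ms(1200))  # 50.0 ms per cycle (fast MG 34 variant)
print(cycle_time_ms(900))   # ~66.7 ms per cycle (infantry model)
```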
Effective rate of fire
Continuous fire generates high temperatures in a firearm's barrel and increased temperatures throughout most of its structure. If fired continuously, the components of the firearm will eventually suffer structural failure. All firearms, whether semi-automatic, fully automatic, or otherwise, will overheat and fail if fired indefinitely. This issue presents itself primarily with fully automatic fire. For example, the MG 34 may have a calculated cyclic rate of 1200 rounds per minute, but is likely to overheat and fail within one minute of continuous fire.
Semi-automatic firearms may also overheat if continuously fired. Recoil plays a significant role in the time it takes to reacquire one's sight picture, ultimately reducing the effective rate of fire.
Automatic firearm types
Automatic firearms can be divided into six main categories:
Automatic rifle: The standard type of service rifle in most modern militaries, usually capable of selective fire. Assault rifles are a specific type of select-fire rifle chambered in an intermediate cartridge and fed via a high-capacity detachable magazine. Battle rifles are similar, but chambered in a full-powered cartridge.
Automatic shotgun: A type of combat shotgun capable of firing shotgun shells automatically, and usually also semi-automatically.
Machine gun: A large group of heavier firearms used for suppressive automatic fire of rifle cartridges, usually attached to a mount or supported by a bipod. Depending on size, weight and role, machine guns are divided into heavy, medium or light machine guns. The ammunition is often belt-fed.
Submachine gun: An automatic short rifle (carbine) typically chambered for pistol cartridges. Today seldom used in military contexts due to the rise in the use of body armor, they are commonly used by police forces and close protection units in many parts of the world.
Personal defense weapon: A newer class of automatic firearm that combines the light weight and size of the submachine gun with the medium-power caliber ammunition of the rifle, in practice creating a submachine gun with body-armor penetration capability.
Machine pistol: A handgun-style firearm capable of fully automatic or burst fire. They are sometimes equipped with a foldable shoulder stock to promote accuracy during automatic fire, creating similarities to their submachine gun counterparts. Some machine pistols are shaped similarly to semi-automatic pistols (e.g., the Glock 18, Beretta 93R). As with submachine guns, machine pistols fire pistol-caliber cartridges (such as 9mm, .40, and .45 ACP).
Burst mode
Burst mode is an automatic fire mode that limits the number of rounds fired with each trigger pull, most often to three rounds. After the burst is fired, the firearm will not fire again until the trigger is released and pulled again. Burst mode was introduced because of the inaccuracy of fully automatic fire in combat, and because of suggestions that fully automatic fire offers no genuine benefit. Additionally, many militaries have restricted automatic fire in combat because of the ammunition it wastes.
Regulation
Possession of automatic firearms tends to be restricted to members of military and law enforcement organizations in most developed countries, even in those that permit the civilian use of semi-automatic firearms. Where automatic weapons are permitted, restrictions and regulations on their possession and use may be much stricter than for other firearms. In the United States, taxes and strict regulations affect the manufacture and sale of fully automatic firearms under the National Firearms Act of 1934 and the Firearm Owners Protection Act of 1986; the latter act banned civilian machine gun ownership while grandfathering in existing legally owned weapons. As legally owned weapons were registered under the NFA, this meant that only previously registered automatic weapons may be purchased by civilians. A prospective user must go through an application process administered by the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF), which requires a federal tax payment of $200 and a thorough criminal background check. The tax payment buys a revenue stamp, which is the legal document allowing possession of an automatic firearm. The use of a gun trust to register with the ATF has become an increasingly popular method of acquisition and ownership of automatic firearms.
Similar weapons
Other similar weapons not usually referred to as automatic firearms include the following:
Autocannon, which are 15 mm or greater in bore diameter and thus considered cannons, not small arms.
Gatling guns, multiple-barrel designs, often used with external power supplies to generate rates of fire higher than automatic firearms.
| Technology | Mechanisms_2 | null |
768586 | https://en.wikipedia.org/wiki/Select%20fire | Select fire | Select fire is the capability of a weapon to be adjusted to fire in semi-automatic, fully automatic, and/or burst mode. The modes are chosen by means of a selector switch, which varies depending on the weapon's design. Some select-fire weapons have burst fire mechanisms to limit the maximum number of shots fired automatically in this mode. The most common limits are two or three rounds per trigger pull. Fully automatic fire refers to the ability of a weapon to fire continuously until either the feeding mechanism is emptied or the trigger is released. Semi-automatic refers to the ability to fire one round per trigger pull.
The presence of select fire modes on firearms permits more efficient use of rounds to be fired for specific needs, versus having a single mode of operation, such as fully automatic, thereby conserving ammunition while maximizing on-target accuracy and effectiveness. This capability is most commonly found on military weapons of the 20th and 21st centuries.
History
Early attempts at this technology were hindered by one or both of two obstacles: over-powerful ammunition and mechanical complexity. The latter led to excessive weight and unreliability in the firearm. One of the earliest designs dates to just before the end of the 19th century with the development of the Cei-Rigotti, an early automatic rifle created by Italian Army officer Amerigo Cei-Rigotti that had select-fire capability (single shots or burst).
Another is the M1918 Browning Automatic Rifle (BAR) developed during the First World War. The BAR and its subsequent designs incorporated a variety of select-fire functions. The first design (M1918) is a select-fire, air-cooled automatic rifle that used a trigger mechanism with a fire selector lever that enabled operating in either semi-automatic or fully automatic firing modes. The selector lever is located on the left side of the receiver and is simultaneously the manual safety (selector lever in the "S" position – weapon is "safe", "F" – "Fire", "A" – "Automatic" fire). The next version (M1918A1) had a unique rate-of-fire reducer mechanism purchased from FN Herstal with two rates of automatic fire. This reducer mechanism was later changed to one designed by the Springfield Armory. The final version (M1918A2) provided two selectable rates of fully automatic fire only.
During World War II the Germans pursued development of the select-fire function, which resulted in the FG 42 battle rifle, developed in 1942 following a 1941 request from the German Air Force (Luftwaffe). Another German design that used select fire was the StG 44, the first of its kind to see major deployment, and it is considered by many historians to be the first modern assault rifle. "The principle of this weapon -- the reduction of muzzle impulse to get useful automatic fire within actual ranges of combat -- was probably the most important advance in small arms since the invention of smokeless powder."
The select-fire function was later seen in the Russian AK-47 (designed in 1946), the Belgian FN FAL (designed 1947–53), the British EM-2 (designed in 1948), and the U.S. AR-10 (designed in 1957) and its AR derivatives.
Design
Select-fire weapons, by definition, have a semi-automatic mode, where the weapon automatically reloads the chamber after each fired round, but requires the trigger to be released and pulled again before firing the next round. This allows for rapid and (in theory) aimed fire. In some weapons, the selection is between different rates of automatic fire and/or varying burst limiters. The selection is often made by a small rotating switch, frequently integrated with the safety catch, or by a switch separate from the safety, as in the British SA80 family. Another method is a weighted trigger, as on the Steyr AUG, which fires a single shot when 4.0–7.1 kg (8.8–15.4 lb) of force is exerted on the trigger, and becomes fully automatic when over 7.1 kg (15.4 lb) is applied. This is useful for emergency situations in which a rapid volley of rounds is more effective for suppressing a close enemy than a series of single shots, each requiring its own trigger pull.
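A hedged sketch of the weighted-trigger logic just described: trigger force selects the mode, with the two thresholds taken from the text and everything else (names, structure) purely illustrative rather than any real fire-control specification.

```python
# Illustrative model of a two-stage weighted trigger (Steyr AUG-style).
# The 4.0 kg and 7.1 kg thresholds come from the text; the function
# itself is a hypothetical simplification, not a real fire-control spec.
def fire_mode(pull_force_kg: float) -> str:
    if pull_force_kg < 4.0:
        return "no fire"           # not enough force to break the first stage
    elif pull_force_kg <= 7.1:
        return "semi-automatic"    # first stage: one shot per pull
    else:
        return "fully automatic"   # second stage: continuous fire while held

print(fire_mode(5.0))   # semi-automatic
print(fire_mode(8.0))   # fully automatic
```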
Some select-fire weapons offer a burst mode as the second option, where each pull of the trigger automatically fires a predetermined number of rounds (generally two or three), but will not fire any more until the trigger is released and pulled again. The current U.S. standard assault rifle, the M16A4, and the M4 carbine variant of this rifle fire a maximum of three rounds with each pull of the trigger in burst mode. In this design, the mechanism retains the count of rounds already fired in an interrupted burst, so a subsequent pull may fire fewer than three rounds. Other designs reset the count with each trigger pull, allowing a uniform three-round burst as long as rounds remain.
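The difference between these two burst-counter designs is easy to express in code. The following is a minimal, hypothetical model, not any manufacturer's actual fire-control logic: `reset_on_pull=False` mimics the carry-over behaviour described above for the M16A4/M4, while `reset_on_pull=True` gives a fresh three-round count on every pull.

```python
# Hypothetical model of a burst-limiting trigger mechanism.
class BurstTrigger:
    def __init__(self, burst_length: int = 3, reset_on_pull: bool = True):
        self.burst_length = burst_length
        self.reset_on_pull = reset_on_pull
        self._count = 0  # rounds fired so far in the current burst cycle

    def pull(self, rounds_held: int) -> int:
        """One trigger pull, released after at most `rounds_held` rounds
        (releasing early interrupts the burst). Returns rounds fired."""
        if self.reset_on_pull:
            self._count = 0
        fired = min(self.burst_length - self._count, rounds_held)
        self._count = (self._count + fired) % self.burst_length
        return fired

carry_over = BurstTrigger(reset_on_pull=False)  # M16A4/M4-style behaviour
print(carry_over.pull(rounds_held=1))  # 1: burst interrupted after one round
print(carry_over.pull(rounds_held=3))  # 2: the counter carries over

fresh = BurstTrigger(reset_on_pull=True)
print(fresh.pull(rounds_held=1))       # 1
print(fresh.pull(rounds_held=3))       # 3: full burst, count reset each pull
```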
A common version of the Heckler & Koch MP5 submachine gun (which is widely used by SWAT teams and military special operations personnel) fires single shots, three-round-bursts, and automatically. A special variant uses a two-round-burst to minimize the chances of missing with a third round. Some automatic cannons have larger burst limiters to coincide with higher rates of fire.
| Technology | Mechanisms_2 | null |
769016 | https://en.wikipedia.org/wiki/Japanese%20garden | Japanese garden | Japanese gardens are traditional gardens whose designs are informed by Japanese aesthetics and philosophical ideas, avoid artificial ornamentation, and highlight the natural landscape. Plants and worn, aged materials are generally used by Japanese garden designers to suggest a natural landscape, and to express the fragility of existence as well as time's unstoppable advance. Ancient Japanese art inspired past garden designers. Water is an important feature of many gardens, as are rocks and often gravel. Despite there being many attractive Japanese flowering plants, herbaceous flowers generally play much less of a role in Japanese gardens than in the West, though seasonally flowering shrubs and trees are important, all the more dramatic because of the contrast with the usual predominant green. Evergreen plants are "the bones of the garden" in Japan. Though a natural-seeming appearance is the aim, Japanese gardeners often shape their plants, including trees, with great rigour.
Japanese literature on gardening goes back almost a thousand years, and several different styles of garden have developed, some with religious or philosophical implications. A characteristic of Japanese gardens is that they are designed to be seen from specific points. Among the most significant traditional styles is the "lake-spring-boat excursion garden", which was imported from China during the Heian period (794–1185); these were designed to be seen from small boats on the central lake. No original examples of these survive, but they were replaced by the "paradise garden" associated with Pure Land Buddhism, with a Buddha shrine on an island in the lake. Later large gardens are often in the promenade garden style, designed to be seen from a path circulating around the garden, with fixed stopping points for viewing. Specialized styles, often small sections in a larger garden, include the moss garden; the dry garden of gravel and rocks, associated with Zen Buddhism; the teahouse garden, designed to be seen only from a short pathway; and the very small urban courtyard garden.
Most modern Japanese homes have little space for a garden, though tiny gardens in passages and other spaces, as well as bonsai (in Japan always grown outside) and houseplants, mitigate this, and domestic garden tourism is very important. The Japanese tradition has long been to keep a well-designed garden as near as possible to its original condition, and many famous gardens appear to have changed little over several centuries, apart from the inevitable turnover of plants, in a way that is extremely rare in the West.
Awareness of the Japanese style of gardening reached the West near the end of the 19th century, and was enthusiastically received as part of the fashion for Japonisme; as Western gardening taste had by then turned away from rigid geometry towards a more naturalistic style, the Japanese style was an attractive variant. Japanese gardens were immediately popular in the UK, where the climate was similar and Japanese plants grew well. Japanese gardens, typically a section of a larger garden, continue to be popular in the West, and many typical Japanese garden plants, such as cherry trees and the many varieties of Acer palmatum or Japanese maple, are also used in all types of garden, giving a faint hint of the style to very many gardens.
History
Origins
The ideas central to Japanese gardens were first introduced to Japan during the Asuka period.
Japanese gardens first appeared on the island of Honshu, the large central island of Japan. Their aesthetic was influenced by the distinct characteristics of the Honshu landscape: rugged volcanic peaks, narrow valleys, mountain streams with waterfalls and cascades, lakes, and beaches of small stones. They were also influenced by the rich variety of flowers and different species of trees, particularly evergreen trees, on the islands, and by the four distinct seasons in Japan, including hot, wet summers and snowy winters.
Japanese gardens have their roots in the national religion of Shinto, with its story of the creation of eight perfect islands, and of the lakes of the gods. Prehistoric Shinto shrines to the kami, the gods and spirits, are found on beaches and in forests all over the island. They often took the form of unusual rocks or trees marked with cords of rice fiber and surrounded with white stones or pebbles, a symbol of purity. The white gravel courtyard became a distinctive feature of Shinto shrines, Imperial Palaces, Buddhist temples, and Zen gardens. Although its original meaning is somewhat obscure, one of the Japanese words for garden, niwa, came to mean a place that had been cleansed and purified in anticipation of the arrival of kami, and the Shinto reverence for great rocks, lakes, ancient trees, and other "dignitaries of nature" would exert an enduring influence on Japanese garden design.
Japanese gardens were also strongly influenced by the Chinese philosophy of Daoism and Amida Buddhism, imported from China in or around 552 CE. Daoist legends spoke of five mountainous islands inhabited by the Eight Immortals, who lived in perfect harmony with nature. Each Immortal flew from his mountain home on the back of a crane. The islands themselves were located on the back of an enormous sea turtle. In Japan, the five islands of the Chinese legend became one island, called Horai-zen, or Mount Horai. Replicas of this legendary mountain, the symbol of a perfect world, are a common feature of Japanese gardens, as are rocks representing turtles and cranes.
In antiquity
The earliest recorded Japanese gardens were the pleasure gardens of the emperors and nobles. They are mentioned in several brief passages of the Nihon Shoki, the first chronicle of Japanese history, published in 720 CE. In spring 74 CE, the chronicle recorded: "The Emperor Keikō put a few carp into a pond, and rejoiced to see them morning and evening". The following year, "The Emperor launched a double-hulled boat in the pond of Ijishi at Ihare, and went aboard with his imperial concubine, and they feasted sumptuously together". In 486, the chronicle recorded that "The Emperor Kenzō went into the garden and feasted at the edge of a winding stream".
Chinese gardens had a very strong influence on early Japanese gardens. In or around 552 CE, Buddhism was officially introduced from China, via Korea, into Japan. Between 600 and 612 CE, the Japanese emperor sent four legations to the court of the Chinese Sui dynasty. Between 630 and 838 CE, the Japanese court sent fifteen more legations to the court of the Tang dynasty. These legations, with more than five hundred members each, included diplomats, scholars, students, Buddhist monks, and translators. They brought back Chinese writing, art objects, and detailed descriptions of Chinese gardens.
In 612 CE, the Empress Suiko had a garden built with an artificial mountain, representing Shumi-Sen, or Mount Sumeru, reputed in Hindu and Buddhist legends to be located at the centre of the world. During the reign of the same empress, one of her ministers, Soga no Umako, had a garden built at his palace featuring a lake with several small islands, representing the islands of the Eight Immortals famous in Chinese legends and Daoist philosophy. This palace became the property of the Japanese emperors, was named "The Palace of the Isles", and was mentioned several times in the , the "Collection of Countless Leaves", the oldest known collection of Japanese poetry.
Nara period (710–794)
The Nara period is named after its capital city Nara. The first authentically Japanese gardens were built in this city at the end of the 8th century. Shorelines and stone settings were naturalistic, different from the heavier, earlier continental mode of constructing pond edges. Two such gardens have been found at excavations, both of which were used for poetry-writing festivities. One of these gardens, the East Palace garden at Heijō Palace, Nara, has been faithfully reconstructed using the same location and even the original garden features that had been excavated. It appears from the small amount of literary and archaeological evidence available that the Japanese gardens of this time were modest versions of the Imperial gardens of the Tang dynasty, with large lakes scattered with artificial islands and artificial mountains, and pond edges constructed with heavy rocks as embankments. While these gardens had some Buddhist and Daoist symbolism, they were meant to be pleasure gardens, and places for festivals and celebrations.
Recent archaeological excavations in the ancient capital of Nara have brought to light the remains of two 8th-century gardens associated with the Imperial Court, a pond and stream garden – the To-in – located within the precinct of the Imperial Palace and a stream garden – Kyuseki – found within the modern city. They may be modeled after Chinese gardens, but the rock formations found in the To-in would appear to have more in common with prehistoric Japanese stone monuments than with Chinese antecedents, and the natural, serpentine course of the Kyuseki stream garden may be far less formal than what existed in Tang China. Whatever their origins, both the To-in and Kyuseki clearly anticipate certain developments in later Japanese gardens.
Heian period (794–1185)
In 794 CE, at the beginning of the Heian period (794–1185 CE), the Japanese court moved its capital to Heian-kyō (present-day Kyoto). During this period, there were three different kinds of gardens: palace gardens and the gardens of nobles in the capital, the gardens of villas at the edge of the city, and the gardens of temples.
The architecture of the palaces, residences and gardens in the Heian period followed Chinese practice. Houses and gardens were aligned on a north–south axis, with the residence to the north and the ceremonial buildings and main garden to the south. Two long wings extended southward, like the arms of an armchair, with the garden between them. The gardens featured one or more lakes connected by bridges and winding streams. The south garden of the imperial residences had a uniquely Japanese feature: a large empty area of white sand or gravel. The emperor was the chief priest of Japan, and the white sand represented purity; it was a place where the gods could be invited to visit. The area was used for religious ceremonies and dances for the welcoming of the gods.
The layout of the garden itself was strictly determined according to the principles of traditional Chinese geomancy, or Feng Shui, as set out in the first known book on the art of the Japanese garden, the Sakuteiki (Records of Garden Keeping), written in the 11th century.
The Imperial gardens of the Heian period were water gardens, where visitors promenaded in elegant lacquered boats, listening to music, viewing the distant mountains, singing, reading poetry, painting, and admiring the scenery. The social life in the gardens was memorably described in the classic Japanese novel The Tale of Genji, written in about 1005 by Murasaki Shikibu, a lady-in-waiting to the empress. The traces of one such artificial lake, Osawa no ike, near the Daikaku-ji temple in Kyoto, can still be seen. It was built by the Emperor Saga, who ruled from 809 to 823, and was said to be inspired by Dongting Lake in China.
A scaled-down replica of the Kyoto Imperial Palace of 794, the Heian-jingū, was built in Kyoto in 1895 to celebrate the 1100th anniversary of the founding of the city. The south garden is famous for its cherry blossom in spring, and for azaleas in the early summer. The west garden is known for its irises in June, and the large east garden lake recalls the leisurely boating parties of the 8th century. Near the end of the Heian period, a new garden architecture style appeared, created by the followers of Pure Land Buddhism. These were called "Paradise Gardens", built to represent the legendary Paradise of the West, where the Amida Buddha ruled. They were built by noblemen who wanted to assert their power and independence from the Imperial household, which was growing weaker.
The best surviving example of a Paradise Garden is Byōdō-in in Uji, near Kyoto. It was originally the villa of Fujiwara no Michinaga (966–1028), who married his daughters to the sons of the emperor. After his death, his son transformed the villa into a temple, and in 1053 built the Phoenix Hall, which still stands.
The Hall is built in the traditional style of a Chinese Song-dynasty temple, on an island in the lake. It houses a gilded statue of the Amitābha Buddha, looking to the west. In the lake in front of the temple is a small island of white stones, representing Mount Horai, the home of the Eight Immortals of the Daoists, connected to the temple by a bridge which symbolized the way to paradise. It was designed for meditation and contemplation, not as a pleasure garden. It was a lesson in Daoist and Buddhist philosophy created with landscape and architecture, and a prototype for future Japanese gardens.
Notable existing or recreated Heian gardens include:
Daikaku-ji
Byōdō-in
Kyoto Imperial Palace
Jōruri-ji
Kamakura and Muromachi periods (1185–1573)
The weakness of the emperors and the rivalry of feudal warlords resulted in two civil wars (1156 and 1159), which destroyed most of Kyoto and its gardens. The capital moved to Kamakura, and then in 1336 back to the Muromachi quarter of Kyoto. The emperors ruled in name only; real power was held by a military governor, the shōgun. During this period, the government reopened relations with China, which had been broken off almost three hundred years earlier. Japanese monks went again to study in China, and Chinese monks came to Japan, fleeing the Mongol invasions. The monks brought with them a new form of Buddhism, called simply Zen, or "meditation". Japan enjoyed a renaissance in religion, in the arts, and particularly in gardens. The term "Zen garden" first appears in English writing in the 1930s; in Japan the equivalent term came into use even later, from the 1950s. It refers to a composition technique inspired by Song China and derived from ink painting; the composition or construction of such small, scenic gardens has no direct relation to religious Zen.
Many famous temple gardens were built early in this period, including Kinkaku-ji, the Golden Pavilion, built in 1398, and Ginkaku-ji, the Silver Pavilion, built in 1482. In some ways they followed Zen principles of spontaneity, extreme simplicity and moderation, but in other ways they were traditional Chinese Song-dynasty temples; the upper floors of the Golden Pavilion were covered with gold leaf, and they were surrounded by traditional water gardens.
The most notable garden style invented in this period was the Zen garden, dry garden, or Japanese rock garden. One of the finest examples, and one of the best-known of all Japanese gardens, is Ryōan-ji in Kyoto. This small rectangular garden is composed of white sand carefully raked to suggest water, and fifteen rocks carefully arranged like small islands. It is meant to be seen from a seated position on the porch of the residence of the abbot of the monastery. There have been many debates about what the rocks are supposed to represent, but, as garden historian Gunter Nitschke wrote: "The garden at Ryōan-ji does not symbolize. It does not have the value of representing any natural beauty that can be found in the world, real or mythical. I consider it as an abstract composition of 'natural' objects in space, a composition whose function is to incite meditation."
Several of the famous Zen gardens of Kyoto were the work of one man, Musō Soseki (1275–1351). He was a monk, a ninth-generation descendant of the Emperor Uda, and a formidable court politician, writer and organizer, who armed and financed ships to open trade with China, and founded an organization called the Five Mountains, made up of the most powerful Zen monasteries in Kyoto. He was responsible for the building of the Zen gardens of Nanzen-ji, Saihō-ji (the Moss Garden), and Tenryū-ji.
Notable gardens of the Kamakura and Muromachi periods include:
Kinkaku-ji (the Golden Pavilion)
Ginkaku-ji (the Silver Pavilion)
Nanzen-ji
Saihō-ji (the Moss Garden)
Tenryū-ji
Daisen-in
Momoyama period (1568–1600)
The Momoyama period was short, just 32 years, and was largely occupied with the wars between the daimyō, the leaders of the feudal Japanese clans. The new centers of power and culture in Japan were the fortified castles of the daimyō, around which new cities and gardens appeared. The characteristic garden of the period featured one or more ponds or lakes next to the main residence, not far from the castle. These gardens were meant to be seen from above, from the castle or residence. The daimyō had developed the skills of cutting and lifting large rocks to build their castles, and they had armies of soldiers to move them. The artificial lakes were surrounded by beaches of small stones and decorated with arrangements of boulders, with natural stone bridges and stepping stones. The gardens of this period combined elements of a promenade garden, meant to be seen from the winding garden paths, with elements of the Zen garden, such as artificial mountains, meant to be contemplated from a distance.
The most famous garden of this kind, built in 1592, is situated near Tokushima Castle on the island of Shikoku. Its notable features include a long bridge made of two natural stones.
Another notable garden of the period still existing is Sanbō-in, rebuilt by Toyotomi Hideyoshi in 1598 to celebrate the festival of the cherry blossom and to recreate the splendor of an ancient garden. Three hundred garden-builders worked on the project, digging the lakes and installing seven hundred boulders. The garden was designed to be seen from the veranda of the main pavilion, or from the "Hall of the Pure View", located on a higher elevation in the garden.
In the east of the garden, on a peninsula, is an arrangement of stones designed to represent the mythical Mount Horai. A wooden bridge leads to an island representing a crane, and a stone bridge connects this island to another representing a tortoise, which is connected by an earth-covered bridge back to the peninsula. The garden also includes a waterfall at the foot of a wooded hill. One characteristic of the Momoyama period garden visible at Sanbō-in is the close proximity of the buildings to the water.
The Momoyama period also saw the development of chanoyu (the tea ceremony), the chashitsu (teahouse), and the roji (tea garden). Tea had been introduced to Japan from China by Buddhist monks, who used it as a stimulant to keep awake during long periods of meditation. The first great tea master, Sen no Rikyū (1522–1591), defined in the most minute detail the appearance and rules of the tea house and tea garden, following the principle of wabi, or rustic simplicity.
Following Sen no Rikyū's rules, the teahouse was supposed to suggest the cottage of a hermit-monk. It was a small and very plain wooden structure, often with a thatched roof, with just enough room inside for two tatami mats. The only decoration allowed inside was a scroll with an inscription and a branch of a tree. It did not have a view of the garden.
The garden was also small, and constantly watered to be damp and green. It usually had a cherry tree or elm to bring color in the spring, but otherwise did not have bright flowers or exotic plants that would distract the attention of the visitor. A path led to the entrance of the teahouse. Along the path were a waiting bench for guests and a privy, and a stone water-basin near the teahouse, where the guests rinsed their hands and mouths before entering the tea room through a small, square door called the nijiriguchi, or "crawling-in entrance", which requires bending low to pass through. Sen no Rikyū decreed that the garden should be left unswept for several hours before the ceremony, so that leaves would be scattered in a natural way on the path.
Notable gardens of the period include:
Tokushima Castle garden on the island of Shikoku.
Tai-an tea house at Myōki-an Temple in Kyoto, built in 1582 by Sen no Rikyū.
Sanbō-in at Daigo-ji, in Kyoto Prefecture (1598)
Edo period (1615–1867)
During the Edo period, power was won and consolidated by the Tokugawa clan, who became the shōguns and moved the capital to Edo, which became Tokyo. The emperor remained in Kyoto as a figurehead leader, with authority only over cultural and religious affairs. While the political center of Japan was now Tokyo, Kyoto remained the cultural capital, the center for religion and art. The shōguns provided the emperors with little power, but with generous subsidies for building gardens.
The Edo period saw the widespread use of a new kind of Japanese architecture, called sukiya-zukuri, which means literally "building according to chosen taste". The term first appeared at the end of the 16th century, referring to isolated tea houses. It originally applied to the simple country houses of samurai warriors and Buddhist monks, but in the Edo period it was used in every kind of building, from houses to palaces.
The style was used in the most famous garden of the period, the Katsura Imperial Villa in Kyoto. The buildings were built in a very simple, undecorated style, a prototype for future Japanese architecture. They opened up onto the garden, so that the garden seemed entirely part of the building; whether the visitor was inside or outside of the building, they would ideally always feel they were in the center of nature. The garden buildings were arranged so that they were always seen from a diagonal, rather than straight on. This arrangement had the poetic name ganko, which meant literally "a formation of wild geese in flight".
Most of the gardens of the Edo period were either promenade gardens or dry rock Zen gardens, and they were usually much larger than earlier gardens. The promenade gardens of the period made extensive use of borrowed scenery (shakkei): vistas of distant mountains were integrated into the design of the garden, or, better still, the garden was built on the side of a mountain, using the different elevations to attain views over landscapes outside the garden. Edo promenade gardens were often composed of a series of meisho, or "famous views", similar to postcards. These could be imitations of famous natural landscapes, like Mount Fuji, or scenes from Daoist or Buddhist legends, or landscapes illustrating verses of poetry. Unlike Zen gardens, they were designed to portray nature as it appeared, not the internal rules of nature.
Well-known Edo-period gardens include:
Shugakuin Imperial Villa
Shisen-dō (1641)
Suizen-ji
Hama Rikyu
Kōraku-en (Okayama)
Ritsurin Garden (Takamatsu)
Koishikawa Kōraku-en (Tokyo) (1629)
Ninna-ji, Kyoto
Enman-in, Otsu
Sanzen-in, north of Kyoto
Sengan-en, Kagoshima (1658)
Chishaku-in, southeast of Kyoto
Jōju-in, in the temple of Kiyomizu, southeast of Kyoto (1688–1703)
Manshu-in, northeast of Kyoto (1656)
Nanzen-ji, east of Kyoto (1688–1703)
Meiji period (1868–1912)
The Meiji period saw the modernization of Japan and its re-opening to the West. Many of the old private gardens had been abandoned and left to ruin. In 1871, a new law transformed many gardens from the earlier Edo period into public parks, preserving them. Garden designers, confronted with ideas from the West, experimented with Western styles, leading to such gardens as Kyu-Furukawa Gardens and Shinjuku Gyoen. Others, mainly in the north of Japan, kept to Edo-period designs. A third wave was the naturalistic style of garden, invented by captains of industry and powerful politicians like Aritomo Yamagata. Many gardeners were soon designing and constructing gardens catering to this taste. One of the gardeners best known for his technical perfection in this style was Ogawa Jihei VII, also known as Ueji.
Notable gardens of this period include:
Kyu-Furukawa Gardens
Kenroku-en, 18th and 19th centuries, finished in 1874.
Chinzan-so in Tokyo, finished in 1877.
Murin-an in Kyoto, finished 1898.
Modern Japanese gardens (1912 to present)
During the Shōwa period (1926–1989), many traditional gardens were built by businessmen and politicians. After World War II, the principal builders of gardens were no longer private individuals, but banks, hotels, universities and government agencies. The Japanese garden became an extension of the landscape architecture of the building. New gardens were designed by landscape architects, and often used modern building materials such as concrete.
Some modern Japanese gardens, such as Tōfuku-ji, designed by Mirei Shigemori, were inspired by classical models. Other modern gardens have taken a much more radical approach to the traditions. One example is Awaji Yumebutai, a garden on the island of Awaji, in the Seto Inland Sea of Japan, designed by Tadao Ando. It was built as part of a resort and conference center on a steep slope, where land had been stripped away to make an island for an airport.
Garden elements
Japanese gardens are distinctive in their symbolism of nature, with traditional Japanese gardens being very different in style from occidental gardens: "Western gardens are typically optimised for visual appeal while Japanese gardens are modelled with spiritual and philosophical ideas in mind." Japanese gardens are conceived as a representation of a natural setting, tying in to Japanese connections between the land and Shinto spiritualism, where spirits are commonly found in nature; as such, Japanese gardens tend to incorporate natural materials, with the aim of creating a space that captures the beauties of nature in a realistic manner.
Traditional Japanese gardens can be categorized into three types: tsukiyama (hill gardens), karesansui (dry gardens) and chaniwa gardens (tea gardens).
The small space allotted to these gardens usually poses a challenge for the gardeners. Because the arrangement of natural rocks and trees is all-important, finding the right material becomes highly selective. The serenity of a Japanese landscape and the simple but deliberate structures of the Japanese garden are unique qualities; the two most important principles of garden design are "scaled reduction and symbolization".
Water
Japanese gardens always feature water, either physically with a pond or stream, or symbolically, represented by white sand in a dry rock garden. In Buddhist symbolism, water and stone are thought of as yin and yang, two opposites that complement and complete each other. A traditional garden will usually have an irregular-shaped pond or, in larger gardens, two or more ponds connected by a channel or stream, and a cascade, a miniature version of Japan's famous mountain waterfalls.
In traditional gardens, the ponds and streams are carefully placed according to Buddhist geomancy, the art of putting things in the place most likely to attract good fortune. The rules for the placement of water were laid out in the first manual of Japanese gardens, the Sakuteiki, in the 11th century. According to the Sakuteiki, water should enter the garden from the east or southeast and flow toward the west, because the east is the home of the Green Dragon (seiryū), an ancient Chinese divinity adopted in Japan, and the west is the home of the White Tiger, the divinity of the west. Water flowing from east to west will carry away evil, and the owner of the garden will be healthy and have a long life. According to the Sakuteiki, another favorable arrangement is for the water to flow from the north, which represents water in Buddhist cosmology, to the south, which represents fire; as opposites (yin and yang), this arrangement will bring good luck.
The Sakuteiki recommends several possible miniature landscapes using lakes and streams: the "ocean style", which features rocks that appear to have been eroded by waves, a sandy beach, and pine trees; the "broad river style", recreating the course of a large river, winding like a serpent; the "marsh pond" style, a large still pond with aquatic plants; the "mountain torrent style", with many rocks and cascades; and the "rose letters" style, an austere landscape with small, low plants, gentle relief and many scattered flat rocks.
Traditional Japanese gardens have small islands in the lakes. In sacred temple gardens, there is usually an island which represents Mount Penglai or Mount Hōrai, the traditional home of the Eight Immortals.
The Sakuteiki describes different kinds of artificial islands which can be created in lakes, including the "mountainous island", made up of jagged vertical rocks mixed with pine trees, surrounded by a sandy beach; the "rocky island", composed of "tormented" rocks appearing to have been battered by sea waves, along with small, ancient pine trees with unusual shapes; the "cloud island", made of white sand in the rounded white forms of a cumulus cloud; and the "misty island", a low island of sand, without rocks or trees.
A cascade or waterfall is an important element in Japanese gardens, a miniature version of the waterfalls of Japanese mountain streams. The Sakuteiki describes seven kinds of cascades. It notes that, if possible, a cascade should face toward the moon and should be designed to capture the moon's reflection in the water. The Sakuteiki also mentions that cascades benefit from being located so that they are half-hidden in shadows.
Rocks and sand
Rock, sand and gravel are an essential feature of the Japanese garden. A vertical rock may represent Mount Horai, the legendary home of the Eight Immortals, or Mount Sumeru of Buddhist teaching, or a carp jumping from the water. A flat rock might represent the earth. Sand or gravel can represent a beach, or a flowing river. Rocks and water also symbolize yin and yang (in and yō in Japanese) in Buddhist philosophy; the hard rock and soft water complement each other, and water, though soft, can wear away rock.
Rough volcanic rocks are usually used to represent mountains or as stepping stones. Smooth, rounded sedimentary rocks are used around lakes or as stepping stones. Hard metamorphic rocks are usually placed by waterfalls or streams. Rocks are traditionally classified as tall vertical, low vertical, arching, reclining, or flat. Rocks should vary in size and color from one another, but should not have bright colors, which would lack subtlety. Rocks with strata or veins should have the veins all going in the same direction, and the rocks should all be firmly planted in the earth, giving an appearance of firmness and permanence. Rocks are arranged in careful compositions of two, three, five or seven rocks, with three being the most common. In a three-rock arrangement, the tallest rock usually represents heaven, the shortest rock the earth, and the medium-sized rock humanity, the bridge between heaven and earth. Sometimes one or more rocks, called suteishi ("nameless" or "discarded" stones), are placed in seemingly random locations in the garden to suggest spontaneity, though their placement is carefully chosen.
In ancient Japan, sand (suna) and gravel (jari) were used around Shinto shrines and Buddhist temples. Later they were used in Japanese rock gardens or Zen Buddhist gardens to represent water or clouds. White sand represented purity, but sand could also be gray, brown or bluish-black.
The selection and placement of rocks was, and still is, a central concept in creating an aesthetically pleasing Japanese garden. During the Heian period, the concept of placing stones as symbolic representations of islands – whether physically existent or nonexistent – began to take hold, and can be seen in the Japanese word shima, which is of "particular importance ... because the word contained the meaning 'island'". Furthermore, the principle of kowan ni shitagau, or "obeying (or following) the request of an object", was, and still is, a guiding principle of Japanese rock design, suggesting that "the arrangement of rocks be dictated by their innate characteristics". The specific placement of stones to symbolically represent islands (and later mountains) is found to be an aesthetically pleasing property of traditional Japanese gardens.
Thomas Heyd, in Encountering Nature, outlines some of the aesthetic principles of Japanese gardens, describing rock placement as a general "aim to portray nature in its essential characteristics" – the essential goal of all Japanese gardens. Such attention to detail can be seen at places such as Midori Falls in Kenroku-en Garden in Kanazawa, Ishikawa Prefecture, where the rocks at the waterfall's base were rearranged at various times by six different hands.
Garden architecture
In Heian-period Japanese gardens, built on the Chinese model, buildings occupied as much or more space than the garden. The garden was designed to be seen from the main building and its verandas, or from small pavilions built for that purpose. In later gardens, the buildings were less visible. Rustic teahouses were hidden in their own little gardens, and small benches and open pavilions along the garden paths provided places for rest and contemplation. In later garden architecture, the walls of houses and teahouses could be opened to provide carefully framed views of the garden. The garden and the house became one.
Garden bridges
Bridges first appeared in the Japanese garden during the Heian period. At the Byōdō-in garden in Kyoto, a wooden bridge connects the Phoenix pavilion with a small island of stones, representing Mount Penglai or Mount Horai, the island home of the Eight Immortals of Daoist teaching. The bridge symbolized the path to paradise and immortality.
Bridges could be made of stone (ishibashi), of wood, or of logs with earth on top, covered with moss (dobashi); they could be either arched (soribashi) or flat (hirabashi). Sometimes, if they were part of a temple garden, they were painted red, following the Chinese tradition, but for the most part they were unpainted.
During the Edo period, when large promenade gardens became popular, streams and winding paths were constructed, with a series of bridges, usually in a rustic stone or wood style, to take visitors on a tour of the scenic views of the garden.
Stone lanterns and water basins
Japanese stone lanterns (tōrō) date back to the Nara and Heian periods. Originally they were located only at Buddhist temples, where they lined the paths and approaches to the temple, but in the Heian period they began to be used at Shinto shrines as well. According to tradition, during the Momoyama period they were introduced to the tea garden by the first great tea masters, and in later gardens they were used purely for decoration.
In its complete and original form, a tōrō, like the pagoda, represents the five elements of Buddhist cosmology. The piece touching the ground represents chi, the earth; the next section represents sui, or water; ka, or fire, is represented by the section encasing the lantern's light or flame, while fū (air) and kū (void or spirit) are represented by the last two sections, top-most and pointing towards the sky. The segments express the idea that after death our physical bodies will go back to their original, elemental form.
Stone water basins (chōzubachi) were originally placed in gardens for visitors to wash their hands and mouth before the tea ceremony. The water is supplied to the basin by a bamboo pipe, or kakei, and a wooden ladle is usually provided for drinking the water. In tea gardens, the basin was placed low to the ground, so the drinker had to bend over to get water.
Garden fences, gates, and devices
Trees and flowers
Nothing in a Japanese garden is natural or left to chance; each plant is chosen according to aesthetic principles, either to hide undesirable sights, to serve as a backdrop to certain garden features, or to create a picturesque scene. Trees are carefully chosen and arranged for their autumn colors. Moss is often used to suggest that the garden is ancient. Flowers are also carefully chosen according to their season of flowering. Formal flowerbeds are rare in older gardens, but more common in modern gardens. Some plants are chosen for their religious symbolism, such as the lotus, sacred in Buddhist teachings, or the pine, which represents longevity.
The trees are carefully trimmed to provide attractive scenes and to prevent them from blocking other views of the garden. Their growth is also controlled, to give them more picturesque shapes and to make them look more ancient. It has been suggested that the characteristic shape of pruned Japanese garden trees resembles that of trees found naturally in savannah landscapes; this resemblance has been used to motivate the so-called savannah hypothesis. Trees are sometimes constrained to bend, to provide shadows or better reflections in the water. Very old pine trees are often supported by wooden crutches called tsurazue or hōdzue shichū, or their branches are held by cords, to keep them from breaking under the weight of snow.
In the late 16th century, a new art was developed in the Japanese garden: that of o-karikomi, the technique of trimming bushes into balls or rounded shapes which imitate waves. According to tradition, this art was developed by Kobori Enshū (1579–1647), and it was most frequently practiced on azalea bushes. It was similar to the topiary gardens made in Europe at the same time, except that European topiary tried to make trees look like geometric solid objects, while o-karikomi sought to make bushes look as if they were almost liquid, in flowing natural shapes. It created an artistic play of light on the surface of the bush and, according to garden historian Michel Baridon, "it also brought into play the sense of 'touching things' which even today succeeds so well in Japanese design."
The most common trees and plants found in Japanese gardens are the azalea, the camellia, the oak, the elm, the Japanese apricot, cherry, maple, willow, ginkgo, the Japanese cypress, the Japanese cedar, pine, and bamboo.
Fish
The use of fish, particularly koi (colored carp) or goldfish, as a decorative element in gardens was borrowed from the Chinese garden. Goldfish were developed in China more than a thousand years ago by selectively breeding Prussian carp for color mutations. By the Song dynasty (960–1279), yellow, orange, white and red-and-white colorations had been developed. Goldfish were introduced to Japan in the 16th century. Koi were developed from common carp (Cyprinus carpio) in Japan in the 1820s. Koi are domesticated common carp that are selected or culled for color; they are not a different species, and will revert to the original coloration within a few generations if allowed to breed freely. In addition to fish, turtles are kept in some gardens. The natural environments of the gardens offer habitats that attract wild animals; frogs and birds are notable, as they contribute a pleasant soundscape.
Aesthetic principles
The early Japanese gardens largely followed the Chinese model, but gradually Japanese gardens developed their own principles and aesthetics. These were spelled out in a series of landscape-gardening manuals, beginning with the Sakuteiki in the Heian period (794–1185). The principles of sacred gardens, such as the gardens of Zen Buddhist temples, were different from those of pleasure or promenade gardens; for example, Zen Buddhist gardens were designed to be seen, while seated, from a platform with a view of the whole garden, without entering it, while promenade gardens were meant to be seen by walking through the garden and stopping at a series of viewpoints. However, the two kinds often contained common elements and used the same techniques.
Miniaturisation: The Japanese garden is a miniature and idealized view of nature. Rocks can represent mountains, and ponds can represent seas. The garden is sometimes made to appear larger by forced perspective: placing larger rocks and trees in the foreground, and smaller ones in the background (see the sketch after this list).
Concealment (miegakure): The Zen Buddhist garden is meant to be seen all at once, but the promenade garden is meant to be seen one landscape at a time, like a scroll of painted landscapes unrolling. Features are hidden behind hills, groves of trees or bamboo, walls or structures, to be discovered when the visitor follows the winding path.
Shakkei (borrowed scenery): Smaller gardens are often designed to incorporate the view of features outside the garden, such as hills, trees or temples, as part of the composition. This makes the garden seem larger than it really is.
Asymmetry: Japanese gardens are not laid out on straight axes, or with a single feature dominating the view. Buildings and garden features are usually placed to be seen from a diagonal, and are carefully composed into scenes that contrast right angles, such as buildings, with natural features, and vertical features, such as rocks, bamboo or trees, with horizontal features, such as water.
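The forced perspective mentioned in the first principle above can be made concrete with a little geometry. The following Python sketch is purely illustrative – the scenario, numbers and function name are this editor's assumptions, not taken from any garden manual: a background tree pruned to half the height of a similar foreground tree subtends the visual angle of a full-size tree standing twice as far away, so the garden reads as deeper than it is.

import math

def apparent_angle(height_m, distance_m):
    # Visual (angular) size of an object of the given height, in degrees.
    return math.degrees(math.atan2(height_m, distance_m))

# A full-size pine in the foreground and a deliberately half-size pine behind it.
fg = apparent_angle(height_m=3.0, distance_m=5.0)
bg = apparent_angle(height_m=1.5, distance_m=15.0)

# A viewer who assumes both pines are full-size (3 m) reads the small one
# as standing wherever a 3 m tree would subtend the same angle:
implied = 3.0 / math.tan(math.radians(bg))
print(f"foreground {fg:.1f} deg, background {bg:.1f} deg")
print(f"implied distance of background pine: {implied:.0f} m (actual 15 m)")

Run as written, the background pine at 15 m is perceived as a full-size tree roughly 30 m away, doubling the apparent depth of the scene.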
According to garden historians David and Michiko Young, at the heart of the Japanese garden is the principle that a garden is a work of art. "Though inspired by nature, it is an interpretation rather than a copy; it should appear to be natural, but it is not wild."
Landscape gardener Seyemon Kusumoto wrote that the Japanese generate "the best of nature's handiwork in a limited space".
There has been mathematical analysis of some traditional Japanese garden designs. These designs avoid contrasts, symmetries and groupings that would create points which dominate visual attention. Instead, they create scenes in which visual salience is evenly distributed across the field of view. Stand-out colours, textures, objects, and groups are avoided. The sizes of objects and groupings, and the spacings between them, are arranged to be self-similar at multiple spatial scales; that is, they produce similar patterns when scaled up or down (zoomed in or out). This property is also seen in fractals and many natural scenes. This fractal-like self-similarity may extend all the way down to the scale of surface textures, such as those of rocks and moss lawns, which are considered to express a wabi-sabi aesthetic.
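One common way to quantify this kind of scale-invariance is a box-counting estimate of fractal dimension. The Python sketch below is a minimal illustration of that general technique, not the specific method of the analyses cited above; the grid representation, the density threshold and the function name are assumptions. It treats a garden plan as a binary occupancy grid (1 where a rock, shrub or moss patch sits) and measures how the number of occupied boxes grows as the boxes shrink.

import numpy as np

def box_count_dimension(grid, box_sizes=(1, 2, 4, 8, 16)):
    # Estimate the box-counting (fractal) dimension of a binary 2-D grid.
    counts = []
    for s in box_sizes:
        # Trim so the grid tiles evenly, then count boxes holding any feature.
        h, w = (grid.shape[0] // s) * s, (grid.shape[1] // s) * s
        tiles = grid[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(int(tiles.any(axis=(1, 3)).sum()))
    # The slope of log(count) against log(1/box size) approximates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes)), np.log(counts), 1)
    return slope

# Toy layout: a random scatter of "features" on a 64 x 64 plan view.
rng = np.random.default_rng(0)
layout = (rng.random((64, 64)) < 0.1).astype(int)
print(f"estimated box-counting dimension: {box_count_dimension(layout):.2f}")

A design whose estimated dimension stays stable as the boxes shrink exhibits the self-similar, evenly distributed salience described above, whereas a single dominant feature would make the counts collapse at coarse scales.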
Differences between Japanese and Chinese gardens
Japanese gardens during the Heian period were modeled upon Chinese gardens, but by the Edo period there were distinct differences.
Architecture: Chinese gardens have buildings in the center of the garden, occupying a large part of the garden space. The buildings are placed next to or over the central body of water. The garden buildings are very elaborate, with much architectural decoration. In later Japanese gardens, the buildings are well apart from the body of water, and the buildings are simple, with very little ornament. The architecture in a Japanese garden is largely or partly concealed.
Viewpoint: Chinese gardens are designed to be seen from the inside, from the buildings, galleries and pavilions in the center of the garden. Japanese gardens are designed to be seen from the outside, as in the Japanese rock garden or zen garden; or from a path winding through the garden.
Use of rocks: in a Chinese garden, particularly in the Ming dynasty, scholar's rocks were selected for their extraordinary shapes or resemblance to animals or mountains, and used for dramatic effect. They were often the stars and centerpieces of the garden. In later Japanese gardens, rocks were smaller and placed in more natural arrangements, integrated into the garden.
Marine landscapes: Chinese gardens were inspired by Chinese inland landscapes, particularly Chinese lakes and mountains, while Japanese gardens often use miniaturized scenery from the Japanese coast. Japanese gardens frequently include white sand or pebble beaches and rocks which seem to have been worn by the waves and tide, which rarely appear in Chinese gardens.
Garden styles
Chisen-shoyū-teien or pond garden
The chisen-shoyū-teien ("lake-spring-boat excursion garden") was imported from China during the Heian period (794–1185). It is also called the shinden-zukuri style, after the architectural style of the main building. It featured a large, ornate residence with two long wings reaching south to a large lake and garden. Each wing ended in a pavilion from which guests could enjoy the views of the lake. Visitors made tours of the lake in small boats. These gardens had large lakes with small islands, where musicians played during festivals and ceremonies, and worshippers could look across the water at the Buddha. No original gardens of this period remain, but reconstructions can be seen at Heian-jingū and Daikaku-ji temple in Kyoto.
The Paradise Garden
The Paradise Garden appeared in the late Heian period, created by nobles belonging to the Amida Buddhist sect. These gardens were meant to symbolize Paradise or the Pure Land (Jōdo), where the Buddha sat on a platform contemplating a lotus pond. They featured a lake island called Nakajima, where the Buddha hall was located, connected to the shore by an arching bridge. The most famous surviving example is the garden of the Phoenix Hall of Byōdō-in Temple, built in 1053, in Uji, near Kyoto. Other examples are Jōruri-ji temple in Kyoto, Enro-ji temple in Nara Prefecture, the Hokongoin in Kyoto, Mōtsū-ji Temple in Hiraizumi, and the Shiramizu Amidado Garden in Iwaki City.
Karesansui dry rock gardens
Karesansui gardens (枯山水), or Japanese rock gardens, became popular in Japan in the 14th century thanks to the work of the Buddhist monk Musō Soseki (1275–1351), who built Zen gardens at the five major monasteries in Kyoto. These gardens have white sand or raked gravel in place of water, carefully arranged rocks, and sometimes rocks and sand covered with moss. Their purpose is to facilitate meditation, and they are meant to be viewed while seated on the porch of the residence of the hōjō, the abbot of the monastery. The most famous example is Ryōan-ji temple in Kyoto.
Roji, or tea gardens
The tea garden was created during the Muromachi period (1333–1573) and Momoyama period (1573–1600) as a setting for the Japanese tea ceremony, or chanoyu. The style of garden takes its name from the roji, or path to the teahouse, which is supposed to inspire meditation in visitors and prepare them for the ceremony. There is an outer garden, with a gate and covered arbor where guests wait for the invitation to enter. They then pass through a gate to the inner garden, where they wash their hands and rinse their mouth, as they would before entering a Shinto shrine, before going into the teahouse itself. The path is always kept moist and green, so it will look like a remote mountain path, and there are no bright flowers that might distract the visitor from his meditation. Early teahouses had no windows, but later teahouses have a wall which can be opened for a view of the garden.
Kaiyū-shiki-teien, or promenade gardens
Promenade or stroll gardens (landscape gardens in the go-round style) appeared in Japan during the Edo period (1615–1867), at the villas of nobles or warlords. These gardens were designed to complement the houses in the new style of architecture, which were modeled after the teahouse. They were meant to be seen by following a path clockwise around the lake from one carefully composed scene to another. These gardens used two techniques to provide interest: shakkei, or borrowed scenery, which took advantage of views of scenery outside the garden, such as mountains or temples, incorporating them into the view so the garden looked larger than it really was; and miegakure, or "hide-and-reveal", which used winding paths, fences, bamboo and buildings to hide the scenery so the visitor would not see it until he was at the best viewpoint.
Edo period gardens also often feature recreations of famous scenery or scenes inspired by literature; Suizen-ji Jōju-en Garden in Kumamoto has a miniature version of Mount Fuji, and Katsura Villa in Kyoto has a miniature version of the Ama-no-hashidate sandbar in Miyazu Bay, near Kyoto. The Rikugi-en Garden in Tokyo creates small landscapes inspired by eighty-eight famous Japanese poems.
Small urban gardens
Small gardens were originally found in the interior courtyards (naka-niwa, "inner garden") of Heian period palaces, and were designed to give a glimpse of nature and some privacy to the residents of the rear side of the building. They were as small as one tsubo, or about 3.3 square meters, whence the name tsubo-niwa. During the Edo period, merchants began building small gardens in the space behind their shops, which faced the street, and their residences, located at the rear. These tiny gardens were meant to be seen, not entered, and usually had a stone lantern, a water basin, stepping stones and a few plants. Today, tsubo-niwa are found in many Japanese residences, hotels, restaurants, and public buildings. A good example from the Meiji period is found in the villa of Murin-an in Kyoto. Totekiko is a famous courtyard rock garden.
Hermitage garden
A hermitage garden is a small garden usually built by a samurai or government official who wanted to retire from public life and devote himself to study or meditation. It is attached to a rustic house, and approached by a winding path, which suggests it is deep in a forest. It may have a small pond, a Japanese rock garden, and the other features of traditional gardens, in miniature, designed to create tranquility and inspiration. An example is the Shisen-dō garden in Kyoto, built by a bureaucrat and scholar exiled by the shogun in the 17th century. It is now a Buddhist temple.
Literature and art of the Japanese garden
Garden manuals
The first manual of Japanese gardening was the Sakuteiki ("Records of Garden Making"), probably written in the late eleventh century by Tachibana no Toshitsuna (1028–1094). Citing even older Chinese sources, it explains how to organize the garden, from the placement of rocks and streams to the correct depth of ponds and height of cascades. While it was based on earlier Chinese garden principles, it also expressed ideas which were unique to Japanese gardens, such as islands, beaches and rock formations imitating Japanese maritime landscapes.
Besides giving advice, Sakuteiki also gives dire warnings of what happens if the rules are not followed; the author warns that if a rock that in nature was in a horizontal position is stood upright in a garden, it will bring misfortune to the owner of the garden. And, if a large rock pointed toward the north or west is placed near a gallery, the owner of the garden will be forced to leave before a year passes.
Another influential work about the Japanese garden, bonseki, bonsai and related arts was Rhymeprose on a Miniature Landscape Garden (around 1300) by the Zen monk Kokan Shiren, which explained how meditation on a miniature garden purified the senses and the mind and led to understanding of the correct relationship between man and nature.
Other influential garden manuals which helped to define the aesthetics of the Japanese garden are Senzui Narabi ni Yagyo no Zu (Illustrations for Designing Mountain, Water and Hillside Field Landscapes), written in the fifteenth century, and Tsukiyama Teizoden (Building Mountains and Making Gardens), from the 18th century. The tradition of Japanese gardening was historically passed down from sensei to apprentice. The opening words of Illustrations for designing mountain, water and hillside field landscapes (1466) are "If you have not received the oral transmissions, you must not make gardens" and its closing admonition is "You must never show this writing to outsiders. You must keep it secret".
These garden manuals are still studied today.
Gardens in literature and poetry
The Tale of Genji, the classic Japanese novel of the Heian period, describes the role of the Japanese garden in court life. The characters attend festivals in the old Kyoto imperial palace garden, take boat trips on the lake, listen to music and watch formal dances under the trees.
Gardens were often the subject of poems during the Heian period. A poem in one anthology from the period, the Kokin-shū, described the Kiku-shima, or island of chrysanthemums, found in the Osawa pond of the great garden of the period called Saga-in:
I had thought that here
only one chrysanthemum can grow.
Who therefore has planted
the other in the depths
of the pond of Osawa?
Another poem of the Heian period, in the Hyakunin isshu, described a cascade of rocks, which simulated a waterfall, in the same garden:
The cascade long ago
ceased to roar,
But we continue to hear
The murmur
of its name.
Philosophy, painting, and the Japanese garden
In Japanese culture, garden-making is a high art, equal to the arts of calligraphy and ink painting. Gardens are considered three-dimensional textbooks of Daoism and Zen Buddhism. Sometimes the lesson is very literal: the garden of Saihō-ji featured a pond shaped like the Japanese character shin (心; xīn in Chinese), the heart-spirit of Chinese philosophy. The pond followed not the printed form of the character but its cursive form in the sōsho (草書) or "grass writing" style – a fitting choice for a garden, since in cursive writing the character, drawn in a single stroke, changes shape with the context and the hand of the writer, expressing a state of mind rather than a fixed printed form.
However, usually the lessons are contained in the arrangements of the rocks, the water and the plants. For example, the lotus flower carries a particular message: its roots are in the mud at the bottom of the pond, symbolizing the misery of the human condition, but its flower is pure white, symbolizing the purity of spirit that can be achieved by following the teachings of the Buddha.
The Japanese rock gardens were intended to be intellectual puzzles for the monks who lived next to them to study and solve. They followed the same principles as suiboku-ga, the black-and-white Japanese ink paintings of the same period, which, according to Zen Buddhist principles, tried to achieve the maximum effect using the minimum essential elements.
One painter who influenced the Japanese garden was Josetsu (1405–1423), a Chinese Zen monk who moved to Japan and introduced a new style of ink-brush painting, moving away from the romantic misty landscapes of the earlier period, and using asymmetry and areas of white space, similar to the white space created by sand in zen gardens, to set apart and highlight a mountain or tree branch or other element of his painting. He became chief painter of the Shogun and influenced a generation of painters and garden designers.
Japanese gardens also follow the principles of perspective of Japanese landscape painting, which feature a close-up plane, an intermediate plane, and a distant plane. The empty space between the different planes has a great importance, and is filled with water, moss, or sand. Garden designers used various optical tricks to give the garden the illusion of being larger than it really is: borrowing scenery (shakkei), employing distant views outside the garden, or using miniature trees and bushes to create the illusion that they are far away.
Noteworthy Japanese gardens
In Japan
The Minister of Education, Culture, Sports, Science and Technology of the government of Japan designates the most notable of the nation's places of scenic beauty as Special Places of Scenic Beauty, under the Law for the Protection of Cultural Properties. As of March 2007, 29 sites are listed, more than half of which are Japanese gardens:
Tōhoku region
Mōtsū-ji Garden (Hiraizumi, Iwate)
Kantō region
Kairaku-en (Mito, Ibaraki)
Rikugi-en (Bunkyō, Tokyo)
Kyu Hamarikyu Gardens (Chūō, Tokyo)
Chūbu region
Kenroku-en (Kanazawa, Ishikawa)
Ichijōdani Asakura Family Gardens (Fukui, Fukui)
Kansai region
Byōdō-in Garden (Uji, Kyoto)
Jisho-ji Garden (Kyoto, Kyoto)
Nijō Castle Ninomaru Garden (Kyoto, Kyoto)
Rokuon-ji Garden (Kyoto, Kyoto)
Ryōan-ji Garden (Kyoto, Kyoto)
Tenryū-ji Garden (Kyoto, Kyoto)
The garden of Sanbōin in Daigo-ji (Kyoto, Kyoto)
The moss garden of Saihō-ji (the "Moss Temple") (Kyoto, Kyoto)
Daitoku-ji Garden (Kyoto, Kyoto)
The garden of Daisen-in in Daitoku-ji (Kyoto, Kyoto)
Murin-an garden, Kyoto, Kyoto
Negoro-ji Garden (Iwade, Wakayama)
Chūgoku region
Adachi Museum of Art Garden (Yasugi, Shimane)
Kōraku-en (Okayama, Okayama)
Matsue Vogel Park (Matsue)
Shūraku-en (Tsuyama)
Shikoku Region
Ritsurin Garden (Takamatsu, Kagawa)
Nakatsu Banshoen (Marugame, Kagawa)
Tensha-en (Uwajima, Ehime)
Kyushu Region
Suizen-ji Jōju-en (Kumamoto, Kumamoto)
Sengan-en (Kagoshima, Kagoshima)
Ryūkyū Islands
Shikina-en (Naha, Okinawa)
However, the Education Minister has no jurisdiction over imperial properties. The following two gardens, administered by the Imperial Household Agency, are also considered great masterpieces.
Katsura Imperial Villa
Shugaku-in Imperial Villa
In Taiwan
Several Japanese gardens were built during the period of Japanese rule in Taiwan.
Taipei Guest House
Beitou Plum Garden in Beitou, Taipei
Beitou Museum in Beitou, Taipei
Nanmon-cho 323 in Zhongzheng District, Taipei
Drop of Water Memorial Hall in Tamsui, New Taipei City
Shoyoen in Kaohsiung
In English-speaking countries
The aesthetic of Japanese gardens was introduced to the English-speaking world by Josiah Conder's Landscape Gardening in Japan (Kelly & Walsh, 1893). Conder was a British architect who had worked for the Japanese government and other clients in Japan from 1877 until his death. The book was published when the general trend of Japonisme, or Japanese influence in the arts of the West, was already well established, and it sparked the first Japanese gardens in the West; a second edition was required in 1912. Initially these were mostly sections of large private gardens, but as the style grew in popularity, many Japanese gardens were, and continue to be, added to public parks and gardens. Conder's principles have sometimes proved hard to follow.
Samuel Newsom's Japanese Garden Construction (1939) offered Japanese aesthetic as a corrective in the construction of rock gardens, which owed their quite separate origins in the West to the mid-19th century desire to grow alpines in an approximation of Alpine scree.
According to the Garden History Society, Japanese landscape gardener Seyemon Kusumoto was involved in the development of around 200 gardens in the UK. In 1937 he exhibited a rock garden at the Chelsea Flower Show, and worked on the Burngreave Estate at Bognor Regis, and also on a Japanese garden at Cottered in Hertfordshire. The lush courtyards at Du Cane Court – an art deco block of flats in Balham, London, built between 1935 and 1938 – were designed by Kusumoto. All four courtyards there may have originally contained ponds. Only one survives, and this is stocked with koi. There are also several stone lanterns, which are meant to symbolise the illumination of one's path through life; similarly, the paths through the gardens are not straight. Japanese maple, Japanese anemone, cherry trees, evergreens, and bamboo are other typical features of Du Cane Court's gardens.
According to David A. Slawson, many of the Japanese gardens that are recreated in the US are of "museum-piece quality". He also writes, however, that as the gardens have been introduced into the Western world, they have become more Americanized, decreasing their natural beauty.
Australia
Adelaide Himeji Garden, South Australia
Auburn Botanical Gardens, in Sydney, New South Wales
Canberra Nara Peace Park in Lennox Gardens, Canberra
Cowra Japanese Garden and Cultural Centre, Cowra, New South Wales
Melbourne Zoo, Victoria
Nerima Gardens, Ipswich, Queensland
"Tsuki-yama-chisen" Japanese Garden, Brisbane
University of Southern Queensland Japanese Garden, "Ju Raku En", the largest Japanese garden in Australia, Toowoomba, Queensland
Canada
Nitobe Memorial Garden, Vancouver, British Columbia
The University of Alberta Botanic Garden, Edmonton, Alberta, formerly named the Devonian Botanic Garden, which contains an extensive Japanese garden
Nikka Yuko Japanese Garden, Lethbridge, Alberta
The Japanese Garden and Pavilion, Montreal Botanical Garden, Quebec
Kariya Park, Mississauga, Ontario
United Kingdom
England
Compton Acres, Dorset
Dartington Hall, Devon
Hall Park, Leeds
Harewood House, Leeds
Holland Park, London
St Mawgan in Pydar, Cornwall
Tatton Park, Cheshire
School of Oriental and African Studies, London
Northern Ireland
Sir Thomas and Lady Dixon Park, Belfast
Fujiyama Japanese Garden
Scotland
Lauriston Castle, Edinburgh – garden opened 2002
Ireland
The Japanese Gardens at the Irish National Stud, Kildare
Lafcadio Hearn Japanese Gardens, Tramore, County Waterford
United States
Anderson Japanese Gardens (Rockford, Illinois)
Japanese Hill-and-Pond Garden at the Brooklyn Botanic Garden (Brooklyn, New York)
Chicago Botanic Garden (Glencoe, Illinois)
Earl Burns Miller Japanese Garden at California State University, Long Beach (Long Beach, California)
Richard & Helen DeVos Japanese Garden at Frederik Meijer Gardens & Sculpture Park (Grand Rapids, Michigan)
Fort Worth Japanese Garden at the Fort Worth Botanic Garden (Fort Worth, Texas)
Japanese Tea Garden at Golden Gate Park (San Francisco, California)
Hakone Gardens (Saratoga, California), used as a filming location for Memoirs of a Geisha
Hayward Japanese Gardens (Hayward, California), the oldest traditionally designed Japanese garden in California
Japanese Garden of Peace at the National Museum of the Pacific War (Fredericksburg, Texas)
Japanese Garden at the Huntington Library (San Marino, California)
Japanese Friendship Garden (Phoenix, Arizona)
Japanese Friendship Garden (Balboa Park) (San Diego, California)
Japanese Friendship Garden (Kelley Park) (San Jose, California)
Japanese Garden at Hermann Park (Houston, Texas)
Morikami Museum and Japanese Gardens (Delray Beach, Florida)
Portland Japanese Garden (Portland, Oregon)
Seattle Japanese Garden at the Washington Park Arboretum (Seattle, Washington)
Kubota Garden (Seattle, Washington)
The Japanese Garden (Los Angeles, California)
Seiwa-en at the Missouri Botanical Garden (St. Louis, Missouri)
Shofuso Japanese House and Garden (Philadelphia, Pennsylvania)
Yuko-En on the Elkhorn (Georgetown, Kentucky)
Japanese Garden (Ashland, Oregon)
In other countries
Argentina
The Buenos Aires Japanese Gardens, of the Fundación Cultural Argentino Japonesa
Japanese Garden, Belén de Escobar
Austria:
Setagayapark, at the corner of Gallmeyergasse, 1190 Vienna – opened 1992 (garden designer Ken Nakajima)
The Japanese Garden in Schlosspark Schönbrunn, Vienna – revitalized 1999
Belgium
Japanse tuin, Hasselt
Japanse tuin, Ostend
Jardin japonais Chevetogne Namur
Brazil
Parque Santos Dumont, São José dos Campos, São Paulo
Bosque Municipal Fábio Barreto, Ribeirão Preto, São Paulo
Bulgaria:
at the Kempinski Hotel Zografski in Sofia; built in 1979 as a large-scale copy of the garden at the Hotel New Otani Tokyo, it was the first and only Japanese garden in the Balkans until 2004.
Chile:
Jardín Japonés de La Serena (Kokoro No Niwa), the largest Japanese garden in South America.
Jardín Japonés de Santiago, built in 1978 and reopened in 1997 by Masahito, Prince Hitachi.
Costa Rica:
Lankester Botanical Gardens, operated by the University of Costa Rica, in Cartago canton
Czech Republic:
Japanese Garden in Prague at Botanical Garden
Japanese Zen Garden in Karlovy Vary at Teplà river
Egypt:
Japanese garden in Cairo at Helwan district
France:
The Departmental Museum of Albert Kahn (Musée Albert-Kahn) in Boulogne-Billancourt has two Japanese gardens.
Japanese Garden at the UNESCO Head Quarters, created by Isamu Noguchi in 1958
Rising sun garden in the botanical garden of Upper Brittany
Jardin japonais Pierre-Baudis, in the Jardin Compans-Cafferelli of Toulouse
Georgia:
in Tbilisi (in Tbilisi Botanical Garden) opened in 2016
Germany:
in Augsburg (in the Botanischer Garten Augsburg)
in Bad Langensalza (called "Kōfuku no niwa"; the second-largest Japanese garden in Germany)
in Berlin (in the Gärten der Welt Park)
in Bielefeld (in borough of Gadderbaum) opened in 2003
in Bonn (in Rheinaue park)
in Bremen (in Overseas Museum and Botanika)
in Cologne (at Museum of East Asian Art)
in Dortmund (in Westfalenpark)
in Düsseldorf (temple garden of the EKŌ-House of Japanese Culture and Nordpark)
in Erfurt (in Egapark)
in Freiburg (in Seepark)
in Hamburg (in the Planten un Blomen Park)
in Hanover (in Stadtpark)
in Kaiserslautern (largest Japanese garden in Germany)
in Karlsruhe (in Zoological garden)
in Leverkusen (in Chempark)
in Munich (in the Englischer Garten)
in Rostock (in IGA park)
in Schwielowsee (Bonsai garden)
in Stuttgart (in Schlossgarten)
in Trier (called "U Raku En")
in Würzburg (called "Ōmi no wa" and is a contribution from its sister city Ōtsu and at Krankenkai)
Greece:
in Athens, established in 2021
Hungary:
on Margaret Island, Budapest
in the Budapest Zoo and Botanical Garden
India:
in Moti Jheel, Kanpur
in Buddha Park, Indira Nagar, Kalianpur, Kanpur
Japanese Garden, Chandigarh
Pune-Okayama Friendship Garden, Pune
Iran:
in Tehran, the National Botanical Garden of Iran; established in 1995
Israel:
Kibbutz Heftziba
Kenya:
Japanese Zen garden in Nairobi at Kitisuru district
Mexico:
Masayoshi Ohira Park in Mexico City
in Los Colomos, Guadalajara
in "Jardines de México" theme park in Cuernavaca
in Parque Tangamanga, San Luis Potosi
Mongolia:
Juulchin street at the corner of Jigjidjav street, Ulaanbaatar, established in 2005 by a Mongolian sumo wrestler
Monaco:
Jardin Japonais, Larvotto
Netherlands:
The Japanse Tuin of Clingendael park
The garden in Lelystad, a private modern Japanese zen (karesansui, meaning "dry rock") garden
The Von Siebold Memorial Garden in Leiden
Nicaragua:
Parque Japón Nicaragua, in Managua
Norway:
Japanhagen in Milde, Bergen – opened 2005, part of the botanical garden of the University of Bergen – (landscape architect Haruto Kobayashi)
Philippines:
The Japanese garden at Rizal Park in Ermita, Manila
The Japanese garden at Lake Caliraya in Cavinti, Laguna
Poland:
The Japanese Garden in Wrocław – founded 1913, restored 1996–1997, destroyed by flood, restored 1999
The Japanese garden in Przelewice – a part of Dendrological Garden in Przelewice founded in 1933
Romania:
in Bucharest, Herăstrău Park
in Cluj-Napoca, Cluj-Napoca Botanical Garden
Russia:
The Japanese garden in Moscow – founded 1983, opened 1987 (landscape architect Ken Nakajima)
Japanese rock garden in Irkutsk – opened 2012 (landscape architect Takuhiro Yamada), part of the Botanic Garden of the Irkutsk State University
Serbia:
The Japanese garden in Botanical Garden Jevremovac – opened 2004 (landscape architects Vera and Mihailo Grbic)
Singapore:
Japanese Garden – a garden island located in Jurong Lake
South Africa:
Japanese Garden in Durban at Athlone district
Spain:
Zen Gardens of the Autonomous University of Barcelona at the Faculty of Translation and Interpretation
Sweden:
Japanska Trädgården in Ronneby Brunnspark, Blekinge
The "Japandalen" (Japan Valley) of Gothenburg Botanical Garden
Turkey:
Eskişehir Anadolu University Japanese Botanical Garden
Uruguay:
Jardín Japonés, Montevideo – opened 2001 by Princess Sayako
Uzbekistan:
in Tashkent, at the exhibition centre
| Technology | Buildings and infrastructure | null |
769065 | https://en.wikipedia.org/wiki/Niche%20construction | Niche construction | Niche construction is the ecological process by which an organism alters its own (or another species') local environment. These alterations can be a physical change to the organism's environment, or they can encompass the active movement of an organism from one habitat to another, where it then experiences different environmental pressures. Examples of niche construction include the building of nests and burrows by animals, the creation of shade, the influencing of wind speed, and alterations to nutrient cycling by plants. Although these modifications are often directly beneficial to the constructor, they are not always: when organisms dump detritus, for example, they can degrade their own local environments. Within some evolutionary frameworks, niche construction drives ecological inheritance, whereby the constructing organism bequeaths to its descendants new ecological, and perhaps even social, environments characterized by specific selective pressures.
Evolution
For niche construction to affect evolution it must satisfy three criteria: 1) the organism must significantly modify environmental conditions, 2) these modifications must influence one or more selection pressures on a recipient organism, and 3) there must be an evolutionary response in at least one recipient population caused by the environmental modification. The first two criteria alone provide evidence of niche construction.
Recently, some biologists have argued that niche construction is an evolutionary process that works in conjunction with natural selection. Evolution entails networks of feedbacks in which previously selected organisms drive environmental changes, and organism-modified environments subsequently select for changes in organisms. The complementary match between an organism and its environment results from the two processes of natural selection and niche construction. The effect of niche construction is especially pronounced in situations where environmental alterations persist for several generations, introducing the evolutionary role of ecological inheritance. This theory emphasizes that organisms inherit two legacies from their ancestors: genes and a modified environment. A niche constructing organism may or may not be considered an ecosystem engineer. Ecosystem engineering is a related but non-evolutionary concept referring to structural changes brought about in the environment by organisms.
Examples
The following are some examples of niche construction:
Earthworms physically and chemically modify the soil in which they live. Only by changing the soil can these primarily aquatic organisms live on land. Earthworm soil processing benefits plant species and other biota present in the soil, as originally pointed out by Darwin in his book The Formation of Vegetable Mould through the Action of Worms.
Lemon ants (Myrmelachista schumanni) employ a specialized method of suppression that regulates the growth of certain trees. They live in the trunks of Duroia hirsuta trees found in the Amazonian rain forest of Peru. Lemon ants use formic acid (a chemical fairly common among species of ants) as a herbicide. By eliminating trees unsuitable for lemon ant colonies, these ants produce distinctive habitats known as Devil's gardens.
Beavers build dams and thereby create lakes that drastically shape and alter riparian ecosystems. These activities modify nutrient cycling and decomposition dynamics, influence the water and materials transported downstream, and ultimately influence plant and community composition and diversity.
Benthic diatoms living in estuarine sediments in the Bay of Fundy, Canada, secrete carbohydrate exudates that bind the sand and stabilize the environment. This changes the physical state of the sand which allows other organisms (such as the amphipod Corophium volutator) to colonize the area.
Chaparral shrubs and pines increase the frequency of forest fire by littering the forest floor with needles, cones, seeds and oils. Because they are adapted to resist fire, this activity benefits them relative to their less fire-resistant competitors.
Saccharomyces cerevisiae yeast creates a novel environment out of fermenting fruit. The fermentation process in turn attracts the fruit flies with which the yeast is closely associated and which it uses for transportation.
Cyanobacteria provide an example on a planetary scale through the production of oxygen as a waste product of photosynthesis (see Great Oxygenation Event). This dramatically changed the composition of the Earth’s atmosphere and oceans, with vast macroevolutionary and ecological consequences.
Microbialites represent ancient niches constructed by bacterial communities, giving evidence that niche construction was present in early life forms.
Consequences
As creatures construct new niches, they can have a significant effect on the world around them.
An important consequence of niche construction is that it can affect the natural selection experienced by the species doing the constructing. The common cuckoo illustrates such a consequence. It parasitizes other birds by laying its eggs in their nests. This has led to several adaptations among the cuckoos, including a short incubation time for their eggs. The eggs need to hatch first so that the chick can push the host's eggs out of the nest, ensuring it has no competition for the parents' attention. Another acquired adaptation is that the chick mimics the calls of multiple young chicks, so that the parents bring in food not just for one offspring but for a whole brood.
Niche construction can also generate co-evolutionary interactions, as illustrated by the above earthworm, beaver and yeast examples.
The development of many organisms, and the recurrence of traits across generations, has been found to depend critically on the construction of developmental environments such as nests by ancestral organisms. Ecological inheritance refers to the inherited resources and conditions, and associated modified selection pressures, that ancestral organisms bequeath to their descendants as a direct result of their niche construction.
Niche construction has important implications for understanding, managing, and conserving ecosystems.
History
Niche construction theory (NCT) was anticipated by diverse thinkers, including the physicist Erwin Schrödinger in his What Is Life? and Mind and Matter essays (1944). An early advocate of the niche construction perspective in biology was the developmental biologist Conrad Waddington. He drew attention to the many ways in which animals modify their selective environments throughout their lives, by choosing and changing their environmental conditions, a phenomenon he termed "the exploitive system".
The niche construction perspective was subsequently brought to prominence through the writings of Harvard evolutionary biologist, Richard Lewontin. In the 1970s and 1980s Lewontin wrote a series of articles on adaptation, in which he pointed out that organisms do not passively adapt through selection to pre-existing conditions, but actively construct important components of their niches.
Oxford biologist John Odling-Smee (1988) coined the term 'niche construction' and was the first to argue that niche construction and ecological inheritance should be recognized as evolutionary processes. Over the next decade, research into niche construction increased rapidly, with a rush of experimental and theoretical studies across a broad range of fields.
Modeling niche construction
Mathematical evolutionary theory explores both the evolution of niche construction and its evolutionary and ecological consequences. These analyses suggest that niche construction is of considerable importance (a minimal simulation sketch follows the list below). For instance, niche construction can:
fix genes or phenotypes that would otherwise be deleterious, create or eliminate equilibria, and affect evolutionary rates;
cause evolutionary time lags, generate momentum, inertia, autocatalytic effects, catastrophic responses to selection, and cyclical dynamics;
drive niche-constructing traits to fixation by creating statistical associations with recipient traits;
facilitate the evolution of cooperation;
regulate environmental states, allowing persistence in otherwise inhospitable conditions, facilitating range expansion and affecting carrying capacities;
drive coevolutionary events, exacerbate and ameliorate competition, affect the likelihood of coexistence and produce macroevolutionary trends.
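The feedback logic behind several of these results can be illustrated with a minimal simulation. The Python sketch below is not any published model: it assumes a single haploid locus whose niche-constructing allele E adds to a shared resource R each generation, while R partly persists across generations (ecological inheritance) and raises the fitness of E. The parameter names and values (decay, gamma, s) are illustrative assumptions only.

```python
# Minimal sketch of selection with niche-constructed feedback (hypothetical
# parameters; not a published model). A haploid allele E modifies a shared
# resource R; R persists partially across generations (ecological
# inheritance) and raises the fitness of E.

def simulate(generations=200, p0=0.05, decay=0.95, gamma=0.5, s=0.1):
    p, R = p0, 0.0
    history = []
    for t in range(generations):
        R = decay * R + gamma * p      # inherited, organism-modified environment
        w_E = 1.0 + s * R              # constructor fitness depends on R
        w_e = 1.0                      # non-constructor baseline fitness
        mean_w = p * w_E + (1 - p) * w_e
        p = p * w_E / mean_w           # standard selection recursion
        history.append((t, p, R))
    return history

for t, p, R in simulate()[::40]:
    print(f"generation {t:3d}: p = {p:.3f}, R = {R:.3f}")
```

Because R accumulates gradually and persists after p changes, the selection response lags behind the allele frequency, which is one way the time-lag effects listed above can arise.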
Humans
Niche construction theory has had a particular impact in the human sciences, including biological anthropology, archaeology, and psychology. Niche construction is now recognized to have played important roles in human evolution, including the evolution of cognitive capabilities. This impact probably reflects the fact that humans possess an unusually potent capability to regulate, construct and destroy their environments, and that this capability is generating some pressing current problems (e.g. climate change, deforestation, urbanization). Human scientists have also been attracted to the niche construction perspective because it recognizes human activities as a directing process, rather than merely the consequence of natural selection. Cultural niche construction can also feed back to affect other cultural processes, and even genetics.
Niche construction theory emphasizes how acquired characters play an evolutionary role, through transforming selective environments. This is particularly relevant to human evolution, where our species appears to have engaged in extensive environmental modification through cultural practices. Such cultural practices are typically not themselves biological adaptations (rather, they are the adaptive product of those much more general adaptations, such as the ability to learn, particularly from others, to teach, to use language, and so forth, that underlie human culture).
Mathematical models have established that cultural niche construction can modify natural selection on human genes and drive evolutionary events. This interaction is known as gene-culture coevolution. There is now little doubt that human cultural niche construction has co-directed human evolution. Humans have modified selection, for instance, by dispersing into new environments with different climatic regimes, devising agricultural practices or domesticating livestock. A well-researched example is the finding that dairy farming created the selection pressure that led to the spread of alleles for adult lactase persistence. Analyses of the human genome have identified many hundreds of genes subject to recent selection, and human cultural activities are thought to be a major source of selection in many cases. The lactase persistence example may be representative of a very general pattern of gene-culture coevolution.
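As a rough illustration of how such gene-culture models are typically structured (a toy sketch under assumed parameters, not a reconstruction of any published analysis), one can let a cultural practice such as dairying spread logistically while the selective advantage of a lactase-persistence allele scales with the practice's prevalence:

```python
# Toy gene-culture coevolution sketch (hypothetical parameters). A cultural
# practice spreads logistically; the selection coefficient favoring a
# lactase-persistence allele grows with the fraction of the population
# practicing dairying.

def gene_culture(generations=300, c0=0.01, a0=0.02, growth=0.08, s_max=0.05):
    c, a = c0, a0   # c: fraction practicing dairying; a: allele frequency
    for t in range(generations):
        c = c + growth * c * (1 - c)        # cultural transmission (logistic)
        s = s_max * c                       # culture sets selection strength
        mean_w = a * (1 + s) + (1 - a)      # mean fitness
        a = a * (1 + s) / mean_w            # allele frequency after selection
        if t % 60 == 0:
            print(f"generation {t:3d}: dairying = {c:.2f}, allele = {a:.3f}")

gene_culture()
```

The qualitative point, consistent with the lactase example above, is that the genetic response accelerates only after the cultural practice becomes common.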
Niche construction is also now central to several accounts of how language evolved. For instance, Derek Bickerton describes how our ancestors constructed scavenging niches that required them to communicate in order to recruit sufficient individuals to drive predators away from megafauna corpses. He maintains that our use of language, in turn, created a new niche in which sophisticated cognition was beneficial.
Current status
While the fact that niche construction occurs is uncontentious, and its study goes back to Darwin's classic books on earthworms and corals, the evolutionary consequences of niche construction have not always been fully appreciated. Researchers differ over the extent to which niche construction requires changes in our understanding of the evolutionary process. Many advocates of the niche-construction perspective align themselves with other progressive elements in seeking an extended evolutionary synthesis, a stance that other prominent evolutionary biologists reject. Laubichler and Renn argue that niche construction theory offers the prospect of a broader synthesis of evolutionary phenomena through "the notion of expanded and multiple inheritance systems (from genomic to ecological, social and cultural)."
Niche construction theory (NCT) remains controversial, particularly amongst orthodox evolutionary biologists. In particular, the claim that niche construction is an evolutionary process has excited controversy. A collaboration between some critics of the niche-construction perspective and one of its advocates attempted to pinpoint their differences. They wrote:
"NCT argues that niche construction is a distinct evolutionary process, potentially of equal importance to natural selection. The skeptics dispute this. For them, evolutionary processes are processes that change gene frequencies, of which they identify four (natural selection, genetic drift, mutation, migration [ie. gene flow])... They do not see how niche construction either generates or sorts genetic variation independently of these other processes, or how it changes gene frequencies in any other way. In contrast, NCT adopts a broader notion of an evolutionary process, one that it shares with some other evolutionary biologists. Although the advocate agrees that there is a useful distinction to be made between processes that modify gene frequencies directly, and factors that play different roles in evolution... The skeptics probably represent the majority position: evolutionary processes are those that change gene frequencies. Advocates of NCT, in contrast, are part of a sizable minority of evolutionary biologists that conceive of evolutionary processes more broadly, as anything that systematically biases the direction or rate of evolution, a criterion that they (but not the skeptics) feel niche construction meets."
The authors conclude that their disagreements reflect a wider dispute within evolutionary theory over whether the modern synthesis is in need of reformulation, as well as different usages of some key terms (e.g., evolutionary process).
Further controversy surrounds the application of niche construction theory to the origins of agriculture within archaeology. In a 2015 review, archaeologist Bruce Smith concluded: "Explanations [for domestication of plants and animals] based on diet breadth modeling are found to have a number of conceptual, theoretical, and methodological flaws; approaches based on niche construction theory are far better supported by the available evidence in the two regions considered [eastern North America and the Neotropics]". However, other researchers see no conflict between niche construction theory and the application of behavioral ecology methods in archaeology.
A critical review by Manan Gupta and colleagues, published in 2017, led to a dispute between critics and proponents.
A further review, published in 2018, reassessed the importance of niche construction and extragenetic adaptation in evolutionary processes.
| Biology and health sciences | Ecology | Biology |
770768 | https://en.wikipedia.org/wiki/Liquid%20fuel | Liquid fuel | Liquid fuels are combustible or energy-generating molecules that can be harnessed to create mechanical energy, usually producing kinetic energy; they also must take the shape of their container. It is the fumes of a liquid fuel, rather than the liquid itself, that are flammable.
Most liquid fuels in widespread use are derived from fossil fuels; however, there are several types, such as hydrogen fuel (for automotive uses), ethanol, and biodiesel, that are also categorized as liquid fuels. Many liquid fuels play a primary role in transportation and the economy.
Liquid fuels are contrasted with solid fuels and gaseous fuels.
General properties
Liquid fuels are generally easy to transport and can be handled with relative ease. Their physical properties vary with temperature, though not as greatly as those of gaseous fuels. Some of these properties are: flash point, the lowest temperature at which a flammable concentration of vapor is produced; fire point, the temperature at which sustained burning of vapor will occur; cloud point, for diesel fuels, the temperature at which dissolved waxy compounds begin to coalesce; and pour point, the temperature below which the fuel is too thick to pour freely. These properties affect the safety and handling of the fuel.
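Since these properties are all temperature thresholds, they can be captured in a small data structure. The sketch below is an illustration only; the class, its field names, and the numeric values are placeholders of my own, not published specifications.

```python
# Encoding the temperature-threshold properties described above.
# All numbers here are illustrative placeholders, not published data.
from dataclasses import dataclass

@dataclass
class LiquidFuel:
    name: str
    flash_point_c: float  # lowest T (deg C) giving a flammable vapor concentration
    fire_point_c: float   # T at which the vapor sustains burning
    pour_point_c: float   # below this T the fuel is too thick to pour freely

    def vapor_ignitable(self, ambient_c: float) -> bool:
        return ambient_c >= self.flash_point_c

    def pourable(self, ambient_c: float) -> bool:
        return ambient_c > self.pour_point_c

fuel = LiquidFuel("example diesel", flash_point_c=52.0,
                  fire_point_c=60.0, pour_point_c=-15.0)
print(fuel.vapor_ignitable(25.0))  # False: ambient air is below the flash point
print(fuel.pourable(-20.0))        # False: below the pour point
```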
Petroleum fuels
Most liquid fuels used currently are produced from petroleum. The most notable of these is gasoline. Scientists generally accept that petroleum formed from the fossilized remains of dead plants and animals by exposure to heat and pressure in the Earth's crust.
Gasoline
Gasoline is the most widely used liquid fuel. Gasoline, as it is known in the United States and Canada, or petrol virtually everywhere else, is made of hydrocarbon molecules (compounds that contain hydrogen and carbon only) forming aliphatic compounds, or chains of carbons with hydrogen atoms attached. However, many aromatic compounds (carbon chains forming rings), such as benzene, are found naturally in gasoline and cause the health risks associated with prolonged exposure to the fuel.
Gasoline is produced by the distillation of crude oil: the desirable fractions are separated from the crude oil in refineries. The crude oil itself is extracted from the ground by several processes, the most commonly seen being beam pumps.
Liquid gasoline itself is not actually burned, but its fumes ignite, causing the remaining liquid to evaporate and then burn. Gasoline is extremely volatile and easily combusts, making any leakage potentially extremely dangerous. Gasoline sold in most countries carries a published octane rating. The octane number is an empirical measure of the resistance of gasoline to combusting prematurely, known as knocking. The higher the octane rating, the more resistant the fuel is to autoignition under high pressures, which allows for a higher compression ratio. Engines with a higher compression ratio, commonly used in race cars and high-performance regular-production automobiles, can produce more power; however, such engines require a higher-octane fuel. Increasing the octane rating was, in the past, achieved by adding anti-knock additives such as tetraethyllead. Because of the environmental impact of lead additives, the octane rating is increased today by refining out the impurities that cause knocking.
Diesel
Conventional diesel is similar to gasoline in that it is a mixture of aliphatic hydrocarbons extracted from petroleum. Diesel may cost more or less than gasoline at the pump, but generally costs less to produce because the refining processes used are simpler. Some countries (particularly Canada, India and Italy) also have lower tax rates on diesel fuels.
After distillation, the diesel fraction is normally processed to reduce the amount of sulfur in the fuel. Sulfur causes corrosion in vehicles, acid rain and higher emissions of soot from the tail pipe (exhaust pipe). Historically, Europe legally required lower sulfur levels than the United States. However, recent US legislation reduced the maximum sulfur content of diesel from 3,000 ppm to 500 ppm in 2007, and to 15 ppm by 2010. Similar changes are also underway in Canada, Australia, New Zealand and several Asian countries.
771718 | https://en.wikipedia.org/wiki/Bichon | Bichon | A bichon is a distinct type of toy dog, typically kept as a companion dog. Believed to be descended from the Barbet, the bichon type is thought to date to at least the 11th century; it was relatively common in 14th-century France, where these dogs were kept as pets of the royalty and aristocracy. From France, they spread throughout the courts of Europe, with dogs of very similar form appearing in a number of portraits of the upper classes of Germany, Portugal and Spain; from Europe, the type also spread to colonies in Africa and South America. The name "bichon" is believed to be a contraction of "barbichon", which means "little barbet".
Breeds
Bichon Frisé
The Bichon Frisé, formerly known as the Bichon Tenerife, Tenerife dog or Canary Island lap dog, was bred on the island of Tenerife; it is believed to be descended from bichon-type dogs introduced from Spain in the 16th century. From the Canary Islands, the breed was imported back to the Continent, where it became an intermittent favourite of the European courts, its fortunes depending upon the fashions of the time; during an ebb in the breed's popularity it found its way into a number of circuses, performing throughout Europe with organ grinders. The breed again fell out of favour at the end of the 19th century, and it was the efforts of Belgian and French enthusiasts in the 1930s that rescued it from extinction, which is why it is today recognised as a Franco-Belgian dog breed.
Bolognese
The Bolognese, also known as the Bichon Bolognese, Bolognese toy dog, Bologneser, Gutschen Hundle or Schoshundle, takes its name from the northern Italian city of Bologna. It is usually claimed that the breed descends from the Maltese. Examples of the breed are believed to have been kept by the Medici family, who gave these dogs as gifts to garner favour. It is said that Louis XIV of France, Philip II of Spain and Catherine the Great of Russia, among other European rulers, all kept some.
Bolonka
The Bolonka, also known as the Bolonka Zwetna, is a recently developed breed from Russia. It is a coloured variation of the white Bolonka Franzuska that was established as a breed in 1988.
Bolonka Franzuska
The name of the breed means "French lap dog" (franzuskaja = French, Bolonka = lap dog; i.e., French Bichon). Since the Renaissance, Bolognese lap dogs have enjoyed great popularity and admiration in princely and royal houses. The close ties between the French and Russian nobility led to the spread of the lap dogs of the French ladies to tsarist Russia. Even Catherine the Great adored these small dogs, and owned several during her reign at the Romanov court.
Coton de Tuléar
The Coton de Tuléar takes its name from the Madagascan port town of Tuléar, where it originated. The ancestors of these dogs were likely brought to Madagascar in the 17th century, where they became extremely popular with the local ruling class; they became so popular that laws were passed to prevent them being owned by commoners. The breed was relatively unknown to the outside world until the 1970s, when examples were exported to Europe and North America.
Havanese
The Havanese, also known as the Cuban shock dog, Bichon Havanais, Havana silk dog, Havana Spaniel, Havana Bichon or sometimes just the Havana, is a bichon-type breed from Cuba, taking its name from Havana. The breed is believed to be descended from bichon-type dogs imported to Cuba by Europeans in the 18th century, where it thrived. The breed's fortunes turned with the Cuban Revolution in the 1950s; the Communists saw these dogs as the property of the former elite and sought to eliminate such dogs; the breed was saved by expatriates who fled with their pets to the United States.
Löwchen
The Löwchen, whose name means "little lion dog" in German, is another French breed of the bichon-type. The breed was known as early as the 16th century; by the 1970s, it was estimated only 70 remained, although thanks to a publicity drive the breed has recovered. Usually clipped to resemble a lion with a mane, when its hair grows naturally its resemblance to other breeds of the type is clear.
Maltese
The Maltese, sometimes called the Bichon Maltaise, is claimed to be descended from dogs brought to Malta by the Phoenicians in ancient times. Proponents of this theory cite ancient artwork from Malta with dogs of similar form, although the first concrete record of this breed dates from 1805 when the Knights of Malta wrote that the once famous local dog was almost extinct. Today's Maltese is likely the result of subsequent crosses, and they became increasingly popular throughout the 19th and 20th centuries.
| Biology and health sciences | Dogs | Animals |
22228064 | https://en.wikipedia.org/wiki/Parkinson%27s%20disease | Parkinson's disease | Parkinson's disease (PD), or simply Parkinson's, is a neurodegenerative disease primarily of the central nervous system, affecting both motor and non-motor systems. Symptoms typically develop gradually, with non-motor issues becoming more prevalent as the disease progresses. Common motor symptoms include tremors, bradykinesia (slowness of movement), rigidity, and balance difficulties, collectively termed parkinsonism. In later stages, Parkinson's disease dementia, falls, and neuropsychiatric problems such as sleep abnormalities, psychosis, mood swings, or behavioral changes may arise.
Most cases of Parkinson's disease are sporadic, though contributing factors have been identified. Pathophysiology involves progressive degeneration of nerve cells in the substantia nigra, a midbrain region that provides dopamine to the basal ganglia, a system involved in voluntary motor control. The cause of this cell death is poorly understood but involves the aggregation of alpha-synuclein into Lewy bodies within neurons. Other potential factors involve genetic and environmental influences, medications, lifestyle, and prior health conditions.
Diagnosis is primarily based on signs and symptoms, typically motor-related, identified through neurological examination. Medical imaging techniques like positron emission tomography can support the diagnosis. Parkinson's typically manifests in individuals over 60, with about one percent affected. In those younger than 50, it is termed "early-onset PD".
No cure for Parkinson's is known, and treatment focuses on alleviating symptoms. Initial treatment typically includes L-DOPA, MAO-B inhibitors, or dopamine agonists. As the disease progresses, these medications become less effective and may cause involuntary muscle movements. Diet and rehabilitation therapies can help improve symptoms. Deep brain stimulation is used to manage severe motor symptoms when drugs are ineffective. There is little evidence for treatments addressing non-motor symptoms, such as sleep disturbances and mood instability. Life expectancy for those with PD is near-normal but is decreased for early-onset.
Classification and terminology
Parkinson's disease (PD) is a neurodegenerative disease affecting both the central and peripheral nervous systems, characterized by the loss of dopamine-producing neurons in the substantia nigra region of the brain. It is classified as a synucleinopathy due to the abnormal accumulation of the protein alpha-synuclein, which aggregates into Lewy bodies within affected neurons.
The loss of dopamine-producing neurons in the substantia nigra initially presents as movement abnormalities, leading to Parkinson's further categorization as a movement disorder. In 30% of cases, disease progression leads to the cognitive decline known as Parkinson's disease dementia (PDD). Alongside dementia with Lewy bodies, PDD is one of the two subtypes of Lewy body dementia.
The four cardinal motor symptoms of Parkinson's—bradykinesia (slowed movements), postural instability, rigidity, and tremor—are called parkinsonism. These four symptoms are not exclusive to Parkinson's and can occur in many other conditions, including HIV infection and recreational drug use. Neurodegenerative diseases that feature parkinsonism but have distinct differences are grouped under the umbrella of Parkinson-plus syndromes or, alternatively, atypical parkinsonian disorders. Parkinson's disease can be attributed to genetic factors or be idiopathic, in which there is no clearly identifiable cause. The latter, also called sporadic Parkinson's, makes up some 85–90% of cases.
Signs and symptoms
Motor
Although a wide spectrum of motor and non-motor symptoms appear in Parkinson's, the cardinal features remain tremor, bradykinesia, rigidity, and postural instability, collectively termed parkinsonism. Appearing in 70–75 percent of PD patients, tremor is often the predominant motor symptom. Resting tremor is the most common, but kinetic tremors—occurring during voluntary movements—and postural tremors—occurring while a position is held against gravity—also occur. Tremor largely affects the hands and feet: a classic parkinsonian tremor is "pill-rolling", a resting tremor in which the thumb and index finger make contact in a circular motion at 4–6 Hz frequency.
Bradykinesia describes difficulties in planning, initiating, and executing movements, resulting in overall slowed movement with reduced amplitude that affects sequential and simultaneous tasks. Bradykinesia can also lead to hypomimia, a reduction in facial expression. Rigidity, also called rigor, refers to a feeling of stiffness and resistance to passive stretching of muscles that occurs in up to 89 percent of cases. Postural instability typically appears in later stages, leading to impaired balance and falls, and also contributes to a forward-stooping posture.
Beyond the cardinal four, other motor deficits, termed secondary motor symptoms, commonly occur. Notably, gait disturbances result in the Parkinsonian gait, which includes shuffling and paroxysmal deficits, where a normal gait is interrupted by rapid footsteps—known as festination—or sudden stops, impairing balance and causing falls. Most PD patients experience speech problems, including stuttering, hypophonic, "soft" speech, slurring, and festinating speech (rapid and poorly intelligible). Handwriting is commonly altered in Parkinson's, decreasing in size—known as micrographia—and becoming jagged and sharply fluctuating. Grip and dexterity are also impaired.
Non-motor
Neuropsychiatric and cognitive
Neuropsychiatric symptoms like anxiety, apathy, depression, hallucinations, and impulse control disorders occur in up to 60% of those with Parkinson's. They often precede motor symptoms and vary with disease progression. Non-motor fluctuations, including dysphoria, fatigue, and slowness of thought, are also common. Some neuropsychiatric symptoms are not directly caused by neurodegeneration but rather by its pharmacological management.
Cognitive impairments rank among the most prevalent and debilitating non-motor symptoms. These deficits may emerge in the early stages or before diagnosis, and their prevalence and severity tend to increase with disease progression. Ranging from mild cognitive impairment to severe Parkinson's disease dementia, these impairments include executive dysfunction, slowed cognitive processing speed, and disruptions in time perception and estimation.
Autonomic
Autonomic nervous system failures, known as dysautonomia, can appear at any stage of Parkinson's. They are among the most debilitating symptoms and greatly reduce quality of life. Although almost all PD patients suffer cardiovascular autonomic dysfunction, only some are symptomatic. Chiefly, orthostatic hypotension—a sustained blood pressure drop of at least 20 mmHg systolic or 10 mmHg diastolic after standing—occurs in 30–50 percent of cases. This can result in lightheadedness or fainting: subsequent falls are associated with higher morbidity and mortality.
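For concreteness, the numeric criterion quoted above reduces to a simple threshold check. The sketch below is a hedged illustration; the function and parameter names are my own shorthand, and a real assessment would also require the drop to be sustained across repeated readings.

```python
# Orthostatic hypotension screen per the threshold quoted above: a sustained
# drop of >= 20 mmHg systolic or >= 10 mmHg diastolic after standing.
# Names are illustrative; clinical use requires sustained, repeated readings.
def orthostatic_hypotension(supine_sys: int, supine_dia: int,
                            standing_sys: int, standing_dia: int) -> bool:
    return (supine_sys - standing_sys) >= 20 or (supine_dia - standing_dia) >= 10

print(orthostatic_hypotension(130, 80, 105, 76))  # True: 25 mmHg systolic drop
```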
Other autonomic failures include gastrointestinal issues like chronic constipation, impaired stomach emptying and subsequent nausea, excessive salivation, and dysphagia (difficulty swallowing): all greatly reduce quality of life. Dysphagia, for instance, can prevent pill swallowing and lead to aspiration pneumonia. Urinary incontinence, sexual dysfunction, and thermoregulatory dysfunction—including heat and cold intolerance and excessive sweating—also frequently occur.
Other non-motor symptoms
Sensory deficits appear in up to 90 percent of patients and are usually present at early stages. Nociceptive and neuropathic pain are common, with peripheral neuropathy affecting up to 55 percent of individuals. Visual impairments are also frequently observed, including deficits in visual acuity, color vision, eye coordination, and visual hallucinations. An impaired sense of smell is also prevalent. PD patients often struggle with spatial awareness, recognizing faces and emotions, and may experience challenges with reading and double vision.
Sleep disorders are highly prevalent in PD, affecting up to 98%. These disorders include insomnia, excessive daytime sleepiness, restless legs syndrome, REM sleep behavior disorder (RBD), and sleep-disordered breathing, many of which can be worsened by medication. RBD may begin years before the initial motor symptoms. Individual presentation of symptoms varies, although most people affected by PD show an altered circadian rhythm at some point of disease progression.
PD is also associated with a variety of skin disorders that include melanoma, seborrheic dermatitis, bullous pemphigoid, and rosacea. Seborrheic dermatitis is recognized as a premotor feature that indicates dysautonomia and demonstrates that PD can be detected not only by changes of nervous tissue, but tissue abnormalities outside the nervous system as well.
Causes
As of 2024, the cause of neurodegeneration in Parkinson's remains unclear, though it is believed to result from the interplay of genetic and environmental factors. The majority of cases are sporadic with no clearly identifiable cause, while approximately 5–10 percent are familial. Around a third of familial cases can be attributed to a single monogenic cause.
Molecularly, abnormal aggregation of alpha-synuclein is considered a key contributor to PD pathogenesis, although the trigger for this aggregation remains debated. Proteostasis disruption and the dysfunction of cell organelles, including endosomes, lysosomes, and mitochondria, are implicated in pathogenesis. Additionally, maladaptive immune and inflammatory responses are potential contributors. The substantial heterogeneity in PD presentation and progression suggests the involvement of multiple interacting triggers and pathogenic pathways.
Genetic
Parkinson's can be narrowly defined as a genetic disease, as rare inherited gene variants have been firmly linked to monogenic PD, and the majority of sporadic cases carry variants that increase PD risk. PD heritability is estimated to range from 22 to 40 percent. Around 15 percent of diagnosed individuals have a family history, of which 5–10 percent can be attributed to a causative risk gene mutation. However, carrying one of these mutations may not lead to disease. Rates of familial PD vary by ethnicity: monogenic PD occurs in up to 40% of Arab-Berber patients and 20% of Ashkenazi Jewish patients.
As of 2024, around 90 genetic risk variants across 78 genomic loci have been identified. Notable risk variants include SNCA (which encodes alpha-synuclein), LRRK2, and VPS35 for autosomal dominant inheritance, and PRKN, PINK1, and DJ1 for autosomal recessive inheritance. LRRK2 is the most common autosomal dominant variant, responsible for 1–2 percent of all PD cases and 40 percent of familial cases. Parkin variants are associated with nearly half of recessive, early-onset monogenic PD. Mutations in the GBA1 gene, linked to Gaucher's disease, are found in 5–15 percent of PD cases. The GBA1 variant frequently leads to cognitive decline.
Environmental
The limited heritability of Parkinson's strongly suggests environmental factors are involved, though identifying these risk factors and establishing causality is challenging due to PD's decade-long prodromal period. However, environmental toxicants such as air pollution, pesticides, and industrial solvents like trichloroethylene are strongly linked to Parkinson's.
Certain pesticides—like paraquat, glyphosate, and rotenone—are the most established environmental toxicants for Parkinson's and are likely causal. PD prevalence is strongly associated with local pesticide use, and many pesticides are mitochondrial toxins. Paraquat, for instance, structurally resembles metabolized MPTP, which selectively kills dopaminergic neurons by inhibiting mitochondrial complex 1 and is widely used to model PD. Pesticide exposure after diagnosis may also accelerate disease progression. An estimated 20 percent of all PD cases could be prevented by eliminating pesticide exposure.
Dietary factors
Emerging research suggests that diet may influence the risk of developing Parkinson's. A 2023 study found that adherence to a Western dietary pattern—characterized by high consumption of red and processed meats, fried foods, high-fat dairy products, and refined grains—is associated with an increased risk of Parkinson's. Individuals with the highest adherence to this dietary pattern had significantly higher odds—approximately seven times—of developing the disease. Conversely, diets rich in fruits, vegetables, whole grains, and lean proteins have been associated with a reduced risk of Parkinson's, potentially offering protective benefits. Further research is needed to establish causality and better understand the mechanisms underlying these associations.
Hypotheses
Prionic hypothesis
The hallmark of Parkinson's is the formation of protein aggregates, beginning with alpha-synuclein fibrils and followed by Lewy bodies and Lewy neurites. The prion hypothesis suggests that alpha-synuclein aggregates are pathogenic and can spread to neighboring, healthy neurons and seed new aggregates. Some propose that the heterogeneity of PD may stem from different "strains" of alpha-synuclein aggregates and varying anatomical sites of origin. Alpha-synuclein propagation has been demonstrated in cell and animal models and is the most popular explanation for the progressive spread through specific neuronal systems. However, therapeutic efforts to clear alpha-synuclein have failed. Additionally, postmortem brain tissue analysis shows that alpha-synuclein pathology does not clearly progress through the nearest neural connections.
Braak's hypothesis
In 2002, Heiko Braak and colleagues proposed that Parkinson's disease begins outside the brain and is triggered by a "neuroinvasion" of some unknown pathogen. The pathogen enters through the nasal cavity and is swallowed into the digestive tract, initiating Lewy pathology in both areas. This alpha-synuclein pathology may then travel from the gut to the central nervous system through the vagus nerve. This theory could explain the presence of Lewy pathology in both the enteric nervous system and olfactory tract neurons, as well as clinical symptoms like loss of smell and gastrointestinal problems. It has also been suggested that environmental toxicants might be ingested in a similar manner to trigger PD.
Catecholaldehyde hypothesis
The enzyme monoamine oxidase (MAO) plays a central role in the metabolism of the neurotransmitter dopamine and other catecholamines. The catecholaldehyde hypothesis argues that the oxidation of dopamine by MAO into 3,4-dihydroxyphenylacetaldehyde (DOPAL) and hydrogen peroxide and the subsequent abnormal accumulation thereof leads to neurodegeneration. The theory posits that DOPAL interacts with alpha-synuclein and causes it to aggregate.
Mitochondrial dysfunction
Whether mitochondrial dysfunction is a cause or consequence of PD pathology remains unclear. Impaired ATP production, increased oxidative stress, and reduced calcium buffering may contribute to neurodegeneration. The finding that MPP+—a respiratory complex I inhibitor and MPTP metabolite—caused parkinsonian symptoms strongly implied that mitochondria contributed to PD pathogenesis. Alpha-synuclein and toxicants like rotenone similarly disrupt respiratory complex I. Additionally, faulty gene variants involved in familial Parkinson's—including PINK1 and Parkin—prevent the elimination of dysfunctional mitochondria through mitophagy.
Neuroinflammation
Some hypothesize that neurodegeneration arises from a chronic neuroinflammatory state created by local activated microglia and infiltrating immune cells. Mitochondrial dysfunction may also drive immune activation, particularly in monogenic PD. Some autoimmune disorders increase the risk of developing PD, supporting an autoimmune contribution. Additionally, influenza and herpes simplex virus infections increase the risk of PD, possibly due to a viral protein resembling alpha-synuclein. Parkinson's risk is also decreased with immunosuppressants.
Pathophysiology
Parkinson's disease has two hallmark pathophysiological processes: the abnormal aggregation of alpha-synuclein that leads to Lewy pathology, and the degeneration of dopaminergic neurons in the substantia nigra pars compacta. The death of these neurons reduces available dopamine in the striatum, which in turn affects circuits controlling movement in the basal ganglia. By the time motor symptoms appear, 50–80 percent of all dopaminergic neurons in the substantia nigra have degenerated.
However, cell death and Lewy pathology are not limited to the substantia nigra. The six-stage Braak system holds that alpha-synuclein pathology begins in the olfactory bulb or outside the central nervous system in the enteric nervous system before ascending the brain stem. In the third Braak stage, Lewy body pathology appears in the substantia nigra, and, by the sixth stage, Lewy pathology has spread to the limbic and neocortical regions. Although Braak staging offers a strong basis for modeling PD progression, the Lewy pathology of around 50 percent of patients does not adhere to the predicted model. Indeed, Lewy pathology is highly variable and may be entirely absent in some PD patients.
Alpha-synuclein pathology
Alpha-synuclein is an intracellular protein typically localized to presynaptic terminals and involved in synaptic vesicle trafficking, intracellular transport, and neurotransmitter release. When misfolded, it can aggregate into oligomers and proto-fibrils that in turn lead to Lewy body formation. Due to their lower molecular weight, oligomers and proto-fibrils may disseminate and be transmitted to other cells more rapidly.
Lewy bodies consist of a fibrillar exterior and granular core. Although alpha-synuclein is the dominant proteinaceous component, the core contains mitochondrial and autophagosomal membrane components, suggesting a link with organelle dysfunction. It is unclear whether Lewy bodies themselves contribute to or are simply the result of PD pathogenesis: alpha-synuclein oligomers can independently mediate cell damage, and neurodegeneration can precede Lewy body formation.
Pathways involved in neurodegeneration
Three major pathways—vesicular trafficking, lysosomal degradation, and mitochondrial maintenance—are known to be affected by and contribute to Parkinson's pathogenesis, with all three linked to alpha-synuclein. High-risk gene variants also impair all three of these processes. All steps of vesicular trafficking are impaired by alpha-synuclein. It blocks endoplasmic reticulum (ER) vesicles from reaching the Golgi—leading to ER stress—and Golgi vesicles from reaching the lysosome, preventing alpha-synuclein degradation and leading to its build-up. Risk gene variants, chiefly GBA, further compromise lysosomal function. Although the mechanism is not well established, alpha-synuclein can impair mitochondrial function and cause subsequent oxidative stress. Mitochondrial dysfunction can in turn lead to further alpha-synuclein accumulation in a positive feedback loop. Microglial activation, possibly caused by alpha-synuclein, is also strongly implicated.
Risk factors
Positive risk factors
As 90 percent of Parkinson's cases are sporadic, the identification of risk factors that may influence disease development or severity is critical. The most significant risk factor for developing PD is age, with a prevalence of 1 percent in those aged over 65 and approximately 4.3 percent in those over 85. Traumatic brain injury significantly increases PD risk, especially if recent. Dairy consumption correlates with a higher risk, possibly due to contaminants like heptachlor epoxide. Although the connection is unclear, melanoma diagnosis is associated with an approximately 45 percent risk increase. There is also an association between methamphetamine use and PD risk.
Protective factors
Although no compounds or activities have been mechanistically established as neuroprotective for Parkinson's, several factors are associated with a decreased risk. Tobacco use and smoking are strongly associated with a decreased risk, reducing the chance of developing PD by up to 70%. Various tobacco and smoke components have been hypothesized to be neuroprotective, including nicotine, carbon monoxide, and monoamine oxidase B inhibitors. Consumption of coffee, tea, or caffeine is also strongly associated with neuroprotection. Prescribed adrenergic antagonists like terazosin may reduce risk.
Although findings have varied, usage of nonsteroidal anti-inflammatory drugs (NSAIDs) like ibuprofen may be neuroprotective. Calcium channel blockers may also have a protective effect, with a 22% risk reduction reported. Higher blood concentrations of urate—a potent antioxidant—have been proposed to be neuroprotective. Although longitudinal studies observe a slight decrease in PD risk among those who consume alcohol—possibly due to alcohol's urate-increasing effect—alcohol abuse may increase risk.
Diagnosis
Diagnosis of Parkinson's disease is largely clinical, relying on medical history and examination of symptoms, with an emphasis on symptoms that appear in later stages. Although early stage diagnosis is not reliable, prodromal diagnosis may consider previous family history of Parkinson's and possible early symptoms like rapid eye movement sleep behavior disorder (RBD), reduced sense of smell, and gastrointestinal issues. Isolated RBD is a particularly significant sign as 90% of those affected will develop some form of neurodegenerative parkinsonism. Diagnosis in later stages requires the manifestation of parkinsonism, specifically bradykinesia and rigidity or tremor. Further support includes other motor and non-motor symptoms and genetic profiling.
A PD diagnosis is typically confirmed by two of the following criteria: responsiveness to levodopa, resting tremor, levodopa-induced dyskinesia, or a dopamine transporter deficit on single-photon emission computed tomography. If these criteria are not met, atypical parkinsonism is considered. However, definitive diagnoses can only be made post-mortem through pathological analysis. Misdiagnosis is common, with a reported error rate of nearly 25 percent, and diagnoses often change during follow-ups. Diagnosis can be further complicated by multiple overlapping conditions.
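The "two of four" rule described above amounts to a simple count over supportive findings. The sketch below is schematic only; the field names are my own shorthand, not part of any clinical instrument.

```python
# Schematic "two of four" supportive-criteria check for a PD diagnosis,
# per the criteria listed above. Keys are illustrative shorthand only.
SUPPORTIVE_CRITERIA = ("levodopa_response", "resting_tremor",
                       "levodopa_dyskinesia", "dat_spect_deficit")

def supports_pd_diagnosis(findings: dict) -> bool:
    return sum(bool(findings.get(k)) for k in SUPPORTIVE_CRITERIA) >= 2

print(supports_pd_diagnosis(
    {"levodopa_response": True, "resting_tremor": True}))  # True: two criteria met
```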
Imaging
Diagnosis can be aided by molecular imaging techniques such as magnetic resonance imaging (MRI), positron emission tomography (PET), and single-photon emission computed tomography (SPECT). As both conventional MRI and computed tomography (CT) scans are usually normal in patients with early PD, they can be used to exclude other pathologies that cause parkinsonism. Diffusion MRI can differentiate PD from multiple system atrophy (MSA). Emerging MRI techniques of at least 3.0 T field strength—including neuromelanin-MRI, 1H-MRSI, and resting state fMRI—may detect abnormalities in the substantia nigra, nigrostriatal pathway, and elsewhere.
Unlike MRI, PET and SPECT use radioisotopes for imaging. Both techniques can aid diagnosis by characterizing PD-associated alterations in the metabolism and transport of dopamine in the basal ganglia. Largely used outside the United States, iodine-123-meta-iodobenzylguanidine myocardial scintigraphy can assess heart muscle denervation to support a PD diagnosis.
Differential diagnosis
Differential diagnosis of Parkinson's is among the most difficult in neurology. Differentiating early PD from atypical parkinsonian disorders is a major difficulty: in its initial stages, PD can be difficult to distinguish from the atypical neurodegenerative parkinsonisms, including MSA, dementia with Lewy bodies, and the tauopathies progressive supranuclear palsy and corticobasal degeneration. Other conditions that may present similarly to PD include vascular parkinsonism, Alzheimer's disease, and frontotemporal dementia.
The International Parkinson and Movement Disorder Society has proposed a set of criteria that, unlike the standard Queen's Square Brain Bank Criteria, includes non-exclusionary "red-flag" clinical features that count against a diagnosis of Parkinson's. A large number of such red flags have been proposed and adopted for the various conditions that can mimic the symptoms of PD. Diagnostic tests, including gene sequencing, molecular imaging techniques, and assessment of smell, may also distinguish PD. MRI is particularly powerful because several of its features are unique to the atypical parkinsonisms.
Management
As of 2024, no disease-modifying therapies exist that reverse or slow neurodegeneration, processes respectively termed neurorestoration and neuroprotection. Patients are typically managed with a holistic approach that combines lifestyle modifications with physical therapy. Current pharmacological interventions purely target symptoms, either by increasing endogenous dopamine levels or by directly mimicking dopamine's effect on the brain. These include dopamine agonists, MAO-B inhibitors, and levodopa, the most widely used and effective drug. The optimal time to initiate pharmacological treatment is debated, but initial dopamine agonist and MAO-B inhibitor treatment followed by later levodopa therapy is common. Invasive procedures such as deep brain stimulation may be used for patients who do not respond to medication.
Medications
Levodopa
Levodopa (L-DOPA) is the most widely used and the most effective therapy—the gold standard—for Parkinson's treatment. The compound occurs naturally and is the immediate precursor for dopamine synthesis in the dopaminergic neurons of the substantia nigra. Levodopa administration reduces the dopamine deficiency, alleviating parkinsonian symptoms.
Despite its efficacy, levodopa poses several challenges and has been called the "pharmacologist's nightmare". Its metabolism outside the brain by aromatic L-amino acid decarboxylase (AAAD) and catechol-O-methyltransferase (COMT) can cause nausea and vomiting; inhibitors like carbidopa, entacapone, and benserazide are usually taken with levodopa to mitigate these effects. Symptoms may become unresponsive to levodopa, with sudden changes between a state of mobility ("ON time") and immobility ("OFF time"). Long-term levodopa use may also induce dyskinesia and motor fluctuations. Although this often causes levodopa use to be delayed to later stages, earlier administration leads to improved motor function and quality of life.
Dopamine agonists
Dopamine agonists are an alternative or complement to levodopa therapy. They activate dopamine receptors in the striatum, with reduced risk of motor fluctuations and dyskinesia. Ergot dopamine agonists were once commonly used but have been largely replaced by non-ergot compounds because of severe adverse effects like pulmonary fibrosis and cardiovascular issues. Non-ergot agonists are efficacious in both early- and late-stage Parkinson's. The agonist apomorphine is often used for drug-resistant OFF time in later-stage PD. However, after five years of use, impulse control disorders may occur in over 40 percent of PD patients taking dopamine agonists. A problematic, narcotic-like withdrawal effect may occur when agonist use is reduced or stopped. Compared to levodopa, dopamine agonists are more likely to cause fatigue, daytime sleepiness, and hallucinations.
MAO-B inhibitors
MAO-B inhibitors—such as safinamide, selegiline and rasagiline—increase the amount of dopamine in the basal ganglia by inhibiting the activity of monoamine oxidase B, an enzyme that breaks down dopamine. These compounds mildly alleviate motor symptoms when used as monotherapy, can be combined with levodopa, and can be used at any disease stage. When used with levodopa, time spent in the off phase is reduced. Selegiline has been shown to delay the need for initial levodopa, suggesting that it might be neuroprotective and slow the progression of the disease. Common side effects are nausea, dizziness, insomnia, sleepiness, and (with selegiline and rasagiline) orthostatic hypotension. MAO-B inhibitors are also known to increase serotonin levels and can cause a potentially dangerous condition known as serotonin syndrome.
Other drugs
Treatments for non-motor symptoms of PD have not been well studied, and many medications are used off-label. A diverse range of symptoms beyond those related to motor function can be treated pharmaceutically. Examples include cholinesterase inhibitors for cognitive impairment and modafinil for excessive daytime sleepiness. Fludrocortisone, midodrine and droxidopa are commonly used off-label for orthostatic hypotension related to autonomic dysfunction. Sublingual atropine or botulinum toxin injections may be used off-label for drooling. SSRIs and SNRIs are often used for depression related to PD, although they carry a risk of serotonin syndrome. Doxepin and rasagiline may reduce physical fatigue in PD. Other treatments have received government approval, such as pimavanserin, the first FDA-approved treatment for PD psychosis. Although its efficacy is inferior to off-label clozapine, it has significantly fewer side effects.
Invasive interventions
Surgery for Parkinson's first appeared in the 19th century and by the 1960s had evolved into ablative brain surgery that lesioned the basal ganglia, thalamus or globus pallidus (a pallidotomy). The discovery of L-DOPA for PD treatment caused ablative therapies to largely disappear. Ablative surgeries experienced a resurgence in the 1990s but were quickly superseded by newly-developed deep brain stimulation (DBS). Although gamma knife and high-intensity focused ultrasound surgeries have been developed for pallidotomies and thalamotomies, their use remains rare.
DBS involves the implantation of electrodes, connected to a device called a neurostimulator, that send electrical impulses to specific parts of the brain. DBS of the subthalamic nucleus and globus pallidus interna has high efficacy for up to two years, but long-term efficacy is unclear and likely decreases with time. DBS typically targets rigidity and tremor, and is recommended for PD patients who are intolerant of or do not respond to medication. Cognitive impairment is the most common exclusion criterion.
Rehabilitation
Although pharmacological therapies can improve symptoms, patients' autonomy and ability to perform everyday tasks are still reduced by PD. As a result, rehabilitation is often useful. However, the scientific support for any single rehabilitation treatment is limited.
Exercise programs are often recommended, with preliminary evidence of efficacy. Regular physical exercise with or without physical therapy can be beneficial to maintain and improve mobility, flexibility, strength, gait speed, and quality of life. Aerobic, mind-body, and resistance training may be beneficial in alleviating PD-associated depression and anxiety. Strength training may increase manual dexterity and strength, facilitating daily tasks that require grasping objects.
For improving flexibility and range of motion in people experiencing rigidity, generalized relaxation techniques such as gentle rocking have been found to decrease excessive muscle tension. Other effective techniques to promote relaxation include slow rotational movements of the extremities and trunk, rhythmic initiation, diaphragmatic breathing, and meditation. Deep diaphragmatic breathing may also improve chest-wall mobility and the vital capacity decreased by the stooped posture and respiratory dysfunction of advanced Parkinson's. Rehabilitation techniques targeting gait and the challenges posed by bradykinesia, shuffling, and decreased arm swing include pole walking, treadmill walking, and marching exercises.
Speech therapies such as the Lee Silverman voice treatment may reduce the effect of speech disorders associated with PD. Occupational therapy is another rehabilitation strategy that can improve quality of life by enabling PD patients to find engaging activities and communal roles, adapt to their living environment, and improve domestic and work abilities.
Diet
Parkinson's poses digestive problems like constipation and delayed emptying of stomach contents, and a balanced diet with periodic nutritional assessments is recommended to avoid weight loss or gain and minimize the consequences of gastrointestinal dysfunction. In particular, a Mediterranean diet is advised and may slow disease progression. Because levodopa can compete for uptake with amino acids derived from dietary protein, it should be taken 30 minutes before meals to minimize such competition. Low-protein diets may also be needed in later stages. As the disease advances, swallowing difficulties often arise. Using thickening agents for liquid intake and an upright posture when eating may be useful; both measures reduce the risk of choking. Gastrostomy can be used to deliver food directly into the stomach. Increased water and fiber intake is used to treat constipation.
Palliative care
As Parkinson's is incurable, palliative care aims to improve the quality of life for both the patient and family by alleviating the symptoms and stress associated with illness. Early integration of palliative care into the disease course is recommended, rather than delaying until later stages. Palliative care specialists can help with physical symptoms, emotional factors such as loss of function and jobs, depression, fear, as well as existential concerns. Palliative care team members also help guide patients and families on difficult decisions caused by disease progression, such as wishes for a feeding tube, noninvasive ventilator or tracheostomy, use of cardiopulmonary resuscitation, and entering hospice care.
Prognosis
As Parkinson's is a heterogeneous condition with multiple etiologies, prognostication can be difficult and prognoses can be highly variable. On average, life expectancy is reduced in those with Parkinson's, with younger age of onset resulting in greater decreases in life expectancy. Although PD subtype categorization is controversial, the 2017 Parkinson's Progression Markers Initiative study identified three broad scorable subtypes of increasing severity and more rapid progression: mild-motor predominant, intermediate, and diffuse malignant. Mean survival post-diagnosis was 20.2, 13.1, and 8.1 years, respectively.
Recent research has focused on the progression of the disease itself, indicating that the heterogeneity of its progression is best described by two subtypes: a fast-progressing and a slow-progressing subtype. Patients with the fast-progressing subtype demonstrate a more accelerated emergence of non-motor, cognitive, and autonomic symptoms, accompanied by a 3.4-fold elevated mortality rate. They also exhibit a more rapid decline in dopaminergic neurons within the substantia nigra, and more often have Alzheimer's disease co-pathology or a mutation in the GBA gene. Conversely, patients with the slow-progressing subtype manifest a more asymmetric disease, marked by a gradual progression of symptoms and a reduced mortality risk. Education level seems to be a protective factor associated with the slow-progressing subtype. The concept of two Parkinson's disease progression subtypes has been replicated in multiple cohorts.
Around 30% of Parkinson's patients develop dementia, which is 12 times more likely to occur in elderly patients with severe PD. Dementia is less likely to arise in patients with tremor-dominant PD. Parkinson's disease dementia is associated with a reduced quality of life in people with PD and their caregivers, increased mortality, and a higher probability of needing nursing home care.
The incidence of falls in Parkinson's patients is approximately 45 to 68%, three times that of healthy individuals, and half of such falls result in serious secondary injuries. Falls increase morbidity and mortality. Around 90% of those with PD develop hypokinetic dysarthria, which worsens with disease progression and can hinder communication. Additionally, over 80% of PD patients develop dysphagia: consequent inhalation of gastric and oropharyngeal secretions can lead to aspiration pneumonia, which is responsible for 70% of deaths in those with PD.
Epidemiology
As of 2024, Parkinson's is the second most common neurodegenerative disease and the fastest-growing in total number of cases. As of 2023, global prevalence was estimated to be 1.51 per 1,000 people. Although the disease is around 40% more common in men, age is the dominant determinant of Parkinson's. Consequently, as global life expectancy has increased, Parkinson's disease prevalence has also risen, with an estimated 74% increase in cases from 1990 to 2016. The total number is predicted to rise to over 12 million patients by 2040, a trajectory some researchers label a pandemic.
This increase may be due to a number of global factors, including prolonged life expectancy, increased industrialisation, and decreased smoking. Although genetics is the sole factor in a minority of cases, most cases of Parkinson's are likely a result of gene-environment interactions: concordance studies with twins have found Parkinson's heritability to be just 30%. The influence of multiple genetic and environmental factors complicates epidemiological efforts.
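As a rough illustration of how twin studies yield heritability figures like the 30% above, one classical estimator is Falconer's formula (given here as a generic sketch; the concordance studies referred to above may use more sophisticated models):

$$h^2 = 2\,(r_{MZ} - r_{DZ}),$$

where $r_{MZ}$ and $r_{DZ}$ are the phenotypic correlations (or concordances) between monozygotic and dizygotic twin pairs. Intuitively, monozygotic twins share twice the segregating genetic material of dizygotic twins, so doubling the difference in their similarity approximates the share of variance attributable to genes.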
Relative to Europe and North America, disease prevalence is lower in Africa but similar in Latin America. Although China is predicted to have nearly half of the global Parkinson's population by 2030, estimates of prevalence in Asia vary. Potential explanations for these geographic differences include genetic variation, environmental factors, health care access, and life expectancy. Although PD incidence and prevalence may vary by race and ethnicity, significant disparities in care, diagnosis, and study participation limit generalizability and lead to conflicting results. Within the United States, high rates of PD have been identified in the Midwest, the South, and agricultural regions of other states, collectively termed the "PD belt". The association between rural residence and Parkinson's has been hypothesized to be caused by environmental factors like herbicides, pesticides, and industrial waste.
History
In 1817, English physician James Parkinson published the first full medical description of the disease as a neurological syndrome in his monograph An Essay on the Shaking Palsy. He presented six clinical cases, including three he had observed on the streets near Hoxton Square in London. Parkinson described three cardinal symptoms: tremor, postural instability and "paralysis" (undistinguished from rigidity or bradykinesia), and speculated that the disease was caused by trauma to the spinal cord.
There was little discussion or investigation of the "shaking palsy" until 1861, when Frenchman Jean-Martin Charcot—regarded as the father of neurology—began expanding Parkinson's description, adding bradykinesia as one of the four cardinal symptoms. In 1877, Charcot renamed the disease after Parkinson, as not all patients displayed the tremor suggested by "shaking palsy". Subsequent neurologists who made early advances to the understanding of Parkinson's include Armand Trousseau, William Gowers, Samuel Kinnier Wilson, and Wilhelm Erb.
Although Parkinson is typically credited with the first detailed description of PD, many previous texts reference some of the disease's clinical signs. In his essay, Parkinson himself acknowledged partial descriptions by Galen, William Cullen, Johann Juncker, and others. Possible earlier but incomplete descriptions include a Nineteenth Dynasty Egyptian papyrus, the ayurvedic text Charaka Samhita, Ecclesiastes 12:3, and a discussion of tremors by Leonardo da Vinci. Multiple traditional Chinese medicine texts may include references to PD, including a discussion in the Yellow Emperor's Internal Classic of a disease with symptoms of tremor, stiffness, staring, and stooped posture. In 2009, a systematic description of PD was found in the Hungarian medical text Pax corporis, written by Ferenc Pápai Páriz in 1690, some 120 years before Parkinson. Although Páriz correctly described all four cardinal signs, the text was published only in Hungarian and was not widely distributed.
In 1912, Frederic Lewy described microscopic particles in affected brains, later named Lewy bodies. In 1919, Konstantin Tretiakoff reported that the substantia nigra was the main brain structure affected, a finding corroborated by Rolf Hassler in 1938. The underlying changes in dopamine signaling were identified in the 1950s, largely by Arvid Carlsson and Oleh Hornykiewicz. In 1997, Polymeropoulos and colleagues at the NIH discovered the first gene for PD, SNCA, which encodes alpha-synuclein. Alpha-synuclein was in turn found to be the main component of Lewy bodies by Spillantini, Trojanowski, Goedert, and others. Anticholinergics and surgery were the only treatments until the use of levodopa, which, although first synthesized by Casimir Funk in 1911, did not enter clinical use until 1967. In the late 1980s, deep brain stimulation, introduced by Alim-Louis Benabid and colleagues in Grenoble, France, emerged as an additional treatment.
Society and culture
Social impact
For some people with PD, masked facial expressions and difficulty moderating facial expressions of emotion or recognizing other people's facial expressions can impact social well-being. As the condition progresses, tremor, other motor symptoms, difficulty communicating, or mobility issues may interfere with social engagement, causing individuals with PD to feel isolated. Public perception and awareness of PD symptoms such as shaking, hallucinating, slurring speech, and being off balance are lacking in some countries and can lead to stigma.
Cost
The economic cost of Parkinson's to both individuals and society is high. Globally, most government health insurance plans do not cover Parkinson's therapies, requiring patients to pay out-of-pocket. Indirect costs include lifetime earnings losses due to premature death, productivity losses, and caregiver burdens. The duration and progressive nature of PD can place a heavy burden on caregivers: family members like spouses dedicate around 22 hours per week to care.
In 2010, the total economic burden of Parkinson's across Europe, including indirect and direct medical costs, was estimated to be €13.9 billion (US$14.9 billion). The total burden in the United States was estimated to be $51.9 billion in 2017 and is projected to surpass $79 billion by 2037. However, as of 2022, no rigorous economic surveys had been performed for low- or middle-income nations. Regardless, preventative care has been identified as crucial to prevent the rapidly increasing incidence of Parkinson's from overwhelming national health systems.
Advocacy
The birthday of James Parkinson, 11 April, has been designated as World Parkinson's Day. A red tulip was chosen by international organizations as the symbol of the disease in 2005; it represents the 'James Parkinson' tulip cultivar, registered in 1981 by a Dutch horticulturalist.
Advocacy organizations include the National Parkinson Foundation, which has provided more than $180 million in care, research, and support services since 1982; the Parkinson's Disease Foundation, which has distributed more than $115 million for research and nearly $50 million for education and advocacy programs since its founding in 1957 by William Black; the American Parkinson Disease Association, founded in 1961; and the European Parkinson's Disease Association, founded in 1992.
Notable cases
In the 21st century, the diagnosis of Parkinson's among notable figures has increased the public's understanding of the disorder. Actor Michael J. Fox was diagnosed with PD at 29 years old and has used his diagnosis to increase awareness of the disease. To illustrate its effects, Fox has appeared without medication in television roles and before the United States Congress. The Michael J. Fox Foundation, which he founded in 2000, has raised over $2 billion for Parkinson's research.
Boxer Muhammad Ali showed signs of PD at age 38 but was not diagnosed until he was 42; he has been called the "world's most famous Parkinson's patient". Whether he had PD or parkinsonism related to boxing is unresolved. Cyclist and Olympic medalist Davis Phinney, diagnosed with Parkinson's at 40, started the Davis Phinney Foundation in 2004 to support PD research.
Several historical figures have been theorized to have had Parkinson's, often framed in terms of the industriousness and inflexibility of the so-called "parkinsonian personality". For instance, English philosopher Thomas Hobbes was diagnosed with "shaking palsy"—assumed to have been Parkinson's—but continued writing works such as Leviathan. Adolf Hitler is widely believed to have had Parkinson's, and the condition may have influenced his decision making. Mao Zedong was also reported to have died from the disorder.
Clinical research
As of 2024, no disease-modifying therapies exist that reverse or slow the progression of Parkinson's. Active research directions include the search for new animal models of the disease and development and trial of gene therapy, stem cell transplants, and neuroprotective agents. Improved treatments will likely combine therapeutic strategies to manage symptoms and enhance outcomes. Reliable biomarkers are needed for early diagnosis, and research criteria for their identification have been established.
Neuroprotective treatments
Anti-alpha-synuclein drugs that prevent alpha-synuclein oligomerization and aggregation, or that promote its clearance, are under active investigation; potential therapeutic strategies include small molecules and immunotherapies such as vaccines and monoclonal antibodies. While immunotherapies show promise, their efficacy is often inconsistent. Anti-inflammatory drugs that target NLRP3 and the JAK-STAT signaling pathway offer another potential therapeutic approach.
As the gut microbiome in PD is often disrupted and produces toxic compounds, fecal microbiota transplants might restore a healthy microbiome and alleviate various motor and non-motor symptoms. Neurotrophic factors—peptides that enhance the growth, maturation, and survival of neurons—show modest results but require invasive surgical administration. Viral vectors may represent a more feasible delivery platform. Calcium channel blockers may restore the calcium imbalance present in Parkinson's, and are being investigated as a neuroprotective treatment. Other therapies, like deferiprone, may reduce the abnormal accumulation of iron in PD.
Cell-based therapies
In contrast to other neurodegenerative disorders, many Parkinson's symptoms can be attributed to the loss of a single cell type. Consequently, dopaminergic neuron regeneration is a promising therapeutic approach. Although most initial research sought to generate dopaminergic neuron precursor cells from fetal brain tissue, pluripotent stem cells—particularly induced pluripotent stem cells (iPSCs)—have become an increasingly popular tissue source.
Both fetal and iPSC-derived dopaminergic neurons have been transplanted into patients in clinical trials. Although some patients see improvements, the results are highly variable. Adverse effects, such as dyskinesia arising from excess dopamine release by the transplanted tissue, have also been observed.
Gene therapy
Gene therapy for Parkinson's seeks to restore the healthy function of dopaminergic neurons in the substantia nigra by delivering genetic material—typically through a viral vector—to these diseased cells. This material may deliver a functional, wildtype version of a gene, or knock down a pathological variant. Experimental gene therapies for PD have aimed to increase the expression of growth factors or of enzymes involved in dopamine synthesis, like tyrosine hydroxylase. The one-time delivery of genes circumvents the recurrent, invasive administration that some peptides and proteins require to reach the brain. MicroRNAs are an emerging PD gene therapy platform that may serve as an alternative to viral vectors.
| Biology and health sciences | Non-infectious disease | null |
4719512 | https://en.wikipedia.org/wiki/Skin%20infection | Skin infection | A skin infection is an infection of the skin in humans and other animals that can also affect the associated soft tissues, such as loose connective tissue and mucous membranes. Such infections comprise a category termed skin and skin structure infections (SSSIs) or skin and soft tissue infections (SSTIs); acute bacterial cases are termed acute bacterial SSSIs (ABSSSIs). They are distinguished from dermatitis (inflammation of the skin), although skin infections can result in skin inflammation.
Causes
Bacterial
In 2013, bacterial skin infections affected about 155 million people, and cellulitis occurred in about 600 million people. Bacterial skin infections include:
Cellulitis, a diffuse inflammation of connective tissue with severe inflammation of the dermal and subcutaneous layers of the skin. Cellulitis can be further classified into purulent and non-purulent forms, based on the most likely causative agent and the symptom presentation. Purulent cellulitis is often caused by Staphylococcus aureus, including both methicillin-sensitive (MSSA) and methicillin-resistant S. aureus (MRSA). Non-purulent cellulitis is most often associated with group A beta-hemolytic streptococci, such as Streptococcus pyogenes. In rare cases, the infection can progress into necrotizing fasciitis, a serious and potentially fatal infection.
Erysipelas, a bacterial infection which primarily affects the superficial dermis and often involves the superficial lymphatics. Unlike cellulitis, it does not affect deeper layers of the skin. It is primarily caused by group A beta-hemolytic streptococci, with Streptococcus pyogenes being the most common pathogen.
Folliculitis, a skin condition in which hair follicles, located in the dermal layer of the skin, become infected and inflamed. It is predominantly caused by bacterial infections, especially Staphylococcus aureus, leading to superficial bacterial folliculitis. Other causative agents of folliculitis include fungi (most commonly Malassezia species), viruses (such as herpes simplex virus), and mites (Demodex species).
Impetigo, a highly contagious ABSSSI (acute bacterial skin and skin structure infection) common among pre-school children, primarily associated with the pathogens S. aureus and S. pyogenes. Impetigo has a characteristic appearance, with yellow (honey-coloured), crusted lesions occurring around the mouth, nose, and chin. It is estimated that, at any given time, it affects 140 million people globally. Impetigo can be further classified into bullous and nonbullous forms. Nonbullous impetigo is the most common form, representing approximately 70% of diagnosed cases. The remaining 30% of cases represent the bullous form, which is primarily caused by S. aureus. In rare instances, bullous impetigo can spread and lead to staphylococcal scalded skin syndrome (SSSS), a potentially life-threatening infection.
Fungal
Fungal skin infections may present as either a superficial or deep infection of the skin, hair, and/or nails. Mycetomas are a broad group of fungal infections that characteristically originate in the skin and subcutaneous tissues of the foot. If not treated appropriately and in a timely fashion, mycetoma infections can extend to deeper tissues like bones and joints, causing osteomyelitis. Extensive osteomyelitis can necessitate surgical bone resections and even lower-limb amputation.
As of 2010, they affect about one billion people globally. Some examples of common fungal skin infections include:
Dermatophytosis, also known as ringworm, is a superficial fungal infection of the skin caused by several different species of fungi. The fungal genera which cause skin infections in humans include Trichophyton, Epidermophyton, and Microsporum. Although dermatophytosis is a fairly common fungal skin infection worldwide, it is more prevalent in areas with high humidity and environmental temperature. It is estimated that approximately 20–25% of the world's population is affected by superficial fungal infections, with dermatophytosis predominating.
Oral candidiasis, also commonly referred to as oral thrush, is a fungal infection caused mainly by Candida albicans, which affects the mucous membranes of the oral cavity and the tongue. C. albicans accounts for approximately 95% of oral thrush cases. The fungus is part of the normal oral flora and only causes an infection when host immune and microbiota barriers are impaired, providing C. albicans with an opportunity to overgrow. It is estimated that oral candidiasis affects approximately 2 million people worldwide every year.
Onychomycosis, a fungal infection which predominantly affects toenails. The two most common causative agents of onychomycosis are Trichophyton mentagrophytes and Trichophyton rubrum. Common signs and symptoms include nail discolouration and thickening, separation of the nail from the nail bed, and nail brittleness. The estimated prevalence of onychomycosis in North America is between 8.7% and 13.8%.
Parasitic
Parasitic infestations of the skin are caused by several phyla of organisms, including Annelida, Arthropoda, Bryozoa, Chordata, Cnidaria, Cyanobacteria, Echinodermata, Nemathelminthes, Platyhelminthes, and Protozoa.
Viral
Virus-related cutaneous conditions are caused by obligate intracellular agents and derive from both DNA and RNA viruses. Some examples of viral skin infections include:
Warts, benign proliferative skin lesions caused by human papillomavirus (HPV). Warts vary in shape, size, appearance, and location on the body. For example, plantar warts (verrucae plantaris) occur on the soles of the feet and appear as thick calluses. Other types of warts include genital warts, flat warts, mosaic warts, and periungual warts. Common treatment options include salicylic acid and cryotherapy with liquid nitrogen.
Chickenpox, a highly contagious skin disease caused by the varicella-zoster virus (VZV). It is characterized by a pruritic, blister-like rash which may cover the entire body, and it affects all age groups. Rates of chickenpox are higher in countries which lack adequate immunization programs. In 2014, the global incidence of serious chickenpox infections requiring hospitalization was estimated at 4.2 million.
Hand, foot, and mouth disease (HFMD), a common, often self-limiting viral illness which typically affects infants and children, although it may also occur in adults. It is characterized by a low-grade fever and a maculopapular rash on the palms of the hands, the soles of the feet, and around the mouth. It is caused by human enteroviruses and coxsackieviruses, which are positive-sense single-stranded RNA viruses.
| Biology and health sciences | Infectious diseases by site | Health |
4720254 | https://en.wikipedia.org/wiki/Men%27s%20health | Men's health | Men's health is a state of complete physical, mental, and social well-being, as experienced by men, and not merely the absence of disease. Differences in men's health compared to women's can be attributed to biological factors, behavioural factors, and social factors (e.g., occupations).
Men's health often relates to biological factors such as the male reproductive system or to conditions caused by hormones specific to, or most notable in, males. Some conditions that affect both men and women, such as cancer and injury, manifest differently in men. Some diseases that affect both sexes are statistically more common in men. In terms of behavioural factors, men are more likely to make unhealthy or risky choices and less likely to seek medical care.
Men may face issues not directly related to their biology, such as gender-differentiated access to medical treatment and other socioeconomic factors. Outside Sub-Saharan Africa, men are at greater risk of HIV/AIDS. This is associated with unsafe sexual activity that is often nonconsensual.
Definition
Men's health refers to the state of physical, mental, and social well-being of men, and encompasses a wide range of issues that are unique to men or that affect men differently than women. This can include issues related to reproductive health, sexual health, cardiovascular health, mental health, and cancer prevention and treatment. Men's health also encompasses lifestyle factors such as diet, exercise, and stress management, as well as access to healthcare and preventative measures.
Life expectancy
Despite overall increases in life expectancy globally, men's life expectancy is less than women's, regardless of race and geographic region. The global gap between the life expectancy of men and women has remained at approximately 4.4 years since 2016, according to the WHO. Life expectancy is a statistical measure that represents the average number of years that a person is expected to live, based on current mortality rates. It is typically calculated at birth and can vary depending on factors such as gender, race, and location. For example, life expectancy in many developed countries is higher than in developing countries, and life expectancy for women is generally higher than for men.
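As a rough sketch of how such a measure is computed (the standard period life-table formulation, stated generically rather than taken from the WHO's exact methodology), life expectancy at birth can be approximated as

$$e_0 \approx \tfrac{1}{2} + \sum_{x=1}^{\omega} \prod_{t=0}^{x-1} (1 - q_t),$$

where $q_t$ is the probability of dying between exact ages $t$ and $t+1$ under current mortality rates, $\omega$ is the highest age considered, and the added half-year reflects the convention that deaths occur, on average, midway through the year. Because male $q_t$ values exceed female values at almost every age, the male sum—and hence male life expectancy—comes out lower.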
However, the gap does vary by country, with low-income countries having a smaller gap in life expectancy. Biological, behavioural, and social factors contribute to a lower overall life expectancy in men; however, the individual importance of each factor is not known. Overall attitudes towards health differ by gender. Men are generally less likely to be proactive in seeking healthcare, resulting in poorer health outcomes.
Men are difficult to recruit to health promotion interventions, and the value of adopting a gender-sensitive approach to engaging and retaining men in such interventions has been reported.
Biological influences on lower male life expectancy include genetics and hormones. In males, the 23rd pair of chromosomes consists of an X and a Y chromosome, rather than the two X chromosomes found in females. The Y chromosome is smaller and contains fewer genes. This distinction may contribute to the discrepancy between men's and women's life expectancy, as the additional X chromosome in females may counterbalance potential disease-producing genes from the other X chromosome. Since males do not have a second X chromosome, they lack this potential protection. Hormonally, testosterone is the major male sex hormone, important for a number of functions in males and, to a lesser extent, in females. Low testosterone in males is a risk factor for cardiovascular diseases, while high testosterone levels can contribute to prostate diseases. These hormonal factors play a direct role in the life expectancy of men compared to women.
In terms of behavioural factors, men consume more alcohol, other substances, and tobacco than women, resulting in increased rates of diseases such as lung cancer, cardiovascular disease, and cirrhosis of the liver. Sedentary behaviour, associated with many chronic diseases, also seems to be more prevalent in men. These diseases influence the overall life expectancy of men. For example, according to the World Health Organization, 3.14 million men died from causes linked to excessive alcohol use in 2010, compared to 1.72 million women. Men are more likely than women to engage in over 30 risky behaviours associated with increased morbidity, injury, and mortality. Additionally, despite making disproportionately fewer suicide attempts than women, men have significantly higher rates of death by suicide.
Social determinants of men's health involve factors such as greater levels of occupational exposure to physical and chemical hazards than women experience. Historically, men had higher work-related stress, which negatively impacted their life expectancy by increasing the risk of hypertension, heart attack, and stroke. However, as women's role in the workplace continues to be established, these risks are no longer specific to just men.
Mental health
Stress
Although most stress symptoms are similar in men and women, stress can be experienced differently by men. The American Psychological Association says that men are not as likely as women to report emotional and physical symptoms of stress. They say men are more likely to withdraw socially when stressed and are more likely to report doing nothing to manage their stress. Men are more likely than women to cite work as a source of stress; women are more likely to report that money and the economy are a source of stress.
Mental stress in men is associated with various complications which can affect men's health: high blood pressure and subsequent cardiovascular morbidity and mortality, cardiovascular disease, erectile dysfunction (impotence) and possibly reduced fertility (due to reduced libido and frequency of intercourse).
Fathers experience stress in the period shortly before and after birth (the perinatal period). Stress levels tend to increase from the prenatal period up until the time of birth, and then decrease from the time of birth to the later postnatal period. Factors which contribute to stress in fathers include negative feelings about the pregnancy, role restrictions related to becoming a father, fear of childbirth, and feelings of incompetence related to infant care. This stress has a negative impact on fathers. Higher levels of stress in fathers are associated with mental health issues such as anxiety, depression, psychological distress, and fatigue.
Substance use disorders
Substance use disorder and alcohol use disorder can be defined as a pattern of harmful use of a substance for mood-altering purposes. Alcohol is one of the most commonly used substances taken in excess, and men are up to twice as likely as women to develop alcohol use disorder. Gender differences in alcohol consumption are universal, although their size varies: more drinking and heavy, binge drinking occur in men, whereas more long-term abstention occurs in women. Moreover, men are more likely to abuse substances such as drugs, with a lifetime prevalence of 11.5% in men compared to 6.4% in women in the United States. Additionally, males are more likely than females to become addicted to substances and to abuse substances due to peer pressure.
Risks
Substance and alcohol use disorders are associated with various mental health issues in men and women. Mental health problems are not only a result of drinking excess alcohol; they can also cause people to drink too much. A major reason for consuming alcohol is to change mood or mental state. Alcohol can temporarily alleviate feelings of anxiety and depression, and some people use it as a form of self-medication in an attempt to counteract these negative feelings. However, alcohol consumption can worsen existing mental health problems. Evidence shows that people who consume high amounts of alcohol or use illicit substances are vulnerable to an increased risk of developing mental health problems. Men with mental health disorders, like post-traumatic stress disorder, are twice as likely as women to develop a substance use disorder.
Treatment
Gender differences have been identified in seeking treatment for mental health and substance use disorders. Women are more likely to seek help from, and disclose mental health problems to, their primary care physicians, whereas men are more likely to seek specialist and inpatient care. Men are more likely than women to disclose problems with alcohol use to their health care provider. In the United States, there are more men than women in treatment for substance use disorders. Both men and women achieve better mental health outcomes with early treatment interventions.
Suicide
Suicide has a high incidence rate in men but often lacks public awareness. Suicide is the 13th leading cause of death globally, and in most parts of the world men are significantly more likely to die by suicide than women, although women are significantly more likely to attempt suicide. This is known as the "gender paradox of suicidal behaviour". Worldwide, the ratio of suicide deaths was 1.8 men per woman in 2016, according to the World Health Organization. This gender disparity varies greatly between countries. For example, in the United Kingdom and Australia the men-to-women ratio is approximately 3:1, and in the United States, Russia, and Argentina approximately 4:1. In South Africa, the suicide rate among men is five times greater than among women. In East Asian countries, however, the gender gap in suicide rates is relatively smaller, with men-to-women ratios ranging from 1:1 to 2:1. Multiple factors exist to explain this gender gap, such as men more frequently using high-mortality methods such as hanging, carbon-monoxide poisoning, and lethal weapons. Additional contributing factors include the pressures of traditional gender roles for men and the socialization of men in society.
Risk factors
Variations in the risk factors associated with suicidal behaviour between men and women contribute to the discrepancy in suicide rates. Suicide is complex and cannot simply be attributed to a single cause; however, there are psychological, social, and psychiatric factors to consider.
Mental illness is a major risk factor for suicide for both men and women. Common mental illnesses that are associated with suicide include depression, bipolar disorder, schizophrenia, and substance abuse disorders. In addition to mental illness, psychosocial factors such as unemployment and occupational stress are established risk factors for men. Alcohol use disorder is a risk factor that is much more prevalent in men than in women, which increases risks of depression and impulsive behaviours. This problem is exacerbated in men, as they are twice as likely as women to develop alcohol use disorder.
Reluctance to seek help is another prevalent risk factor facing men, stemming from internalized notions of masculinity. Traditional masculine stereotypes place expectations of strength and stoicism on men, while any indication of vulnerability, such as consulting mental health services, is perceived as weak and emasculating. As a result, depression is under-diagnosed in men and may often remain untreated, which may lead to suicide.
Warning signs
Identifying warning signs is important for reducing suicide rates worldwide, but particularly for men, as their distress may be expressed in a manner that is not easily recognisable. For instance, depression and suicidal thoughts may manifest in the form of anger, hostility, and irritability. Additionally, risk-taking and avoidance behaviours may be demonstrated more commonly in men.
Common conditions
The following is a list of diseases or conditions that have a high prevalence in men (relative to women).
Cardiovascular conditions:
Cardiovascular disease
Atherosclerosis
Heart attack
Hypertension
Stroke
High cholesterol
Respiratory conditions:
Respiratory disease
COPD
Lung cancer
Pneumonia
Mental health conditions:
Autism
Major depressive disorder
Suicide
Addiction
Cancer:
Prostate cancer
Testicular cancer
Colorectal cancer
Skin cancer
Sexual health:
HIV/AIDS
Erectile dysfunction
Ejaculation disorders
Hypoactive sexual desire disorder
Other:
Unintentional injuries
Diabetes
Influenza
Liver disease
Kidney disease
Alcohol abuse
Organisations
In the UK, the Men's Health Forum was founded in 1994. It was originally established by the Royal College of Nursing but became completely independent of the RCN when it became a charity in 2001. The first National Men's Health Week was held in the US in 1994. The first UK week took place in 2002, and the event went international (International Men's Health Week) the following year. In 2005, the world's first professor of men's health, Alan White, was appointed at Leeds Metropolitan University in northern England.
In Australia, the Men's Health Information and Resource Centre advocates a salutogenic approach to male health which focuses on the causal factors behind health. The centre is led by John Macdonald and was established in 1999. The Centre leads and executes Men's Health Week in Australia with core funding from the NSW Ministry of Health.
The Global Action on Men's Health (GAMH) was established in 2013 and was registered as a UK-based charity in May 2018. It is a collaborative initiative to bring together men's health organizations from across the globe into a new global network. GAMH is working at international and national levels to encourage international agencies (such as the World Health Organization) and individual governments to develop research, policies and strategies on men's health.
| Biology and health sciences | Health and fitness: General | Health |
4720859 | https://en.wikipedia.org/wiki/Acariformes | Acariformes | The Acariformes, also known as the Actinotrichida, are the more diverse of the two superorders of mites. Over 32,000 described species are found in 351 families, with an estimated total of 440,000 to 929,000 species, including undescribed species.
Systematics and taxonomy
The Acariformes can be divided into two main clades – Sarcoptiformes and Trombidiformes. In addition, a paraphyletic group containing primitive forms, the Endeostigmata, was formerly also considered distinct. The latter is composed of only 10 families of little-studied, minute, soft-bodied mites that ingest solid food, such as fungi, algae, and soft-bodied invertebrates such as nematodes, rotifers, and tardigrades. These clades were formerly considered suborders, but this does not allow for a sufficiently precise classification of the mites and has been abandoned in more modern treatments; the Endeostigmata are variously considered to form a suborder on their own (the old view) or are included mainly in the Sarcoptiformes, thus making both groups monophyletic. The superfamily Eriophyoidea, traditionally considered members of the Trombidiformes, has been found in genomic analyses to consist of basal mites, sister to the clade containing Sarcoptiformes and Trombidiformes.
Another group often mentioned is the Actinedida, but in treatments like the present one this is split up between the Sarcoptiformes (and formerly the separate Endeostigmata) and the Trombidiformes (which contains the bulk of the "Actinedida"), because it appears to be a massively paraphyletic "wastebin taxon" uniting all Acariformes that are not "typical" Oribatida and Astigmatina. The Trombidiformes present their own problems. The small group Sphaerolichida appears to be the most ancient lineage among them. However, the Prostigmata are variously subdivided into the Anystina, the Eleutherengona, and the Eupodina. The delimitation and interrelationships of these groups are entirely unclear: most analyses find one of the latter two, but not the other, to be a subgroup of the Anystina, yet neither of these mutually contradictory hypotheses is very robust; possibly this is a simple error, because phylogenetic software usually fails in handling nondichotomous phylogenies. Consequently, it may be best for the time being to consider each of the three main prostigmatan lineages to be equally distinct from the other two.
Fossil record
The oldest fossils of acariform mites are from the Rhynie Chert, Scotland, which dates to the early Devonian, around 410 million years ago. The Cretaceous Immensmaris chewbaccei was the largest fossil acariform mite and also the largest erythraeoid mite ever recorded.
Diversity
The Sarcoptiformes ingest solid food, being mainly microherbivores, fungivores, and detritivores. Some Astigmatina – the Psoroptidia – have become associated with vertebrates and nest-building insects. These include the well-known house dust mites, scab mites and mange mites, stored product mites, feather mites, and some fur mites. The relationships between their main groups are not well resolved and are subject to revision. In particular, it appears as if the Oribatida need to be split in two, as the Astigmatina are closer to some of them (e.g. certain Desmonomata) than the latter are to other "Oribatida".
The Trombidiformes are most noted for the economic damage caused by many plant parasite species. All of the most important plant pests among the Acari are trombidiformans, such as spider mites (Tetranychidae) and Eriophyidae. Many species are also predators, fungivores, and animal parasites. Some of the most conspicuous species of free-living mites are the relatively large and bright red velvet mites, that belong to the family Trombidiidae.
Oribatid mites, and to a much lesser extent others, are a source of alkaloids in poison frogs (namely small species like the strawberry poison-dart frog Oophaga pumilio). Such frogs raised without these oribatids in their diets do not develop the strong poisons associated with them in the wild.
Parthenogenesis
Acariformes species appear to have evolved from a sexual ancestor, and the primary manner of reproduction during the course of evolution has been sexual reproduction. However, within the superorder Acariformes, parthenogenetic species have arisen numerous times. In contrast to the commonly held view that parthenogenetic lineages are short-lived, four species-rich parthenogenetic clusters of the order Oribatida are very ancient and likely arose 400–300 million years ago. In some parthenogenetic species that undergo automixis (a kind of self-fertilization that retains meiosis), sexual reproduction has re-emerged.
Examples
Eriophyidae, plant parasites, e.g. Acalitus essigi (redberry mite)
Sarcoptiformes
Cheese mites
Epidermoptidae
Gastronyssidae
Sarcoptes scabiei
Trombidiformes
Demodecidae, e.g. Demodex mites
Erythraeidae
Labidostommatidae
Polydiscia deuterosminthurus
Smarididae
Spider mites, e.g. Tetranychus urticae
Tarsonemidae, a number of which are plant pests, e.g. Acarapis woodi
Tydeidae
| Biology and health sciences | Arachnids | Animals |
4720943 | https://en.wikipedia.org/wiki/Parasitiformes | Parasitiformes | Parasitiformes are a superorder of arachnids, constituting one of the two major groups of mites, alongside the Acariformes. Parasitiformes has, at times, been classified at the rank of order or suborder.
It is uncertain whether Parasitiformes and Acariformes are closely related; in many analyses, each is recovered as more closely related to other arachnids. Amongst the best-known members of the group are the ticks, though the Mesostigmata is by far the most diverse group, with over 8,000 described species, including economically important species such as the varroa mite.
Description
Taxonomy
Holothyrida – a small group of scavenging mites native to former Gondwana landmasses
Ixodida – ticks
Mesostigmata – a large order of predatory and parasitic mites
Opilioacarida – a small group of large, long-legged segmented mites.
Many species are parasitic (the most famous of which are ticks), but not all. For example, about half of the 10,000 known species in the suborder Mesostigmata are predatory and cryptozoic, living in soil litter, rotting wood, dung, carrion, nests, or house dust. A few species have switched to grazing on fungi or ingesting spores or pollen. Phylogenetic relationships of the groups follow Klompen (2010).
The phytoseiid mites, which account for about 15% of all described Mesostigmata, are used with great success for biological control.
There are over 12,000 described species of Parasitiformes, and the total estimate is between 100,000 and 200,000 species.
Evolutionary history
The oldest known fossils of Parasitiformes, representing three of the four modern groups (Ixodida, Mesostigmata, and Opilioacarida), are known from Cretaceous-aged amber dating to around 100 million years ago. They are suspected to have diversified substantially earlier. The genetic divergence between the groups is less than that of acariform mites, suggesting a younger origin, likely dating to the late Paleozoic.
| Biology and health sciences | Arachnids | Animals |
4723892 | https://en.wikipedia.org/wiki/Stamp%20mill | Stamp mill | A stamp mill (or stamp battery or stamping mill) is a type of mill machine that crushes material by pounding rather than grinding, either for further processing or for extraction of metallic ores. Breaking material down is a type of unit operation.
Description
A stamp mill consists of a set of heavy steel (iron-shod wood in some cases) stamps, loosely held vertically in a frame, in which the stamps can slide up and down. They are lifted by cams on a horizontal rotating shaft. As the cam moves from under the stamp, the stamp falls onto the ore below, crushing the rock, and the lifting process is repeated at the next pass of the cam.
Each frame-and-stamp set is sometimes called a "battery" or, confusingly, a "stamp", and mills are sometimes categorised by how many stamps they have, i.e. a "10 stamp mill" has 10 sets. They usually are arranged linearly, but when a mill is enlarged, a new line may be constructed rather than extending the existing one. Abandoned mill sites (as documented by industrial archaeologists) usually have linear rows of foundation sets as their most prominent visible feature, as the overall apparatus can exceed 20 feet in height, requiring large foundations. Stamps are usually arranged in sets of five.
Some ore processing applications used large quantities of water, so some stamp mills are located near natural or artificial bodies of water. For example, the Redridge Steel Dam was built to supply stamp mills with process water. The California stamp made its major debut at the 1894 San Francisco Midwinter Fair. It was the first type to generate electricity, powered by a wood-fed steam boiler: steam set the wheels and belts turning, and a generator, also steam-driven, supplied electricity for overhead lighting. This was a big plus for a mining company, enabling more production time.
History
The main components for water-powered stamp mills – water wheels, cams, and hammers – were known in the Hellenistic era in the Eastern Mediterranean region. Ancient cams are in evidence in early water-powered automata from the third century BC. A passage in the Natural History of the Roman scholar Pliny (NH 18.23) indicates that water-driven pestles had become fairly widespread in Italy by the first century AD: "The greater part of Italy uses an unshod pestle and also wheels which water turns as it flows past, and a trip-hammer [mola]". These trip-hammers were used for the pounding and hulling of grain. Grain-pounders with pestles, as well as ordinary watermills, are also attested as late as the middle of the fifth century in a monastery founded by Romanus of Condat in the remote Jura region, indicating that the knowledge of trip hammers continued into the early Middle Ages. Apart from agricultural processing, archaeological evidence also strongly suggests the existence of trip hammers in Roman metal working. In Ickham in Kent, a large metal hammer-head with mechanical deformations was excavated in an area where several Roman water-mills and metal waste dumps have also been traced.
The widest application of stamp mills, however, seems to have occurred in Roman mining, where ore from deep veins was first crushed into small pieces for further processing. Here, the regularity and spacing of large indentations on stone anvils indicate the use of cam-operated ore stamps, much like the devices of later medieval mining. Such mechanically deformed anvils have been found at numerous Roman silver and gold mining sites in Western Europe, including at Dolaucothi (Wales) and on the Iberian Peninsula, where the datable examples are from the 1st and 2nd centuries AD. At Dolaucothi these stamp mills were hydraulically driven, as they possibly were at other Roman mining sites too, where the large-scale use of the hushing and ground-sluicing techniques meant that large amounts of water were directly available for powering the machines.
Stamp mills were used by miners in Samarkand as early as 973. They were used in medieval Persia to crush mineral ores. By the 11th century, stamp mills were in widespread use throughout the medieval Islamic world, from Islamic Spain and North Africa in the west to Central Asia in the east.
Water-powered and mechanised trip hammers reappeared in medieval Europe by the 12th century. Their use was described in medieval written sources of Styria (in modern-day Austria), one from 1135 and another from 1175. Both texts mentioned the use of vertical stamp mills for ore-crushing. Medieval French sources of the years 1116 and 1249 both record the use of mechanised trip hammers in the forging of wrought iron. By the 15th century, medieval European trip hammers were most often in the shape of the vertical pestle stamp-mill. The well-known Renaissance artist and inventor Leonardo da Vinci often sketched trip hammers of this vertical pestle type for use in forges and even file-cutting machinery. The oldest European illustration of a martinet forge-hammer is perhaps in the Historia de Gentibus Septentrionalibus of Olaus Magnus, dated to 1565 AD. This woodcut image depicts three martinets and a waterwheel working the wood-and-leather bellows of an osmund bloomery furnace. The recumbent trip hammer was first depicted in European artwork in an illustration by Sandrart and Zonca (dated 1621 AD).
Water-powered stamp mills are illustrated in book 8 of Georg Agricola's De Re Metallica, published in 1556. The mills Agricola shows were largely wooden construction, excepting the use of iron shoes on the end of each stamp. The camshaft was set directly on the axle of the waterwheel, and stamps were typically arranged in gangs of three, with each wheel driving one or two gangs.
19th century
The first stamp mill in the U.S. was built in 1829 at the Capps mine near Charlotte, North Carolina. Stamp mills were common in gold, silver, and copper mining regions of the US in the late 19th and early 20th centuries, in operations where the ore was crushed as a prelude to extracting the metals. They were superseded in many applications by more efficient methods in the second half of the 19th century, but their simplicity meant that they remained in use for ore processing in remote areas well into the 20th century. (19th-century advertisements for some mills highlighted that they could be broken down, packed in by mule in pieces, and assembled on site with only simple tools.) Stamp mills are still in use in Colombia by artisanal miners, powered by electric motors.
Cornish stamps are stamp mills that were developed in Cornwall for use in tin mining in around 1850. They were used to crush small lumps of ore into sand-like material. The stamp was constructed from heavy timber with an iron "head" at the bottom. It was lifted by cams on a rotating axle and fell onto the ore and water mixture fed into a box beneath. The heads normally weighed between 4 and 8 cwt (about 200 to 400 kg) each, and stamps were usually arranged in sets of four in timber frames. Small stamps were commonly powered by water wheels and larger ones by steam engines.
Californian stamps were based on Cornish stamps and were used in the Californian gold mines. In these stamps, the cam is arranged to lift the stamp from the side, causing the stamp to rotate and evening the wear on the shoe at its foot. They were more rapid in action: a single head could crush 1.5 tons of ore, as opposed to the Cornish stamps, which could crush only 1 ton.
Other stamping mills
Stamp mills were used in early paper making for preparing the paper-stuff (pulp), before the invention of the Hollander beater, and might have derived from those used in fulling wool. They were used in oil-seed processing prior to pressing the oil from the milled seeds. Early mills were water-powered but mills can also be steam or electric powered.
A stamping mill may refer to a factory that performs stamping.
| Technology | Metallurgy | null |
4724840 | https://en.wikipedia.org/wiki/Space%20physics | Space physics | Space physics, also known as space plasma physics, is the study of naturally occurring plasmas within Earth's upper atmosphere and the rest of the Solar System. It includes the topics of aeronomy, aurorae, planetary ionospheres and magnetospheres, radiation belts, and space weather (collectively known as solar-terrestrial physics). It also encompasses the discipline of heliophysics, which studies the solar physics of the Sun, its solar wind, the coronal heating problem, solar energetic particles, and the heliosphere.
Space physics is both a pure science and an applied science, with applications in radio transmission, spacecraft operations (particularly communications and weather satellites), and in meteorology. Important physical processes in space physics include magnetic reconnection, synchrotron radiation, ring currents, Alfvén waves and plasma instabilities. It is studied using direct in situ measurements by sounding rockets and spacecraft, indirect remote sensing of electromagnetic radiation produced by the plasmas, and theoretical magnetohydrodynamics.
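As one concrete example of the quantities such models work with (a standard magnetohydrodynamic result, quoted here for illustration rather than drawn from any particular study), Alfvén waves propagate along magnetic field lines at the Alfvén speed

$$v_A = \frac{B}{\sqrt{\mu_0 \rho}},$$

where $B$ is the magnetic field strength, $\mu_0$ the vacuum permeability, and $\rho$ the plasma mass density. In the solar wind near Earth, with $B$ of a few nanotesla and a few protons per cubic centimetre, this works out to a few tens of kilometres per second, which is why the solar wind, flowing at roughly 400 km/s, is described as super-Alfvénic.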
Closely related fields include plasma physics, which studies more fundamental physics and artificial plasmas; atmospheric physics, which investigates lower levels of Earth's atmosphere; and astrophysical plasmas, which are natural plasmas beyond the Solar System.
History
Space physics can be traced to the Chinese, who discovered the principle of the compass but did not understand how it worked. During the 16th century, in De Magnete, William Gilbert gave the first description of the Earth's magnetic field, showing that the Earth itself is a great magnet, which explained why a compass needle points north. Deviations of the compass needle (magnetic declination) were recorded on navigation charts, and a detailed study of the declination near London by watchmaker George Graham resulted in the discovery of irregular magnetic fluctuations that we now call magnetic storms, so named by Alexander von Humboldt. Carl Friedrich Gauss and Wilhelm Weber made very careful measurements of Earth's magnetic field which showed systematic variations and random fluctuations. This suggested that the Earth was not an isolated body, but was influenced by external forces – especially the Sun and the appearance of sunspots. A relationship between individual aurorae and accompanying geomagnetic disturbances was noticed by Anders Celsius and Olof Peter Hiorter in 1747. In 1860, Elias Loomis (1811–1889) showed that the highest incidence of aurorae is seen inside an oval of 20–25 degrees around the magnetic pole. In 1881, Hermann Fritz published a map of the "isochasms", lines of constant auroral frequency.
In the late 1870s, Henri Becquerel offered the first physical explanation for the statistical correlations that had been recorded: sunspots must be a source of fast protons, which are guided to the poles by the Earth's magnetic field. In the early twentieth century, these ideas led Kristian Birkeland to build a terrella, a laboratory device that simulates the Earth's magnetic field in a vacuum chamber and uses a cathode ray tube to simulate the energetic particles that compose the solar wind. A theory began to be formulated about the interaction between the Earth's magnetic field and the solar wind.
Space physics began in earnest with the first in situ measurements in the early 1950s, when a team led by James Van Allen launched the first rockets to a height of around 110 km. Geiger counters aboard the second Soviet satellite, Sputnik 2, and the first US satellite, Explorer 1, detected the Earth's radiation belts, later named the Van Allen belts. The boundary between the Earth's magnetic field and interplanetary space was studied by Explorer 10. Later spacecraft traveled outside Earth orbit and studied the composition and structure of the solar wind in much greater detail; these include Wind (1994), the Advanced Composition Explorer (ACE), Ulysses, the Interstellar Boundary Explorer (IBEX, 2008), and the Parker Solar Probe. Other spacecraft study the Sun itself, such as STEREO and the Solar and Heliospheric Observatory (SOHO).
| Physical sciences | Astronomy basics | Astronomy |
20756869 | https://en.wikipedia.org/wiki/Eusociality | Eusociality | Eusociality (Greek εὖ eu "good" and social) is the highest level of organization of sociality. It is defined by the following characteristics: cooperative brood care (including care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups. The division of labor creates specialized behavioral groups within an animal society, sometimes called castes. Eusociality is distinguished from all other social systems because individuals of at least one caste usually lose the ability to perform behaviors characteristic of individuals in another caste. Eusocial colonies can be viewed as superorganisms.
Eusociality has evolved among the insects, crustaceans, trematodes, and mammals. It is most widespread in the Hymenoptera (ants, bees, and wasps) and in Blattodea (termites). A colony has caste differences: queens and reproductive males take the roles of the sole reproducers, while soldiers and workers work together to create and maintain a living situation favorable for the brood. Queens produce multiple queen pheromones to create and maintain the eusocial state in their colonies; they may also eat eggs laid by other females or exert dominance by fighting. There are two eusocial rodents: the naked mole-rat and the Damaraland mole-rat. Some shrimps, such as Synalpheus regalis, are eusocial. E. O. Wilson and others have claimed that humans have evolved a weak form of eusociality. It has been suggested that the colonial and epiphytic staghorn fern, too, may make use of a primitively eusocial division of labor.
History
The term "eusocial" was introduced in 1966 by Suzanne Batra, who used it to describe nesting behavior in Halictid bees, on a scale of subsocial/solitary, colonial/communal, semisocial, and eusocial, where a colony is started by a single individual. Batra observed the cooperative behavior of the bees, males and females alike, as they took responsibility for at least one duty (i.e., burrowing, cell construction, oviposition) within the colony. The cooperativeness was essential as the activity of one labor division greatly influenced the activity of another. Eusocial colonies can be viewed as superorganisms, with individual castes being analogous to different tissue or cell types in a multicellular organism; castes fulfill a specific role that contributes to the functioning and survival of the whole colony, while being incapable of independent survival outside the colony.
In 1969, Charles D. Michener further expanded Batra's classification with his comparative study of social behavior in bees. He observed multiple species of bees (Apoidea) in order to investigate the different levels of animal sociality, all of which are different stages that a colony may pass through. Eusociality, which is the highest level of animal sociality a species can attain, specifically had three characteristics that distinguished it from the other levels:
Egg-layers and worker-like individuals among adult females (division of labor)
The overlap of generations (mother and adult offspring)
Cooperative work on the cells of the bees' honeycomb
E. O. Wilson extended the terminology to include other social insects, such as ants, wasps, and termites. Originally, it was defined to include organisms (only invertebrates) that had the following three features:
Reproductive division of labor (with or without sterile castes)
Overlapping generations
Cooperative care of young
Eusociality was then discovered in a group of chordates, the mole-rats. Further research distinguished another possibly important criterion for eusociality, "the point of no return". This is characterized by having individuals fixed into one behavioral group, usually before reproductive maturity. This prevents them from transitioning between behavioral groups, and creates a society with individuals truly dependent on each other for survival and reproductive success. For many insects, this irreversibility has changed the anatomy of the worker caste, which is sterile and provides support for the reproductive caste.
Diversity
Most eusocial societies exist in arthropods, while a few are found in mammals. Some ferns may exhibit a primitive form of eusocial behavior.
In insects
Eusociality has evolved multiple times in different insect orders, including hymenopterans, termites, thrips, aphids, and beetles.
In hymenoptera
The order Hymenoptera contains the largest group of eusocial insects, including ants, bees, and wasps, whose colonies are divided into castes: reproductive queens, drones, more or less sterile workers, and sometimes also soldiers that perform specialized tasks. In the well-studied social wasp Polistes versicolor, dominant females perform tasks such as building new cells and ovipositing, while subordinate females tend to perform tasks like feeding the larvae and foraging. The task differentiation between castes can be seen in the fact that subordinates complete 81.4% of the total foraging activity, while dominants complete only 18.6%. Eusocial species with a sterile caste are sometimes called hypersocial.
While only a moderate percentage of species in bees (families Apidae and Halictidae) and wasps (Crabronidae and Vespidae) are eusocial, nearly all species of ants (Formicidae) are eusocial. Some major lineages of wasps are mostly or entirely eusocial, including the subfamilies Polistinae and Vespinae. The corbiculate bees (subfamily Apinae of family Apidae) contain four tribes of varying degrees of sociality: the highly eusocial Apini (honey bees) and Meliponini (stingless bees), primitively eusocial Bombini (bumble bees), and the mostly solitary or weakly social Euglossini (orchid bees). Eusociality in these families is sometimes managed by a set of pheromones that alter the behavior of specific castes in the colony. These pheromones may act across different species, as observed in Apis andreniformis (black dwarf honey bee), where worker bees responded to queen pheromone from the related Apis florea (red dwarf honey bee). Pheromones are sometimes used in these castes to assist with foraging. Workers of the Australian stingless bee Tetragonula carbonaria, for instance, mark food sources with a pheromone, helping their nest mates to find the food.
Reproductive specialization generally involves the production of sterile members of the species, which carry out specialized tasks to care for the reproductive members. Individuals may have behavior and morphology modified for group defense, including self-sacrificing behavior. For example, members of the sterile caste of the honeypot ants such as Myrmecocystus fill their abdomens with liquid food until they become immobile and hang from the ceilings of the underground nests, acting as food storage for the rest of the colony. Not all social insects have distinct morphological differences between castes. For example, in the Neotropical social wasp Synoeca surinama, caste ranks are determined by social displays in the developing brood. These castes are sometimes further specialized in their behavior based on age, as in Scaptotrigona postica workers. Between approximately 0–40 days old, the workers perform tasks within the nest such as provisioning cell broods, colony cleaning, and nectar reception and dehydration. Once older than 40 days, S. postica workers move outside the nest for colony defense and foraging.
In Lasioglossum aeneiventre, a halictid bee from Central America, nests may be headed by more than one female; such nests have more cells, and the number of active cells per female is correlated with the number of females in the nest, implying that having more females leads to more efficient building and provisioning of cells. In similar species with only one queen, such as Lasioglossum malachurum in Europe, the degree of eusociality depends on the climate in which the species is found.
In termites
Termites (order Blattodea, infraorder Isoptera) make up another large portion of highly advanced eusocial animals. The colony is differentiated into various castes: the queen and king are the sole reproducing individuals; workers forage and maintain food and resources; and soldiers defend the colony against ant attacks. The latter two castes, which are sterile and perform highly specialized, complex social behaviors, are derived from different stages of pluripotent larvae produced by the reproductive caste. Some soldiers have jaws so enlarged (specialized for defense and attack) that they are unable to feed themselves and must be fed by workers.
In beetles
Austroplatypus incompertus is a species of ambrosia beetle native to Australia, and was the first beetle (order Coleoptera) to be recognized as eusocial. This species forms colonies in which a single female is fertilized and is protected by many unfertilized females, which serve as workers excavating tunnels in trees. This species has cooperative brood care, in which individuals care for juveniles that are not their own.
In gall-inducing insects
Some gall-inducing insects, including the gall-forming aphid, Pemphigus spyrothecae (order Hemiptera), and thrips such as Kladothrips (order Thysanoptera), are described as eusocial. These species have very high relatedness among individuals due to their asexual reproduction (sterile soldier castes being clones produced by parthenogenesis), but the gall-inhabiting behavior gives these species a defensible resource. They produce soldier castes for fortress defense and protection of the colony against predators, kleptoparasites, and competitors. In these groups, eusociality is produced by both high relatedness and by living in a restricted, shared area.
In crustaceans
Eusociality has evolved in three different lineages in the colonial crustacean genus Synalpheus. S. regalis, S. microneptunus, S. filidigitus, S. elizabethae, S. chacei, S. riosi, S. duffyi, and S. cayoneptunus are the eight recorded species of parasitic shrimp that rely on fortress defense and live in groups of closely related individuals in tropical reefs and sponges. They live eusocially with a single breeding female and a large number of male defenders armed with enlarged snapping claws. There is a single shared living space for the colony members, and the non-breeding members act to defend it.
The fortress defense hypothesis additionally points out that because sponges provide both food and shelter, there is an aggregation of relatives (because the shrimp do not have to disperse to find food), and much competition for those nesting sites. Being the target of attack promotes a good defense system (soldier caste); soldiers promote the fitness of the whole nest by ensuring safety and reproduction of the queen.
Eusociality offers a competitive advantage in shrimp populations. Eusocial species are more abundant, occupy more of the habitat, and use more of the available resources than non-eusocial species.
In trematodes
The trematodes are a class of parasitic flatworms, also known as flukes. One species, Haplorchis pumilio, has evolved eusociality in which a colony produces a class of sterile soldiers. One fluke invades a host and establishes a colony of dozens to thousands of clones that work together to take it over. Since rival trematode species can invade and replace the colony, it is protected by a specialized caste of sterile soldier trematodes. Soldiers are smaller, more mobile, and develop along a different pathway than sexually mature reproductives. One difference is that a soldier's pharynx (mouthparts) is five times as large as that of a reproductive, making up nearly a quarter of the soldier's body volume. These soldiers have no germinal mass, never metamorphose into reproductives, and are therefore obligately sterile, making them readily distinguishable from the immature and mature reproductive worms. Soldiers are more aggressive than reproductives, attacking heterospecific trematodes that infect their host in vitro; H. pumilio soldiers do not attack conspecifics from other colonies. The soldiers are not evenly distributed throughout the host body: they are found in the highest numbers in the basal visceral mass, where competing trematodes tend to multiply during the early phase of infection. This strategic positioning allows them to defend effectively against invaders, paralleling the soldier distribution patterns seen in other animals with defensive castes. They "appear to be an obligately sterile physical caste, akin to that of the most advanced social insects".
In nonhuman mammals
Among mammals, two species in the rodent group Phiomorpha are eusocial, the naked mole-rat (Heterocephalus glaber) and the Damaraland mole-rat (Fukomys damarensis), both of which are highly inbred. Usually living in harsh or limiting environments, these mole-rats aid in raising siblings and relatives born to a single reproductive queen. However, this classification is controversial owing to disputed definitions of 'eusociality'. To avoid inbreeding, mole rats sometimes outbreed and establish new colonies when resources are sufficient. Most of the individuals cooperatively care for the brood of a single reproductive female (the queen) to which they are most likely related. Thus, it is uncertain whether mole rats are truly eusocial, since their social behavior depends largely on their resources and environment.
Some mammals in the Carnivora and Primates have eusocial tendencies, especially meerkats (Suricata suricatta) and dwarf mongooses (Helogale parvula). These show cooperative breeding and marked reproductive skews. In the dwarf mongoose, the breeding pair receives food priority and protection from subordinates and rarely has to defend against predators.
In humans
Scientists have debated whether humans are prosocial or eusocial. Edward O. Wilson called humans eusocial apes, arguing for similarities to ants and observing that early hominins cooperated to rear their children while other members of the same group hunted and foraged. Wilson and others argued that through cooperation and teamwork, ants and humans form superorganisms. Wilson's claims were vigorously rejected, both by critics of group selection theory, which grounded his argument, and on the grounds that human reproductive labor is not divided between castes.
Though controversial, it has been suggested that male homosexuality and female menopause could have evolved through kin selection. This would mean that humans sometimes exhibit a type of alloparental behavior known as "helpers at the nest", with juveniles and sexually mature adolescents helping their parents raise subsequent broods, as in some birds, some non-eusocial bees, and meerkats. These species are not eusocial: they do not have castes, and helpers reproduce on their own if given the opportunity.
In plants
One plant, the epiphytic staghorn fern, Platycerium bifurcatum (Polypodiaceae), may exhibit a primitive form of eusocial behavior amongst clones. The evidence for this is that individuals live in colonies, where they are structured in different ways, with fronds of differing size and shape, to collect and store water and nutrients for the colony to use. At the top of a colony, there are both pleated fan-shaped "nest" fronds that collect and hold water, and gutter-shaped "strap" fronds that channel water: no solitary Platycerium species has both types. At the bottom of a colony, there are "nest" fronds that clasp the trunk of the tree supporting the fern, and drooping photosynthetic fronds. These are argued to be adapted to support the colony structurally, i.e. that the individuals in the colony are to some degree specialized for tasks, a division of labor.
Evolution
Phylogenetic distribution
Eusociality is a rare but widespread phenomenon in species in at least seven orders in the animal kingdom, as shown in the phylogenetic tree (non-eusocial groups not shown). All species of termites are eusocial, and it is believed that they were the first eusocial animals to evolve, sometime in the upper Jurassic period (~150 million years ago). The other orders shown contain both eusocial and non-eusocial species, including many lineages where eusociality is inferred to be the ancestral state. Thus the number of independent evolutions of eusociality (clades) is not known. The major eusocial groups are shown in boldface in the phylogenetic tree.
Paradox
Prior to the gene-centered view of evolution, eusociality was seen as paradoxical: if adaptive evolution unfolds by differential reproduction of individual organisms, the evolution of individuals incapable of passing on their genes presents a challenge. In On the Origin of Species, Darwin referred to the existence of sterile castes as the "one special difficulty, which at first appeared to me insuperable, and actually fatal to my theory". Darwin anticipated that a possible resolution to the paradox might lie in the close family relationship, which W.D. Hamilton quantified a century later with his 1964 inclusive fitness theory. After the gene-centered view of evolution was developed in the mid-1970s, non-reproductive individuals were seen as an extended phenotype of the genes, which are the primary beneficiaries of natural selection.
Inclusive fitness and haplodiploidy
Argument that haplodiploidy favors eusociality
According to inclusive fitness theory, organisms can gain fitness by increasing the reproductive output of other individuals that share their genes, especially their close relatives. Natural selection favors helping relatives when the cost of helping (C) is less than the benefit gained by the relative (B) weighted by the fraction of genes they share, their relatedness (r); that is, when C < rB (Hamilton's rule). W. D. Hamilton suggested in 1964 that eusociality could evolve more easily among haplodiploid species such as the Hymenoptera, because of their unusual relatedness structure.
In haplodiploid species, females develop from fertilized eggs and males develop from unfertilized eggs. Because a male is haploid, his daughters share 100% of his genes and 50% of their mother's. Therefore, they share 75% of their genes with each other. This mechanism of sex determination gives rise to what W. D. Hamilton first termed "supersisters", more closely related to their sisters than they would be to their own offspring. Even though workers often do not reproduce, they can pass on more of their genes by helping to raise their sisters than by having their own offspring (each of which would only have 50% of their genes). This unusual situation, where females may have greater fitness when they help rear sisters rather than producing offspring, is often invoked to explain the multiple independent evolutions of eusociality (at least nine separate times) within the Hymenoptera.
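To make the arithmetic concrete, the relatedness coefficients and Hamilton's rule can be written out in a few lines of Python. This is an illustrative sketch only, with hypothetical function names and made-up cost/benefit values, not code from any referenced work:

```python
# Hamilton's rule: helping is favored when C < r * B.
def helping_favored(cost: float, benefit: float, relatedness: float) -> bool:
    return cost < relatedness * benefit

# Under haplodiploidy, a female gets half her genome from her haploid father
# (shared with a full sister with probability 1) and half from her diploid
# mother (shared with probability 0.5):
r_full_sisters = 0.5 * 1.0 + 0.5 * 0.5
print(r_full_sisters)  # 0.75

# Relatedness to her own offspring is only 0.5, so for the same cost and
# benefit, raising sisters can satisfy Hamilton's rule when raising
# daughters does not:
print(helping_favored(cost=1.0, benefit=1.6, relatedness=0.75))  # True  (1.0 < 1.2)
print(helping_favored(cost=1.0, benefit=1.6, relatedness=0.50))  # False (1.0 is not < 0.8)
```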
Argument that haplodiploidy does not favor eusociality
Against the supposed benefits of haplodiploidy for eusociality, Robert Trivers notes that while females share 75% of genes with their sisters in haplodiploid populations, they only share 25% of their genes with their brothers. Accordingly, the average relatedness of an individual to their sibling is 50%. Therefore, helping behavior is only advantageous if it is biased to helping sisters, which would drive the population to a 1:3 sex ratio of males to females. At this ratio, males, as the rarer sex, increase in reproductive value, reducing the benefit of female-biased investment.
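Trivers' accounting can be sketched the same way; again this is only an illustration, with the sex-ratio values chosen to match the argument above:

```python
# Average relatedness of a haplodiploid female to a random sibling.
R_SISTER, R_BROTHER = 0.75, 0.25

def avg_sibling_relatedness(fraction_sisters: float) -> float:
    return fraction_sisters * R_SISTER + (1 - fraction_sisters) * R_BROTHER

print(avg_sibling_relatedness(0.5))   # 0.5: with a 1:1 sex ratio, no better than own offspring
print(avg_sibling_relatedness(0.75))  # 0.625: a 1:3 male:female ratio raises the average,
                                      # but the rarer males' higher reproductive value
                                      # offsets the apparent gain
```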
Further, not all eusocial species are haplodiploid: termites, some snapping shrimps, and mole rats are not. Conversely, many non-eusocial bees are haplodiploid, and among eusocial species many queens mate with multiple males, resulting in a hive of half-sisters that share only 25% of their genes. The association between haplodiploidy and eusociality is not statistically significant, so haplodiploidy is neither necessary nor sufficient for eusociality to emerge. Relatedness does still play a part: monogamy (queens mating singly) is the ancestral state for all eusocial species so far investigated, as expected if kin selection is an important force driving the evolution of eusociality, since single mating maximizes the relatedness of colony members.
Evolutionary ecology
Increased parasitism and predation rates are the primary ecological drivers of social organization. Group living affords colony members defense against enemies, specifically predators, parasites, and competitors, and allows them to gain advantage from superior foraging methods. The importance of ecology in the evolution of eusociality is supported by evidence such as experimentally induced reproductive division of labor, for example when normally solitary queens are forced together. Conversely, female Damaraland mole-rats undergo hormonal changes that promote dispersal after periods of high rainfall.
Climate too appears to be a selective agent driving social complexity; across bee lineages and Hymenoptera in general, higher forms of sociality are more likely to occur in tropical than temperate environments. Similarly, social transitions within halictid bees, where eusociality has been gained and lost multiple times, are correlated with periods of climatic warming. Social behavior in facultative social bees is often reliably predicted by ecological conditions, and switches in behavioral type have been experimentally induced by translocating offspring of solitary or social populations to warm and cool climates. In the sweat bee Halictus rubicundus, females produce a single brood in cooler regions and two or more broods in warmer regions, so the former populations are solitary while the latter are social. In another sweat bee, Lasioglossum calceatum, the social phenotype has been predicted by altitude and micro-habitat composition, with social nests found in warmer, sunnier sites, and solitary nests found in adjacent, cooler, shaded locations. Facultatively social bee species, however, which comprise the majority of social bee diversity, have their lowest diversity in the tropics, being largely limited to temperate regions.
Multilevel selection
Once pre-adaptations such as group formation, nest building, high cost of dispersal, and morphological variation are present, between-group competition has been suggested as a driver of the transition to advanced eusociality. M. A. Nowak, C. E. Tarnita, and E. O. Wilson proposed in 2010 that since eusociality produces an extremely altruistic society, eusocial groups should out-reproduce their less cooperative competitors, eventually eliminating all non-eusocial groups from a species. Multilevel selection has been heavily criticized for its conflict with kin selection theory.
Reversal to solitarity
A reversal to solitarity is an evolutionary phenomenon in which descendants of a eusocial group evolve solitary behavior once again. Bees have been model organisms for the study of reversal to solitarity, because of the diversity of their social systems. Each of the four origins of eusociality in bees was followed by at least one reversal to solitarity, giving a total of at least nine reversals. In a few species, solitary and eusocial colonies appear simultaneously in the same population, and different populations of the same species may be fully solitary or eusocial. This suggests that eusociality is costly to maintain, and can only persist when ecological variables favor it. Disadvantages of eusociality include the cost of investing in non-reproductive offspring, and an increased risk of disease.
All reversals to solitarity have occurred among primitively eusocial groups; none have followed the emergence of advanced eusociality. The "point of no return" hypothesis posits that the morphological differentiation of reproductive and non-reproductive castes prevents highly eusocial species such as the honeybee from reverting to the solitary state.
Physiology and development
Pheromones
Pheromones play an important role in the physiological mechanisms of eusociality. Enzymes involved in the production and perception of pheromones were important for the emergence of eusociality within both termites and hymenopterans. The best-studied queen pheromone system in social insects is that of the honey bee Apis mellifera. Queen mandibular glands produce a mixture of five compounds, three aliphatic and two aromatic, which control workers. Mandibular gland extracts inhibit workers from constructing queen cells, which can delay the hormonally based behavioral development of workers and suppress their ovarian development. The same pheromones are credited with both behavioral effects mediated by the nervous system, often leading to recognition of queens (releaser effects), and physiological effects on the reproductive and endocrine systems (primer effects). These pheromones volatilize or are deactivated within thirty minutes, allowing workers to respond rapidly to the loss of their queen.
The levels of two of the aliphatic compounds increase rapidly in virgin queens within the first week after emergence from the pupa, consistent with their roles as sex attractants during the mating flight. Once a queen is mated and begins laying eggs, she starts producing the full blend of compounds. In several ant species, reproductive activity is associated with pheromone production by queens. Mated egg-laying queens are attractive to workers, whereas young winged virgin queens elicit little or no response.
Among ants, the queen pheromone system of the fire ant Solenopsis invicta includes both releaser and primer pheromones. A queen recognition (releaser) pheromone is stored in the poison sac along with three other compounds. These compounds elicit a behavioral response from workers. Several primer effects have also been demonstrated. Pheromones initiate reproductive development in new winged females, called female sexuals. These chemicals inhibit workers from rearing male and female sexuals, suppress egg production in other queens of multiple queen colonies, and cause workers to execute excess queens. These pheromones maintain the eusocial phenotype, with one queen supported by sterile workers and sexually active males (drones). In queenless colonies, the lack of queen pheromones causes winged females to quickly shed their wings, develop ovaries and lay eggs. These virgin replacement queens assume the role of the queen and start to produce queen pheromones. Similarly, queen weaver ants Oecophylla longinoda have exocrine glands that produce pheromones which prevent workers from laying reproductive eggs.
Similar mechanisms exist in the eusocial wasp Vespula vulgaris. To dominate all the workers, usually numbering more than 3000 in a colony, the queen signals her dominance with pheromones. The workers regularly lick the queen while feeding her, and the airborne pheromone from her body alerts the workers to her dominance.
The mode of action of inhibitory pheromones which prevent the development of eggs in workers has been demonstrated in the bumble bee Bombus terrestris. The pheromones suppress activity of the endocrine gland, the corpus allatum, stopping it from secreting juvenile hormone. With low juvenile hormone, eggs do not mature. Similar inhibitory effects of lowering juvenile hormone were seen in halictine bees and polistine wasps, but not in honey bees.
Other mechanisms
A variety of other mechanisms give queens of different species of social insects a measure of reproductive control over their nest mates. In many Polistes wasps, monogyny (the dominance of a single egg-laying female) is established soon after colony formation by physical dominance interactions among the colony's foundresses, including biting, chasing, and food soliciting. Such interactions create a dominance hierarchy headed by larger, older individuals with the greatest ovarian development; the rank of subordinates is correlated with their degree of ovarian development. Workers do not oviposit when queens are present, for a variety of reasons: colonies tend to be small enough that queens can effectively dominate workers; queens practice selective oophagy; the flow of nutrients favors the queen over workers; and queens rapidly lay eggs in new or vacated cells.
In primitively eusocial bees (where castes are morphologically similar and colonies are small and short-lived), queens frequently nudge their nest mates and then burrow back down into the nest. This draws workers into the lower part of the nest where they may respond to stimuli for cell construction and maintenance. Being nudged by the queen may help to inhibit ovarian development; in addition, the queen eats any eggs laid by workers. Furthermore, temporally discrete production of workers and gynes (actual or potential queens) can cause size dimorphisms between different castes, as size is strongly influenced by the season during which the individual is reared. In many wasps, worker caste is determined by a temporal pattern in which workers precede non-workers of the same generation. In some cases, for example in bumblebees, queen control weakens late in the season, and the ovaries of workers develop. The queen attempts to maintain her dominance by aggressive behavior and by eating worker-laid eggs; her aggression is often directed towards the worker with the greatest ovarian development.
In highly eusocial wasps (where castes are morphologically dissimilar), both the quantity and quality of food are important for caste differentiation. Recent studies in wasps suggest that differential larval nourishment may be the environmental trigger for larval divergence into workers or gynes. All honey bee larvae are initially fed royal jelly, which is secreted by workers, but normally they are switched to a diet of pollen and honey as they mature; if their diet is exclusively royal jelly, they grow larger than normal and differentiate into queens. This jelly contains a specific protein, royalactin, which increases body size, promotes ovary development, and shortens the developmental period. The differential expression in Polistes of larval genes and proteins (also differentially expressed during queen versus worker development in honey bees) indicates that regulatory mechanisms may operate very early in development.
In popular culture
Stephen Baxter's 2003 science fiction novel Coalescent imagines a human eusocial organisation founded in ancient Rome, in which most individuals are subject to reproductive repression.
Harold Fromm, reviewing Groping for Groups by E. O. Wilson and others in The Hudson Review, asks whether Wilson's stated "wish" for humans to bring about "a permanent paradise for human beings" would mean "to be group-selected in factories in the style of Huxley's [1932 novel] Brave New World".
| Biology and health sciences | Ethology | Biology |
20756967 | https://en.wikipedia.org/wiki/Cyborg | Cyborg | A cyborg (a portmanteau of cybernetic and organism) is a being with both organic and biomechatronic body parts. The term was coined in 1960 by Manfred Clynes and Nathan S. Kline. In contrast to biorobots and androids, the term cyborg applies to a living organism that has restored function or enhanced abilities due to the integration of some artificial component or technology that relies on feedback.
Description and definition
Alternative names for a cyborg include cybernetic organism, cyber-organism, cyber-organic being, cybernetically enhanced organism, cybernetically augmented organism, technorganic being, techno-organic being, and techno-organism.
Unlike bionics, biorobotics, or androids, a cyborg is an organism that has restored function or, especially, enhanced abilities due to the integration of some artificial component or technology that relies on some sort of feedback, for example: prostheses, artificial organs, implants or, in some cases, wearable technology. Cyborg technologies may enable or support collective intelligence. A related idea is the "augmented human". While cyborgs are commonly thought of as mammals, including humans, the term can apply to any organism.
Placement and distinctions
D. S. Halacy's Cyborg: Evolution of the Superman (1965) featured an introduction which spoke of a "new frontier" that was "not merely space, but more profoundly the relationship between 'inner space' to 'outer space' – a bridge...between mind and matter."
In "A Cyborg Manifesto", Donna Haraway rejects the notion of rigid boundaries between humanity and technology, arguing that, as humans depend on more technology over time, humanity and technology have become too interwoven to draw lines between them. She believes that since we have allowed and created machines and technology to be so advanced, there should be no reason to fear what we have created, and cyborgs should be embraced because they are part of human identities. However, Haraway has also expressed concern over the contradictions of scientific objectivity and the ethics of technological evolution, and has argued that "There are political consequences to scientific accounts of the world."
Biosocial definition
According to some definitions of the term, the physical attachments that humans have with even the most basic technologies have already made them cyborgs. In a typical example, a human with an artificial cardiac pacemaker or implantable cardioverter-defibrillator would be considered a cyborg, since these devices measure voltage potentials in the body, perform signal processing, and can deliver electrical stimuli, using a synthetic feedback mechanism to keep that person alive. Implants, especially cochlear implants, that combine mechanical modification with any kind of feedback response are also cybernetic enhancements. Some theorists cite such modifications as contact lenses, hearing aids, smartphones, or intraocular lenses as examples of fitting humans with technology to enhance their biological capabilities.
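The pacemaker case above is at heart a feedback loop: sense the body's own signal and stimulate only when it falls outside a programmed bound. The following demand-pacing sketch is purely illustrative, with assumed numbers and a hypothetical function, and does not reflect the logic of any real device:

```python
# Demand pacing: deliver a stimulus only if no intrinsic heartbeat was
# sensed within the interval implied by the programmed lower rate limit.
LOWER_RATE_LIMIT_BPM = 60  # hypothetical programmed lower rate

def should_pace(ms_since_last_beat: float) -> bool:
    max_interval_ms = 60_000 / LOWER_RATE_LIMIT_BPM  # 1000 ms at 60 bpm
    return ms_since_last_beat > max_interval_ms

print(should_pace(800))   # False: an intrinsic beat arrived in time (75 bpm)
print(should_pace(1200))  # True: too slow, deliver a pacing pulse
```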
The emerging trend of implanting microchips inside the body (mainly the hands) to perform tasks such as contactless payment or opening a door has been erroneously marketed as a more recent example of cybernetic enhancement. The practice has not yet seen significant traction outside niche areas in Scandinavia, and in actual function the implant is little more than a pre-programmed radio-frequency identification (RFID) microchip encased in glass that does not interact with the human body (the same technology used in the microchips injected into animals for ease of identification), and thus does not fit the definition of a cybernetic implant.
With cyborgs on the rise, some theorists argue there is a need to develop new definitions of aging; for instance, a bio-techno-social definition of aging has been suggested.
The term is also used to address human-technology mixtures in the abstract. This includes not only commonly-used pieces of technology such as phones, computers, the Internet, and so on, but also artifacts that are not usually considered technology; for example, pen and paper, and speech and language. When augmented with these technologies and connected in communication with people in other times and places, a person becomes capable of more than they were before. An example is a computer, which gains power by using Internet protocols to connect with other computers. Another example is a social-media bot—either a bot-assisted human or a human-assisted-bot—used to target social media with likes and shares. Cybernetic technologies thus include highways, pipes, electrical wiring, buildings, electrical plants, libraries, and other infrastructural constructs.
Bruce Sterling, in his Shaper/Mechanist universe, suggested the idea of an alternative kind of cyborg called a 'Lobster', made not with internal implants but with an external shell (e.g. a powered exoskeleton). Unlike human cyborgs, who appear human externally but are synthetic internally (e.g., the Bishop type in the Alien franchise), a Lobster looks inhuman externally but contains a human internally (as in Elysium and RoboCop). The computer game Deus Ex: Invisible War prominently features cyborgs called Omar, Russian for 'lobster'.
Evolutionary perspective
In 1994, Hans Hass formulated a scientific view of the human-machine hybrids he called "hypercells" (Hans Hass, Hypercell Organisms: A New Perspective of Man in Evolution, Hamburg, 1994). They can expand their biological cell body with artificial artifacts and thus expand their performance body. The theory of hypercells, or "Homo proteus" as Hass called the human-machine hybrid to distinguish it from Homo sapiens, extends Charles Darwin's theory of evolution and deals with the course of evolution beyond humans.
In his 2019 book Novacene, James Lovelock used the term "cyborgs" to refer to the next generation of beings who will become the "understanders of the future" and "lead the cosmos to self-knowledge". While acknowledging the organic component in Clynes' and Kline's definition, he proposed that these cyborgs "will have designed and built themselves from the artificial intelligence systems we have already constructed", and used the term cyborg "to emphasize that the new intelligent beings will have arisen, like us, from Darwinian evolution."
Origins
The concept of a man-machine mixture was widespread in science fiction before World War II. As early as 1843, Edgar Allan Poe described a man with extensive prostheses in the short story "The Man That Was Used Up". In 1911, Jean de La Hire introduced the Nyctalope, a science fiction hero who was perhaps the first literary cyborg, in a novel later translated as The Nyctalope on Mars. Nearly two decades later, Edmond Hamilton presented space explorers with a mixture of organic and machine parts in his 1928 novel The Comet Doom. He later featured the talking, living brain of an old scientist, Simon Wright, floating in a transparent case, in all the adventures of his famous hero, Captain Future. In 1944, in the short story "No Woman Born", C. L. Moore wrote of Deirdre, a dancer whose body was burned completely and whose brain was placed in a faceless but beautiful and supple mechanical body.
In 1960, the term "cyborg" was coined by Manfred E. Clynes and Nathan S. Kline to refer to their conception of an enhanced human being who could survive in extraterrestrial environments:
Their concept was the outcome of thinking about the need for an intimate relationship between human and machine as the new frontier of space exploration was beginning to develop. A designer of physiological instrumentation and electronic data-processing systems, Clynes was the chief research scientist in the Dynamic Simulation Laboratory at Rockland State Hospital in New York.
The term first appeared in print five months earlier, when The New York Times reported on the "Psychophysiological Aspects of Space Flight Symposium" where Clynes and Kline first presented their paper:
Thereafter, Hamilton used the term "cyborg" explicitly in the 1962 short story "After a Judgment Day" to describe the "mechanical analogs" called "Charlies", explaining that "[c]yborgs, they had been called from the first one in the 1960s...cybernetic organisms."
In 2001, a book titled Cyborg: Digital Destiny and Human Possibility in the Age of the Wearable Computer was published by Doubleday. Some of the ideas in the book were incorporated into the documentary film Cyberman that same year.
Cyborg tissues in engineering
Cyborg tissues structured with carbon nanotubes and plant or fungal cells have been used in artificial tissue engineering to produce new materials for mechanical and electrical uses.
Such work was presented by Raffaele Di Giacomo, Bruno Maresca, and others at the Materials Research Society's spring conference on 3 April 2013. The cyborg tissue obtained was inexpensive, light, had unique mechanical properties, and could be shaped in desired forms. Cells combined with multi-walled carbon nanotubes (MWCNTs) co-precipitated as a specific aggregate of cells and nanotubes that formed a viscous material. Likewise, dried cells still acted as a stable matrix for the MWCNT network. When observed by optical microscopy, the material resembled an artificial "tissue" composed of highly packed cells. The effect of cell drying was manifested by their "ghost cell" appearance. A rather specific physical interaction between MWCNTs and cells was observed by electron microscopy, suggesting that the cell wall (the outermost part of fungal and plant cells) may play a major active role in establishing and stabilizing the carbon nanotube network. This novel material can be used in a wide range of electronic applications, from heating to sensing. For instance, using cells of Candida albicans, a yeast that often lives in the human gastrointestinal tract, cyborg tissue materials with temperature-sensing properties have been reported.
Actual cyborgization attempts
In current prosthetic applications, the C-Leg system developed by Otto Bock HealthCare is used to replace a human leg that has been amputated because of injury or illness. Sensors in the artificial C-Leg aid walking significantly by attempting to replicate the user's natural gait as it was prior to amputation. A similar system, the OPRA Implant System, is being developed by the Swedish orthopedic company Integrum; it is surgically anchored and integrated into the skeleton of the remainder of the amputated limb by means of osseointegration. The same company has developed e-OPRA, a will-powered upper-limb prosthesis system being evaluated in a clinical trial, which allows sensory input to the central nervous system using pressure and temperature sensors in the prosthesis' fingertips. Prostheses like the C-Leg, the e-OPRA Implant System, and the iLimb are considered by some to be the first real steps towards the next generation of real-world cyborg applications. Cochlear implants and magnetic implants, which provide people with a sense they would not otherwise have had, can additionally be thought of as creating cyborgs.
In vision science, direct brain implants have been used to treat non-congenital (acquired) blindness. One of the first scientists to come up with a working brain interface to restore sight was private researcher William Dobelle.
Dobelle's first prototype was implanted into "Jerry", a man blinded in adulthood, in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry's visual cortex and succeeded in producing phosphenes, the sensation of seeing light. The system included cameras mounted on glasses to send signals to the implant. Initially, the implant allowed Jerry to see shades of grey in a limited field of vision at a low frame rate. It also required him to be hooked up to a two-ton mainframe, but shrinking electronics and faster computers later made his artificial eye more portable, enabling him to perform simple tasks unassisted.
In 1997, Philip Kennedy, a scientist and physician, created the world's first human cyborg from Johnny Ray, a Vietnam War veteran who had suffered a stroke. Ray was, as doctors put it, "locked in". He wanted his old life back, so he agreed to Kennedy's experiment. Kennedy embedded an implant of his own design (which he named a "neurotrophic electrode") near the injured part of Ray's brain so that Ray would regain some movement in his body. The surgery was successful, but Ray died in 2002.
In 2002, Canadian Jens Naumann, also blinded in adulthood, became the first in a series of 16 paying patients to receive Dobelle's second-generation implant, marking one of the earliest commercial uses of BCIs. The second-generation device used a more sophisticated implant enabling better mapping of phosphenes into coherent vision. Phosphenes are spread out across the visual field in what researchers call the starry-night effect. Immediately after his implant, Naumann was able to use his imperfectly restored vision to drive slowly around the parking area of the research institute.
In contrast to replacement technologies, in 2002, under the heading Project Cyborg, a British scientist, Kevin Warwick, had an array of 100 electrodes fired into his nervous system to link his nervous system into the internet to investigate enhancement possibilities. With this in place, Warwick successfully carried out a series of experiments including extending his nervous system over the internet to control a robotic hand, also receiving feedback from the fingertips to control the hand's grip. This was a form of extended sensory input. Subsequently, he investigated ultrasonic input to remotely detect the distance to objects. Finally, with electrodes also implanted into his wife's nervous system, they conducted the first direct electronic communication experiment between the nervous systems of two humans.
Since 2004, British artist Neil Harbisson has had a cyborg antenna implanted in his head that allows him to extend his perception of colors beyond the human visual spectrum through vibrations in his skull. His antenna was included in his 2004 passport photograph, which has been said to confirm his cyborg status. In 2012, at TEDGlobal, Harbisson explained that he started to feel like a cyborg when he noticed that the software and his brain had united and given him an extra sense. Harbisson co-founded the Cyborg Foundation in 2004 and the Transpecies Society in 2017, an association that empowers individuals with non-human identities and supports their decisions to develop unique senses and new organs. He is a global advocate for the rights of cyborgs.
Rob Spence, a Toronto-based filmmaker who calls himself a real-life "Eyeborg", severely damaged his right eye in a shooting accident on his grandfather's farm as a child.
Many years later, in 2005, he decided to have the ever-deteriorating, by then technically blind eye surgically removed. After wearing an eyepatch for some time, and having toyed with the idea of installing a camera in its place, he contacted Steve Mann, a professor at the University of Toronto and an expert in wearable computing and cyborg technology.
Under Mann's guidance, Spence, then aged 36, created a prototype in the form of a miniature camera that could be fitted inside his prosthetic eye, an invention Time magazine named one of the best inventions of 2009. The bionic eye records everything he sees and contains a 1.5 mm², low-resolution video camera, a small round printed circuit board, a wireless video transmitter that allows him to transmit what he is seeing to a computer in real time, and a 3-volt rechargeable VARTA microbattery. The eye is not connected to his brain and has not restored his sense of vision. Spence has also installed a laser-like LED light in one version of the prototype.
Furthermore, many people now have multifunctional radio-frequency identification (RFID) microchips injected into a hand. With the chips they are able to swipe cards, open or unlock doors, operate devices such as printers and, in some cases using cryptocurrency, buy products such as drinks with a wave of the hand.
bodyNET
bodyNET is an application of human-electronic interaction currently in development by researchers from Stanford University. The technology is based on stretchable semiconductor materials (Elastronic). According to their article in Nature, the technology is composed of smart devices, screens, and a network of sensors that can be implanted into the body, woven into the skin or worn as clothes. It has been suggested that this platform can potentially replace the smartphone in the future.
Practical applications
In medicine and biotechnology
In medicine, there are two important and different types of cyborgs: the restorative and the enhanced. Restorative technologies "restore lost function, organs, and limbs." The key aspect of restorative cyborgization is the repair of broken or missing processes to revert to a healthy or average level of function. There is no enhancement to the original faculties and processes that were lost.
In contrast, the enhanced cyborg "follows a principle, and it is the principle of optimal performance: maximising output (the information or modifications obtained) and minimising input (the energy expended in the process)". Thus, the enhanced cyborg intends to exceed normal processes or even gain new functions that were not originally present.
Prosthetics
Although prostheses in general supplement lost or damaged body parts with the integration of a mechanical artifice, bionic implants in medicine allow model organs or body parts to mimic the original function more closely. Michael Chorost wrote a memoir of his experience with cochlear implants, or bionic ears, titled Rebuilt: How Becoming Part Computer Made Me More Human. Jesse Sullivan became one of the first people to operate a fully robotic limb through a nerve-muscle graft, giving him a range of motion beyond that of previous prosthetics. By 2004, a fully functioning artificial heart had been developed. The continued technological development of bionic and (bio-)nanotechnologies begins to raise the question of enhancement, and of the future possibilities for cyborgs which surpass the original functionality of the biological model. The ethics and desirability of "enhancement prosthetics" have been debated; their proponents include the transhumanist movement, with its belief that new technologies can assist the human race in developing beyond its present, normative limitations such as aging and disease, as well as other, more general inabilities, such as limitations on speed, strength, endurance, and intelligence. Opponents of the concept describe what they believe to be biases which propel the development and acceptance of such technologies: namely, a bias towards functionality and efficiency that may compel assent to a view of human people which de-emphasizes actual manifestations of humanity and personhood as defining characteristics, in favor of definition in terms of upgrades, versions, and utility.
Retinal implants are another form of cyborgization in medicine. The theory behind retinal stimulation to restore vision for those suffering from retinitis pigmentosa and vision loss due to aging (conditions in which people have an abnormally low number of retinal ganglion cells), is that the retinal implant and electrical stimulation would act as a substitute for the missing ganglion cells (cells which connect the eye to the brain).
While the work to perfect this technology is still being done, there have already been major advances in the use of electronic stimulation of the retina to allow the eye to sense patterns of light. A specialized camera is worn by the subject, such as on the frames of their glasses, which converts the image into a pattern of electrical stimulation. A chip located in the user's eye would then electrically stimulate the retina with this pattern by exciting certain nerve endings which transmit the image to the optic centers of the brain, and the image would then appear to the user. If technological advances proceed as planned, this technology may be used by thousands of blind people and restore vision to most of them.
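The image-to-stimulation mapping described above can be sketched as a simple downsampling step from camera pixels to a coarse electrode grid. The grid size, scaling, and function name here are assumptions for illustration, not the parameters of any actual implant:

```python
import numpy as np

def frame_to_stimulation(frame: np.ndarray, grid: tuple = (10, 6)) -> np.ndarray:
    """Average blocks of a grayscale frame down to an electrode grid and
    scale the result to stimulation levels in [0, 1]."""
    h, w = frame.shape
    gh, gw = grid
    # Trim the frame so it divides evenly into gh x gw blocks, then average.
    blocks = frame[: gh * (h // gh), : gw * (w // gw)].reshape(gh, h // gh, gw, w // gw)
    return blocks.mean(axis=(1, 3)) / 255.0

frame = np.random.randint(0, 256, (480, 640))  # stand-in for a camera frame
print(frame_to_stimulation(frame).shape)       # (10, 6) electrode intensities
```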
A similar process has been created to aid people who have lost their vocal cords. This experimental device would do away with previously used robotic-sounding voice simulators. The transmission of sound would start with a surgery to redirect the nerve that controls the voice and sound production to a muscle in the neck, where a nearby sensor would be able to pick up its electrical signals. The signals would then move to a processor which would control the timing and pitch of a voice simulator. That simulator would then vibrate producing a multi-tonal sound that could be shaped into words by the mouth.
An article published in Nature Materials in 2012 reported research on "cyborg tissues" (engineered human tissues with embedded three-dimensional mesh of nanoscale wires), with possible medical implications.
In 2014, researchers from the University of Illinois at Urbana–Champaign and Washington University in St. Louis developed a device that could keep a heart beating endlessly. Using 3D printing and computer modeling, these scientists developed an electronic membrane that could successfully replace pacemakers. The device uses a "spider-web like network of sensors and electrodes" to monitor and maintain a normal heart rate with electrical stimuli. Unlike traditional pacemakers, which are similar from patient to patient, the elastic heart glove is custom-made using high-resolution imaging technology. The first prototype was created to fit a rabbit's heart, keeping the organ beating in an oxygen- and nutrient-rich solution. The stretchable material and circuits of the apparatus were first constructed by Professor John A. Rogers, with the electrodes arranged in an s-shaped design that allows them to expand and bend without breaking. Although the device is currently used only as a research tool to study changes in heart rate, in the future the membrane may serve as a safeguard against heart attacks.
Neural enhancement and restoration
A brain–computer interface, or BCI, provides a direct path of communication from the brain to an external device, effectively creating a cyborg. Research into invasive BCIs, which use electrodes implanted directly into the grey matter of the brain, has focused on restoring damaged eyesight in the blind and providing functionality to paralyzed people, most notably those with severe cases, such as locked-in syndrome. This technology could enable people who are missing a limb or are in a wheelchair the power to control the devices that aid them through neural signals sent from the brain implants directly to computers or the devices. It is possible that this technology will also eventually be used with healthy people.
Deep brain stimulation is a neurological surgical procedure used for therapeutic purposes. It has aided in treating patients diagnosed with Parkinson's disease, Alzheimer's disease, Tourette syndrome, epilepsy, chronic headaches, and mental disorders. After the patient is rendered unconscious through anesthesia, brain pacemakers (electrodes) are implanted into the region of the brain where the cause of the disease is present. The region is then stimulated by bursts of electric current, for example to disrupt an oncoming surge of seizures. Like all invasive procedures, deep brain stimulation may put the patient at a higher risk; however, there have been more improvements in recent years with deep brain stimulation than with any available drug treatment.
Pharmacology
Automated insulin delivery systems, colloquially also known as the "artificial pancreas", are a substitute for the lack of natural insulin production by the body, most notably in Type 1 Diabetes. Currently available systems combine a continuous glucose monitor with an insulin pump that can be remote controlled, forming a control loop that automatically adjusts the insulin dosage depending on the current blood glucose level. Examples of commercial systems that implement such a control loop are the MiniMed 670G from Medtronic and the t:slim x2 from Tandem Diabetes Care. Do-it-yourself artificial pancreas technologies also exist, though these are not verified or approved by any regulatory agency. Upcoming next-generation artificial pancreas technologies include automatic glucagon infusion in addition to insulin, to help prevent hypoglycemia and improve efficiency. One example of such a bi-hormonal system is the Beta Bionics iLet.
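The closed loop these systems automate (read the current glucose level, adjust insulin delivery, repeat) can be illustrated with a deliberately simplified proportional controller. The constants and function below are made up for the sketch and bear no relation to any real product's dosing algorithm:

```python
TARGET_MG_DL = 120    # hypothetical target glucose level (mg/dL)
GAIN = 0.01           # hypothetical proportional gain (U/hr per mg/dL)
BASAL_U_PER_HR = 1.0  # hypothetical baseline basal rate

def basal_rate(glucose_mg_dl: float) -> float:
    """Raise the insulin rate above target, cut it below, never go negative."""
    adjustment = GAIN * (glucose_mg_dl - TARGET_MG_DL)
    return max(0.0, BASAL_U_PER_HR + adjustment)

for reading in (90, 120, 180, 250):  # simulated CGM readings (mg/dL)
    print(reading, "->", round(basal_rate(reading), 2), "U/hr")
```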
In the military
Military research organizations have recently focused on the use of cyborg animals for supposed tactical advantage. DARPA has announced its interest in developing "cyborg insects" that transmit data from sensors implanted into the insect during the pupal stage. The insect's motion would be controlled by a microelectromechanical system (MEMS), and the insect could conceivably survey an environment or detect explosives and gases. Similarly, DARPA is developing a neural implant to remotely control the movement of sharks. The shark's unique senses would then be exploited to provide data feedback on enemy ship movements or underwater explosives.
In 2006, researchers at Cornell University invented a new surgical procedure to implant artificial structures into insects during their metamorphic development. The first insect cyborgs, moths with integrated electronics in their thorax, were demonstrated by the same researchers. The initial success of the techniques has resulted in increased research and the creation of a program called Hybrid-Insect-MEMS (HI-MEMS). Its goal, according to DARPA's Microsystems Technology Office, is to develop "tightly coupled machine-insect interfaces by placing micro-mechanical systems inside the insects during the early stages of metamorphosis."
The use of neural implants has recently been attempted, with success, on cockroaches. Surgically applied electrodes were placed on the insect, which was then remotely controlled by a human. Although the results were sometimes inconsistent, they showed that the cockroach could be controlled by the impulses it received through the electrodes. DARPA is now funding this research because of its obvious beneficial applications to the military and other areas.
In 2009 at the Institute of Electrical and Electronics Engineers (IEEE) MEMS conference in Italy, researchers demonstrated the first "wireless" flying-beetle cyborg. Engineers at the University of California, Berkeley, have pioneered the design of a "remote-controlled beetle", funded by the DARPA HI-MEMS Program. This was followed later that year by the demonstration of wireless control of a "lift-assisted" moth-cyborg.
Eventually researchers plan to develop HI-MEMS for dragonflies, bees, rats, and pigeons. For the HI-MEMS cybernetic bug to be considered a success, it must fly from a starting point, guided via computer into a controlled landing within a set distance of a specific end point. Once landed, the cybernetic bug must remain in place.
In 2020, an article published in Science Robotics by researchers at the University of Washington reported a mechanically steerable wireless camera attached to beetles. Miniature cameras weighing 248 mg were attached to live beetles of the Tenebrionid genera Asbolus and Eleodes. The camera wirelessly streamed video to a smartphone via Bluetooth for up to 6 hours and the user could remotely steer the camera to achieve a bug's-eye view.
In sports
In 2016, Cybathlon, held in Zurich, Switzerland, became the first cyborg 'Olympics': the first worldwide, official competition of cyborg sports. In this event, 16 teams of people with disabilities used technological developments to turn themselves into cyborg athletes. There were six different events, and competitors used and controlled advanced technologies such as powered prosthetic legs and arms, robotic exoskeletons, bikes, and motorized wheelchairs.
This was already a remarkable step forward, allowing disabled people to compete and showcasing the technological enhancements that are already making a difference; however, it also showed that there is still a long way to go. For instance, the exoskeleton race required only that participants stand up from a chair and sit down, navigate a slalom, and complete other simple activities such as walking over stepping stones and climbing up and down stairs. Despite the simplicity of these activities, 8 of the 16 teams that entered the event dropped out before the start.
Nonetheless, one of the main goals of this event and of such simple activities is to show how technological enhancements and advanced prosthetics can make a difference in people's lives. The next Cybathlon, which was expected to occur in 2020, was cancelled due to the coronavirus pandemic.
In art
The concept of the cyborg is often associated with science fiction. However, many artists have incorporated and reappropriated the idea of cybernetic organisms into their work, using disparate aesthetics and often realising actual cyborg constructs; their works range from performances to paintings and installations. Some of the pioneering artists who created such works are H. R. Giger, Stelarc, Orlan, Shu Lea Cheang, Lee Bul, Tim Hawkinson, Steve Mann, and Patricia Piccinini. More recently, this type of artistic practice has been expanded upon by artists such as Marco Donnarumma, Wafaa Bilal, Neil Harbisson, Moon Ribas, Manel De Aguas and Quimera Rosa.
Stelarc is a performance artist who has visually probed and acoustically amplified his body. He uses medical instruments, prosthetics, robotics, virtual reality systems, the Internet and biotechnology to explore alternate, intimate and involuntary interfaces with the body. He has made three films of the inside of his body and has performed with a third hand and a virtual arm. Between 1976 and 1988 he completed 25 body suspension performances with hooks into the skin. For 'Third Ear', he surgically constructed an extra ear within his arm that was internet-enabled, making it a publicly accessible acoustical organ for people in other places. He is presently performing as his avatar from his Second Life site.
Tim Hawkinson promotes the idea that bodies and machines are coming together as one, with human features combined with technology to create the cyborg. Hawkinson's piece Emoter presented how society has become dependent on technology.
Marco Donnarumma is a performance artist and new media artist. In his work the body becomes a morphing language to speak critically of ritual, power and technology. For his "7 Configurations" cycle, between 2014 and 2019, he engineered and created six AI prostheses, each embodying an uncanny configuration of the machinic with the organic. The prostheses – designed together with a team of artists and scientists – are useless prostheses, paradoxical objects designed for the body, but not to enhance it, rather to subtract functions from it: a skin-cutting robot with a steel knife, a facial prosthesis that blocks the wearer's gaze with a mechanical arm, and two robotic spines that function as additional limbs without a body. The prostheses have been created to act as performers with their own agency, that is, to interact with their human partners without being controlled externally. The machines are embedded with biomimetic neural networks, information processing algorithms inspired by the biological nervous system of mammals. Developed by Donnarumma in collaboration with the Neurorobotics Research Laboratory (DE), these neural networks endow the machines with artificial cognitive and sensorimotor skills.
Wafaa Bilal is an Iraqi-American performance artist who had a small 10-megapixel digital camera surgically implanted into the back of his head, part of a project entitled 3rd I. For one year, beginning 15 December 2010, an image was captured once per minute 24 hours a day and streamed live to a website and to the Mathaf: Arab Museum of Modern Art. The site also displays Bilal's location via GPS. Bilal says that the reason why he put the camera in the back of the head was to make an "allegorical statement about the things we don't see and leave behind." As a professor at NYU, this project raised privacy issues, and so Bilal was asked to ensure that his camera did not take photographs in NYU buildings.
Machines are becoming more ubiquitous in the artistic process itself, with computerized drawing pads replacing pen and paper, and drum machines becoming nearly as popular as human drummers. Composers such as Brian Eno have developed and used software that can build entire musical scores from a few basic mathematical parameters.
Scott Draves is a generative artist whose work is explicitly described as a "cyborg mind". His Electric Sheep project generates abstract art by combining the work of many computers and people over the internet.
Artists as cyborgs
Artists have explored the term cyborg from a perspective involving imagination. Some work to make the abstract idea of a union between technology and the human body a reality in art form, using varying mediums, from sculptures and drawings to digital renderings.
Artists who seek to make cyborg-based fantasies a reality often call themselves cyborg artists, or may consider their artwork "cyborg". How an artist or their work may be considered cyborg will vary depending upon the interpreter's flexibility with the term.
Scholars who rely upon a strict, technical description of a cyborg, often going by Norbert Wiener's cybernetic theory and Manfred E. Clynes and Nathan S. Kline's first use of the term, would likely argue that most cyborg artists do not qualify as cyborgs. Scholars considering a more flexible description of cyborgs may argue that it incorporates more than cybernetics. Others may speak of defining subcategories, or specialized cyborg types, that qualify the different levels at which technology influences an individual. These may range from technological instruments being external, temporary, and removable to being fully integrated and permanent. Nonetheless, cyborg artists are artists first, so it can be expected that they incorporate the cyborg idea rather than a strict, technical representation of the term, since their work will sometimes revolve around purposes other than cyborgism.
In body modification
As medical technology becomes more advanced, some techniques and innovations are adopted by the body modification community. While not yet cyborgs in the strict definition of Manfred Clynes and Nathan Kline, technological developments like implantable silicon silk electronics, augmented reality and QR codes are bridging the disconnect between technology and the body. Hypothetical technologies such as digital tattoo interfaces would blend body modification aesthetics with interactivity and functionality, bringing a transhumanist way of life into present-day reality.
In addition, anxiety is quite plausible: individuals may experience pre-implantation feelings of fear and nervousness. They may also feel uneasy, particularly in social settings, because of their post-operative, technologically augmented bodies and others' unfamiliarity with the mechanical insertion. Such anxieties may be linked to notions of otherness or a cyborged identity.
In space
Sending humans to space is a dangerous undertaking, and various cyborg technologies could one day be used for risk mitigation. Stephen Hawking, a renowned physicist, stated "Life on Earth is at the ever-increasing risk of being wiped out by a disaster such as sudden global warming, nuclear war ... I think the human race has no future if it doesn't go into space." The difficulties associated with space travel could mean it might be centuries before humans ever become a multi-planet species. There are many effects of spaceflight on the human body. One major issue of space exploration is the biological need for oxygen. If this necessity were taken out of the equation, space exploration would be revolutionized. A theory proposed by Manfred E. Clynes and Nathan S. Kline aims to tackle this problem. The two scientists theorized that the use of an inverse fuel cell that is "capable of reducing CO2 to its components with the removal of the carbon and re-circulation of the oxygen ..." could make breathing unnecessary. Another prominent issue is radiation exposure. Yearly, the average human on Earth is exposed to approximately 0.30 rem of radiation, while an astronaut aboard the International Space Station for 90 days is exposed to 9 rem. To tackle the issue, Clynes and Kline theorized a cyborg containing a sensor that would detect radiation levels and a Rose osmotic pump "which would automatically inject protective pharmaceuticals in appropriate doses." Experiments injecting these protective pharmaceuticals into monkeys have shown positive results in increasing radiation resistance.
Although the effects of spaceflight on our bodies are an important issue, the advancement of propulsion technology is just as important. With current technology, it would take about 260 days to get to Mars. A NASA-backed study proposes an interesting way to tackle this issue through deep sleep, or torpor: the technique would "reduce astronauts' metabolic functions with existing medical procedures." So far, experiments have only kept patients in a torpor state for one week. Advancements allowing longer states of deep sleep would lower the cost of a trip to Mars as a result of reduced astronaut resource consumption.
In cognitive science
Theorists such as Andy Clark suggest that interactions between humans and technology result in the creation of a cyborg system. In this model, cyborg is defined as a part-biological, part-mechanical system that results in the augmentation of the biological component and the creation of a more complex whole. Clark argues that this broadened definition is necessary to an understanding of human cognition. He suggests that any tool which is used to offload part of a cognitive process may be considered the mechanical component of a cyborg system. Examples of this human and technology cyborg system can be very low tech and simplistic, such as using a calculator to perform basic mathematical operations or pen and paper to make notes, or as high tech as using a personal computer or phone. According to Clark, these interactions between a person and a form of technology integrate that technology into the cognitive process in a way that is analogous to the way that a technology that would fit the traditional concept of cyborg augmentation becomes integrated with its biological host. Because all humans in some way use technology to augment their cognitive processes, Clark comes to the conclusion that we are "natural-born cyborgs." Professor Donna Haraway also theorizes that people, metaphorically or literally, have been cyborgs since the late twentieth century. If one considers the mind and body as one, much of humanity is aided with technology in almost every way, which hybridizes humans with technology.
Future scope and regulation of implantable technologies
Given the technical scope of current and future implantable sensory/telemetric devices, such devices are likely to proliferate widely and to have connections to commercial, medical, and governmental networks. For example, in the medical sector, patients would be able to log in to their home computer and thus visit virtual doctor's offices and medical databases, and receive medical prognoses from the comfort of their own home based on the data collected through their implanted telemetric devices. However, such an online network presents large security concerns, because several U.S. universities have demonstrated that hackers could get onto these networks and shut down people's electronic prosthetics. Cyborg data mining refers to the collection of data produced by implantable devices.
These sorts of technologies are already present in the U.S. workforce: a firm in River Falls, Wisconsin, called Three Square Market partnered with the Swedish firm Biohax International to implant RFID microchips (about the size of a grain of rice) in the hands of its employees, allowing them to access offices, computers, and even vending machines. More than 50 of the firm's 85 employees were chipped. It was confirmed that the U.S. Food and Drug Administration approved of these implantations. If these devices are to proliferate within society, the question that must be answered is which regulatory agency will oversee the operation, monitoring, and security of these devices. Judging by the case study of Three Square Market, the FDA appears to be assuming a role in regulating and monitoring these devices. It has been argued that a new regulatory framework needs to be developed so that the law keeps up with developments in implantable technologies.
Cyborg Foundation
In 2010, the Cyborg Foundation became the world's first international organization dedicated to helping humans become cyborgs. The foundation was created by cyborg Neil Harbisson and Moon Ribas as a response to the growing number of letters and emails received from people around the world interested in becoming cyborgs. The foundation's main aims are to extend human senses and abilities by creating and applying cybernetic extensions to the body, to promote the use of cybernetics in cultural events and to defend cyborg rights. In 2010, the foundation, based in Mataró (Barcelona), was the overall winner of the Cre@tic Awards, organized by Tecnocampus Mataró.
In 2012, Spanish film director Rafel Duran Torrent created a short film about the Cyborg Foundation. In 2013, the film won the Grand Jury Prize at the Sundance Film Festival's Focus Forward Filmmakers Competition and was awarded US$100,000.
In fiction
Cyborgs are a recurring feature of science fiction literature and other media.
Animal cyborgs
The US-based company Backyard Brains released what they refer to as the "world's first commercially available cyborg", called the RoboRoach. The project started as a senior design project for a University of Michigan biomedical engineering student in 2010 and was launched as an available beta product on 25 February 2011. The RoboRoach was officially released into production via a TED talk at the TED Global conference and via the crowdfunding website Kickstarter in 2013. The kit allows students to use microstimulation to momentarily control the movements of a walking cockroach (left and right) using a Bluetooth-enabled smartphone as the controller.
Other groups have developed cyborg insects, including researchers at North Carolina State University, UC Berkeley, and Nanyang Technological University, Singapore, but the RoboRoach was the first kit available to the general public; it was funded by the National Institute of Mental Health as a teaching aid to promote an interest in neuroscience. Several animal welfare organizations, including the RSPCA and PETA, have expressed concerns about the ethics and welfare of animals in this project. In 2022, remote-controlled cyborg cockroaches that remain functional by moving (or being moved) into sunlight to recharge were presented. They could be used, for example, to inspect hazardous areas or to quickly find humans beneath hard-to-access rubble at disaster sites.
In the late 2010s, scientists created cyborg jellyfish using a microelectronic prosthetic that propels the animal to swim almost three times faster while using just twice the metabolic energy of its unmodified peers. The prosthetic can be removed without harming the jellyfish.
Bacterial cyborg cells
A combination of synthetic biology, nanotechnology, and materials science approaches has been used to create several different iterations of bacterial cyborg cells. These mechanically enhanced bacteria are created with so-called bionic manufacturing principles that combine natural cells with abiotic materials. In 2005, researchers from the Department of Chemical Engineering at the University of Nebraska–Lincoln created a supersensitive humidity sensor by coating the bacterium Bacillus cereus with gold nanoparticles; they were the first to use a microorganism to make an electronic device, and this was presumably the first cyborg bacterium, or "cellborg", circuit. Researchers from the Department of Chemistry at the University of California, Berkeley published a series of articles in 2016 describing the development of cyborg bacteria capable of harvesting sunlight more efficiently than plants. In the first study, the researchers induced the self-photosensitization of a nonphotosynthetic bacterium, Moorella thermoacetica, with cadmium sulfide nanoparticles, enabling the photosynthesis of acetic acid from carbon dioxide. A follow-up article described the elucidation of the mechanism of semiconductor-to-bacterium electron transfer that allows the transformation of carbon dioxide and sunlight into acetic acid. Scientists of the Department of Biomedical Engineering at the University of California, Davis and Academia Sinica in Taiwan developed a different approach to creating cyborg cells by assembling a synthetic hydrogel inside the bacterial cytoplasm of Escherichia coli cells, rendering them incapable of dividing and making them resistant to environmental factors, antibiotics, and high oxidative stress. The intracellular infusion of synthetic hydrogel provides these cyborg cells with an artificial cytoskeleton, and their acquired tolerance makes them well placed to become a new class of drug-delivery systems positioned between classical synthetic materials and cell-based systems.
| Technology | Biotechnology | null |
10518546 | https://en.wikipedia.org/wiki/Technology%20life%20cycle | Technology life cycle | The technology life cycle (TLC) describes the commercial gain of a product through the expense of its research and development phase and the financial return during its "vital life". Some technologies, such as steel, paper or cement manufacturing, have a long lifespan (with minor variations in technology incorporated over time), while in other cases, such as electronic or pharmaceutical products, the lifespan may be quite short.
The TLC associated with a product or technological service is different from product life-cycle (PLC) dealt with in product life-cycle management. The latter is concerned with the life of a product in the marketplace with respect to timing of introduction, marketing measures, and business costs. The technology underlying the product (for example, that of a uniquely flavoured tea) may be quite marginal but the process of creating and managing its life as a branded product will be very different.
The technology life cycle is concerned with the time and cost of developing the technology, the timeline of recovering cost, and modes of making the technology yield a profit proportionate to the costs and risks involved. The TLC may, further, be protected during its cycle with patents and trademarks seeking to lengthen the cycle and to maximize the profit from it.
The product of the technology may be a commodity such as polyethylene plastic or a sophisticated product like the integrated circuits used in a smartphone.
The development of a competitive product or process can have a major effect on the lifespan of the technology, making it longer. Equally, the loss of intellectual property rights through litigation or loss of its secret elements (if any) through leakages also work to reduce a technology's lifespan. Thus, it is apparent that the management of the TLC is an important aspect of technology development.
Most new technologies follow a similar technology maturity life cycle describing the technological maturity of a product. This is not similar to a product life cycle, but applies to an entire technology, or a generation of a technology.
Technology adoption is the most common phenomenon driving the evolution of industries along the industry life cycle. Industries first expand through new uses of resources and then exhaust the efficiency of those processes; the resulting gains are at first easier and larger, then become progressively more difficult to achieve as the technology matures.
Four phases
The Soviet economist Nikolai Kondratiev was the first to observe technology life cycles in his book The Major Economic Cycles (1925). Today, these cycles are called Kondratiev waves, the predecessor of the TLC. The TLC is composed of four phases:
The research and development (R&D) phase (sometimes called the "bleeding edge") when incomes from inputs are negative and where the prospects of failure are high
The ascent phase when out-of-pocket costs have been recovered and the technology begins to gather strength by going beyond some Point A on the TLC (sometimes called the "leading edge")
The maturity phase when gain is high and stable, the region, going into saturation, marked by M, and
The decline (or decay) phase, after a Point D, of reducing fortunes and utility of the technology.
S-curve
The shape of the technology life cycle is often referred to as an S-curve.
Technology perception dynamics
There is usually technology hype at the introduction of any new technology, but only after some time has passed can it be judged as mere hype or justified true acclaim.
Because of the logistic curve nature of technology adoption, it is difficult to see in the early stages whether the hype is excessive.
Similarly, in the later stages, the opposite mistakes can be made relating to the possibilities of technology maturity and market saturation.
The technology adoption life cycle typically occurs in an S curve, as modelled in diffusion of innovations theory. This is because customers respond to new products in different ways. Diffusion of innovations theory, pioneered by Everett Rogers, posits that people have different levels of readiness for adopting new innovations and that the characteristics of a product affect overall adoption. Rogers classified individuals into five groups: innovators, early adopters, early majority, late majority, and laggards. In terms of the S curve, innovators occupy 2.5%, early adopters 13.5%, early majority 34%, late majority 34%, and laggards 16%.
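The S-curve and Rogers' percentile cut-offs can be illustrated with a small sketch. The logistic function below is a common illustrative model of cumulative adoption, not a fitted empirical curve, and the parameter values are arbitrary.

```python
# Illustrative logistic model of cumulative technology adoption,
# segmented by Rogers' adopter categories (2.5 / 13.5 / 34 / 34 / 16 %).
import math

def cumulative_adoption(t: float, k: float = 1.0, t_mid: float = 0.0) -> float:
    """Fraction of the population that has adopted by time t (the S-curve)."""
    return 1.0 / (1.0 + math.exp(-k * (t - t_mid)))

# Rogers' categories expressed as cumulative-share boundaries.
CATEGORIES = [
    ("innovators", 0.025),
    ("early adopters", 0.16),   # 2.5% + 13.5%
    ("early majority", 0.50),   # + 34%
    ("late majority", 0.84),    # + 34%
    ("laggards", 1.00),         # + 16%
]

def adopter_category(t: float) -> str:
    """Which adopter group is being reached at time t under this model."""
    share = cumulative_adoption(t)
    for name, boundary in CATEGORIES:
        if share <= boundary:
            return name
    return "laggards"

print(adopter_category(-4.0))  # innovators (about 1.8% adoption)
print(adopter_category(0.0))   # early majority (the 50% midpoint)
```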
The four stages of technology life cycle are as follows:
Innovation stage: This stage represents the birth of a new product, material, or process resulting from R&D activities. In R&D laboratories, new ideas are generated depending on needs and knowledge factors. Depending on resource allocation and also the chance element, the time taken in the innovation stage, as well as in the subsequent stages, varies widely.
Syndication stage: This stage represents the demonstration and commercialisation of a new technology, such as a product, material, or process, with potential for immediate utilisation. Many innovations are put on hold in R&D laboratories, and only a very small percentage of these are commercialised. Commercialisation of research outcomes depends on technical as well as non-technical, mostly economic, factors.
Diffusion stage: This represents the market penetration of a new technology through acceptance of the innovation by potential users of the technology. Supply-side and demand-side factors jointly influence the rate of diffusion.
Substitution stage: This last stage represents the decline in the use, and the eventual extinction, of a technology due to its replacement by another technology. Many technical and non-technical factors influence the rate of substitution. The time taken in the substitution stage depends on the market dynamics.
Licensing options
Large corporations develop technology for their own benefit and not with the objective of licensing. The tendency to license out technology only appears when there is a threat to the life of the TLC (business gain) as discussed later.
In the R&D phase
There are always smaller firms (SMEs) that are inadequately placed to finance the development of innovative R&D in the post-research and early technology phases. If incipient technology is shared under certain conditions, substantial risk financing can come from third parties. This is a form of quasi-licensing which takes different formats. Even large corporations may not wish to bear all the costs of development in areas of significant and high risk (e.g. aircraft development) and may seek means of spreading that risk until proof of concept is obtained.
In the case of small and medium firms, entities such as venture capitalists or business angels, can enter the scene and help to materialize technologies. Venture capitalists accept both the costs and uncertainties of R&D, and that of market acceptance, in reward for high returns when the technology proves itself. Apart from finance, they may provide networking, management and marketing support. Venture capital connotes financial as well as human capital.
Larger firms may opt for Joint R&D or work in a consortium for the early phase of development. Such vehicles are called strategic alliances – strategic partnerships.
With both venture capital funding and strategic (research) alliances, when business gains begin to neutralize development costs (the TLC crosses the X-axis), the ownership of the technology starts to undergo change.
In the case of smaller firms, venture capitalists help clients enter the stock market for obtaining substantially larger funds for development, maturation of technology, product promotion and to meet marketing costs. A major route is through initial public offering (IPO) which invites risk funding by the public for potential high gain. At the same time, the IPOs enable venture capitalists to attempt to recover expenditures already incurred by them through part sale of the stock pre-allotted to them (subsequent to the listing of the stock on the stock exchange). When the IPO is fully subscribed, the assisted enterprise becomes a corporation and can more easily obtain bank loans, etc. if needed.
Strategic alliance partners, allied on research, pursue separate paths of development with the incipient technology of common origin but pool their accomplishments through instruments such as 'cross-licensing'. Generally, contractual provisions among the members of the consortium allow a member to exercise the option of independent pursuit after joint consultation, in which case the optee owns all subsequent development.
In the ascent phase
The ascent stage of the technology usually refers to some point above Point A in the TLC diagram, but it actually commences when the R&D portion of the TLC curve inflects (the cash flow merely remains negative and unremunerative until Point A). The ascent is the strongest phase of the TLC because it is here that the technology is superior to alternatives and can command premium profit or gain. The slope and duration of the ascent depend on competing technologies entering the domain, although they may not be as successful in that period. Strongly patented technology extends the duration period.
The TLC begins to flatten out (the region shown as M) when equivalent or challenging technologies enter the competitive space and begin to eat away market share.
Until this stage is reached, the technology-owning firm tends to enjoy its profitability exclusively, preferring not to license the technology. If an overseas opportunity does present itself, the firm would prefer to set up a controlled subsidiary rather than license a third party.
In the maturity phase
The maturity phase of the technology is a period of stable and remunerative income but its competitive viability can persist over the larger timeframe marked by its 'vital life'. However, there may be a tendency to license out the technology to third parties during this stage to lower risk of decline in profitability (or competitivity) and to expand financial opportunity.
The exercise of this option is, generally, inferior to seeking participatory exploitation; in other words, engagement in a joint venture, typically in regions where the technology would be in the ascent phase, such as a developing country. In addition to providing financial opportunity, it allows the technology owner a degree of control over the technology's use. Gain flows from the two streams of investment-based and royalty incomes. Further, the vital life of the technology is enhanced by such a strategy.
In the decline phase
After reaching a point such as D in the above diagram, the earnings from the technology begin to decline rather rapidly. To prolong the life cycle, owners of technology might try to license it out at some point L when it can still be attractive to firms in other markets. This, then, traces the lengthening path, LL'. Further, since the decline is the result of competing rising technologies in this space, licensees may be attracted to the generally lower cost of the older technology (compared with what prevailed during its vital life).
Licenses obtained in this phase are 'straight licenses'. They are free of direct control from the owner of the technology (as would otherwise apply, say, in the case of a joint-venture). Further, there may be fewer restrictions placed on the licensee in the employment of the technology.
The utility, viability, and thus the cost of straight-licenses depends on the estimated 'balance life' of the technology. For instance, should the key patent on the technology have expired, or would expire in a short while, the residual viability of the technology may be limited, although balance life may be governed by other criteria such as knowhow which could have a longer life if properly protected.
The licensee has no way of knowing the stage at which the prime, and competing, technologies are on their TLCs. It would be evident to competing licensor firms, and to the originator, from the growth, saturation or decline of the profitability of their operations.
The licensee may, however, be able to approximate the stage by vigorously negotiating with the licensor and competitors to determine costs and licensing terms. A lower cost, or easier terms, may imply a declining technology.
In any case, access to technology in the decline phase is a large risk that the licensee accepts. (In a joint venture this risk is substantially reduced by the licensor sharing it.) Sometimes, financial guarantees from the licensor may work to reduce such risk and can be negotiated.
There are instances when, even though the technology declines to becoming a technique, it may still contain important knowledge or experience which the licensee firm cannot learn of without help from the originator. This is often the form that technical service and technical assistance contracts take (encountered often in developing country contracts). Alternatively, consulting agencies may fill this role.
Technology development cycle
According to the Encyclopedia of Earth, "In the simplest formulation, innovation can be thought of as being composed of research, development, demonstration, and deployment."
Technology development cycle describes the process of a new technology through the stages of technological maturity:
Research and development
Scientific demonstration
System deployment
Diffusion
| Technology | General | null |
10520679 | https://en.wikipedia.org/wiki/Multithreading%20%28computer%20architecture%29 | Multithreading (computer architecture) | In computer architecture, multithreading is the ability of a central processing unit (CPU) (or a single core in a multi-core processor) to provide multiple threads of execution.
Overview
The multithreading paradigm has become more popular as efforts to further exploit instruction-level parallelism have stalled since the late 1990s. This allowed the concept of throughput computing to re-emerge from the more specialized field of transaction processing. Even though it is very difficult to further speed up a single thread or single program, most computer systems are actually multitasking among multiple threads or programs. Thus, techniques that improve the throughput of all tasks result in overall performance gains.
Two major techniques for throughput computing are multithreading and multiprocessing.
Advantages
If a thread gets a lot of cache misses, the other threads can continue taking advantage of the unused computing resources, which may lead to faster overall execution, as these resources would have been idle if only a single thread were executed. Also, if a thread cannot use all the computing resources of the CPU (because instructions depend on each other's result), running another thread may prevent those resources from becoming idle.
Disadvantages
Multiple threads can interfere with each other when sharing hardware resources such as caches or translation lookaside buffers (TLBs). As a result, execution times of a single thread are not improved and can be degraded, even when only one thread is executing, due to lower frequencies or additional pipeline stages that are necessary to accommodate thread-switching hardware.
Overall efficiency varies; Intel claims up to 30% improvement with its Hyper-Threading Technology, while a synthetic program just performing a loop of non-optimized dependent floating-point operations actually gains a 100% speed improvement when run in parallel. On the other hand, hand-tuned assembly language programs using MMX or AltiVec extensions and performing data prefetches (as a good video encoder might) do not suffer from cache misses or idle computing resources. Such programs therefore do not benefit from hardware multithreading and can indeed see degraded performance due to contention for shared resources.
From the software standpoint, hardware support for multithreading is more visible to software, requiring more changes to both application programs and operating systems than multiprocessing. Hardware techniques used to support multithreading often parallel the software techniques used for computer multitasking. Thread scheduling is also a major problem in multithreading.
Merging data from two processes can often incur significantly higher costs compared to processing the same data on a single thread, potentially by two or more orders of magnitude due to overheads such as inter-process communication and synchronization.
Types of multithreading
Interleaved/Temporal multithreading
Coarse-grained multithreading
The simplest type of multithreading occurs when one thread runs until it is blocked by an event that normally would create a long-latency stall. Such a stall might be a cache miss that has to access off-chip memory, which might take hundreds of CPU cycles for the data to return. Instead of waiting for the stall to resolve, a threaded processor would switch execution to another thread that was ready to run. Only when the data for the previous thread had arrived, would the previous thread be placed back on the list of ready-to-run threads.
For example:
Cycle i: instruction j from thread A is issued.
Cycle i + 1: instruction j + 1 from thread A is issued.
Cycle i + 2: instruction j + 2 from thread A is issued, which is a load instruction that misses in all caches.
Cycle i + 3: thread scheduler invoked, switches to thread B.
Cycle i + 4: instruction k from thread B is issued.
Cycle i + 5: instruction k + 1 from thread B is issued.
Conceptually, it is similar to cooperative multi-tasking used in real-time operating systems, in which tasks voluntarily give up execution time when they need to wait upon some type of event. This type of multithreading is known as block, cooperative or coarse-grained multithreading.
The goal of multithreading hardware support is to allow quick switching between a blocked thread and another thread ready to run. Switching from one thread to another means the hardware switches from using one register set to another. To achieve this goal, the hardware for the program visible registers, as well as some processor control registers (such as the program counter), is replicated. For example, to quickly switch between two threads, the processor is built with two sets of registers.
Additional hardware support for multithreading allows thread switching to be done in one CPU cycle, bringing performance improvements. Also, additional hardware allows each thread to behave as if it were executing alone and not sharing any hardware resources with other threads, minimizing the amount of software changes needed within the application and the operating system to support multithreading.
Many families of microcontrollers and embedded processors have multiple register banks to allow quick context switching for interrupts. Such schemes can be considered a type of block multithreading among the user program thread and the interrupt threads.
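The block-switching policy described above can be mimicked by a toy software model. The sketch below is an illustration of the scheduling idea only, not of real hardware: the instruction format and the 'LOAD_MISS' marker are invented for the example, and the model simply assumes a stalled thread is ready again by the time its turn comes around.

```python
# Toy model of coarse-grained (block) multithreading: run one thread
# until it hits a long-latency event, then switch to the next thread.
from collections import deque

def run_block_multithreaded(threads: dict[str, list[str]]) -> list[str]:
    """Return a per-cycle issue trace for the given instruction streams."""
    ready = deque(threads)                                # thread names
    streams = {name: deque(insns) for name, insns in threads.items()}
    trace = []
    while ready:
        current = ready[0]
        if not streams[current]:
            ready.popleft()                               # thread finished
            continue
        insn = streams[current].popleft()
        trace.append(f"{current}: {insn}")
        if insn == "LOAD_MISS":                           # long-latency stall
            ready.rotate(-1)                              # switch threads
    return trace

for line in run_block_multithreaded({
    "A": ["ADD", "MUL", "LOAD_MISS", "SUB"],
    "B": ["ADD", "LOAD_MISS", "MUL"],
}):
    print(line)
# Thread A runs until its cache miss, then B runs until its miss, and so on.
```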
Fine-grained multithreading
The purpose of fine-grained multithreading is to remove all data dependency stalls from the execution pipeline. Since one thread is relatively independent from other threads, there is less chance of one instruction in one pipelining stage needing an output from an older instruction in the pipeline. Conceptually, it is similar to preemptive multitasking used in operating systems; an analogy would be that the time slice given to each active thread is one CPU cycle.
For example:
Cycle i + 1: an instruction from thread B is issued.
Cycle i + 2: an instruction from thread C is issued.
This type of multithreading was first called barrel processing, in which the staves of a barrel represent the pipeline stages and their executing threads. Interleaved, preemptive, fine-grained or time-sliced multithreading are more modern terminology.
In addition to the hardware costs discussed in the block type of multithreading, interleaved multithreading has an additional cost of each pipeline stage tracking the thread ID of the instruction it is processing. Also, since there are more threads being executed concurrently in the pipeline, shared resources such as caches and TLBs need to be larger to avoid thrashing between the different threads.
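In the same toy style, fine-grained multithreading reduces to a round-robin issue policy: one instruction from a different thread each cycle. Again this sketches the policy, not real pipeline hardware.

```python
# Toy model of fine-grained (interleaved) multithreading: issue one
# instruction from a different thread every cycle, round-robin.
from itertools import cycle

def run_fine_grained(threads: dict[str, list[str]]) -> list[str]:
    streams = {name: list(insns) for name, insns in threads.items()}
    order = cycle(streams)                    # round-robin over thread names
    trace = []
    while any(streams.values()):
        name = next(order)
        if streams[name]:                     # skip threads that finished
            trace.append(f"{name}: {streams[name].pop(0)}")
    return trace

print(run_fine_grained({"A": ["ADD", "MUL"], "B": ["SUB", "DIV"]}))
# ['A: ADD', 'B: SUB', 'A: MUL', 'B: DIV']
```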
Simultaneous multithreading
The most advanced type of multithreading applies to superscalar processors. Whereas a normal superscalar processor issues multiple instructions from a single thread every CPU cycle, in simultaneous multithreading (SMT) a superscalar processor can issue instructions from multiple threads every CPU cycle. Recognizing that any single thread has a limited amount of instruction-level parallelism, this type of multithreading tries to exploit parallelism available across multiple threads to decrease the waste associated with unused issue slots.
For example:
Cycle i: instructions j and j + 1 from thread A and instruction k from thread B are simultaneously issued.
Cycle i + 1: instruction j + 2 from thread A, instruction k + 1 from thread B, and instruction m from thread C are all simultaneously issued.
Cycle i + 2: instruction j + 3 from thread A and instructions m + 1 and m + 2 from thread C are all simultaneously issued.
To distinguish the other types of multithreading from SMT, the term "temporal multithreading" is used to denote when instructions from only one thread can be issued at a time.
In addition to the hardware costs discussed for interleaved multithreading, SMT has the additional cost of each pipeline stage tracking the thread ID of each instruction being processed. Again, shared resources such as caches and TLBs have to be sized for the large number of active threads being processed.
Implementations include DEC (later Compaq) EV8 (not completed), Intel Hyper-Threading Technology, IBM POWER5/POWER6/POWER7/POWER8/POWER9, IBM z13/z14/z15, Sun Microsystems UltraSPARC T2, Cray XMT, and AMD Bulldozer and Zen microarchitectures.
Implementation specifics
A major area of research is the thread scheduler that must quickly choose from among the list of ready-to-run threads to execute next, as well as maintain the ready-to-run and stalled thread lists. An important subtopic is the different thread priority schemes that can be used by the scheduler. The thread scheduler might be implemented totally in software, totally in hardware, or as a hardware/software combination.
Another area of research is what type of events should cause a thread switch: cache misses, inter-thread communication, DMA completion, etc.
If the multithreading scheme replicates all of the software-visible state, including privileged control registers and TLBs, then it enables virtual machines to be created for each thread. This allows each thread to run its own operating system on the same processor. On the other hand, if only user-mode state is saved, then less hardware is required, which would allow more threads to be active at one time for the same die area or cost.
| Technology | Computer architecture concepts | null |
10520836 | https://en.wikipedia.org/wiki/Hypogeusia | Hypogeusia | Hypogeusia can be defined as the reduced ability to taste things. Due to a lack of stratification, the prevalence of hypogeusia, as well as hyposmia, may not be accurately known. Additionally, reviews do not always make distinctions between ageusia and hypogeusia, often classifying them as the same in certain circumstances and studies. The severity of the loss of taste from hypogeusia is not clearly outlined in current research due to these reasons.
Causes
Covid-19
Covid-19 causes symptoms that affect the central nervous system (CNS), the peripheral nervous system (PNS), and skeletal muscle. Hypogeusia is classed as a neurological, PNS symptom and is the most frequently occurring PNS symptom, closely followed by anosmia. Because hypogeusia is a significant symptom of Covid-19, it is often accompanied by hyposmia, even when many other Covid-19 symptoms are absent; both can be considered early indications of a Covid-19 infection. Further, hypogeusia often develops following early symptoms of hyposmia, which usually results from olfactory epithelium damage caused by upper respiratory infections.
Oral cancer
Hypogeusia tied to oral cancer and tumors can affect sweet, sour, salty, and bitter tastes, but hypogeusia for bitter taste occurs significantly more often than for the other tastes. Inhibition of the gustatory papillae found in the base of the tongue, often due to oropharyngeal tumors, is thought to be the cause of this. Oral cancer treatments, such as chemotherapy, radiation therapy, and surgical treatments, are further causes of taste and smell loss, with up to 70% of oral cancer patients noting dysgeusia. Specifically, chemotherapies and radiation treatments may impair or damage various taste-related cells, and certain surgeries may even remove minor to major parts of the tongue, depending on the severity of the tumor.
Other
Nutritional zinc deficiency may cause various problems, hypogeusia being one of them. Chronic rhinosinusitis (CRS) may cause olfactory dysfunction as well as gustatory problems, with either or both leading to the noticeable presence of hypogeusia in CRS patients. The connection between hypogeusia and Parkinson's disease (PD) is less well described. PD patients have increased dysregulation of their taste receptors, as well as their olfactory receptors; in most cases, the receptors affected in PD patients are those associated with the perception of bitterness.
Treatment
Covid-19
Hypogeusia can serve as an early indicator of Covid-19, which can allow appropriate treatments to be administered to patients sooner.
Oral cancer
When treating oral cancer and related tumors, there is no clear treatment for hypogeusia. Precautions need to be studied and taken to prevent hypogeusia and related symptoms from forming. However, if the treatments have led to the formation of hypogeusia, then patient-specific nutrition plans may be used to manage the loss of taste.
Other
While zinc supplementation may treat certain taste dysfunctions, there is a lack of evidence for treatment regarding hypogeusia and dysgeusia not caused by low zinc concentrations in the body. While the mechanisms surrounding hypogeusia from PD are hypothesized, specific treatments are not researched enough. Similarly, while treatment of olfactory related issues is known in CRS research, the treatment of gustatory problems, including hypogeusia, are unknown.
| Biology and health sciences | Symptoms and signs | Health |
8135659 | https://en.wikipedia.org/wiki/Particle%20decay | Particle decay | In particle physics, particle decay is the spontaneous process of one unstable subatomic particle transforming into multiple other particles. The particles created in this process (the final state) must each be less massive than the original, although the total mass of the system must be conserved. A particle is unstable if there is at least one allowed final state that it can decay into. Unstable particles will often have multiple ways of decaying, each with its own associated probability. Decays are mediated by one or several fundamental forces. The particles in the final state may themselves be unstable and subject to further decay.
The term is typically distinct from radioactive decay, in which an unstable atomic nucleus is transformed into a lighter nucleus accompanied by the emission of particles or radiation, although the two are conceptually similar and are often described using the same terminology.
Probability of survival and particle lifetime
Particle decay is a Poisson process, and hence the probability that a particle survives for time $t$ before decaying (the survival function) is given by an exponential distribution whose time constant depends on the particle's velocity:

$P(t) = e^{-t/(\gamma\tau)}$

where

$\tau$ is the mean lifetime of the particle (when at rest), and
$\gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$ is the Lorentz factor of the particle.
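As a numeric illustration of this formula, consider a relativistic muon (the mean lifetime below is the standard measured value; the Lorentz factor and flight distance are arbitrary illustrative choices):

```python
# Survival probability P(t) = exp(-t / (gamma * tau)) for a muon
# crossing a fixed lab-frame distance.
import math

TAU_MUON = 2.197e-6   # muon mean lifetime at rest, in seconds
C = 2.998e8           # speed of light, m/s

def survival_probability(distance_m: float, gamma: float) -> float:
    """Probability that the particle survives a lab-frame flight of distance_m."""
    beta = math.sqrt(1.0 - 1.0 / gamma**2)   # v/c recovered from gamma
    t_lab = distance_m / (beta * C)          # lab-frame flight time
    return math.exp(-t_lab / (gamma * TAU_MUON))

# A gamma = 10 muon crossing 10 km of atmosphere survives about 22% of
# the time; without the factor gamma (no time dilation) it almost never would.
print(survival_probability(10_000.0, 10.0))
```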
Table of some elementary and composite particle lifetimes
All data are from the Particle Data Group.
Decay rate
This section uses natural units, where $c = \hbar = 1$.
The lifetime of a particle is given by the inverse of its decay rate, $\Gamma$, the probability per unit time that the particle will decay. For a particle of mass $M$ and four-momentum $P$ decaying into $n$ particles with momenta $p_i$, the differential decay rate is given by the general formula (expressing Fermi's golden rule)

$d\Gamma_n = \frac{S \left|\mathcal{M}\right|^2}{2M} \, d\Phi_n(P;\, p_1, p_2, \dots, p_n)$

where

$n$ is the number of particles created by the decay of the original,
$S$ is a combinatorial factor to account for indistinguishable final states (see below),
$\mathcal{M}$ is the invariant matrix element or amplitude connecting the initial state to the final state (usually calculated using Feynman diagrams),
$d\Phi_n$ is an element of the phase space, and
$p_i$ is the four-momentum of particle $i$.

The factor $S$ is given by

$S = \prod_{j=1}^{m} \frac{1}{k_j!}$

where

$m$ is the number of sets of indistinguishable particles in the final state, and
$k_j$ is the number of particles of type $j$, so that $\sum_{j=1}^{m} k_j = n$.

The phase space can be determined from

$d\Phi_n(P;\, p_1, \dots, p_n) = (2\pi)^4 \, \delta^4\!\left(P - \sum_{i=1}^{n} p_i\right) \prod_{i=1}^{n} \frac{d^3 \vec{p}_i}{(2\pi)^3 \, 2E_i}$

where

$\delta^4$ is a four-dimensional Dirac delta function,
$\vec{p}_i$ is the (three-)momentum of particle $i$, and
$E_i$ is the energy of particle $i$.

One may integrate over the phase space to obtain the total decay rate for the specified final state.
If a particle has multiple decay branches or modes with different final states, its full decay rate is obtained by summing the decay rates for all branches. The branching ratio for each mode is given by its decay rate divided by the full decay rate.
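As a small worked example with invented numbers: for a particle with two decay modes of partial widths $\Gamma_1$ and $\Gamma_2$,

$\Gamma_{\text{tot}} = \Gamma_1 + \Gamma_2, \qquad B_i = \frac{\Gamma_i}{\Gamma_{\text{tot}}},$

so if $\Gamma_1 = 3\ \text{MeV}$ and $\Gamma_2 = 1\ \text{MeV}$, then $\Gamma_{\text{tot}} = 4\ \text{MeV}$, the branching ratios are $B_1 = 0.75$ and $B_2 = 0.25$, and the lifetime is $\tau = 1/\Gamma_{\text{tot}}$ in natural units.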
Two-body decay
This section uses natural units, where $c = \hbar = 1$.
Decay rate
Say a parent particle of mass $M$ decays into two particles, labeled 1 and 2. In the rest frame of the parent particle,

$|\vec{p}_1| = |\vec{p}_2| = \frac{\left[\left(M^2 - (m_1 + m_2)^2\right)\left(M^2 - (m_1 - m_2)^2\right)\right]^{1/2}}{2M},$

which is obtained by requiring that four-momentum be conserved in the decay, i.e.

$(M, \vec{0}) = (E_1, \vec{p}_1) + (E_2, \vec{p}_2).$

Also, in spherical coordinates,

$d^3\vec{p} = |\vec{p}\,|^2 \, d|\vec{p}\,| \, d\phi \, d(\cos\theta).$

Using the delta function to perform the $d^3\vec{p}_2$ and $d|\vec{p}_1|$ integrals in the phase space for a two-body final state, one finds that the decay rate in the rest frame of the parent particle is

$d\Gamma = \frac{\left|\mathcal{M}\right|^2}{32\pi^2} \, \frac{|\vec{p}_1|}{M^2} \, d\phi \, d(\cos\theta).$
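As a quick consistency check of this result, take the illustrative special case of two massless daughters, $m_1 = m_2 = 0$, so that $|\vec{p}_1| = M/2$, and an angle-independent amplitude. Integrating over the full solid angle, $\int d\phi \, d(\cos\theta) = 4\pi$, gives

$\Gamma = \frac{|\mathcal{M}|^2}{32\pi^2} \cdot \frac{M/2}{M^2} \cdot 4\pi = \frac{|\mathcal{M}|^2}{16\pi M}.$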
From two different frames
The angle $\theta'$ of an emitted particle in the lab frame is related to the angle $\theta$ at which it is emitted in the center-of-momentum frame by the equation

$\tan\theta' = \frac{\sin\theta}{\gamma \left(\beta/\beta' + \cos\theta\right)},$

where $\beta$ and $\gamma$ describe the boost between the two frames and $\beta'$ is the speed of the emitted particle in the center-of-momentum frame.
Complex mass and decay rate
This section uses natural units, where $c = \hbar = 1$.
The mass of an unstable particle is formally a complex number, with the real part being its mass in the usual sense, and the imaginary part being its decay rate in natural units. When the imaginary part is large compared to the real part, the particle is usually thought of as a resonance more than a particle. This is because in quantum field theory a particle of mass $M$ (a real number) is often exchanged between two other particles when there is not enough energy to create it, if the time to travel between these other particles is short enough, of order $1/M$, according to the uncertainty principle. For a particle of mass $M + i\Gamma$, the particle can travel for time $1/M$, but decays after a time of order $1/\Gamma$. If $\Gamma > M$ then the particle usually decays before it completes its travel.
| Physical sciences | Particle physics: General | Physics |
8137250 | https://en.wikipedia.org/wiki/Multirole%20combat%20aircraft | Multirole combat aircraft | A multirole combat aircraft (MRCA) is a combat aircraft intended to perform different roles in combat. These roles can include air-to-air combat, air support, aerial bombing, reconnaissance, electronic warfare, and suppression of air defenses.
Definition
The term "multirole" was originally reserved for aircraft designed with the aim of using a common airframe for multiple tasks where the same basic airframe is adapted to a number of differing roles. The main motivation for developing multirole aircraft is cost reduction in using a common airframe.
More roles can be added, such as aerial reconnaissance, forward air control, and electronic-warfare aircraft. Attack missions include the subtypes air interdiction, suppression of enemy air defense (SEAD), and close air support (CAS).
Multirole vs air-superiority
Multirole has also been applied to aircraft with both major roles: a primary air-to-air combat role and a secondary role such as air-to-surface attack. However, aircraft designed with an emphasis on aerial combat are usually regarded as air superiority fighters and are usually deployed solely in that role, even though they are theoretically capable of ground attack. The Eurofighter Typhoon and Dassault Rafale are classified as multirole fighters; however, the Typhoon is frequently considered an air superiority fighter due to its greater dogfighting prowess, while its built-in strike capability has a lighter bomb load than contemporaries like the Rafale, which sacrifices some air-to-air ability for a heavier payload.
For the US Navy, the F-14 Tomcat was initially deployed solely as an air-superiority fighter, as well as fleet defense interceptor and tactical aerial reconnaissance. By contrast, the multirole F/A-18 Hornet was designed as strike fighter while having only enough of an edge to defend itself against enemy fighters if needed. While the F-14 had an undeveloped secondary ground attack capability (with a Stores Management System (SMS) that included air-to-ground options as well as rudimentary software in the AWG-9), the Navy did not want to risk it in the air-to-ground role at the time, due to its lack of proper defensive electronic countermeasures (DECM) and radar homing and warning (RHAW) for overland operations, as well as the fighter's high cost. In the 1990s, the US Navy added LANTIRN pods to its F-14s and deployed them on precision ground-attack missions.
Swing-role
Some aircraft, like the Saab JAS 39 Gripen, are called swing-role, to emphasize the ability of a quick role change, either at short notice, or even within the same mission. According to the Military Dictionary: "the ability to employ a multi-role aircraft for multiple purposes during the same mission."
According to BAE Systems, "an aircraft that can accomplish both air-to-air and air-to-surface roles on the same mission and swing between these roles instantly offers true flexibility. This reduces cost, increases effectiveness and enhances interoperability with allied air forces".
"[Swing-role] capability also offers considerable cost-of-ownership benefits to operational commanders."
History
Although the term "multirole aircraft" may be relatively novel, certain airframes in history have proven versatile to multiple roles. In particular, the Junkers Ju 88 was renowned in Germany for being a "jack-of-all-trades", capable of performing as a bomber, dive bomber, night fighter, and so on, much as the British de Havilland Mosquito did as a fast bomber/strike aircraft, reconnaissance, and night fighter. The Hawker Hart was also quite 'multirole' in its numerous variants, being designed as a light bomber but serving as an army cooperation aircraft, a two-seat fighter, a fleet spotter, a fighter-bomber (in fact it was probably the first) and a trainer.
The US joint forces F-4 Phantom II, built by McDonnell Douglas, also fits the definition of a multirole aircraft in its various configurations of the basic airframe design. The various F-4 Phantom II configurations were used in air-to-air, fighter-bomber, reconnaissance, and suppression of enemy air defenses (SEAD) mission roles, to name a few.
The first use of the term was by the multinational European project named Multi-Role Combat Aircraft, which was formed in 1968 to produce an aircraft capable of tactical strike, aerial reconnaissance, air defense, and maritime roles. The design was aimed to replace a multitude of different types in the cooperating air forces. The project produced the Panavia Tornado, which used the same basic design to undertake a variety of roles, the Tornado IDS (Interdictor/Strike) variant and later the Panavia Tornado ADV (Air Defence Variant). By contrast, the F-15 Eagle which was another fighter aircraft of that era was designed for air superiority and interception, with the mantra "not a pound for air to ground", although the F-15C did have a rarely used secondary ground attack capability. That program eventually evolved into the F-15E Strike Eagle interdictor/strike derivative which retained the air-to-air combat lethality of earlier F-15s.
The newest fighter jet that fits the definition of multirole is the Lockheed Martin F-35 Lightning II/Joint Strike Fighter, designed to perform stealth-based ground/naval strike, fighter, reconnaissance, and electronic warfare roles. Like a modern-day F-4, three variants of this aircraft fulfill the various strike and air defense roles among its joint service requirements: a standard variant intended to eventually replace the F-16 and A-10 in the USAF and other Western air forces, a STOVL version intended to replace the Harrier in US Marine Corps, British Royal Air Force, and Royal Navy service, and a carrier variant intended to eventually replace the older F/A-18C/D for the US Navy and other F/A-18 operators. The F-35's design goal can be compared with that of its larger and more air superiority-focused cousin, the F-22 Raptor.
Aircraft
Below is a list of some current examples.
| Technology | Military aviation | null |
8141106 | https://en.wikipedia.org/wiki/Amphioctopus%20marginatus | Amphioctopus marginatus | Amphioctopus marginatus, also known as the coconut octopus and veined octopus, is a medium-sized cephalopod belonging to the genus Amphioctopus. It is found in tropical waters of the western Pacific Ocean. It commonly preys upon shrimp, crabs, and clams, and displays unusual behavior including bipedal and quadrupedal walking as well as tool use (gathering coconut shells and seashells and using these for shelter).
Taxonomy
Amphioctopus marginatus is a species of octopus in the family Octopodidae, genus Amphioctopus. The species was first described in 1964 by the Japanese malacologist Iwao Taki as Octopus marginatus, a name now treated as a synonym of Amphioctopus marginatus. In 1976, Z. Dong named the species Octopus striolatus, but this name was not recognized as taxonomically valid.
Size and description
The main body of the octopus is normally long and, including the arms, approximately long. The octopus displays a typical color pattern of dark, ramified lines resembling veins, usually with a yellow siphon. The arms are usually dark in color, with contrasting white suckers. In many color displays, a lighter trapezoidal area can be seen immediately below the eye.
Behavior and habits
The species preys predominantly on Calappa crabs and bivalves. Eggs are laid in clutches of 100,000 and are in length.
Locomotion
In March 2005, researchers at the University of California, Berkeley, published an article in Science reporting that A. marginatus shows bipedal locomotion, or "stilt-walking", in which the octopus walks on two arms while the other six are held against the body to mimic the appearance of a floating coconut. This behavior was first observed off the coast of Sulawesi, Indonesia, where coconut shell litter is common. A. marginatus is one of only two octopus species known to display such behavior, the other being Abdopus aculeatus.
Tool use
In 2009, researchers from the Melbourne Museum in Australia observed the coconut octopus using tools for concealment and defense by gathering available debris to create a shelter. The researchers filmed octopuses collecting coconut half-shells, discarded by humans, from the sea floor, then carrying them up to and arranging them around the body to form a spherical hiding place similar to a clam shell. This behavior was observed in specimens in Bali and North Sulawesi, Indonesia, and is likely the first evidence of tool use in invertebrates. Other octopus species had been observed using shells for hiding, but this was the first case in which shells were prepared and collected for later use, in what the Melbourne Museum has described as "true tool use". Octopuses will often engage in bipedal motion when carrying stacks of debris or items larger than themselves.
Distribution
The coconut octopus is broadly distributed across neritic, tropical waters of the Indian Ocean, Red Sea, Northwest and Western Pacific Ocean, and Southeast Asian seas. Amphioctopus marginatus is listed as Least Concern on the IUCN Red List; while the species may be threatened by fishing, its wide distribution is considered sufficient to offset human impacts.
Habitat
The species prefers shallow, subtidal waters along the continental shelf. It occurs to a maximum depth of , and can often be found in mud and sand substrates.
| Biology and health sciences | Cephalopods | Animals |
19572217 | https://en.wikipedia.org/wiki/Influenza | Influenza | Influenza, commonly known as the flu, is an infectious disease caused by influenza viruses. Symptoms range from mild to severe and often include fever, runny nose, sore throat, muscle pain, headache, coughing, and fatigue. These symptoms begin one to four (typically two) days after exposure to the virus and last for about two to eight days. Diarrhea and vomiting can occur, particularly in children. Influenza may progress to pneumonia from the virus or a subsequent bacterial infection. Other complications include acute respiratory distress syndrome, meningitis, encephalitis, and worsening of pre-existing health problems such as asthma and cardiovascular disease.
There are four types of influenza virus: types A, B, C, and D. Aquatic birds are the primary source of influenza A virus (IAV), which is also widespread in various mammals, including humans and pigs. Influenza B virus (IBV) and influenza C virus (ICV) primarily infect humans, and influenza D virus (IDV) is found in cattle and pigs. Influenza A virus and influenza B virus circulate in humans and cause seasonal epidemics, and influenza C virus causes a mild infection, primarily in children. Influenza D virus can infect humans but is not known to cause illness. In humans, influenza viruses are primarily transmitted through respiratory droplets from coughing and sneezing. Transmission through aerosols and through surfaces contaminated by the virus also occurs.
Frequent hand washing and covering one's mouth and nose when coughing and sneezing reduce transmission, as does wearing a mask. Annual vaccination can help to provide protection against influenza. Influenza viruses, particularly influenza A virus, evolve quickly, so flu vaccines are updated regularly to match which influenza strains are in circulation. Vaccines provide protection against influenza A virus subtypes H1N1 and H3N2 and one or two influenza B virus lineages. Influenza infection is diagnosed with laboratory methods such as antibody or antigen tests and a polymerase chain reaction (PCR) to identify viral nucleic acid. The disease can be treated with supportive measures and, in severe cases, with antiviral drugs such as oseltamivir. In healthy individuals, influenza is typically self-limiting and rarely fatal, but it can be deadly in high-risk groups.
In a typical year, 5 to 15 percent of the population contracts influenza. There are 3 to 5 million severe cases annually, with up to 650,000 respiratory-related deaths globally each year. Deaths most commonly occur in high-risk groups, including young children, the elderly, and people with chronic health conditions. In temperate regions, the number of influenza cases peaks during winter, whereas in the tropics, influenza can occur year-round. Since the late 1800s, pandemic outbreaks of novel influenza strains have occurred every 10 to 50 years. Five flu pandemics have occurred since 1900: the Spanish flu from 1918 to 1920, which was the most severe; the Asian flu in 1957; the Hong Kong flu in 1968; the Russian flu in 1977; and the swine flu pandemic in 2009.
Signs and symptoms
The symptoms of influenza are similar to those of a cold, although usually more severe and less likely to include a runny nose. The time between exposure to the virus and development of symptoms (the incubation period) is one to four days, most commonly one to two days. Many infections are asymptomatic. The onset of symptoms is sudden, and initial symptoms are predominantly non-specific, including fever, chills, headaches, muscle pain, malaise, loss of appetite, lack of energy, and confusion. These are usually accompanied by respiratory symptoms such as a dry cough, sore or dry throat, hoarse voice, and a stuffy or runny nose. Coughing is the most common symptom. Gastrointestinal symptoms may also occur, including nausea, vomiting, diarrhea, and gastroenteritis, especially in children. The standard influenza symptoms typically last for two to eight days. Some studies suggest influenza can cause long-lasting symptoms in a similar way to long COVID.
Symptomatic infections are usually mild and limited to the upper respiratory tract, but progression to pneumonia is relatively common. Pneumonia may be caused by the primary viral infection or a secondary bacterial infection. Primary pneumonia is characterized by rapid progression of fever, cough, labored breathing, and low oxygen levels that cause bluish skin. It is especially common among those who have an underlying cardiovascular disease such as rheumatic heart disease. Secondary pneumonia typically has a period of improvement in symptoms for one to three weeks followed by recurrent fever, sputum production, and fluid buildup in the lungs, but can also occur just a few days after influenza symptoms appear. About a third of primary pneumonia cases are followed by secondary pneumonia, which is most frequently caused by the bacteria Streptococcus pneumoniae and Staphylococcus aureus.
Virology
Types of virus
Influenza viruses comprise four species, each the sole member of its own genus. The four influenza genera comprise four of the seven genera in the family Orthomyxoviridae. They are:
Influenza A virus, genus Alphainfluenzavirus
Influenza B virus, genus Betainfluenzavirus
Influenza C virus, genus Gammainfluenzavirus
Influenza D virus, genus Deltainfluenzavirus
Influenza A virus is responsible for most cases of severe illness as well as seasonal epidemics and occasional pandemics. It infects people of all ages but tends to disproportionately cause severe illness in the elderly, the very young, and those with chronic health issues. Birds are the primary reservoir of influenza A virus, especially aquatic birds such as ducks, geese, shorebirds, and gulls, but the virus also circulates among mammals, including pigs, horses, and marine mammals.
Subtypes of influenza A virus are defined by the combination of the antigenic viral proteins hemagglutinin (H) and neuraminidase (N) in the viral envelope; for example, "H1N1" designates an IAV subtype that has a type-1 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. Almost all possible combinations of H (1 through 16) and N (1 through 9) have been isolated from wild birds. In addition, H17, H18, N10, and N11 have been found in bats. The influenza A virus subtypes in circulation among humans are H1N1 and H3N2.
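To illustrate this combinatorial naming scheme, the subtype labels reported from wild birds can be enumerated mechanically. The short Python sketch below is purely illustrative; it covers only the avian combinations described above and omits the bat-specific H17, H18, N10, and N11.

# Enumerate influenza A subtype labels from the combinations isolated
# from wild birds: H1-H16 paired with N1-N9 (see text above).
avian_subtypes = [f"H{h}N{n}" for h in range(1, 17) for n in range(1, 10)]
print(len(avian_subtypes))   # 144 combinations
print(avian_subtypes[:3])    # ['H1N1', 'H1N2', 'H1N3']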
Influenza B virus mainly infects humans but has been identified in seals, horses, dogs, and pigs. Influenza B virus does not have subtypes like influenza A virus but has two antigenically distinct lineages, termed the B/Victoria/2/1987-like and B/Yamagata/16/1988-like lineages, or simply (B/)Victoria(-like) and (B/)Yamagata(-like). Both lineages are in circulation in humans, disproportionately affecting children. However, the B/Yamagata lineage might have become extinct in 2020/2021 due to COVID-19 pandemic measures. Influenza B viruses contribute to seasonal epidemics alongside influenza A viruses but have never been associated with a pandemic.
Influenza C virus, like influenza B virus, is primarily found in humans, though it has also been detected in pigs, dogs, dromedary camels, and cattle. Influenza C virus infection primarily affects children and is usually asymptomatic or causes mild cold-like symptoms, though more severe symptoms such as gastroenteritis and pneumonia can occur. Unlike influenza A virus and influenza B virus, influenza C virus has not been a major focus of research pertaining to antiviral drugs, vaccines, and other measures against influenza. Influenza C virus is subclassified into six genetic/antigenic lineages.
Influenza D virus has been isolated from pigs and cattle, the latter being the natural reservoir. Infection has also been observed in humans, horses, dromedary camels, and small ruminants such as goats and sheep. Influenza D virus is distantly related to influenza C virus. While cattle workers have occasionally tested positive for prior influenza D virus infection, it is not known to cause disease in humans. Influenza C virus and influenza D virus experience a slower rate of antigenic evolution than influenza A virus and influenza B virus. Because of this antigenic stability, relatively few novel lineages emerge.
Influenza virus nomenclature
Every year, millions of influenza virus samples are analysed to monitor changes in the virus' antigenic properties, and to inform the development of vaccines.
To unambiguously describe a specific isolate of virus, researchers use the internationally accepted influenza virus nomenclature, which describes, among other things, the species of animal from which the virus was isolated, and the place and year of collection. As an example – "A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)":
"A" stands for the genus of influenza (A, B, C or D).
"chicken" is the animal species the isolate was found in (note: human isolates lack this component term and are thus identified as human isolates by default)
"Nakorn-Patom/Thailand" is the place this specific virus was isolated
"CU-K2" is the laboratory reference number that identifies it from other influenza viruses isolated at the same place and year
"04" represents the year of isolation 2004
"H5" stands for the fifth of several known types of the protein hemagglutinin.
"N1" stands for the first of several known types of the protein neuraminidase.
The nomenclature for influenza B, C and D, which are less variable, is simpler. Examples are B/Santiago/29615/2020 and C/Minnesota/10/2015.
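Because the nomenclature is a simple slash-delimited format, a strain name can be broken into its fields mechanically. The Python sketch below is a minimal, hypothetical parser written against the conventions just described (the parse_influenza_name helper is not a standard tool); real strain names vary in practice, and authoritative parsing is handled by curated surveillance databases rather than ad hoc code.

import re

def parse_influenza_name(name):
    """Split a strain name such as
    'A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)' or
    'B/Santiago/29615/2020' into its components (illustrative only)."""
    # Separate an optional trailing subtype such as '(H5N1)'.
    m = re.match(r"^(?P<body>[^(]+?)\s*(?:\((?P<subtype>H\d+N\d+)\))?$", name)
    if m is None:
        raise ValueError(f"unrecognized strain name: {name!r}")
    fields = m.group("body").split("/")
    virus_type = fields[0]  # A, B, C, or D
    if len(fields) == 4:
        # Human isolates omit the host species (see note above).
        host, place = "human", fields[1]
    else:
        host = fields[1]
        place = "/".join(fields[2:-2])  # the place itself may contain '/'
    lab_ref, year = fields[-2], fields[-1]
    return {"type": virus_type, "host": host, "place": place,
            "lab_ref": lab_ref, "year": year, "subtype": m.group("subtype")}

print(parse_influenza_name("A/chicken/Nakorn-Patom/Thailand/CU-K2/04(H5N1)"))
# {'type': 'A', 'host': 'chicken', 'place': 'Nakorn-Patom/Thailand',
#  'lab_ref': 'CU-K2', 'year': '04', 'subtype': 'H5N1'}
print(parse_influenza_name("B/Santiago/29615/2020"))
# {'type': 'B', 'host': 'human', 'place': 'Santiago',
#  'lab_ref': '29615', 'year': '2020', 'subtype': None}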
Genome and structure
Influenza viruses have a negative-sense, single-stranded RNA genome that is segmented. The negative sense of the genome means it can be used as a template to synthesize messenger RNA (mRNA). Influenza A virus and influenza B virus have eight genome segments that encode 10 major proteins. Influenza C virus and influenza D virus have seven genome segments that encode nine major proteins.
Three segments encode the three subunits of an RNA-dependent RNA polymerase (RdRp) complex: PB1, a transcriptase; PB2, which recognizes 5' caps; and PA (P3 for influenza C virus and influenza D virus), an endonuclease. The M1 matrix protein and M2 proton channel share a segment, as do the non-structural protein (NS1) and the nuclear export protein (NEP). For influenza A virus and influenza B virus, hemagglutinin (HA) and neuraminidase (NA) are encoded on one segment each, whereas influenza C virus and influenza D virus encode, on one segment, a hemagglutinin-esterase fusion (HEF) protein that merges the functions of HA and NA. The final genome segment encodes the viral nucleoprotein (NP). Influenza viruses also encode various accessory proteins, such as PB1-F2 and PA-X, that are expressed through alternative open reading frames and are important in host defense suppression, virulence, and pathogenicity.
The virus particle, called a virion, is pleomorphic and varies between being filamentous, bacilliform, or spherical in shape. Clinical isolates tend to be pleomorphic, whereas strains adapted to laboratory growth typically produce spherical virions. Filamentous virions are about 250 nanometers (nm) by 80 nm, bacilliform 120–250 by 95 nm, and spherical 120 nm in diameter.
The core of the virion comprises one copy of each segment of the genome bound to NP nucleoproteins in separate ribonucleoprotein (RNP) complexes for each segment. There is a copy of the RdRp, all subunits included, bound to each RNP. The genetic material is encapsulated by a layer of M1 matrix protein which provides structural reinforcement to the outer layer, the viral envelope. The envelope comprises a lipid bilayer membrane incorporating HA and NA (or HEF) proteins extending outward from its exterior surface. HA and HEF proteins have a distinct "head" and "stalk" structure. M2 proteins form proton channels through the viral envelope that are required for viral entry and exit. Influenza B viruses contain a surface protein named NB that is anchored in the envelope, but its function is unknown.
Life cycle
The viral life cycle begins with binding to a target cell. Binding is mediated by the viral HA proteins on the surface of the envelope, which bind to cells that carry sialic acid receptors on the surface of the cell membrane. For N1 subtypes with the "G147R" mutation and for N2 subtypes, the NA protein can initiate entry. Prior to binding, NA proteins promote access to target cells by degrading mucus, which helps to remove extracellular decoy receptors that would impede access to target cells. After binding, the virus is internalized into the cell within an endosome containing the virion. The endosome is acidified by the cellular vATPase, lowering its pH, which triggers a conformational change in HA that allows fusion of the viral envelope with the endosomal membrane. At the same time, hydrogen ions diffuse into the virion through M2 ion channels, disrupting internal protein-protein interactions to release RNPs into the host cell's cytosol. The M1 protein shell surrounding the RNPs is degraded, fully uncoating them in the cytosol.
RNPs are then imported into the nucleus with the help of viral nuclear localization signals. There, the viral RNA polymerase transcribes mRNA using the genomic negative-sense strand as a template. The polymerase snatches 5' caps for viral mRNA from cellular RNA to prime mRNA synthesis, and the 3'-end of the mRNA is polyadenylated at the end of transcription. Once viral mRNA is transcribed, it is exported out of the nucleus and translated by host ribosomes in a cap-dependent manner to synthesize viral proteins. RdRp also synthesizes complementary positive-sense strands of the viral genome in a complementary RNP complex, which are then used as templates by viral polymerases to synthesize copies of the negative-sense genome. During these processes, the RdRps of avian influenza viruses (AIVs) function optimally at a higher temperature than those of mammalian influenza viruses.
Newly synthesized viral polymerase subunits and NP proteins are imported to the nucleus to further increase the rate of viral replication and to form RNPs. HA, NA, and M2 proteins are trafficked, with the aid of M1 and NEP proteins, to the cell membrane through the Golgi apparatus and inserted into the cell's membrane. Viral non-structural proteins including NS1, PB1-F2, and PA-X regulate host cellular processes to disable antiviral responses. PB1-F2 also interacts with PB1 to keep polymerases in the nucleus longer. M1 and NEP proteins localize to the nucleus during the later stages of infection, bind to viral RNPs, and mediate their export to the cytoplasm, where the RNPs migrate to the cell membrane with the aid of recycled endosomes and are bundled together into complete sets of genome segments.
Progeny viruses leave the cell by budding from the cell membrane, which is initiated by the accumulation of M1 proteins at the cytoplasmic side of the membrane. The viral genome is incorporated inside a viral envelope derived from portions of the cell membrane that have HA, NA, and M2 proteins. At the end of budding, HA proteins remain attached to cellular sialic acid until they are cleaved by the sialidase activity of NA proteins. The virion is then released from the cell. The sialidase activity of NA also cleaves any sialic acid residues from the viral surface, which helps prevent newly assembled viruses from aggregating near the cell surface, thereby improving infectivity. Similar to other aspects of influenza replication, optimal NA activity is temperature- and pH-dependent. Ultimately, the presence of large quantities of viral RNA in the cell triggers apoptosis (programmed cell death), which is initiated by cellular factors to restrict viral replication.
Antigenic drift and shift
Two key processes through which influenza viruses evolve are antigenic drift and antigenic shift. Antigenic drift occurs when an influenza virus's antigens change due to the gradual accumulation of mutations in the antigen's (HA or NA) gene. This can occur in response to evolutionary pressure exerted by the host immune response. Antigenic drift is especially common for the HA protein, in which just a few amino acid changes in the head region can constitute antigenic drift. The result is the production of novel strains that can evade pre-existing antibody-mediated immunity. Antigenic drift occurs in all influenza species but is slower in B than A and slowest in C and D. Antigenic drift is a major cause of seasonal influenza and requires that flu vaccines be updated annually. HA is the main component of inactivated vaccines, so surveillance monitors antigenic drift of this antigen among circulating strains. Antigenic evolution of human influenza viruses appears to be faster than that of influenza viruses in swine and equines. In wild birds, within-subtype antigenic variation appears to be limited but has been observed in poultry.
Antigenic shift is a sudden, drastic change in an influenza virus's antigen, usually HA. During antigenic shift, antigenically different strains that infect the same cell can reassort genome segments with each other, producing hybrid progeny. Since all influenza viruses have segmented genomes, all are capable of reassortment. Antigenic shift only occurs among influenza viruses of the same genus and most commonly occurs among influenza A viruses. In particular, reassortment is very common in AIVs, creating a large diversity of influenza viruses in birds, but is uncommon in human, equine, and canine lineages. Pigs, bats, and quails have receptors for both mammalian and avian influenza A viruses, so they are potential "mixing vessels" for reassortment. If an animal strain reassorts with a human strain, a novel strain capable of human-to-human transmission can emerge. Such reassortment events have caused pandemics, but only a limited number, so it is difficult to predict when the next will happen.
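The scale of diversity that reassortment can generate follows directly from the segmented genome: two influenza A strains co-infecting one cell can, in principle, produce any of 2^8 = 256 combinations of the eight segments. The Python sketch below simply enumerates those combinations; it is illustrative only and ignores the packaging biases and segment incompatibilities that constrain reassortment in reality.

from itertools import product

# The eight influenza A genome segments, labeled by the main protein(s)
# each encodes (see "Genome and structure" above).
SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def reassortants(parent_a="strain A", parent_b="strain B"):
    """Yield every possible assignment of the eight segments to one of
    two co-infecting parent strains (illustrative only)."""
    for choice in product((parent_a, parent_b), repeat=len(SEGMENTS)):
        yield dict(zip(SEGMENTS, choice))

print(sum(1 for _ in reassortants()))  # 2**8 = 256 possible progeny genotypes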
The World Health Organization's Global Influenza Surveillance and Response System (GISRS) tests several million specimens annually to monitor the spread and evolution of influenza viruses.
Mechanism
Transmission
People who are infected can transmit influenza viruses through breathing, talking, coughing, and sneezing, which spread respiratory droplets and aerosols that contain virus particles into the air. A person susceptible to infection can contract influenza by coming into contact with these particles. Respiratory droplets are relatively large and travel less than two meters before falling onto nearby surfaces. Aerosols are smaller and remain suspended in the air longer, so they take longer to settle and can travel further. Inhalation of aerosols can lead to infection, but most transmission is in the area about two meters around an infected person via respiratory droplets that come into contact with mucosa of the upper respiratory tract. Transmission through contact with a person, bodily fluids, or intermediate objects (fomites) can also occur, since influenza viruses can survive for hours on non-porous surfaces. If one's hands are contaminated, then touching one's face can cause infection.
Influenza is usually transmissible from one day before the onset of symptoms to 5–7 days after. In healthy adults, the virus is shed for up to 3–5 days. In children and the immunocompromised, the virus may be transmissible for several weeks. Children ages 2–17 are considered to be the primary and most efficient spreaders of influenza. Children who have not had multiple prior exposures to influenza viruses shed the virus at greater quantities and for a longer duration than other children. People at risk of exposure to influenza include health care workers, social care workers, and those who live with or care for people vulnerable to influenza. In long-term care facilities, the flu can spread rapidly. A variety of factors likely encourage influenza transmission, including lower temperature, lower absolute and relative humidity, less ultraviolet radiation from the sun, and crowding. Influenza viruses that infect the upper respiratory tract like H1N1 tend to be more mild but more transmissible, whereas those that infect the lower respiratory tract like H5N1 tend to cause more severe illness but are less contagious.
Pathophysiology
In humans, influenza viruses first cause infection by infecting epithelial cells in the respiratory tract. Illness during infection is primarily the result of lung inflammation and compromise caused by epithelial cell infection and death, combined with inflammation caused by the immune system's response to infection. Non-respiratory organs can become involved, but the mechanisms by which influenza is involved in these cases are unknown. Severe respiratory illness can be caused by multiple, non-exclusive mechanisms, including obstruction of the airways, loss of alveolar structure, loss of lung epithelial integrity due to epithelial cell infection and death, and degradation of the extracellular matrix that maintains lung structure. In particular, alveolar cell infection appears to drive severe symptoms since this results in impaired gas exchange and enables viruses to infect endothelial cells, which produce large quantities of pro-inflammatory cytokines.
Pneumonia caused by influenza viruses is characterized by high levels of viral replication in the lower respiratory tract, accompanied by a strong pro-inflammatory response called a cytokine storm. Infection with H5N1 or H7N9 especially produces high levels of pro-inflammatory cytokines. In bacterial infections, early depletion of macrophages during influenza creates a favorable environment in the lungs for bacterial growth since these white blood cells are important in responding to bacterial infection. Host mechanisms to encourage tissue repair may inadvertently allow bacterial infection. Infection also induces production of systemic glucocorticoids that can reduce inflammation to preserve tissue integrity but allow increased bacterial growth.
The pathophysiology of influenza is significantly influenced by which receptors influenza viruses bind to during entry into cells. Mammalian influenza viruses preferentially bind to sialic acids connected to the rest of the oligosaccharide by an α-2,6 link, which are most common in various cell types, such as respiratory and retinal epithelial cells. AIVs prefer sialic acids with an α-2,3 linkage, which are most common in gastrointestinal epithelial cells in birds and in the lower respiratory tract in humans. Cleavage of the HA protein into HA1, the binding subunit, and HA2, the fusion subunit, is performed by different proteases, affecting which cells can be infected. For mammalian influenza viruses and low pathogenic AIVs, cleavage is extracellular, which limits infection to cells that have the appropriate proteases, whereas for highly pathogenic AIVs, cleavage is intracellular and performed by ubiquitous proteases, which allows infection of a greater variety of cells and thereby contributes to more severe disease.
Immunology
Cells possess sensors to detect viral RNA, which can then induce interferon production. Interferons mediate expression of antiviral proteins and proteins that recruit immune cells to the infection site, and they notify nearby uninfected cells of infection. Some infected cells release pro-inflammatory cytokines that recruit immune cells to the site of infection. Immune cells control viral infection by killing infected cells and phagocytizing viral particles and apoptotic cells. An exacerbated immune response can harm the host organism through a cytokine storm. To counter the immune response, influenza viruses encode various non-structural proteins, including NS1, NEP, PB1-F2, and PA-X, that are involved in curtailing the host immune response by suppressing interferon production and host gene expression.
B cells, a type of white blood cell, produce antibodies that bind to influenza antigens HA and NA (or HEF) and other proteins to a lesser degree. Once bound to these proteins, antibodies block virions from binding to cellular receptors, neutralizing the virus. In humans, a sizeable antibody response occurs about one week after viral exposure. This antibody response is typically robust and long-lasting, especially for influenza C virus and influenza D virus. People exposed to a certain strain in childhood still possess antibodies to that strain at a reasonable level later in life, which can provide some protection to related strains. There is, however, an "original antigenic sin", in which the first HA subtype a person is exposed to influences the antibody-based immune response to future infections and vaccines.
Prevention
Vaccination
Annual vaccination is the primary and most effective way to prevent influenza and influenza-associated complications, especially for high-risk groups. Vaccines against the flu are trivalent or quadrivalent, providing protection against an H1N1 strain, an H3N2 strain, and one or two influenza B virus strains corresponding to the two influenza B virus lineages. Two types of vaccines are in use: inactivated vaccines that contain "killed" (i.e. inactivated) viruses and live attenuated influenza vaccines (LAIVs) that contain weakened viruses. There are three types of inactivated vaccines: whole virus; split virus, in which the virus is disrupted by a detergent; and subunit, which contains only the viral antigens HA and NA. Most flu vaccines are inactivated and administered via intramuscular injection. LAIVs are sprayed into the nasal cavity.
Vaccination recommendations vary by country. Some countries recommend vaccination for all people above a certain age, such as 6 months, whereas others limit recommendations to high-risk groups. Young infants cannot receive flu vaccines for safety reasons, but they can inherit passive immunity from a mother vaccinated during pregnancy. Influenza vaccination helps to reduce the probability of reassortment.
In general, influenza vaccines are only effective if there is an antigenic match between vaccine strains and circulating strains. Most commercially available flu vaccines are manufactured by propagation of influenza viruses in embryonated chicken eggs, which takes 6–8 months. Flu seasons differ in the northern and southern hemispheres, so the WHO meets twice a year, once for each hemisphere, to discuss which strains should be included, based on observations from hemagglutination inhibition assays. Other manufacturing methods include an MDCK cell culture-based inactivated vaccine and a recombinant subunit vaccine manufactured from baculovirus overexpression in insect cells.
Antiviral chemoprophylaxis
Influenza can be prevented or reduced in severity by post-exposure prophylaxis with the antiviral drugs oseltamivir, which can be taken orally by those at least three months old, and zanamivir, which can be inhaled by those at least seven years old. Chemoprophylaxis is most useful for individuals at high risk of complications and those who cannot receive the flu vaccine. Post-exposure chemoprophylaxis is only recommended if oseltamivir is taken within 48 hours of contact with a confirmed or suspected case and zanamivir within 36 hours. It is recommended for people who have not yet received a vaccine for the current flu season, for those vaccinated less than two weeks before exposure, when there is a significant mismatch between vaccine and circulating strains, or during an outbreak in a closed setting regardless of vaccination history.
Infection control
Influenza spreads in three main ways:
by direct transmission (when an infected person sneezes mucus directly into the eyes, nose or mouth of another person);
the airborne route (when someone inhales the aerosols produced by an infected person coughing, sneezing or spitting);
through hand-to-eye, hand-to-nose, or hand-to-mouth transmission, either from contaminated surfaces or from direct personal contact such as a hand-shake.
When vaccines and antiviral medications are limited, non-pharmaceutical interventions are essential to reduce transmission and spread. The lack of controlled studies and rigorous evidence of the effectiveness of some measures has hampered planning decisions and recommendations. Nevertheless, strategies endorsed by experts for all phases of flu outbreaks include hand and respiratory hygiene, self-isolation by symptomatic individuals and the use of face masks by them and their caregivers, surface disinfection, rapid testing and diagnosis, and contact tracing. In some cases, other forms of social distancing including school closures and travel restrictions are recommended.
Reasonably effective ways to reduce the transmission of influenza include good personal health and hygiene habits such as: not touching the eyes, nose or mouth; frequent hand washing (with soap and water, or with alcohol-based hand rubs); covering coughs and sneezes with a tissue or sleeve; avoiding close contact with sick people; and staying home when sick. Avoiding spitting is also recommended. Although face masks might help prevent transmission when caring for the sick, there is mixed evidence on beneficial effects in the community. Smoking raises the risk of contracting influenza, as well as producing more severe disease symptoms.
Since influenza spreads through both aerosols and contact with contaminated surfaces, surface sanitizing may help prevent some infections. Alcohol is an effective sanitizer against influenza viruses, while quaternary ammonium compounds can be combined with alcohol so that the sanitizing effect lasts longer. In hospitals, quaternary ammonium compounds and bleach are used to sanitize rooms or equipment that have been occupied by people with influenza symptoms. At home, this can be done effectively with diluted chlorine bleach.
Since influenza viruses circulate in animals such as birds and pigs, prevention of transmission from these animals is important. Water treatment, indoor raising of animals, quarantining sick animals, vaccination, and biosecurity are the primary measures used. Placing poultry houses and piggeries on high ground away from high-density farms, backyard farms, live poultry markets, and bodies of water helps to minimize contact with wild birds. Closure of live poultry markets appears to be the most effective measure and has been shown to be effective at controlling the spread of H5N1, H7N9, and H9N2. Other biosecurity measures include cleaning and disinfecting facilities and vehicles, banning visits to poultry farms, not bringing birds intended for slaughter back to farms, changing clothes, disinfecting foot baths, and treating food and water.
If live poultry markets are not closed, then "clean days", when unsold poultry is removed and facilities are disinfected, and "no carry-over" policies, which eliminate infectious material before new poultry arrive, can be used to reduce the spread of influenza viruses. If a novel influenza virus has breached these biosecurity measures, rapid detection to stamp it out via quarantining, decontamination, and culling may be necessary to prevent the virus from becoming endemic. Vaccines exist for avian H5, H7, and H9 subtypes and are used in some countries. In China, for example, vaccination of domestic birds against H7N9 successfully limited its spread, indicating that vaccination may be an effective strategy if used in combination with other measures to limit transmission. In pigs and horses, management of influenza depends on vaccination together with biosecurity.
Diagnosis
Diagnosis based on symptoms is fairly accurate in otherwise healthy people during seasonal epidemics; influenza should be suspected in cases of pneumonia, acute respiratory distress syndrome (ARDS), or sepsis, or if encephalitis, myocarditis, or breakdown of muscle tissue occurs. Because influenza is similar to other viral respiratory tract illnesses, laboratory diagnosis is necessary for confirmation. Common sample collection methods for testing include nasal and throat swabs. Samples may be taken from the lower respiratory tract if infection has cleared the upper but not the lower respiratory tract. Influenza testing is recommended for anyone hospitalized with symptoms resembling influenza during flu season or who is connected to an influenza case. For severe cases, earlier diagnosis improves patient outcomes. Diagnostic methods that can identify influenza include viral cultures, antibody- and antigen-detecting tests, and nucleic acid-based tests.
Viruses can be grown in a culture of mammalian cells or embryonated eggs for 3–10 days to monitor cytopathic effect. Final confirmation can then be done via antibody staining, hemadsorption using red blood cells, or immunofluorescence microscopy. Shell vial cultures, which can identify infection via immunostaining before a cytopathic effect appears, are more sensitive than traditional cultures with results in 1–3 days. Cultures can be used to characterize novel viruses, observe sensitivity to antiviral drugs, and monitor antigenic drift, but they are relatively slow and require specialized skills and equipment.
Serological assays can be used to detect an antibody response to influenza after natural infection or vaccination. Common serological assays include hemagglutination inhibition assays that detect HA-specific antibodies, virus neutralization assays that check whether antibodies have neutralized the virus, and enzyme-linked immunosorbent assays (ELISAs). These methods tend to be relatively inexpensive and fast but are less reliable than nucleic acid-based tests.
Direct fluorescent or immunofluorescent antibody (DFA/IFA) tests involve staining respiratory epithelial cells in samples with fluorescently labeled influenza-specific antibodies, followed by examination under a fluorescence microscope. They can differentiate between influenza A virus and influenza B virus but cannot subtype influenza A virus. Rapid influenza diagnostic tests (RIDTs) are a simple, low-cost way of obtaining assay results and produce results in less than 30 minutes, so they are commonly used, but they cannot distinguish between influenza A virus and influenza B virus or between influenza A virus subtypes, and they are not as sensitive as nucleic acid-based tests.
Nucleic acid-based tests (NATs) amplify and detect viral nucleic acid. Most of these tests take a few hours, but rapid molecular assays are as fast as RIDTs. Among NATs, reverse transcription polymerase chain reaction (RT-PCR) is the most traditional and is considered the gold standard for diagnosing influenza because it is fast and can subtype influenza A virus, though it is relatively expensive and more prone to false positives than cultures. Other NATs that have been used include loop-mediated isothermal amplification-based assays, simple amplification-based assays, and nucleic acid sequence-based amplification. Nucleic acid sequencing methods can identify infection by obtaining the nucleic acid sequence of viral samples, which identifies the virus and any antiviral drug resistance. The traditional method is Sanger sequencing, but it has been largely replaced by next-generation methods that have greater sequencing speed and throughput.
Management
Treatment in cases of mild or moderate illness is supportive and includes anti-fever medications such as acetaminophen and ibuprofen, adequate fluid intake to avoid dehydration, and rest. Cough drops and throat sprays may be beneficial for sore throat. It is recommended to avoid alcohol and tobacco use while ill. Aspirin is not recommended to treat influenza in children due to an elevated risk of developing Reye syndrome. Corticosteroids are not recommended except when treating septic shock or an underlying medical condition, such as chronic obstructive pulmonary disease or asthma exacerbation, since they are associated with increased mortality. If a secondary bacterial infection occurs, then antibiotics may be necessary.
Antivirals
Antiviral drugs are primarily used to treat severely ill patients, especially those with compromised immune systems. Antivirals are most effective when started in the first 48 hours after symptoms appear. Later administration may still be beneficial for those who have underlying immune defects, those with more severe symptoms, or those who have a higher risk of developing complications if these individuals are still shedding the virus. Antiviral treatment is also recommended if a person is hospitalized with suspected influenza instead of waiting for test results to return and if symptoms are worsening. Most antiviral drugs against influenza fall into two categories: neuraminidase (NA) inhibitors and M2 inhibitors. Baloxavir marboxil is a notable exception, which targets the endonuclease activity of the viral RNA polymerase and can be used as an alternative to NA and M2 inhibitors for influenza A virus and influenza B virus.
NA inhibitors target the enzymatic activity of the NA protein, mimicking the binding of sialic acid in the active site of NA on influenza A virus and influenza B virus virions so that viral release from infected cells and the rate of viral replication are impaired. NA inhibitors include oseltamivir, which is consumed orally in a prodrug form and converted to its active form in the liver, and zanamivir, a powder that is inhaled nasally. Oseltamivir and zanamivir are effective for prophylaxis and post-exposure prophylaxis, and research overall indicates that NA inhibitors are effective at reducing rates of complications, hospitalization, and mortality as well as the duration of illness. Additionally, the earlier NA inhibitors are provided, the better the outcome, though late administration can still be beneficial in severe cases. Other NA inhibitors include laninamivir and peramivir, the latter of which can be used as an alternative to oseltamivir for people who cannot tolerate or absorb it.
The adamantanes amantadine and rimantadine are orally administered drugs that block the influenza virus's M2 ion channel, preventing viral uncoating. These drugs are only functional against influenza A virus but are no longer recommended because of widespread resistance among influenza A viruses. Adamantane resistance first emerged in H3N2 in 2003, becoming worldwide by 2008. Oseltamivir resistance is no longer widespread because the 2009 pandemic H1N1 strain (H1N1 pdm09), which is resistant to adamantanes, seemingly replaced resistant strains in circulation. Since the 2009 pandemic, oseltamivir resistance has mainly been observed in patients undergoing therapy, especially the immunocompromised and young children. Oseltamivir resistance is usually reported in H1N1 but has been reported less commonly in H3N2 and influenza B viruses. Because of this, oseltamivir is recommended as the first drug of choice for immunocompetent people, whereas for the immunocompromised, oseltamivir is recommended against H3N2 and influenza B virus, and zanamivir against H1N1 pdm09. Zanamivir resistance is observed less frequently, and resistance to peramivir and baloxavir marboxil is possible.
Prognosis
In healthy individuals, influenza infection is usually self-limiting and rarely fatal. Symptoms usually last for 2–8 days. Influenza can cause people to miss work or school, and it is associated with decreased job performance and, in older adults, reduced independence. Fatigue and malaise may last for several weeks after recovery, and healthy adults may experience pulmonary abnormalities that can take several weeks to resolve. Complications and mortality primarily occur in high-risk populations and those who are hospitalized. Severe disease and mortality are usually attributable to pneumonia from the primary viral infection or a secondary bacterial infection, which can progress to ARDS.
Other respiratory complications that may occur include sinusitis, bronchitis, bronchiolitis, excess fluid buildup in the lungs, and exacerbation of chronic bronchitis and asthma. Middle ear infection and croup may occur, most commonly in children. Secondary infection with S. aureus, observed primarily in children, can cause toxic shock syndrome after influenza, with hypotension, fever, and reddening and peeling of the skin. Complications affecting the cardiovascular system are rare and include pericarditis, fulminant myocarditis with a fast, slow, or irregular heartbeat, and exacerbation of pre-existing cardiovascular disease. Inflammation or swelling of muscles accompanied by muscle tissue breaking down occurs rarely, usually in children, presenting as extreme tenderness and muscle pain in the legs and a reluctance to walk for 2–3 days.
Influenza can affect pregnancy, including causing smaller neonatal size, increased risk of premature birth, and an increased risk of child death shortly before or after birth. Neurological complications have been associated with influenza on rare occasions, including aseptic meningitis, encephalitis, disseminated encephalomyelitis, transverse myelitis, and Guillain–Barré syndrome. Additionally, febrile seizures and Reye syndrome can occur, most commonly in children. Influenza-associated encephalopathy can occur directly from central nervous system infection, arising from the presence of the virus in the blood, and presents as sudden onset of fever with convulsions, followed by rapid progression to coma. An atypical form of encephalitis called encephalitis lethargica, characterized by headache, drowsiness, and coma, may rarely occur sometime after infection. In survivors of influenza-associated encephalopathy, neurological defects may occur. Rarely, in severe cases and primarily in children, the immune system may dramatically overproduce white blood cells that release cytokines, causing severe inflammation.
People who are at least 65 years of age, due to a weakened immune system from aging or a chronic illness, are a high-risk group for developing complications, as are children less than one year of age and children who have not been previously exposed to influenza viruses multiple times. Pregnant women are at an elevated risk, which increases by trimester and lasts up to two weeks after childbirth. Obesity, in particular a body mass index greater than 35–40, is associated with greater amounts of viral replication, increased severity of secondary bacterial infection, and reduced vaccination efficacy. People who have underlying health conditions are also considered at-risk, including those who have congenital or chronic heart problems or lung (e.g. asthma), kidney, liver, blood, neurological, or metabolic (e.g. diabetes) disorders, as are people who are immunocompromised from chemotherapy, asplenia, prolonged steroid treatment, splenic dysfunction, or HIV infection. Tobacco use, including past use, places a person at risk. The role of genetics in influenza is not well researched, but it may be a factor in influenza mortality.
Epidemiology
Influenza is typically characterized by seasonal epidemics and sporadic pandemics. Most of the burden of influenza is a result of flu seasons caused by influenza A virus and influenza B virus. Among influenza A virus subtypes, H1N1 and H3N2 circulate in humans and are responsible for seasonal influenza. Cases disproportionately occur in children, but most severe cases occur among the elderly, the very young, and the immunocompromised. In a typical year, influenza viruses infect 5–15% of the global population, causing 3–5 million cases of severe illness annually and accounting for 290,000–650,000 deaths each year due to respiratory illness. 5–10% of adults and 20–30% of children contract influenza each year. The reported number of influenza cases is usually much lower than the actual number.
During seasonal epidemics, it is estimated that about 80% of otherwise healthy people who have a cough or sore throat have the flu. Approximately 30–40% of people hospitalized for influenza develop pneumonia, and about 5% of all severe pneumonia cases in hospitals are due to influenza, which is also the most common cause of ARDS in adults. In children, influenza and respiratory syncytial virus are the two most common causes of ARDS. About 3–5% of children each year develop otitis media due to influenza. Adults who develop organ failure from influenza and children who have high Pediatric Index of Mortality (PIM) scores and acute renal failure have higher rates of mortality. During seasonal influenza, mortality is concentrated in the very young and the elderly, whereas during flu pandemics, young adults are often affected at a high rate.
In temperate regions, the number of influenza cases varies from season to season. Lower vitamin D levels, presumably due to less sunlight, along with lower humidity, lower temperature, and minor changes in virus proteins caused by antigenic drift, contribute to annual epidemics that peak during the winter season. In the northern hemisphere, this is from October to May (more narrowly December to April), and in the southern hemisphere, this is from May to October (more narrowly June to September). There are therefore two distinct influenza seasons every year in temperate regions, one in the northern hemisphere and one in the southern hemisphere. In tropical and subtropical regions, seasonality is more complex and appears to be affected by various climatic factors such as minimum temperature, hours of sunshine, maximum rainfall, and high humidity. Influenza may therefore occur year-round in these regions. Influenza epidemics in modern times tend to start in the eastern or southern hemisphere, with Asia being a key reservoir.
Influenza A virus and influenza B virus co-circulate, so they share the same patterns of transmission. The seasonality of influenza C virus, however, is poorly understood. Influenza C virus infection is most common in children under the age of two, and by adulthood most people have been exposed to it. Influenza C virus-associated hospitalization most commonly occurs in children under the age of three and is frequently accompanied by co-infection with another virus or a bacterium, which may increase the severity of disease. When considering all hospitalizations for respiratory illness among young children, influenza C virus appears to account for only a small percentage of such cases. Large outbreaks of influenza C virus infection can occur, so incidence varies significantly.
Outbreaks of influenza caused by novel influenza viruses are common. Depending on the level of pre-existing immunity in the population, novel influenza viruses can spread rapidly and cause pandemics with millions of deaths. These pandemics, in contrast to seasonal influenza, are caused by antigenic shifts involving animal influenza viruses. To date, all known flu pandemics have been caused by influenza A viruses, and they follow the same pattern of spreading from an origin point to the rest of the world over the course of multiple waves in a year. Pandemic strains tend to be associated with higher rates of pneumonia in otherwise healthy individuals. Generally after each influenza pandemic, the pandemic strain continues to circulate as the cause of seasonal influenza, replacing prior strains. From 1700 to 1889, influenza pandemics occurred about once every 50–60 years. Since then, pandemics have occurred about once every 10–50 years, so they may be getting more frequent over time.
History
The first influenza epidemic may have occurred around 6,000 BC in China, and possible descriptions of influenza exist in Greek writings from the 5th century BC. In both 1173–1174 AD and 1387 AD, epidemics occurred across Europe that were named "influenza". Whether these epidemics or others were caused by influenza is unclear since there was then no consistent naming pattern for epidemic respiratory diseases, and "influenza" did not become clearly associated with respiratory disease until centuries later. Influenza may have been brought to the Americas as early as 1493, when an epidemic disease resembling influenza killed most of the population of the Antilles.
The first convincing record of an influenza pandemic was in 1510. It began in East Asia before spreading to North Africa and then Europe. Following the pandemic, seasonal influenza occurred, with subsequent pandemics in 1557 and 1580. The flu pandemic in 1557 was potentially the first time influenza was connected to miscarriage and death of pregnant women. The 1580 influenza pandemic originated in Asia during summer, spread to Africa, then Europe, and finally America. By the end of the 16th century, influenza was beginning to become understood as a specific, recognizable disease with epidemic and endemic forms. In 1648, it was discovered that horses also experience influenza.
Influenza data after 1700 is more accurate, so it is easier to identify flu pandemics after this point. The first flu pandemic of the 18th century started in 1729 in Russia in spring, spreading worldwide over the course of three years with distinct waves, the later ones being more lethal. Another flu pandemic occurred in 1781–1782, starting in China in autumn. From this pandemic, influenza became associated with sudden outbreaks of febrile illness. The next flu pandemic was from 1830 to 1833, beginning in China in winter. This pandemic had a high attack rate, but the mortality rate was low.
A minor influenza pandemic occurred from 1847 to 1851, at the same time as the third cholera pandemic, and was the first flu pandemic to occur while vital statistics were being recorded, so influenza mortality was clearly documented for the first time. Fowl plague (now recognized as highly pathogenic avian influenza) was recognized in 1878 and was soon linked to transmission to humans. By the time of the 1889 pandemic, which may have been caused by an H2N2 strain, the flu had become an easily recognizable disease.
The microbial agent responsible for influenza was incorrectly identified in 1892 by R. F. J. Pfeiffer as the bacterial species Haemophilus influenzae, which retains "influenza" in its name. From 1901 to 1903, Italian and Austrian researchers were able to show that avian influenza, then called "fowl plague", was caused by a microscopic agent smaller than bacteria, by using filters with pores too small for bacteria to pass through. The fundamental differences between viruses and bacteria, however, were not yet fully understood.
From 1918 to 1920, the Spanish flu pandemic became the most devastating influenza pandemic and one of the deadliest pandemics in history. The pandemic, caused by an H1N1 strain of influenza A, likely began in the United States before spreading worldwide via soldiers during and after the First World War. The initial wave in the first half of 1918 was relatively minor and resembled past flu pandemics, but the second wave later that year had a much higher mortality rate. A third wave with lower mortality occurred in many places a few months after the second. By the end of 1920, it is estimated that about a third to half of all people in the world had been infected, with tens of millions of deaths, disproportionately young adults. During the 1918 pandemic, the respiratory route of transmission was clearly identified and influenza was shown to be caused by a "filter passer", not a bacterium, but there remained a lack of agreement about influenza's cause for another decade and research on influenza declined. After the pandemic, H1N1 circulated in humans in seasonal form until the next pandemic.
In 1931, Richard Shope published three papers identifying a virus as the cause of swine influenza, a then newly recognized disease among pigs that was characterized during the second wave of the 1918 pandemic. Shope's research reinvigorated research on human influenza, and many advances in virology, serology, immunology, experimental animal models, vaccinology, and immunotherapy have since arisen from influenza research. Just two years after influenza viruses were discovered, in 1933, influenza A virus was identified as the agent responsible for human influenza. Subtypes of influenza A virus were discovered throughout the 1930s, and influenza B virus was discovered in 1940.
During the Second World War, the US government worked on developing inactivated vaccines for influenza, resulting in the first influenza vaccine being licensed in 1945 in the United States. Influenza C virus was discovered two years later in 1947. In 1955, avian influenza was confirmed to be caused by influenza A virus. Four influenza pandemics have occurred since WWII. The first of these was the Asian flu from 1957 to 1958, caused by an H2N2 strain and beginning in China's Yunnan province. The number of deaths probably exceeded one million, mostly among the very young and very old. This was the first flu pandemic to occur in the presence of a global surveillance system and laboratories able to study the novel influenza virus. After the pandemic, H2N2 was the influenza A virus subtype responsible for seasonal influenza. The first antiviral drug against influenza, amantadine, was approved in 1966, with additional antiviral drugs being used since the 1990s.
In 1968, H3N2 was introduced into humans through a reassortment between an avian H3N2 strain and an H2N2 strain that was circulating in humans. The novel H3N2 strain emerged in Hong Kong and spread worldwide, causing the Hong Kong flu pandemic, which resulted in 500,000–2,000,000 deaths. This was the first pandemic to spread significantly by air travel. H2N2 and H3N2 co-circulated after the pandemic until 1971, when H2N2 waned in prevalence and was completely replaced by H3N2. In 1977, H1N1 reemerged in humans, possibly after it was released from a freezer in a laboratory accident, and caused a pseudo-pandemic. This H1N1 strain was antigenically similar to the H1N1 strains that circulated prior to 1957. Since 1977, both H1N1 and H3N2 have circulated in humans as part of seasonal influenza. In 1980, the classification system used to subtype influenza viruses was introduced.
At some point, influenza B virus diverged into two strains, named the B/Victoria-like and B/Yamagata-like lineages, both of which have been circulating in humans since 1983.
In 1996, a highly pathogenic H5N1 subtype of influenza A was detected in geese in Guangdong, China, and a year later emerged in poultry in Hong Kong, gradually spreading worldwide from there. A small outbreak of human H5N1 infections occurred in Hong Kong at that time, and sporadic human cases have occurred since 1997, carrying a high case fatality rate.
The most recent flu pandemic was the 2009 swine flu pandemic, which originated in Mexico and resulted in hundreds of thousands of deaths. It was caused by a novel H1N1 strain that was a reassortment of human, swine, and avian influenza viruses. The 2009 pandemic had the effect of replacing prior H1N1 strains in circulation with the novel strain but not any other influenza viruses. Consequently, H1N1, H3N2, and both influenza B virus lineages have been in circulation in seasonal form since the 2009 pandemic.
In 2011, influenza D virus was discovered in pigs in Oklahoma, USA, and cattle were later identified as the primary reservoir of influenza D virus.
In the same year, avian H7N9 was detected in China and began to cause human infections in 2013, starting in Shanghai and Anhui and remaining mostly in China. Highly pathogenic H7N9 emerged sometime in 2016 and has since occasionally infected humans. Other avian influenza viruses have less commonly infected humans since the 1990s, including H5N1, H5N5, H5N6, H5N8, H6N1, H7N2, H7N7, and H10N7, and have begun to spread throughout much of the world since the 2010s. Future flu pandemics, which may be caused by an influenza virus of avian origin, are viewed as almost inevitable, and increased globalization has made it easier for a pandemic virus to spread, so there are continual efforts to prepare for future pandemics and improve the prevention and treatment of influenza.
Etymology
The word influenza comes from the Italian word influenza, from medieval Latin influentia, originally meaning 'visitation' or 'influence'. Terms such as influenza di freddo, meaning 'influence of the cold', and influenza di stelle, meaning 'influence of the stars', are attested from the 14th century. The latter referred to the disease's cause, which at the time was ascribed by some to unfavorable astrological conditions. As early as 1504, influenza began to mean a 'visitation' or 'outbreak' of any disease affecting many people in a single place at once. During an outbreak of influenza in 1743 that started in Italy and spread throughout Europe, the word reached the English language and was anglicized in pronunciation. Since the mid-1800s, influenza has also been used to refer to severe colds. The shortened form of the word, "flu", is first attested in 1839 as flue, with the spelling flu confirmed in 1893. Other names that have been used for influenza include epidemic catarrh, la grippe from French, sweating sickness, and, especially when referring to the 1918 pandemic strain, Spanish fever.
In animals
Birds
Aquatic birds such as ducks, geese, shorebirds, and gulls are the primary reservoir of influenza A viruses (IAVs).
Because of the impact of avian influenza on economically important chicken farms, a classification system was devised in 1981 which divided avian virus strains into either highly pathogenic (and therefore potentially requiring vigorous control measures) or low pathogenic. The test for this is based solely on the effect on chickens: a virus strain is highly pathogenic avian influenza (HPAI) if 75% or more of chickens die after being deliberately infected with it. The alternative classification, low pathogenic avian influenza (LPAI), produces mild or no symptoms. This classification system has since been modified to take into account the structure of the virus's haemagglutinin protein. At the genetic level, an AIV can be identified as an HPAI virus if the HA protein has a multibasic cleavage site, encoded by additional basic residues in the HA gene. Other species of birds, especially water birds, can become infected with HPAI virus without experiencing severe symptoms and can spread the infection over large distances; the exact symptoms depend on the species of bird and the strain of virus. Classification of an avian virus strain as HPAI or LPAI does not predict how serious the disease might be if it infects humans or other mammals.
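The two-pronged rule above amounts to a simple decision procedure. The following is a minimal illustrative sketch, not an official assay: the function name and the mortality-fraction parameter are assumptions introduced here, and real pathotyping also uses standardized indices not shown.

```python
def classify_aiv_pathotype(chicken_mortality: float,
                           multibasic_ha_cleavage_site: bool) -> str:
    """Classify an avian influenza virus strain as HPAI or LPAI.

    Encodes the two criteria described in the text: death of at least
    75% of deliberately infected chickens, or (at the genetic level)
    a multibasic cleavage site in the HA protein.
    """
    if chicken_mortality >= 0.75 or multibasic_ha_cleavage_site:
        return "HPAI"  # highly pathogenic avian influenza
    return "LPAI"      # low pathogenic avian influenza


# A strain killing 80% of inoculated chickens is HPAI; a strain killing
# 10% with no multibasic cleavage site is LPAI.
print(classify_aiv_pathotype(0.80, False))  # HPAI
print(classify_aiv_pathotype(0.10, False))  # LPAI
```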
Symptoms of HPAI infection in chickens include lack of energy and appetite, decreased egg production, soft-shelled or misshapen eggs, swelling of the head, comb, wattles, and hocks, purple discoloration of wattles, combs, and legs, nasal discharge, coughing, sneezing, incoordination, and diarrhea; birds infected with an HPAI virus may also die suddenly without any signs of infection. Notable HPAI viruses include influenza A (H5N1) and A (H7N9). HPAI viruses have been a major disease burden in the 21st century, resulting in the death of large numbers of birds. In H7N9's case, some circulating strains were originally low pathogenic but became highly pathogenic by mutating to acquire the HA multibasic cleavage site. Avian H9N2 is also of concern because, although it is low pathogenic, it is a common donor of genes to H5N1 and H7N9 during reassortment.
Migratory birds can spread influenza across long distances. For example, in 2005 an H5N1 strain infected birds at Qinghai Lake, China, a stopover and breeding site for many migratory birds, and subsequently spread to more than 20 countries across Asia, Europe, and the Middle East. AIVs can be transmitted from wild birds to domestic free-range ducks and in turn to poultry through contaminated water, aerosols, and fomites. Ducks therefore act as key intermediates between wild and domestic birds. Transmission to poultry typically occurs in backyard farming and live animal markets where multiple species interact with each other. From there, AIVs can spread to poultry farms in the absence of adequate biosecurity. Among poultry, HPAI transmission occurs through aerosols and contaminated feces, cages, feed, and dead animals. Back-transmission of HPAI viruses from poultry to wild birds has occurred and is implicated in mass die-offs and intercontinental spread.
AIVs have occasionally infected humans through aerosols, fomites, and contaminated water. Direct transmission from wild birds is rare. Instead, most transmission involves domestic poultry, mainly chickens, ducks, and geese but also a variety of other birds such as guinea fowl, partridge, pheasants, and quails. The primary risk factor for infection with AIVs is exposure to birds in farms and live poultry markets. Typically, infection with an AIV has an incubation period of 3–5 days but can be up to 9 days. H5N1 and H7N9 cause severe lower respiratory tract illness, whereas other AIVs such as H9N2 cause a milder upper respiratory tract illness, commonly with conjunctivitis. Limited transmission of avian H2, H5-7, H9, and H10 subtypes from one person to another through respiratory droplets, aerosols, and fomites has occurred, but sustained human-to-human transmission of AIVs has not occurred.
Pigs
Influenza in pigs is a respiratory disease similar to influenza in humans and is found worldwide. Asymptomatic infections are common. Symptoms typically appear 1–3 days after infection and include fever, lethargy, anorexia, weight loss, labored breathing, coughing, sneezing, and nasal discharge. In sows, pregnancy may be aborted. Complications include secondary infections and potentially fatal bronchopneumonia. Pigs become contagious within a day of infection and typically spread the virus for 7–10 days, so it can spread rapidly within a herd. Pigs usually recover within 3–7 days after symptoms appear. Prevention and control measures include inactivated vaccines and culling infected herds. Influenza A virus subtypes H1N1, H1N2, and H3N2 are usually responsible for swine flu.
Some influenza A viruses can be transmitted via aerosols from pigs to humans and vice versa. Pigs, along with bats and quails, are recognized as mixing vessels of influenza viruses because they have both α-2,3 and α-2,6 sialic acid receptors in their respiratory tract. Consequently, both avian and mammalian influenza viruses can infect pigs. If co-infection occurs, reassortment is possible. A notable example of this was the reassortment of a swine, avian, and human influenza virus that caused the 2009 flu pandemic. Spillover events from humans to pigs appear to be more common than from pigs to humans.
Other animals
Influenza viruses have been found in many other animals, including cattle, horses, dogs, cats, and marine mammals. Nearly all influenza A viruses are apparently descended from ancestral viruses in birds. The exceptions are the bat influenza-like viruses, which have an uncertain origin. These bat viruses have HA and NA subtypes H17, H18, N10, and N11. H17N10 and H18N11 are unable to reassort with other influenza A viruses, but they are still able to replicate in other mammals.
Equine influenza A viruses include H7N7 and two lineages of H3N8. H7N7, however, has not been detected in horses since the late 1970s, so it may have become extinct in horses. H3N8 in equines spreads via aerosols and causes respiratory illness. Equine H3N8 preferentially binds to α-2,3 sialic acids, so horses are usually considered dead-end hosts, but transmission to dogs and camels has occurred, raising concerns that horses may be mixing vessels for reassortment. In canines, the only influenza A viruses in circulation are equine-derived H3N8 and avian-derived H3N2. Canine H3N8 has not been observed to reassort with other subtypes. H3N2 has a much broader host range and can reassort with H1N1 and H5N1. An isolated case of H6N1, likely from a chicken, was found infecting a dog, so other AIVs may emerge in canines.
A wide range of other mammals have been affected by avian influenza A viruses, generally due to eating birds which had been infected. There have been instances where transmission of the disease between mammals, including seals and cows, may have occurred. Various mutations have been identified that are associated with AIVs adapting to mammals. Since HA proteins vary in which sialic acids they bind to, mutations in the HA receptor binding site can allow AIVs to infect mammals. Other adaptive mutations affect which sialic acids NA proteins cleave, and a mutation in the PB2 polymerase subunit improves tolerance of the lower temperatures in mammalian respiratory tracts and enhances RNP assembly by stabilizing NP and PB2 binding.
Influenza B virus is mainly found in humans but has also been detected in pigs, dogs, horses, and seals. Likewise, influenza C virus primarily infects humans but has been observed in pigs, dogs, cattle, and dromedary camels. Influenza D virus causes an influenza-like illness in pigs but its impact in its natural reservoir, cattle, is relatively unknown. It may cause respiratory disease resembling human influenza on its own, or it may be part of a bovine respiratory disease (BRD) complex with other pathogens during co-infection. BRD is a concern for the cattle industry, so influenza D virus' possible involvement in BRD has led to research on vaccines for cattle that can provide protection against influenza D virus. Two antigenic lineages are in circulation: D/swine/Oklahoma/1334/2011 (D/OK) and D/bovine/Oklahoma/660/2013 (D/660).
| Biology and health sciences | Illness and injury | null |
118212 | https://en.wikipedia.org/wiki/Staphylococcus%20aureus | Staphylococcus aureus | Staphylococcus aureus is a gram-positive spherically shaped bacterium, a member of the Bacillota, and is a usual member of the microbiota of the body, frequently found in the upper respiratory tract and on the skin. It is often positive for catalase and nitrate reduction and is a facultative anaerobe, meaning that it can grow without oxygen. Although S. aureus usually acts as a commensal of the human microbiota, it can also become an opportunistic pathogen, being a common cause of skin infections including abscesses, respiratory infections such as sinusitis, and food poisoning. Pathogenic strains often promote infections by producing virulence factors such as potent protein toxins, and the expression of a cell-surface protein that binds and inactivates antibodies. S. aureus is one of the leading pathogens for deaths associated with antimicrobial resistance and the emergence of antibiotic-resistant strains, such as methicillin-resistant S. aureus (MRSA). The bacterium is a worldwide problem in clinical medicine. Despite much research and development, no vaccine for S. aureus has been approved.
An estimated 21% to 30% of the human population are long-term carriers of S. aureus, which can be found as part of the normal skin microbiota, in the nostrils, and as a normal inhabitant of the lower reproductive tract of females. S. aureus can cause a range of illnesses, from minor skin infections, such as pimples, impetigo, boils, cellulitis, folliculitis, carbuncles, scalded skin syndrome, and abscesses, to life-threatening diseases such as pneumonia, meningitis, osteomyelitis, endocarditis, toxic shock syndrome, bacteremia, and sepsis. It is still one of the five most common causes of hospital-acquired infections and is often the cause of wound infections following surgery. Each year, around 500,000 hospital patients in the United States contract a staphylococcal infection, chiefly by S. aureus. Up to 50,000 deaths each year in the U.S. are linked to staphylococcal infection.
History
Discovery
In 1880, Alexander Ogston, a Scottish surgeon, discovered that Staphylococcus can cause wound infections after noticing groups of bacteria in pus from a surgical abscess during a procedure he was performing. He named it Staphylococcus after its clustered appearance evident under a microscope. Then, in 1884, German scientist Friedrich Julius Rosenbach identified Staphylococcus aureus, discriminating and separating it from Staphylococcus albus, a related bacterium. In the early 1930s, doctors began to use a more streamlined test to detect the presence of an S. aureus infection by means of coagulase testing, which enables detection of an enzyme produced by the bacterium. Prior to the 1940s, S. aureus infections were fatal in the majority of patients. However, doctors discovered that the use of penicillin could cure S. aureus infections. Unfortunately, by the end of the 1940s, penicillin resistance became widespread amongst this bacterium population and outbreaks of the resistant strain began to occur.
Evolution
Staphylococcus aureus can be sorted into ten dominant human lineages. There are numerous minor lineages as well, but these are not seen in the population as often. Genomes of bacteria within the same lineage are mostly conserved, with the exception of mobile genetic elements. Mobile genetic elements that are common in S. aureus include bacteriophages, pathogenicity islands, plasmids, transposons, and staphylococcal cassette chromosomes. These elements have enabled S. aureus to continually evolve and gain new traits. There is a great deal of genetic variation within the S. aureus species. A study by Fitzgerald et al. (2001) revealed that approximately 22% of the S. aureus genome is non-coding and thus can differ from bacterium to bacterium. An example of this difference is seen in the species' virulence. Only a few strains of S. aureus are associated with infections in humans. This demonstrates that there is a large range of infectious ability within the species.
It has been proposed that one possible reason for the great deal of heterogeneity within the species could be due to its reliance on heterogeneous infections. This occurs when multiple different types of S. aureus cause an infection within a host. The different strains can secrete different enzymes or bring different antibiotic resistances to the group, increasing its pathogenic ability. Thus, there is a need for a large number of mutations and acquisitions of mobile genetic elements.
Another notable evolutionary process within the S. aureus species is its co-evolution with its human hosts. Over time, this parasitic relationship has led to the bacterium's ability to be carried in the nasopharynx of humans without causing symptoms or infection. This allows it to be passed throughout the human population, increasing its fitness as a species. However, only approximately 50% of the human population are carriers of S. aureus, with 20% as continuous carriers and 30% as intermittent. This leads scientists to believe that there are many factors that determine whether S. aureus is carried asymptomatically in humans, including factors that are specific to an individual person. According to a 1995 study by Hofman et al., these factors may include age, sex, diabetes, and smoking. They also determined some genetic variations in humans that lead to an increased ability for S. aureus to colonize, notably a polymorphism in the glucocorticoid receptor gene that results in larger corticosteroid production. In conclusion, there is evidence that any strain of this bacterium can become invasive, as this is highly dependent upon human factors.
Though S. aureus has quick reproductive and micro-evolutionary rates, there are multiple barriers that prevent evolution within the species. One such barrier is agr, a global accessory gene regulator within the bacteria. This regulator has been linked to the virulence level of the bacteria. Loss-of-function mutations within this gene have been found to increase the fitness of the bacterium containing it. Thus, S. aureus must make a trade-off to increase its success as a species, exchanging reduced virulence for increased drug resistance. Another barrier to evolution is the Sau1 type I restriction modification (RM) system. This system protects the bacterium from foreign DNA by digesting it. Exchange of DNA within the same lineage is not blocked, since lineage members have the same enzymes and the RM system does not recognize the new DNA as foreign, but transfer between different lineages is blocked.
Microbiology
Staphylococcus aureus (from Greek staphylē, 'bunch of grapes', and kokkos, 'granule', and Latin aureus, 'golden') is a facultative anaerobic, gram-positive coccal (round) bacterium also known as "golden staph" and "oro staphira". S. aureus is nonmotile and does not form spores. In medical literature, the bacterium is often referred to as S. aureus, Staph aureus, or Staph a. S. aureus appears as staphylococci (grape-like clusters) when viewed through a microscope, and has large, round, golden-yellow colonies, often with hemolysis, when grown on blood agar plates. S. aureus reproduces asexually by binary fission. Complete separation of the daughter cells is mediated by S. aureus autolysin; in its absence or upon targeted inhibition, the daughter cells remain attached to one another and appear as clusters.
Staphylococcus aureus is catalase-positive (meaning it can produce the enzyme catalase). Catalase converts hydrogen peroxide () to water and oxygen. Catalase-activity tests are sometimes used to distinguish staphylococci from enterococci and streptococci. Previously, S. aureus was differentiated from other staphylococci by the coagulase test. However, not all S. aureus strains are coagulase-positive and incorrect species identification can impact effective treatment and control measures.
Natural genetic transformation is a reproductive process involving DNA transfer from one bacterium to another through the intervening medium, and the integration of the donor sequence into the recipient genome by homologous recombination. S. aureus was found to be capable of natural genetic transformation, but only at low frequency under the experimental conditions employed. Further studies suggested that the development of competence for natural genetic transformation may be substantially higher under appropriate conditions, yet to be discovered.
Role in health
In humans, S. aureus can be present in the upper respiratory tract, gut mucosa, and skin as a member of the normal microbiota. However, because S. aureus can cause disease under certain host and environmental conditions, it is characterized as a pathobiont.
Role in disease
While S. aureus usually acts as a commensal bacterium, asymptomatically colonizing about 30% of the human population, it can sometimes cause disease. In particular, S. aureus is one of the most common causes of bacteremia and infective endocarditis. Additionally, it can cause various skin and soft-tissue infections, particularly when skin or mucosal barriers have been breached.
Staphylococcus aureus infections can spread through contact with pus from an infected wound, skin-to-skin contact with an infected person, and contact with objects used by an infected person such as towels, sheets, clothing, or athletic equipment. Joint replacements put a person at particular risk of septic arthritis, staphylococcal endocarditis (infection of the heart valves), and pneumonia.
Staphylococcus aureus is a significant cause of chronic biofilm infections on medical implants, and the repressor of toxins is part of the infection pathway.
Staphylococcus aureus can lie dormant in the body for years undetected. Once symptoms begin to show, the host is contagious for another two weeks, and the overall illness lasts a few weeks. If untreated, though, the disease can be deadly. Deeply penetrating S. aureus infections can be severe.
Skin infections
Skin infections are the most common form of S. aureus infection. This can manifest in various ways, including small benign boils, folliculitis, impetigo, cellulitis, and more severe, invasive soft-tissue infections.
Staphylococcus aureus is extremely prevalent in persons with atopic dermatitis (AD), more commonly known as eczema. It is mostly found in fertile, active places, including the armpits, hair, and scalp. Large pimples that appear in those areas may exacerbate the infection if lacerated. Colonization of S. aureus drives inflammation of AD. S. aureus is believed to exploit defects in the skin barrier of persons with atopic dermatitis, triggering cytokine expression and therefore exacerbating symptoms. This can lead to staphylococcal scalded skin syndrome, a severe form of which can be seen in newborns.
The role of S. aureus in causing itching in atopic dermatitis has been studied.
Antibiotics are commonly used to target overgrowth of S. aureus, but their benefit is limited and they increase the risk of antimicrobial resistance. For these reasons, they are only recommended for people who not only present symptoms on the skin but also feel systemically unwell.
Food poisoning
Staphylococcus aureus is also responsible for food poisoning, which it achieves by generating toxins in food that is then ingested. Its incubation period lasts 30 minutes to eight hours, with the illness itself lasting from 30 minutes to 3 days. Preventive measures that help prevent the spread of the disease include washing hands thoroughly with soap and water before preparing food. The Centers for Disease Control and Prevention recommends staying away from food preparation when ill, and wearing gloves if any open wounds occur on hands or wrists while preparing food. If storing food for longer than 2 hours, it is recommended to keep the food below 4.4 °C or above 60 °C (below 40 °F or above 140 °F).
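The storage guidance above reduces to a single "danger zone" check. A minimal sketch, assuming a hypothetical helper name and Celsius input:

```python
def is_safe_holding_temperature(temp_c: float) -> bool:
    """Return True if food stored longer than 2 hours is held at a safe
    temperature per the guidance above: below 4.4 °C (40 °F) or above
    60 °C (140 °F); temperatures in between are unsafe."""
    return temp_c < 4.4 or temp_c > 60.0


print(is_safe_holding_temperature(3.0))   # True: refrigerated
print(is_safe_holding_temperature(25.0))  # False: room temperature
print(is_safe_holding_temperature(70.0))  # True: hot holding
```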
Bone and joint infections
Staphylococcus aureus is a common cause of major bone and joint infections, including osteomyelitis, septic arthritis, and infections following joint replacement surgeries.
Bacteremia
Staphylococcus aureus is a leading cause of bloodstream infections throughout much of the industrialized world. Infection is generally associated with breaks in the skin or mucosal membranes due to surgery, injury, or use of intravascular devices such as cannulas, hemodialysis machines, or hypodermic needles. Once the bacteria have entered the bloodstream, they can infect various organs, causing infective endocarditis, septic arthritis, and osteomyelitis. This disease is particularly prevalent and severe in the very young and very old.
Without antibiotic treatment, S. aureus bacteremia has a case fatality rate around 80%. With antibiotic treatment, case fatality rates range from 15% to 50% depending on the age and health of the patient, as well as the antibiotic resistance of the S. aureus strain.
Medical implant infections
Staphylococcus aureus is often found in biofilms formed on medical devices implanted in the body or on human tissue. It is commonly found with another pathogen, Candida albicans, forming multispecies biofilms. The latter is suspected to help S. aureus penetrate human tissue. A higher mortality is linked with multispecies biofilms.
Staphylococcus aureus biofilm is the predominant cause of orthopedic implant-related infections, but is also found on cardiac implants, vascular grafts, various catheters, and cosmetic surgical implants. After implantation, the surface of these devices becomes coated with host proteins, which provide a rich surface for bacterial attachment and biofilm formation. Once the device becomes infected, it must be completely removed, since S. aureus biofilm cannot be destroyed by antibiotic treatments.
Current therapy for S. aureus biofilm-mediated infections involves surgical removal of the infected device followed by antibiotic treatment. Conventional antibiotic treatment alone is not effective in eradicating such infections. An alternative to postsurgical antibiotic treatment is using antibiotic-loaded, dissolvable calcium sulfate beads, which are implanted with the medical device. These beads can release high doses of antibiotics at the desired site to prevent the initial infection.
Novel treatments for S. aureus biofilm involving nano silver particles, bacteriophages, and plant-derived antibiotic agents are being studied. These agents have shown inhibitory effects against S. aureus embedded in biofilms. A class of enzymes has been found to have biofilm matrix-degrading ability and thus may be used as biofilm dispersal agents in combination with antibiotics.
Animal infections
Staphylococcus aureus can survive on dogs, cats, and horses, and can cause bumblefoot in chickens. Some believe health-care workers' dogs should be considered a significant source of antibiotic-resistant S. aureus, especially in times of outbreak. In a 2008 study by Boost, O'Donoghue, and James, about 90% of S. aureus isolates colonizing pet dogs were found to be resistant to at least one antibiotic. The nasal region has been implicated as the most important site of transfer between dogs and humans.
Staphylococcus aureus is one of the causal agents of mastitis in dairy cows. Its large polysaccharide capsule protects the organism from recognition by the cow's immune defenses.
Virulence factors
Enzymes
Staphylococcus aureus produces various enzymes, such as coagulase (bound and free coagulases), which facilitates the conversion of fibrinogen to fibrin to cause clotting, important in skin infections. Hyaluronidase (also known as spreading factor) breaks down hyaluronic acid and helps the bacterium spread through tissue. Deoxyribonuclease, which breaks down DNA, protects S. aureus from killing mediated by neutrophil extracellular traps. S. aureus also produces lipase to digest lipids, staphylokinase to dissolve fibrin and aid in spread, and beta-lactamase for drug resistance.
Toxins
Depending on the strain, S. aureus is capable of secreting several exotoxins, which can be categorized into three groups. Many of these toxins are associated with specific diseases.
Superantigens
Antigens known as superantigens can induce toxic shock syndrome (TSS). This group comprises 25 staphylococcal enterotoxins (SEs) which have been identified to date and named alphabetically (SEA–SEZ), including enterotoxin type B as well as the toxic shock syndrome toxin TSST-1 which causes TSS associated with tampon use. Toxic shock syndrome is characterized by fever, erythematous rash, low blood pressure, shock, multiple organ failure, and skin peeling. Lack of antibody to TSST-1 plays a part in the pathogenesis of TSS. Other strains of S. aureus can produce an enterotoxin that is the causative agent of a type of gastroenteritis. This form of gastroenteritis is self-limiting, characterized by vomiting and diarrhea 1–6 hours after ingestion of the toxin, with recovery in 8 to 24 hours. Symptoms include nausea, vomiting, diarrhea, and major abdominal pain.
Exfoliative toxins
Exfoliative toxins are exotoxins implicated in the disease staphylococcal scalded skin syndrome (SSSS), which occurs most commonly in infants and young children. It also may occur as epidemics in hospital nurseries. The protease activity of the exfoliative toxins causes peeling of the skin observed with SSSS.
Other toxins
Staphylococcal toxins that act on cell membranes include alpha toxin, beta toxin, delta toxin, and several bicomponent toxins. Strains of S. aureus can host phages, such as the prophage Φ-PVL that produces Panton-Valentine leukocidin (PVL), to increase virulence. The bicomponent toxin PVL is associated with severe necrotizing pneumonia in children. The genes encoding the components of PVL are encoded on a bacteriophage found in community-associated MRSA strains.
Type VII secretion system
A secretion system is a highly specialised multi-protein unit embedded in the cell envelope, with the function of translocating effector proteins from the inside of the cell to the extracellular space or into a target host cytosol. The exact structure and function of the type VII secretion system (T7SS) are yet to be fully elucidated. Currently, four proteins are known components of the S. aureus type VII secretion system: EssC is a large integral membrane ATPase, which most likely powers the secretion system and has been hypothesised to form part of the translocation channel. The other proteins, EsaA, EssB, and EssA, are membrane proteins that function alongside EssC to mediate protein secretion. The exact mechanism by which substrates reach the cell surface is unknown, as is the interaction of the three membrane proteins with each other and with EssC.
T7 dependent effector proteins
EsaD is a DNA endonuclease toxin secreted by S. aureus that has been shown to inhibit the growth of competitor S. aureus strains in vitro. EsaD is co-secreted with the chaperone EsaE, which stabilises EsaD's structure and brings EsaD to EssC for secretion. Strains that produce EsaD also co-produce EsaG, a cytoplasmic anti-toxin that protects the producer strain from EsaD's toxicity.
TspA is another toxin that mediates intraspecies competition. It is a bacteriostatic toxin whose membrane-depolarising activity is facilitated by its C-terminal domain. TsaI is a transmembrane protein that confers immunity on the producer strain of TspA, as well as on attacked strains. The C-terminal domain of TspA is genetically variable, so strains may produce different TspA variants to increase their competitiveness.
Toxins that play a role in intraspecies competition confer an advantage by promoting successful colonisation of polymicrobial communities, such as the nasopharynx and lung, by outcompeting lesser strains.
There are also T7 effector proteins that play a role in pathogenesis; for example, mutational studies of S. aureus have suggested that EsxB and EsxC contribute to persistent infection in a murine abscess model.
EsxX has been implicated in neutrophil lysis and is therefore suggested to contribute to evasion of the host immune system. Deletion of esxX in S. aureus resulted in significantly reduced resistance to neutrophils and reduced virulence in murine skin and blood infection models.
Altogether, the T7SS and its known secreted effector proteins represent a strategy of pathogenesis, improving fitness against competitor S. aureus strains as well as increasing virulence by evading the innate immune system and optimising persistent infection.
Small RNA
The list of small RNAs involved in the control of bacterial virulence in S. aureus is growing; examples include RNAIII, SprD, SprC, RsaE, SprA1, SSR42, ArtR, SprX, and Teg49. Virulence control can be facilitated by factors such as increased biofilm formation in the presence of elevated levels of these small RNAs.
DNA repair
Host neutrophils cause DNA double-strand breaks in S. aureus through the production of reactive oxygen species. For infection of a host to be successful, S. aureus must survive such damage caused by the host's defenses. The two-protein complex RexAB encoded by S. aureus is employed in the recombinational repair of DNA double-strand breaks.
Strategies for post-transcriptional regulation by the 3' untranslated region
Many mRNAs in S. aureus carry three prime untranslated regions (3'UTR) longer than 100 nucleotides, which may potentially have a regulatory function.
Further investigation of icaR mRNA (the mRNA coding for the repressor of the main exopolysaccharidic compound of the bacterial biofilm matrix) demonstrated that binding of the 3'UTR to the 5'UTR can interfere with the translation initiation complex and generate a double-stranded substrate for RNase III. The interaction is between the UCCCCUG motif in the 3'UTR and the Shine-Dalgarno region at the 5'UTR. Deletion of the motif resulted in IcaR repressor accumulation and inhibition of biofilm development. Biofilm formation is the main cause of Staphylococcus implant infections.
Biofilm
Biofilms are groups of microorganisms, such as bacteria, that attach to each other and grow on wet surfaces. The S. aureus biofilm is embedded in a glycocalyx slime layer and can consist of teichoic acids, host proteins, extracellular DNA (eDNA) and sometimes polysaccharide intercellular antigen (PIA). S. aureus biofilms are important in disease pathogenesis, as they can contribute to antibiotic resistance and immune system evasion. S. aureus biofilm has high resistance to antibiotic treatments and the host immune response. One hypothesis for explaining this is that the biofilm matrix protects the embedded cells by acting as a barrier to antibiotic penetration. However, the biofilm matrix contains many water channels, so this hypothesis is becoming increasingly less likely; the matrix may instead contain antibiotic-degrading enzymes, such as β-lactamases, which can prevent antibiotics from reaching the embedded cells. Another hypothesis is that conditions in the biofilm matrix favor the formation of persister cells, which are highly antibiotic-resistant, dormant bacterial cells. S. aureus biofilms also have high resistance to the host immune response. Though the exact mechanism of resistance is unknown, S. aureus biofilms show increased growth in the presence of cytokines produced by the host immune response. Host antibodies are less effective against S. aureus biofilm due to its heterogeneous antigen distribution: an antigen may be present in some areas of the biofilm but completely absent from others.
Studies of biofilm development have shown it to be related to changes in gene expression, with specific genes found to be crucial at different biofilm growth stages. Two of these genes are rocD and gudB, which encode the enzymes ornithine-oxo-acid transaminase and glutamate dehydrogenase, both important for amino acid metabolism. Studies have shown that biofilm development relies on the amino acids glutamine and glutamate for proper metabolic function.
Other immunoevasive strategies
Protein A
Protein A is anchored to staphylococcal peptidoglycan pentaglycine bridges (chains of five glycine residues) by the transpeptidase sortase A. Protein A, an IgG-binding protein, binds to the Fc region of an antibody. In fact, studies involving mutation of genes coding for protein A resulted in a lowered virulence of S. aureus as measured by survival in blood, which has led to speculation that protein A-contributed virulence requires binding of antibody Fc regions.
Protein A in various recombinant forms has been used for decades to bind and purify a wide range of antibodies by immunoaffinity chromatography. Transpeptidases, such as the sortases responsible for anchoring factors like protein A to the staphylococcal peptidoglycan, are being studied in hopes of developing new antibiotics to target MRSA infections.
Staphylococcal pigments
Some strains of S. aureus are capable of producing staphyloxanthin – a golden-coloured carotenoid pigment. This pigment acts as a virulence factor, primarily by being a bacterial antioxidant which helps the microbe evade the reactive oxygen species which the host immune system uses to kill pathogens.
Mutant strains of S. aureus modified to lack staphyloxanthin are less likely to survive incubation with an oxidizing chemical, such as hydrogen peroxide, than pigmented strains. Mutant colonies are quickly killed when exposed to human neutrophils, while many of the pigmented colonies survive. In mice, the pigmented strains cause lingering abscesses when inoculated into wounds, whereas wounds infected with the unpigmented strains quickly heal.
These tests suggest the Staphylococcus strains use staphyloxanthin as a defence against the normal human immune system. Drugs designed to inhibit the production of staphyloxanthin may weaken the bacterium and renew its susceptibility to antibiotics. In fact, because of similarities in the pathways for biosynthesis of staphyloxanthin and human cholesterol, a drug developed in the context of cholesterol-lowering therapy was shown to block S. aureus pigmentation and disease progression in a mouse infection model.
Classical diagnosis
Depending upon the type of infection present, an appropriate specimen is obtained and sent to the laboratory for definitive identification using biochemical or enzyme-based tests. A Gram stain is first performed to guide identification, which should show typical gram-positive bacteria, cocci, in clusters. Second, the isolate is cultured on mannitol salt agar, which is a selective medium with 7.5% NaCl that allows S. aureus to grow, producing yellow-colored colonies as a result of mannitol fermentation and the subsequent drop in the medium's pH.
Furthermore, for differentiation at the species level, catalase (positive for all Staphylococcus species), coagulase (fibrin clot formation, positive for S. aureus), DNase (zone of clearance on DNase agar), lipase (a yellow color and rancid odor), and phosphatase (a pink color) tests are all done. For staphylococcal food poisoning, phage typing can be performed to determine whether the staphylococci recovered from the food were the source of infection.
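The two paragraphs above describe a stepwise workflow, which can be summarized as a decision procedure. This is a hypothetical simplification for illustration only (real identification involves more tests and controls; the function name and boolean inputs are assumptions):

```python
def presumptive_identification(gram_positive_cocci_in_clusters: bool,
                               catalase_positive: bool,
                               coagulase_positive: bool,
                               yellow_on_mannitol_salt_agar: bool) -> str:
    """Step through the classical test sequence described above."""
    if not gram_positive_cocci_in_clusters:
        return "not consistent with Staphylococcus"
    if not catalase_positive:
        return "suggests streptococci or enterococci instead"
    if coagulase_positive and yellow_on_mannitol_salt_agar:
        # Mannitol fermentation drops the medium's pH, yielding yellow colonies.
        return "presumptive S. aureus"
    return "coagulase-negative staphylococcus"


print(presumptive_identification(True, True, True, True))
# -> presumptive S. aureus
```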
Rapid diagnosis and typing
Diagnostic microbiology laboratories and reference laboratories are key for identifying outbreaks and new strains of S. aureus. Recent genetic advances have enabled reliable and rapid techniques for the identification and characterization of clinical isolates of S. aureus in real time. These tools support infection control strategies to limit bacterial spread and ensure the appropriate use of antibiotics. Quantitative PCR is increasingly being used to identify outbreaks of infection.
When observing the evolution of S. aureus and its ability to adapt to each modified antibiotic, two basic approaches known as "band-based" and "sequence-based" methods are employed. Within these approaches, methods such as multilocus sequence typing (MLST), pulsed-field gel electrophoresis (PFGE), bacteriophage typing, spa locus typing, and SCCmec typing are conducted more often than others. With these methods, it can be determined where strains of MRSA originated and where they are currently.
MLST uses fragments of several housekeeping genes, known as aroE, glpF, gmk, pta, tpi, and yqiL. Each sequence is assigned a number, and the numbers together give a string of several numbers that serves as the allelic profile. Although this is a common method, a limitation is the maintenance of the microarray used to detect newly arising allelic profiles, which makes it a costly and time-consuming method.
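As an illustration of how an allelic profile is assembled, here is a minimal sketch; only the gene names come from the text, while the allele numbers for the example isolate are invented for exposition:

```python
# Housekeeping gene fragments used for S. aureus MLST, as listed above.
MLST_GENES = ("aroE", "glpF", "gmk", "pta", "tpi", "yqiL")


def allelic_profile(allele_numbers: dict) -> tuple:
    """Build the allelic profile: one allele number per housekeeping gene,
    in a fixed gene order, forming the string of numbers that types the
    isolate."""
    return tuple(allele_numbers[gene] for gene in MLST_GENES)


# Hypothetical isolate: each sequenced gene fragment has already been
# matched to an allele number in a reference database.
isolate = {"aroE": 4, "glpF": 1, "gmk": 4, "pta": 12, "tpi": 1, "yqiL": 10}
print(allelic_profile(isolate))  # (4, 1, 4, 12, 1, 10)
```

Isolates sharing the same profile belong to the same sequence type, so profiles can be compared directly to judge relatedness.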
PFGE, a method dating back to its first success in the 1980s and still very much in use, remains capable of helping to differentiate MRSA isolates. To accomplish this, the technique uses gel electrophoresis with a periodically alternating voltage gradient to display clear resolutions of molecules. The S. aureus fragments then transition down the gel, producing specific band patterns that are later compared with other isolates in hopes of identifying related strains. Limitations of the method include practical difficulties with uniform band patterns and the sensitivity of PFGE as a whole.
Spa locus typing is also considered a popular technique; it uses a single locus in a polymorphic region of S. aureus to distinguish any form of mutation. Although this technique is often inexpensive and less time-consuming, the potential loss of discriminatory power, making it hard to differentiate between MLST clonal complexes, is a crucial limitation.
Treatment
For susceptible strains, the treatment of choice for S. aureus infection is penicillin. An antibiotic derived from some Penicillium fungal species, penicillin inhibits the formation of peptidoglycan cross-linkages that provide the rigidity and strength in a bacterial cell wall. The four-membered β-lactam ring of penicillin binds to the enzyme DD-transpeptidase, which, when functional, cross-links chains of peptidoglycan that form bacterial cell walls. The binding of β-lactam to DD-transpeptidase inhibits the enzyme's functionality, so it can no longer catalyze the formation of the cross-links. As a result, cell wall formation and degradation are imbalanced, resulting in cell death. In most countries, however, penicillin resistance is extremely common (>90%), and first-line therapy is most commonly a penicillinase-resistant β-lactam antibiotic (for example, oxacillin or flucloxacillin, both of which have the same mechanism of action as penicillin) or vancomycin, depending on local resistance patterns. Combination therapy with gentamicin may be used to treat serious infections, such as endocarditis, but its use is controversial because of the high risk of damage to the kidneys. The duration of treatment depends on the site of infection and on severity. Adjunctive rifampicin has been historically used in the management of S. aureus bacteraemia, but randomised controlled trial evidence has shown this to be of no overall benefit over standard antibiotic therapy.
Antibiotic resistance in S. aureus was uncommon when penicillin was first introduced in 1943. Indeed, the original Petri dish on which Alexander Fleming of Imperial College London observed the antibacterial activity of the Penicillium fungus was growing a culture of S. aureus. By 1950, 40% of hospital S. aureus isolates were penicillin-resistant; by 1960, this had risen to 80%.
Methicillin-resistant Staphylococcus aureus (MRSA) is one of a number of greatly feared strains of S. aureus which have become resistant to most β-lactam antibiotics. For this reason, vancomycin, a glycopeptide antibiotic, is commonly used to combat MRSA. Vancomycin inhibits the synthesis of peptidoglycan, but unlike β-lactam antibiotics, glycopeptide antibiotics target and bind to amino acids in the cell wall, preventing peptidoglycan cross-linkages from forming. MRSA strains are most often found associated with institutions such as hospitals, but are becoming increasingly prevalent in community-acquired infections.
Minor skin infections can be treated with triple antibiotic ointment. One topical agent that is prescribed is mupirocin, a protein synthesis inhibitor that is produced naturally by Pseudomonas fluorescens and has seen success for treatment of S. aureus nasal carriage.
Antibiotic resistance
Staphylococcus aureus was found to be the second leading pathogen for deaths associated with antimicrobial resistance in 2019.
Staphylococcal resistance to penicillin is mediated by penicillinase (a form of beta-lactamase) production: an enzyme that cleaves the β-lactam ring of the penicillin molecule, rendering the antibiotic ineffective. Penicillinase-resistant β-lactam antibiotics, such as methicillin, nafcillin, oxacillin, cloxacillin, dicloxacillin, and flucloxacillin are able to resist degradation by staphylococcal penicillinase.
Resistance to methicillin is mediated via the mec operon, part of the staphylococcal cassette chromosome mec (SCCmec). SCCmec is a family of mobile genetic elements, which is a major driving force of S. aureus evolution. Resistance is conferred by the mecA gene, which codes for an altered penicillin-binding protein (PBP2a or PBP2') that has a lower affinity for binding β-lactams (penicillins, cephalosporins, and carbapenems). This confers resistance to all β-lactam antibiotics and precludes their clinical use during MRSA infections. Studies have explained that this mobile genetic element has been acquired by different lineages in separate gene transfer events, indicating that there is not a common ancestor of differing MRSA strains. One study suggests that MRSA sacrifices virulence, for example toxin production and invasiveness, for survival and the creation of biofilms.
Aminoglycoside antibiotics, such as kanamycin, gentamicin, and streptomycin, were once effective against staphylococcal infections until strains evolved mechanisms to inhibit the aminoglycosides' action, which occurs via protonated amine and/or hydroxyl interactions with the ribosomal RNA of the bacterial 30S ribosomal subunit. Three main mechanisms of aminoglycoside resistance are currently widely accepted: aminoglycoside-modifying enzymes, ribosomal mutations, and active efflux of the drug out of the bacterium.
Aminoglycoside-modifying enzymes inactivate the aminoglycoside by covalently attaching either a phosphate, nucleotide, or acetyl moiety to either the amine or the alcohol key functional group (or both groups) of the antibiotic. This changes the charge or sterically hinders the antibiotic, decreasing its ribosomal binding affinity. In S. aureus, the best-characterized aminoglycoside-modifying enzyme is aminoglycoside adenylyltransferase 4' IA (ANT(4')IA), whose structure has been solved by X-ray crystallography. The enzyme is able to attach an adenyl moiety to the 4' hydroxyl group of many aminoglycosides, including kanamycin and gentamicin.
Glycopeptide resistance is typically mediated by acquisition of the vanA gene, which originates from the Tn1546 transposon found in a plasmid in enterococci and codes for an enzyme that produces an alternative peptidoglycan to which vancomycin will not bind.
Today, S. aureus has become resistant to many commonly used antibiotics. In the UK, only 2% of all S. aureus isolates are sensitive to penicillin, with a similar picture in the rest of the world. The β-lactamase-resistant penicillins (methicillin, oxacillin, cloxacillin, and flucloxacillin) were developed to treat penicillin-resistant S. aureus, and are still used as first-line treatment. Methicillin was the first antibiotic in this class to be used (it was introduced in 1959), but only two years later, the first case of methicillin-resistant Staphylococcus aureus (MRSA) was reported in England.
Despite this, MRSA generally remained an uncommon finding, even in hospital settings, until the 1990s, when the MRSA prevalence in hospitals exploded, and it is now endemic. Now, methicillin-resistant Staphylococcus aureus (MRSA) is not only a human pathogen causing a variety of infections, such as skin and soft tissue infection (SSTI), pneumonia, and sepsis, but it also can cause disease in animals, known as livestock-associated MRSA (LA-MRSA).
MRSA infections in both the hospital and community setting are commonly treated with non-β-lactam antibiotics, such as clindamycin (a lincosamine) and co-trimoxazole (also commonly known as trimethoprim/sulfamethoxazole). Resistance to these antibiotics has also led to the use of new, broad-spectrum anti-Gram-positive antibiotics, such as linezolid, because of its availability as an oral drug. First-line treatment for serious invasive infections due to MRSA is currently glycopeptide antibiotics (vancomycin and teicoplanin). A number of problems with these antibiotics occur, such as the need for intravenous administration (no oral preparation is available), toxicity, and the need to monitor drug levels regularly by blood tests. Also, glycopeptide antibiotics do not penetrate very well into infected tissues (this is a particular concern with infections of the brain and meninges and in endocarditis). Glycopeptides must not be used to treat methicillin-sensitive S. aureus (MSSA), as outcomes are inferior.
Daptomycin is a cyclic lipopeptide antibiotic primarily used for treating Gram-positive bacterial infections, including those caused by Staphylococcus aureus. It was first approved in 2003 and is especially effective against resistant strains like methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant Staphylococcus aureus (VRSA).
Daptomycin works in a unique way compared to other antibiotics: its mechanism includes calcium-dependent membrane binding, disruption of the membrane potential, and bacterial cell death. Daptomycin is FDA-approved for treating complicated skin and soft tissue infections, bloodstream infections, and right-sided infective endocarditis caused by S. aureus.
Serum triggers a high degree of tolerance to the lipopeptide antibiotic daptomycin and several other classes of antibiotics. Serum-induced daptomycin tolerance is due to two independent mechanisms. The first is activation of the GraRS two-component system, triggered by the host defense peptide LL-37; in response, the bacteria produce more peptidoglycan, thickening the cell wall and increasing tolerance. The second is an increase in cardiolipin abundance in the membrane: serum-adapted bacteria change their membrane composition, which reduces the binding of daptomycin to the bacterial membrane.
Because of the high level of resistance to penicillins and because of the potential for MRSA to develop resistance to vancomycin, the U.S. Centers for Disease Control and Prevention has published guidelines for the appropriate use of vancomycin. In situations where the incidence of MRSA infections is known to be high, the attending physician may choose to use a glycopeptide antibiotic until the identity of the infecting organism is known. After the infection is confirmed to be due to a methicillin-susceptible strain of S. aureus, treatment can be changed to flucloxacillin or even penicillin, as appropriate.
Vancomycin-resistant S. aureus (VRSA) is a strain of S. aureus that has become resistant to the glycopeptides. The first case of vancomycin-intermediate S. aureus (VISA) was reported in Japan in 1996, but the first case of S. aureus truly resistant to glycopeptide antibiotics was only reported in 2002. Three cases of VRSA infection had been reported in the United States as of 2005. At least in part, the antimicrobial resistance of S. aureus can be explained by its ability to adapt: multiple two-component signal transduction pathways help S. aureus express the genes required to survive under antimicrobial stress.
Efflux pumps
Among the various mechanisms that MRSA acquires to elude antibiotics (e.g., drug inactivation, target alteration, reduction of permeability), there is also the overexpression of efflux pumps. Efflux pumps are membrane-integrated proteins that are physiologically needed in the cell for the exportation of xenobiotic compounds. They are divided into six families, each of which has a different structure, function, and transport of energy. The main efflux pumps of S. aureus are in the MFS (Major Facilitator Superfamily), which includes the MdeA pump as well as the NorA pump, and the MATE (Multidrug and Toxin Extrusion) family, to which the MepA pump belongs. For transport, these families use an electrochemical potential and an ion concentration gradient, while the ATP-binding cassette (ABC) family acquires its energy from the hydrolysis of ATP.
These pumps are overexpressed by MDR S. aureus (Multidrug resistant S. aureus) and the result is an excessive expulsion of the antibiotic outside the cell, which makes its action ineffective. Efflux pumps also contribute significantly to the development of impenetrable biofilms.
By directly modulating efflux pumps' activity or decreasing their expression, it may be possible to modify the resistant phenotype and restore the effectiveness of existing antibiotics.
Carriage
About 33% of the U.S. population are carriers of S. aureus and about 2% carry MRSA. Even healthcare providers can be MRSA colonizers.
The carriage of S. aureus is an important source of hospital-acquired infection (also called nosocomial) and community-acquired MRSA. Although S. aureus can be present on the skin of the host, a large proportion of its carriage is through the anterior nares of the nasal passages and can further be present in the ears. The ability of the nasal passages to harbour S. aureus results from a combination of a weakened or defective host immunity and the bacterium's ability to evade host innate immunity. Nasal carriage is also implicated in the occurrence of staph infections.
Infection control
Spread of S. aureus (including MRSA) generally is through human-to-human contact, although recently some veterinarians have discovered the infection can be spread through pets, with environmental contamination thought to play a relatively less important part. Emphasis on basic hand washing techniques is, therefore, effective in preventing its transmission. The use of disposable aprons and gloves by staff reduces skin-to-skin contact, so further reduces the risk of transmission.
Recently, myriad cases of S. aureus have been reported in hospitals across America. Transmission of the pathogen is facilitated in medical settings where healthcare worker hygiene is insufficient. S. aureus is an incredibly hardy bacterium, as was shown in a study where it survived on polyester for just under three months; polyester is the main material used in hospital privacy curtains.
The bacteria are transported on the hands of healthcare workers, who may pick them up from a seemingly healthy patient carrying a benign or commensal strain of S. aureus, and then pass it on to the next patient being treated. Introduction of the bacteria into the bloodstream can lead to various complications, including endocarditis, meningitis, and, if it is widespread, sepsis.
Ethanol has proven to be an effective topical sanitizer against MRSA. Quaternary ammonium can be used in conjunction with ethanol to increase the duration of the sanitizing action. The prevention of nosocomial infections involves routine and terminal cleaning. Nonflammable alcohol vapor in NAV-CO2 systems has an advantage, as it does not attack metals or plastics used in medical environments and does not contribute to antibacterial resistance.
An important and previously unrecognized means of community-associated MRSA colonization and transmission is during sexual contact.
Staphylococcus aureus is killed in one minute at 78 °C and in ten minutes at 64 °C but is resistant to freezing.
Certain strains of S. aureus have been described as being resistant to chlorine disinfection.
The use of mupirocin ointment can reduce the rate of infections due to nasal carriage of S. aureus. There is limited evidence that nasal decontamination of S. aureus using antibiotics or antiseptics can reduce the rates of surgical site infections.
Research
As of 2021, no approved vaccine exists against S. aureus. Early clinical trials have been conducted for several vaccine candidates, such as Nabi's StaphVax and PentaStaph, Intercell's/Merck's V710, VRi's SA75, and others.
While some of these vaccine candidates have shown immune responses, others aggravated an infection by S. aureus. To date, none of these candidates provides protection against a S. aureus infection. The development of Nabi's StaphVax was stopped in 2005 after phase III trials failed. Intercell's first V710 vaccine variant was terminated during phase II/III after higher mortality and morbidity were observed among patients who developed S. aureus infection.
Nabi's enhanced S. aureus vaccine candidate PentaStaph was sold in 2011 to GlaxoSmithKline Biologicals S.A. The current status of PentaStaph is unclear. A WHO document indicates that PentaStaph failed in the phase III trial stage.
In 2010, GlaxoSmithKline started a phase 1 blind study to evaluate its GSK2392103A vaccine. As of 2016, this vaccine is no longer under active development.
Pfizer's S. aureus four-antigen vaccine SA4Ag was granted fast-track designation by the U.S. Food and Drug Administration in February 2014. In 2015, Pfizer commenced a phase 2b trial of the SA4Ag vaccine. Phase 1 results published in February 2017 showed robust immunogenicity and a good safety profile for SA4Ag. The vaccine remained in clinical trials until June 2019, with results published in September 2020 that did not demonstrate a significant reduction in postoperative bloodstream infections.
In 2015, Novartis Vaccines and Diagnostics, a former division of Novartis and now part of GlaxoSmithKline, published promising pre-clinical results of their four-component Staphylococcus aureus vaccine, 4C-staph.
In addition to vaccine development, research is being performed to develop alternative treatment options that are effective against antibiotic resistant strains including MRSA. Examples of alternative treatments are phage therapy, antimicrobial peptides and host-directed therapy.
Standard strains
A number of standard strains of S. aureus (called "type cultures") are used in research and in laboratory testing.
| Biology and health sciences | Other organisms | null |
118393 | https://en.wikipedia.org/wiki/Vernier%20scale | Vernier scale | A vernier scale ( ), named after Pierre Vernier, is a visual aid to take an accurate measurement reading between two graduation markings on a linear scale by using mechanical interpolation, thereby increasing resolution and reducing measurement uncertainty by using vernier acuity to reduce human estimation error. It may be found on many types of instrument measuring linear or angular quantities, but in particular on a vernier caliper, which measures lengths (including internal and external diameters).
The vernier is a subsidiary scale replacing a single measured-value pointer, and has for instance ten divisions equal in distance to nine divisions on the main scale. The interpolated reading is obtained by observing which of the vernier scale graduations is coincident with a graduation on the main scale, which is easier to perceive than visual estimation between two points. Such an arrangement can go to a higher resolution by using a higher scale ratio, known as the vernier constant. A vernier may be used on circular or straight scales where a simple linear mechanism is adequate. Examples are calipers and micrometers to measure to fine tolerances, on sextants for navigation, on theodolites in surveying, and generally on scientific instruments.
The Vernier principle of interpolation is also used for electronic displacement sensors such as absolute encoders to measure linear or rotational movement, as part of an electronic measuring system.
History
The first caliper with a secondary scale, which contributed extra precision, was invented in 1631 by French mathematician Pierre Vernier (1580–1637). Its use was described in detail in English in Navigatio Britannica (1750) by mathematician and historian John Barrow. While calipers are the most typical use of vernier scales today, they were originally developed for angle-measuring instruments such as astronomical quadrants.
In some languages, the vernier scale is called a nonius after Portuguese mathematician and cosmographer Pedro Nunes (Latin Petrus Nonius, 1502–1578). In English, this term was used until the end of the 18th century. Nonius now refers to an earlier instrument that Nunes developed.
The name "vernier" was popularised by the French astronomer Jérôme Lalande (1732–1807) through his Traité d'astronomie (2 vols) (1764).
Functioning
The use of the vernier scale is shown on a vernier caliper which measures the internal and the external diameters of an object.
The vernier scale is constructed so that it is spaced at a constant fraction of the fixed main scale. So for a vernier with a constant of 0.1, each mark on the vernier is spaced 9/10 of those on the main scale. If you put the two scales together with zero points aligned, the first mark on the vernier scale is 1/10 short of the first main scale mark, the second is 2/10 short, and so on up to the ninth mark, which is misaligned by 9/10. Only when a full ten marks are counted is there alignment, because the tenth mark is 10/10—a whole main scale unit—short, and therefore aligns with the ninth mark on the main scale. (In simple words, each vernier division is 9/10 of a main-scale division, so ten vernier divisions together span exactly nine main-scale divisions.)
Now if you move the vernier by a small amount, say, 1/10 of its fixed main scale, the only pair of marks that come into alignment are the first pair, since these were the only ones originally misaligned by 1/10. If we move it 2/10, the second pair aligns, since these are the only ones originally misaligned by that amount. If we move it 5/10, the fifth pair aligns—and so on. For any movement, only one pair of marks aligns and that pair shows the value between the marks on the fixed scale.
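To make the mechanism concrete, here is a minimal sketch (not from the source) in Python, assuming a main scale with divisions of 1.0 unit and a ten-division vernier whose marks are spaced 9/10 of a unit apart:

    def aligned_vernier_mark(displacement, n=10, main_pitch=1.0, tol=1e-9):
        """Return the index of the vernier mark that coincides with a main-scale
        mark when the vernier zero sits `displacement` past a main-scale mark."""
        vernier_pitch = main_pitch * (n - 1) / n  # each vernier division is 9/10 unit
        for k in range(n + 1):
            position = displacement + k * vernier_pitch  # where vernier mark k falls
            # mark k is aligned when it sits a whole number of main divisions along
            if abs(position / main_pitch - round(position / main_pitch)) < tol:
                return k
        return None

    print(aligned_vernier_mark(0.3))  # moving the vernier 3/10 of a division -> 3

Moving the slider by k tenths of a main division brings exactly the k-th pair of marks into coincidence, which is the alignment rule described above.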
Least count or vernier constant
The difference between the value of one main scale division and the value of one vernier scale division is known as the least count of the vernier, also known as the vernier constant. Let the measure of the smallest main-scale reading, that is the distance between two consecutive graduations (also called its pitch) be S, and the distance between two consecutive vernier scale graduations be V, such that the length of (n − 1) main-scale divisions is equal to n vernier-scale divisions. Then
the length of (n − 1) main-scale divisions = the length of n vernier-scale divisions, or
(n − 1)S = nV, or
nS − S = nV, so that n(S − V) = S and the least count is S − V = S/n.
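As an illustration (the numbers are assumptions, not from the text), the least count follows directly from S and n; a common metric caliper has S = 1 mm and n = 50:

    def least_count(S, n):
        V = (n - 1) * S / n  # width of one vernier division
        return S - V         # equals S / n

    print(round(least_count(1.0, 50), 4))  # -> 0.02 mm
    print(round(least_count(1.0, 10), 4))  # -> 0.1 mm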
Vernier acuity
Vernier scales work so well because most people are especially good at detecting which of the lines is aligned and misaligned, and that ability gets better with practice, in fact far exceeding the optical capability of the eye. This ability to detect alignment is called vernier acuity. Historically, none of the alternative technologies exploited this or any other hyperacuity, giving the vernier scale an advantage over its competitors.
Zero error
Zero error is defined as the condition where a measuring instrument registers a nonzero value at the zero position. In case of vernier calipers it occurs when a zero on main scale does not coincide with a zero on vernier scale. The zero error may be of two types: when the scale is towards numbers greater than zero, it is positive; otherwise it is negative. The method to use a vernier scale or caliper with zero error is to use the formula
actual reading = main scale + vernier scale − (zero error).
Zero error may arise due to knocks or other damage which causes the 0.00 mm marks to be misaligned when the jaws are perfectly closed or just touching each other.
Positive zero error refers to the case when the jaws of the vernier caliper are just closed and the reading is a positive reading away from the actual reading of 0.00mm. If the reading is 0.10mm, the zero error is referred to as +0.10 mm.
Negative zero error refers to the case when the jaws of the vernier caliper are just closed and the reading is a negative reading away from the actual reading of 0.00mm. If the reading is 0.08mm, the zero error is referred to as −0.08mm.
If positive, the error is subtracted from the mean reading the instrument reads. Thus if the instrument reads 4.39 cm and the error is +0.05, the actual length will be 4.39 − 0.05 = 4.34.
If negative, the error is added to the mean reading the instrument reads. Thus if the instrument reads 4.39 cm and as above the error is −0.05 cm, the actual length will be 4.39 + 0.05 = 4.44.
(The negative of the zero error is called the zero correction; it should always be added algebraically to the observed reading to obtain the correct value.)
Zero error (ZE) = ±n × least count (LC)
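A hedged sketch of the correction rule above, reusing the worked numbers from this section:

    def corrected_reading(observed, zero_error):
        """actual reading = observed reading - zero error (applied algebraically)."""
        return round(observed - zero_error, 2)

    print(corrected_reading(4.39, +0.05))  # positive zero error -> 4.34 cm
    print(corrected_reading(4.39, -0.05))  # negative zero error -> 4.44 cm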
Direct and retrograde verniers
Direct verniers are the most common. The indicating scale is constructed so that when its zero point coincides with the start of the data scale, its graduations are at a slightly smaller spacing than those on the data scale, so that none but the last graduation coincides with any graduation on the data scale. N graduations of the indicating scale cover N − 1 graduations of the data scale.
Retrograde verniers are found on some devices, including surveying instruments. A retrograde vernier is similar to the direct vernier, except its graduations are at a slightly larger spacing than on the main scale. N graduations of the indicating scale cover N + 1 graduations of the data scale. The retrograde vernier also extends backwards along the data scale.
Direct and retrograde verniers are read in the same manner.
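The difference between the two constructions reduces to the graduation spacing; a small illustrative comparison (values assumed, not from the text):

    def vernier_division_width(S, N, retrograde=False):
        # direct verniers span N - 1 main divisions; retrograde span N + 1
        covered = N + 1 if retrograde else N - 1
        return covered * S / N

    print(vernier_division_width(1.0, 10))                   # direct: 0.9 units
    print(vernier_division_width(1.0, 10, retrograde=True))  # retrograde: 1.1 units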
Recent uses
This section includes references to techniques which use the Vernier principle to make fine-resolution measurements.
Vernier spectroscopy is a type of cavity-enhanced laser absorption spectroscopy that is especially sensitive to trace gases. The method uses a frequency-comb laser combined with a high-finesse optical cavity to produce an absorption spectrum in a highly parallel manner. The method is also capable of detecting trace gases in very low concentration due to the enhancement effect of the optical resonator on the effective optical path length.
| Technology | Measuring instruments | null |
118396 | https://en.wikipedia.org/wiki/Band%20gap | Band gap | In solid-state physics and solid-state chemistry, a band gap, also called a bandgap or energy gap, is an energy range in a solid where no electronic states exist. In graphs of the electronic band structure of solids, the band gap refers to the energy difference (often expressed in electronvolts) between the top of the valence band and the bottom of the conduction band in insulators and semiconductors. It is the energy required to promote an electron from the valence band to the conduction band. The resulting conduction-band electron (and the electron hole in the valence band) are free to move within the crystal lattice and serve as charge carriers to conduct electric current. It is closely related to the HOMO/LUMO gap in chemistry. If the valence band is completely full and the conduction band is completely empty, then electrons cannot move within the solid because there are no available states. If the electrons are not free to move within the crystal lattice, then there is no generated current due to no net charge carrier mobility. However, if some electrons transfer from the valence band (mostly full) to the conduction band (mostly empty), then current can flow (see carrier generation and recombination). Therefore, the band gap is a major factor determining the electrical conductivity of a solid. Substances having large band gaps (also called "wide" band gaps) are generally insulators, those with small band gaps (also called "narrow" band gaps) are semiconductors, and conductors either have very small band gaps or none, because the valence and conduction bands overlap to form a continuous band.
It is possible to produce laser-induced insulator-metal transitions, which have already been experimentally observed in some condensed-matter systems, such as certain thin films, doped manganites, and vanadium sesquioxide (V2O3). These are special cases of the more general metal-to-nonmetal transition phenomena, which have been intensively studied in recent decades. A one-dimensional analytic model of laser-induced distortion of the band structure was presented for a spatially periodic (cosine) potential. This problem is periodic both in space and time and can be solved analytically using the Kramers-Henneberger co-moving frame. The solutions can be given with the help of the Mathieu functions.
In semiconductor physics
Every solid has its own characteristic energy-band structure. This variation in band structure is responsible for the wide range of electrical characteristics observed in various materials.
Band structure and spectroscopy vary with the dimensionality of the system; one-dimensional, two-dimensional, and three-dimensional materials behave differently.
In semiconductors and insulators, electrons are confined to a number of bands of energy, and forbidden from other regions because there are no allowable electronic states for them to occupy. The term "band gap" refers to the energy difference between the top of the valence band and the bottom of the conduction band. Electrons are able to jump from one band to another. However, in order for a valence band electron to be promoted to the conduction band, it requires a specific minimum amount of energy for the transition. This required energy is an intrinsic characteristic of the solid material. Electrons can gain enough energy to jump to the conduction band by absorbing either a phonon (heat) or a photon (light).
A semiconductor is a material with an intermediate-sized, non-zero band gap that behaves as an insulator at T=0K, but allows thermal excitation of electrons into its conduction band at temperatures that are below its melting point. In contrast, a material with a large band gap is an insulator. In conductors, the valence and conduction bands may overlap, so there is no longer a bandgap with forbidden regions of electronic states.
The conductivity of intrinsic semiconductors is strongly dependent on the band gap. The only available charge carriers for conduction are the electrons that have enough thermal energy to be excited across the band gap and the electron holes that are left behind when such an excitation occurs.
Band-gap engineering is the process of controlling or altering the band gap of a material by controlling the composition of certain semiconductor alloys, such as GaAlAs, InGaAs, and InAlAs. It is also possible to construct layered materials with alternating compositions by techniques like molecular-beam epitaxy. These methods are exploited in the design of heterojunction bipolar transistors (HBTs), laser diodes and solar cells.
The distinction between semiconductors and insulators is a matter of convention. One approach is to think of semiconductors as a type of insulator with a narrow band gap. Insulators with a larger band gap, usually greater than 4 eV, are not considered semiconductors and generally do not exhibit semiconductive behaviour under practical conditions. Electron mobility also plays a role in determining a material's informal classification.
The band-gap energy of semiconductors tends to decrease with increasing temperature. When temperature increases, the amplitude of atomic vibrations increase, leading to larger interatomic spacing. The interaction between the lattice phonons and the free electrons and holes will also affect the band gap to a smaller extent. The relationship between band gap energy and temperature can be described by Varshni's empirical expression (named after Y. P. Varshni),
Eg(T) = Eg(0) − αT²/(T + β), where Eg(0), α and β are material constants.
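For example, here is a minimal sketch evaluating Varshni's expression with commonly quoted parameters for silicon (Eg(0) ≈ 1.17 eV, α ≈ 4.73e-4 eV/K, β ≈ 636 K); these constants are illustrative assumptions, not values taken from this article:

    def varshni_gap(T, Eg0=1.17, alpha=4.73e-4, beta=636.0):
        """Band gap in eV at absolute temperature T (in kelvin)."""
        return Eg0 - alpha * T**2 / (T + beta)

    print(round(varshni_gap(300), 2))  # -> about 1.12 eV at room temperature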
Furthermore, lattice vibrations increase with temperature, which increases the effect of electron scattering. Additionally, the number of charge carriers within a semiconductor will increase, as more carriers have the energy required to cross the band-gap threshold and so conductivity of semiconductors also increases with increasing temperature. The external pressure also influences the electronic structure of semiconductors and, therefore, their optical band gaps.
In a regular semiconductor crystal, the band gap is fixed owing to continuous energy states. In a quantum dot crystal, the band gap is size dependent and can be altered to produce a range of energies between the valence band and conduction band. This is known as the quantum confinement effect.
Band gaps can be either direct or indirect, depending on the electronic band structure of the material.
As mentioned earlier, band structure and spectroscopy depend on dimensionality. The optical properties of one-dimensional non-metallic solids depend on the electronic transitions between the valence and conduction bands. The spectroscopic transition probability between an initial orbital φi and a final orbital φf depends on the transition integral ∫ φf* ûε φi dτ, where ûε is the component of the dipole moment operator along the electric field vector ε.
The band structure of two-dimensional solids likewise arises from the overlap of atomic orbitals. The simplest two-dimensional crystal contains identical atoms arranged on a square lattice. In the one-dimensional case, a weak periodic potential produces energy splitting at the Brillouin-zone edge, opening a gap between bands. This behaviour need not occur in the two-dimensional case, because there are additional degrees of freedom of motion; a band gap can nevertheless be produced by a strong periodic potential in two- and three-dimensional cases.
Direct and indirect band gap
Based on their band structure, materials are characterised as having either a direct band gap or an indirect band gap. In the free-electron model, k is the momentum of a free electron and assumes unique values within the Brillouin zone that outlines the periodicity of the crystal lattice. If the momentum of the lowest-energy state in the conduction band and the highest-energy state of the valence band of a material have the same value, then the material has a direct band gap. If they are not the same, then the material has an indirect band gap and the electronic transition must undergo momentum transfer to satisfy conservation. Such indirect "forbidden" transitions still occur, though with very low probability and lower intensity. For materials with a direct band gap, valence electrons can be directly excited into the conduction band by a photon whose energy is larger than the band gap. In contrast, for materials with an indirect band gap, a photon and a phonon must both be involved in a transition from the valence-band top to the conduction-band bottom, involving a momentum change. Therefore, direct-bandgap materials tend to have stronger light emission and absorption properties and tend to be better suited for photovoltaics (PVs), light-emitting diodes (LEDs), and laser diodes; however, indirect-bandgap materials are frequently used in PVs and LEDs when the materials have other favorable properties.
Light-emitting diodes and laser diodes
LEDs and laser diodes usually emit photons with energy close to and slightly larger than the band gap of the semiconductor material from which they are made. Therefore, as the band gap energy increases, the LED or laser color changes from infrared to red, through the rainbow to violet, then to UV.
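The correspondence between gap energy and color follows from λ = hc/Eg, often approximated as λ(nm) ≈ 1240/Eg(eV). A quick sketch with textbook-style example values (assumptions, not data from this article):

    def gap_to_wavelength_nm(Eg_eV):
        return 1240.0 / Eg_eV  # h*c ≈ 1240 eV·nm

    for label, Eg in [("infrared emitter", 0.9), ("red LED", 1.9), ("UV LED", 3.6)]:
        print(label, round(gap_to_wavelength_nm(Eg)), "nm")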
Photovoltaic cells
The optical band gap (see below) determines what portion of the solar spectrum a photovoltaic cell absorbs. Strictly, a semiconductor will not absorb photons with energy less than the band gap, while for photons with energies exceeding the band gap, most of the excess energy is dissipated as heat. Neither contributes to the efficiency of a solar cell. One way to circumvent this problem is based on the so-called photon management concept, in which the solar spectrum is modified to match the absorption profile of the solar cell.
List of band gaps
Below are band gap values for some selected materials. For a comprehensive list of band gaps in semiconductors, see List of semiconductor materials.
Optical versus electronic bandgap
In materials with a large exciton binding energy, it is possible for a photon to have just barely enough energy to create an exciton (bound electron–hole pair), but not enough energy to separate the electron and hole (which are electrically attracted to each other). In this situation, there is a distinction between "optical band gap" and "electronic band gap" (or "transport gap"). The optical bandgap is the threshold for photons to be absorbed, while the transport gap is the threshold for creating an electron–hole pair that is not bound together. The optical bandgap is at lower energy than the transport gap.
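As a sketch of the relation just described (the numbers are made-up illustrations, not from the article), the optical gap sits below the transport gap by the exciton binding energy:

    def optical_gap(transport_gap_eV, exciton_binding_eV):
        return transport_gap_eV - exciton_binding_eV

    print(round(optical_gap(2.4, 0.3), 2))  # e.g. a hypothetical organic semiconductor: 2.1 eV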
In almost all inorganic semiconductors, such as silicon, gallium arsenide, etc., there is very little interaction between electrons and holes (very small exciton binding energy), and therefore the optical and electronic bandgap are essentially identical, and the distinction between them is ignored. However, in some systems, including organic semiconductors and single-walled carbon nanotubes, the distinction may be significant.
Band gaps for other quasi-particles
In photonics, band gaps or stop bands are ranges of photon frequencies where, if tunneling effects are neglected, no photons can be transmitted through a material. A material exhibiting this behaviour is known as a photonic crystal. The concept of hyperuniformity has broadened the range of photonic band gap materials beyond photonic crystals. By applying techniques from supersymmetric quantum mechanics, a new class of optical disordered materials has been suggested, which support band gaps perfectly equivalent to those of crystals or quasicrystals.
Similar physics applies to phonons in a phononic crystal.
Materials
Aluminium gallium arsenide
Boron nitride
Indium gallium arsenide
Indium arsenide
Gallium arsenide
Gallium nitride
Germanium
Metallic hydrogen
List of electronics topics
Electronics
Bandgap voltage reference
Condensed matter physics
Direct and indirect bandgaps
Electrical conduction
Electron hole
Field-effect transistor
Light-emitting diode
Photodiode
Photoresistor
Photovoltaics
Solar cell
Solid state physics
Semiconductor
Semiconductor devices
Strongly correlated material
Valence band
| Physical sciences | Basics_2 | Physics |
118450 | https://en.wikipedia.org/wiki/Innovation | Innovation | Innovation is the practical implementation of ideas that result in the introduction of new goods or services or improvement in offering goods or services. ISO TC 279 in the standard ISO 56000:2020 defines innovation as "a new or changed entity, realizing or redistributing value". Others have different definitions; a common element in the definitions is a focus on newness, improvement, and spread of ideas or technologies.
Innovation often takes place through the development of more-effective products, processes, services, technologies, art works
or business models that innovators make available to markets, governments and society.
Innovation is related to, but not the same as, invention: innovation is more apt to involve the practical implementation of an invention (i.e. new / improved ability) to make a meaningful impact in a market or society, and not all innovations require a new invention.
Technical innovation often manifests itself via the engineering process when the problem being solved is of a technical or scientific nature. The opposite of innovation is exnovation.
Definition
Surveys of the literature on innovation have found a variety of definitions. In 2009, Baregheh et al. found around 60 definitions in different scientific papers, while a 2014 survey found over 40. Based on their survey, Baregheh et al. attempted to formulate a multidisciplinary definition and arrived at the following: "Innovation is the multi-stage process whereby organizations transform ideas into new/improved products, service or processes, in order to advance, compete and differentiate themselves successfully in their marketplace."
In a study of how the software industry considers innovation, the following definition given by Crossan and Apaydin was considered to be the most complete. Crossan and Apaydin built on the definition given in the Organisation for Economic Co-operation and Development (OECD) Oslo Manual:
American sociologist Everett Rogers defined it as follows: "An idea, practice, or object that is perceived as new by an individual or other unit of adoption."
According to Alan Altshuler and Robert D. Behn, innovation includes original invention and creative use. These writers define innovation as generation, admission and realization of new ideas, products, services and processes.
Two main dimensions of innovation are degree of novelty (i.e. whether an innovation is new to the firm, new to the market, new to the industry, or new to the world) and kind of innovation (i.e. whether it is process or product-service system innovation). Organizational researchers have also distinguished innovation separately from creativity, by providing an updated definition of these two related constructs:
Peter Drucker wrote:
Creativity and innovation
In general, innovation is distinguished from creativity by its emphasis on the implementation of creative ideas in an economic setting. Amabile and Pratt in 2016, drawing on the literature, distinguish between creativity ("the production of novel and useful ideas by an individual or small group of individuals working together") and innovation ("the successful implementation of creative ideas within an organization").
Economics and innovation
In 1957 the economist Robert Solow was able to demonstrate that economic growth had two components. The first component could be attributed to growth in production inputs, namely labour and capital. The second, residual component was attributed to productivity. Ever since, economic historians have tried to explain the process of innovation itself, rather than assuming that technological inventions and technological progress simply result in productivity growth.
The concept of innovation emerged after the Second World War, mostly thanks to the works of Joseph Schumpeter (1883–1950), who described the economic effects of innovation processes as creative destruction. Today, neo-Schumpeterian scholars do not see innovation as a neutral or apolitical process; rather, innovation can be seen as a socially constructed process. Its conception therefore depends on the political and societal context in which innovation is taking place. According to Shannon Walsh, "innovation today is best understood as innovation under capital" (p. 346). This means that the current hegemonic purpose for innovation is capital valorisation and profit maximization, exemplified by the appropriation of knowledge (e.g., through patenting), the widespread practice of planned obsolescence (including lack of repairability by design), and the Jevons paradox, which describes negative consequences of eco-efficiency as energy-reducing effects tend to trigger mechanisms leading to energy-increasing effects.
Types
Several frameworks have been proposed for defining types of innovation.
Sustaining vs disruptive innovation
One framework proposed by Clayton Christensen draws a distinction between sustaining and disruptive innovations. Sustaining innovation is the improvement of a product or service based on the known needs of current customers (e.g. faster microprocessors, flat screen televisions). Disruptive innovation in contrast refers to a process by which a new product or service creates a new market (e.g. transistor radio, free crowdsourced encyclopedia, etc.), eventually displacing established competitors. According to Christensen, disruptive innovations are critical to long-term success in business.
Disruptive innovation is often enabled by disruptive technology. Marco Iansiti and Karim R. Lakhani define foundational technology as having the potential to create new foundations for global technology systems over the longer term. Foundational technology tends to transform business operating models as entirely new business models emerge over many years, with gradual and steady adoption of the innovation leading to waves of technological and institutional change that gain momentum more slowly. The advent of the packet-switched communication protocol TCP/IP—originally introduced in 1972 to support a single use case for United States Department of Defense electronic communication (email), and which gained widespread adoption only in the mid-1990s with the advent of the World Wide Web—is a foundational technology.
Four types of innovation model
Another framework was suggested by Henderson and Clark. They divide innovation into four types:
Radical innovation: "establishes a new dominant design and, hence, a new set of core design concepts embodied in components that are linked together in a new architecture." (p. 11)
Incremental innovation: "refines and extends an established design. Improvement occurs in individual components, but the underlying core design concepts, and the links between them, remain the same." (p. 11)
Architectural innovation: "innovation that changes only the relationships between them [the core design concepts]" (p. 12)
Modular Innovation: "innovation that changes only the core design concepts of a technology" (p. 12)
While Henderson and Clark as well as Christensen talk about technical innovation there are other kinds of innovation as well, such as service innovation and organizational innovation.
Non-economic innovation
As distinct from business-centric views of innovation concentrating on generating profit for a firm, other types of innovation include: social innovation, religious innovation,
sustainable innovation (or green innovation),
and responsible innovation.
Open innovation
One type of innovation that has been the focus of recent literature is open innovation or "crowd sourcing." Open innovation refers to the use of individuals outside of an organizational context who have no expertise in a given area to solve complex problems.
User innovation
Similar to open innovation, user innovation is when companies rely on users of their goods and services to come up with, help to develop, and even help to implement new ideas.
History
Innovation must be understood in the historical setting in which its processes were and are taking place. The first full-length discussion about innovation was published by the Greek philosopher and historian Xenophon (430–355 BCE). He viewed the concept as multifaceted and connected it to political action. The word for innovation that he uses, kainotomia, had previously occurred in two plays by Aristophanes. Plato discussed innovation in his Laws dialogue and was not very fond of the concept. He was skeptical of it both in culture (dancing and art) and in education (he did not believe in introducing new games and toys to children). Aristotle (384–322 BCE) did not like organizational innovations: he believed that all possible forms of organization had been discovered.
Before the 4th century in Rome, the words novitas and res nova / nova res were used with either negative or positive judgment of the innovator. The concept meant "renewing" and was incorporated into the Latin verb innovo ("I renew" or "I restore") in the centuries that followed. The Vulgate version of the Bible (late 4th century CE) used the word in spiritual as well as political contexts. It also appeared in poetry, mainly with spiritual connotations, but was also connected to political, material and cultural aspects.
Machiavelli's The Prince (1513) discusses innovation in a political setting. Machiavelli portrays it as a strategy a prince may employ in order to cope with a constantly changing world as well as the corruption within it. Here innovation is described as introducing change in government (new laws and institutions); Machiavelli's later book The Discourses (1528) characterises innovation as imitation, as a return to the original that has been corrupted by people and by time. Thus for Machiavelli innovation came with positive connotations. This is, however, an exception in the usage of the concept from the 16th century onward. No innovator from the Renaissance until the late 19th century ever thought of applying the word innovator to themselves; it was a word used to attack enemies.
From the 1400s through the 1600s, the concept of innovation was pejorative – the term was an early-modern synonym for "rebellion", "revolt" and "heresy". In the 1800s people promoting capitalism saw socialism as an innovation and spent a lot of energy working against it. For instance, Goldwin Smith (1823-1910) saw the spread of social innovations as an attack on money and banks. These social innovations were socialism, communism, nationalization, cooperative associations.
In the 20th century, the concept of innovation did not become popular until after the Second World War of 1939–1945. This is the point in time when people started to talk about technological product innovation and tie it to the idea of economic growth and competitive advantage. Joseph Schumpeter (1883–1950), who contributed greatly to the study of innovation economics, is seen as the one who made the term popular. Schumpeter argued that industries must incessantly revolutionize the economic structure from within, that is: innovate with better or more effective processes and products, as well as with market distribution (such as the transition from the craft shop to factory). He famously asserted that "creative destruction is the essential fact about capitalism". In business and in economics, innovation can provide a catalyst for growth when entrepreneurs continuously search for better ways to satisfy their consumer base with improved quality, durability, service and price - searches which may come to fruition in innovation with advanced technologies and organizational strategies. Schumpeter's findings coincided with rapid advances in transportation and communications in the beginning of the 20th century, which had huge impacts for the economic concepts of factor endowments and comparative advantage as new combinations of resources or production techniques constantly transform markets to satisfy consumer needs. Hence, innovative behaviour becomes relevant for economic success.
Process of innovation
An early model included only three phases of innovation. According to Utterback (1971), these phases were: 1) idea generation, 2) problem solving, and 3) implementation. By the time one completed phase 2, one had an invention, but until one got it to the point of having an economic impact, one did not have an innovation. Diffusion was not considered a phase of innovation. Focus at this point in time was on manufacturing.
A prime example of innovation involved the boom of Silicon Valley start-ups out of the Stanford Industrial Park. In 1957, dissatisfied employees of Shockley Semiconductor, the company of Nobel laureate William Shockley, co-inventor of the transistor, left to form an independent firm, Fairchild Semiconductor. After several years, Fairchild developed into a formidable presence in the sector. Eventually, these founders left to start their own companies based on their own unique ideas, and then leading employees started their own firms. Over the next 20 years this process resulted in the momentous startup-company explosion of information-technology firms. Silicon Valley began as 65 new enterprises born out of Shockley's eight former employees.
All organizations can innovate, including, for example, hospitals, universities, and local governments. The organization requires a proper structure in order to retain competitive advantage. Organizations can also improve profits and performance by providing work groups with opportunities and resources to innovate, in addition to employees' core job tasks. Executives and managers have been advised to break away from traditional ways of thinking and use change to their advantage. The world of work is changing with the increased use of technology, and companies are becoming increasingly competitive. Companies will have to downsize or reengineer their operations to remain competitive. This will affect employment, as businesses will be forced to reduce the number of people employed while accomplishing the same amount of work, if not more.
For instance, former Mayor Martin O'Malley pushed the City of Baltimore to use CitiStat, a performance-measurement data and management system that allows city officials to maintain statistics on several areas from crime trends to the conditions of potholes. This system aided in better evaluation of policies and procedures with accountability and efficiency in terms of time and money. In its first year, CitiStat saved the city $13.2 million. Even mass transit systems have innovated with hybrid bus fleets to real-time tracking at bus stands. In addition, the growing use of mobile data terminals in vehicles, that serve as communication hubs between vehicles and a control center, automatically send data on location, passenger counts, engine performance, mileage and other information. This tool helps to deliver and manage transportation systems.
Still other innovative strategies include hospitals digitizing medical information in electronic medical records. For example, the U.S. Department of Housing and Urban Development's HOPE VI initiatives turned severely distressed public housing in urban areas into revitalized, mixed-income environments; the Harlem Children's Zone used a community-based approach to educate local area children; and the Environmental Protection Agency's brownfield grants facilitates turning over brownfields for environmental protection, green spaces, community and commercial development.
Sources of innovation
Innovation may occur due to effort from a range of different agents, by chance, or as a result of a major system failure. According to Peter F. Drucker, the general sources of innovations are changes in industry structure, in market structure, in local and global demographics, in human perception, in the amount of available scientific knowledge, etc.
In the simplest linear model of innovation the traditionally recognized source is manufacturer innovation. This is where a person or business innovates in order to sell the innovation.
Another source of innovation is end-user innovation. This is where a person or company develops an innovation for their own (personal or in-house) use because existing products do not meet their needs. MIT economist Eric von Hippel identified end-user innovation as the most important source in his classic book on the subject, "The Sources of Innovation".
The robotics engineer Joseph F. Engelberger asserts that innovations require only three things:
a recognized need
competent people with relevant technology
financial support
The Kline chain-linked model of innovation places emphasis on potential market needs as drivers of the innovation process, and describes the complex and often iterative feedback loops between marketing, design, manufacturing, and R&D.
In the 21st century the Islamic State (IS) movement, while decrying religious innovations, has innovated in military tactics, recruitment, ideology and geopolitical activity.
Facilitating innovation
Innovation by businesses is achieved in many ways, with much attention now given to formal research and development (R&D) for "breakthrough innovations". R&D helps spur on patents and other scientific innovations that lead to productive growth in such areas as industry, medicine, engineering, and government. Yet, innovations can be developed by less formal on-the-job modifications of practice, through exchange and combination of professional experience, and by many other routes. Investigation of the relationship between the concepts of innovation and technology transfer has revealed overlap. The more radical and revolutionary innovations tend to emerge from R&D, while more incremental innovations may emerge from practice, but there are many exceptions to each of these trends.
Information technology and changing business processes and management style can produce a work climate favorable to innovation. For example, the software tool company Atlassian conducts quarterly "ShipIt Days" in which employees may work on anything related to the company's products. Google employees work on self-directed projects for 20% of their time (known as Innovation Time Off). Both companies cite these bottom-up processes as major sources for new products and features.
An important innovation factor is customers buying products or using services. As a result, organizations may incorporate users in focus groups (user-centred approach), work closely with so-called lead users (lead-user approach), or let users adapt their products themselves. The lead-user method focuses on idea generation based on leading users to develop breakthrough innovations. U-STIR, a project to innovate Europe's surface transportation system, employs such workshops. Regarding this user innovation, a great deal of innovation is done by those actually implementing and using technologies and products as part of their normal activities. Sometimes user-innovators may become entrepreneurs selling their product; they may choose to trade their innovation in exchange for other innovations; or their innovations may be adopted by their suppliers. Nowadays, they may also choose to freely reveal their innovations, using methods like open source. In such networks of innovation the users or communities of users can further develop technologies and reinvent their social meaning.
One technique for innovating a solution to an identified problem is to actually attempt an experiment with many possible solutions. This technique was famously used by Thomas Edison's laboratory to find a version of the incandescent light bulb economically viable for home use, which involved searching through thousands of possible filament designs before settling on carbonized bamboo.
This technique is sometimes used in pharmaceutical drug discovery. Thousands of chemical compounds are subjected to high-throughput screening to see if they have any activity against a target molecule which has been identified as biologically significant to a disease. Promising compounds can then be studied; modified to improve efficacy and reduce side effects, evaluated for cost of manufacture; and if successful turned into treatments.
The related technique of A/B testing is often used to help optimize the design of web sites and mobile apps. This is used by major sites such as amazon.com, Facebook, Google, and Netflix. Procter & Gamble uses computer-simulated products and online user panels to conduct larger numbers of experiments to guide the design, packaging, and shelf placement of consumer products. Capital One uses this technique to drive credit card marketing offers.
Goals and failures of innovation
Scholars have argued that the main purpose for innovation today is profit maximization and capital valorisation. Consequently, programs of organizational innovation are typically tightly linked to organizational goals and growth objectives, to the business plan, and to market competitive positioning. Davila et al. (2006) note, "Companies cannot grow through cost reduction and reengineering alone... Innovation is the key element in providing aggressive top-line growth, and for increasing bottom-line results". One survey across a large number of manufacturing and services organizations found that systematic programs of organizational innovation are most frequently driven by: improved quality, creation of new markets, extension of the product range, reduced labor costs, improved production processes, reduced materials cost, reduced environmental damage, replacement of products/services, reduced energy consumption, and conformance to regulations.
Different goals are appropriate for different products, processes, and services. According to Andrea Vaona and Mario Pianta, some example goals of innovation could stem from two different types of technological strategies: technological competitiveness and active price competitiveness. Technological competitiveness may have a tendency to be pursued by smaller firms and can be characterized as "efforts for market-oriented innovation, such as a strategy of market expansion and patenting activity." On the other hand, active price competitiveness is geared toward process innovations that lead to efficiency and flexibility, which tend to be pursued by large, established firms as they seek to expand their market foothold. Whether innovation goals are successfully achieved or otherwise depends greatly on the environment prevailing in the organization.
Organization-internal innovation failures
Failure of organizational innovation programs has been widely researched and the causes vary considerably. Some causes are external to the organization and outside its influence of control. Others are internal and ultimately within the control of the organization. Internal causes of failure can be divided into causes associated with the cultural infrastructure and causes associated with the innovation process itself. David O'Sullivan wrote that causes of failure within the innovation process in most organizations can be distilled into five types: poor goal definition, poor alignment of actions to goals, poor participation in teams, poor monitoring of results, and poor communication and access to information.
Environmental and social innovation failures
Innovation is generally framed as an inherently positive force, delivering growth and prosperity for all, and is often deemed as both inevitable and unstoppable. In this sense, future innovations are often hailed as solutions to current problems, such as climate change. This business-as-usual approach would mean continued and increased globalization as well as quick innovation cycles which supposedly will maximize the competitiveness of processes, in the end leading to Eco-economic decoupling or Green growth. Yet, it is unclear whether innovative solutions will be capable of solving the climate crisis: According to Mario Giampietro and Silvio Funtowicz (2020), this positive framing of innovation "demonstrates [a] lack of understanding of the biophysical roots of the economic process and the seriousness of the sustainability crisis". This is due to the fact that innovation can be understood in its specific historic and cultural context: The prevailing hegemonic view on innovation, as emphasized by Ben Robra et al. (2023), aligns closely with capitalist mode of production, shown by the mantra of 'innovate or die.' From this viewpoint, innovation is primarily driven by the imperative of capital accumulation, serving the sole purpose of increasing returns, neglecting societal needs such as a clean environment or social equality and in general the biophysical limits of our planet.
Diffusion
Research on the diffusion of innovations was begun in 1903 by the seminal researcher Gabriel Tarde, who first plotted the S-shaped diffusion curve. Tarde defined the innovation-decision process as a series of steps that include:
knowledge
forming an attitude
a decision to adopt or reject
implementation and use
confirmation of the decision
Once innovation occurs, innovations may spread from the innovator to other individuals and groups. It has been proposed that the lifecycle of innovations can be described using the 's-curve' or diffusion curve. The s-curve maps growth of revenue or productivity against time. In the early stage of a particular innovation, growth is relatively slow as the new product establishes itself. At some point, customers begin to demand the product and its growth increases more rapidly. New incremental innovations or changes to the product allow growth to continue. Towards the end of its lifecycle, growth slows and may even begin to decline. In the later stages, no amount of new investment in that product will yield a normal rate of return.
The s-curve derives from an assumption that new products are likely to have "product life" – i.e., a start-up phase, a rapid increase in revenue and eventual decline. In fact, the great majority of innovations never get off the bottom of the curve, and never produce normal returns.
Innovative companies will typically be working on new innovations that will eventually replace older ones. Successive s-curves will come along to replace older ones and continue to drive growth upwards. When two such curves are plotted together, the first shows a current technology, while the second shows an emerging technology that currently yields lower growth but will eventually overtake the current technology and lead to even greater levels of growth. The length of life will depend on many factors.
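As a minimal sketch of the s-curve idea (the parameters are arbitrary illustrations, not data from the article), cumulative adoption is often modelled with a logistic function:

    import math

    def s_curve(t, ceiling=100.0, midpoint=5.0, steepness=1.0):
        """Cumulative adoption at time t: slow start, rapid middle, saturation."""
        return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

    for year in range(0, 11, 2):
        print(year, round(s_curve(year), 1))  # 0.7, 4.7, 26.9, 73.1, 95.3, 99.3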
Measuring innovation
Measuring innovation is inherently difficult, as it implies commensurability so that comparisons can be made in quantitative terms. Innovation, however, is by definition novelty, so comparisons are often meaningless across products or services. Nevertheless, Edison et al. in their review of literature on innovation management found 232 innovation metrics. They categorized these measures along five dimensions: inputs to the innovation process, output from the innovation process, the effect of the innovation output, measures to assess the activities in an innovation process, and the availability of factors that facilitate such a process.
There are two different types of measures for innovation: the organizational level and the political level.
Organizational-level
The measure of innovation at the organizational level relates to individuals, team-level assessments, and private companies, from the smallest to the largest. Measurement of innovation in organizations can be conducted through surveys, workshops, consultants, or internal benchmarking. There is today no established general way to measure organizational innovation. Corporate measurements are generally structured around balanced scorecards which cover several aspects of innovation, such as business measures related to finances, innovation-process efficiency, employees' contribution and motivation, as well as benefits for customers. Measured values vary widely between businesses, covering, for example, new-product revenue, spending on R&D, time to market, customer and employee perception and satisfaction, number of patents, and additional sales resulting from past innovations.
Political-level
At the political level, measures of innovation focus more on a country's or region's competitive advantage through innovation. In this context, organizational capabilities can be evaluated through various evaluation frameworks, such as those of the European Foundation for Quality Management. The OECD Oslo Manual (1992) suggests standard guidelines on measuring technological product and process innovation. Some people consider the Oslo Manual complementary to the Frascati Manual from 1963. The new Oslo Manual from 2018 takes a wider perspective on innovation, and includes marketing and organizational innovation. These standards are used, for example, in the European Community Innovation Surveys.
Other ways of measuring innovation have traditionally been expenditure, for example, investment in R&D (Research and Development) as percentage of GNP (Gross National Product). Whether this is a good measurement of innovation has been widely discussed and the Oslo Manual has incorporated some of the critique against earlier methods of measuring. The traditional methods of measuring still inform many policy decisions. The EU Lisbon Strategy has set as a goal that their average expenditure on R&D should be 3% of GDP.
Indicators
Many scholars claim that there is a great bias towards the "science and technology mode" (S&T-mode or STI-mode), while the "learning by doing, using and interacting mode" (DUI-mode) is ignored, and measurements and research about it are rarely done. For example, an institution may be high-tech with the latest equipment, but lack the crucial doing, using and interacting tasks that are important for innovation.
A common industry view (unsupported by empirical evidence) is that comparative cost-effectiveness research is a form of price control which reduces returns to industry, and thus limits R&D expenditure, stifles future innovation and compromises new products access to markets.
Some academics claim cost-effectiveness research is a valuable value-based measure of innovation which accords "truly significant" therapeutic advances (i.e. providing "health gain") higher prices than free market mechanisms. Such value-based pricing has been viewed as a means of indicating to industry the type of innovation that should be rewarded from the public purse.
An Australian academic developed the case that national comparative cost-effectiveness analysis systems should be viewed as measuring "health innovation" as an evidence-based policy concept, valuing innovation differently from competitive markets (a method which requires strong anti-trust laws to be effective), on the basis that both methods of assessing pharmaceutical innovations are mentioned in annex 2C.1 of the Australia-United States Free Trade Agreement.
Indices
Several indices attempt to measure innovation and rank entities based on these measures, such as:
Bloomberg Innovation Index
"Bogota Manual" similar to the Oslo Manual, is focused on Latin America and the Caribbean countries.
"Creative Class" developed by Richard Florida
EIU Innovation Ranking
Global Competitiveness Report
Global Innovation Index (GII), by INSEAD
Information Technology and Innovation Foundation (ITIF) Index
Innovation 360 – From the World Bank. Aggregates innovation indicators (and more) from a number of different public sources
Innovation Capacity Index (ICI) published by a large number of international professors working in a collaborative fashion. The top scorers of ICI 2009–2010 were: 1. Sweden 82.2; 2. Finland 77.8; and 3. United States 77.5
Innovation Index, developed by the Indiana Business Research Center, to measure innovation capacity at the county or regional level in the United States
Innovation Union Scoreboard, developed by the European Union
innovationsindikator for Germany, developed by the Federation of German Industries (Bundesverband der Deutschen Industrie) in 2005
INSEAD Innovation Efficacy Index
International Innovation Index, produced jointly by The Boston Consulting Group, the National Association of Manufacturers (NAM) and its nonpartisan research affiliate The Manufacturing Institute, is a worldwide index measuring the level of innovation in a country; NAM describes it as the "largest and most comprehensive global index of its kind"
Management Innovation Index – Model for Managing Intangibility of Organizational Creativity: Management Innovation Index
NYCEDC Innovation Index, by the New York City Economic Development Corporation, tracks New York City's "transformation into a center for high-tech innovation. It measures innovation in the City's growing science and technology industries and is designed to capture the effect of innovation on the City's economy"
OECD Oslo Manual is focused on North America, Europe, and other rich economies
State Technology and Science Index, developed by the Milken Institute, is a U.S.-wide benchmark to measure the science and technology capabilities that furnish high paying jobs based around key components
World Competitiveness Scoreboard
Rankings
Common areas of focus include: high-tech companies, manufacturing, patents, post-secondary education, research and development, and research personnel. One ranking of the top 10 countries is based on the 2020 Bloomberg Innovation Index. However, studies may vary widely; for example, the Global Innovation Index 2016 ranked Switzerland as number one, while countries like South Korea, Japan, and China did not even make the top ten.
Rate of innovation
In 2005 Jonathan Huebner, a physicist working at the Pentagon's Naval Air Warfare Center, argued on the basis of both U.S. patents and world technological breakthroughs, per capita, that the rate of human technological innovation peaked in 1873 and has been slowing ever since. In his article, he asked "Will the level of technology reach a maximum and then decline as in the Dark Ages?" In later comments to New Scientist magazine, Huebner clarified that while he believed that we will reach a rate of innovation in 2024 equivalent to that of the Dark Ages, he was not predicting the reoccurrence of the Dark Ages themselves.
John Smart criticized the claim and asserted that technological singularity researcher Ray Kurzweil and others had shown a "clear trend of acceleration, not deceleration" when it came to innovations. The foundation replied to Huebner in the journal in which his article was published, citing Second Life and eHarmony as proof of accelerating innovation, to which Huebner replied.
However, Huebner's findings were corroborated in 2010 with U.S. Patent Office data, and again in a 2012 paper.
Innovation and development
The theme of innovation as a tool for disrupting patterns of poverty has gained momentum since the mid-2000s among major international development actors, such as DFID, the Gates Foundation with its Grand Challenges funding model, and USAID's Global Development Lab. Networks have been established to support innovation in development, such as D-Lab at MIT. Investment funds have been established to identify and catalyze innovations in developing countries, such as DFID's Global Innovation Fund, the Human Development Innovation Fund, and (in partnership with USAID) Global Development Innovation Ventures.
The United States must continue to compete on a level playing field with its rivals in federal research; this can be achieved by being strategically innovative through investment in basic research and science.
Government policies
Given its effects on efficiency, quality of life, and productive growth, innovation is a key driver in improving society and economy. Consequently, policymakers have worked to develop environments that will foster innovation, from funding research and development to establishing regulations that do not inhibit innovation, funding the development of innovation clusters, and using public purchasing and standardisation to 'pull' innovation through.
For instance, experts are advocating that the U.S. federal government launch a National Infrastructure Foundation, a nimble, collaborative strategic-intervention organization that would house innovation programs from fragmented silos under one entity, inform federal officials on innovation performance metrics, strengthen industry-university partnerships, and support innovation economic-development initiatives, especially to strengthen regional clusters. Because clusters are the geographic incubators of innovative products and processes, a cluster development grant program would also be targeted for implementation. By focusing on innovation in such areas as precision manufacturing, information technology, and clean energy, other areas of national concern would be tackled, including government debt, carbon footprint, and oil dependence. The U.S. Economic Development Administration recognizes this reality in its continued Regional Innovation Clusters initiative. The United States also has to integrate its supply chain and improve its applied research capability and downstream process innovation.
Many countries recognize the importance of innovation including Japan's Ministry of Education, Culture, Sports, Science and Technology (MEXT); Germany's Federal Ministry of Education and Research; and the Ministry of Science and Technology in the People's Republic of China. Russia's innovation programme is the Medvedev modernisation programme which aims to create a diversified economy based on high technology and innovation. The Government of Western Australia has established a number of innovation incentives for government departments. Landgate was the first Western Australian government agency to establish its Innovation Program.
Some regions have taken a proactive role in supporting innovation. Many regional governments are setting up innovation agencies to strengthen regional capabilities. Business incubators were first introduced in 1959 and subsequently nurtured by governments around the world. Such incubators, located close to knowledge clusters (mostly research-based) such as universities or other government excellence centres, aim primarily to channel generated knowledge into applied innovation outcomes in order to stimulate regional or national economic growth.
In 2009, the municipality of Medellin, Colombia created Ruta N to transform the city into a knowledge city.
Counter-hegemonic views on innovation
Innovation in the prevailing hegemonic view today mostly refers to 'innovation under capital', owing to the prevailing capitalist nature of the global economy. In contrast, Robra et al. (2023) propose a counter-hegemonic view of innovation. This alternative lens questions the centrality of capital accumulation as the primary goal of innovation. Instead of being solely driven by profit motives, a counter-hegemonic understanding sees innovation as a means to create user-value, with a focus on satisfying societal needs. This view of innovation is underpinned by open access to knowledge; adaptability, repairability, and maintenance of products; and eco-sufficiency, which defines progress not by efficiency but by staying within planetary boundaries, thereby challenging the hegemonic belief in limitless growth. This perspective is exemplified by commons-based peer production (CBPP), offering an alternative vision of innovation that prioritizes conviviality over relentless competition. In essence, this counter-hegemonic view describes a more socially and ecologically conscious approach to innovation, striving for a balance between technological progress and human wellbeing.
| Technology | General | null |
118570 | https://en.wikipedia.org/wiki/Magnetar | Magnetar | A magnetar is a type of neutron star with an extremely powerful magnetic field (~10⁹ to 10¹¹ T, ~10¹³ to 10¹⁵ G). The magnetic-field decay powers the emission of high-energy electromagnetic radiation, particularly X-rays and gamma rays.
The existence of magnetars was proposed in 1992 by Robert Duncan and Christopher Thompson. Their proposal sought to explain the properties of transient sources of gamma rays, now known as soft gamma repeaters (SGRs). Over the following decade, the magnetar hypothesis became widely accepted and was extended to explain anomalous X-ray pulsars (AXPs). To date, 24 magnetars have been confirmed.
It has been suggested that magnetars are the source of fast radio bursts (FRB), in particular as a result of findings in 2020 by scientists using the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope.
Description
Like other neutron stars, magnetars are around 20 km in diameter and have a mass of about 1.4 solar masses. They are formed by the collapse of a star with a mass 10–25 times that of the Sun. The density of the interior of a magnetar is such that a tablespoon of its substance would have a mass of over 100 million tons. Magnetars are differentiated from other neutron stars by having even stronger magnetic fields and by rotating more slowly in comparison. Most observed magnetars rotate once every two to ten seconds, whereas typical neutron stars, observed as radio pulsars, rotate one to ten times per second. A magnetar's magnetic field gives rise to very strong and characteristic bursts of X-rays and gamma rays. The active life of a magnetar is short compared to other celestial bodies: its strong magnetic field decays after about 10,000 years, after which activity and strong X-ray emission cease. Given the number of magnetars observable today, one estimate puts the number of inactive magnetars in the Milky Way at 30 million or more.
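The tablespoon figure can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only: it assumes the mass and diameter quoted above, plus a 15 mL tablespoon volume, which is our own assumption.

```python
import math

# Illustrative check of the "tablespoon" claim, using the figures
# quoted in the text: a 1.4-solar-mass star about 20 km across.
M_SUN = 1.989e30            # kg
mass = 1.4 * M_SUN          # kg
radius = 10e3               # m (half of a ~20 km diameter)

volume = (4.0 / 3.0) * math.pi * radius**3   # m^3
density = mass / volume                      # kg/m^3, ~7e17

tablespoon = 15e-6                           # m^3 (15 mL, assumed)
mass_tons = density * tablespoon / 1000.0    # metric tons
print(f"mean density:    {density:.2e} kg/m^3")
print(f"tablespoon mass: {mass_tons:.2e} t")  # ~1e10 t, well over 1e8
```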
Starquakes triggered on the surface of the magnetar disturb the magnetic field which encompasses it, often leading to extremely powerful gamma-ray flare emissions which have been recorded on Earth in 1979, 1998 and 2004.
Magnetic field
Magnetars are characterized by their extremely powerful magnetic fields of ~10⁹ to 10¹¹ T. These magnetic fields are a hundred million times stronger than any man-made magnet, and about a trillion times more powerful than the field surrounding Earth. Earth has a geomagnetic field of 30–60 microteslas, and a neodymium-based, rare-earth magnet has a field of about 1.25 tesla, with a magnetic energy density of 4.0 × 10⁵ J/m³. A magnetar's 10¹⁰ tesla field, by contrast, has an energy density of about 4.0 × 10²⁵ J/m³, with an E/c² mass density more than 10,000 times that of lead. The magnetic field of a magnetar would be lethal even at a distance of 1,000 km, because the strong magnetic field would distort the electron clouds of the subject's constituent atoms, rendering the chemistry of sustaining life impossible. From a distance of halfway between the Earth and the Moon (whose average separation is about 384,400 km), a magnetar could wipe the information from the magnetic stripes of all credit cards on Earth. They are the most powerful magnetic objects detected in the universe.
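As a minimal sketch, the comparison above can be reproduced with the vacuum energy-density formula u = B²/(2μ₀) and its mass equivalent u/c². Note that the 4.0 × 10⁵ J/m³ figure quoted for a real permanent magnet differs somewhat from this idealized formula; the lead density used here is the standard handbook value.

```python
import math

MU_0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A
C = 2.998e8                  # speed of light, m/s
LEAD_DENSITY = 11_340        # kg/m^3

def magnetic_energy_density(b_tesla: float) -> float:
    """Vacuum magnetic energy density u = B^2 / (2 * mu_0), in J/m^3."""
    return b_tesla**2 / (2 * MU_0)

u_magnetar = magnetic_energy_density(1e10)   # magnetar-strength field
mass_equiv = u_magnetar / C**2               # E/c^2 density, kg/m^3

print(f"energy density:  {u_magnetar:.1e} J/m^3")    # ~4e25 J/m^3
print(f"mass equivalent: {mass_equiv:.1e} kg/m^3")   # ~4e8 kg/m^3
print(f"vs lead: x{mass_equiv / LEAD_DENSITY:,.0f}") # >10,000
```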
As described in the February 2003 Scientific American cover story, remarkable things happen within a magnetic field of magnetar strength. "X-ray photons readily split in two or merge. The vacuum itself is polarized, becoming strongly birefringent, like a calcite crystal. Atoms are deformed into long cylinders thinner than the quantum-relativistic de Broglie wavelength of an electron." In a field of about 10⁵ teslas, atomic orbitals deform into rod shapes. At 10¹⁰ teslas, a hydrogen atom becomes 200 times as narrow as its normal diameter.
Origins of magnetic fields
The dominant model of the strong fields of magnetars is that they result from a magnetohydrodynamic dynamo process in the turbulent, extremely dense conducting fluid that exists before the neutron star settles into its equilibrium configuration. These fields then persist via currents in a proton-superconductor phase of matter that exists at an intermediate depth within the neutron star (where neutrons predominate by mass). A similar magnetohydrodynamic dynamo process produces even more intense transient fields during the coalescence of pairs of neutron stars. An alternative model is that the fields simply result from the collapse of stars with unusually strong magnetic fields.
Formation
In a supernova, a star collapses to a neutron star, and its magnetic field increases dramatically in strength through conservation of magnetic flux: halving a linear dimension increases the magnetic field strength fourfold. Duncan and Thompson calculated that when the spin, temperature and magnetic field of a newly formed neutron star fall into the right ranges, a dynamo mechanism could act, converting heat and rotational energy into magnetic energy and increasing the magnetic field, normally an already enormous 10⁸ teslas, to more than 10¹¹ teslas (or 10¹⁵ gauss). The result is a magnetar. It is estimated that about one in ten supernova explosions results in a magnetar rather than a more standard neutron star or pulsar.
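A minimal sketch of the flux-conservation scaling described above: with the magnetic flux Φ ∝ B·R² held constant, halving the radius quadruples the field. The pre-collapse core values below are illustrative placeholders, not measured quantities.

```python
def collapsed_field(b0: float, r0: float, r1: float) -> float:
    """Field after collapse from radius r0 to r1, assuming the flux
    B * R^2 is conserved."""
    return b0 * (r0 / r1) ** 2

# Halving a linear dimension quadruples the field, as stated above.
assert collapsed_field(1.0, 2.0, 1.0) == 4.0

# Hypothetical example: a 10,000 km pre-collapse core carrying 100 T
# shrinking to a 10 km neutron star yields the "already enormous"
# 1e8 T baseline field mentioned in the text.
print(f"{collapsed_field(100.0, 1e7, 1e4):.0e} T")   # 1e8 T
```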
1979 discovery
On March 5, 1979, a few months after the successful dropping of landers into the atmosphere of Venus, the two uncrewed Soviet space probes Venera 11 and 12, then in heliocentric orbit, were hit by a blast of gamma radiation at approximately 10:51 EST. The burst raised the radiation readings on both probes from a normal 100 counts per second to over 200,000 counts per second in only a fraction of a millisecond.
Eleven seconds later, Helios 2, a NASA probe in orbit around the Sun, was saturated by the blast of radiation. The radiation soon reached Venus, where the Pioneer Venus Orbiter's detectors were overwhelmed by the wave. Shortly thereafter the gamma rays inundated the detectors of three U.S. Department of Defense Vela satellites, the Soviet Prognoz 7 satellite, and the Einstein Observatory, all orbiting Earth. Before exiting the Solar System, the radiation was detected by the International Sun–Earth Explorer in halo orbit.
This was the strongest wave of extra-solar gamma rays ever detected, over 100 times as intense as any previously known burst. Given the speed of light and the burst's detection by several widely dispersed spacecraft, the source of the gamma radiation could be triangulated to within an accuracy of approximately 2 arcseconds. The direction of the source corresponded with the remnant of a star that had gone supernova around 3000 BCE. It was in the Large Magellanic Cloud, and the source was named SGR 0525-66; the event itself was named GRB 790305b, the first-observed SGR megaflare.
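The arcsecond-level localisation follows from simple timing geometry. In the hedged sketch below, a plane wave reaching two detectors separated by a baseline d arrives with a delay Δt = (d/c)·cos θ, so the angular uncertainty scales as c·δt/(d·sin θ); the baseline and timing precision used here are assumptions for illustration, not the actual 1979 values.

```python
import math

C = 2.998e8          # speed of light, m/s
baseline = 1.5e11    # m: ~1 AU between widely separated probes (assumed)
timing_err = 1e-3    # s: assumed arrival-time precision (assumed)

# Angular uncertainty near theta = 90 degrees: d(theta) ~ c*dt / d
dtheta_rad = C * timing_err / baseline
dtheta_arcsec = math.degrees(dtheta_rad) * 3600.0
print(f"~{dtheta_arcsec:.1f} arcsec")   # sub-arcsecond: the same order
                                        # as the ~2 arcsec quoted above
```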
Recent discoveries
On February 21, 2008, it was announced that NASA and researchers at McGill University had discovered a neutron star with the properties of a radio pulsar which emitted some magnetically powered bursts, like a magnetar. This suggests that magnetars are not merely a rare type of pulsar but may be a (possibly reversible) phase in the lives of some pulsars. On September 24, 2008, ESO announced what it ascertained was the first optically active magnetar candidate yet discovered, using ESO's Very Large Telescope. The newly discovered object was designated SWIFT J195509+261406. On September 1, 2014, ESA released news of a magnetar close to the supernova remnant Kesteven 79. Astronomers from Europe and China discovered this magnetar, named 3XMM J185246.6+003317, in 2013 by looking at images that had been taken in 2008 and 2009. In 2013, the magnetar PSR J1745−2900 was discovered in orbit around the black hole in the Sagittarius A* system; this object provides a valuable tool for studying the ionized interstellar medium toward the Galactic Center. In 2018, the temporary result of the merger of two neutron stars was determined to be a hypermassive magnetar, which shortly afterwards collapsed into a black hole.
In April 2020, a possible link between fast radio bursts (FRBs) and magnetars was suggested, based on observations of SGR 1935+2154, a likely magnetar located in the Milky Way galaxy.
Known magnetars
To date, 24 magnetars are known, with six more candidates awaiting confirmation. A full listing is given in the McGill SGR/AXP Online Catalog. Examples of known magnetars include:
SGR 0525−66, in the Large Magellanic Cloud, located about 163,000 light-years from Earth, the first found (in 1979)
SGR 1806−20, located 50,000 light-years from Earth on the far side of the Milky Way in the constellation of Sagittarius and the most magnetized object known.
SGR 1900+14, located 20,000 light-years away in the constellation Aquila. After a long period of low emissions (significant bursts only in 1979 and 1993) it became active in May–August 1998, and a burst detected on August 27, 1998, was of sufficient power to force NEAR Shoemaker to shut down to prevent damage and to saturate instruments on BeppoSAX, WIND and RXTE. On May 29, 2008, NASA's Spitzer Space Telescope discovered a ring of matter around this magnetar. It is thought that this ring formed in the 1998 burst.
SGR 0501+4516 was discovered on 22 August 2008.
1E 1048.1−5937, located 9,000 light-years away in the constellation Carina. The original star, from which the magnetar formed, had a mass 30 to 40 times that of the Sun.
In 2008, ESO reported the identification of an object it initially took to be a magnetar, SWIFT J195509+261406, originally identified via a gamma-ray burst (GRB 070610).
CXO J164710.2-455216, located in the massive galactic cluster Westerlund 1, which formed from a star with a mass in excess of 40 solar masses.
Swift J1822.3−1606, discovered on 14 July 2011 by Italian and Spanish researchers of CSIC in Madrid and Catalonia. Contrary to predictions, this magnetar has a low external magnetic field, and it may be as young as half a million years.
3XMM J185246.6+003317, discovered by an international team of astronomers looking at data from ESA's XMM-Newton X-ray telescope.
SGR 1935+2154, emitted a pair of luminous radio bursts on 28 April 2020. There was speculation that these may be galactic examples of fast radio bursts.
Swift J1818.0-1607, detected via an X-ray burst in March 2020, is one of five known magnetars that are also radio pulsars. At the time of its discovery, it may have been only 240 years old.
Bright supernovae
Unusually bright supernovae are thought to result from the death of very large stars as pair-instability supernovae (or pulsational pair-instability supernovae). However, recent research by astronomers has postulated that energy released from newly formed magnetars into the surrounding supernova remnants may be responsible for some of the brightest supernovae, such as SN 2005ap and SN 2008es.
| Physical sciences | Stellar astronomy | Astronomy |
118737 | https://en.wikipedia.org/wiki/Barn | Barn | A barn is an agricultural building usually on farms and used for various purposes. In North America, a barn refers to structures that house livestock, including cattle and horses, as well as equipment and fodder, and often grain. As a result, the term barn is often qualified e.g. tobacco barn, dairy barn, cow house, sheep barn, potato barn. In the British Isles, the term barn is restricted mainly to storage structures for unthreshed cereals and fodder, the terms byre or shippon being applied to cow shelters, whereas horses are kept in buildings known as stables. In mainland Europe, however, barns were often part of integrated structures known as byre-dwellings (or housebarns in US literature). In addition, barns may be used for equipment storage, as a covered workplace, and for activities such as threshing.
Etymology
The word barn comes from the Old English bere, for barley (or grain in general), and ærn, for a storage place—thus, a storehouse for barley. The Old English word, also spelled bern and bearn, is attested at least sixty times in homilies and other Old English prose. The related words bere-tun and bere-flor both meant threshing floor. Bere-tun also meant granary; the literal translation of bere-tun is "grain enclosure". While the only literary attestation of bere-hus (also granary) comes from the Dialogi of Gregory the Great, there are four known mentions of bere-tun and two of bere-flor. A Thesaurus of Old English lists synonyms for barn, including a compound meaning "meal-store house".
History
The modern barn largely developed from the three-aisled medieval barn, commonly known as a tithe barn or monastic barn. This, in turn, originated in a 12th-century building tradition also applied in halls and ecclesiastical buildings. In the 15th century several thousand of these huge barns were to be found in Western Europe. In the course of time, the construction method was adopted by ordinary farms and gradually spread to simpler buildings and other rural areas. As a rule, the aisled barn had large entrance doors and a passage corridor for loaded wagons. The storage floors between the central posts or in the aisles were known as bays or mows (from Middle French moye).
The main types were large barns with sideway passages, compact barns with a central entrance, and smaller barns with a transverse passage. The latter also spread to Eastern Europe. Wherever stone walls were used, the aisled timber frame often gave way to single-naved buildings. A special type was the byre-dwelling, which included living quarters, byres and stables, such as the Frisian farmhouse or Gulf house and the Black Forest house. Not all, however, evolved from the medieval barn; other types descended from the prehistoric longhouse or from other building traditions. One of the latter was the Low German (hall) house, in which the harvest was stored in the attic. In many cases, the New World colonial barn evolved from the Low German house, which was transformed into a true barn by first-generation colonists from the Netherlands and Germany.
Construction
In the Yorkshire Dales, England, barns, known locally as cowhouses, were built with double stone walls with truffs or throughstones acting as wall ties.
In the U.S., older barns were built from timbers hewn from trees on the farm, as log crib barns or timber frames, although stone barns were sometimes built in areas where stone was a cheaper building material. In the mid-to-late 19th century in the U.S., barn framing methods began to shift away from traditional timber framing to "truss framed" or "plank framed" buildings. Truss or plank framed barns reduced the number of heavy timbers, instead using dimensional lumber for the rafters, joists, and sometimes the trusses. Joints began to be bolted or nailed instead of mortised and tenoned. The inventor and patentee of the Jennings Barn claimed his design used less lumber, less work, less time, and less cost to build, was durable, and provided more room for hay storage. Mechanization on the farm, better transportation infrastructure, and new technology like the hay fork mounted on a track contributed to a need for larger, more open barns; sawmills using steam power could produce smaller pieces of lumber affordably, and machine-cut nails were much less expensive than hand-made (wrought) nails. Concrete block began to be used for barns in the early 20th century in the U.S.
Modern barns are more typically steel buildings. From about 1900 to 1940, many large dairy barns were built in the northern USA. These commonly have gambrel or hip roofs to maximize the size of the hayloft above the dairy, and have become associated with the popular image of a dairy farm. The barns that were common to the wheatbelt held large numbers of pulling horses such as Clydesdales or Percherons. These large wooden barns, especially when filled with hay, could make spectacular fires that were usually total losses for the farmers. With the advent of balers it became possible to store hay and straw outdoors in stacks surrounded by a plowed fireguard. Many barns in the northern United States are painted barn red with white trim. One possible reason is that ferric oxide, which is used to create red paint, was the cheapest and most readily available chemical for farmers in New England and nearby areas. Another is that ferric oxide acts as a preservative, so painting a barn with it would help protect the structure. The custom of painting barns red with white trim is widespread in Scandinavia; in Sweden especially, Falu red with white trim is the traditional colouring of most wooden buildings.
With the popularity of tractors following World War II many barns were taken down or replaced with modern Quonset huts made of plywood or galvanized steel. Beef ranches and dairies began building smaller loftless barns often of Quonset huts or of steel walls on a treated wood frame (old telephone or power poles). By the 1960s it was found that cattle receive sufficient shelter from trees or wind fences (usually wooden slabs 20% open).
Uses
In older-style North American barns, the upper area was used to store hay and sometimes grain. This is called the mow (rhymes with cow) or the hayloft. A large door at the top of each end of the barn could be opened so that hay could be put in the loft. The hay was hoisted into the barn by a system of pulleys and a trolley that ran along a track attached to the top ridge of the barn. Trap doors in the floor allowed animal feed to be dropped into the mangers for the animals.
In New England it is common to find barns attached to the main farmhouse (connected farm architecture), allowing for chores to be done while sheltering the worker from the weather.
In the middle of the twentieth century, the large broad roofs of barns in the United States were sometimes painted with slogans. The most common of these were the 900 barns painted with ads for Rock City.
In the past barns were often used for communal gatherings, such as barn dances.
Features
A farm may have buildings of varying shapes and sizes used to shelter large and small animals and for other uses. The enclosed pens used to shelter large animals are called stalls and may be located in the cellar or on the main level, depending on the type of barn. Other common areas, or features, of an American barn include:
a tack room (where bridles, saddles, etc. are kept), often set up as a breakroom
a feed room, where animal feed is stored – not typically part of a modern barn where feed bales are piled in a stackyard
a drive bay, a wide corridor for animals or machinery
a silo where fermented grain or hay (called ensilage or haylage) is stored.
a milkhouse for dairy barns; an attached structure where the milk is collected and stored prior to shipment
a grain (soy, corn, etc.) bin for dairy barns, found in the mow and usually made of wood, with a chute to the ground floor providing access to the grain and making it easier to feed the cows.
modern barns often contain an indoor corral with a squeeze chute for providing veterinary treatment to sick animals.
In North Yorkshire cowhouses would have a muck hole (muck’ole in the local dialect) to allow manure to be deposited outside the barn without the cowhand leaving the building.
In North Yorkshire a cowhouse would have a small door or forking hole (forking’ole in the local dialect) high up on the wall to enable fodder to be 'forked' into the baux or baulks (hayloft).
Some English barns would have a gin gang, a semi-circular extension added to house a horse engine.
Derivatives
The physics term "barn", a unit of area used for subatomic cross sections equal to 10⁻²⁸ m², came from experiments with uranium nuclei during World War II: the nuclei were described colloquially as "big as a barn", and the measurement was officially adopted to maintain security around nuclear weapons research.
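As a rough illustration of why the coinage stuck, the geometric cross section of a uranium nucleus, estimated with the standard empirical radius formula r ≈ r₀·A^(1/3) (with the commonly used r₀ ≈ 1.2 fm), comes out near one barn:

```python
import math

BARN = 1e-28            # m^2, the unit defined above
R0 = 1.2e-15            # m, empirical nuclear radius constant
A = 238                 # mass number of uranium-238

radius = R0 * A ** (1 / 3)          # ~7.4e-15 m
sigma = math.pi * radius**2         # geometric cross section, m^2
print(f"{sigma / BARN:.2f} barns")  # ~1.74 barns: "big as a barn"
```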
Barn idioms
"He couldn't hit the broad side of a barn" is a popular expression for a person having poor aim when throwing an object or when shooting at something.
To "lock the barn door after the horse has bolted" implies that one has solved a problem too late to prevent it.
"Were you born/raised in a barn?" is an accusation used differently in various parts of the English-speaking world, but most commonly as a reprimand when someone exhibits poor manners by either using ill-mannered language (particularly if related to manure), or leaving doors open.
"Your barn door is open" is used as a euphemism to remind someone to zip the fly of their trousers.
To "barnstorm" is to travel quickly around a large area making frequent public appearances.
Types
Barns have been classified by their function, structure, location, or other features. Sometimes the same building falls into multiple categories.
Apple barn or fruit barn – for the storage of fruit crops
Bank barn – a multilevel building built into a bank or hillside so that the upper floor is accessible to a wagon, sometimes via a bridge or ramp.
Bastle house – a defensive structure to guard against border reivers with accommodation on the lower floor for livestock.
Bridge barn or covered bridge barn – general terms for barns accessed by a bridge rather than a ramp.
Boô – A sheep-barn and dwelling in the Netherlands, seasonal or sometimes year round.
Pennsylvania barn (U.S.) of which there are sub-categories such as standard and sweitzer types. Also known as forebay or porch barns.
Cantilever barn – a type of log crib barn with cantilevered upper floors which developed in Appalachia (U.S.A.)
Combination barn – found throughout England, especially in areas of pastoral farming and the standard barn type in America. This general term means the barns were used for both crop storage and as a byre to house animals.
Crib barn – Horizontal log structures with up to four cribs (assemblies of crossing timbers) found primarily in the southern U.S.A.
Drying barns for drying crops in Finland and Sweden are called riihi and ria, respectively.
New World Dutch Barn – A barn type in the U.S. Also see Dutch barn (U.K.) in Other farm buildings section below.
Field barn – An outbuilding located in a field further afield than the main cluster of buildings that constitute a farmstead
New England barn – a common style of barn found in rural New England and in the U.S.
English barn (U.S.), also called a Yankee or Connecticut barn – A widespread barn type in the U.S.
Granary – used to store grain after it is threshed; some barns contain a room called a granary, and some, like rice barns, blur the line between barn and granary.
Gothic arch barn – has a profile shaped like a Gothic arch, which became feasible to form with laminated members
Ground stable barn, a barn with space for livestock at ground level
Housebarn, also called a byre-dwelling – A combined living space and barn, relatively common in old Europe but rare in North America. Also, longhouses were housebarns.
Pole barn – a simple structure that consists of poles embedded in the ground to support a roof, with or without exterior walls. The pole barn lacks a conventional foundation, thus greatly reducing construction costs. Traditionally used to house livestock, hay or equipment.
Potato barn or potato house – a semi-subterranean or two-story building for the storage of potatoes or sweet potatoes.
Prairie barn – A general term for barns in the Western U.S.
Rice barn and the related winnowing barn
Round barn – built in a round shape; the term is often generalized to include polygonal and octagonal barns
Swing beam barn – A rare barn type in part of the U.S. designed for threshing with animals walking around a pole held by a swing beam inside the barn.
Tobacco barn – for drying of tobacco leaves
Tithe barn – a type of barn used in much of northern Europe in the Middle Ages for storing the tithes—a tenth of the farm's produce which had to be given to the church
Threshing barn – built with a threshing floor for the processing and storage of cereals, to keep them in dry conditions. Characterised by large double doors in the centre of one side, a smaller one on the other, and storage for the cereal harvest, threshed or unprocessed, on either side. In England the grain was beaten from the crop by flails and then separated from the husks by winnowing between these doors. The design typically remained unchanged between the 12th and 19th centuries. The large doors allow a horse wagon to be driven through; the smaller ones allow for the sorting of sheep and other stock in the spring and summer.
Other farm buildings often associated with barns
Carriage house: cart shed
Dutch barn (U.K.): an open-sided structure for hay storage. The type with a movable roof is called a hay barrack in the U.S. or a hooiberg (kapberg) in the Netherlands.
A corn crib: a horizontal slatted structure built to allow airflow to dry corn (maize)
A granary or hórreo: a storage space for threshed grains, sometimes within a barn or as a separate building.
Linhay (linny, linney, linnies): A shed, often with a lean-to roof but may be a circular linhay to store hay on the first floor with either cattle on the ground floor (cattle linhay), or farm machinery (cart linhay). Characterised by an open front with regularly spaced posts or pillars.
Milk room or milk house: to store milk.
Oast house: an outbuilding used for drying hops as part of the brewing process.
Shelter sheds: open-fronted structures for stock
Shippon: a shed which houses oxen and cattle, with fodder storage above, regularly spaced doors on the yard side, and a pitching door or window on the first floor.
Stable: Usually for housing horses.
Historic farm buildings
Old farm buildings of the countryside contribute to the landscape, and help define the history of the location, i.e. how farming took place in the past, and how the area has been settled throughout the ages. They also can show the agricultural methods, building materials, and skills that were used. Most were built with materials reflecting the local geology of the area. Building methods include earth walling and thatching.
Buildings in stone and brick, roofed with tile or slate, increasingly replaced buildings in clay, timber and thatch from the later 18th century. Metal roofs started to be used from the 1850s. The arrival of canals and railways brought about transportation of building materials over greater distances.
Clues determining their age and historical use can be found from old maps, sale documents, estate plans, and from a visual inspection of the building itself, noting (for example) reused timbers, former floors, partitions, doors and windows.
The arrangement of the buildings within the farmstead can also yield valuable information on the historical farm usage and landscape value. Linear farmsteads were typical of small farms, where there was an advantage to having cattle and fodder within one building, due to the colder climate. Dispersed clusters of unplanned groups were more widespread. Loose courtyard plans built around a yard were associated with bigger farms, whereas carefully laid out courtyard plans designed to minimize waste and labour were built in the latter part of the 18th century.
The barns are typically the oldest and biggest buildings to be found on the farm. Many barns were converted into cow houses and fodder processing and storage buildings after the 1880s. Many barns had owl holes to allow for access by barn owls, encouraged to aid vermin control.
The stable is typically the second-oldest building type on the farm. Stables were well built and placed near the house because of the value of horses as draught animals.
Modern granaries were built from the 18th century. Complete granary interiors, with plastered walls and wooden partitioning to grain bins, are very rare.
The longhouse is an ancient building type in which people and animals used the same entrance. These can still be seen, for example, in northern Germany, where the Low Saxon house occurs.
Few interiors of the 19th century cow houses have survived unaltered due to dairy-hygiene regulations in many countries.
Old farm buildings may show the following signs of deterioration: rot in timber-framed constructions due to damp; cracks in the masonry from movement of the walls (e.g., ground movement); roofing problems (e.g., outward thrust, deterioration of purlins and gable ends); foundation problems; penetration of tree roots; and lime mortar being washed away due to inadequate weather protection. Walls made of cob, earth mortars or walls with rubble cores are all highly vulnerable to water penetration, and replacement or covering of breathable materials with cement or damp-proofing materials may trap moisture within the walls.
In England and Wales some of these historical buildings have been given "listed building" status, which provides them some degree of archaeological protection.
Some grant schemes are available to restore Historic Farmland buildings, for example Natural England's Environmental Stewardship, Countryside Stewardship and Environmentally Sensitive Areas Schemes.
| Technology | Buildings and infrastructure | null |
118767 | https://en.wikipedia.org/wiki/Tower%20block | Tower block | A tower block, high-rise, apartment tower, residential tower, apartment block, block of flats, or office tower is a tall building, as opposed to a low-rise building, and is defined differently in terms of height depending on the jurisdiction. It is used for residential, office, or other functions, including hotel and retail uses, or multiple purposes combined. Residential high-rise buildings are also known in some varieties of English, such as British English, as tower blocks, and may be referred to as MDUs, standing for multi-dwelling units. A very tall high-rise building is referred to as a skyscraper.
High-rise buildings became possible to construct with the invention of the elevator (lift) and with less expensive, more abundant building materials. The materials used for the structural system of high-rise buildings are reinforced concrete and steel. Most North American–style skyscrapers have a steel frame, while residential blocks are usually constructed of concrete. There is no clear difference between a tower block and a skyscraper, although a building with forty or more stories is generally considered a skyscraper.
High-rise structures pose particular design challenges for structural and geotechnical engineers, particularly if situated in a seismically active region or if the underlying soils have geotechnical risk factors such as high compressibility or bay mud. They also pose serious challenges to firefighters during emergencies in high-rise structures. New and old building design, building systems such as the building standpipe system, HVAC systems (heating, ventilation and air conditioning), fire sprinkler systems, and other things such as stairwell and elevator evacuations pose significant problems. Studies are often required to ensure that pedestrian wind comfort and wind danger concerns are addressed. In order to allow less wind exposure, to transmit more daylight to the ground and to appear more slender, many high-rises have a design with setbacks.
Apartment buildings have technical and economic advantages in areas of high population density, and have become a distinctive feature of housing accommodation in virtually all densely populated urban areas around the world. In contrast with low-rise and single-family houses, apartment blocks accommodate more inhabitants per unit of area of land and decrease the cost of municipal infrastructure.
Definition
Various bodies have defined "high-rise":
Emporis defines a high-rise as "a multi-story structure between 35 and 100 meters tall, or a building of unknown height from 12–39 floors".
The New Shorter Oxford English Dictionary defines a high-rise as "a building having many storeys".
The International Conference on Fire Safety in High-Rise Buildings defined a high-rise as "any structure where the height can have a serious impact on evacuation".
In the U.S., the National Fire Protection Association defines a high-rise as being higher than 75 feet (about 23 meters), or about seven stories.
Most building engineers, inspectors, architects and similar professionals define a high-rise as a building that is at least 75 feet tall.
History
High-rise apartment buildings had already appeared in antiquity: the insulae in Ancient Rome and several other cities in the Roman Empire, some of which might have reached up to ten or more stories, one reportedly having 200 stairs. Because of the destruction caused by poorly built high-rise insulae collapsing, several Roman emperors, beginning with Augustus (r. 30 BC – 14 AD), set height limits for multi-story buildings, but met with limited success, as these limits were often ignored despite the likelihood of taller insulae collapsing. The lower floors were typically occupied by either shops or wealthy families, while the upper stories were rented out to the lower classes. Surviving Oxyrhynchus Papyri indicate that seven-story buildings existed even in provincial towns, such as third-century AD Hermopolis in Roman Egypt.
In Arab Egypt, the initial capital city of Fustat housed many high-rise residential buildings, some seven stories tall that could reportedly accommodate hundreds of people. Al-Muqaddasi, in the 10th century, described them as resembling minarets, while Nasir Khusraw, in the early 11th century, described some of them rising up to 14 stories, with roof gardens on the top story complete with ox-drawn water wheels for irrigating them. By the 16th century, Cairo also had high-rise apartment buildings where the two lower floors were for commercial and storage purposes and the multiple stories above them were rented out to tenants.
The skyline of many important medieval cities was dominated by large numbers of high urban towers, which fulfilled defensive but also representative purposes. The residential Towers of Bologna numbered between 80 and 100 at a time, the largest of which still rises to 97.2 m. In Florence, a law of 1251 decreed that all urban buildings be reduced to a height of less than 26 m, a regulation immediately put into effect. Even medium-sized towns such as San Gimignano are known to have featured 72 towers up to 51 m in height.
The Hakka people in southern China have adopted communal living structures designed to be easily defensible, in the forms of Weilongwu (围龙屋) and Tulou (土楼); the latter are large, enclosed and fortified earth buildings, between three and five stories high, housing up to 80 families. The oldest tulou still standing dates from the 14th century.
High-rises were built in the Yemeni city of Shibam in the 16th century. The houses of Shibam are all made out of mud bricks, but about five hundred of them are tower houses, which rise five to sixteen stories high, with each floor having one or two apartments. This technique of building was implemented to protect residents from Bedouin attacks. While Shibam has existed for around two thousand years, most of the city's houses date from the 16th century. The city has the tallest mud buildings in the world, some more than 30 meters (100 feet) high. Shibam has been called "one of the oldest and best examples of urban planning based on the principle of vertical construction" or "Manhattan of the desert".
The engineer's definition of high-rise buildings comes from the development of fire trucks in the late 19th century. Magirus showed the first cogwheel sliding ladder in 1864. The first horse-drawn turntable ladder, 25 meters long, was developed in 1892, and the extension ladder was motorized by Magirus in 1904. A maximum of 22 meters for the highest floor was common in the building regulations of the time, and it remains so today in Germany. Turntable ladders later commonly reached 32 meters, so 30 meters is a common limit in some building regulations today, for example in Switzerland. Any building that exceeds the height of the usual turntable ladders in a city must install additional fire safety equipment, which is why high-rise buildings have a separate section in building regulations around the world.
The residential tower block, with its typical concrete construction, is a familiar feature of Modernist architecture. Influential examples include Le Corbusier's "housing unit", the Unité d'Habitation, repeated in various European cities starting with his Cité radieuse in Marseille (1947–52), constructed of béton brut (rough-cast concrete), as steel for framework was unavailable in post-war France. Residential tower blocks became standard in housing urban populations displaced by slum clearances and "urban renewal". High-rise projects after World War II typically rejected the classical designs of the early skyscrapers, instead embracing the uniform international style; many older skyscrapers were redesigned to suit contemporary tastes or even demolished, such as New York's Singer Building, once the world's tallest skyscraper. However, with the movements of Postmodernism, New Urbanism, and New Classical Architecture established since the 1980s, a more classical approach returned to global skyscraper design and remains popular today.
Other contemporary styles and movements in high-rise design include organic, sustainable, neo-futurist, structuralist, high-tech, deconstructivist, blob, digital, streamline, novelty, critical regionalist, vernacular, Art Deco (or Art Deco Nouveau), and neohistorist, also known as revivalist.
Currently, the tallest high-rise apartment building in the world is the Central Park Tower on Billionaires' Row in Midtown Manhattan, rising 472 m (1,550 ft).
Streets in the sky
Streets in the sky is a style of architecture that emerged in Britain in the 1960s and 1970s. Generally built to replace run-down terraced housing, the new designs included not only modern improvements such as inside toilets, but also shops and other community facilities within high-rise blocks. Examples of the buildings and developments are Trellick Tower, Balfron Tower, Broadwater Farm, Robin Hood Gardens and Keeling House in London, Hunslet Grange in Leeds and Park Hill, Sheffield, and Castlefields and Southgate Estate, Runcorn. These were an attempt to develop a new architecture, differentiated from earlier large housing estates, such as Quarry Hill flats in Leeds. Alison and Peter Smithson were the architects of Robin Hood Gardens. As another large example, in 2005 it was decided to carry out a 20-year process of demolition and replacement of dwellings with modern houses in the Aylesbury Estate in south London, built in 1970. The Hulme Crescents in Manchester were the largest social housing scheme in Europe when built in 1972 but lasted just 22 years. The Crescents had one of the worst reputations of any British social housing schemes and were marred by numerous design and practical problems.
The ideal of Streets in the Sky often did not work in practice. Unlike an actual city street, these walkways were not thoroughfares, and often came to a dead end multiple storeys above the ground. They lacked a regular flow of passers-by, and the walkways and especially the stairwells could not be seen by anyone elsewhere, so there was no deterrent to crime and disorder. There were no "eyes on the street" as advocated by Jane Jacobs in her book The Death and Life of Great American Cities. The Unité d'Habitation in Marseille provides a more successful example of the concept, with the fifth floor walkway including a shop and café.
Towers in the park and microdistricts
Towers in the park is a morphology of modernist high-rise apartment buildings characterized by a high-rise building surrounded by a swath of landscaped land; e.g., the tower does not directly front the street.
It is based on an ideology popularised by Le Corbusier with the Plan Voisin, an expansion of the Garden city movement aimed at reducing the problem of urban congestion. It was introduced in several large cities across the world, notably in North America, Europe and Australia as a solution for housing, especially for public housing, reaching a peak of popularity in the 1960s with the introduction of prefabrication technology.
By the early 1970s, opposition to this style of towers mounted, with many, including urban planners, now referring to them as "ghettos". Neighbourhoods like St. James Town were originally designed to house young "swinging single" middle class residents, but the apartments lacked appeal and the area quickly became much poorer.
From its early days of implementation the concept was criticised for making residents feel unsafe, including large empty common areas dominated by gang culture and crime. The layout was criticised for normalising anti-social behaviour and hampering the efforts of essential services, particularly for law enforcement.
The history of microdistricts as an urban planning concept dates back to the 1920s, when the Soviet Union underwent rapid urbanization. Under the Soviet urban planning ideologies of the 1920s, residential complexes—compact territories with residential dwellings, schools, shops, entertainment facilities, and green spaces—started to prevail in urban planning practice, as they allowed for more careful and efficient planning of the rapid urban expansion. These complexes were seen as an opportunity to build a collective society, an environment suitable and necessary for the new way of life.
Developments by region
Asia
Residential tower complexes are common in Asian countries such as China, India, Bangladesh, Indonesia, Taiwan, Singapore, Japan, Pakistan, Iran and South Korea, as urban densities are very high. In Singapore and urban Hong Kong, land prices are so high that a large portion of the population lives in high-rise apartments; over 60% of Hong Kong residents live in apartments, many of them condominiums. In 2020, 2,112,138 people were identified as residents of public housing, 28% of the total population.
Sarah Williams Goldhagen (2012) celebrated the work of innovative architecture firms such as WOHA (based in Singapore), Mass Studies (based in Seoul), Amateur Architecture Studio (based in Hangzhou, China), and the New York City-based Steven Holl in the transformation of residential towers into "vertical communities" or "vertical cities in the sky" providing aesthetic, unusually designed silhouettes on the skyline, comfortable private spaces and attractive public spaces. None of these "functional, handsome, and humane high-rise residential buildings" are affordable housing.
China
The 2012 Pritzker Prize was awarded to Chinese architect Wang Shu. Among his winning designs are the Vertical Courtyard Apartments, six 26-story towers built in Hangzhou by his architectural firm Amateur Architecture Studio. These towers were designed to house two-story apartments in which every inhabitant would enjoy "the illusion of living on the second floor", accomplished by folding concrete floor planes (like "bamboo mats", claims the firm) so that every third story opens into a private courtyard. In the larger towers, the two-story units are stacked slightly askew, adding to the visual interest of the variegated façades (Goldhagen 2012).
Japan
Housing in Japan includes various traits coming from different eras. The word danchi now either means employer-provided housing or has a meaning similar to "the projects". For modern high-rises, there are two borrowed words that make a distinction:
"Apaato" (アパート) is used to describe a rather small apartment, initially made to be rented;
a large, modern apartment would be a "mansion" (マンション). The "mansion" nickname is used both for residential towers and for individual condominium apartments (for being roomy enough to compare to detached houses).
South Korea
In South Korea, tower blocks are called apartment complexes. The first residential towers began to be built after the Korean War, when the South Korean government needed to build many apartment complexes in the cities to accommodate their citizens. In the 60 years since, as the population increased considerably, tower blocks have become more common; the newer tower blocks, however, integrate shopping malls, parking systems, and other convenient facilities.
Samsung Tower Palace in Seoul, South Korea, is the tallest apartment complex in Asia.
In Seoul, approximately 80 percent of residents live in apartment complexes, which comprise 98 percent of recent residential construction. Seoul proper is noted for its population density, eight times that of Rome, though less than that of Manhattan or Paris. Its metropolitan area is the densest in the OECD.
Europe
Central and Eastern Europe
Although some Central and Eastern European countries during the interwar period, such as the Second Polish Republic, already started building housing estates that were considered to be of a high standard for their time, many of these structures perished during the Second World War.
In the Eastern Bloc, tower blocks were constructed in great numbers to produce plenty of cheap accommodation for the growing postwar populations of the USSR and its satellite states. This took place mostly in the 1950s, 1960s and 1970s, though in the People's Republic of Poland the process started even earlier due to the severe damage Polish cities sustained during World War II. Throughout the former Eastern Bloc countries, tower blocks built during the Soviet years make up much of the current housing estates, and most were built in the specific socialist realist style of architecture that was dominant east of the Iron Curtain; blocky buildings of that era are colloquially known as Khrushchyovka. However, larger and more ambitious projects were also built in Eastern Europe at the time, which have since become recognisable examples of post-war modernism, such as the largest falowiec building in the Przymorze Wielkie district of Gdańsk, which contains 1,792 flats and is the second-longest housing block in Europe.
In Romania, the mass construction of standardised housing blocks began in the 1950s and 1960s with the outskirts of the cities, some of which were made up of slums. Construction continued in the 1970s and 1980s, under the systematisation programme of Nicolae Ceaușescu, which consisted largely of the demolition and reconstruction of existing villages, towns, and cities, in whole or in part, in order to build blocks of flats (blocuri), as a result of increasing urbanisation following an accelerated industrialisation process. In Czechoslovakia (now the Czech Republic and Slovakia), panelák building under Marxism–Leninism resulted from two main factors: the postwar housing shortage and the ideology of the ruling party.
In Eastern European countries, opinions about these buildings vary greatly, with some deeming them as eyesores on their city's landscape while others glorify them as relics of a bygone age and historical examples of unique architectural styles (such as socialist realism, brutalism, etc.). Since the dissolution of the Soviet Union, and especially in the late 1990s and early 2000s, many of the former Eastern Bloc countries have begun construction of new, more expensive and modern housing. The Śródmieście borough of Warsaw, the capital of Poland, has seen the development of an array of skyscrapers. Russia is also currently undergoing a dramatic buildout, growing a commercially shaped skyline. Moreover, the ongoing changes made to postwar housing estates since the 2000s in former communist countries vary – ranging from simply applying a new coat of paint to the previously grey exterior to thorough modernisation of entire buildings.
In the European Union, among former Warsaw Pact states, a majority of the population lives in flats in Latvia (65.1%), Estonia (63.8%), Lithuania (58.4%), the Czech Republic (52.8%), and Slovakia (50.3%) (data from Eurostat). However, not all flat dwellers in Eastern Europe live in Cold War-era blocks of flats; many live in buildings constructed after the fall of the Berlin Wall, and some in buildings that survived World War II.
Western Europe
In Western Europe, there are fewer high-rise buildings because of the historic city centres. In the 1960s, developers began demolishing older buildings to replace them with modern high-rise buildings.
In Brussels there are numerous modern high-rise buildings in the Northern Quarter business district. The government of Belgium wants to recreate Washington, D.C., on a small scale.
France
There are some tall residential buildings in La Défense district, such as Tour Défense 2000, even though the district is mainly "commercial". This allows the residents to walk to the nearby office buildings without using vehicles.
Great Britain
Tower blocks were first built in the United Kingdom after the Second World War, and were seen as a cheap way to replace 19th-century urban slums and war-damaged buildings. They were originally seen as desirable, but quickly fell out of favour as tower blocks attracted rising crime and social disorder, particularly after the collapse of Ronan Point in 1968.
Although tower blocks are controversial and numerous examples have been demolished, many still remain in large cities. Due to a lack of proper regulation, some tower blocks present a significant fire risk, and even though there have been efforts to make them safer, modern safety precautions can be prohibitively expensive to retrofit. The Grenfell Tower fire in 2017 was partly attributed to council neglect: a local action group had complained to the council about the tower's fire hazards several years before the incident, yet remedial work had not been carried out. The fire made tower blocks even less desirable to British residents.
There are old high-rise buildings built in the 1960s and 1970s in areas of London such as Tower Hamlets, Newham, Hackney, and virtually any area in London with council housing. Some new high-rises are being built in areas such as Central London, Southwark, and Nine Elms. In east London, some old high-rises are being gentrified, in addition to new high-rises being built in areas such as Stratford and Canary Wharf.
Ireland
Republic of Ireland
The majority of residential high-rise buildings in the Republic of Ireland were concentrated in the suburb of Ballymun, Dublin. The Ballymun Flats were built between 1966 and 1969: seven 15-story towers, nineteen 8-story blocks and ten 4-story blocks. These were the "seven towers" referred to in the U2 song "Running to Stand Still". They have since been demolished.
Inner Dublin flat complexes, typically of 4–5 storeys, include Sheriff Street (demolished), Fatima Mansions (demolished and redeveloped), St Joseph's Gardens (demolished; replaced by the Killarney Court flat complex), St Teresa's Gardens, Dolphin House, Liberty House, St Michael's Estate (8 storeys), O'Devaney Gardens and many more, mainly throughout the north and south inner city of Dublin. Suburban flat complexes were built exclusively on the northside of the city, in Ballymun, Coolock and Kilbarrack. These flats were badly affected by a heroin epidemic that hit working-class areas of Dublin in the 1980s and early 1990s.
Residential tower blocks were previously uncommon outside of Dublin, but during the era of the Celtic Tiger the largest cities such as Dublin, Cork, Limerick and Galway witnessed new large apartment building, although their heights have generally been restricted. Some large towns such as Navan, Drogheda, Dundalk and Mullingar have also witnessed the construction of many modern apartment blocks.
Northern Ireland
Tower blocks in Northern Ireland were never built with the frequency seen in cities on the island of Great Britain, but taller high-rises are generally more common than in the Republic of Ireland. Most tower blocks and flat complexes are found in Belfast, although many of these have been demolished since the 1990s and replaced with traditional public housing units. The mid-rise Divis Flats complex in west Belfast was built between 1968 and 1972; it was demolished in the early 1990s after the residents demanded new houses due to mounting problems with their flats. Divis Tower, built separately in 1966, still stands, however, and in 2007 work began to convert the former British Army base on the top two floors into new dwellings. Divis Tower was for several decades Ireland's tallest residential building, having since been surpassed by the privately owned Obel Tower in the city centre. In the north of the city, the iconic seven-tower complex in the New Lodge remains, although so too do the problems that residents face, such as poor piping and limited sanitation. Farther north, the four tower blocks in Rathcoole dominate the local skyline, while in south Belfast the tower blocks in Seymour Hill, Belvoir and Finaghy remain standing.
Most of the aforementioned high-rise flats in the city were built by the Northern Ireland Housing Trust (NIHT) as part of overspill housing schemes, the first such development being the pair of point blocks in East Belfast's Cregagh estate. These eleven-storey towers were completed in 1961 and were the first tall council housing blocks on the island of Ireland. The NIHT also designed the inner-city Divis Flats complex. The six-to-eight-storey deck-access flats that comprised most of the Divis estate were of poor build quality and were all demolished by the early 1990s. Similar slab blocks were built by the NIHT in East Belfast (Tullycarnet) and Derry's Bogside area, all four of which have been demolished.
Belfast Corporation constructed seven tower blocks on the former Victoria Barracks site in the New Lodge district. While the Corporation built some mid-rise flats as part of slum clearance schemes (most notably the now-demolished Unity Flats and the Weetabix Flats in the Shankill area), New Lodge was its only high-rise project in the inner city; there were three more in outlying areas of the city during the 1960s, two in Mount Vernon in North Belfast and one in the Clarawood estate, East Belfast. The Royal Hospital built three thirteen-storey towers for use as staff accommodation, prominently located adjacent to the M2 Motorway at Broadway. Belfast City Hospital also constructed a high-rise slab block, which since privatisation has been named Bradbury Court (formerly Erskine House). Queen's University Belfast built several eleven-storey towers at its Queen's Elms student accommodation. Of the three sixteen-storey point blocks built by Larne Borough Council in the late 1960s, only one remains.
North America
Canada
In Canada, large multi-family buildings are usually known as apartment buildings or apartment blocks if they are rented from one common landowner, or condominiums or condo towers if each dwelling unit is individually owned; they may be called low-rise (or walk-up), mid-rise, high-rise, or skyscraper depending on their height. Tall residential towers are a staple building type in all large cities. Their relative prominence in Canadian cities varies substantially, however. In general, more populated cities have more high-rises than smaller cities, due to a relative scarcity of land and a greater demand for housing.
However, some cities such as Quebec City and Halifax have fewer high-rise buildings due to several factors: a focus on historic preservation, height restrictions, and lower growth rates. In mid-sized cities with relatively low population density, such as Calgary, Edmonton, Winnipeg, or Hamilton, there are many apartment towers, but they are greatly outnumbered by single-family houses. Most of the largest residential towers in Canada are found in Montreal, Toronto, and Vancouver, the country's most densely populated cities.
Toronto contains the second-largest concentration of high-rise apartment buildings in North America (after New York). In Canada, as in other New World countries, but unlike in Western Europe, most high-rise towers are located in the city centre (or "downtown"), where smaller, older buildings were demolished to make way for redevelopment schemes.
United States
In the United States, tower blocks are commonly referred to as "midrise" or "highrise" apartment buildings, depending on their height, while buildings that house fewer flats (apartments), or are not as tall as the tower blocks, are called "lowrise" apartment buildings. Specifically, "midrise" buildings are roughly as tall as the streets they face are wide, a proportion that allows about five hours of sunlight to reach the street.
Some of the first residential towers were the Castle Village towers in Manhattan, New York City, completed in 1939. Their cross-shaped design was copied in towers in Parkchester and Stuyvesant Town residential developments.
The government's experiments in the 1960s and 1970s with high-rise apartments as a means of providing housing for the poor broadly resulted in failure. Built in the "tower in the park" style, all but a few high-rise housing projects in the nation's largest cities, such as Cabrini–Green and the Robert Taylor Homes in Chicago, Penn South in Manhattan, and the Desire Projects in New Orleans, fell victim to "ghettoization" and are now being torn down, renovated, or replaced. Another example is the former Pruitt–Igoe complex in St. Louis, torn down in the 1970s.
In contrast to their public housing counterparts, commercially developed high-rise apartment buildings continue to flourish in cities around the country largely due to high land prices and the housing boom of the 2000s. The Upper East Side in New York City, featuring high-rise apartments, is the wealthiest urban neighborhood in the United States.
Currently, the tallest residential building in the world is Central Park Tower in Midtown Manhattan, with a height of 472 m (1,550 ft).
Oceania
High-rise living in Australia was limited to the Sydney CBD until the 1960s, when a short-lived fashion saw public housing tenants placed in new high-rise developments, especially in Sydney and Melbourne. One group of 16-storey blocks was constructed on behalf of the Royal Australian Navy and was available to sailors and their families for accommodation. Due to social problems within these blocks, the Navy left, the Department of Housing took charge, and the flats were let to low-income and immigrant families. During the 1980s many people escaping communism in Eastern Bloc countries were housed in these buildings. Developers have enthusiastically adopted the term "apartment" for new high-rise blocks, perhaps to avoid the stigma still attached to housing commission flats.
Deck access
Deck access refers to flats that are entered from a walkway open to the elements, as opposed to flats accessed from fully enclosed internal corridors. Deck-access blocks of flats are usually fairly low-rise structures. The decks can vary from simple walkways, which may be covered or uncovered, to decks wide enough for small vehicles. The best-known example of deck-access flats in the UK is Park Hill, Sheffield, where the decks are wide enough to allow electric vehicles; the design was inspired by the French modernist architect Le Corbusier, particularly his Unité d'habitation in Marseille.
Green tower blocks
Green tower blocks incorporate living plants, green roofs or solar panels on their roofs, or other environmentally friendly design features.
| Technology | Mixed-use buildings | null |
3511290 | https://en.wikipedia.org/wiki/Strike%20and%20dip | Strike and dip | In geology, strike and dip is a measurement convention used to describe the plane orientation or attitude of a planar geologic feature. A feature's strike is the azimuth of an imagined horizontal line across the plane, and its dip is the angle of inclination (or depression angle) measured downward from horizontal. They are used together to measure and document a structure's characteristics for study or for use on a geologic map. A feature's orientation can also be represented by dip and dip direction, using the azimuth of the dip rather than the strike value. Linear features are similarly measured with trend and plunge, where "trend" is analogous to dip direction and "plunge" is the dip angle.
Strike and dip are measured using a compass and a clinometer. A compass is used to measure the feature's strike by holding the compass horizontally against the feature. A clinometer measures the feature's dip by recording the inclination perpendicular to the strike. These can be done separately, or together using a tool such as a Brunton transit or a Silva compass.
Any planar feature can be described by strike and dip, including sedimentary bedding, fractures, faults, joints, cuestas, igneous dikes and sills, metamorphic foliation and fabric, etc. Observations about a structure's orientation can lead to inferences about certain parts of an area's history, such as movement, deformation, or tectonic activity.
Elements
When measuring or describing the attitude of an inclined feature, two quantities are needed: the angle at which the slope descends, or dip, and the direction of descent, which can be represented by strike or dip direction.
Dip
Dip is the inclination of a given feature, and is measured from the steepest angle of descent of a tilted bed or feature relative to a horizontal plane. True dip is always perpendicular to the strike. It is written as a number (between 0° and 90°) indicating the angle in degrees below horizontal. It can be accompanied by the rough direction of dip (N, SE, etc.) to avoid ambiguity. The direction can sometimes be omitted, as long as the convention used (such as the right-hand rule) is known.
A feature that is completely flat will have the same dip value over the entire surface. The dip of a curved feature, such as an anticline or syncline, will change at different points along the feature and be flat on any fold axis.
Strike
Strike is a representation of the orientation of a tilted feature. The strike line of a bed, fault, or other planar feature, is a line representing the intersection of that feature with a horizontal plane. The strike of the feature is the azimuth (compass direction) of the strike line. This can be represented by either a quadrant compass bearing (such as N25°E), or as a single three-digit number in terms of the angle from true north (for example, N25°E would simply become 025 or 025°).
A feature's orientation can also be represented by its dip direction. Rather than the azimuth of a horizontal line on the plane, the azimuth of the steepest line on the plane is used. The direction of dip can be visualized as the direction water would flow if poured onto a plane.
Apparent dip
While true dip is measured perpendicular to the strike, apparent dip refers to an observed dip which is not perpendicular to the strike line. This can be seen in outcroppings or cross-sections which do not run parallel to the dip direction. Apparent dip is always shallower than the true dip. If the strike is known, the apparent dip or true dip can be calculated using trigonometry:

$$\tan(\alpha) = \tan(\delta)\,\sin(\beta)$$

where δ is the true dip, α is the apparent dip, and β is the angle between the strike direction and the apparent dip direction, all in degrees.
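As an illustration, here is a minimal Python sketch of this relationship; the function name and the example values are ours, not from the source:

```python
import math

def apparent_dip(true_dip_deg, beta_deg):
    """Apparent dip (degrees) from the true dip and the angle beta
    between the strike direction and the section direction, using
    tan(alpha) = tan(delta) * sin(beta)."""
    delta = math.radians(true_dip_deg)
    beta = math.radians(beta_deg)
    return math.degrees(math.atan(math.tan(delta) * math.sin(beta)))

# A bed with a true dip of 30 degrees, seen in a cross-section
# oriented 40 degrees from strike:
print(round(apparent_dip(30, 40), 1))   # ~20.4 (shallower than true dip)
# A section perpendicular to strike recovers the true dip:
print(round(apparent_dip(30, 90), 1))   # 30.0
```

Note that β = 90° (a section perpendicular to strike) recovers the true dip, and β = 0° (a section along strike) gives an apparent dip of zero, consistent with the definitions above.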
Trend and plunge
The measurement of a linear feature's orientation is similar to strike and dip, though the terminology differs because "strike" and "dip" are reserved for planes; linear features use trend and plunge instead. Plunge, or angle of plunge, is the inclination of the feature measured downward relative to horizontal. Trend is the feature's azimuth, measured in the direction of plunge. A horizontal line has a plunge of 0°, and a vertical line has a plunge of 90°. A linear feature which lies within a plane can also be measured by its rake (or pitch). Unlike trend, which is measured from north, rake is the angle measured within the plane from the strike line.
Maps and cross-sections
On geologic maps, strike and dip can be represented by a T symbol with a number next to it. The longer line represents strike, and is in the same orientation as the strike angle. Dip is represented by the shorter line, which is perpendicular to the strike line in the downhill direction. The number gives the dip angle, in degrees, below horizontal, and often does not have the degree symbol. Vertical and horizontal features are not marked with numbers, and instead use their own symbols. Beds dipping vertically have the dip line on both sides of the strike, and horizontal bedding is denoted by a cross within a circle.
Interpretation of strike and dip is a part of creating a cross-section of an area. Strike and dip information recorded on a map can be used to reconstruct various structures, determine the orientation of subsurface features, or detect the presence of anticline or syncline folds.
Measurement
Conventions
There are a few conventions geologists use when measuring a feature's azimuth. When using the strike, two directions 180° apart can be measured, either clockwise or counterclockwise from north. One common convention is the "right-hand rule" (RHR), under which the plane dips down towards the right when facing the strike direction, or equivalently the dip direction is 90° clockwise of the strike direction. However, in the UK, the right-hand rule has sometimes been specified so that the dip direction is instead counterclockwise from the strike. Some geologists prefer to use whichever strike direction is less than 180°. Others prefer the "dip-direction, dip" (DDD) convention instead of using the strike direction. Strike and dip are generally written as 'strike/dip' or 'dip direction,dip', with the degree symbol typically omitted. The general alphabetical dip direction (N, SE, etc.) can be added to reduce ambiguity. For a feature with a dip of 45° and a dip direction of 75°, the strike and dip can be written as 345/45 NE, 165/45 NE, or 075,45. The compass quadrant direction for the strike can also be used in place of the azimuth, written as S15E or N15W.
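To make the RHR bookkeeping concrete, here is a small Python sketch of the conversion described above; the helper names are hypothetical:

```python
def rhr_strike_to_dip_direction(strike_deg):
    # Under the right-hand rule, the dip direction lies
    # 90 degrees clockwise of the strike azimuth.
    return (strike_deg + 90) % 360

def dip_direction_to_rhr_strike(dip_dir_deg):
    # The inverse: the RHR strike is 90 degrees counterclockwise
    # of the dip direction.
    return (dip_dir_deg - 90) % 360

# The worked example from the text: dip direction 075 <-> RHR strike 345
print(dip_direction_to_rhr_strike(75))   # 345
print(rhr_strike_to_dip_direction(345))  # 75
```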
Tools
Strike and dip are measured in the field using a compass and a clinometer. A compass is used to measure the azimuth of the strike, and the clinometer measures the inclination of the dip. Dr. E. Clar first described the modern compass-clinometer in 1954, and some such instruments continue to be referred to as Clar compasses. Compasses in use today include the Brunton compass and the Silva compass.
Smartphone apps which can make strike and dip measurements are also available, including apps such as GeoTools. These apps can make use of the phone's internal accelerometer to provide orientation measurements. Combined with the GPS functionality of such devices, this allows readings to be recorded and later downloaded onto a map.
When studying subsurface features, a dipmeter can be used. A dipmeter is a tool that is lowered into a borehole, and has arms radially attached which can detect the microresistivity of the rock. By recording the times at which the rock's properties change across each of the sensors, the strike and dip of subsurface features can be worked out.
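The geometry underlying such reconstructions is the classic "three-point problem": given three points known to lie on the same planar feature, the plane's orientation can be recovered from its normal vector. Below is a hedged Python sketch under assumed conventions (coordinates given as east, north, up in metres; the function name and sample points are ours):

```python
import numpy as np

def strike_and_dip(p1, p2, p3):
    """RHR strike and dip of the plane through three non-collinear
    points, each given as (east, north, up) coordinates in metres."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)      # normal to the plane
    if n[2] < 0:                        # force the normal to point upward
        n = -n
    nx, ny, nz = n
    # Dip is the angle between the normal and vertical.
    dip = np.degrees(np.arctan2(np.hypot(nx, ny), nz))
    # The horizontal projection of the upward normal points in the
    # dip (downhill) direction; azimuth measured clockwise from north.
    dip_dir = np.degrees(np.arctan2(nx, ny)) % 360
    strike = (dip_dir - 90) % 360       # right-hand-rule strike
    return strike, dip

# Three hypothetical picks on one bed: the surface drops 50 m
# over 100 m northward, so it dips ~26.6 degrees due north.
print(strike_and_dip((0, 0, 0), (100, 0, 0), (0, 100, -50)))
# -> (270.0, 26.56...)  i.e. strike 270 (RHR), dipping north
```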
| Physical sciences | Structural geology | Earth science |
3512103 | https://en.wikipedia.org/wiki/Theory%20%28mathematical%20logic%29 | Theory (mathematical logic) | In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element $\varphi \in T$ of a deductively closed theory $T$ is then called a theorem of the theory. In many deductive systems there is usually a subset $\Sigma \subseteq T$ that is called "the set of axioms" of the theory $T$, in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
General theories (as expressed in formal language)
When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate.
The construction of a theory begins by specifying a definite non-empty conceptual class $\mathcal{E}$, the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory—to distinguish them from other statements that may be derived from them.
A theory $\mathcal{T}$ is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to $\mathcal{T}$ are called the elementary theorems of $\mathcal{T}$ and are said to be true. In this way, a theory can be seen as a way of designating a subset of $\mathcal{E}$ that contains only statements that are true.
This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to $\mathcal{T}$. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.
Subtheories and extensions
A theory $S$ is a subtheory of a theory $T$ if $S$ is a subset of $T$. If $T$ is a subset of $S$, then $S$ is called an extension or a supertheory of $T$.
Deductive theories
A theory $\mathcal{T}$ is said to be a deductive theory if $\mathcal{T}$ is an inductive class, which is to say that its content is based on some formal deductive system and that some of its elementary statements are taken as axioms. In a deductive theory, any sentence that is a logical consequence of one or more of the axioms is also a sentence of that theory. More formally, if $\vdash$ is a Tarski-style consequence relation, then $\mathcal{T}$ is closed under $\vdash$ (and so each of its theorems is a logical consequence of its axioms) if and only if, for all sentences $\varphi$ in the language of the theory $\mathcal{T}$, if $\mathcal{T} \vdash \varphi$, then $\varphi \in \mathcal{T}$; or, equivalently, if $\mathcal{T}'$ is a finite subset of $\mathcal{T}$ (possibly the set of axioms of $\mathcal{T}$ in the case of finitely axiomatizable theories) and $\mathcal{T}' \vdash \varphi$, then $\varphi \in \mathcal{T}$, and therefore $\varphi$ is a theorem of $\mathcal{T}$.
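As a toy illustration of closure under a consequence relation, the following Python sketch computes the closure of a set of sentences under a single inference rule, modus ponens. The string/tuple encoding is ours and is not a full first-order proof system; it only shows the fixpoint character of "closed under $\vdash$":

```python
def close_under_modus_ponens(axioms):
    """Toy deductive closure: sentences are strings, and an implication
    A -> B is encoded as the tuple ('->', A, B).  Repeatedly apply
    modus ponens until no new theorems appear (a fixpoint)."""
    theorems = set(axioms)
    changed = True
    while changed:
        changed = False
        for s in list(theorems):
            # If both "A -> B" and "A" are theorems, add "B".
            if isinstance(s, tuple) and s[0] == '->' and s[1] in theorems:
                if s[2] not in theorems:
                    theorems.add(s[2])
                    changed = True
    return theorems

# Axioms: p, p -> q, q -> r.  The closure also contains q and r.
print(close_under_modus_ponens({'p', ('->', 'p', 'q'), ('->', 'q', 'r')}))
```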
Consistency and completeness
A syntactically consistent theory is a theory from which not every sentence in the underlying language can be proven (with respect to some deductive system, which is usually clear from context). In a deductive system (such as first-order logic) that satisfies the principle of explosion, this is equivalent to requiring that there is no sentence φ such that both φ and its negation can be proven from the theory.
A satisfiable theory is a theory that has a model. This means there is a structure M that satisfies every sentence in the theory. Any satisfiable theory is syntactically consistent, because the structure satisfying the theory will satisfy exactly one of φ and the negation of φ, for each sentence φ.
A consistent theory is sometimes defined to be a syntactically consistent theory, and sometimes defined to be a satisfiable theory. For first-order logic, the most important case, it follows from the completeness theorem that the two meanings coincide. In other logics, such as second-order logic, there are syntactically consistent theories that are not satisfiable, such as ω-inconsistent theories.
A complete consistent theory (or just a complete theory) is a consistent theory $T$ such that for every sentence φ in its language, either φ is provable from $T$ or $T \cup \{φ\}$ is inconsistent. For theories closed under logical consequence, this means that for every sentence φ, either φ or its negation is contained in the theory. An incomplete theory is a consistent theory that is not complete.
(See also ω-consistent theory for a stronger notion of consistency.)
Interpretation of a theory
An interpretation of a theory is the relationship between a theory and some subject matter when there is a many-to-one correspondence between certain elementary statements of the theory, and certain statements related to the subject matter. If every elementary statement in the theory has a correspondent it is called a full interpretation, otherwise it is called a partial interpretation.
Theories associated with a structure
Each structure has several associated theories. The complete theory of a structure A is the set of all first-order sentences over the signature of A that are satisfied by A. It is denoted by Th(A). More generally, the theory of K, a class of σ-structures, is the set of all first-order σ-sentences that are satisfied by all structures in K, and is denoted by Th(K). Clearly Th(A) = Th({A}). These notions can also be defined with respect to other logics.
For each σ-structure A, there are several associated theories in a larger signature σ' that extends σ by adding one new constant symbol for each element of the domain of A. (If the new constant symbols are identified with the elements of A that they represent, σ' can be taken to be σ ∪ A.) The cardinality of σ' is thus the larger of the cardinality of σ and the cardinality of A.
The diagram of A consists of all atomic or negated atomic σ'-sentences that are satisfied by A and is denoted by diagA. The positive diagram of A is the set of all atomic σ'-sentences that A satisfies. It is denoted by diag+A. The elementary diagram of A is the set eldiagA of all first-order σ'-sentences that are satisfied by A or, equivalently, the complete (first-order) theory of the natural expansion of A to the signature σ'.
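A minimal sketch of the (atomic) diagram for a toy structure, the domain {0, 1} with the relation ≤, is given below. The string encoding of sentences is ours, and equality atoms are omitted for brevity:

```python
from itertools import product

# A tiny structure A: domain {0, 1} with the binary relation <=.
domain = [0, 1]
leq = {(a, b) for a, b in product(domain, repeat=2) if a <= b}

# diag(A): atomic or negated atomic sentences over the expanded
# signature, with a new constant c_a for each element a of the domain.
diagram = []
for a, b in product(domain, repeat=2):
    if (a, b) in leq:
        diagram.append(f"c_{a} <= c_{b}")
    else:
        diagram.append(f"not (c_{a} <= c_{b})")

print(diagram)
# ['c_0 <= c_0', 'c_0 <= c_1', 'not (c_1 <= c_0)', 'c_1 <= c_1']
```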
First-order theories
A first-order theory is a set of sentences in a first-order formal language.
Derivation in a first-order theory
There are many formal derivation ("proof") systems for first-order logic. These include Hilbert-style deductive systems, natural deduction, the sequent calculus, the tableaux method and resolution.
Syntactic consequence in a first-order theory
A formula A is a syntactic consequence of a first-order theory $T$ if there is a derivation of A using only formulas in $T$ as non-logical axioms. Such a formula A is also called a theorem of $T$. The notation "$T \vdash A$" indicates that A is a theorem of $T$.
Interpretation of a first-order theory
An interpretation of a first-order theory provides a semantics for the formulas of the theory. An interpretation is said to satisfy a formula if the formula is true according to the interpretation. A model of a first-order theory $T$ is an interpretation in which every formula of $T$ is satisfied.
First-order theories with identity
A first-order theory $T$ is a first-order theory with identity if $T$ includes the identity relation symbol "=" and the reflexivity and substitution axiom schemes for this symbol.
Topics related to first-order theories
Compactness theorem
Consistent set
Deduction theorem
Enumeration theorem
Lindenbaum's lemma
Löwenheim–Skolem theorem
Examples
One way to specify a theory is to define a set of axioms in a particular language. The theory can be taken to include just those axioms, or their logical or provable consequences, as desired. Theories obtained this way include ZFC and Peano arithmetic.
A second way to specify a theory is to begin with a structure, and let the theory be the set of sentences that are satisfied by the structure. This is a method for producing complete theories through the semantic route, with examples including the set of true sentences under the structure (N, +, ×, 0, 1, =), where N is the set of natural numbers, and the set of true sentences under the structure (R, +, ×, 0, 1, =), where R is the set of real numbers. The first of these, called the theory of true arithmetic, cannot be written as the set of logical consequences of any enumerable set of axioms.
The theory of (R, +, ×, 0, 1, =) was shown by Tarski to be decidable; it is the theory of real closed fields (see Decidability of first-order theories of the real numbers for more).
| Mathematics | Model theory | null |
3512524 | https://en.wikipedia.org/wiki/Respiratory%20sounds | Respiratory sounds | Respiratory sounds, also known as lung sounds or breath sounds, are the specific sounds generated by the movement of air through the respiratory system. These may be easily audible or identified through auscultation of the respiratory system through the lung fields with a stethoscope as well as from the spectral characteristics of lung sounds. These include normal breath sounds and added sounds such as crackles, wheezes, pleural friction rubs, stertor, and stridor.
Description and classification of the sounds usually involve auscultation of the inspiratory and expiratory phases of the breath cycle, noting both the pitch (typically described as low (≤200 Hz), medium or high (≥400 Hz)) and intensity (soft, medium, loud or very loud) of the sounds heard.
Normal breath sounds
Normal breath sounds are classified as vesicular, bronchovesicular, bronchial or tracheal based on the anatomical location of auscultation. Normal breath sounds can also be identified by patterns of sound duration and the quality of the sound.
Abnormal breath sounds
Common types of abnormal breath sounds include the following:
Rales: Small clicking, bubbling, or rattling sounds in the lungs, heard when a person inhales. They are believed to occur when air opens closed alveoli. Rales can also be described as moist, dry, fine, or coarse.
Rhonchi are coarse rattling respiratory sounds, usually caused by secretions in bronchial airways. The sounds resemble snoring. "Rhonchi" is the plural form of the singular word "rhonchus".
Stridor: Wheeze-like sound heard when a person breathes. Usually it is due to a blockage of airflow in the windpipe (trachea) or in the back of the throat.
Wheezing: High-pitched sounds produced by narrowed airways. They are most often heard when a person breathes out (exhales). Wheezing and other abnormal sounds can sometimes be heard without a stethoscope.
Other tests of auscultation
Pectoriloquy, egophony and bronchophony are tests of auscultation that utilize the phenomenon of vocal resonance. Clinicians can utilize these tests during a physical exam to screen for pathological lung disease. For example, in whispered pectoriloquy, the person being examined whispers a two syllable number as the clinician listens over the lung fields. The whisper is not normally heard over the lungs, but if heard may be indicative of pulmonary consolidation in that area. This is because sound travels differently through denser (fluid or solid) media than the air that should normally be predominant in lung tissue. In egophony, the person being examined continually speaks the English long-sound "E" (/i/). The lungs are usually air filled, but if there is an abnormal solid component due to infection, fluid, or tumor, the higher frequencies of the "E" sound will be diminished. This changes the sound produced, from a long "E" sound to a long "A" sound (/eɪ/).
History
In 1957, Robertson and Coope proposed the two main categories of adventitious (added) lung sounds. Those categories were "Continuous" and "Interrupted" (or non-continuous). In 1976, the International Lung Sound Association simplified the sub-categories as follows:
Continuous
Wheezes (>400 Hz)
Rhonchi (<200 Hz)
Discontinuous
Fine crackles
Coarse crackles
Several sources also refer to "medium" crackles, a crackling sound that seems to fall between coarse and fine crackles. Crackles are defined as discrete sounds that last less than 250 ms, while continuous sounds (rhonchi and wheezes) last approximately 250 ms or longer. Rhonchi are usually caused by a stricture or blockage in the upper airway. These are different from stridor.
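As a toy illustration of these thresholds, the following Python sketch classifies an adventitious sound from its duration and dominant pitch. The function and its cut-offs simply restate the figures quoted above (continuous ≈ 250 ms or more; wheezes >400 Hz; rhonchi <200 Hz) and are not a clinical tool:

```python
def classify_adventitious_sound(duration_ms, pitch_hz):
    """Toy classifier using the thresholds quoted in the text:
    sounds shorter than ~250 ms are discontinuous (crackles);
    continuous sounds are wheezes if high-pitched (>400 Hz)
    and rhonchi if low-pitched (<200 Hz)."""
    if duration_ms < 250:
        return "crackle (fine or coarse)"
    if pitch_hz > 400:
        return "wheeze"
    if pitch_hz < 200:
        return "rhonchus"
    return "continuous sound of intermediate pitch"

print(classify_adventitious_sound(15, 650))    # crackle (fine or coarse)
print(classify_adventitious_sound(400, 500))   # wheeze
print(classify_adventitious_sound(400, 150))   # rhonchus
```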
| Biology and health sciences | Diagnostics | Health |