Technological determinism
Technological determinism is a reductionist theory that assumes a society's technology progresses by following its own internal logic of efficiency, while determining the development of the society's social structure and cultural values. The term is believed to have originated with Thorstein Veblen (1857–1929), an American sociologist and economist. The most radical technological determinist in the United States in the 20th century was most likely Clarence Ayres, a follower of Thorstein Veblen and John Dewey. William Ogburn was also known for his radical technological determinism and his theory of cultural lag.
Origin
The origins of technological determinism as a formal concept are often traced to Thorstein Veblen (1857–1929), an influential American sociologist and economist. Veblen, known for his work on social and economic issues, introduced ideas that portrayed technology as a powerful, autonomous force capable of shaping societal norms and structures. He argued that the development and use of machinery exerted an independent influence on human thought and behavior, notably asserting that "the machine throws out anthropomorphic habits of thought." This notion laid the foundation for technological determinism by suggesting that technology inherently transforms society by reshaping patterns of thought and behavior.
Historical Context and Influences
During Veblen's time, rapid industrialization and advancements in technology were radically altering American society. Innovations in manufacturing and transportation, such as the assembly line and railroads, demonstrated technology’s potential to reshape economic and social structures. These changes helped popularize the idea that technology could independently drive societal evolution, creating the conditions for Veblen's ideas to resonate widely.
Influence of Karl Marx and Expansion by Clarence Ayres
Although Veblen is credited with coining the core ideas behind technological determinism, the influence of Karl Marx on these ideas is also significant. Marx argued that technology drives historical change by shaping the "material base" of society. For instance, he suggested that the railway in colonial India would challenge and erode the caste system by introducing new economic activities and altering social hierarchies. Later, Clarence Ayres, a 20th-century economist inspired by Veblen, expanded on these ideas by introducing the concept of "technological drag." According to Ayres, technology progresses as a dynamic, self-generating force, while traditional institutions often lag, resisting the transformative potential of technological change. Ayres’ theory further solidified technological determinism, emphasizing the inevitable clash between technological progress and social conservatism.
Explanation
Technological determinism seeks to show technical developments, media, or technology as a whole, as the key mover in history and social change. It is a theory subscribed to by "hyperglobalists" who claim that as a consequence of the wide availability of technology, accelerated globalization is inevitable. Therefore, technological development and innovation become the principal motor of social, economic or political change.
Strict adherents to technological determinism do not believe the influence of technology differs based on how much a technology is or can be used. Instead of considering technology as part of a larger spectrum of human activity, technological determinism sees technology as the basis for all human activity.
Technological determinism has been summarized as 'the belief in technology as a key governing force in society' (Merritt Roe Smith) and 'the idea that technological development determines social change' (Bruce Bimber). It changes the way people think and how they interact with others, and can be described as 'a three-word logical proposition: "Technology determines history"' (Rosalind H. Williams). It is 'the belief that social progress is driven by technological innovation, which in turn follows an "inevitable" course.' This 'idea of progress' or 'doctrine of progress' centers on the idea that social problems can be solved by technological advancement, and that this is how society moves forward. Technological determinists believe that "'You can't stop progress', implying that we are unable to control technology" (Lelia Green). This suggests that we are somewhat powerless and that society allows technology to drive social change because "societies fail to be aware of the alternatives to the values embedded in it [technology]" (Merritt Roe Smith).
Technological determinism has been defined as an approach that identifies technology, or technological advances, as the central causal element in processes of social change, holding in effect that "technological progress equals social progress." Key notions of this theory fall into two parts. The first is that the development of technology itself proceeds apart from social and political factors, arising from "the ways of inventors, engineers, and designers following an internal, technical logic that has nothing to do with social relationships". The second is that as a technology is stabilized, its design tends to dictate users' behaviors, consequently resulting in social change.
As technology changes, the ways in which it is utilized and incorporated into individuals' daily lives change as well, highlighting how technology ultimately determines societal growth through its influence on relations and ways of living within a culture. To illustrate, "the invention of the wheel revolutionized human mobility, allowing humans to travel greater distances and carry greater loads with them". This technological advancement also led to interactions between different cultural groups and advanced trade, and thus affected the size of, and relations within and between, different networks. Other examples include the invention of language, which expanded the modes of communication between individuals, and the introduction of bookkeeping and written documentation, which affected the circulation of knowledge and had streamlined effects on socioeconomic and political systems as a whole. As Dusek (2006) notes, "culture and society cannot affect the direction of technology…[and] as technology develops and changes, the institutions in the rest of society change, as does the art and religion of a society." Thus, technological determinism holds that technological advances and social relations are inevitably tied, with a change in either affecting the other through normalization.
This stance, however, ignores the social and cultural circumstances in which the technology was developed. Sociologist Claude Fischer (1992) characterized the most prominent forms of technological determinism as "billiard ball" approaches, in which technology is seen as an external force introduced into a social situation, producing a series of ricochet effects.
Rather than acknowledging that a society or culture interacts with and even shapes the technologies that are used, the technological determinist view holds that "the uses made of technology are largely determined by the structure of the technology itself, that is, that its functions follow from its form" (Neil Postman). This is not, however, the sole version of technological determinism. Smith and Marx's (1998) notion of "hard" determinism states that once a technology is introduced into a culture, what follows is the inevitable development of that technology. In this view, the role of "agency (the power to affect change) is imputed on the technology itself, or some of its intrinsic attributes; thus the invention of technology leads to a situation of inescapable necessity."
The other view follows what Smith and Marx (1998) dictate as "soft" determinism, where the development of technology is also dependent on social context, affecting how it is adopted into a culture, "and, if the technology is adopted, the social context will have important effects on how the technology is used and thus on its ultimate impact".
For example, we could examine the spread of mass-produced knowledge through the role of the printing press in the Protestant Reformation. Because of the urgency from the protestant side to get the reform off the ground before the church could react, "early Lutheran leaders, led by Luther himself, wrote thousands of anti-papal pamphlets in the Reformation's first decades and these works spread rapidly through reprinting in various print shops throughout central Europe". As such the urgency of the socio-political context to utilize such technology in the beginning of its invention caused its fast adoption and normalization into European culture. We could view its uses in its popularization – for political propaganda purposes – in line with the continued traditions of newspapers in modern times, as well as newly adopted uses for other printed text, adapting to change in a social context such as an emphasis on leisurely activities such as reading. This follows the soft deterministic view because the technological invention – the printing press – was quickly adopted because of the socio-political context, and because of its fast integration into society, has impacted and continues to impact how society operates.
Hard and soft determinism
In examining determinism, “hard determinism” can be contrasted with “soft determinism”. A compatibilist says that it is possible for free will and determinism to exist in the world together, while an incompatibilist would say that they cannot and there must be one or the other. Those who support determinism can be further divided.
“Hard determinists” view technology as developing independently of social concerns. They say that technology creates a set of powerful forces that act to regulate our social activity and its meaning. According to this view of determinism, we organize ourselves to meet the needs of technology, and the outcome of this organization is beyond our control; we do not have the freedom to choose the outcome (autonomous technology). The 20th-century French philosopher and social theorist Jacques Ellul could be said to be a hard determinist and proponent of autonomous technique (technology). In his 1954 work The Technological Society, Ellul essentially posits that technology, by virtue of its power through efficiency, determines which social aspects are best suited for its own development through a process of natural selection. A social system whose values, morals, and philosophy are most conducive to the advancement of technology enhances its power and spreads at the expense of social systems whose values, morals, and philosophy are less promoting of technology. While geography, climate, and other "natural" factors largely determined the parameters of social conditions for most of human history, technology has recently become the dominant objective and determining factor, largely owing to forces unleashed by the Industrial Revolution.
“Soft determinism”, as the name suggests, is a more passive view of the way technology interacts with socio-political situations. Soft determinists still subscribe to the view that technology is the guiding force in our evolution, but maintain that we have a chance to make decisions regarding the outcomes of a situation. This is not to say that free will exists, but that the possibility exists for us to roll the dice and see what the outcome is. A slightly different variant of soft determinism is the 1922 technology-driven theory of social change proposed by William Fielding Ogburn, in which society must adjust to the consequences of major inventions, but often does so only after a period of cultural lag.
Criticism
Skepticism about technological determinism emerged alongside increased pessimism about techno-science in the mid-20th century, in particular around the use of nuclear energy in the production of nuclear weapons, Nazi human experimentation during World War II, and the problems of economic development in the Third World. As a direct consequence, desire for greater control of the course of development of technology gave rise to disenchantment with the model of technological determinism in academia.
Modern theorists of technology and society no longer consider technological determinism to be a very accurate view of the way in which we interact with technology, even though determinist assumptions and language fairly saturate the writings of many boosters of technology, the business pages of many popular magazines, and much reporting on technology. Instead, research in science and technology studies, the social construction of technology, and related fields has emphasized more nuanced views that resist easy causal formulations. Such work emphasizes that "The relationship between technology and society cannot be reduced to a simplistic cause-and-effect formula. It is, rather, an 'intertwining'", whereby technology does not determine but "operates, and [is] operated upon in a complex social field" (Murphie and Potts).
Timothy Snyder approached technological determinism through his concept of the 'politics of inevitability': a stance, utilized by politicians, in which society is promised that the future will simply be more of the present, thereby removing responsibility. The concept can be applied to free markets, the development of nation states, and technological progress.
In his article "Subversive Rationalization: Technology, Power and Democracy", Andrew Feenberg argues that technological determinism is not a well-founded concept, illustrating that two of the founding theses of determinism are easily questionable, and in doing so calls for what he terms democratic rationalization (Feenberg 210–212).
Prominent opposition to technologically determinist thinking has emerged within work on the social construction of technology (SCOT). SCOT research, such as that of MacKenzie and Wajcman (1997), argues that the path of innovation and its social consequences are strongly, if not entirely, shaped by society itself through the influence of culture, politics, economic arrangements, regulatory mechanisms, and the like. In its strongest form, verging on social determinism, "What matters is not the technology itself, but the social or economic system in which it is embedded" (Langdon Winner).
In his influential but contested (see Woolgar and Cooper, 1999) article "Do Artifacts Have Politics?", Langdon Winner illustrates not a form of determinism but the various sources of the politics of technologies. Those politics can stem from the intentions of the designer and the culture of the society in which a technology emerges, or can stem from the technology itself, a "practical necessity" for it to function. For instance, New York City urban planner Robert Moses is purported to have built Long Island's parkway overpasses too low for buses to pass, in order to keep minorities away from the island's beaches, an example of externally inscribed politics. On the other hand, an authoritarian command-and-control structure is a practical necessity of a nuclear power plant if radioactive waste is not to fall into the wrong hands. As such, Winner succumbs neither to technological determinism nor to social determinism. The source of a technology's politics can be determined only by carefully examining its features and history.
Although "The deterministic model of technology is widely propagated in society" (Sarah Miller), it has also been widely questioned by scholars. Lelia Green explains that, "When technology was perceived as being outside society, it made sense to talk about technology as neutral". Yet, this idea fails to take into account that culture is not fixed and society is dynamic. When "Technology is implicated in social processes, there is nothing neutral about society" (Lelia Green). This confirms one of the major problems with "technological determinism and the resulting denial of human responsibility for change. There is a loss of human involvement that shape technology and society" (Sarah Miller).
Another conflicting idea is that of technological somnambulism, a term coined by Winner in his essay "Technology as Forms of Life". Winner wonders whether or not we are simply sleepwalking through our existence with little concern or knowledge as to how we truly interact with technology. In this view, it is still possible for us to wake up and once again take control of the direction in which we are traveling (Winner 104). However, it requires society to adopt Ralph Schroeder's claim that, "users don't just passively consume technology, but actively transform it".
In opposition to technological determinism are those who subscribe to the belief of social determinism and postmodernism. Social determinists believe that social circumstances alone select which technologies are adopted, with the result that no technology can be considered "inevitable" solely on its own merits. Technology and culture are not neutral and when knowledge comes into the equation, technology becomes implicated in social processes. The knowledge of how to create, enhance, and use technology is socially bound knowledge. Postmodernists take another view, suggesting that what is right or wrong is dependent on circumstance. They believe technological change can have implications on the past, present and future. While they believe technological change is influenced by changes in government policy, society and culture, they consider the notion of change to be a paradox, since change is constant.
Media and cultural studies theorist Brian Winston, in response to technological determinism, developed a model for the emergence of new technologies which is centered on the Law of the suppression of radical potential. In two of his books – Technologies of Seeing: Photography, Cinematography and Television (1997) and Media Technology and Society (1998) – Winston applied this model to show how technologies evolve over time, and how their 'invention' is mediated and controlled by society and societal factors which suppress the radical potential of a given technology.
Notable technological determinists
Some interpret Karl Marx as advocating technological determinism, with such statements as "The Handmill gives you society with the feudal lord: the steam-mill, society with the industrial capitalist" (The Poverty of Philosophy, 1847), but others argue that Marx was not a determinist.
Technological determinist Walter J. Ong reviews the societal transition from an oral culture to a written culture in his work Orality and Literacy: The Technologizing of the Word (1982). He asserts that this particular development is attributable to the use of new technologies of literacy (particularly print and writing) to communicate thoughts which could previously only be verbalized. He furthers this argument by claiming that writing is purely context dependent, as it is a "secondary modelling system" (8). Reliant upon the earlier primary system of spoken language, writing manipulates the potential of language, as it depends purely upon the visual sense to communicate the intended information. Furthermore, while the rather static technology of literacy distinctly limits the usage and influence of knowledge, it unquestionably affects the evolution of society. In fact, Ong asserts that "more than any other single invention, writing has transformed human consciousness" (Ong 1982: 78).
Media determinism as a form of technological determinism
Media determinism is a form of technological determinism, a philosophical and sociological position which posits the power of the media to impact society. Two foundational media determinists are the Canadian scholars Harold Innis and Marshall McLuhan. One of the best examples of technological determinism in media theory is McLuhan's theory "the medium is the message", together with the ideas of his mentor Harold Adams Innis. Both theorists saw media as the essence of civilization. The association of different media with particular mental consequences by McLuhan and others can be seen as related to technological determinism; it is this variety of determinism that is referred to as media determinism. According to McLuhan, there is an association between communications media/technology and language; similarly, Benjamin Lee Whorf argued that language shapes our perception of thinking (linguistic determinism). For McLuhan, media is a more powerful and explicit determinant than the more general concept of language. McLuhan was not necessarily a hard determinist, however. As a more moderate version of media determinism, he proposed that our use of particular media may have subtle influences on us, but, more importantly, that it is the social context of use that is crucial.
Container
A container is any receptacle or enclosure for holding a product used in storage, packaging, and transportation, including shipping.
Things kept inside a container are protected on several sides by its structure. The term is most frequently applied to devices made from durable materials that are often partly or completely rigid.
A container can also be considered as a basic tool, consisting of any device creating a partially or fully enclosed space that can be used to contain, store, and transport objects or materials.
History
Humans have used containers for at least 100,000 years, and possibly for millions of years. The first containers were probably invented for storing food, allowing early humans to preserve more of their food for a longer time, to carry it more easily, and also to protect it from other animals. The development of food storage containers was "of immense importance to the evolving human populations", and "was a totally innovative behavior" not seen in other primates. The earliest containers were probably objects found in nature, such as hollow gourds, of which primitive examples have been found in cultures such as those of the Tharu people and Native Hawaiian people. These were followed by woven baskets, carved wood, and pottery.
Containers thereafter continued to develop along with related advances in human technology, and with the development of new materials and new means of manufacture. Early glass bottles were produced by the Phoenicians; specimens of Phoenician translucent and transparent glass bottles have been found in Cyprus and Rhodes generally varying in length from three to six inches. These Phoenician examples from the first millennium BC were thought to have been used to contain perfume. The Romans learned glass-making from the Phoenicians and produced many extant examples of fine glass bottles, mostly relatively small. By the beginning of the eighteenth century, sizes for retail containers such as glass bottles had become standardized for their markets.
In 1810, Frenchman Philippe de Girard came to London and used British merchant Peter Durand as an agent to patent his own idea for a process for making tin cans. The canning concept was based on experimental food preservation work in glass containers the year before by the French inventor Nicolas Appert. Durand did not pursue food canning, but, in 1812, sold his patent to two Englishmen, Bryan Donkin and John Hall, who refined the process and product and set up the world's first commercial canning factory on Southwark Park Road, London. By 1813 they were producing their first tin canned goods for the Royal Navy.
For transportation of goods on a larger scale, larger containers remained a problem, as customs officials inspecting imports had to deal with a lack of standardization in this field, and because predominantly wooden containers in use well into the twentieth century were prone to leaking or breaking. The standardized steel shipping container was developed in the 1950s, and quickly became ubiquitous for the large-scale transportation of commercial goods.
Towards the end of the twentieth century, the introduction of computer-aided design made it possible to design highly specialized containers and container arrangements, and also to make form-fitting labels for containers of unusual shapes.
Modern characteristics
A number of considerations go into the design of modern containers.
Variety
Practical examples of containers are listed below.
Ceramic cylindrical vessels including:
Ancient vessels, including Amphoras, Kvevri, Pithos, and Dolium
Bottles, similar to a jar in being traditionally symmetrical about the axis perpendicular to its base and made of glass
Jars, traditionally cylindrical and made of glass
Jug
Cylindrical vessels including:
Barrels, made of wooden staves bound by rope, or wooden or metal hoops
Cans, traditionally cylindrical and made of sheet metal
Drums, similar to a can but definitely cylindrical and not necessarily metallic
Tub
Rectilinear vessels including:
Boxes
Crates, a box or rectilinear exoskeleton, designed for hoisting or loading
Wooden boxes
Lift-vans
Corf
Certain waste containers
Flexible containers including:
Bags, such as shopping bags, mail bags, sick bag
Luggage, including satchels, backpacks, and briefcases
Packets
Gunny sacks, flour sacks
Wallets
Shipping containers, including:
Corrugated boxes, made of corrugated fiberboard
Intermodal containers, a.k.a. ship container or cargo container
Twenty-foot equivalent units, an industry standard intermodal container size
Intermediate bulk containers
Unit load devices, similar to a crate
Flexible intermediate bulk containers
Projector
A projector or image projector is an optical device that projects an image (or moving images) onto a surface, commonly a projection screen. Most projectors create an image by shining a light through a small transparent lens, but some newer types of projectors can project the image directly, by using lasers. A virtual retinal display, or retinal projector, is a projector that projects an image directly on the retina instead of using an external projection screen.
The most common type of projector used today is called a video projector. Video projectors are digital replacements for earlier types of projectors such as slide projectors and overhead projectors. These earlier types were mostly replaced by digital video projectors throughout the 1990s and early 2000s, but old analog projectors are still used in some places. The newest types of projectors are handheld projectors that use lasers or LEDs to project images.
Movie theaters used a type of projector called a movie projector, nowadays mostly replaced with digital cinema video projectors.
Different projector types
Projectors can be roughly divided into three categories, based on the type of input. Some of the listed projectors were capable of projecting several types of input. For instance, video projectors were developed primarily for the projection of prerecorded moving images, but are regularly used for still images in PowerPoint presentations and can easily be connected to a video camera for real-time input. The magic lantern is best known for the projection of still images, but was capable of projecting moving images from mechanical slides since its invention, and was probably at its peak of popularity when used in phantasmagoria shows to project moving images of ghosts.
Real-time
Camera obscura
Concave mirror
Opaque projector
Overhead projector
Document camera
Shadow projector
Still images
Slide projector
Large-format slide projector
Magic lantern
Magic mirror
Steganographic mirror (see below for details)
Enlarger (not for direct viewing, but for the production of photographic prints)
Moving images
Movie projector
Mini portable home theatre projectors
Video projector
Handheld projector
Virtual retinal display
Revolving lanterns (see below for details)
History
There probably existed quite a few other types of projectors besides the examples described below, but evidence is scarce and reports are often unclear about their nature. Spectators did not always provide the details needed to differentiate between, for instance, a shadow play and a lantern projection. Many did not understand the nature of what they had seen, and few had ever seen other comparable media. Projections were often presented or perceived as magic or even as religious experiences, with most projectionists unwilling to share their secrets. Joseph Needham sums up some possible projection examples from China in his 1962 book series Science and Civilisation in China.
Prehistory to 1100
Shadow play
The earliest projection of images was most likely done in primitive shadowgraphy dating back to prehistory. Shadow play usually does not involve a projection device, but can be seen as a first step in the development of projectors. It evolved into more refined forms of shadow puppetry in Asia, where it has a long history in Indonesia (records relating to Wayang since 840 CE), Malaysia, Thailand, Cambodia, China (records since around 1000 CE), India and Nepal.
Camera obscura
Projectors share a common history with cameras in the camera obscura. Camera obscura (Latin for "dark room") is the natural optical phenomenon that occurs when an image of a scene at the other side of a screen (or for instance a wall) is projected through a small hole in that screen to form an inverted image (left to right and upside down) on a surface opposite to the opening. The oldest known record of this principle is a description by Han Chinese philosopher Mozi (ca. 470 to ca. 391 BC). Mozi correctly asserted that the camera obscura image is inverted because light travels in straight lines.
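Because light travels in straight lines through the aperture, rays from the top of a scene land at the bottom of the image and vice versa, and similar triangles set the image scale. As a simple worked illustration (with numbers chosen purely for the example, not taken from the historical sources):

$$\frac{h_{\mathrm{image}}}{h_{\mathrm{object}}} = \frac{d_{\mathrm{screen}}}{d_{\mathrm{object}}}$$

so a 2 m tall figure standing 10 m outside a pinhole projects an inverted image 10 cm tall on a wall 0.5 m behind the opening.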
In the early 11th century, Arab physicist Ibn al-Haytham (Alhazen) described experiments with light through a small opening in a darkened room and realized that a smaller hole provided a sharper image.
Chinese magic mirrors
The oldest known objects that can project images are Chinese magic mirrors. The origins of these mirrors have been traced back to the Chinese Han dynasty (206 BC – 24 AD) and are also found in Japan. The mirrors were cast in bronze with a pattern embossed at the back and a mercury amalgam laid over the polished front. The pattern on the back of the mirror is seen in a projection when light is reflected from the polished front onto a wall or other surface. No trace of the pattern can be discerned on the reflecting surface with the naked eye, but minute undulations on the surface are introduced during the manufacturing process and cause the reflected rays of light to form the pattern. It is very likely that the practice of image projection via drawings or text on the surface of mirrors predates the very refined ancient art of the magic mirrors, but no evidence seems to be available.
Revolving lanterns
Revolving lanterns have been known in China as "trotting horse lamps" [走馬燈] since before 1000 CE. A trotting horse lamp is a hexagonal, cubical or round lantern which on the inside has cut-out silhouettes attached to a shaft with a paper vane impeller on top, rotated by heated air rising from a lamp. The silhouettes are projected on the thin paper sides of the lantern and appear to chase each other. Some versions showed some extra motion in the heads, feet and/or hands of figures by connecting them with a fine iron wire to an extra inner layer that would be triggered by a transversely connected iron wire. The lamp would typically show images of horses and horse-riders.
In France, similar lanterns were known as "lanterne vive" (bright or living lantern) in Medieval times, and as "lanterne tournante" since the 18th century. An early variation was described in 1584 by Jean Prevost in his small octavo book La Premiere partie des subtiles et plaisantes inventions. In his "lanterne", cut-out figures of a small army were placed on a wooden platform rotated by a cardboard propeller above a candle. The figures cast their shadows on translucent, oiled paper on the outside of the lantern. He suggested taking special care that the figures look lively: horses raising their front legs as if they were jumping, soldiers with drawn swords, a dog chasing a hare, et cetera. According to Prevost, barbers were skilled in this art and it was common to see these night lanterns in their shop windows.
A more common version had the figures, usually representing grotesque or devilish creatures, painted on a transparent strip. The strip was rotated inside a cylinder by a tin impeller above a candle. The cylinder could be made of paper or of sheet metal perforated with decorative patterns. Around 1608 Mathurin Régnier mentioned the device in his Satire XI as something used by a patissier to amuse children. Régnier compared the mind of an old nagger with the lantern's effect of birds, monkeys, elephants, dogs, cats, hares, foxes and many strange beasts chasing each other.
John Locke (1632–1704) referred to a similar device when wondering if ideas are formed in the human mind at regular intervals, "not much unlike the images in the inside of a lantern, turned round by the heat of a candle." Related constructions were commonly used as Christmas decorations in England and parts of Europe. A still relatively common type of closely related rotating device does not really involve light and shadows, but simply uses candles and an impeller to rotate a ring with tiny figurines standing on top.
Many modern electric versions of this type of lantern use all kinds of colorful transparent cellophane figures which are projected across the walls, especially popular for nurseries.
1100 to 1500
Concave mirrors
The inverted real image of an object reflected by a concave mirror can appear at the focal point in front of the mirror. In a construction with an object at the bottom of two opposing concave mirrors (parabolic reflectors) on top of each other, the top one with an opening in its center, the reflected image can appear at the opening as a very convincing 3D optical illusion.
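In paraxial optics, this behavior is captured by the mirror equation (a standard formula, added here for clarity rather than taken from the source):

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f} = \frac{2}{R}$$

where d_o and d_i are the object and image distances, f is the focal length, and R is the mirror's radius of curvature. An object placed outside the focal point of a concave mirror yields a real image in front of the mirror, and it is this real-image formation that the two-mirror "floating image" illusion exploits.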
The earliest description of projection with concave mirrors has been traced back to a text by French author Jean de Meun in his part of Roman de la Rose (circa 1275). A theory known as the Hockney-Falco thesis claims that artists used either concave mirrors or refractive lenses to project images onto their canvas/board as a drawing/painting aid as early as circa 1430.
It has also been thought that some encounters with spirits or gods since antiquity may have been conjured up with (concave) mirrors.
Fontana's lantern
Around 1420 the Venetian scholar and engineer Giovanni Fontana included a drawing of a person with a lantern projecting an image of a demon in his book about mechanical instruments, Bellicorum Instrumentorum Liber. The Latin text "Apparentia nocturna ad terrorem videntium" (nocturnal appearance to frighten spectators) clarifies its purpose, but the meaning of the other, undecipherable lines is unclear. The lantern seems to simply have the light of an oil lamp or candle pass through a transparent cylindrical case on which the figure is drawn to project the larger image, so it probably could not project an image as clearly defined as Fontana's drawing suggests.
Possible 15th century image projector
In 1437 Italian humanist author, artist, architect, poet, priest, linguist, philosopher and cryptographer Leon Battista Alberti is thought to have possibly projected painted pictures from a small closed box with a small hole, but it is unclear whether this actually was a projector or rather a type of show box with transparent pictures illuminated from behind and viewed through the hole.
1500 to 1700
16th to early 17th century
Based on a small sketch from around 1515, Leonardo da Vinci is thought to have had a projecting lantern with a condensing lens, candle, and chimney.
In his Three Books of Occult Philosophy (1531–1533) Heinrich Cornelius Agrippa claimed that it was possible to project "images artificially painted, or written letters" onto the surface of the Moon with the means of moonbeams and their "resemblances being multiplied in the air". Pythagoras would have often performed this trick.
In 1589 Giambattista della Porta published about the ancient art of projecting mirror writing in his book Magia Naturalis.
Dutch inventor Cornelis Drebbel, who is a likely inventor of the microscope, is thought to have had some kind of projector that he used in magical performances. In a 1608 letter he described the many marvelous transformations he performed and the apparitions that he summoned by the means of his new invention based on optics. It included giants that rose from the earth and moved all their limbs very lifelike. The letter was found in the papers of his friend Constantijn Huygens, father of the likely inventor of the magic lantern Christiaan Huygens.
Helioscope
In 1612 Italian mathematician Benedetto Castelli wrote to his mentor, the Italian astronomer, physicist, engineer, philosopher and mathematician Galileo Galilei about projecting images of the sun through a telescope (invented in 1608) to study the recently discovered sunspots. Galilei wrote about Castelli's technique to the German Jesuit priest, physicist and astronomer Christoph Scheiner.
From 1612 to at least 1630 Christoph Scheiner would keep on studying sunspots and constructing new telescopic solar projection systems. He called these "Heliotropii Telioscopici", later contracted to helioscope.
Steganographic mirror
The 1645 first edition of German Jesuit scholar Athanasius Kircher's book Ars Magna Lucis et Umbrae included a description of his invention, the steganographic mirror: a primitive projection system with a focusing lens and text or pictures painted on a concave mirror reflecting sunlight, mostly intended for long distance communication. He saw limitations in the increase of size and diminished clarity over a long distance and expressed his hope that someone would find a method to improve on this. Kircher also suggested projecting live flies and shadow puppets from the surface of the mirror. The book was quite influential and inspired many scholars, probably including Christiaan Huygens who would invent the magic lantern. Kircher was often credited as the inventor of the magic lantern, although in his 1671 edition of Ars Magna Lucis et Umbrae Kircher himself credited Danish mathematician Thomas Rasmussen Walgensten for the magic lantern, which Kircher saw as a further development of his own projection system.
Although Athanasius Kircher claimed the steganographic mirror as his own invention and wrote that he had not read of anything like it, it has been suggested that Rembrandt's 1635 painting Belshazzar's Feast depicts a steganographic mirror projection, with God's hand writing Hebrew letters on a dusty mirror's surface.
In 1654 Belgian Jesuit mathematician André Tacquet used Kircher's technique to show the journey from China to Belgium of Italian Jesuit missionary Martino Martini. It is sometimes reported that Martini lectured throughout Europe with a magic lantern which he might have imported from China, but there's no evidence that anything other than Kircher's technique was used.
Magic lantern
By 1659 Dutch scientist Christiaan Huygens had developed the magic lantern, which used a concave mirror to reflect and direct as much of the light of a lamp as possible through a small sheet of glass bearing the image to be projected, and onward into a focusing lens at the front of the apparatus, which projected the image onto a wall or screen (Huygens' apparatus actually used two additional lenses). He did not publish nor publicly demonstrate his invention, as he thought it too frivolous.
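The size of the projected image in such a lantern follows the thin-lens relation; as a rough worked example with illustrative numbers (not taken from Huygens' design):

$$\frac{1}{d_o} + \frac{1}{d_i} = \frac{1}{f}, \qquad m = \frac{d_i}{d_o}$$

With a focusing lens of focal length f = 10 cm and a painted slide placed d_o = 10.5 cm from the lens, the image forms at d_i = 210 cm with magnification m = 20, so a 5 cm figure on the glass fills about a metre of wall.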
The magic lantern became a very popular medium for entertainment and educational purposes in the 18th and 19th century. This popularity waned after the introduction of cinema in the 1890s. The magic lantern remained a common medium until slide projectors came into widespread use during the 1950s.
1700 to 1900
Solar microscope
A few years before his death in 1736 Polish-German-Dutch physicist Daniel Gabriel Fahrenheit reportedly constructed a solar microscope, which was a combination of the compound microscope with camera obscura projection. It needed bright sunlight as a light source to project a clear magnified image of transparent objects. Fahrenheit's instrument may have been seen by German physician Johann Nathanael Lieberkühn who introduced the instrument in England, where optician John Cuff improved it with a stationary optical tube and an adjustable mirror. In 1774 English instrument maker Benjamin Martin introduced his "Opake Solar Microscope" for the enlarged projection of opaque objects. He claimed:
The solar microscope was employed in experiments with photosensitive silver nitrate by Thomas Wedgwood, in collaboration with Humphry Davy, in making the first, but impermanent, photographic enlargements. Their discoveries, regarded as the earliest deliberate and successful form of photography, were published in June 1802 by Davy in his An Account of a Method of Copying Paintings upon Glass, and of Making Profiles, by the Agency of Light upon Nitrate of Silver. Invented by T. Wedgwood, Esq. With Observations by H. Davy in the first issue of the Journals of the Royal Institution of Great Britain.
Opaque projectors
Swiss mathematician, physicist, astronomer, logician and engineer Leonhard Euler demonstrated an opaque projector, now commonly known as an episcope, around 1756. It could project a clear image of opaque images and (small) objects.
French scientist Jacques Charles is thought to have invented the similar "megascope" in 1780. He used it for his lectures. Around 1872 Henry Morton used an opaque projector in demonstrations for huge audiences, for example in the Philadelphia Opera House which could seat 3500 people. His machine did not use a condenser or reflector, but used an oxyhydrogen lamp close to the object in order to project huge clear images.
Solar camera
See main article: Solar camera
Known equally, though later, as a solar enlarger, the solar camera is a photographic application of the solar microscope and an ancestor of the darkroom enlarger. It was used, mostly by portrait photographers and as an aid to portrait artists, in the mid-to-late 19th century to make photographic enlargements from negatives, using the Sun as a light source powerful enough to expose the low-sensitivity photographic materials then available. It was superseded in the 1880s, when other light sources, including the incandescent bulb, were developed for the darkroom enlarger and materials became ever more photo-sensitive.
20th century to present day
In the early and middle parts of the 20th century, low-cost opaque projectors were produced and marketed as a toy for children. The light source in early opaque projectors was often limelight, with incandescent light bulbs and halogen lamps taking over later. Episcopes are still marketed as artists' enlargement tools to allow images to be traced on surfaces such as prepared canvas.
In the late 1950s and early 1960s, overhead projectors began to be widely used in schools and businesses. The first overhead projector was used for police identification work. It used a celluloid roll over a 9-inch stage allowing facial characteristics to be rolled across the stage. The United States military in 1940 was the first to use it in quantity for training.
From the 1950s to the 1990s slide projectors for 35 mm photographic positive film slides were common for presentations and as a form of entertainment; family members and friends would occasionally gather to view slideshows, typically of vacation travels.
Complex Multi-image shows of the 1970s to 1990s, purposed usually for marketing, promotion or community service or artistic displays, used 35mm and 46mm transparency slides (diapositives) projected by single or multiple slide projectors onto one or more screens in synchronization with an audio voice-over and/or music track controlled by a pulsed-signal tape or cassette. Multi-image productions are also known as multi-image slide presentations, slide shows and diaporamas and are a specific form of multimedia or audio-visual production.
Digital cameras had become commercialised by 1990, and in 1997 Microsoft PowerPoint was updated to include image files, accelerating the transition from 35 mm slides to digital images, and thus digital projectors, in pedagogy and training. Production of all Kodak Carousel slide projectors ceased in 2004, and in 2009 manufacture and processing of Kodachrome film was discontinued.
In popular culture
In the final episode of Mad Men's first series, the protagonist Don Draper presents (via slide projector) a plan to market the Kodak slide carrier as a 'carousel'.
Westerhout 40
Westerhout 40 or W40 (also designated Sharpless 64, Sh2-64, or RCW 174) is a star-forming region in the Milky Way located in the constellation Serpens. In this region, interstellar gas forming a diffuse nebula surrounds a cluster of several hundred new-born stars. The distance to W40 is 436 ± 9 pc (1420 ± 30 light years), making it one of the closest sites of formation of high-mass O-type and B-type stars. The ionizing radiation from the massive OB stars has created an H II region, which has an hour-glass morphology.
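The parsec and light-year figures quoted here are consistent; with 1 pc ≈ 3.2616 light years, a quick worked conversion gives

$$436 \pm 9\ \mathrm{pc} \times 3.2616\ \mathrm{ly/pc} \approx 1422 \pm 29\ \mathrm{ly},$$

which rounds to the quoted 1420 ± 30 light years.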
Dust from the molecular cloud in which W40 formed obscures the nebula, rendering W40 difficult to observe at visible wavelengths of light. Thus, X-ray, infrared, and radio observations have been used to see through the molecular cloud to study the star-formation processes going on within.
W40 appears near several other star-forming regions in the sky, including an infrared dark cloud designated Serpens South and a young stellar cluster designated the Serpens Main Cluster. Similar distances measured for these three star-forming regions suggest that they are near one another and part of the same larger-scale collection of clouds, known as the Serpens Molecular Cloud.
On the Sky
The W40 star-forming region is projected on the sky in the direction of the Serpens-Aquila Rift, a mass of dark clouds above the Galactic plane in the constellations Aquila, Serpens, and eastern Ophiuchus. The high extinction from interstellar clouds means that the nebula looks unimpressive in visible light, despite being one of the nearest sites of massive star formation.
Star Formation in W40
Like all star-forming regions, W40 is made up of several components: the cluster of young stars and the gaseous material from which these stars form (the interstellar medium). Most of the gas in W40 is in the form of molecular clouds, the coldest, densest phase of the interstellar medium, which is made up of mostly molecular hydrogen (H2). Stars form in molecular clouds when the gas mass in part of a cloud becomes too great, causing it to collapse due to the Jeans instability. Stars usually do not form in isolation, but rather in groups containing hundreds or thousands of other stars, as is the case of W40.
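The collapse criterion mentioned above is usually quantified by the Jeans mass; the following is the standard textbook form, added here for context rather than taken from the article:

$$M_J \simeq \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2} \left(\frac{3}{4\pi\rho}\right)^{1/2}$$

where T and ρ are the gas temperature and density, k_B is the Boltzmann constant, G the gravitational constant, μ the mean molecular weight, and m_H the mass of a hydrogen atom. A clump whose mass exceeds M_J is unstable to gravitational collapse; because molecular clouds are cold and dense, their Jeans mass is low, which is why they are the sites of star formation.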
In W40, feedback from the star cluster has ionized some of the gas and blown a bipolar bubble in the cloud around the cluster. Such feedback effects may trigger further star-formation but can also lead to the eventual destruction of the molecular cloud and an end of star-formation activity.
Star cluster
A cluster of young stars lies at the center of the W40 H II region, containing approximately 520 stars down to 0.1 solar masses. Age estimates for the stars indicate that the stars in the center of the cluster are approximately 0.8 million years old, while the stars on the outside are slightly older at 1.5 million years. The cluster is roughly spherically symmetric and is mass segregated, with the more massive stars relatively more likely to be found near the center of the cluster. The cause of mass segregation in very young star clusters like W40 is an open theoretical question in star-formation theory, because timescales for mass segregation through two-body interactions between stars are typically too long.
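The timescale in question can be illustrated with the standard two-body relaxation estimate (a textbook approximation, not a result from studies of W40 itself):

$$t_{\mathrm{relax}} \approx \frac{N}{8 \ln N}\, t_{\mathrm{cross}}$$

where N is the number of cluster stars and t_cross is the cluster crossing time. For N ≈ 520, the prefactor is roughly 10, so dynamical segregation would require on the order of ten crossing times, which is difficult to reconcile with a cluster age of only about a million years.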
The cloud is ionized by several O- and B-type stars. Near-infrared spectroscopy has identified one late-O type star, named IRS 1A South, and three early B-type stars: IRS 2B, IRS 3A, and IRS 5. In addition, IRS 1A North and IRS 2A are Herbig Ae/Be stars. Radio emission from several of these stars has been observed with the Very Large Array, and may be evidence for ultra-compact H II regions.
Excess light in the infrared indicates that a number of stars in the cluster have circumstellar disks, which may be in the process of forming planets. Millimeter observations from the IRAM 30m telescope show 9 Class-0 protostars in the Serpens South region and 3 Class-0 protostars in W40, supporting the view that the region is very young and actively forming stars.
Interstellar medium
W40 lies in a molecular cloud with an estimated mass of 10⁴ solar masses. The core of the molecular cloud has a shape like a shepherd's crook and is currently producing new stars. The cluster of OB and pre–main-sequence (PMS) stars lies just eastward of the bend in this filament. The cloud core was also observed in radio light produced by CO, which allows the mass of the core to be estimated at 200–300 solar masses. A weak, bipolar outflow of gas flows out of the core, likely driven by a young stellar object, with two lobes differing in velocity by 0.5 km/s.
It was in this region that the striking prevalence of filamentary cloud structures seen by ESA's Herschel Space Observatory was first noted. These filaments of cloud have dense "cores" of gas embedded within them, many of which are likely to gravitationally collapse and form stars. The Herschel results for this region, and subsequently reported results for other star-forming regions, imply that fragmentation of molecular-cloud filaments is fundamental to the star-formation process. The Herschel results for W40 and the Aquila Rift, compared to those for molecular clouds in the Polaris region, suggest that star-formation occurs when the linear density (mass per unit length) of a filament exceeds a threshold that makes it susceptible to gravitational instability. This accounts for the high star-formation rate in W40 and the Aquila Rift, in contrast to the low star-formation rate in the Polaris clouds. These observational results complement computer simulations of star-formation, which also emphasize the role that molecular-cloud filaments play in the birth of stars.
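For an idealized isothermal gas cylinder, this threshold has a simple closed form (a standard result quoted for context, not a figure from the Herschel papers): the critical line mass is

$$M_{\mathrm{line,crit}} = \frac{2 c_s^2}{G} \approx 16\ M_\odot\,\mathrm{pc}^{-1} \quad \text{for gas at } T \approx 10\ \mathrm{K}$$

where c_s is the isothermal sound speed. Filaments above this threshold are prone to fragment gravitationally into star-forming cores, while those below it are stable, matching the contrast drawn between the Aquila and Polaris clouds.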
Observations by the space-based Chandra X-ray Observatory have shown a diffuse X-ray glow from the H II region, which is likely due to the presence of a multi-million Kelvin plasma. Such hot plasmas can be produced by winds from massive stars, which become shock heated.
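The "multi-million Kelvin" figure is consistent with the standard strong-shock heating estimate (a textbook relation, not from the Chandra study itself):

$$T \approx \frac{3}{16}\,\frac{\mu m_H v^2}{k_B} \sim 10^7\ \mathrm{K} \quad \text{for wind speeds } v \sim 1000\ \mathrm{km\,s^{-1}}$$

where μ ≈ 0.6 is the mean molecular weight of ionized gas, so the fast winds of OB stars naturally produce plasma at roughly the observed temperatures when shocked.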
Hunger (physiology)
Hunger is a sensation that motivates the consumption of food. The sensation of hunger typically manifests after only a few hours without eating and is generally considered to be unpleasant. Satiety occurs between 5 and 20 minutes after eating. There are several theories about how the feeling of hunger arises. The desire to eat food, or appetite, is another sensation experienced with regard to eating.
The term hunger is also the most commonly used term in social science and policy discussions to describe the condition of people who suffer from a chronic lack of sufficient food and constantly or frequently experience the sensation of hunger, a condition that can lead to malnutrition. A healthy, well-nourished individual can survive for weeks without food intake (see fasting), with claims ranging from three to ten weeks.
Satiety is the opposite of hunger; it is the sensation of feeling full.
Hunger pangs
The physical sensation of hunger is related to contractions of the muscles of the empty stomach. Peristalsis takes place even when the stomach is empty, and these contractions, sometimes called hunger pangs once they become severe, are believed to be triggered by high concentrations of the ghrelin hormone. The migrating motor complex is a pattern of hunger contractions that takes place in the hungry stomach and gut; they are correlated in time with subjective sensations of hunger and are even responsible for the rumbling associated with a hungry stomach. In contrast to ghrelin, the hormones peptide YY and leptin have the opposite effect on appetite, causing the sensation of being full. Ghrelin can be released if blood sugar levels dip too low, a condition called hypoglycemia that can result from long periods without eating. Stomach contractions from hunger can be especially severe and painful in children and young adults.
Hunger pangs can be made worse by irregular meals. People who cannot afford to eat more than once a day sometimes refuse one-off additional meals, because if they do not eat at around the same time on the next days, they may suffer extra severe hunger pangs. Older people may feel less violent stomach contractions when they get hungry, but still suffer the secondary effects resulting from low food intake: these include weakness, irritability and decreased concentration. Prolonged lack of adequate nutrition also causes increased susceptibility to disease and reduced ability for the body to heal.
Short-term regulation of hunger and food intake
Short-term regulation of hunger and food intake involves neural signals from the GI tract, blood levels of nutrients, GI tract hormones, and psychological factors.
Neural signals from the GI tract
One method that the brain uses to evaluate the contents of the gut is through vagal nerve fibers that carry signals between the brain and the gastrointestinal tract (GI tract). Stretch receptors work to inhibit appetite upon distention of the GI tract by sending signals along the vagus nerve afferent pathway and inhibiting the hunger center.
Hormone signals
The hormones insulin and cholecystokinin (CCK) are released from the GI tract during food absorption and act to suppress the feeling of hunger. CCK is key in suppressing hunger because of its role in inhibiting neuropeptide Y. Glucagon and epinephrine levels rise during fasting and stimulate hunger. Ghrelin, a hormone produced by the stomach, is an appetite stimulant.
Psychological factors
Two psychological processes appear to be involved in regulating short-term food intake: liking and wanting. Liking refers to the palatability or taste of the food, which is reduced by repeated consumption. Wanting is the motivation to consume the food, which is also reduced by repeated consumption of a food and may be due to change in memory-related processes. Wanting can be triggered by a variety of psychological processes. Thoughts of a food may intrude on consciousness and be elaborated on, for instance, as when one sees a commercial or smells a desirable food.
Long-term regulation of hunger and food intake
The regulation of appetite (the appestat) has been the subject of much research; breakthroughs included the discovery, in 1994, of leptin, a hormone produced by the adipose tissue that appeared to provide negative feedback. Leptin is a peptide hormone that affects homeostasis and immune responses. Lowering food intake can lower leptin levels in the body, while increasing the intake of food can raise leptin levels. Later studies showed that appetite regulation is an immensely complex process involving the gastrointestinal tract, many hormones, and both the central and autonomic nervous systems. The circulating gut hormones that regulate many pathways in the body can either stimulate or suppress appetite. For example, ghrelin stimulates appetite, whereas cholecystokinin and glucagon-like peptide-1 (GLP-1) suppress appetite.
Effector
The arcuate nucleus of the hypothalamus, a part of the brain, is the main regulatory organ for the human appetite. Many brain neurotransmitters affect appetite, especially dopamine and serotonin. Dopamine acts primarily through the reward centers of the brain, whereas serotonin primarily acts through effects on neuropeptide Y (NPY)/agouti-related peptide (AgRP) [stimulate appetite] and proopiomelanocortin (POMC) [induce satiety] neurons located in the arcuate nucleus. Similarly, the hormones leptin and insulin suppress appetite through effects on AgRP and POMC neurons.
Hypothalamocortical and hypothalamolimbic projections contribute to the awareness of hunger, and the somatic processes controlled by the hypothalamus include vagal tone (the activity of the parasympathetic autonomic nervous system), stimulation of the thyroid (thyroxine regulates the metabolic rate), the hypothalamic-pituitary-adrenal axis and a large number of other mechanisms. Opioid receptor-related processes in the nucleus accumbens and ventral pallidum affect the palatability of foods.
The nucleus accumbens (NAc) is the area of the brain that coordinates neurotransmitter, opioid and endocannabinoid signals to control feeding behaviour. Several important signalling molecules inside the NAc shell modulate the motivation to eat and the affective reactions to food. These molecules include dopamine (DA), acetylcholine (ACh), opioids and cannabinoids, which act respectively through DA receptors, muscarinic receptors, μ-opioid receptors (MOR) and CB1 receptors in the brain.
Sensor
The hypothalamus senses external stimuli mainly through a number of hormones such as leptin, ghrelin, PYY 3-36, orexin and cholecystokinin; all modify the hypothalamic response. They are produced by the digestive tract and by adipose tissue (leptin). Systemic mediators, such as tumor necrosis factor-alpha (TNFα), interleukins 1 and 6 and corticotropin-releasing hormone (CRH) influence appetite negatively; this mechanism explains why ill people often eat less.
Leptin, a hormone secreted exclusively by adipose cells in response to an increase in body fat mass, is an important component in the regulation of long-term hunger and food intake. Leptin serves as the brain's indicator of the body's total energy stores. When leptin levels rise in the bloodstream, they bind to receptors in the arcuate nucleus (ARC). The functions of leptin are to:
Suppress the release of neuropeptide Y (NPY), which in turn prevents the release of appetite enhancing orexins from the lateral hypothalamus. This decreases appetite and food intake, promoting weight loss.
Stimulate the expression of cocaine and amphetamine regulated transcript (CART).
Though rising blood levels of leptin do promote weight loss to some extent, its main role is to protect the body against weight loss in times of nutritional deprivation. Other factors, including insulin, have also been shown to affect long-term regulation of hunger and food intake.
In addition, the biological clock (which is regulated by the hypothalamus) stimulates hunger. Processes from other cerebral loci, such as from the limbic system and the cerebral cortex, project on the hypothalamus and modify appetite. This explains why in clinical depression and stress, energy intake can change quite drastically.
Set point theories of hunger and eating
The set point theories of hunger and eating are a group of theories developed in the 1940s and 1950s that operate under the assumption that hunger is the result of an energy deficit and that eating is a means by which energy resources are returned to their optimal level, or energy set point. According to this assumption, a person's energy resources are thought to be at or near their set point soon after eating, and are thought to decline after that. Once the person's energy levels fall below a certain threshold, the sensation of hunger is experienced, which is the body's way of motivating the person to eat again. The set point assumption is a negative feedback mechanism. Two popular set point theories include the glucostatic set point theory and the lipostatic set point theory.
The set point theories of hunger and eating present a number of weaknesses.
The current epidemic of obesity and eating disorders undermines these theories.
The set point theories of hunger and eating are inconsistent with basic evolutionary pressures related to hunger and eating as they are currently understood.
Major predictions of the set point theories of hunger and eating have not been confirmed.
They fail to recognize other psychological and social influences on hunger and eating.
Positive-incentive perspective
The positive-incentive perspective is an umbrella term for a set of theories presented as an alternative to the set-point theories of hunger and eating. The central assertion of the positive-incentive perspective is the idea that humans and other animals are not normally motivated to eat by energy deficits, but are instead motivated to eat by the anticipated pleasure of eating, or the positive-incentive value. According to this perspective, eating is controlled in much the same way as sexual behavior. Humans engage in sexual behavior not because of an internal deficit, but because they have evolved to crave it. Similarly, the evolutionary pressures of unexpected food shortages have shaped humans and all other warm-blooded animals to take advantage of food when it is present. It is the presence of good food, or the mere anticipation of it, that makes one hungry.
Premeal hunger
Prior to consuming a meal, the body's energy reserves are in reasonable homeostatic balance. However, when a meal is consumed, there is a homeostasis-disturbing influx of fuels into the bloodstream. When the usual mealtime approaches, the body takes steps to soften the impact of the homeostasis-disturbing influx of fuels by releasing insulin into the blood, and lowering the blood glucose levels. It is this lowering of blood glucose levels that causes premeal hunger, and not necessarily an energy deficit.
Similar conditions
A food craving is an intense desire to consume a specific food, as opposed to general hunger. Similarly, thirst is the craving for water.
A concept of food noise or food chatter has received more attention in the early 2020s since the advent of antiobesity indications for a class of medications called GLP-1 agonists (such as semaglutide). Food noise is a mental preoccupation with food in general (as opposed to one specific food) that is largely independent of physiological hunger but nonetheless distracting for many people; it includes recurring thoughts about what one has or hasn't eaten in recent hours, what one would like to eat or "shouldn't" eat right now, and what one might be eating (or "should" avoid eating) in upcoming hours. Most people for whom these medications are effective in helping with weight loss report that the level of food noise in their mind is noticeably reduced. Even without these medications, some people may be able to reduce food noise by modifying their dietary patterns and exercise; this is more effective for some people than for others.
| Biology and health sciences | Health and fitness: General | Health |
33834771 | https://en.wikipedia.org/wiki/Climate%20change%20in%20Europe | Climate change in Europe | Climate change has resulted in an increase in temperature of 2.3 °C (4.14 °F) (2022) in Europe compared to pre-industrial levels. Europe is the fastest warming continent in the world. Europe's climate is getting warmer due to anthropogenic activity. According to international climate experts, global temperature rise should not exceed 2 °C to prevent the most dangerous consequences of climate change; without reduction in greenhouse gas emissions, this could happen before 2050. Climate change has implications for all regions of Europe, with the extent and nature of impacts varying across the continent.
Impacts on European countries include warmer weather and an increasing frequency and intensity of extreme weather such as heat waves, bringing health risks and impacts on ecosystems. European countries are major contributors to global greenhouse gas emissions, although the European Union and the governments of several countries have outlined plans to implement climate change mitigation and an energy transition in the 21st century, the European Green Deal being one of these. Frans Timmermans has been the European Union commissioner for climate action since 1 December 2019.
Public opinion in Europe shows concern about climate change; in the European Investment Bank's Climate Survey of 2020, 90% of Europeans believed their children would experience the effects of climate change in their daily lives. Climate change activism has taken place in Europe, and businesses have shifted their practices.
Greenhouse gas emissions
A 2016 European Environment Agency (EEA) report documents greenhouse gas (GHG) emissions between 1990 and 2014 for the EU-28 individual member states by IPCC sector. Total greenhouse gas emissions fell by 24% between 1990 and 2014, but road transport emissions rose by 17%. Cars, vans, and trucks had the largest absolute increase in CO2 emissions of any sector over those 25 years, growing by 124 Mt. Aviation emissions grew by 93 Mt over the same period, an 82% increase.
In 2019 European Union emissions reached 3.3 Gt (3.3 billion metric tons), 80% of which was from fossil fuels.
In 2021, the European Parliament approved a landmark law setting GHG targets for 2050. The law aims to achieve carbon neutrality and, after 2050, negative emissions, and paves the way for a policy overhaul in the European Union. Under the law, the European Union must act to lower net GHG emissions by at least 55% by 2030 (compared to 1990). The law caps the contribution of carbon removals toward that target at 225 Mt of CO2 equivalent. According to Swedish lawmaker Jytte Guteland, the law would allow Europe to become the first carbon-neutral continent by 2050.
In 2023, EU greenhouse gas emissions fell by 8.3% — the largest drop since 2020's pandemic-driven 9.8% decline. Emissions are now 37% below 1990 levels, while GDP has grown by 68%, indicating the decoupling of emissions from economic growth. According to the European Commission, the EU remains on track to meet its 2030 goal of cutting emissions by 55%.
Energy consumption
Coal
Coal consumption in Europe fell from 7,239 TWh in 1985 to 2,611 TWh in 2020; in the EU, it fell from 5,126 TWh to 1,624 TWh over the same period. CO2 emissions from coal in Europe peaked in 1987 at 3.31 billion tonnes and had fallen to 1.36 billion tonnes by 2019.
Russia had the most CO2 emissions from coal in Europe in 2019 (395.03 Mt), followed by Germany (235.7 Mt). Between 1990 and 2019, Iceland's CO2 emissions from coal grew by 151%, Turkey's by 131% and Montenegro's by 13%; the other European countries decreased their coal consumption over that period.
From 2012 to 2018, EU coal generation fell by around 50 TWh, compared with a rise of 30 TWh in wind and solar generation and a rise of 30 TWh in gas generation. The remaining 10 TWh covered a small structural increase in electricity consumption. Coal generation accounted for about 12% of the EU's 2019 greenhouse gas emissions.
Fossil gas/other methane
The EU classifies fossil gas as a "green" energy for investment purposes under its taxonomy, although it is a fossil fuel. According to Global Energy Monitor, plans to expand gas infrastructure contradict EU climate goals.
The EU used 3,966 TWh of gas in 2021, while Europe as a whole used 10,074 TWh.
The decline in methane emissions from 1990 to 1995 in the OECD is largely due to non-climate regulatory programs and the collection and flaring or use of landfill methane. In many OECD countries, landfill methane emissions are not expected to grow, despite continued or even increased waste generation, because of non-climate-related regulations that result in mitigation of air emissions, collection of gas, or closure of facilities. A major driver in the OECD is the European Union Landfill Directive, which limits the amount of organic matter that can enter solid waste facilities. Although the organic matter entering landfills is expected to decrease rapidly in the EU, emissions continue as a result of the total waste already in place and are expected to decline gradually over time.
Agriculture
Greenhouse gases are also released through agriculture. Livestock production is common in Europe and accounts for 42% of European land use, with corresponding effects on the environment. Agriculture accounts for 10% of Europe's greenhouse gas emissions, a share that is even larger in other parts of the world. Agriculture is also the largest contributor of non-carbon-dioxide greenhouse gas emissions emitted annually in Europe, releasing gases such as methane and nitrous oxide alongside carbon dioxide. One study found that 38% of greenhouse gases released through agriculture in Europe were methane. Farms release methane through the chemicals in fertilizers, manure, and a process called enteric fermentation. Per tonne, these gases trap more heat than carbon dioxide; a study in Environmental Research Letters states that "CH4 has 20 times more heat-trapping potential than CO2 and N2O has 300 times more." Agricultural emissions are also linked to soil acidification and loss of biodiversity in Europe.
Europe is attempting to take action. The Land Use, Land-Use Change and Forestry (LULUCF) framework was created to lower greenhouse gas emissions from land use in Europe. Some success has been seen: between 1990 and 2016, greenhouse gases emitted through agriculture in Europe decreased by 20%. However, the European Union plans to become carbon neutral by 2050, and it has been concluded that without further policies or a dietary shift, the European Union may not reach this goal.
According to the European Green Deal, it is critical to minimize reliance on pesticides and antimicrobials, eliminate excess fertilization (particularly nitrogen and phosphorus), promote organic farming, improve animal welfare, and reverse biodiversity loss. The introduction and successful implementation of sustainable agriculture can help developing nations improve their food security, as well as strengthen soil and plant carbon sinks globally.
Transport
Road
Road transport emits about a fifth of EU greenhouse gas emissions.
Aviation
Aviation is taxed less than train travel.
Shipping
Greenhouse gas emissions from shipping equal the carbon footprint of a quarter of passenger cars in Europe. Shipping is not covered by the Paris Agreement but is subject to the EU ETS, and will be subject to the UK ETS from 2026.
Other greenhouse gases
Hydrofluorocarbons
Trifluoromethane (HFC-23) is generated and emitted as a byproduct during the production of chlorodifluoromethane (HCFC-22). HCFC-22 is used both in emissive applications (primarily air conditioning and refrigeration) and as a feedstock for production of synthetic polymers. Because HCFC-22 depletes stratospheric ozone, its production for non-feedstock uses is scheduled to be phased out under the Montreal Protocol. However, feedstock production is permitted to continue indefinitely.
In the developed world, HFC-23 emissions decreased between 1990 and 2000 due to process optimization and thermal destruction, although there were increased emissions in the intervening years.
The United States (U.S.) and the European Union drove these trends in the developed world. Although emissions increased in the EU between 1990 and 1995 due to increased production of HCFC-22, a combination of process optimization and thermal oxidation led to a sharp decline in EU emissions after 1995, resulting in a net decrease in emissions of 67 percent for this region between 1990 and 2000.
Impacts on the natural environment
Temperature and weather changes
The World Meteorological Organization's State of the Climate 2021 stated that temperatures in Europe increased at more than twice the global average over the preceding 30 years–the highest increase of any continent in the world. The European Environment Agency stated that from pre-industrial times, European land temperatures have increased by 1.94–1.99 °C, faster than the global average increase of 1.11–1.14 °C.
Arctic sea ice decreased by 33,000 km² per year in winter and by 79,000 km² per year in summer between 1979 and 2020. If warming is kept below 1.5 °C, ice-free Arctic summers would be rare, but they would become frequent at 2 °C of warming.
In the Baltic Sea, ice loss has been observed since 1800, with an acceleration since the 1980s. Sea ice was at a record low in the winter of 2019–2020.
These extreme weather changes may increase the severity of diseases in animals as well as humans, and heat waves will increase the number of forest fires. Experts have warned that climate change may increase the number of global climate refugees from 150 million in 2008 to 800 million in the future, although the Convention Relating to the Status of Refugees does not recognize climate refugees. From 2012 to 2022, according to the European Environment Agency, extreme weather events cost Europe more than €145 billion in economic damages. Climate-related economic losses grew by about 2% each year over the same period.
A study of future changes in flood, heat-wave, and drought impacts for 571 European cities, using climate model runs from the Coupled Model Intercomparison Project Phase 5 (CMIP5), found that heat-wave days increase across all cities, especially in southern Europe, whilst the greatest heat-wave temperature increases are expected in central European cities. For the low-impact scenario, drought conditions intensify in southern European cities while river flooding worsens in northern European cities. However, the high-impact scenario projects that most European cities will see increases in both drought and river flood risks. Over 100 cities are particularly vulnerable to two or more climate impacts.
Extreme weather events
The summer of 2019 brought a series of high temperature records in Western Europe. During a heat wave, a glaciological rarity emerged in the Mont Blanc Massif in the French Alps: a previously unseen lake at the foot of the Dent du Géant, at an altitude of about 3,400 meters, which was considered evidence of the effects of global warming on the glaciers.
The summer of 2023 was the warmest on record globally; the average European temperature that summer was 0.83 °C above average.
Impact on flora
In the aftermath of the 2003 heat wave, researchers noted how the alpine ecosystems of Italy were affected. Namely, the heat wave "triggered a rapid expansion of vascular plant species at the expense of mosses in peatlands". Peatlands are known to be supreme carbon-storing environments, and thus alterations caused by anthropogenic climate change poses a threat to long-term climate stability.
Impacts on people
Climate change severely endangers the population of Europe; according to the EEA risk assessment, "climate threats are growing faster than our societal preparedness". These impacts are felt especially in southern and central Europe and include heat waves, flooding, the transmission of diseases by mosquitoes, and more.
Health impacts
Heat waves
Due to climate change, temperatures have risen in Europe and heat mortality has increased. Between the periods 2003–2012 and 2013–2022 alone, heat mortality increased by 17 deaths per 100,000 people; women are more vulnerable than men.
In the absence of climate change, extreme heat waves in Europe would be expected to occur only once every several hundred years. In addition to hydrological changes, grain crops mature earlier at higher temperatures, which may shorten the critical growth period and lead to lower grain yields. The 2010 Russian heat wave cut the grain harvest by 25%, led the government to ban wheat exports, and caused losses equal to 1% of GDP; an estimated 55,000 people died.
The summer of 2003 was probably the hottest in Europe since at least AD 1500, and unusually large numbers of heat-related deaths were reported in France, Germany and Italy. It is very likely that human-induced greenhouse gases contributed to the heat wave.
The 2018 heat wave in England, which took hundreds of lives, would have been about 30 times less likely without climate change. If the current rate of warming continues, such heat waves could occur every two years by 2050.
The heat wave in the summer of 2019, as of June 28, had claimed human lives, forced closures or special measures at 4,000 schools in France alone, and caused large wildfires. Many areas declared states of emergency and advised the public to avoid "risky behaviour" such as leaving children in cars or jogging outside in the middle of the day. The heat wave was made at least 5 times more likely by climate change, and possibly as much as 100 times.
In 2022, severe heat waves occurred in Western Europe. Wildfires emerged in different places and burned vast territories, causing tens of thousands of people to flee their homes. In Spain, 510 people died from heat between 10 and 16 July.
68% of European respondents have experienced at least one direct impact from an extreme weather event. Among these impacts, 21% faced transportation disruptions, 20% encountered power outages or energy supply challenges, and another 20% dealt with health-related issues. Additionally, 19% reported the destruction of forests or natural areas near their residences.
Diseases
In 2019, for the first time, cases of Zika fever were diagnosed in Europe that did not stem from travel to tropical countries like Brazil, but from local mosquitoes; evidence indicates that the warming climate in the area is the primary cause. It is thought that climate change could lead to dengue fever epidemics in Europe by 2100 if Aedes mosquito vectors become established.
Mitigation
At the beginning of the 21st century, the European Union began to conceive the European Green Deal as its main program of climate change mitigation. The European Union claims that it has already achieved its 2020 target for emission reduction and has the legislation needed to achieve the 2030 targets; already in 2018, its GHG emissions were 23% lower than in 1990.
In May 2024, a report was published summarizing the European Union's main achievements in the environmental domain since 2019.
Paris Agreement
On April 22, 2016, the Paris Climate Accords were signed by all but three countries around the world. The conference at which the agreement was negotiated was held in Paris, France, placing Europe at the center of talks about the environment and climate change. The EU was the first major economy to submit its intended contribution to the new agreement, in March 2015, and it ratified the Paris Agreement on October 5, 2016.
In these talks the countries agreed that they all had a long-term goal of keeping global warming to well below 2 degrees Celsius. They agreed that global emissions need to peak as soon as possible, and recognize that this will take longer for developing countries. On the subject of transparency the countries agreed that they would meet every five years to set ambitious goals, report their progress to the public and each other, and track progress for their long-term goals throughout a transparent and accountable system.
The countries recognized the importance of non-party stakeholders to be involved in this process. Cities, regions, and local authorities are encouraged to uphold and promote regional and international cooperation.
The Paris Agreement is a legally binding international agreement; its main goal is to limit global warming to well below 2 degrees Celsius compared to pre-industrial levels, while pursuing efforts to limit it to 1.5 degrees. The Nationally Determined Contributions (NDCs) are each country's plans to fight climate change. Every party to the agreement has different targets based on its own historical climate records and national circumstances, and all of a country's targets are stated in its NDC.
Nationally determined goals based on NDCs
For member countries of the European Union, the goals are very similar, and the European Union works with a common strategy within the Paris Agreement. The NDC targets of the European Union countries against climate change and greenhouse gas emissions under the Paris Agreement are the following:
40% reduction in greenhouse gas emissions by 2030, compared to 1990. This reduction is covered in these four sections:
European Union Emission Trading System
Outside the EU emissions trading system
Land use, land-use change, and forestry (LULUCF)
Domestic institutional legislation and mitigation measure
55% reduction of greenhouse gases by 2030, compared to 1990, as a domestic binding target without contribution from international credits.
Gases covered by the reduction: carbon dioxide (CO2), methane (CH4), nitrous oxide (N2O), hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6) and nitrogen trifluoride (NF3).
40% reduction of emissions outside the European Union Emission Trading System (EU ETS) by 2030, compared to 2005.
Strategy to achieve NDCs
Each country has different ways to achieve the established goals depending on its resources. In the case of the European Union, the following approach has been established to support the NDC climate change plan:
Each member state must report land use and subsequently report compensatory measures for the removal of carbon dioxide from the atmosphere.
Targets for improved energy efficiency and an increased share of renewable energy have been established. By 2030, energy efficiency is to improve by 32.5%.
CO2 emissions per km must be reduced by 30–37.5% by 2030, depending on the vehicle type.
Limit sales of F-gases, prohibit certain products, and prevent emissions from existing products containing F-gases. This is expected to reduce emissions of F-gases by 66% by 2030 compared to 2014.
Multiannual Financial Framework (MFF) for 2021–2027. MFF will finance climate action, such as policies and programs. MFF shall contribute to climate neutrality by 2050 and to achieving the 2030 climate targets.
Within the European Union Emission Trading System (EU ETS), a cap is set on the maximum allowable amount of emissions; from 2021 this also applies to aviation. The EU ETS is an important tool in EU policy for reducing greenhouse gas emissions in a cost-effective way. Under the 'cap and trade' principle, a maximum (cap) is set on the total amount of greenhouse gases that can be emitted by all participating installations.
A survey conducted by the European Investment Bank in 2020 found that although 45% of EU companies have invested in climate change mitigation or adaptation measures, compared to 32% in the US, fewer companies plan future investment in the next three years. 40% of European companies want to invest in climate initiatives during the next three years. The proportion of investment in 2020 varies from 50% in Western and Northern Europe to 32% in Central and Eastern Europe. The majority of European companies, 75%, say regulatory and tax uncertainty is preventing them from investing in climate-related projects.
According to their 2020 Municipality Survey, 56% of European Union municipalities increased climate investment, while 66% believe their climate investment over the previous three years has been insufficient.
According to a study from 2022, while renewables as a whole, and specifically hydroelectricity and geothermal energy, do reduce emissions in European countries, there is a problem with biomass, solar power and wind power, as the process of producing them also emits large amounts of CO2. The study did not examine other greenhouse gases such as methane. The authors called for ensuring that these energy sources will genuinely reduce emissions.
Climate targets
The climate commitments of the European Union are divided into three main categories: targets for the years 2020, 2030 and 2050. The European Union claims that its policies are in line with the goal of the Paris Agreement. Europe's program of response to climate change is called the European Green Deal. In April 2020, the European Parliament called for including the European Green Deal in the recovery program from the COVID-19 pandemic.
Targets for the year 2020:
Reduce GHG emissions by 20% from the level in 1990.
Produce 20% of energy from renewable sources. Result: 22 percent renewable sources in 2020.
Increase Energy Efficiency by 20%.
10 percent renewable fuels in the transport sector. Result: 10 percent of fuels were renewable on average in the EU27 in 2020.
Targets for the year 2030:
Reduce GHG emissions by 55 percent from the level in 1990.
Produce 45 percent of energy from renewables.
Increase energy efficiency by 32.5% from a historical baseline.
14 percent renewable fuels in the transport sector.
CO2 emissions per kilometer from passenger cars sold in the EU must decrease by an average of 37.5 percent from 2021 levels.
Target for the year 2035:
Phase-out of fossil fuel vehicles in new car sales, including plug-in hybrid electric vehicles.
Target for the year 2050:
Become climate neutral.
Policies and legislation for mitigation
National legislation, international agreements and EU directives are in place. EU Directive 2001/77/EC promotes renewable energy in electricity production. The climate subprogramme will provide €864 million in co-financing for climate projects between 2014 and 2020. Its main objectives are to contribute to the shift towards a low-carbon and climate-resilient economy and to improve the development, implementation and enforcement of EU climate change policies and laws.
In March 2020, a draft of a climate law for the entire European Union was proposed. The law would oblige the European Union to become carbon neutral by 2050 and to adjust all its policies to that target. It includes measures to increase the use of trains, a mechanism to check the implementation of the needed measures, and a Carbon Border Adjustment Mechanism intended to prevent carbon leakage; it is also meant to raise the climate ambitions of other countries. Greta Thunberg and other climate activists have criticized the draft, saying its targets are not strong enough.
In July 2021, the European Union published several drafts describing concrete measures to achieve climate neutrality by 2050. These include a tax on jet fuel, a ban on the sale of new petrol and diesel cars from 2035, a border tax, and measures to increase energy efficiency in buildings and expand renewable energy.
Climate initiatives, according to 56% of Europeans, are a source of economic growth. 56% of Europeans also believe that climate change mitigation will produce more employment. 61% of Europeans believe that climate change policies will improve their quality of life.
In May 2022, the European Commission proposed a plan that includes measures to speed up emission reduction. The plan includes reducing energy consumption by 13% by 2030, reducing oil and gas use by 5% through behavioural changes already in the short term, and increasing the use of biogas and heat pumps. According to the plan, 45% of energy in the European Union should come from renewable sources by 2030.
In the summer of 2022, the leaders of the union adopted the basic elements of the European Commission's proposal, aiming to reduce the union's emissions by 61% by 2030.
The European Commission predicted in 2020 that extra investment of €260 billion a year, or around 2% of EU GDP, would be needed to meet the 2030 climate and energy objectives. Since then, the target for reducing greenhouse gas emissions by 2030 has grown (from -40% to -55%), necessitating both more investment and the acceleration of some expenditures.
Approximately 57% of EU businesses are investing in energy efficiency, 64% in reducing and recycling waste, and 32% in less polluting industries and technologies. Roughly 40% of businesses made investments in energy efficiency in 2021. About 90% of EU businesses have previously made an effort to cut greenhouse gas emissions. In 2023, physical climate change risks were found to affect around 64% of EU businesses, with just 36% of those businesses taking action to adapt to these risks through investments in preventing or limiting exposure. Only 13% of businesses purchased insurance to deal with climate-related losses. The largest proportions of firms citing weather events as affecting their operations were found in Spain (80%), Portugal (79%) and Italy (73%). Firms in Denmark, Luxembourg and Latvia were found to be the least affected by weather events.
The Netherlands has the largest share of companies in the European Union that have already invested in addressing climate change, while Lithuania has the highest share of firms planning to invest in the three years following 2023. Cyprus and Greece have the lowest percentage of enterprises in terms of both investments made and planned investments.
The European Union's key efforts are investments in energy efficiency (59%) and waste minimization and recycling (67%).
European Union Emissions Trading System
The European Union Emissions Trading System is a major pillar of EU energy policy. It was the first large greenhouse gas emissions trading scheme in the world and was launched in 2005 to fight global warming. In 2022, the EU ETS covers emissions from power and heat generation, energy-intensive industrial sectors and commercial aviation within Europe.
Under the "cap and trade" principle, a maximum (cap) is set on the total amount of greenhouse gases that can be emitted by all participating installations. EU Allowances for emissions are then auctioned off or allocated for free, and can subsequently be traded. Installations must monitor and report their CO2 emissions, ensuring they hand in enough allowances to the authorities to cover their emissions. If emission exceeds what is permitted by its allowances, an installation must purchase allowances from others. Conversely, if an installation has performed well at reducing its emissions, it can sell its leftover credits. This allows the system to find the most cost-effective ways of reducing emissions without significant government intervention.
The current EU ETS cap aims to reduce GHG emissions by 43% in 2030 against 2005 emissions, but in the "Fit for 55" package, the EU commission proposes to increase the reduction target for 2030 to -61% compared to 2005 emissions.
Stern report 2006
Economist Nicholas Stern published the Stern Review for the British government in 2006. The Review states that climate change is the greatest and widest-ranging market failure ever seen, presenting a unique challenge for economics. The Review provides prescriptions, including environmental taxes, to minimize the economic and social disruptions. The Stern Review's main conclusion is that the benefits of strong, early action on climate change far outweigh the costs of not acting. The Review points to the potential impacts of climate change on water resources, food production, health, and the environment. According to the Review, without action, the overall costs of climate change will be equivalent to losing at least 5% of global gross domestic product (GDP) each year, now and forever; including a wider range of risks and impacts could increase this to 20% of GDP or more.
No one can predict the consequences of climate change with complete certainty, but we now know enough to understand the risks. The review leads to a simple conclusion: the benefits of strong, early action considerably outweigh the costs.
Climate emergency
The EU parliament declared a climate emergency in November 2019. It urged all EU countries to commit to net zero greenhouse gas emissions by 2050, and MEPs backed a tougher target of cutting greenhouse gas emissions by 55% by 2030. The vote came as scientists warned that the world may have already crossed a series of climate tipping points, resulting in "a state of planetary emergency". The parliament also called for ending all fossil fuel subsidies by 2020, at least doubling payments to the Green Climate Fund, ensuring that all legislation and the European budget are in line with the 1.5-degree target, and reducing emissions from aviation and shipping.
Divestment from fossil fuels and sustainable investments
The European Investment Bank declared that it would divest almost completely from fossil fuels from 2021, having started to phase out acceptance of new fossil fuel projects in 2019.
The central bank of Sweden sold its bonds from the Australian states of Queensland and Western Australia and the Canadian province of Alberta because of those regions' severe climate impacts.
In November 2019, the European parliament adopted resolutions calling to end all subsidies of fossil fuels by 2020.
In 2019, the European Parliament created rules for the identification of sustainable investments. The measure should help achieve a climate-neutral Europe.
27% of companies in less developed areas report that climate change is having a big impact on their business, while 40% report a slight impact. In transition regions, only 19% and 43% of businesses, respectively, report major and slight impacts on their business. Less developed regions also have the lowest percentage of businesses that have made investments to combat climate change or reduce their carbon emissions (46%).
Green recovery from the COVID-19 pandemic
In May 2020, the €750 billion European recovery package and the €1 trillion budget were announced, the European Green Deal being part of it. The money will be spent only on projects that meet some green criteria; 25% of all funding will go to climate change mitigation. Fossil fuels and nuclear power are excluded from the funding. The recovery package should also restore some equilibrium between rich and poor countries in the European Union. In July the recovery package and the budget were generally accepted, and budget allocation going to climate action was raised to 30%. The plan includes some green taxation on European products and on imports. Critics say it is still not enough for achieving the climate targets of the European Union and it is not clear how to ensure that all the money will really go to green projects.
Nature restoration and agriculture
In May 2020, the European Union published two plans that are part of the European Green Deal: the EU Biodiversity Strategy for 2030 and From Farm to Fork.
The official page of the EU Biodiversity Strategy for 2030 quotes Ursula von der Leyen, President of the European Commission, in support of the strategy.
The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. From the 25% of the European budget that will go to fight climate change, large parts will go to restore biodiversity and nature based solutions.
The EU Biodiversity Strategy for 2030 includes the following targets:
Protect 30% of the sea territory and 30% of the land territory, especially old-growth forests.
Plant 3 billion trees by the year 2030.
Restore at least 25,000 kilometers of rivers, so they will become free flowing.
Reduce the use of Pesticides by 50% by the year 2030.
Increase Organic farming.
Increase Biodiversity in agriculture.
Dedicate €20 billion per year to the issue and make it part of business practice.
According to the page, approximately half of global GDP depends on nature; in Europe, many parts of the economy that generate trillions of euros per year depend on nature. The benefits of the Natura 2000 network in Europe alone amount to €200–300 billion per year.
The official page of the From Farm to Fork program quotes Frans Timmermans, Executive Vice-President of the European Commission, in support of the program.
The program includes the following targets:
Making 25% of EU agriculture organic, by the year 2030.
Reduce by 50% the use of Pesticides by the year 2030.
Reduce the use of Fertilizers by 20% by the year 2030.
Reduce nutrient loss by at least 50%.
Reduce the use of antimicrobials in agriculture and antimicrobials in aquaculture by 50% by 2030.
Create sustainable food labeling.
Reduce food waste by 50% by 2030.
Dedicate €10 billion to research and innovation (R&I) related to the issue.
In 2022 the Environment Ministers of the European Union backed a new law aiming to increase carbon sinks such as forests.
Forests
In 2022, the European Parliament approved a bill aiming to stop imports linked with deforestation. The bill may cause Brazil, for example, to stop deforesting for agricultural production and begin to "increase productivity on existing agricultural land". The legislation was adopted with some changes by the European Council in May 2023 and was expected to enter into force several weeks later. The bill requires companies that want to import certain types of products into the European Union to prove that the production of those commodities is not linked to areas deforested after 31 December 2020. It also prohibits the import of products linked with human rights abuses. The list of products includes palm oil, cattle, wood, coffee, cocoa, rubber and soy, as well as some derivatives of these products: chocolate, furniture, printed paper and several palm-oil-based derivatives.
Wood harvesting and supply have reached around 550 million m3 per year, while the total growing stock of European forests has more than quadrupled over the previous six decades and now amounts to around 35 billion m3 of forest biomass. Since the beginning of the 1990s, the amounts of wood and carbon stored in European forests have increased by 50% due to greater forest area and biomass stocks. Every year, European forests absorb and store around 155 million tonnes of CO2 equivalent, comparable to 10% of all other sectors' emissions in Europe.
The forestry industry tries to mitigate climate change by boosting carbon storage in growing trees and soils and improving the sustainable supply of renewable raw materials via sustainable forest management.
Transport
In 2022, the leaders of the union agreed to ban sales of new CO2-emitting cars from 2035.
In December 2022, the European Commission approved a French law banning domestic flights where the journey can be made by train in under 2.5 hours. Greenpeace demanded that the law be extended, following the European Commission's advice to include connecting flights, and cited a report according to which raising the threshold from 2.5 to 6 hours would cut greenhouse gas emissions by an amount equivalent to 3.5 million tonnes annually.
Policies against greenwashing and planned obsolescence
The European Parliament is advancing a set of rules intended to:
Ban presenting the product as green without proof.
Forbid claiming a product is carbon neutral based solely on carbon offsets; such claims will be approved only for residual emissions.
Support making more durable products.
Forbid presenting a product as more durable than it is.
European Court of Human Rights
Verein KlimaSeniorinnen Schweiz v. Switzerland (2024) was a landmark European Court of Human Rights case in which the court ruled that Switzerland violated the European Convention on Human Rights by failing to adequately address climate change. It is the first case in which an international court has ruled that state inaction related to climate change violates human rights.
Adaptation
Climate change threatens to undermine decades of development gains in Europe and put at risk efforts to eradicate poverty. In 2013, the European Union adopted the 'EU Adaptation Strategy', which had three key objectives: (1) promoting action by member states, which includes providing funding, (2) promoting adaptation in climate-sensitive sectors and (3) research.
The Climate Adaptation Investment Advisory Platform (ADAPT) of the European Union helps the public and commercial sectors plan and invest in climate change adaptation and resilience efforts. The European Green Deal is another initiative that aims to make Europe the first climate-neutral continent by 2050. Accelerating the transition to a circular economy is one of the Green Deal's main cornerstones.
Society and culture
Public opinion
The majority of individuals in the eastern EU countries are relatively less positive about the influence of climate measures on the employment market: 55% of Eastern Europeans believe that measures against climate change will result in fewer jobs, while in Western Europe 60% of respondents believe that such policies will generate more jobs. While seeking employment, an increasing number of people look at businesses' environmental credentials. A majority of Europeans (62%) believe that future employers should prioritize sustainability, and for 16% of Europeans it is even a top priority.
62% of Europeans believe that the green transition will reduce their buying power.
66% of Europeans believe the climate emergency will be a severe problem by the mid-century, and 30% believe that the climate emergency will be under control by 2050.
Europeans believe climate change is a threat, with 29% of the EU population expecting to be forced to relocate to another area. People of ages 20–29 are concerned about the potential of having to relocate due to climate challenges. Because of climate change, 33% of Europeans feel they will have to relocate to a colder or warmer area or nation, according to the European Investment Bank's climate survey in 2020.
In the European Investment Bank's Climate Survey of 2020, 90% of Europeans believed their children would experience the effects of climate change in their daily lives. The survey of 30,000 individuals showed high concern for the climate, with a majority of respondents also prepared to pay a new tax in accordance with climate laws. Only 9% of Europeans do not think climate change is occurring, compared to 18% in the United States.
Activism
Critics note that European companies, like those in other OECD countries, have moved energy-intensive, polluting, and greenhouse-gas-emitting industry to Asia and South America. With respect to climate change there are no harmless areas: carbon emissions from all countries count equally. The agreements exclude significant factors such as deforestation, aviation and tourism, the actual end consumption of energy, and the history of emissions. Negotiations are country-oriented, but economic interests conflict between energy producers, consumers and the environment.
In the EU, 75% of the population claims they are more worried about the climate crisis than their politicians. 51% of EU citizens cite government inaction as a major difficulty when facing the climate crisis, and 81% cite climate change as the most serious problem of the twenty-first century.
Climate change is also a factor when job searching, according to 54% of young Europeans.
As a form of climate action, 42% of Europeans, specifically 48% of women and 34% of men, invest in second-hand clothing rather than buying new. Younger populations, aged 15 to 29, were found more likely to do so than older generations.
33% of car buyers in Europe say they will opt for a petrol or diesel car when purchasing a new vehicle, while 67% would choose a hybrid or electric version. In the EU, only 13% of the total population do not plan on owning a vehicle at all.
44% of Europeans aged 20–29 fear they could lose their jobs because of climate change.
Europeans expect lifestyles to change greatly in the next 20 years. 31% of respondents to an EU climate survey believe that most people will no longer have their own vehicle. 63% believe that teleworking will become the norm in the fight against climate change. 36% of respondents believe most people will no longer consume animal products. 48% predict that energy quotas will be individually assigned. In 2024, 72% of respondents to the same survey acknowledged that they will need to adjust their lifestyle in response to climate change, with the figure rising to 81% among those in southern European countries.
School strike for climate
School strikes for climate became well known when the Swedish teenager Greta Thunberg began striking in the summer of 2018; from September 2018 she struck every Friday. The movement picked up in January 2019 with mass strikes in Belgium, Germany and Switzerland, and in the following months mass strikes were reported in numerous European countries. Several global climate strikes also took place in Europe, on 15 March 2019, 24 May 2019, from 20 to 27 September 2019 (global climate action week), 29 November 2019 and 25 September 2020. The strikes during 2020 were limited because of COVID-19.
Extinction Rebellion
Extinction Rebellion (XR), a civil disobedience movement, was founded in 2018 in the United Kingdom. Its first planned action was in London, where 5,000 demonstrators blocked the city's most important bridges. The movement quickly spread around Europe, and in October 2019 the first global rebellion took place, with numerous demonstrations in European cities.
EU Day for the Victims of the Global Climate Crisis
In 2023, the annual EU Day for the Victims of the Global Climate Crisis on 15 July was established in a joint declaration by European Parliament, European Council and European Commission.
By country
Austria
At the beginning of 2020, the major parties in Austria reached a deal that includes achieving carbon neutrality by 2040, producing all electricity from renewable sources by 2030, introducing a nationwide carbon tax, and taxing flying, which should make trains more attractive.
In 2020, the last coal-fired power station in the country was closed, making Austria the second country in Europe, after Belgium, to become coal free. The goal of achieving 100% renewable electricity by 2030 was adopted by the government.
Belgium
Bosnia and Herzegovina
Croatia
Croatia aims to reduce emissions by 45% by 2030 and phase out coal by 2033. However, the shift to a low-carbon economy will necessitate significant expenditures in new energy infrastructure and additional renewable energy resources.
Croatia established a 2030 National Energy and Climate Plan to attain its aim. The national policy targets for a 36.4% renewable energy share by 2030, as well as major investment in the energy industry, including hydropower, wind farms, solar photovoltaic facilities, and hydrogen energy.
Cyprus
Denmark
In 2019, Denmark passed a law pledging to reduce GHG emissions by 70% by 2030 from the 1990 level, and to achieve zero emissions by 2050. The law includes a strong monitoring system and intermediate targets every 5 years, as well as a pledge to support climate action in other countries and to consider climate impacts in diplomatic and economic relations with other countries.
Greenland is an autonomous territory within Denmark. In 2021, Greenland banned all new oil and gas exploration on its territory; its government explained the decision by saying that the "price of oil extraction is too high".
Finland
France
Germany
Iceland
Iceland has a target of becoming carbon neutral by 2040. It wants to reduce its greenhouse gas emissions by 40% by the year 2030.
Ireland
Italy
In 2019, Italy became the first country in the world to introduce mandatory lessons about sustainability and climate change. The lessons will be taught in all schools, for ages 6–19, one hour each week. According to the European Investment Bank climate survey from 2020, 70% of Europeans have either switched to a green energy supplier or are prepared to do so; in Italy, this ratio is 82%.
Netherlands
Norway
Portugal
Portugal is beginning to promote climate action and support UN Sustainable Development Goals through various projects. Portugal Blue is a collaboration formed in October 2020 by the EIB Group, Banco Português de Fomento, and the Portuguese government (via Fundo Azul) to boost investment in the blue economy. The cooperation seeks to raise approximately €80 million in finance from public and institutional investors via venture capital and private equity funds, focused on blue economy funds that are entirely focused on ocean sustainability and climate action.
Russia
Sweden
Spain
Turkey
Ukraine
The EU is trying to support a move away from coal.
United Kingdom
| Physical sciences | Climate change | Earth science |
36425227 | https://en.wikipedia.org/wiki/Raised%20field | Raised field | In agriculture, a raised field is a large, cultivated elevation, typically bounded by water-filled ditches, that is used to allow cultivators to control environmental factors such as moisture levels, frost damage, and flooding. Examples of raised field agriculture can be found among some Pre-Hispanic cultures of Latin America, such as those from tropical lowlands and the Budi Lake Mapuche.
Pre-Hispanic raised fields are known from the region near Santa Cruz de Mompox in northern Colombia and in the Llanos de Moxos region of lowland Bolivia. In highland Bolivia, where this was utilized by the Tiwanaku culture near Lake Titicaca, this technique is known as Waru Waru or camellones. Ancient raised-field agriculture has also been documented in Central America at Pulltrouser Swamp in Belize, where it was practiced by the Maya civilization. Toltec and Aztec people also practiced raised-field agriculture on the shore of Lake Texcoco, where these fields were known as chinampas.
| Technology | Buildings and infrastructure | null |
49608407 | https://en.wikipedia.org/wiki/Virtual%20reality%20headset | Virtual reality headset | A virtual reality headset (or VR headset) is a head-mounted device that uses 3D near-eye displays and positional tracking to provide a virtual reality environment for the user. VR headsets are widely used with VR video games, but they are also used in other applications, including simulators and trainers. VR headsets typically include a stereoscopic display (providing separate images for each eye), stereo sound, and sensors like accelerometers and gyroscopes for tracking the pose of the user's head to match the orientation of the virtual camera with the user's eye positions in the real world. Augmented reality (AR) headsets are VR headsets that enable the user to see and interact with the outside world. Examples of AR headsets include the Apple Vision Pro and Meta Quest 3.
VR headsets typically use at least one MEMS IMU for three degrees of freedom (3DOF) motion tracking, and optionally more tracking technology for six degrees of freedom (6DOF) motion tracking. 6DOF devices typically use a sensor fusion algorithm to merge the data from the IMU and any other tracking sources, typically either one or more external sensors, or "inside-out" tracking using outward facing cameras embedded in the headset. The sensor fusion algorithms that are used are often variants of a Kalman filter. VR headsets can support motion controllers, which similarly combine inputs from accelerometers and gyroscopes with the headset's motion tracking system.
Most headsets are reliant on a personal computer to operate. Some "standalone" headsets are based on a mobile operating system and smartphone-like hardware, allowing VR apps to run directly on the device, while also allowing VR applications to be streamed from a PC over a USB or Wi-Fi connection. Virtual reality headsets and viewers have also been designed for smartphones, where the device's screen is viewed through lenses acting as a stereoscope, rather than using dedicated internal displays.
History
VPL Research was a company that made early VR headsets in the 1980s.
The Sega VR was announced in 1991 and seen in early 1993 at the Winter CES. It was never released for consoles, but was utilized for the Sega VR-1 motion simulator arcade attraction in 1994. Another early VR headset, the Forte VFX1, was announced at CES in 1994. The VFX-1 has stereoscopic displays, 3-axis head-tracking, and stereo headphones.
Sony released the Glasstron in 1997, which has an optional positional sensor, allowing the wearer to view the surroundings, with the perspective moving as the user's head moves, giving a deep sense of immersion. These VR headsets gave MechWarrior 2 players a new visual perspective of seeing the battlefield from inside the cockpit of their craft. However, these early headsets failed commercially due to their limited technology, and they were described by John Carmack as like "looking through toilet paper tubes".
In 2012, a crowdfunding campaign began for a VR headset known as Oculus Rift; the project was led by several prominent video game developers, including John Carmack, who later became the company's CTO. In March 2014, the project's parent company Oculus VR was acquired by Facebook for $2 billion. The final consumer-oriented release of Oculus Rift began shipping on 28 March 2016.
In March 2014, Sony demonstrated a prototype headset for PlayStation 4, which was later named PlayStation VR. In 2014, Valve demonstrated some headset prototypes, which led to a partnership with HTC to produce the Vive, which focuses on "room-scale" VR environments that users can naturally navigate within and interact with. The headset uses Valve's "SteamVR" software platform. The Vive was released in April 2016 and PlayStation VR in October 2016.
Google released a series of specifications and associated DIY kits for virtual reality viewers known as Google Cardboard; these viewers are capable of being constructed using low-cost materials (and a smartphone with a gyroscope), such as cardboard (hence the naming). Samsung Electronics partnered with Oculus VR to co-develop the Samsung Gear VR (which is only compatible with some Samsung Galaxy devices). LG Electronics developed a headset with dedicated displays for its LG G5 smartphone known as LG 360 VR. In March 2017, Microsoft launched a platform for VR and mixed reality headsets running on Windows 10 known as Windows Mixed Reality, with VR headsets from multiple partners including PC makers Acer, Dell, HP Inc., and Lenovo.
In 2018, Oculus released the Oculus Go, a standalone headset capable of running VR apps on embedded mobile computing hardware, thus needing neither a PC nor an inserted smartphone to operate. In June 2019, Valve released its own in-house SteamVR headset, the Valve Index. In an October 2019 report, Sony, Facebook (Oculus), and HTC were identified by TrendForce as the three largest manufacturers of VR hardware. 2019 also saw Facebook release the first-generation Oculus Quest, a successor to the Oculus Go concept that supports motion controllers and 6DOF positional tracking.
Technology
Resolution and display quality
Several optical and display qualities affect how an individual perceives image quality and experiences the virtual world. Image clarity depends on the display resolution, optic quality, refresh rate, and field of view.
Because virtual reality headsets stretch a single display across a wide field of view (up to 110° for some devices, according to manufacturers), the magnification factor makes flaws in display technology much more apparent. One issue is the so-called screen-door effect, where the gaps between rows and columns of pixels become visible, similar to looking through a screen door. This was especially noticeable in earlier prototypes and development kits, which had lower resolutions than the retail versions.
Optics
The lenses of the headset are responsible for mapping the up-close display to a wide field of view, while also providing a more comfortable distant point of focus. One challenge with this is providing consistency of focus: because eyes are free to turn within the headset, it is important to avoid having to refocus to prevent eye strain.
Fresnel lenses are commonly used in virtual reality headsets due to their compactness and light weight. Rather than being made from a single thick piece of material, a Fresnel lens is divided into concentric sections, allowing a wide field of view from a thin, light lens. One drawback is that the ridges between sections can become visible when the headset is not properly aligned on the head.
The lenses introduce distortion and chromatic aberration, which are typically corrected in software. The lenses can also be adjusted dynamically to account for a user's eyeglass prescription so that the user can use the headset without corrective eyeglasses.
Latency requirements
Virtual reality headsets have significantly stricter requirements for latency—the time from a change in input to its visual effect—than ordinary video games. If the system is too sluggish to react to head movement, it can cause the user to experience virtual reality sickness, a kind of motion sickness. According to a Valve engineer, the ideal latency would be 7–15 milliseconds.
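To make these latency figures concrete, here is a back-of-the-envelope calculation (our own illustration, not from the source) of how far the displayed image lags behind the head during a fast turn:

```python
# Hypothetical illustration: angular lag of the displayed image when the
# head turns quickly. A brisk head turn can exceed 100 degrees per second.
head_speed_deg_s = 100.0
for latency_ms in (7, 15, 50):
    lag_deg = head_speed_deg_s * latency_ms / 1000.0
    print(f"{latency_ms:>3} ms motion-to-photon latency -> ~{lag_deg:.1f} deg of lag")
```

At 50 ms the virtual world would trail every head movement by several degrees, one mechanism behind virtual reality sickness; the 7–15 ms target keeps the lag at roughly a degree or less.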
The graphics processing unit (GPU) also needs to be powerful enough to render frames at the required rate. Oculus cited the limited processing power of the Xbox One and PlayStation 4 as the reason it targeted the PC gaming market with its first devices.
Foveated rendering is a newer technique for reducing the rendering workload. It uses eye tracking hardware to determine where the user is looking and reduces the rendering resolution farther from the user's gaze. This can be unnoticeable to the user because human peripheral vision is far less sensitive than the fovea.
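As a sketch of the idea, the following Python function maps angular distance from the gaze point to a rendering-resolution multiplier. The constants (`fovea_deg`, `falloff`) are made up for illustration; real systems tune the falloff per display and eye tracker.

```python
import numpy as np

def foveation_scale(eccentricity_deg, fovea_deg=5.0, falloff=0.05):
    """Illustrative resolution multiplier for foveated rendering.

    Full resolution (1.0) within an assumed 5-degree foveal region,
    then a smooth quadratic falloff with angular distance from the
    gaze point.
    """
    excess = np.maximum(0.0, eccentricity_deg - fovea_deg)
    return 1.0 / (1.0 + falloff * excess**2)

for ecc in (0, 5, 15, 30, 50):
    print(f"{ecc:>2} deg from gaze -> render at {foveation_scale(ecc):.0%} resolution")
```

Because pixels far from the gaze point dominate the display area but contribute little perceived detail, even a gentle falloff like this one can cut the shading workload by a large fraction.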
Uses in various fields
Medical training and diagnostics
Virtual reality headsets are currently being used to train medical students for surgery. They allow students to perform essential procedures in a virtual, controlled environment. Students perform surgeries on virtual patients, acquiring the skills needed to operate on real patients, and can revisit the surgeries from the perspective of the lead surgeon.

Traditionally, students had to participate in surgeries and often missed essential parts. With VR headsets, students can instead watch surgical procedures from the perspective of the lead surgeon without missing essential steps, and can pause, rewind, and fast-forward a surgery. They can also refine their techniques in a realistic but risk-free environment.
Besides training purposes, augmented reality headsets are also already being used for image-guided surgery.
VR headset mounted smartphones have been used to capture high-quality videos and images of the retina for documenting peripheral retinal lesions.
Military training
Virtual reality headsets have been used by the United States Armed Forces. It is a particularly useful tool for training military personnel without putting them in harm's way.
The headset allows military personnel to interact with virtual characters, talking with them and performing a range of actions, which makes the virtual world feel closer to the real one. The approach has both advantages and disadvantages. One disadvantage is that the headset is designed for a cool indoor area away from heat, and is typically worn without full military equipment, so the experience differs from basic training in the field. The advantages are that scenarios can be repeated multiple times and that costs are lower, since little military equipment is needed.
| Technology | User interface | null |
32253448 | https://en.wikipedia.org/wiki/Telluric%20iron | Telluric iron | Telluric iron, also called native iron, is iron that originated on Earth, and is found in a metallic form rather than as an ore. Telluric iron is extremely rare, with only one known major deposit in the world, located in Greenland.
Introduction
With the exception of the molten core, nearly all of the iron on Earth is found in the form of iron ores rather than as metallic iron. According to one theory among others, the Earth's metallic iron was transformed into iron oxides during the Great Oxidation Event, beginning roughly 2 billion years ago. Until the late 1800s, native metallic iron was only a matter of speculation outside of isolated Greenland; the only other known metallic iron on Earth came from meteorites deposited from outer space.

Telluric iron is so named after the Latin word Tellus, meaning "Earth" (the planet, as opposed to terra meaning "earth": the land, ground or soil), combined with the suffix -ic meaning "of" or "born from", differentiating it from meteorites. Telluric iron resembles meteoric iron in that it contains both a significant amount of nickel and Widmanstätten structures. However, telluric iron typically contains only around 3% nickel, which is too low for meteorites, of which none have been found with less than 5%. There are two types of telluric iron. Type 1 and type 2 contain comparable amounts of nickel and other impurities; the main difference between them is the carbon content, which greatly affects the hardness, workability, and melting point of the metal.
Material properties
Telluric iron is metallic iron that formed within the Earth's mantle and crust. Although minor deposits of telluric iron have been found around the world, the west shores of Greenland hold the only known major deposits. These deposits vary drastically in shape and composition, both within a single region and between regions such as Uivfaq, Asuk, Blaafjeld, and Mellemfjord. One common factor is that all Greenlandic deposits tend to be found in dikes (lava-filled fractures in the bedrock) or in extrusions where molten rock was able to flow out onto the surface. Another commonality is that all deposits are found in association with graphite-rich feldspar, which likely contributed to the high carbon content and low oxide presence in the metal. It is unknown whether the metal simply escaped being oxidized along with the rest of Earth's iron, or whether it began as beds of ore and coal that subducted and were then naturally smelted in the lava, the carbon-rich graphitic feldspar providing a reducing environment.
Telluric iron in Greenland is unique in that it can be found in nearly all phases of iron-carbon alloys, and with drastically varying crystalline structures. In some rock it is found mixed with basalt as very small grains with sharp corners and irregular shapes, whereas in other rock the small, grain-sized droplets in the molten magma were able to coalesce into larger, pea-sized droplets that crystallized with a mostly spherical or oblong shape. In still others, the dike or extrusion may be made almost entirely of very high-carbon cast iron, which could more easily coalesce within the magma and flow into cracks due to its lower viscosity and melting point. This cast iron is often crusted with or contains inclusions of basalt, as it extruded out of the ground as very large, globular masses within the lava, from which large boulders formed through natural erosion of the surrounding basalt.
Telluric iron is largely divided into two groups depending on the carbon content. Type 1 is a cast iron typically containing over 2.0% carbon, while type 2 ranges somewhere between wrought iron and a eutectoid steel. Both types weather the elements very well, but tend to decompose and crumble quickly in the dry, controlled atmosphere of a museum, with type 2 far more prone to this kind of damage.
Type 1
Type 1 telluric iron is a white nickel cast iron containing a significant amount of carbon (1.7 to 4%) and 0.05 to 4% nickel; it is very hard and brittle and does not respond well to cold working. The structure of type 1 consists mainly of pearlite and cementite or cohenite, with inclusions of troilite and silicate. The individual ferrite grains are typically about a millimeter in size. Although the composition of the grains may vary, even within the same grain, they are mostly composed of fairly pure nickel-ferrite. The ferrite grains are connected by cementite laminations, typically 5–25 micrometers thick, forming the pearlite.
Type 1 is found as massive extrusions or very large boulders, typically ranging from a few tons to tens of tons. The metal could not be cold worked by the ancient Inuit (the local inhabitants of Greenland), and it proves extremely difficult to machine even with modern tools; machining of type 1 is perhaps best accomplished with a carborundum wheel and water cooling. Type 1 may, however, have been used by the Inuit as hammer and anvil stones.
When sawed in half, boulders of type 1 tend to have a thick outer shell of cast iron that can barely be broken with pneumatic jackhammers, but inside they have a much more brittle structure of iron grains in an almost powdery form, sintered together into a porous, sponge-iron type of material that pulverizes at the strike of a hammer.
Type 2
Type 2 telluric iron also contains around 0.05 to 4% nickel, but typically less than 0.7% carbon. Type 2 is a malleable nickel-iron which responds well to cold working. The carbon and nickel content have a great effect on the final hardness of the cold-worked piece.
Type 2 is found as small grains mixed within basalt rock. The grains are usually 1–5 millimeters in diameter. The grains are usually found individually, separated by the basalt, although they are sometimes sintered together to form larger aggregates. The larger pieces also contain small amounts of cohenite, ilmenite, pearlite, and troilite. Type 2 was used by the Inuit to make items such as knives and ulus. The basalt was usually crushed in order to release the pea-sized grains, which were then hammered into discs about the size of coins. The metal is very soft and can be hammered into very thin plates. These flat discs were usually inserted into long slits carved into bone handles, in rows so that they slightly overlapped each other, forming an edge that resembled a combination of a knife and a saw (an inverted scalloped edge).
History
Aside from a very small deposit of telluric iron in Kassel, Germany, which has now been depleted, and a few other minor deposits from around the world, the only known major deposits exist in and nearby the area of Disko Bay, in Greenland. The material was found in the volcanic plains of basalt rock, and used by the local Inuit to make cutting edges for tools like knives and ulus. The Inuit were the only people to make practical use of telluric iron.
In 1870, Adolf Erik Nordenskiöld discovered large boulders of iron near the Disko Bay area of Greenland. Knowing that the Inuit had made tools from the Cape York meteorite, mainly through Sir John Ross' discovery that the natives of Greenland used iron knives, Nordenskiöld landed at Fortune Bay on Disko Island to search for the material. The Inuit had told Ross that they got the iron from high on a mountain, at a site where two large boulders lay. One was very hard and could not be broken, but the other was chipped into smaller pieces from which balls of iron were extracted and hammered into flat discs for the knives. Nordenskiöld searched unsuccessfully for the site, until being led by some of the local Inuit to a place called Uivfaq, where large masses of metallic iron were strewn about the area. He assumed that the metal was of meteoric origin, since it contained significant amounts of nickel and displayed Widmanstätten patterns, as meteoric iron does. Most scientists at the time believed that no un-oxidized telluric iron existed, and few questioned Nordenskiöld's finding.
Gustav Nauckhoff made an expedition to Greenland in 1871. Armed with dynamite and lifting equipment, his expedition collected three large samples of telluric iron, also believed to be meteoric per Nordenskiöld's examination, and brought them back to Europe for further study. These samples can currently be found in Sweden, Denmark, and Finland: a 25-ton block rests outside the Riksmuseum in Stockholm, a 6.6-ton block outside the Geological Museum in Copenhagen, and a 3-ton block in the Museum of Natural History in Kumpula, Helsinki.
Accompanying Nauckhoff in 1871 was K. J. V. Steenstrup. Due to circumstances like the shape of the boulders, which often had sharp corners or jagged edges that are not characteristic of meteorites (which ablate considerably during atmospheric entry), or the fact that many had areas that were encrusted with basalt, Steenstrup disagreed with Nordenskiöld about the origin of the boulders, and set out on an expedition of his own in 1878. In 1879, Steenstrup first identified the type 2 iron, showing that it also contained Widmanstätten structures. Steenstrup later reported what he found:
In the autumn of 1879, I made a discovery in connection with this matter, for in an old grave at Ekaluit ... I found 9 pieces of basalt containing round balls and irregular pieces of metallic iron. These pieces were lying together with bone knives, similar to those brought home by Ross, as well as with the usual stone tools ... whereas the 9 pieces of basalt with the iron balls were evidently the material for the bone knives. This iron is soft and keeps well in the air, from which reason it is fit for use in the manner described by Ross. The rock in which the iron appears is a typical, large-grained felspar-basalt. The discovery has a double significance, firstly, because it is the first time we have seen the material out of which the Esquimaux made artificial knives, and secondly, because it showed that they have used telluric iron for that purpose.
After the discovery in the grave at Ekaluit, Steenstrup found many large outcrops of basalt containing the type 2 iron. Since the type 2 grains are embedded within volcanic basalt that matches the underlying bedrock, Steenstrup was able to show that the iron was from terrestrial, or telluric, sources. In his report, Steenstrup added,
This peculiar layer of basalt is filled from top to bottom with iron-grains of all sizes from a fraction of a millimeter to a length of 18 mm with a breadth of 14 mm, which is the greatest I have found. ... When polished, this iron shows beautiful Widmannstätten figures. ... Metallic nickel-iron with Widmannstätten figures has now been proved to be also a telluric mineral, and the presence of nickel together with a certain crystalline structure are consequently not sufficient to give the character of meteorites to loose iron blocks.
Steenstrup's findings were later confirmed by meteorite expert J. Lawrence Smith in 1879, and then by Joh Lorenzen in 1882. The extremely rare telluric iron found in western Greenland has been under study ever since.
Occurrence
In addition to the Disko Island deposit, native iron has been reported from Fortune Bay, Mellemfjord, Asuk, and other locations along Greenland's west coast. Other locations include:
Ben Breck, Scotland in granite with magnetite
in County Antrim, Northern Ireland
in basalt at Bühl, near Ahnatal-Weimar, Hesse, and associated with nodules of pyrite within limestone at Mühlhausen, Thuringia, Germany
near Rivne, Volhynia, Ukraine
in trachyte at Auvergne, France
in Russia at Grushersk in the Don district southern Urals associated with pyrite; in the Huntukungskii (Khungtukun) massif, Krasnoyarsk Kray; and on the Tolbachik fissure volcano on the Kamchatka Peninsula
in the Hatrurim Formation, Negev, Israel
In the United States, occurrences have been reported from coal beds near Cameron, Clinton County, Missouri, and from Carboniferous shale near New Brunswick, Somerset County, New Jersey
In Ontario it has been reported from Cameron Township, Nipissing District, and on St. Joseph Island in Lake Huron.
Native nickel-iron alloys with compositions from Ni3Fe to Ni2Fe occur as placer deposits derived from ultramafic rocks. Awaruite was described in 1885 from New Zealand.
| Physical sciences | Minerals | Earth science |
31248290 | https://en.wikipedia.org/wiki/Slab%20%28geology%29 | Slab (geology) | In geology, the slab is a significant constituent of subduction zones.
Subducting slabs drive plate tectonics by pulling along the lithosphere to which they are attached, in a process known as slab pull, and by inducing currents in the mantle via slab suction. The slab affects the convection and evolution of the Earth's mantle through the insertion of hydrous oceanic lithosphere. Dense oceanic lithosphere sinks into the Earth's mantle, while lightweight continental lithospheric material produces active continental margins and volcanic arcs, generating volcanism. Recycling of the subducted slab promotes volcanism through flux melting of the mantle wedge. Slab motion can cause dynamic uplift and subsidence of the Earth's surface, forming shallow seaways and potentially rearranging drainage patterns.
Subducted slabs can be inferred from seismic imaging of subsurface geologic features. Subducting slabs are dynamic; slab characteristics such as slab temperature evolution, flat slabs, deep slabs, and slab detachment are expressed globally near subduction zones. The temperature gradient of a subducted slab depends on the age and thermal structure of the oceanic plate. Slabs subducting at a low angle (less than 30 degrees) are considered flat slabs, found primarily beneath southern China and the western United States. The Mariana Trench is an example of a deep slab, whose steep subduction angle has created the deepest trench in the world. Slab breakoff occurs during a collision between oceanic and continental lithosphere, allowing a slab tear; an example of slab breakoff occurs within the Himalayan subduction zone.
| Physical sciences | Tectonics | Earth science |
26626775 | https://en.wikipedia.org/wiki/Fig | Fig | The fig is the edible fruit of Ficus carica, a species of small shrub in the flowering plant family Moraceae, native to the Mediterranean region, together with western and southern Asia. It has been cultivated since ancient times and is now widely grown throughout the world. Ficus carica is the type species of the genus Ficus, containing over 800 tropical and subtropical plant species.
A fig plant is a small deciduous tree or large shrub growing up to tall, with smooth white bark. Its large leaves have three to five deep lobes. Its fruit (referred to as syconium, a type of multiple fruit) is tear-shaped, long, with a green skin that may ripen toward purple or brown, and sweet soft reddish flesh containing numerous crunchy seeds. The milky sap of the green parts is an irritant to human skin. In the Northern Hemisphere, fresh figs are in season from late summer to early autumn. They tolerate moderate seasonal frost and can be grown even in hot-summer continental climates.
Figs can be eaten fresh or dried, or processed into jam, rolls, biscuits and other types of desserts. Since ripe fruit does not transport and keep well, most commercial production is in dried and processed forms. Raw figs contain roughly 80% water and 20% carbohydrates, with negligible protein, fat and micronutrient content. They are a moderate source of dietary fiber.
In 2018, world production of raw figs was 1.14 million tonnes, led by Turkey and North African countries (Egypt, Morocco, and Algeria) as the largest producers, collectively accounting for 64% of the total.
Etymology
The word fig, first recorded in English in the 13th century, derives from (Old) French figue, itself from Occitan (Provençal) figa, from Romance *fica, from Classical Latin ficus (fig or fig-tree). Italian has fico, directly derived from Latin ficus. The name of the caprifig, Ficus caprificus Risso, is derived both from Latin caper, genitive capri (he-goat) and English fig.
Biology
Description
Ficus carica is a gynodioecious, deciduous tree or large shrub that grows up to tall, with smooth white bark. Its fragrant leaves are long and wide, and are deeply lobed (three or five lobes).
The fig fruit develops as a hollow, fleshy structure called the syconium that is lined internally with numerous unisexual flowers. The tiny flowers bloom inside this cup-like structure. Although commonly called a fruit, the syconium is botanically an infructescence, a type of multiple fruit. The small fig flowers and later small single-seeded (true) fruits line its interior surface. A small opening or ostiole, visible on the middle of the fruit, is a narrow passage that allows the specialized fig wasp, Blastophaga psenes, to enter the inflorescence and pollinate the flowers, after which each fertilized ovule (one per flower, in its ovary) develops into a seed. At maturity, these 'seeds' (actually single-seeded fruits) line the inside of each fig.
The edible mature syconium develops into a fleshy false fruit bearing the numerous one-seeded fruits, which are technically drupelets. The whole fig fruit is long, with a green skin that sometimes ripens toward purple or brown. Ficus carica has milky sap, produced by laticifer cells. The sap of the green parts is an irritant to human skin.
Habitat
The common fig tree has been cultivated since ancient times and grows wild in dry and sunny locations with deep and fresh soil, and in rocky locations from sea level to 1,700 metres in elevation. It prefers relatively porous and freely draining soil, and can grow in nutritionally poor soil. Unlike other fig species, Ficus carica does not always require pollination by a wasp or from another tree, but it can be pollinated by the fig wasp, Blastophaga psenes, to produce seeds. Fig wasps are not present to pollinate in colder regions such as the British Isles.
The species has become naturalized in scattered locations in Asia and North America.
The plant tolerates seasonal drought, and the Middle Eastern and Mediterranean climates are especially suitable to it. Situated in a favorable habitat, mature specimens can grow to considerable size as large, dense, shade trees. Its aggressive root system precludes its cultivation in many urban locations, yet in nature this characteristic helps the plant to root in the most inhospitable locations. Having a great need of water, it is mostly a phreatophyte that extracts the needed water from sources in or on the ground. Consequently, it frequently grows in locations with standing or running water, e.g. in valleys of rivers and in ravines that collect water. The deeply rooted plant searches for groundwater in aquifers, ravines, or cracks in rocks. With access to this water, the tree cools the hot environments in which it grows, thus producing fresh and pleasant habitat for many animals that shelter in its shade during periods of intense heat.
The mountain or rock fig () is a wild variety, tolerant of cold dry climates, of the semi-arid rocky montane regions of Iran, especially in the Kūhestān mountains of Khorasan.
Ecology
Ficus carica is dispersed by birds and mammals that scatter their seeds in droppings. Fig fruit is an important food source for much of the fauna in some areas, and the tree owes its expansion to those that feed on its fruit. The common fig tree also sprouts from the root and stolon tissues.
Cultivation
From ancient times
The edible fig is one of the first plants that were cultivated by humans. Nine subfossil figs of a parthenocarpic (and therefore sterile) type dating to about 9400–9200 BC were found in the early Neolithic village Gilgal I (in the Jordan Valley, 13 km north of Jericho). The find precedes the domestication of wheat, barley, and legumes, and may thus be the first known instance of agriculture. It is proposed that this sterile but desirable type was planted and cultivated intentionally, one thousand years before the next crops were domesticated (wheat and rye). In ancient Palestine, fig-cakes were often produced from selected ripe figs.
Figs were widespread in ancient Greece, and their cultivation was described by both Aristotle and Theophrastus. Aristotle noted that as in animal sexes, figs have individuals of two kinds, one (the cultivated fig) that bears fruit, and one (the wild caprifig) that assists the other to bear fruit. Further, Aristotle recorded that the fruits of the wild fig contain psenes (fig wasps); these begin life as larvae, and the adult psen splits its "skin" (pupa) and flies out of the fig to find and enter a cultivated fig, saving it from dropping. Theophrastus observed that just as date palms have male and female flowers, and that farmers (from the East) help by scattering "dust" from the male onto the female, and as a male fish releases his milt over the female's eggs, so Greek farmers tie wild figs to cultivated trees. They do not say directly that figs reproduce sexually, however.
Figs were also a common food source for the Romans. Cato the Elder, in his c. 160 BC De Agri Cultura, lists several strains of figs grown at the time he wrote his handbook: the Mariscan, African, Herculanean, Saguntine, and the black Tellanian. The fruits were used, among other things, to fatten geese for the production of a precursor of foie gras. Rome's first emperor, Augustus, was reputed to have been poisoned with figs from his garden smeared with poison by his wife Livia. For this reason, or perhaps because of her horticultural expertise, a variety of fig known as the Liviana was cultivated in Roman gardens.
It was cultivated from Afghanistan to Portugal, and was also grown in Pithoragarh in the Kumaon hills of India. From the 15th century onwards, it was grown in areas including Northern Europe and the New World. In the 16th century, Cardinal Reginald Pole introduced fig trees to Lambeth Palace in London.
In 1769, Spanish missionaries led by Junipero Serra brought the first figs to California. The Mission variety, which they cultivated, is still popular. The fact that it is parthenocarpic (self-pollinating) made it an ideal cultivar for introduction.
The Kadota cultivar has been claimed to be even older, supposedly mentioned by the Roman naturalist Pliny the Elder in the 1st century AD. However, the Kadota name did not exist in the era of Pliny the Elder, nor is it mentioned in his works; moreover, only twenty-nine varieties of fig are recorded there (Pliny the Elder, The Natural History, translated by John Bostock and H.T. Riley, Book XV, Chap. 19: "Twenty-nine varieties of the fig"). The Kadota name was in fact created in the early 20th century in California, USA, to name a "sport", or genetic deviation, of a Dotatto fig tree, as documented in The Kadota Fig: A Treatise on Its Origin, Planting and Care by W. Sam Clark.
Modern
The common fig is grown for its edible fruit throughout the temperate world. It is also grown as an ornamental tree, and in the UK the cultivars 'Brown Turkey' and 'Ice Crystal' (mainly grown for its unusual foliage) have gained the Royal Horticultural Society's Award of Garden Merit.
Figs are also grown in Germany, mainly in private gardens inside built-up areas; there is no commercial fig cultivation. The Palatinate region in the German southwest has an estimated 80,000 fig trees. The variety Brown Turkey is the most widespread in the region. About a dozen fairly widespread varieties are hardy enough to survive winter outdoors, mostly without special protection. There are even two local varieties, "Martinsfeige" and "Lussheim", which may be the hardiest varieties in the region.
As the population of California grew, especially after the gold rush, a number of other cultivars were brought there by individuals and nurserymen from the east coast of the US and from France and England. By the end of the 19th century, it became apparent that California had the potential to be an ideal fig-producing state because of its Mediterranean-like climate and its latitude of 38 degrees, which lines San Francisco up with İzmir, Turkey. G. P. Rixford first brought true Smyrna figs to California in 1880. The most popular cultivar of Smyrna-type fig is the Calimyrna, a name that combines "California" and "Smyrna". The cultivar, however, was not produced by a breeding program; instead it came from one of the cuttings brought to California in the latter part of the 19th century. It is identical to the cultivar Lob Injir, which has been grown in Turkey for centuries.
Figs can be found in continental climates with hot summers as far north as Hungary and Moravia. Thousands of cultivars, most named, have been developed as human migration brought the fig to many places outside its natural range. Fig plants can be propagated by seed or by vegetative methods. Vegetative propagation is quicker and more reliable, as it does not yield the inedible caprifigs. Seeds germinate readily in moist conditions and grow rapidly once established. For vegetative propagation, shoots with buds can be planted in well-watered soil in the spring or summer, or a branch can be scratched to expose the bast (inner bark) and pinned to the ground to allow roots to develop.
Two crops of figs can be produced each year. The first or breba crop develops in the spring on last year's shoot growth. The main fig crop develops on the current year's shoot growth and ripens in the late summer or fall. The main crop is generally superior in quantity and quality, but some cultivars such as 'Black Mission', 'Croisic', and 'Ventura' produce good breba crops.
There are three types of edible figs:
Persistent (or common) figs have all female flowers that do not need pollination for fruiting; the fruit can develop through parthenocarpic means. This is a popular horticultural fig for home gardeners. Dottato (Kadota), Black Mission, Brown Turkey, Brunswick, and Celeste are some representative cultivars.
Caducous (or Smyrna) figs require cross pollination by the fig wasp with pollen from caprifigs for the fruit to mature. If not pollinated the immature fruits drop. Some cultivars are Marabout, Inchàrio, and Zidi.
Intermediate (or San Pedro) figs set an unpollinated breba crop but need pollination for the later main crop. Examples are Lampeira, King, and San Pedro.
There are dozens of fig cultivars, including main and breba cropping varieties, and an edible caprifig (the Croisic). Varieties are often local, found in a single region of one country.
Overwintering
People of the Italian diaspora who live in cold-winter climates have the practice of burying imported fig trees to overwinter them and protect the fruiting hardwood from cold. Italian immigrants to America in the 19th century introduced this common practice in cities such as New York, Philadelphia, Boston, and Toronto, where winters are normally too cold to leave the tree exposed. The practice consists of digging a trench appropriate to the size of the specimen, some of which are more than tall, severing part of the root system, and bending the specimen into the trench. Specimens are often wrapped in waterproof material to discourage the development of mould and fungus, then covered with a heavy layer of soil and leaves. Sometimes plywood or corrugated metal is placed on top to secure the tree. In borderline climates like New York City, burying trees is no longer needed, as winter low temperatures have risen. Often specimens are simply wrapped in plastic and other insulating material, or left unprotected if planted in a sheltered site against a wall that absorbs sunlight.
Breeding
While the fig contains more naturally occurring varieties than any other tree crop, a formal breeding program was not developed until the beginning of the 20th century. Ira Condit, "High Priest of the Fig," and William Storey tested some thousands of fig seedlings in the early 20th century based at University of California, Riverside. It was then continued at the University of California, Davis. However, the fig breeding program was ultimately closed in the 1980s.
Due to insect and fungal disease pressure in both dried and fresh figs, the breeding program was revived in 1989 by James Doyle and Louise Ferguson using the germplasm established at UC Riverside by Ira Condit and William Storey. Crosses were made and two new varieties are now in production in California: the public variety "Sierra", and the patented variety "Sequoia".
Production
In 2020, world production of raw figs was 1.26 million tonnes, led by Turkey (with 25% of the world total), Egypt, Morocco, and Algeria as the largest producers collectively accounting for 62% of the total.
Food
Figs can be eaten fresh or dried, and used in jam-making. Most commercial production is in dried or otherwise processed forms, since the ripe fruit does not transport well, and once picked does not keep well. The widely produced fig roll ("Fig Newton" is a trademark of Nabisco) is a biscuit (or cookie) with a filling made from figs.
In the Northern Hemisphere, fresh figs are in season from August through to early October. Fresh figs used in cooking should be plump and soft, and without bruising or splits. If they smell sour, the figs have become over-ripe. Slightly under-ripe figs can be kept at room temperature for 1–2 days to ripen before serving. Figs are most flavorful at room temperature.
Freshly harvested figs may be dried for preservation by two distinct methods. The first is natural sun-drying, where the figs are exposed to the warmth and light of the sun. The second is oven-drying, where the figs are placed in a controlled-temperature environment within an oven. Each process has its own impact on the texture and flavor profile of the dried figs.
Nutrition
Raw figs are 79% water, 19% carbohydrates, and 1% protein, and contain negligible fat. They are a moderate source (14% of the Daily Value, DV) of dietary fiber and of food energy per 100-gram serving, and do not supply essential micronutrients in significant amounts.

When dehydrated to 30% water, figs have a carbohydrate content of 64%, a protein content of 3%, and a fat content of 1%. In a 100-gram serving, dried figs are a rich source (more than 20% DV) of dietary fiber and the essential mineral manganese (26% DV), while calcium, iron, magnesium, potassium, and vitamin K are present in moderate amounts.
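As a rough consistency check of the raw and dried figures above (the arithmetic here is ours, not the source's), the carbohydrate share of the non-water "dry matter" should be about the same before and after drying:

```python
# Values taken from the text; the dry-matter arithmetic is illustrative.
raw_water, raw_carbs = 0.79, 0.19
dried_water, dried_carbs = 0.30, 0.64

carbs_per_dry_raw = raw_carbs / (1 - raw_water)        # 0.19 / 0.21 ~ 90%
carbs_per_dry_dried = dried_carbs / (1 - dried_water)  # 0.64 / 0.70 ~ 91%

print(f"carbohydrate share of dry matter: raw {carbs_per_dry_raw:.0%}, "
      f"dried {carbs_per_dry_dried:.0%}")
```

The two shares agree closely, confirming that drying concentrates rather than changes the fruit's composition.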
In fig fruits, the levels of glucose and fructose are nearly identical, with glucose being slightly more prevalent overall, while the presence of sucrose is minimal. Still, in some varieties of figs, the fructose content can occasionally slightly surpass that of glucose.
Research and folk medicine
Phytochemicals
Figs contain diverse phytochemicals under basic research for their potential biological properties, including polyphenols, such as gallic acid, chlorogenic acid, syringic acid, (+)-catechin, (−)-epicatechin and rutin. Fig color may vary between cultivars due to various concentrations of anthocyanins, with cyanidin-3-O-rutinoside having particularly high content.
Folk medicine
In some old Mediterranean folk practices, the milky sap of the fig plant was used to soften calluses, remove warts, and deter parasites.
Since the late 1800s, syrup of figs combined with senna has been available as a laxative.
Toxicity
Like other plant species in the family Moraceae, contact with the milky sap of Ficus carica followed by exposure to ultraviolet light can cause phytophotodermatitis, a potentially serious skin inflammation. Although the plant is not poisonous per se, F. carica is listed in the FDA Database of Poisonous Plants.
Organic chemical compounds called furanocoumarins are known to cause phytophotodermatitis in humans. The common fig contains significant quantities of two furanocoumarins, psoralen and bergapten. The essential oil of fig leaves contains more than 10% psoralen, the highest concentration of any organic compound isolated from fig leaves. Psoralen appears to be the primary furanocoumarin compound responsible for fig leaf-induced phytophotodermatitis.
Psoralen and bergapten are found chiefly in the milky sap of the leaves and shoots of F. carica but not the fruits; neither compound was detected in the essential oil of fig fruits. Thus there is no conclusive evidence that fig fruits cause phytophotodermatitis.
Cultural significance
Babylonian mythology
Babylonian Ishtar for example took the form of the divine fig tree Xikum, the "primeval mother at the central place of the earth", protectress of the saviour Tammuz. Moreover, figs and the fig tree were closely linked with female sexuality. According to Barbara Walker's encyclopedia on Goddess symbols, "This may account for the common use of the fig tree as a symbol of man's enlightenment, which was formerly supposed to come through his connection with the female principle."
Buddhism
Gautama Buddha attained enlightenment (bodhi) after meditating underneath a Ficus religiosa, known as the bodhi tree, for seven weeks (49 days) around 500 BCE. The site of enlightenment is in present-day Bodh Gaya and its bodhi tree has been replaced several times.
Judaism and Christianity
In the Biblical Book of Genesis, Adam and Eve clad themselves with fig leaves (Genesis 3:7) after eating the forbidden fruit from the tree of the knowledge of good and evil. Likewise, fig leaves, or depictions of fig leaves, have long been used to cover the genitals of nude figures in painting and sculpture, for example in Masaccio's The Expulsion from the Garden of Eden. Moreover, according to one opinion in the Talmud and the Jewish Biblical commentary, the forbidden fruit of the Tree of Knowledge in the Garden of Eden could have been a fig. There is also a Christian tradition that the Tree of Knowledge was the same fig tree Christ withers in the Gospels.
The Book of Deuteronomy specifies the fig as one of the Seven Species (Deuteronomy 8:7–8), describing the fertility of the land of Canaan. This is a set of seven plants indigenous to the Middle East that together can provide food all year round. The list is organized by date of harvest, with the fig being fourth due to its main crop ripening during summer.
The biblical quote "each man under his own vine and fig tree" (Micah 4:4) has been used to denote peace and prosperity. It was commonly quoted to refer to the life that would be led by settlers in the American West, and was used by Theodor Herzl in his depiction of the future Jewish Homeland: "We are a commonwealth. In form it is new, but in purpose very ancient. Our aim is mentioned in the First Book of Kings: 'Judah and Israel shall dwell securely, each man under his own vine and fig tree, from Dan to Beersheba". United States President George Washington, writing in 1790 to the Touro Synagogue of Newport, Rhode Island, extended the metaphor to denote the equality of all Americans regardless of faith.
Islam
Sura 95 of the Qur'an is named al-Tīn (Arabic for "The Fig"), as it opens with the oath "By the fig and the olive."
Hadiths attributed to Muhammad stating that figs are descended from paradise, and that they cure hemorrhoids, are judged weak and wrongly attributed by specialists.
Fossil record
Ten fossil endocarps of †Ficus potentilloides from the early Miocene have been found in the Kristina Mine at Hrádek nad Nisou in North Bohemia, Czech Republic. These fossils are similar to the endocarps of F. carica.
| Biology and health sciences | Rosales | null |
26631468 | https://en.wikipedia.org/wiki/Silesauridae | Silesauridae | Silesauridae is an extinct family of Triassic dinosauriforms. It is most commonly considered to be a clade of non-dinosaur dinosauriforms and the sister group of dinosaurs. Some studies have instead suggested that most or all silesaurids comprised an early diverging clade or a paraphyletic grade within ornithischian dinosaurs. Silesaurids have a consistent general body plan, with a fairly long neck and legs and possibly quadrupedal habits, but most silesaurids are nonetheless known only from heavily fragmentary remains. Furthermore, they occupied a variety of ecological niches, with early silesaurids (such as Lewisuchus) being carnivorous and later taxa (such as Kwanasaurus) having adaptations for specialized herbivory. As indicated by the contents of referred coprolites, Silesaurus may have been insectivorous, feeding selectively on small beetles and other arthropods.
Classification
Silesauridae is typically considered the sister group to Dinosauria. The group was named in 2010 by Max C. Langer et al. They defined it as a branch-based clade of all archosaurs closer to Silesaurus opolensis than to either Heterodontosaurus tucki or Marasuchus lilloensis. At around the same time, Sterling J. Nesbitt et al. (2010) independently named Silesauridae as a node-based clade consisting of Lewisuchus, Silesaurus, their last common ancestor and all their descendants. Currently, both definitions encompass the same group of animals. Nesbitt et al. noted that the earlier definition by Langer et al. did not include a diagnosis, and so was not sufficient to create a ranked family-level name according to the ICZN. Therefore, the family Silesauridae is attributed to Nesbitt et al. (2010) while the clade Silesauridae is attributed to Langer et al. (2010).
Phylogenetic studies originally recovered Silesauridae as a clade sister to Dinosauria, including a variety of Triassic taxa related to Silesaurus. In some analyses the genera are recovered as successive sister taxa to dinosaurs rather than as a clade, but a group of possibly quadrupedal, herbivorous taxa is often recovered.
A large phylogenetic analysis of early dinosaurs and dinosauromorphs carried out by Matthew Baron, David Norman and Paul Barrett (2017) and published in the journal Nature recovered Silesauridae as a monophyletic sister group to Dinosauria. The study also recovered the taxon Agnosphitys within the clade Silesauridae, close to Lewisuchus and its synonymous taxon Pseudolagosuchus. A 2022 study by Norman and colleagues instead found silesaurs to be a paraphyletic group on the branch leading to traditional Ornithischia. The cladogram below is based on their study. This topology is almost identical to the one recovered by Müller & Garcia (2020) in their first iteration of the same dataset. When found as a grade of ornithischians, Amanasaurus, Ignotosaurus and Silesaurus have still been recovered as a clade, which makes them the only members of Silesauridae under those results.
The inclusion of Pisanosaurus by some studies would mean that, according to ICZN rules, the name of the family should be the older name, Pisanosauridae, which was erected by Rodolfo Casamiquela in 1967.
While Sulcimentisauria is used for all ornithischians by some authors, the original intention of the clade was as a subdivision within Silesauridae, the new use being significantly different from its original application. A solution to this could be to restrict Sulcimentisauria to only apply to a subclade of Silesauridae, where it would no longer be applied along the ornithischian stem as a clade.
(Cladograms after Martz & Small, 2019 and Müller & Garcia, 2023 omitted.)
| Biology and health sciences | Other prehistoric archosaurs | Animals |
53575775 | https://en.wikipedia.org/wiki/Aggregate%20%28geology%29 | Aggregate (geology) | In geology, particularly in mineralogy and petrology, an aggregate is a mass of mineral crystals, mineraloid particles or rock particles. Examples are dolomite, which is an aggregate of crystals of the mineral dolomite, and rock gypsum, an aggregate of crystals of the mineral gypsum. Lapis lazuli is a type of rock composed of an aggregate of crystals of many minerals including lazurite, pyrite, phlogopite, calcite, potassium feldspar, wollastonite and some sodalite group minerals.
| Physical sciences | Terrestrial geology: General | Earth science |
40659364 | https://en.wikipedia.org/wiki/Arctic%20sea%20ice%20decline | Arctic sea ice decline | Sea ice in the Arctic region has declined in recent decades in area and volume due to climate change. It has been melting more in summer than it refreezes in winter. Global warming, caused by greenhouse gas forcing, is responsible for the decline in Arctic sea ice. The decline of sea ice in the Arctic has been accelerating during the early twenty-first century, with a decline rate of 4.7% per decade (it has declined over 50% since the first satellite records). Summertime sea ice will likely cease to exist sometime during the 21st century.
The region is at its warmest in at least 4,000 years. Furthermore, the Arctic-wide melt season has lengthened at a rate of five days per decade (from 1979 to 2013), dominated by a later autumn freeze-up. The IPCC Sixth Assessment Report (2021) stated that Arctic sea ice area will likely drop below 1 million km2 in at least some Septembers before 2050. In September 2020, the US National Snow and Ice Data Center reported that the Arctic sea ice in 2020 had melted to an extent of 3.74 million km2, its second-smallest extent since records began in 1979. Earth lost 28 trillion tonnes of ice between 1994 and 2017, with Arctic sea ice accounting for 7.6 trillion tonnes of this loss. The rate of ice loss has risen by 57% since the 1990s.
Sea ice loss is one of the main drivers of Arctic amplification, the phenomenon that the Arctic warms faster than the rest of the world under climate change. It is plausible that sea ice decline also makes the jet stream weaker, which would cause more persistent and extreme weather in mid-latitudes. Shipping is more often possible in the Arctic now, and will likely increase further. Both the disappearance of sea ice and the resulting possibility of more human activity in the Arctic Ocean pose a risk to local wildlife such as polar bears.
One important aspect in understanding sea ice decline is the Arctic dipole anomaly. This phenomenon appears to have slowed down the overall loss of sea ice between 2007 and 2021, but such a trend will probably not continue.
Definitions
The Arctic Ocean is the mass of water positioned approximately above latitude 65° N. Arctic sea ice refers to the area of the Arctic Ocean covered by ice. The Arctic sea ice minimum is the day in a given year when Arctic sea ice reaches its smallest extent, occurring at the end of the summer melting season, normally during September. The Arctic sea ice maximum is the day of a year when Arctic sea ice reaches its largest extent near the end of the Arctic cold season, normally during March. Typical data visualizations for Arctic sea ice include average monthly measurements and graphs of the annual minimum or maximum extent.
Sea ice extent is defined as the area with at least 15% of sea ice cover; it is more often used as a metric than simple total sea ice area. This metric is used to address uncertainty in distinguishing open sea water from melted water on top of solid ice, which satellite detection methods have difficulty differentiating. This is primarily an issue in summer months.
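A minimal sketch of how extent differs from area on a gridded concentration product may help. The grid values, uniform cell size, and threshold handling below are illustrative only; operational products (such as those from the National Snow and Ice Data Center) are derived from satellite passive-microwave concentration fields on polar grids.

```python
import numpy as np

# Toy concentration grid: fraction of each cell covered by ice (0..1),
# with made-up values and a uniform cell area, for illustration only.
concentration = np.array([
    [0.95, 0.80, 0.40],
    [0.60, 0.10, 0.05],
    [0.20, 0.00, 0.00],
])
cell_area_km2 = 625.0  # e.g. a 25 km x 25 km grid cell

# Sea ice *extent*: total area of cells with at least 15% concentration.
extent = np.sum(concentration >= 0.15) * cell_area_km2
# Sea ice *area*: concentration-weighted sum, a different (noisier) metric.
area = np.sum(concentration) * cell_area_km2

print(f"extent = {extent:.0f} km^2, area = {area:.0f} km^2")
```

Because every cell at or above the 15% threshold counts in full, extent is less sensitive than area to melt ponds on solid ice being misread as open water, which is why it is the more commonly reported metric.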
Observations
A 2007 study found the decline to be "faster than forecasted" by model simulations.
A 2011 study suggested that it could be reconciled by internal variability enhancing the greenhouse gas-forced sea ice decline over the last few decades. A 2012 study, with a newer set of simulations, also projected rates of retreat that were somewhat less than that actually observed.
Satellite era
Observation with satellites shows that Arctic sea ice area, extent, and volume have been in decline for a few decades. The amount of multi-year sea ice in the Arctic has declined considerably in recent decades. In 1988, ice that was at least 4 years old accounted for 26% of the Arctic's sea ice. By 2013, ice that age was only 7% of all Arctic sea ice.
Scientists measured sixteen-foot (five-meter) wave heights during storms in the Beaufort Sea from mid-August until late October 2012. This is a new phenomenon for the region, since a permanent sea ice cover normally prevents wave formation. Wave action breaks up sea ice, and thus could become a feedback mechanism driving sea ice decline.
For January 2016, the satellite-based data showed the lowest overall Arctic sea ice extent of any January since records began in 1979. Bob Henson from Wunderground noted:
January 2016's remarkable phase transition of the Arctic oscillation was driven by a rapid tropospheric warming in the Arctic, a pattern that appears to have intensified, surpassing the so-called stratospheric sudden warming. The record-low minimum extent of Arctic sea ice, set in 2012, was 1.31 million square miles (3.387 million square kilometers); this replaced the previous record of 1.61 million square miles (4.16 million square kilometers), set on September 18, 2007. The minimum extent on 18 September 2019 was 1.60 million square miles (4.153 million square kilometers).
A 2018 study of the thickness of sea ice found a decrease of 66% or 2.0 m over the last six decades and a shift from permanent ice to largely seasonal ice cover.
Earlier data
The overall trend indicated in the passive microwave record from 1978 through mid-1995 shows that the extent of Arctic sea ice is decreasing 2.7% per decade. Subsequent work with the satellite passive-microwave data indicates that from late October 1978 through the end of 1996 the extent of Arctic sea ice decreased by 2.9% per decade. Sea ice extent for the Northern Hemisphere showed a decrease of 3.8% ± 0.3% per decade from November 1978 to December 2012.
Future ice loss
An "ice-free" Arctic Ocean, sometimes referred to as a "blue ocean event" (BOE), is often defined as "having less than 1 million square kilometers of sea ice", because it is very difficult to melt the thick ice around the Canadian Arctic Archipelago. The IPCC AR5 defines "nearly ice-free conditions" as a sea ice extent of less than 106 km2 for at least five consecutive years.
Estimating the exact year when the Arctic Ocean will become "ice-free" is very difficult, due to the large role of interannual variability in sea ice trends. In Overland and Wang (2013), the authors investigated three different ways of predicting future sea ice levels. They noted that the average of all models used in 2013 was decades behind the observations, and only the subset of models with the most aggressive ice loss was able to match the observations. However, the authors cautioned that there is no guarantee those models would continue to match the observations, and hence that their estimate of ice-free conditions first appearing in 2040s may still be flawed. Thus, they advocated for the use of expert judgement in addition to models to help predict ice-free Arctic events, but they noted that expert judgement could also be done in two different ways: directly extrapolating ice loss trends (which would suggest an ice-free Arctic in 2020) or assuming a slower decline trend punctuated by the occasional "big melt" seasons (such as those of 2007 and 2012) which pushes back the date to 2028 or further into 2030s, depending on the starting assumptions about the timing and the extent of the next "big melt". Consequently, there has been a recent history of competing projections from climate models and from individual experts.
Climate models
A 2006 paper examined projections from the Community Climate System Model and predicted "near ice-free September conditions by 2040".
A 2009 paper from Muyin Wang and James E. Overland applied observational constraints to the projections from six CMIP3 climate models and estimated nearly ice-free Arctic Ocean around September 2037, with a chance it could happen as early as 2028. In 2012, this pair of researchers repeated the exercise with CMIP5 models and found that under the highest-emission scenario in CMIP5, Representative Concentration Pathway 8.5, ice-free September first occurs between 14 and 36 years after the baseline year of 2007, with the median of 28 years (i.e. around 2035).
In 2009, a study using 18 CMIP3 climate models found that they project ice-free Arctic a little before 2100 under a scenario of medium future greenhouse gas emissions. In 2012, a different team used CMIP5 models and their moderate emission scenario, RCP 4.5 (which represents somewhat lower emissions than the scenario in CMIP3), and found that while their mean estimate avoids ice-free Arctic before the end of the century, ice-free conditions in 2045 were within one standard deviation of the mean.
In 2013, a study compared projections from the best-performing subset of CMIP5 models with the output from all 30 models after it was constrained by the historical ice conditions, and found good agreement between these approaches. Altogether, it projected ice-free September between 2054 and 2058 under RCP 8.5, while under RCP 4.5, Arctic ice gets very close to the ice-free threshold in 2060s, but does not cross it by the end of the century, and stays at an extent of 1.7 million km2.
In 2014, IPCC Fifth Assessment Report indicated a risk of ice-free summer around 2050 under the scenario of highest possible emissions.
The Third U.S. National Climate Assessment (NCA), released May 6, 2014, reported that the Arctic Ocean is expected to be ice free in summer before mid-century. Models that best match historical trends project a nearly ice-free Arctic in the summer by the 2030s.
In 2021, the IPCC Sixth Assessment Report assessed that there is "high confidence" that the Arctic Ocean will likely become practically ice-free in September before the year 2050 under all SSP scenarios.
A paper published in 2021 shows that the CMIP6 models which perform best at simulating Arctic sea ice trends project the first ice-free conditions around 2035 under SSP5-8.5, the scenario of continually accelerating greenhouse gas emissions.
When multiple CMIP6 projections are weighted, the first year of an ice-free Arctic is likely to occur during 2040–2072 under the SSP3-7.0 scenario.
Impacts on the physical environment
Global climate change
Arctic sea ice keeps the polar regions cool and has an important albedo effect on the climate. Its bright surface reflects sunlight during the Arctic summer; the dark ocean surface exposed by melting ice absorbs more sunlight and becomes warmer, which increases total ocean heat content, helps to drive further sea ice loss during the melting season, and potentially delays recovery during the polar night. Arctic ice decline between 1979 and 2011 is estimated to have been responsible for as much radiative forcing as a quarter of CO2 emissions over the same period, which is equivalent to around 10% of the cumulative CO2 increase since the start of the Industrial Revolution. When compared to the other greenhouse gases, this forcing has had the same impact as the cumulative increase in nitrous oxide, and nearly half of the cumulative increase in methane concentrations.
The effect of Arctic sea ice decline on global warming will intensify in the future as more and more ice is lost. This feedback has been accounted for by all CMIP5 and CMIP6 models, and it is included in all warming projections they make, such as the estimated warming by 2100 under each Representative Concentration Pathway and Shared Socioeconomic Pathway. They are also capable of resolving the second-order effects of sea ice loss, such as the effect on lapse rate feedback, the changes in water vapor concentrations and regional cloud feedbacks.
Ice-free summer vs. ice-free winter
In 2021, the IPCC Sixth Assessment Report said with high confidence that there is no hysteresis and no tipping point in the loss of Arctic summer sea ice. This can be explained by the increased influence of stabilizing feedbacks relative to the ice-albedo feedback: thinner sea ice leads to increased heat loss in the winter, creating a negative feedback loop that counteracts the positive ice-albedo feedback. As such, sea ice would recover during the winter even after a true ice-free summer, and if the following summer is cooler, another ice-free episode may not occur until a similarly warm year recurs. However, higher levels of global warming would delay the recovery from ice-free episodes and make them occur more often and earlier in the summer. A 2018 paper estimated that an ice-free September would occur once in every 40 years under a global warming of 1.5 degrees Celsius, but once in every 8 years under 2 degrees and once in every 1.5 years under 3 degrees.
Very high levels of global warming could eventually prevent Arctic sea ice from reforming during the Arctic winter. This is known as an ice-free winter, and it ultimately amounts to a total loss of Arctic ice throughout the year. A 2022 assessment found that unlike an ice-free summer, it may represent an irreversible tipping point. It estimated that it is most likely to occur at around 6.3 degrees Celsius of global warming, though it could potentially occur as early as 4.5 °C or as late as 8.7 °C. Relative to today's climate, an ice-free winter would add around 0.6 degrees of global warming, with a regional warming of between 0.6 and 1.2 degrees.
Amplified Arctic warming
Arctic amplification and its acceleration are strongly tied to declining Arctic sea ice: modelling studies show that strong Arctic amplification occurs only during the months when significant sea ice loss takes place, and that it largely disappears when the simulated ice cover is held fixed. Conversely, the high stability of ice cover in Antarctica, where the thickness of the East Antarctic ice sheet allows it to rise well above sea level, means that the continent has experienced no net warming over the past seven decades: ice loss in the Antarctic and its contribution to sea level rise is instead driven entirely by the warming of the Southern Ocean, which absorbed 35–43% of the total heat taken up by all oceans between 1970 and 2017.
Impacts on extreme weather
Barents Sea ice
The Barents Sea is the fastest-warming part of the Arctic, and some assessments now treat Barents Sea ice as a tipping point separate from the rest of the Arctic sea ice, suggesting that it could permanently disappear once global warming exceeds 1.5 degrees. This rapid warming also makes it easier than in any other area to detect potential connections between the state of sea ice and weather conditions elsewhere. The first study proposing a connection between sea ice decline in the Barents Sea and the neighbouring Kara Sea (BKS) and more intense winters in Europe was published in 2010, and there has been extensive research into the subject since then. For instance, a 2019 paper holds BKS ice decline responsible for 44% of the 1995–2014 central Eurasian cooling trend, far more than indicated by the models, while another study from that year suggests that the decline in BKS ice reduces snow cover in northern Eurasia but increases it in central Europe. There are also potential links to summer precipitation: a connection has been proposed between reduced BKS ice extent in November–December and greater June rainfall over South China. One paper even identified a connection between Kara Sea ice extent and the ice cover of Lake Qinghai on the Tibetan Plateau.
However, BKS ice research is often subject to the same uncertainty as the broader research into Arctic amplification/whole-Arctic sea ice loss and the jet stream, and is often challenged by the same data. Nevertheless, the most recent research still finds connections which are statistically robust, yet non-linear in nature: two separate studies published in 2021 indicate that while autumn BKS ice loss results in cooler Eurasian winters, ice loss during winter makes Eurasian winters warmer: as BKS ice loss accelerates, the risk of more severe Eurasian winter extremes diminishes while heatwave risk in the spring and summer is magnified.
Other possible impacts on weather
In 2019, it was proposed that the reduced sea ice around Greenland in autumn affects snow cover during the Eurasian winter, and that this intensifies the Korean summer monsoon and indirectly affects the Indian summer monsoon.
2021 research suggested that autumn ice loss in the East Siberian Sea, Chukchi Sea and Beaufort Sea can affect spring Eurasian temperature. Autumn sea ice decline of one standard deviation in that region would reduce mean spring temperature over central Russia by nearly 0.8 °C, while increasing the probability of cold anomalies by nearly a third.
Atmospheric chemistry
A 2015 study concluded that Arctic sea ice decline accelerates methane emissions from the Arctic tundra, with the emissions for 2005–2010 being around 1.7 million tonnes higher than they would have been with the sea ice at 1981–1990 levels. One of the researchers noted, "The expectation is that with further sea ice decline, temperatures in the Arctic will continue to rise, and so will methane emissions from northern wetlands."
Cracks in Arctic sea ice expose the seawater to the air, causing mercury in the air to be absorbed into the water. This absorption leads to more mercury, a toxin, entering the food chain where it can negatively affect fish and the animals and people who consume them. Mercury is part of Earth's atmosphere due to natural causes (see mercury cycle) and due to human emissions.
Shipping
Economic implications of ice-free summers and the decline in Arctic ice volumes include a greater number of journeys along Arctic Ocean shipping lanes during the year. This number grew from 0 in 1979 to 400–500 along the Bering Strait and more than 40 along the Northern Sea Route in 2013, and traffic through the Arctic Ocean is likely to increase further. An early study by James Hansen and colleagues suggested in 1981 that a warming of 5 to 10 °C, which they expected as the range of Arctic temperature change corresponding to doubled CO2 concentrations, could open the Northwest Passage. A 2016 study concludes that Arctic warming and sea ice decline will lead to "remarkable shifts in trade flows between Asia and Europe, diversion of trade within Europe, heavy shipping traffic in the Arctic and a substantial drop in Suez traffic. Projected shifts in trade also imply substantial pressure on an already threatened Arctic ecosystem."
In August 2017, the first ship traversed the Northern Sea Route without the use of icebreakers. Also in 2017, the Finnish icebreaker MSV Nordica set a record for the earliest crossing of the Northwest Passage. According to The New York Times, this portends more shipping through the Arctic as melting sea ice makes passage easier. A 2016 report by the Copenhagen Business School found that large-scale trans-Arctic shipping will become economically viable by 2040.
Impacts on wildlife
The decline of Arctic sea ice will give humans access to previously remote coastal zones. This access is expected to disturb terrestrial ecosystems and put marine species at risk.
Sea ice decline has been linked to boreal forest decline in North America and is expected to culminate in an intensifying wildfire regime in this region. The annual net primary production of the Eastern Bering Sea was enhanced by 40–50% through phytoplankton blooms during warm years of early sea ice retreat.
Polar bears are turning to alternative food sources because Arctic sea ice melts earlier and freezes later each year. They have less time to hunt their historically preferred prey of seal pups, and must spend more time on land hunting other animals. The resulting diet is less nutritious, leading to reduced body size and reproduction, and thus to population decline in polar bears.
The Arctic refuge provides key denning habitat for polar bears, and the melting Arctic sea ice is contributing to the decline of the species. Only about 900 bears remain in the Arctic refuge national conservation area.
As Arctic ice decays, microorganisms produce substances with various effects on melting and stability. Certain types of bacteria in rotten ice pores produce polymer-like substances, which may influence the physical properties of the ice. A team from the University of Washington studying this phenomenon hypothesizes that the polymers may have a stabilizing effect on the ice. However, other scientists have found that algae and other microorganisms help create a substance, cryoconite, or other pigments that accelerate rotting and promote the growth of the microorganisms.
| Physical sciences | Climate change | Earth science |
43537463 | https://en.wikipedia.org/wiki/Size | Size | Size in general is the magnitude or dimensions of a thing. More specifically, geometrical size (or spatial size) can refer to three geometrical measures: length, area, or volume. Length can be generalized to other linear dimensions (width, height, diameter, perimeter).
Size can also be measured in terms of mass, especially when assuming a density range.
In mathematical terms, "size is a concept abstracted from the process of measuring by comparing a longer to a shorter". Size is determined by the process of comparing or measuring objects, which results in the determination of the magnitude of a quantity, such as length or mass, relative to a unit of measurement. Such a magnitude is usually expressed as a numerical value of units on a previously established spatial scale, such as meters or inches.
The sizes with which humans tend to be most familiar are body dimensions (measures of anthropometry), which include measures such as human height and human body weight. These measures can, in the aggregate, allow the generation of commercially useful distributions of products that accommodate expected body sizes, as with the creation of clothing sizes and shoe sizes, and with the standardization of door frame dimensions, ceiling heights, and bed sizes. The human experience of size can lead to a psychological tendency towards size bias, wherein the relative importance or perceived complexity of organisms and other objects is judged based on their size relative to humans, and particularly whether this size makes them easy to observe without aid.
Human perception
Humans most frequently perceive the size of objects through visual cues. One common means of perceiving size is to compare the size of a newly observed object with the size of a familiar object whose size is already known. Binocular vision gives humans the capacity for depth perception, which can be used to judge which of several objects is closer, and by how much, which allows for some estimation of the size of the more distant object relative to the closer object. This also allows for the estimation of the size of large objects based on comparison of closer and farther parts of the same object. The perception of size can be distorted by manipulating these cues, for example through the creation of forced perspective.
Some measures of size may also be determined by sound. Visually impaired humans often use echolocation to determine features of their surroundings, such as the size of spaces and objects. However, even humans who lack this ability can tell if a space that they are unable to see is large or small from hearing sounds echo in the space. Size can also be determined by touch, which is a process of haptic perception.
The sizes of objects that cannot readily be measured merely by sensory input may be evaluated with other kinds of measuring instruments. For example, objects too small to be seen with the naked eye may be measured when viewed through a microscope, while objects too large to fit within the field of vision may be measured using a telescope, or through extrapolation from known reference points. However, even very advanced measuring devices may still present a limited field of view.
Terminology
Objects being described by their relative size are often described as being comparatively big and little, or large and small, although "big and little tend to carry affective and evaluative connotations, whereas large and small tend to refer only to the size of a thing". A wide range of other terms exist to describe things by their relative size, with small things being described for example as tiny, miniature, or minuscule, and large things being described as, for example, huge, gigantic, or enormous. Objects are also typically described as tall or short specifically relative to their vertical height, and as long or short specifically relative to their length along other directions. People who have experienced excessive growth and height significantly above average are described as having gigantism. Outside of humans, deep-sea gigantism (or abyssal gigantism) is the tendency for species of deep-sea dwelling animals to be larger than their shallower-water relatives across a large taxonomic range, and island gigantism (or insular gigantism) is a biological phenomenon in which the size of an animal species isolated on an island increases dramatically in comparison to its mainland relatives.
Although the size of an object may be reflected in its mass or its weight, each of these is a different concept. In scientific contexts, mass refers loosely to the amount of "matter" in an object (though "matter" may be difficult to define), whereas weight refers to the force experienced by an object due to gravity. An object with a mass of 1.0 kilogram will weigh approximately 9.81 newtons on the surface of the Earth, its mass multiplied by the gravitational field strength (the newton is the unit of force, while the kilogram is the unit of mass). Its weight will be less on Mars (where gravity is weaker), more on Saturn, and negligible in space when far from any significant source of gravity, but it will always have the same mass. Two objects of equal size, however, may have very different mass and weight, depending on the composition and density of the objects. By contrast, if two objects are known to have roughly the same composition, then some information about the size of one can be determined by measuring the size of the other and determining the difference in weight between the two. For example, if two blocks of wood are equally dense, and it is known that one weighs ten kilograms and the other weighs twenty kilograms, and that the ten kilogram block has a volume of one cubic foot, then it can be deduced that the twenty kilogram block has a volume of two cubic feet.
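The mass/weight distinction and the equal-density deduction above can be made concrete with a short sketch (the gravitational field strengths are approximate standard values; the block figures are the ones from the example):

```python
# Weight varies with local gravity; mass does not.
g_earth = 9.81  # m/s^2, approximate surface gravity of Earth
g_mars = 3.71   # m/s^2, approximate surface gravity of Mars

mass = 1.0                 # kg
print(mass * g_earth)      # weight on Earth: ~9.81 N
print(mass * g_mars)       # weight on Mars:  ~3.71 N

# Equal density means volume scales linearly with mass,
# reproducing the wood-block deduction from the text.
mass_a, volume_a = 10.0, 1.0             # kg, cubic feet
mass_b = 20.0                            # kg
volume_b = volume_a * (mass_b / mass_a)  # -> 2.0 cubic feet
print(volume_b)
```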
Conceptualization and generalization
The concept of size is often applied to ideas that have no physical reality. In mathematics, magnitude is the size of a mathematical object, which is an abstract object with no concrete existence. Magnitude is a property by which the object can be compared as larger or smaller than other objects of the same kind. More formally, an object's magnitude is an ordering (or ranking) of the class of objects to which it belongs. There are various other mathematical concepts of size for sets, such as:
measure (mathematics), a systematic way to assign a number to each suitable subset
cardinality, a measure of the "number of elements" of a set; two sets have equal cardinality if there is a bijection between them (see the sketch after this list)
for well-ordered sets: ordinal number (equal if there is an order-isomorphism)
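For finite sets, the bijection criterion for equal cardinality reduces to comparing element counts, as in this trivial sketch:

```python
# Two finite sets have equal cardinality iff a bijection exists between them,
# which for finite sets reduces to having the same number of elements.
a = {"x", "y", "z"}
b = {1, 2, 3}
print(len(a) == len(b))                 # True: a bijection exists
print(list(zip(sorted(a), sorted(b))))  # one explicit bijection, as pairs
```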
In statistics (hypothesis testing), the "size" of the test refers to the rate of false positives, denoted by α. In astronomy, the magnitude of brightness or intensity of a star is measured on a logarithmic scale. Such a scale is also used to measure the intensity of an earthquake, and this intensity is often referred to as the "size" of the event. In computing, file size is a measure of the size of a computer file, typically measured in bytes. The actual amount of disk space consumed by the file depends on the file system. The maximum file size a file system supports depends on the number of bits reserved to store size information and the total size of the file system in terms of its capacity to store bits of information.
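To illustrate the last point: if a hypothetical file system reserved an unsigned 32-bit field for file sizes, no file could be recorded as larger than 2³² − 1 bytes (the field width here is an assumption for illustration, not a statement about any particular file system):

```python
# Maximum representable file size for an unsigned n-bit size field.
size_field_bits = 32
max_file_size = 2**size_field_bits - 1
print(max_file_size)          # 4294967295 bytes
print(max_file_size / 2**30)  # ~4.0 GiB
```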
In physics, the Planck length, denoted ℓP, is a unit of length equal to approximately 1.616×10⁻³⁵ m. It is a unit in the system of Planck units, developed by physicist Max Planck. The Planck length is defined in terms of three fundamental physical constants: the speed of light, the Planck constant, and the Newtonian constant of gravitation. In contrast, the largest observable thing is the observable universe. The comoving distance – the distance as would be measured at a specific time, including the present – between Earth and the edge of the observable universe is about 46.5 billion light-years, making the diameter of the observable universe about 93 billion light-years.
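Written out, the defining relation for the Planck length is the following (constant values rounded):

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}}
       \approx \sqrt{\frac{(1.055 \times 10^{-34}\,\mathrm{J\,s})(6.674 \times 10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}})}{(2.998 \times 10^{8}\,\mathrm{m\,s^{-1}})^{3}}}
       \approx 1.616 \times 10^{-35}\,\mathrm{m}
```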
In poetry, fiction, and other literature, size is occasionally assigned to characteristics that do not have measurable dimensions, such as the metaphorical reference to the size of a person's heart as a shorthand for describing their typical degree of kindness or generosity. With respect to physical size, the concept of resizing is occasionally presented in fairy tales, fantasy, and science fiction, placing humans in a different context within their natural environment by depicting them as having physically been made exceptionally large or exceptionally small through some fantastic means. A famous example is associated with the fictional character the Grinch, who was said in the story to have been born with a heart "two sizes too small", such that when he is later redeemed, his heart grows "three sizes that day", leading cardiologist David Kass to humorously suggest that the rapid growth of the Grinch's heart at the end of the story indicates that the Grinch has the physiology of a Burmese python.
| Mathematics | Three-dimensional space | null |
25211041 | https://en.wikipedia.org/wiki/Puberty | Puberty | Puberty is the process of physical changes through which a child's body matures into an adult body capable of sexual reproduction. It is initiated by hormonal signals from the brain to the gonads: the ovaries in a female, the testicles in a male. In response to the signals, the gonads produce hormones that stimulate libido and the growth, function, and transformation of the brain, bones, muscle, blood, skin, hair, breasts, and sex organs. Physical growth—height and weight—accelerates in the first half of puberty and is completed when an adult body has been developed. Before puberty, the external sex organs, known as primary sexual characteristics, are sex characteristics that distinguish males and females. Puberty leads to sexual dimorphism through the development of the secondary sex characteristics, which further distinguish the sexes.
On average, females begin puberty at age 10½ and complete puberty at ages 15–17; males begin at ages 11½–12 and complete puberty at ages 16–17. The major landmark of puberty for females is menarche, the onset of menstruation, which occurs on average around age 12½. For males, first ejaculation, spermarche, occurs on average at age 13. In the 21st century, the average age at which children, especially females, reach specific markers of puberty is lower compared to the 19th century, when it was 15 for females and 17 for males (with age at first period for females and voice-break for males being used as examples). This can be due to any number of factors, including improved nutrition resulting in rapid body growth, increased weight and fat deposition, or exposure to endocrine disruptors such as xenoestrogens, which can at times be due to food consumption or other environmental factors. However, more recent archeological research suggests that the timing of puberty as it occurs now matches the historical norm: growth spurts began at around ages 10–12, but markers of later stages of puberty such as menarche were delayed in ways that correlated with severe environmental conditions such as poverty, poor nutrition, and pollution. Puberty that starts earlier than usual is known as precocious puberty, and puberty that starts later than usual is known as delayed puberty.
Notable among the morphologic changes in size, shape, composition, and functioning of the pubertal body is the development of secondary sex characteristics, the "filling in" of the child's body; from girl to woman, from boy to man. Derived from the Latin pubertas (age of maturity), the word puberty describes the physical changes to sexual maturation, not the psychosocial and cultural maturation denoted by the term adolescent development in Western culture, wherein adolescence is the period of mental transition from childhood to adulthood, which overlaps much of the body's period of puberty.
Differences between male and female puberty
Two of the most significant differences between puberty in females and puberty in males are the age at which it begins, and the major sex steroids involved, the androgens and the estrogens.
Although there is a wide range of normal ages, females typically begin the process of puberty around age 10½; males at ages 11½–12. Puberty generally ends between 15–17 for females and 16–17 for males. Females attain reproductive maturity about four years after the first physical changes of puberty appear. In contrast, males accelerate more slowly but continue to grow for about six years after the first visible pubertal changes.
For males, the androgen testosterone is the principal sex hormone; while testosterone is produced, the changes it drives in males are characterized as virilization. A substantial product of testosterone metabolism in males is the estrogen estradiol. The conversion of testosterone to estradiol depends on the amount of body fat, and estradiol levels in males are typically much lower than in females. The male "growth spurt" also begins later, accelerates more slowly, and lasts longer before the epiphyses fuse. Although males are on average shorter than females before puberty begins, adult men are on average taller than women. Most of this sex difference in adult heights is attributable to a later onset of the growth spurt and a slower progression to completion, a direct result of the later rise and lower adult male levels of estradiol.
The hormonal maturation of females is considerably more complicated than in males. The main steroid hormones, testosterone, estradiol, and progesterone, as well as prolactin, play important physiological roles in puberty. The production of gonadal steroids in females starts with production of testosterone, which is typically quickly converted to estradiol inside the ovaries. However, the rate of conversion from testosterone to estradiol (driven by the FSH/LH balance) during early puberty is highly individual, resulting in very diverse development patterns of secondary sexual characteristics. Production of progesterone in the ovaries begins with the development of ovulatory cycles in females (during the luteal phase of the cycle); before puberty, low levels of progesterone are produced in the adrenal glands of both males and females. Estradiol levels rise earlier and reach higher levels in women than in men. While estradiol promotes growth of the breasts and uterus, it is also the principal hormone driving the pubertal growth spurt and epiphyseal maturation and closure.
Puberty onset
Puberty is preceded by adrenarche, marking an increase of adrenal androgen production between ages 6–10. Adrenarche is sometimes accompanied by the early appearance of axillary and pubic hair. The first androgenic hair resulting from adrenarche can also be transient and disappear before the onset of true puberty.
The onset of puberty is associated with high GnRH pulsing, which precedes the rise in sex hormones, LH and FSH. Exogenous GnRH pulses cause the onset of puberty. Brain tumors which increase GnRH output may also lead to premature puberty.
The cause of the GnRH rise is unknown. Leptin might be the cause of the GnRH rise. Leptin has receptors in the hypothalamus which synthesizes GnRH. Individuals who are deficient in leptin fail to initiate puberty. The levels of leptin increase with the onset of puberty, and then decline to adult levels when puberty is completed. The rise in GnRH might also be caused by genetics. A study discovered that a mutation in genes encoding both neurokinin B as well as the neurokinin B receptor can alter the timing of puberty. The researchers hypothesized that neurokinin B might play a role in regulating the secretion of kisspeptin, a compound responsible for triggering direct release of GnRH as well as indirect release of LH and FSH.
Effects of early and late puberty onset
Several studies about puberty have examined the effects of an early or a late onset of puberty in males and females. In general, females who enter puberty late experience positive outcomes in adolescence and adulthood, while females who enter puberty early experience negative outcomes. Males who have earlier pubertal timing generally have more positive outcomes in adulthood but more negative outcomes in adolescence, while the reverse is true for later pubertal timing.
Females
Outcomes have generally indicated that early onset of puberty in females can be psychologically damaging. The main reason for this detrimental effect is the issue of body image. As they physically develop, gaining weight in several areas of the body, early-maturing females usually look larger than females who have not yet entered puberty. As a result of the social pressure to be thin, early-maturing females may develop a negative view of their body image. In addition, people may tease females about their visible breasts, forcing the early-maturing female to hide her breasts by dressing differently. Embarrassment about a more developed body may also result in a refusal to undress for gym class. These experiences lead to lower self-esteem, more depression, and poorer body image in early-maturing females.
Furthermore, as physical and emotional differences set them apart from people in their same age group, early-maturing females may develop relationships with older people. For instance, some early-maturing females have older boyfriends, "attracted to the females' womanly physique and girlish innocence." While having an older boyfriend might improve popularity among peers, it also increases the risk of alcohol and drug use, increased sexual relations (often unprotected), eating disorders, and bullying.
Generally, later onset of puberty in females produces positive outcomes. They exhibit positive behaviors in adolescence that continue to adulthood.
Males
In the past, early onset of puberty in males has been associated with positive outcomes, such as leadership in high school and success in adulthood. However, recent studies have revealed that the risks and problems of early maturation in males might outweigh the benefits.
Early-maturing males develop "more aggressive, law-breaking, and alcohol abusing" behaviors, which result in anger towards parents and trouble in school and with the police. Early puberty also correlates with increased sexual activity and a higher incidence of teenage pregnancy, both of which can lead to depression and other psychosocial issues.
On the other hand, late-maturing males develop lower self-esteem and confidence and generally have lower popularity among peers, due to their less-developed physiques. Also, they experience problems with anxiety and depression and are more likely to be afraid of sex than other males.
Changes in males
In males, puberty begins with the enlargement of the testicles and scrotum. The penis also increases in size, and a male develops pubic hair. A male's testicles also begin making sperm. The release of semen, which contains sperm and other fluids, is called ejaculation. During puberty, a male's erect penis becomes capable of ejaculating semen and impregnating a female. A male's first ejaculation is an important milestone in his development. On average, a male's first ejaculation occurs at age 13. Ejaculation sometimes occurs during sleep; this phenomenon is known as a nocturnal emission.
Testicular size
In males, testicular enlargement is the first physical manifestation of puberty (and is termed gonadarche). Testes in prepubertal males change little in size from about 1 year of age to the onset of puberty, averaging about 2–3 cm in length and about 1.5–2 cm in width. The size of the testicles is among the parameters of the Tanner scale for male genitals, from stage I which represents a volume of less than 1.5 ml, to stage V which represents a testicular volume of greater than or equal to 20 ml. Testicular size reaches maximal adult size about 6 years after the onset of puberty. While 18–20 cm3 is an average adult size, there is wide variation in testicular size in the normal population. After the male's testicles have enlarged and developed for about one year, the length and then the breadth of the shaft of the penis will increase and the glans penis and corpora cavernosa will also start to enlarge to adult proportions.
Erections
Erections during sleep or when waking up are medically known as nocturnal penile tumescence and colloquially referred to as morning wood. The penis can regularly become erect during sleep, and males often wake up with an erection. Once a male reaches his teenage years, erections occur much more frequently due to puberty. Erections can occur spontaneously at any time of day, and if clothed may cause a bulge or "hump", which can be disguised or hidden by wearing close-fitting underwear, a long shirt, or baggier clothes. Erections are common for male prepubescent children and infants, and can even occur before birth. Spontaneous erections, also known as involuntary or unwanted erections, are normal, though they can be embarrassing if they happen in public, such as in a classroom or living room.
Foreskin retraction
During puberty, if not before, the tip and opening of a male's foreskin becomes wider, progressively allowing for retraction down the shaft of the penis and behind the glans, which ultimately should be possible without pain or difficulty. The membrane that bonds the inner surface of the foreskin with the glans disintegrates and releases the foreskin to separate from the glans. The foreskin then gradually becomes retractable.
Research by Øster (1968) found that, with the onset and continuation of puberty, the proportion of males able to pull back their foreskins increased. At ages 12–13, Øster found that only 60% of males were able to retract their foreskins; this increased to 85% by ages 14–15, and to 95% by ages 16–17. He also found that among those aged 14–17 who were unable to fully retract, 1% experienced phimosis, while the remainder were able to retract partially. The findings were supported by further research by Kayaba et al. (1996) on a sample of over 600 males, and Ishikawa and Kawakita (2004) found that by age 15, 77% of their sample of males could retract their foreskins. Beaugé (1997) reports that males may assist the development of a retractile foreskin by manual stretching.
Once a male is able to retract his foreskin, penile hygiene should become an important feature of his routine body care. Although the American Academy of Pediatrics states there is "little evidence to affirm the association between circumcision status and optimal penile hygiene", various studies suggest that males be educated about the role of hygiene, including retracting the foreskin while urinating and rinsing under it and around the glans at each bathing opportunity. Regular washing under the foreskin was found by Krueger and Osborn (1986) to reduce the risk of numerous penile disorders; however, Birley et al. (1993) report that excessive washing with soap should be avoided because it dries the oils out of the tissues and can cause non-specific dermatitis.
Body and facial hair
In the months and years following the appearance of pubic hair, other areas of skin that respond to androgens may develop androgenic hair. The usual sequence is: underarm (axillary) hair, perianal hair, upper lip hair, sideburn (preauricular) hair, periareolar hair, and the beard area. As with most human biological processes, this specific order may vary among some individuals. Arm, leg, chest, abdominal, and back hair become heavier more gradually. There is a large range in amount of body hair among adult men, and significant differences in timing and quantity of hair growth among different racial groups. Facial hair is often present in late adolescence, but may not appear until significantly later. Facial hair will continue to get coarser, darker and thicker for another 2–4 years after puberty. Some men do not develop full facial hair for up to 10 years after the completion of puberty.
Voice change and Adam's apple
Under the influence of androgens, the larynx (or voice box) grows in both sexes. This growth is far more prominent in males, causing the male voice to drop and deepen, sometimes abruptly but rarely "overnight", about one octave, because the longer and thicker vocal folds have a lower fundamental frequency. Before puberty, the larynx of males and females is about equally small.
Changes in females
Breast development
The first physical sign of puberty in females is usually a firm, tender lump under the center of the areola of one or both breasts, occurring on average at about 10½ years of age. This is referred to as thelarche. By the widely used Tanner staging of puberty, this is stage 2 of breast development (stage 1 is a flat, prepubertal breast). Within 6–12 months, the swelling has clearly begun in both sides, softened, and can be felt and seen extending beyond the edges of the areolae. This is stage 3 of breast development. By another 12 months (stage 4), the breasts are approaching mature size and shape, with areolae and nipples forming a secondary mound. In most young women, this mound disappears into the contour of the mature breast (stage 5), although there is so much variation in sizes and shapes of adult breasts that stages 4 and 5 are not always separately identifiable.
Pubic hair
Pubic hair is often the second noticeable change in puberty, usually within a few months of thelarche. It is referred to as pubarche. The pubic hairs are usually visible first along the labia. The first few hairs are described as Tanner stage 2. Stage 3 is usually reached within another 6–12 months, when the hairs are too numerous to count and appear on the pubic mound as well. By stage 4, the pubic hairs densely fill the "pubic triangle". Stage 5 refers to spread of pubic hair to the thighs and sometimes as abdominal hair upward towards the navel. In about 15% of females, the earliest pubic hair appears before breast development begins.
Vagina, uterus, ovaries
Perineal skin keratinizes due to the effect of estrogen, increasing its resistance to infection. The mucosal surface of the vagina also changes in response to increasing levels of estrogen, becoming thicker and duller pink in color (in contrast to the brighter red of the prepubertal vaginal mucosa). The mucosa changes into a multilayered structure with a superficial layer of squamous cells. Estrogen increases the glycogen content of the vaginal epithelium, which later plays an important part in maintaining vaginal pH. Whitish secretions (physiologic leukorrhea) are a normal effect of estrogen as well. In the two years following thelarche, the uterus, ovaries, and the follicles in the ovaries increase in size. The ovaries usually contain small follicular cysts visible by ultrasound.
Menstruation and fertility
The first menstrual bleeding is referred to as menarche, and typically occurs about two years after thelarche. The average age of menarche is 12½ in the United States. Most American females experience their first period at 11, 12 or 13, but some experience it earlier than their 11th birthday and others after their 14th birthday. In fact, anytime between 8 and 16 is normal. In Canada, the average age of menarche is 12.72, and in the United Kingdom it is 12.9. The time between menstrual periods (menses) is not always regular in the first two years after menarche. Ovulation is necessary for fertility, but may or may not accompany the earliest menses. In postmenarchal females, about 80% of the cycles were anovulatory in the first year after menarche, 50% in the third year and 10% in the sixth year. Initiation of ovulation after menarche is not inevitable. A high proportion of females with continued irregularity in the menstrual cycle several years from menarche will continue to have prolonged irregularity and anovulation, and are at higher risk for reduced fertility.
Body shape, fat distribution, and body composition
During this period, also in response to rising levels of estrogen, the lower half of the pelvis and thus hips widen (providing a larger birth canal). Fat tissue increases to a greater percentage of the body composition than in males, especially in the typical female distribution of breasts, hips, buttocks, thighs, upper arms, and pubis. Progressive differences in fat distribution as well as sex differences in local skeletal growth contribute to the typical female body shape by the end of puberty. On average, at 10 years, females have 6% more body fat than males.
Body odor and acne
Rising levels of androgens can change the fatty acid composition of perspiration, resulting in a more "adult" body odor. This often precedes thelarche and pubarche by one or more years. Another androgen effect is increased secretion of oil (sebum) from the skin. This change increases the susceptibility to acne, a skin condition that is characteristic of puberty. Acne varies greatly in its severity.
Visual and other effects of hormonal changes
Testosterone will cause an enlargement of the clitoris and possibly has important effects on the growth and maturation of the vestibular bulbs, corpora cavernosa of the clitoris and urethral sponge.
Changes of the vulva initiated by estradiol as well as its direct effects also appear to influence the functioning of the lower urinary tract.
Underarm hair
Hair growth develops under the arms, starting out sparse before thickening and darkening over time.
Variations
In a general sense, the conclusion of puberty is reproductive maturity. Criteria for defining the conclusion may differ for different purposes: attainment of the ability to reproduce, achievement of maximal adult height, maximal gonadal size, or adult sex hormone levels. Maximal adult height is achieved at an average age of 15 years for an average female and 18 years for an average male. Potential fertility (sometimes termed nubility) usually precedes completion of growth by 1–2 years in females and 3–4 years in males. Stage 5 typically represents maximal gonadal growth and adult hormone levels.
Age of onset
The definition of the onset of puberty may depend on perspective (e.g., hormonal versus physical) and purpose (establishing population normal standards, clinical care of early or late pubescent individuals, etc.). A common definition for the onset of puberty is physical changes to a person's body. These physical changes are the first visible signs of neural, hormonal, and gonadal function changes.
The age at which puberty begins varies between individuals; usually, puberty begins between 10 and 13 years of age. The age at which puberty begins is affected by both genetic factors and by environmental factors such as nutritional state and social circumstances. An example of social circumstances is the Vandenbergh effect; a juvenile female mouse who has significant interaction with adult male mice will enter puberty earlier than juvenile females who are not socially overexposed to adult males.
The average age at which puberty begins may be affected by ethnicity as well. For example, the average age of menarche in various populations surveyed has ranged from 12 to 18 years. The earliest average onset of puberty is for African-American females and the latest average onset for high altitude subsistence populations in Asia. However, much of the higher age averages reflect nutritional limitations more than genetic differences and can change within a few generations with a substantial change in diet. The median age of menarche for a population may be an index of the proportion of undernourished females in the population, and the width of the spread may reflect unevenness of wealth and food distribution in a population.
Researchers have identified an earlier age of the onset of puberty. However, they have based their conclusions on a comparison of data from 1999 with data from 1969. In the earlier study, the sample population was based on a small sample of 200 white females from Britain. The later study identified puberty as occurring in 48% of African-American females by age nine, and in 12% of white females by that age.
One possible cause of a delay in the onset of puberty past the age of 14 in females and 15 in males is Kallmann syndrome, a form of hypogonadotropic hypogonadism (HH). Kallmann syndrome is also associated with a lack of sense of smell (anosmia). Kallmann syndrome and other forms of HH affect both men and women. It is caused by a failure in the HPG axis at puberty, which results in low or zero gonadotropin (LH and FSH) levels, with the subsequent result of a failure to commence or complete puberty, secondary hypogonadism, and infertility.
Historical shift
The average age at which the onset of puberty occurs has dropped significantly since the 1840s.
A 2006 study in Denmark found that puberty, as evidenced by breast development, started at an average age of 9 years and 10 months, a year earlier than when a similar study was done in 1991. Scientists believe the phenomenon could be linked to obesity or exposure to chemicals in the food chain, and is putting females at greater long-term risk of breast cancer.
Genetic influence and environmental factors
Various studies have found direct genetic effects to account for at least 46% of the variation of timing of puberty in well-nourished populations. The genetic association of timing is strongest between mothers and daughters. The specific genes affecting timing are not yet known. Among the candidates is an androgen receptor gene.
Researchers have hypothesized that early puberty onset may be caused by certain hair care products containing estrogen or placenta, and by certain chemicals, namely phthalates, which are used in many cosmetics, toys, and plastic food containers.
Hormones and steroids
There is theoretical concern, and animal evidence, that environmental hormones and chemicals may affect aspects of prenatal or postnatal sexual development in humans.
Bisphenol A (BPA) is a chemical used to make plastics, and is frequently used to make baby bottles, water bottles, sports equipment, medical devices, and as a coating in food and beverage cans. Scientists are concerned about BPA's behavioral effects on fetuses, infants, and children at current exposure levels because it can affect the prostate gland, mammary gland, and lead to early puberty in females. BPA mimics and interferes with the action of estrogen—an important reproduction and development regulator. It leaches out of plastic into liquids and foods, and the Centers for Disease Control and Prevention (CDC) found measurable amounts of BPA in the bodies of more than 90 percent of the U.S. population studied. The highest estimated daily intakes of BPA occur in infants and children. Many plastic baby bottles contain BPA, and BPA is more likely to leach out of plastic when its temperature is increased, as when one warms a baby bottle or warms up food in the microwave.
Nutritional influence
Nutritional factors are the strongest and most obvious environmental factors affecting timing of puberty.
Obesity influence and exercise
Scientific researchers have linked early obesity with an earlier onset of puberty in females. They have cited obesity as a cause of breast development before nine years and menarche before twelve years. Early puberty in females can be a harbinger of later health problems.
Physical and mental illness
The onset of many mental illnesses occurs during puberty. The brain undergoes significant development driven by hormones, which can contribute to mood disorders such as major depressive disorder, bipolar disorder, dysthymia, and schizophrenia. Females aged between 15 and 19 make up 40% of anorexia nervosa cases.
Neurohormonal process
The endocrine reproductive system consists of the hypothalamus, the pituitary, the gonads, and the adrenal glands, with input and regulation from many other body systems. True puberty is often termed "central puberty" because it begins as a process of the central nervous system. A simple description of hormonal puberty is as follows:
The brain's hypothalamus begins to release pulses of GnRH.
Cells in the anterior pituitary respond by secreting LH and FSH into the circulation.
The ovaries or testes respond to the rising amounts of LH and FSH by growing and beginning to produce estradiol and testosterone.
Rising levels of estradiol and testosterone produce the body changes of female and male puberty.
The onset of this neurohormonal process may precede the first visible body changes by 1–2 years.
Components of the endocrine reproductive system
The arcuate nucleus of the hypothalamus is the driver of the reproductive system. It has neurons which generate and release pulses of GnRH into the portal venous system of the pituitary gland. The arcuate nucleus is affected and controlled by neuronal input from other areas of the brain and hormonal input from the gonads, adipose tissue and a variety of other systems.
The pituitary gland responds to the pulsed GnRH signals by releasing LH and FSH into the blood of the general circulation, also in a pulsatile pattern.
The gonads (testes and ovaries) respond to rising levels of LH and FSH by producing the steroid sex hormones, testosterone and estrogen.
The adrenal glands are a second source for steroid hormones. Adrenal maturation, termed adrenarche, typically precedes gonadarche in mid-childhood.
Major hormones
Neurokinin B (a tachykinin peptide) and kisspeptin (a neuropeptide), both present in KNDy neurons of the hypothalamus, are critical parts of the control system that switches on the release of GnRH at the start of puberty.
GnRH (gonadotropin-releasing hormone) is a peptide hormone released from the hypothalamus which stimulates gonadotrope cells of the anterior pituitary.
LH (luteinizing hormone) is a larger protein hormone secreted into the general circulation by gonadotrope cells of the anterior pituitary gland. The main target cells of LH are the Leydig cells of testes and the theca cells of the ovaries. LH secretion changes more dramatically with the initiation of puberty than FSH, as LH levels increase about 25-fold with the onset of puberty, compared with the 2.5-fold increase of FSH.
FSH (follicle stimulating hormone) is another protein hormone secreted into the general circulation by the gonadotrope cells of the anterior pituitary. The main target cells of FSH are the ovarian follicles and the Sertoli cells and spermatogenic tissue of the testes.
Testosterone is a steroid hormone produced primarily by the Leydig cells of the testes, and in lesser amounts by the theca cells of the ovaries and the adrenal cortex. Testosterone is the primary mammalian androgen and the "original" anabolic steroid. It acts on androgen receptors in responsive tissue throughout the body.
Estradiol is a steroid hormone produced by aromatization of testosterone. Estradiol is the principal human estrogen and acts on estrogen receptors throughout the body. The largest amounts of estradiol are produced by the granulosa cells of the ovaries, but lesser amounts are derived from testicular and adrenal testosterone.
Adrenal androgens are steroids produced by the zona reticularis of the adrenal cortex in both sexes. The major adrenal androgens are dehydroepiandrosterone, androstenedione (which are precursors of testosterone), and dehydroepiandrosterone sulfate, which is present in large amounts in the blood. Adrenal androgens contribute to the androgenic events of early puberty in females.
IGF1 (insulin-like growth factor 1) rises substantially during puberty in response to rising levels of growth hormone and may be the principal mediator of the pubertal growth spurt.
Leptin is a protein hormone produced by adipose tissue. Its primary target organ is the hypothalamus. The leptin level seems to provide the brain a rough indicator of adipose mass for purposes of regulation of appetite and energy metabolism. It also plays a permissive role in female puberty, which usually will not proceed until an adequate body mass has been achieved.
Endocrine perspective
The endocrine reproductive system becomes functional by the end of the first trimester of fetal life. The testes and ovaries become briefly inactive around the time of birth but resume hormonal activity until several months after birth, when incompletely understood mechanisms in the brain begin to suppress the activity of the arcuate nucleus. This has been referred to as maturation of the prepubertal "gonadostat", which becomes sensitive to negative feedback by sex steroids. The period of hormonal activity until several months after birth, followed by suppression of activity, may correspond to the period of infant sexuality, followed by a latency stage, which Sigmund Freud described.
Neurons of the arcuate nucleus secrete gonadotropin releasing hormone (GnRH) into the blood of the pituitary portal system. An American physiologist, Ernst Knobil, found that the GnRH signals from the hypothalamus induce pulsed secretion of LH (and to a lesser degree, FSH) at roughly 1–2 hour intervals. The LH pulses are the consequence of pulsatile GnRH secretion by the arcuate nucleus that, in turn, is the result of an oscillator or signal generator in the central nervous system ("GnRH pulse generator"). In the years preceding physical puberty, Robert M. Boyar discovered that the gonadotropin pulses occur only during sleep, but as puberty progresses they can be detected during the day. By the end of puberty, there is little day-night difference in the amplitude and frequency of gonadotropin pulses.
Some investigators have attributed the onset of puberty to a resonance of oscillators in the brain. By this mechanism, the gonadotropin pulses that occur primarily at night just before puberty represent beats.
An array of "autoamplification processes" increases the production of all of the pubertal hormones of the hypothalamus, pituitary, and gonads.
Stages
adrenarche (approximately ages 6–8)
gonadarche (approximately age 10½ in females and age 11½ in males)
thelarche (approximately age 10½ in females)
pubarche (approximately age 11 in females and age 12 in males)
menarche (approximately age 12½ in females)
spermarche (approximately age 13 in males)
| Biology and health sciences | Animal ontogeny | null |
30002195 | https://en.wikipedia.org/wiki/Soft-sediment%20deformation%20structures | Soft-sediment deformation structures | Soft-sediment deformation structures develop at deposition or shortly after, during the first stages of the sediment's consolidation. This is because the sediments need to be "liquid-like" or unsolidified for the deformation to occur. These formations have also been put into a category called water-escape structures by Lowe (1975). The most common places for soft-sediment deformations to materialize are in deep-water basins with turbidity currents, rivers, deltas, and shallow-marine areas with storm-impacted conditions. This is because these environments have high deposition rates, which allow the sediments to pack loosely.
Types of soft-sediment deformation structures
Convolute bedding forms when complex folding and crumpling of beds or laminations occur. This type of deformation is found in fine or silty sands, and is usually confined to one rock layer. Convolute laminations are found in flood plain, delta, point-bar, and intertidal-flat deposits. They generally range in size from 3 to 25 cm, but there have been larger formations recorded as several meters thick.
Flame structures consist of mud and are wavy or "flame" shaped. These flames usually extend into an overlying sandstone layer. This deformation is caused from sand being deposited onto mud, which is less dense. Load casts, technically a subset of sole markings, below, are the features which form alongside flame structures. Flames are thin fingers of mud injected upward into the overlying sands, while load casts are the pendulous knobs of sand that descend downwards into the mud between the flames.
Slump structures are mainly found in sandy shales and mudstones, but may also be in limestones, sandstones, and evaporites. They are a result of the displacement and movement of unconsolidated sediments, and are found in areas with steep slopes and fast sedimentation rates. These structures often are faulted.
Dish structures are thin, dish-shaped formations that normally occur in siltstones and sandstones. The size of each "dish" often ranges from 1 cm to 50 cm in size, and forms as a result of dewatering. Pillar structures often appear along with dish structures and also form by dewatering. They have a vertical orientation, which cuts across laminated or massive sands. These formations can range from a few millimeters in diameter to larger than a meter.
Sole markings are found on the underside of sedimentary rocks that overlie shale beds, usually sandstones. They are used for determining the flow direction of old currents because of their directional features. Sole markings form from the erosion of a bed, which creates a groove that is later filled in by sediment.
Seismites are sedimentary beds disturbed by seismic waves from earthquakes. They are commonly used to interpret the seismic history of an area. The term has also been applied to soft sediment deformation structures, including sand volcanos, sand blows, and certain clastic dikes.
| Physical sciences | Sedimentology | Earth science |
33876998 | https://en.wikipedia.org/wiki/Persian%20cat | Persian cat | The Persian cat, also known as the Persian Longhair, is a long-haired breed of cat characterised by a round face and short muzzle. The first documented ancestors of Persian cats might have been imported into Italy from Khorasan as early as around 1620, but this has not been proven. Instead, there is stronger evidence for a longhaired cat breed being exported from Afghanistan and Iran/Persia from the 19th century onwards. Persian cats have been widely recognised by the North-West European cat fancy since the 19th century, and after World War II by breeders from North America, Australia and New Zealand. Some cat fancier organisations' breed standards subsume the Himalayan and Exotic Shorthair as variants of this breed, while others generally treat them as separate breeds.
The selective breeding carried out by breeders has allowed the development of a wide variety of coat colours, but has also led to the creation of increasingly flat-faced Persian cats. Favoured by fanciers, this head structure can bring with it several health problems. As is the case with the Siamese breed, there have been efforts by some breeders to preserve the older type of cat, the Traditional Persian, which has a more pronounced muzzle. Hereditary polycystic kidney disease (PKD) is prevalent in the breed, affecting almost half of the population in some countries.
In 2021, Persian cats were ranked as the fourth-most popular cat breed in the world according to the Cat Fanciers' Association, an American international cat registry.
History
Origin
The exact time of the appearance of long-haired cats is unclear, as no long-haired specimens of the African wildcat, the ancestor of the domestic species, are known.
The first documented ancestors of the Persian cat might have been imported from Khorasan, either Eastern Iran or Western Afghanistan, into the Italian Peninsula in 1620 by Pietro Della Valle; and from Damascus, Syria, into France by Nicolas-Claude Fabri de Peiresc at around the same time. While the de Peiresc import from Syria is corroborated by later correspondence, Della Valle is only known to have voiced his intention in a letter of 1620; he returned to Italy much later, in 1626, after travelling through several other countries with the remains of his wife in tow, and made no further mention of the cats.
In his letter from 1620, Della Valle distinguishes the Khorasan cat from similar long-haired cats imported to Europe from the Near East by their grey coat:
Albeit of unclear geographic faithfulness, the name Persian cat was eventually given to cats imported from Afghanistan, Iran, and likely some adjacent regions for marketing purposes in Europe. Persian-speakers themselves are not documented to refer to any breed of cat as a "Persian cat"; instead, variations of several other names appear in Persian dictionaries of the 19th and 20th centuries.
In 1815, Lord Elphinstone described the cats in Kabul thus:
In 1839 Lieutenant Irwin notes that “a variety of cat is bred in Cabul, and some parts of Toorkistan. By us it is very improperly called "Persian", for very few are found in Persia, and none exported. The Cabulees call this cat bubuk [buruk?] or boorrak, and they encourage the growth of his long hair by washing it with soap and combing it.”
Since the British seemed to assume that the majority of Persian cats stemmed from Afghanistan, there is reason to infer that no small portion of the original Persian breed stock was imported from Afghanistan, among other places, to Britain and other European countries.
However, the Persian cat was not only exported to Europe by this time but also to India. In 1885 Edward Balfour describes the Afghan trade of long-haired cats to India: “The long silky-furred Angora cats are annually brought to India for sale from Afghanistan, with caravans of camels, even so far as Calcutta.”
Similarly in 1882, Jane Dieulafoy, travelling in Iran from Isfahan to Shiraz in a caravan heading for Bušehr, observes “an inhabitant of Yezd in Kirmania, who transported from Tauris [Tabriz] to Bombay about twenty beautiful angoras. For several years he constantly travelled between Persia and India and apparently profited from his strange commerce”.
Genetic origin
Recent genetic research indicates that present-day Persian cats are related not to cat breeds from the Near East, but to those from Western Europe, with researchers stating that "Even though the early Persian cat may have in fact originated from Persia, the modern Persian cat has lost its phylogeographical signature".
Phylogenetic trees of cat breeds and populations show this: the Persian falls genetically within the European cat population. The modern-day Persian breed is most closely related to the British Shorthair, Chartreux, and American Shorthair. The Exotic Shorthair is a breed developed in the late 1950s by outcrossing Persian cats with American Shorthairs.
Development
Persians and Angoras
A Persian cat was presented at the first organised cat show, held in 1871 at the Crystal Palace in London, England, and organised by Harrison Weir. As specimens closer to the later-established Persian conformation became the more popular types, attempts were made to differentiate the breed from the Angora. The first breed standard (then called the points of excellence list) was issued in 1889 by cat show promoter Weir. He stated that the Persian differed from the Angora in having a longer tail, hair more full and coarse at the end, and a larger head with less pointed ears. Not all cat fanciers agreed with the idea of making (or creating) a distinction between the two types, and in The Book of the Cat of 1903, Francis Simpson states that "the distinctions, apparently with hardly any difference, between Angoras and Persians are of so fine a nature that I must be pardoned if I ignore the class of cat commonly called Angora".
Dorothy Bevill Champion lays out the difference between the two types in the 1909 Everybody's Cat Book:
Champion goes on to detail the differences. Persian coats consist of a woolly undercoat and a long, hairy outer coat. The coat loses all the thick underwool in the summer, and only the long hair remains. Hair on the shoulders and upper part of the hind legs is somewhat shorter. Conversely, the Angora has a very different coat, consisting of long, soft hair hanging in locks, "inclining to a slight curl or wave on the under parts of the body." The Angora's hair is much longer on the shoulders and hind legs than the Persian's, which Champion considered a great improvement. However, Champion says the Angora "fails to the Persian in head", Angoras having a more wedge-shaped head and Persians a rounder one.
Champion notes that Angoras and Persians have been crossbred, resulting in a decided improvement to each breed, but claimed the long-haired cat of 1909 had significantly more Persian influence than Angora.
Champion lamented the lack of distinction among various long-haired types by English fanciers, who in 1887, decided to group them under the umbrella term "Long-haired Cats".
Traditional Persian
The traditional Persian, doll-face Persian, or moon-face Persian are somewhat recent names for a variety of the Persian breed, which is essentially the original phenotype of the Persian cat, without the development of extreme features.
As many breeders in the United States, Germany, Italy, and other parts of the world started to interpret the Persian standard differently, they developed the flat-nosed "peke-face" or "ultra-type" over time, as the result of two genetic mutations, without changing the name of the breed from "Persian". Some organisations, including the Cat Fanciers' Association (CFA), consider the peke-face type as their modern standard for the Persian breed. Thus the retronym Traditional Persian was created to refer to the original type, which is still bred, mirroring the renaming of the original-style Siamese cat as the Traditional Siamese or Thai, to distinguish it from the long-faced modern development which has taken over as simply "the Siamese".
Not all cat fancier groups recognise the Traditional Persian (at all, or as distinct), or give it that specific name. TICA has a very general standard that does not specify a flattened face.
Modern Persian (peke-face and ultra-typing)
In the late 1950s, a spontaneous mutation in red tabby Persians gave rise to the "peke-faced" Persian, named after the flat-faced Pekingese dog. It was registered as a distinct breed in the CFA, but fell out of favour by the mid-1990s due to serious health issues; only 98 were registered between 1958 and 1995. Despite this, breeders took a liking to the look and started breeding towards the peke-face look. The over-accentuation of the breed's characteristics by selective breeding (called extreme- or ultra-typing) produced results similar to the peke-faced Persians. The term peke-face has been used to refer to the ultra-typed Persian but it is properly used only to refer to red tabby Persians bearing the mutation. Many fanciers and CFA judges considered the shift in look "a contribution to the breed."
In 1958, breeder and author P. M. Soderberg wrote in Pedigree Cats, Their Varieties, Breeding and Exhibition:
While the looks of the Persians changed, the Persian Breed Council's standard for the Persians remained the same. The Persian breed standard is, by its nature, somewhat open-ended and focused on a rounded head, large, wide-spaced round eyes with the top of the nose in alignment with the bottom of the eyes. The standard calls for a short, cobby body with short, well-boned legs, a broad chest, and a round appearance, everything about the ideal Persian cat being "round". It was not until the late 1980s that standards were changed to limit the development of the extreme appearance. In 2004, the statement that muzzles should not be overly pronounced was added to the breed standard. The standards were altered yet again in 2007, this time to reflect the flat face, and it now states that the forehead, nose, and chin should be in vertical alignment.
In the UK, the standard was changed by the Governing Council of the Cat Fancy (GCCF) in the 1990s to disqualify Persians with the "upper edge of the nose leather above the lower edge of the eye" from Certificates or First Prizes in Kitten Open Classes.
While ultra-typed cats do better in the show ring, the public seems to prefer the less extreme, older "doll-face" types.
Variants
Himalayan
In 1950, the Siamese was crossed with the Persian to create a breed with the body type of the Persian but the colorpoint pattern of the Siamese. It was named the Himalayan, after other colorpoint animals such as the Himalayan rabbit. In the UK, the breed was recognized as the Colorpoint Longhair. The Himalayan stood as a separate breed in the US until 1984, when the CFA merged it with the Persian over the objections of the breed councils of both breeds. Some Persian breeders were unhappy with the introduction of this crossbreed into their "pure" Persian lines.
The CFA set up the registration for Himalayans in a way that breeders would be able to discern a Persian with Himalayan ancestry just by looking at the pedigree registration number. This was to make it easy for breeders who do not want Himalayan blood in their breeding lines to avoid individuals who, while not necessarily exhibiting the colourpoint pattern, may be carrying the point colouration gene recessively. Persians with Himalayan ancestry have registration numbers starting with 3 and are commonly referred to by breeders as colourpoint carriers (CPC) or 3000-series cats, although not all will carry the recessive gene. The Siamese is also the source of the chocolate and lilac colour in solid Persians.
Exotic Shorthair
The Persian was used as an outcross secretly by some American Shorthair (ASH) breeders in the late 1950s to "improve" their breed. The crossbreed look gained recognition in the show ring, but other breeders unhappy with the changes successfully pushed for new breed standards that would disqualify ASH that showed signs of crossbreeding.
One ASH breeder who saw the potential of the Persian/ASH cross proposed, and eventually managed, to get the CFA to recognize them as a new breed in 1966, under the name Exotic Shorthair. Regular outcrossing to the Persian has made the present-day Exotic Shorthair similar to the Persian in every way, including temperament and conformation, except for the short dense coat. It has even inherited many of the Persian's health problems. The easier-to-manage coat has led some to label the Exotic Shorthair "the lazy man's Persian".
Because of the regular use of Persians as outcrosses, some Exotics may carry a copy of the recessive longhair gene. When two such cats mate, there is a one in four chance of each offspring being longhaired. Longhaired Exotics are not considered Persians by CFA, although The International Cat Association accepts them as Persians. Other associations register them as a separate Exotic Longhair breed.
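That one-in-four figure is ordinary Mendelian arithmetic for a recessive allele. The following minimal Python sketch (the allele labels 'L' for dominant shorthair and 'l' for recessive longhair are hypothetical, chosen only for illustration) enumerates the four equally likely allele combinations when two carrier cats mate:

```python
from itertools import product

# Hypothetical allele labels: 'L' = dominant shorthair, 'l' = recessive longhair.
# An Exotic carrying one copy of the longhair gene has genotype ('L', 'l').
carrier = ('L', 'l')

# Each kitten inherits one allele from each parent; all four pairings are
# equally likely.
offspring = list(product(carrier, carrier))

# The longhaired coat appears only when both inherited alleles are recessive.
longhaired = [pair for pair in offspring if pair == ('l', 'l')]
print(f"longhaired: {len(longhaired)} of {len(offspring)}")  # -> 1 of 4
```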
Chinchilla Longhair
The chinchilla Persian line originated in England in 1882 by accident, when a silver tabby and a smoke-coloured Persian produced the offspring Silver Lambkin, a cat regarded as the father of the line. Silver Lambkin was widely bred, and even members of the British royal family owned his descendants.
In the US, there was an attempt to establish the silver Persian as a separate breed called the Sterling, but it was not accepted. Silver and golden Persians are recognized, as such, by the CFA. In South Africa, the attempt to separate the breed was more successful; the Southern Africa Cat Council (SACC) registers cats with five generations of purebred Chinchilla ancestry as Chinchilla Longhairs. The Chinchilla Longhair has a slightly longer nose than the Persian, resulting in healthy breathing and less eye tearing. Its hair is translucent, with only the tips carrying black pigment, a feature that is lost when out-crossed to other coloured Persians. Out-crossing may also result in losing nose and lip liner, which is a fault in the Chinchilla Longhair breed standard. One of the distinctions of this breed is its blue-green or green eye colour; kittens have blue or blue-purple eyes.
Registration
Classification by registries
The breed standards of various cat fancier organisations may treat the Himalayan and Exotic Shorthair (or simply Exotic) as variants of the Persian or as separate breeds. The Cat Fanciers' Association (CFA) treats the Himalayan as a colour-pattern class of both the Persian and the Exotic, which have separate but nearly identical standards (differing in coat length). The Fédération Internationale Féline (FIFe) entirely subsumes what other registries call the Himalayan as simply among the allowed colouration patterns for the Persian and the Exotic, treated as separate breeds. The International Cat Association (TICA) treats them both as variants of the Persian. The World Cat Federation (WCF) treats the Persian and Exotic Shorthair as separate breeds and subsumes the Himalayan colouration as colourpoint varieties under each.
Among regional and national organizations, Feline Federation Europe treats all three as separate breeds. The American Cat Fanciers Association (ACFA) has the three as separate breeds (also with a Non-pointed Himalayan that is similar to the Persian). The Australian Cat Federation (AFC) follows the FIFe practice. The Canadian Cat Association (CCA-AFC) treats the three separately and even has an Exotic Longhair sub-breed of the Exotic and a Non-pointed Himalayan sub-breed of the Himalayan, which differ from the Persian only in having some mixed ancestry. The (UK) Governing Council of the Cat Fancy (GCCF) does likewise.
Popularity
In 2008, the Persian was the most popular breed of pedigree cats in the United States. In the UK (GCCF), registration numbers have decreased since the early 1990s and the Persian lost its top spot to the British Shorthair in 2001. As of 2012, it was the 6th most popular breed, behind the British Shorthair, Ragdoll, Siamese, Maine Coon and Burmese. In France, the Persian is the only breed whose registration declined between 2003 and 2007, dropping by more than a quarter.
The most popular colour varieties, according to CFA registration data, are seal point, blue point, flame point and tortie point Himalayans, followed by black-and-white, shaded silver and calico.
Characteristics
Appearance
A show-style Persian cat has an extremely long and thick coat, short legs, a wide head with ears set far apart, large eyes, and an extremely shortened muzzle. The breed was originally established with a short muzzle, but over time, this characteristic has become extremely exaggerated, particularly in North America. Persian cats can have virtually any colour or markings.
Colouration
The permissible colours in the breed, in most organisations' breed standards, encompass the entire range of cat coat-pattern variations.
The Cat Fanciers' Association (CFA) groups the breed into seven coat-pattern divisions: solid; silver and golden (including chinchilla and shaded variants, and blued subvariants); shaded and smoke (with several variations of each, and a third sub-categorisation called shell); tabby (only classic, mackerel, and patched [spotted], in various colours); parti-colour (in four classes, tortoiseshell, blue-cream, chocolate tortie, and lilac-cream, mixed with other colours); calico and bi-colour (in around 40 variations, broadly classified as calico, dilute calico, and bi-colour); and Himalayan (white-to-fawn body with point colouration on the head, tail and limbs, in various tints).
CFA base colours are white, black, blue, red, cream, chocolate, and lilac. There are around 140 named CFA coat patterns for which the Persian qualifies, and 20 for the Himalayan sub-breed. These coat patterns encompass virtually all of those recognised by CFA for cats generally. Any Persian permissible in TICA's more detailed system would probably be accepted in CFA's, simply with a more general name, though the organisations do not mix breed registries.
The International Cat Association (TICA) groups the breed into three coat-pattern divisions for judging at cat shows: traditional (with stable, rich colours), sepia ("paler and warmer than the traditional equivalents", and darkening a bit with age), and mink (much lighter than sepia, and developing noticeably with age on the face and extremities). If classified as the Himalayan sub-breed, full point colouration is required, the fourth TICA colour division, with a "pale and creamy coloured" body even lighter than mink, with intense colouration on the face and extremities. The four TICA categories are essentially a graduated scale of colour distribution, from evenly coloured to mostly coloured only at the points. Within each, the colouration may be further classified as solid, tortoiseshell (or "tortie"), tabby, silver or smoke, solid-and-white, tortoiseshell-and-white, tabby-and-white, or silver/smoke-and-white, with various specific colours and modifiers (e.g. chocolate tortoiseshell point, or fawn shaded mink marbled tabby-torbie).
TICA-recognised tabby patterns include classic, mackerel, marbled, spotted, and ticked (in two genetic forms), while other patterns include shaded, chinchilla, and two tabby-tortie variations, golden, and grizzled. Basic colours include white, black, brown, ruddy, bronze, blue ("grey"), chocolate, cinnamon, lilac, fawn, red, and cream, with a silver or shaded variant of most. Not counting bi-colour (piebald) or party-colour coats, nor genetically impossible combinations, there are nearly 1,000 named coat pattern variations in the TICA system for which the Persian/Himalayan qualifies. The Exotic Shorthair sub-breed qualifies for every cat coat variation that TICA recognises.
Eye colours range widely and may include blue, copper, odd-eyed blue and copper, green, blue-green, and hazel. Various TICA and CFA coat categorisations come with specific eye-colour requirements.
Behaviour
The Persian is generally described as a quiet cat. Typically placid, it adapts quite well to apartment life. Himalayans tend to be more active due to the influence of Siamese traits. In a study comparing cat owners' perceptions of their cats, Persians rated higher than non-pedigree cats on closeness and affection to owners, friendliness towards strangers, cleanliness, predictability, vocalization, and fussiness over food.
Health
Ultra-type consequences
The modern-type brachycephalic Persian has a large rounded skull and shortened face and nose. This facial conformation makes the breed prone to breathing difficulties, skin and eye problems, and birthing difficulties. Anatomical abnormalities associated with brachycephalic breeds can cause shortness of breath. Persians are susceptible to malocclusion (incorrect bite), which can affect their ability to grasp, hold and chew food. Even without the condition, the flat face of the Persian can make picking up food difficult, so much so that specially shaped kibble has been created by pet food companies to cater to the Persian.
Malformed tear ducts cause epiphora, an overflow of tears onto the face, which is common but primarily cosmetic. Entropion, the inward folding of the eyelids, causes the eyelashes to rub against the cornea and can lead to tearing, pain, infection and cornea damage. This condition is not uncommon in Persians and usually involves the medial aspect of the lower eyelid. Similarly, in upper-eyelid trichiasis or nasal-fold trichiasis, eyelashes/hair from the eyelid and hair from the nose fold near the eye grow in a way that rubs against the cornea.
The anatomical changes in the upper respiratory tract caused by brachycephaly, such as stenotic nares, an elongated soft palate, and nasopharyngeal turbinates, contribute to obstruction of the airways and breathing difficulties. Due to the reduction of the maxillary alveolar space, the Persian's teeth are positioned at abnormal angles and overlap, causing dental and gingival problems. Brachycephaly gives the Persian shallow orbits and protruding eyes, which can lead to keratitis, sequestrum development in the cornea, and non-healing corneal ulcers. The reduction of the length of the maxilla can cause excessive skin folds on the face, which may lead to the development of idiopathic facial dermatitis. The brachycephalic skull of the Persian has led to changes in the morphology of the cranial cavity, causing intracranial overcrowding, herniation of the brain, and hydrocephaly.
Dystocia, an abnormal or difficult labour, is relatively common in Persians. Consequently, the stillbirth rate is higher than normal, ranging from 16.1% to 22.1%, and one 1973 study puts the kitten mortality rate (including stillborns) at 29.2%. A veterinary study in 2010 documented the serious health problems caused by the brachycephalic head.
Life span
Pet insurance data from Sweden puts the median lifespan of cats from the Persian group (Persians, Chinchilla, Himalayan and Exotic) at just above 12.5 years, while most cats live until they are about 15 years old. 76% of this group lived for 10 years or more and 52% lived for 12.5 years or more. A 2015 study of veterinary clinic data from England shows an average lifespan of 14.1 years. A 2024 UK study found a life expectancy of 10.93 years for the Persian, compared with an overall figure of 11.74 years.
Internal medical conditions
Polycystic kidney disease (PKD), which causes kidney failure in affected adult cats, has an incidence rate of 36–49% in the Persian breed. A study in Japan of cats suspected to have kidney problems found that 46% of tested Persian cats had the PKD1 mutation, which is responsible for feline polycystic kidney disease. Previous ultrasonographic studies (involving procedures likely to be performed on cats suspected of kidney problems) found a PKD rate in Persian and related breeds of 49.2% in the UK, 43% in Australia, and 41.8% in France. The cause of PKD in the Persian is an autosomal dominant mutation of the PKD1 gene.
Cysts develop and grow in the kidney over time, replacing kidney tissue and enlarging the kidney. Kidney failure develops later in life, at an average age of 7 years (ranging from 3 to 10 years). Symptoms include excessive drinking and urination, reduced appetite, weight loss, and depression. The disease is autosomal dominant, and DNA screening is the preferred method of eliminating the gene from the breed. Because of DNA testing, most responsible Persian breeders now have cats that no longer carry the PKD gene, so their offspring also do not have the gene. Before DNA screening was available, ultrasound was used; however, an ultrasound is only as good as the day it is done, and many cats thought to be clear were in fact carriers of the PKD gene. Only DNA screening, and breeding cats that are negative for the PKD gene, will produce kittens that are also negative, effectively removing this gene from the breeding pool.
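Because the mutation is dominant, the breeding logic described above can be illustrated with a short simulation. The sketch below assumes a simple single-gene autosomal dominant model, as described in the text; the allele labels are hypothetical:

```python
import random

# Hypothetical allele labels: 'P' = dominant PKD1 mutation, 'p' = normal allele.
def affected(genotype):
    # Autosomal dominant: a single mutant copy causes disease.
    return 'P' in genotype

def kitten(parent_a, parent_b):
    # Each parent contributes one randomly chosen allele.
    return (random.choice(parent_a), random.choice(parent_b))

negative = ('p', 'p')   # DNA-screened clear cat
positive = ('P', 'p')   # affected heterozygote

# Two DNA-negative parents can never produce an affected kitten.
assert not any(affected(kitten(negative, negative)) for _ in range(10_000))

# One affected heterozygous parent puts each kitten at ~50% risk.
trials = 10_000
risk = sum(affected(kitten(positive, negative)) for _ in range(trials)) / trials
print(f"risk with one affected parent: ~{risk:.2f}")  # ~0.50
```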
Hypertrophic cardiomyopathy (HCM) is a common heart disease in all cats. It is likely hereditary in the Persians. The disease causes thickening of the left heart chamber, which can, in some instances, lead to sudden death. It tends to affect males and mid- to old-aged individuals. The reported incidence rate in Persians is 6.5%. Unlike PKD, which can be detected even in very young cats, heart tests for HCM have to be done regularly to effectively track and/or remove affected individuals and their offspring from the breeding pool.
Early onset progressive retinal atrophy is a degenerative eye disease, with an autosomal recessive mode of inheritance in the Persian. Despite a belief among some breeders that the disease is limited to chocolate and Himalayan lines, there is no apparent link between coat colour in Persians and the development of PRA. Basal-cell carcinoma is a skin cancer which shows most commonly as a growth on the head, back or upper chest. While often benign, rare cases of malignancy tend to occur in Persians. Blue smoke Persians are predisposed to Chédiak–Higashi syndrome. White cats, including white Persians, are prone to deafness, especially those with blue eyes.
Skeletal conditions
A study of cats presented to the University of Missouri-Columbia Veterinary Medical Teaching Hospital that underwent radiography found hip dysplasia in 3 of a population of 19 Persians (about 16%), higher than the 6.6% average for all cats.
Other
Other conditions which the Persian is predisposed to are listed below:
Dermatological – primary seborrhoea, idiopathic periocular crusting, dermatophytosis (ringworm), facial fold pyoderma, idiopathic facial dermatitis, multiple epitrichial cysts (eyelids)
Ocular – coloboma, lacrimal punctal aplasia, corneal sequestrum, congenital cataract, excessive tearing, and eye conditions such as cherry eye
Urinary – calcium oxalate urolithiasis (feline lower urinary tract disease)
Reproductive – cryptorchidism
Gastrointestinal – congenital portosystemic shunt, congenital polycystic liver disease (associated with PKD)
Cardiovascular – peritoneopericardial diaphragmatic hernia
Immunological – systemic lupus erythematosus
Neurological – alpha-mannosidosis
Neoplastic – basal-cell carcinoma, sebaceous gland tumours
Drug sensitivity – Persians are more prone to side effects of the ringworm drug griseofulvin.
Heat sensitivity
Idiopathic facial dermatitis
Idiopathic facial dermatitis, also known as facial dermatitis of the Persian and Himalayan cat, is a type of dermatitis only observed in the Persian and Himalayan cat. It is characterised by greasy skin, debris adhering to the folds of the face and nose, ceruminous otitis externa, secondary bacterial folliculitis and Malassezia dermatitis, and pruritus. Onset occurs between 10 months and 6 years of age.
Breeding ethics
Persian cats, bred for their distinctive flat facial structure, raise concerns about the ethics of breeding for such deformities.
Brachycephaly is a highly sought-after characteristic, producing big owl-like eyes and an overall petite-looking face. Though these features may be "cuter", they result in many health issues, including an ill-functioning nasolacrimal system, where tears build up and flow down the face; a soft, long palate that obstructs the upper airway, making breathing more difficult; and dental and jaw defects (brachygnathia), where the teeth grow outward in unnatural positions, making it difficult to eat and increasing the chance of plaque formation and gingivitis.
Such health issues affect the quality of life of many Persian cats, especially those that fall into the severe category, and raise questions about the ethics and legality of these deformity-breeding programmes. As a consequence of the BBC programme Pedigree Dogs Exposed, cat breeders have also come under pressure from veterinary and animal welfare associations, with the Persian singled out as one of the breeds most affected by health problems. Animal welfare proponents have suggested changes to breed standards to prevent diseases caused by over- or ultra-typing, and prohibiting the breeding of animals outside the set limits. Apart from the GCCF standard that limits high noses, TICA and FIFe standards require nostrils to be open, with FIFe stating that nostrils should allow "free and easy passage of air." Germany's Animal Welfare Act also prohibits the breeding of brachycephalic cats in which the tip of the nose is higher than the lower eyelids.
Grooming
Since Persian cats have long, dense fur that they cannot effectively keep clean, they need regular grooming to prevent matting. To keep their fur in its best condition, they must be brushed frequently. An alternative is to shave the coat. Their eyes may require regular cleaning to prevent crust buildup and tear staining.
Persian cats in art
The art world and its patrons have long embraced their love for the Persian cat by immortalizing the breed in art. An artwork purported to be the "world's largest cat painting" sold at auction for more than US$820,000. The late 19th-century oil portrait is called My Wife's Lovers, and it once belonged to a wealthy philanthropist who commissioned an artist to paint her vast assortment of Turkish Angoras and Persians. Other popular Persian paintings include White Persian Cat by the folk artist Warren Kimble and Two White Persian Cats Looking into a Goldfish Bowl by the late feline portraitist Arthur Heyer. The Persian cat has also made its way onto the artwork of stamps around the world.
| Biology and health sciences | Cats | Animals |
48400112 | https://en.wikipedia.org/wiki/Accretion%20disk | Accretion disk | An accretion disk is a structure (often a circumstellar disk) formed by diffuse material in orbital motion around a massive central body. The central body is most frequently a star. Friction, uneven irradiance, magnetohydrodynamic effects, and other forces induce instabilities causing orbiting material in the disk to spiral inward toward the central body. Gravitational and frictional forces compress and raise the temperature of the material, causing the emission of electromagnetic radiation. The frequency range of that radiation depends on the central object's mass. Accretion disks of young stars and protostars radiate in the infrared; those around neutron stars and black holes in the X-ray part of the spectrum. The study of oscillation modes in accretion disks is referred to as diskoseismology.
Manifestations
Accretion disks are a ubiquitous phenomenon in astrophysics; active galactic nuclei, protoplanetary disks, and gamma ray bursts all involve accretion disks. These disks very often give rise to astrophysical jets coming from the vicinity of the central object. Jets are an efficient way for the star-disk system to shed angular momentum without losing too much mass.
The most prominent accretion disks are those of active galactic nuclei and of quasars, which are thought to be powered by massive black holes at the centres of galaxies. As matter enters the accretion disc, it follows a trajectory called a tendex line, which describes an inward spiral. This is because particles rub and bounce against each other in a turbulent flow, causing frictional heating which radiates energy away, reducing the particles' angular momentum and allowing them to drift inward, driving the inward spiral. The loss of angular momentum manifests as a reduction in velocity; at a slower velocity, the particle must adopt a lower orbit. As the particle falls to this lower orbit, a portion of its gravitational potential energy is converted to increased velocity and the particle gains speed. Thus, the particle has lost energy even though it is now travelling faster than before. As a particle orbits closer and closer, its velocity increases; as velocity increases, frictional heating increases as more and more of the particle's potential energy (relative to the black hole) is radiated away; the accretion disk of a black hole is hot enough to emit X-rays just outside the event horizon. The large luminosity of quasars is believed to be a result of gas being accreted by supermassive black holes. Elliptical accretion disks formed at tidal disruption of stars can be typical in galactic nuclei and quasars. The accretion process can convert about 10 percent to over 40 percent of the mass of an object into energy, compared to around 0.7 percent for nuclear fusion processes.
In close binary systems the more massive primary component evolves faster and has already become a white dwarf, a neutron star, or a black hole when the less massive companion reaches the giant state and exceeds its Roche lobe. A gas flow then develops from the companion star to the primary. Angular momentum conservation prevents a straight flow from one star to the other, and an accretion disk forms instead.
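The efficiency figures above can be made concrete with the standard relation $L = \eta \dot{M} c^{2}$, where $\eta$ is the fraction of accreted rest mass radiated away. The Python sketch below compares an assumed 10% accretion efficiency with the roughly 0.7% efficiency of hydrogen fusion; the accretion rate used is an arbitrary illustrative value, not a figure from the text:

```python
C = 2.998e8  # speed of light, m/s

def luminosity(eta, mdot):
    """Radiated power L = eta * Mdot * c^2 for a given efficiency eta."""
    return eta * mdot * C**2  # watts

mdot = 1e15  # illustrative accretion rate in kg/s (assumed value)
print(f"accretion, eta = 0.10 : {luminosity(0.10, mdot):.2e} W")
print(f"fusion,    eta = 0.007: {luminosity(0.007, mdot):.2e} W")
# At these efficiencies, accretion releases roughly 14 times more
# energy per kilogram of matter than hydrogen fusion.
```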
Accretion disks surrounding T Tauri stars or Herbig stars are called protoplanetary disks because they are thought to be the progenitors of planetary systems. The accreted gas in this case comes from the molecular cloud out of which the star has formed rather than a companion star.
Accretion disk physics
In the 1940s, models were first derived from basic physical principles. In order to agree with observations, those models had to invoke a yet-unknown mechanism for angular momentum redistribution. If matter is to fall inward, it must lose not only gravitational energy but also angular momentum. Since the total angular momentum of the disk is conserved, the angular momentum lost by the mass falling into the center has to be compensated by an angular momentum gain of the mass far from the center. In other words, angular momentum should be transported outward for matter to accrete. According to the Rayleigh stability criterion,
$\frac{\partial (R^{2}\Omega)}{\partial R} > 0,$
where $\Omega$ represents the angular velocity of a fluid element and $R$ its distance to the rotation center, an accretion disk is expected to be a laminar flow. This prevents the existence of a hydrodynamic mechanism for angular momentum transport.
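For a Keplerian disk, $\Omega = (GM/R^{3})^{1/2}$, so the specific angular momentum $R^{2}\Omega = (GMR)^{1/2}$ grows with radius and the criterion is satisfied everywhere. A short numerical check, in scale-free units chosen purely for illustration:

```python
import numpy as np

# Work in units where G*M = 1; the sign of the result is scale-free.
R = np.linspace(0.1, 100.0, 1000)
omega = R**-1.5               # Keplerian angular velocity, Omega ~ R^(-3/2)
ell = R**2 * omega            # specific angular momentum, R^2 * Omega = sqrt(R)

# Rayleigh criterion: d(R^2 * Omega)/dR > 0 everywhere implies stability.
d_ell = np.gradient(ell, R)
print("Rayleigh-stable at all radii:", bool(np.all(d_ell > 0)))  # True
```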
On one hand, it was clear that viscous stresses would eventually cause the matter toward the center to heat up and radiate away some of its gravitational energy. On the other hand, viscosity itself was not enough to explain the transport of angular momentum to the exterior parts of the disk. Turbulence-enhanced viscosity was the mechanism thought to be responsible for such angular-momentum redistribution, although the origin of the turbulence itself was not well understood. The conventional $\alpha$-model (discussed below) introduces an adjustable parameter $\alpha$ describing the effective increase of viscosity due to turbulent eddies within the disk. In 1991, with the rediscovery of the magnetorotational instability (MRI), S. A. Balbus and J. F. Hawley established that a weakly magnetized disk accreting around a heavy, compact central object would be highly unstable, providing a direct mechanism for angular-momentum redistribution.
α-Disk model
Shakura and Sunyaev (1973) proposed turbulence in the gas as the source of an increased viscosity. Assuming subsonic turbulence and the disk height as an upper limit for the size of the eddies, the disk viscosity can be estimated as $\nu = \alpha c_{s} H$, where $c_{s}$ is the sound speed, $H$ is the scale height of the disk, and $\alpha$ is a free parameter between zero (no accretion) and approximately one. In a turbulent medium $\nu \approx v_{\mathrm{turb}} l_{\mathrm{turb}}$, where $v_{\mathrm{turb}}$ is the velocity of turbulent cells relative to the mean gas motion, and $l_{\mathrm{turb}}$ is the size of the largest turbulent cells, which is estimated as $l_{\mathrm{turb}} \approx H = c_{s}/\Omega$ and $v_{\mathrm{turb}} \approx \alpha c_{s}$, where $\Omega = (GM)^{1/2} r^{-3/2}$ is the Keplerian orbital angular velocity and $r$ is the radial distance from the central object of mass $M$. By using the equation of hydrostatic equilibrium, combined with conservation of angular momentum and assuming that the disk is thin, the equations of disk structure may be solved in terms of the $\alpha$ parameter. Many of the observables depend only weakly on $\alpha$, so this theory is predictive even though it has a free parameter.
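As a rough numerical illustration of the prescription $\nu = \alpha c_{s} H$, the sketch below evaluates the viscosity and the standard viscous inflow timescale $t_{\mathrm{visc}} \sim r^{2}/\nu$; every input value is an assumption chosen only to set a plausible scale:

```python
import math

G = 6.674e-11      # gravitational constant, SI units
M_SUN = 1.989e30   # solar mass, kg

# Illustrative disk parameters (assumed values):
alpha = 0.01       # dimensionless viscosity parameter
c_s = 1.0e4        # sound speed, m/s
M = 1.0 * M_SUN    # central object mass
r = 1.0e10         # radius, m

omega = math.sqrt(G * M / r**3)  # Keplerian angular velocity
H = c_s / omega                  # disk scale height
nu = alpha * c_s * H             # alpha-prescription viscosity

t_visc = r**2 / nu               # viscous inflow timescale estimate
print(f"H/r = {H / r:.3f}")                     # thin-disk check
print(f"t_visc ~ {t_visc / 3.15e7:.0f} years")  # seconds -> years
```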
Using Kramers' opacity law it is found that
$H = 1.7 \times 10^{8} \, \alpha^{-1/10} \dot{M}_{16}^{3/20} m_{1}^{-3/8} R_{10}^{9/8} f^{3/5} \ \mathrm{cm}$
$T_{c} = 1.4 \times 10^{4} \, \alpha^{-1/5} \dot{M}_{16}^{3/10} m_{1}^{1/4} R_{10}^{-3/4} f^{6/5} \ \mathrm{K}$
$\rho = 3.1 \times 10^{-8} \, \alpha^{-7/10} \dot{M}_{16}^{11/20} m_{1}^{5/8} R_{10}^{-15/8} f^{11/5} \ \mathrm{g\ cm^{-3}}$
where $T_{c}$ and $\rho$ are the mid-plane temperature and density respectively, $\dot{M}_{16}$ is the accretion rate in units of $10^{16}\ \mathrm{g\ s^{-1}}$, $m_{1}$ is the mass of the central accreting object in units of a solar mass, $R_{10}$ is the radius of a point in the disk in units of $10^{10}\ \mathrm{cm}$, and $f = \left[ 1 - \left( R_{\star}/R \right)^{1/2} \right]^{1/4}$, where $R_{\star}$ is the radius where angular momentum stops being transported inward.
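Taking the mid-plane solution above at face value, the thin-disk assumption can be checked numerically. The parameter choices below ($\alpha = 0.1$, $\dot{M}_{16} = m_{1} = R_{10} = 1$, $f \approx 1$) are illustrative only:

```python
# Evaluate the Shakura-Sunyaev mid-plane scalings quoted above.
# All parameter values are illustrative; f ~ 1 far from the inner edge.
alpha, Mdot16, m1, R10, f = 0.1, 1.0, 1.0, 1.0, 1.0

H = 1.7e8 * alpha**-0.1 * Mdot16**0.15 * m1**-0.375 * R10**1.125 * f**0.6      # cm
Tc = 1.4e4 * alpha**-0.2 * Mdot16**0.3 * m1**0.25 * R10**-0.75 * f**1.2        # K
rho = 3.1e-8 * alpha**-0.7 * Mdot16**0.55 * m1**0.625 * R10**-1.875 * f**2.2   # g/cm^3

R = R10 * 1e10  # radius in cm
print(f"H/R = {H / R:.3f}")  # << 1, consistent with the thin-disk assumption
print(f"T_c = {Tc:.2e} K, rho = {rho:.2e} g/cm^3")
```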
The Shakura–Sunyaev α-disk model is both thermally and viscously unstable. An alternative model, known as the $\beta$-disk, which is stable in both senses, assumes that the viscosity is proportional to the gas pressure, $\nu \propto \alpha p_{\mathrm{gas}}$. In the standard Shakura–Sunyaev model, viscosity is assumed to be proportional to the total pressure $p_{\mathrm{tot}} = p_{\mathrm{rad}} + p_{\mathrm{gas}} = \rho c_{s}^{2}$, since $\nu = \alpha c_{s} H = \alpha c_{s}^{2}/\Omega = \alpha p_{\mathrm{tot}}/(\rho \Omega)$.
The Shakura–Sunyaev model assumes that the disk is in local thermal equilibrium, and can radiate its heat efficiently. In this case, the disk radiates away the viscous heat, cools, and becomes geometrically thin. However, this assumption may break down. In the radiatively inefficient case, the disk may "puff up" into a torus or some other three-dimensional solution like an Advection Dominated Accretion Flow (ADAF). The ADAF solutions usually require that the accretion rate is smaller than a few percent of the Eddington limit. Another extreme is the case of Saturn's rings, where the disk is so gas-poor that its angular momentum transport is dominated by solid body collisions and disk-moon gravitational interactions. The model is in agreement with recent astrophysical measurements using gravitational lensing.
Magnetorotational instability
Balbus and Hawley (1991) proposed a mechanism involving magnetic fields to generate the angular momentum transport. A simple system displaying this mechanism is a gas disk in the presence of a weak axial magnetic field. Two radially neighboring fluid elements behave as two mass points connected by a massless spring, the spring tension playing the role of the magnetic tension. In a Keplerian disk the inner fluid element orbits more rapidly than the outer, causing the spring to stretch. The inner fluid element is then forced by the spring to slow down, correspondingly reducing its angular momentum and causing it to move to a lower orbit. The outer fluid element, being pulled forward, speeds up, increasing its angular momentum, and moves to a larger-radius orbit. The spring tension increases as the two fluid elements move further apart, and the process runs away.
It can be shown that in the presence of such a spring-like tension the Rayleigh stability criterion is replaced by
$\frac{\partial \Omega^{2}}{\partial \ln R} > 0.$
Most astrophysical disks do not meet this criterion and are therefore prone to this magnetorotational instability. The magnetic fields present in astrophysical objects (required for the instability to occur) are believed to be generated via dynamo action.
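For Keplerian rotation the failure of this criterion can be seen directly: with $\Omega^{2} = GM/R^{3}$, one finds $\partial \Omega^{2}/\partial \ln R = -3\Omega^{2} < 0$, even though $R^{2}\Omega$ still increases outward. A symbolic check with SymPy (illustrative):

```python
import sympy as sp

R, GM = sp.symbols('R GM', positive=True)
omega_sq = GM / R**3                      # Keplerian: Omega^2 = GM / R^3

# d(Omega^2)/d(ln R) = R * d(Omega^2)/dR
mri_lhs = sp.simplify(R * sp.diff(omega_sq, R))
print(mri_lhs)                            # -> -3*GM/R**3, negative everywhere

# By contrast, the Rayleigh quantity R^2*Omega = sqrt(GM*R) increases outward:
print(sp.simplify(sp.diff(R**2 * sp.sqrt(omega_sq), R)))  # -> sqrt(GM)/(2*sqrt(R))
```

So a magnetised Keplerian disk is MRI-unstable while remaining hydrodynamically (Rayleigh) stable.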
Magnetic fields and jets
Accretion disks are usually assumed to be threaded by the external magnetic fields present in the interstellar medium. These fields are typically weak (a few microgauss), but they can become anchored to the matter in the disk because of its high electrical conductivity, and be carried inward toward the central star. This process can concentrate the magnetic flux around the centre of the disk, giving rise to very strong magnetic fields. Formation of powerful astrophysical jets along the rotation axis of accretion disks requires a large-scale poloidal magnetic field in the inner regions of the disk.
Such magnetic fields may be advected inward from the interstellar medium or generated by a magnetic dynamo within the disk. Magnetic field strengths of at least of order 100 gauss seem necessary for the magneto-centrifugal mechanism to launch powerful jets. There are problems, however, in carrying external magnetic flux inward toward the central star of the disk. High electric conductivity dictates that the magnetic field is frozen into the matter, which is accreted onto the central object at a slow velocity. However, the plasma is not a perfect electric conductor, so there is always some degree of dissipation, and the magnetic field diffuses away faster than the rate at which it is carried inward by accretion of matter. A simple solution is to assume a viscosity much larger than the magnetic diffusivity in the disk. However, numerical simulations and theoretical models show that the viscosity and magnetic diffusivity have almost the same order of magnitude in magneto-rotationally turbulent disks. Some other factors may possibly affect the advection/diffusion rate: reduced turbulent magnetic diffusion in the surface layers; reduction of the Shakura–Sunyaev viscosity by magnetic fields; and the generation of large-scale fields by small-scale MHD turbulence – a large-scale dynamo. In fact, a combination of different mechanisms might be responsible for efficiently carrying the external field inward toward the central parts of the disk where the jet is launched. Magnetic buoyancy, turbulent pumping and turbulent diamagnetism exemplify the physical phenomena invoked to explain such efficient concentration of external fields.
Analytic models of sub-Eddington accretion disks (thin disks, ADAFs)
When the accretion rate is sub-Eddington and the opacity very high, the standard thin accretion disk is formed. It is geometrically thin in the vertical direction (having a disk-like shape), and is made of relatively cold gas with negligible radiation pressure. The gas descends on very tight spirals, resembling almost circular, almost free (Keplerian) orbits. Thin disks are relatively luminous and have thermal electromagnetic spectra, i.e. not much different from that of a sum of black bodies. Radiative cooling is very efficient in thin disks. The classic 1974 work by Shakura and Sunyaev on thin accretion disks is one of the most often-quoted papers in modern astrophysics. Thin disks were independently worked out by Lynden-Bell, Pringle, and Rees. Over the past thirty years, Pringle has contributed many key results to accretion disk theory, and wrote the classic 1981 review that for many years was the main source of information about accretion disks and is still very useful today.
A fully general relativistic treatment, as needed for the inner part of the disk when the central object is a black hole, has been provided by Page and Thorne, and used for producing simulated optical images by Luminet and Marck. Although such a system is intrinsically symmetric, its image is not: the relativistic rotation speed needed for centrifugal equilibrium in the very strong gravitational field near the black hole produces a strong Doppler redshift on the receding side, whereas there is a strong blueshift on the approaching side. Due to light bending, the disk appears distorted but is nowhere hidden by the black hole.
When the accretion rate is sub-Eddington and the opacity very low, an ADAF (advection dominated accretion flow) is formed. This type of accretion disk was predicted in 1977 by Ichimaru. Although Ichimaru's paper was largely ignored, some elements of the ADAF model were present in the influential 1982 ion-tori paper by Rees, Phinney, Begelman, and Blandford.
ADAFs started to be intensely studied by many authors only after their rediscovery in the early 1990s by Popham and Narayan in numerical models of accretion disk boundary layers.
Self-similar solutions for advection-dominated accretion were found by Narayan and Yi, and independently by Abramowicz, Chen, Kato, Lasota (who coined the name ADAF), and Regev.
The most important contributions to astrophysical applications of ADAFs have been made by Narayan and his collaborators. ADAFs are cooled by advection (heat captured in the matter) rather than by radiation. They are very radiatively inefficient, geometrically extended, similar in shape to a sphere (or a "corona") rather than a disk, and very hot (close to the virial temperature). Because of their low efficiency, ADAFs are much less luminous than the Shakura–Sunyaev thin disks. ADAFs emit power-law, non-thermal radiation, often with a strong Compton component.
Analytic models of super-Eddington accretion disks (slim disks, Polish doughnuts)
The theory of highly super-Eddington black hole accretion, $\dot{M} \gg \dot{M}_{\mathrm{Edd}}$, was developed in the 1980s by Abramowicz, Jaroszynski, Paczyński, Sikora, and others in terms of "Polish doughnuts" (the name was coined by Rees). Polish doughnuts are low-viscosity, optically thick, radiation-pressure-supported accretion disks cooled by advection. They are radiatively very inefficient. Polish doughnuts resemble in shape a fat torus (a doughnut) with two narrow funnels along the rotation axis. The funnels collimate the radiation into beams with highly super-Eddington luminosities.
Slim disks (the name was coined by Kolakowska) have only moderately super-Eddington accretion rates, $\dot{M} \geq \dot{M}_{\mathrm{Edd}}$, rather disk-like shapes, and almost thermal spectra. They are cooled by advection, and are radiatively inefficient. They were introduced by Abramowicz, Lasota, Czerny, and Szuszkiewicz in 1988.
Excretion disk
The opposite of an accretion disk is an excretion disk, in which material is excreted from the center outward onto the disk instead of accreting from the disk onto the central object. Excretion disks are formed when stars merge.
| Physical sciences | Basics_2 | Astronomy |
50762105 | https://en.wikipedia.org/wiki/Emphysema | Emphysema | Emphysema is any air-filled enlargement in the body's tissues. Most commonly, emphysema refers to the permanent enlargement of air spaces (alveoli) in the lungs, and is also known as pulmonary emphysema.
Emphysema is a lower respiratory tract disease, characterised by enlarged air-filled spaces in the lungs that can vary in size and may be very large. The spaces are caused by the breakdown of the walls of the alveoli, and they replace the spongy lung tissue. This reduces the total alveolar surface available for gas exchange, leading to a reduction in the oxygen supplied to the blood. Emphysema usually affects the middle-aged or older population because it takes time to develop with the effects of tobacco smoking and other risk factors. Alpha-1 antitrypsin deficiency is a genetic risk factor that may lead to the condition presenting earlier.
When associated with significant airflow limitation, emphysema is a major subtype of chronic obstructive pulmonary disease (COPD), a progressive lung disease characterized by long-term breathing problems and poor airflow. Even without COPD, the finding of emphysema on a CT lung scan confers a higher mortality risk in tobacco smokers. In 2016 in the United States there were 6,977 deaths from emphysema – 2.2 per 100,000 people. Globally it accounts for 5% of all deaths. A 2018 review of work on the effects of tobacco and cannabis smoking found that a possibly cumulative toxic effect could be a risk factor for developing emphysema and spontaneous pneumothorax.
There are four types of emphysema, three of which are related to the anatomy of the lobules of the lung – centrilobular or centriacinar, panlobular or panacinar, and paraseptal or distal acinar emphysema – and are not associated with fibrosis (scarring). The fourth type, known as paracicatricial or irregular emphysema, involves the acinus irregularly and is associated with fibrosis. Though the different types can be seen on imaging, they are not well defined clinically. There are also a number of associated conditions, including bullous emphysema, focal emphysema, and Ritalin lung. Only the first two types of emphysema – centrilobular and panlobular – are associated with significant airflow obstruction, with centrilobular emphysema around 20 times more common than panlobular. Centrilobular emphysema is the only type associated with smoking.
Osteoporosis is often a comorbidity of emphysema. The use of systemic corticosteroids for treating exacerbations is a significant risk factor for osteoporosis, and so their repeated use is not recommended.
Signs and symptoms
Emphysema is a respiratory disease of the lower respiratory tract. It is commonly caused by tobacco smoking but some people are affected who have never smoked. The presence of emphysema is a clear risk factor for lung cancer, made stronger in those who smoke.
Early symptoms of emphysema vary. They can include a cough (with or without sputum), wheezing, a fast breathing rate, breathlessness on exertion, and a feeling of tightness in the chest. There may be frequent cold or flu infections. Other symptoms may include anxiety, depression, fatigue, sleep problems and weight loss. These symptoms could also relate to other lung conditions or other health problems; therefore, emphysema is often underdiagnosed. The shortness of breath that emphysema causes can increase over time, and the condition can develop into chronic obstructive pulmonary disease.
A sign of emphysema in smokers is a higher number of alveolar macrophages sampled by bronchoalveolar lavage (BAL) from the lungs. The number can be four to six times greater in those who smoke than in non-smokers.
Emphysema is also associated with barrel chest.
Types
There are four main types of emphysema, three of which are related to the anatomy of the lobules of the lung – centrilobular or centriacinar, panlobular or panacinar, and paraseptal or distal acinar – and are not associated with fibrosis (scarring). Although fibrosis is not a normal feature of these subtypes, repair strategies in end-stage emphysema may lead to pulmonary fibrosis. The fourth subtype, known as paracicatricial or irregular emphysema, involves the acinus irregularly and is associated with fibrosis.
Only the first two types of emphysema – centrilobular and panlobular – are associated with significant airflow obstruction, with that of centrilobular emphysema around 20 times more common than panlobular. The subtypes can be seen on imaging but are not well-defined clinically.
There are also a number of associated conditions including bullous emphysema, focal emphysema, and Ritalin lung.
Centrilobular
Centrilobular emphysema, also called centriacinar emphysema, affects the centre of a pulmonary lobule (centrilobular) in the lung, the area around the terminal bronchiole and the first respiratory bronchiole, and can be seen on imaging as an area around the tip of the visible pulmonary artery. Centrilobular emphysema is the most common type usually associated with smoking, and with chronic bronchitis. The disease progresses from the centrilobular portion, leaving the lung parenchyma in the surrounding (perilobular) region preserved. Usually the upper lobes of the lungs are affected.
Panlobular
Panlobular emphysema, also called panacinar emphysema, affects all of the alveoli in a lobule, and can involve the whole lung or mainly the lower lobes. This type of emphysema is associated with alpha-1 antitrypsin deficiency (A1AD or AATD), and Ritalin lung, and is not related to smoking.
Complications
Likely complications of centrilobular and panlobular emphysema, some of which are life-threatening, include: respiratory failure, pneumonia, respiratory infections, pneumothorax, interstitial emphysema, pulmonary heart disease, and respiratory acidosis.
Paraseptal
Paraseptal emphysema, also called distal acinar emphysema, relates to emphysematous change next to a pleural surface, or to a fissure. The cystic spaces known as blebs or bullae that form in paraseptal emphysema typically occur in just one layer beneath the pleura. This distinguishes it from the honeycombing of small cystic spaces seen in fibrosis that typically occurs in layers. This type of emphysema is not associated with airflow obstruction.
Bullous
When the subpleural bullae are significant, the emphysema is called bullous emphysema. Bullae can become extensive and combine to form giant bullae. These can be large enough to take up a third of a hemithorax, compress the lung parenchyma, and cause displacement. The emphysema is now termed giant bullous emphysema, more commonly called vanishing lung syndrome due to the compressed parenchyma. A bleb or bulla may sometimes rupture and cause a pneumothorax.
Paracicatricial
Paracicatricial emphysema, also known as irregular emphysema, is seen next to areas of fibrosis (scarring) as large spaces. The scarring is most often a result of silicosis, granulomatous infection, tuberculosis, or pulmonary infarction. It can be difficult to differentiate from the honeycombing of pulmonary fibrosis.
HIV associated
Chronic lung diseases are a recognised complication of HIV/AIDS, with emphysema being one such disease. HIV is cited as a risk factor for the development of emphysema and COPD regardless of smoking status. Around 20 percent of those with HIV have increased emphysematous changes, suggesting that an underlying mechanism related to HIV is a contributory factor in the development of emphysema. HIV-associated emphysema occurs over a much shorter time than that associated with smoking; an earlier presentation is also seen in emphysema caused by alpha-1 antitrypsin deficiency. Both of these conditions predominantly show damage in the lower lungs, which suggests a similarity between the two mechanisms.
Alpha-1 related
Emphysema may develop in some people with alpha-1 antitrypsin deficiency, the only well-established genetic risk factor for chronic obstructive pulmonary disease. This type usually develops much earlier (as does HIV-associated emphysema) than other types.
Ritalin lung
The intravenous use of methylphenidate, commonly marketed as Ritalin and widely used as a stimulant drug in the treatment of attention deficit hyperactivity disorder, can lead to emphysematous changes known as Ritalin lung. The mechanism underlying this link is not clearly understood. Ritalin tablets are not intended to be injected. They contain talc as a filler, and it has been suggested that talc exposure causes granulomatosis leading to alveolar destruction. However, other intravenous drugs also contain talc, and no emphysematous change is associated with those. High resolution CT scanning shows the emphysema to be panlobular.
CPFE
Combined pulmonary fibrosis and emphysema (CPFE) is a rare syndrome that shows upper-lobe emphysema, together with lower-lobe interstitial fibrosis. This is diagnosed by CT scan. This syndrome presents a marked susceptibility for the development of pulmonary hypertension.
SRIF
Smoking-related interstitial fibrosis (SRIF) is another type of fibrosis that occurs in emphysematous lungs and can be identified by pathologists. Unlike CPFE, this type of fibrosis is usually clinically occult (i.e., does not cause symptoms or imaging abnormalities). Occasionally, however, some patients with SRIF present with symptoms and radiologic findings of interstitial lung disease.
Congenital lobar
Congenital lobar emphysema (CLE), also known as congenital lobar overinflation and infantile lobar emphysema, is a neonatal condition associated with enlarged air spaces in the lungs of newborn infants. It is diagnosed around the time of birth or in the first 6 months of life, occurring more often in boys than girls. CLE affects the upper lung lobes more than the lower lobes, and the left lung more often than the right lung. CLE is defined as the hyperinflation of one or more lobes of the lung due to the partial obstruction of the bronchus. This causes symptoms of pressure on the nearby organs. It is associated with several cardiac abnormalities such as patent ductus arteriosus, atrial septal defect, ventricular septal defect, and tetralogy of Fallot. Although CLE may be caused by the abnormal development of bronchi, or compression of airways by nearby tissues, no cause is identified in half of cases. CT scan of the lungs is useful in assessing the anatomy of the lung lobes and the status of the neighbouring lobes, including whether they are hypoplastic. Contrast-enhanced CT is useful in assessing vascular abnormalities and mediastinal masses.
Focal
Focal emphysema, also known as localized pulmonary emphysema, is a localized region of emphysema in the lung that is larger than alveoli, and is often associated with coalworker's pneumoconiosis. Blebs and bullae may also be included as focal emphysema. They can be differentiated from the other type of enclosed air space, known as a lung cyst, by their size and wall thickness: a bleb or bulla has a wall thickness of less than 1 mm and is smaller than a cyst.
Occupational
A number of occupations are associated with the development of emphysema due to the inhalation of varied gases and particles. In the US, uranium mining, which releases radon gas and particles, has been shown to be a cause of emphysema deaths; the figures in the study included some miners who also smoked. Uranium mining and milling were also found to create environmental pollution.
The inhalation of coal mine dust that can result in coalworker's pneumoconiosis is an independent risk factor for the development of emphysema. Focal emphysema is associated with the coal macule, and this extends into progressive centrilobular emphysema. Less commonly a variant of panlobular emphysema develops.
Silicosis results from the inhalation of silica particles, and the formation of large silica nodules is associated with paracicatricial emphysema, with or without bullae.
Ozone-induced emphysema
Ozone is another pollutant that can affect the respiratory system. Long-term exposure to ozone can result in emphysema.
Osteoporosis
Osteoporosis is a major comorbidity of emphysema; both conditions are associated with a low body mass index. Emphysema treatment itself carries a risk: the use of systemic corticosteroids for treating exacerbations is a significant risk factor for osteoporosis, and their repeated use is therefore not recommended.
Other terms
Compensatory emphysema is overinflation of part of a lung in response to either removal by surgery of another part of the lung or decreased size of another part of the lung.
Pulmonary interstitial emphysema (PIE) is a collection of air inside the lungs but outside the normal air space of the alveoli, found as pneumatoses inside the connective tissue of the peribronchovascular sheaths, interlobular septa, and visceral pleura.
Lung volume reduction
Lung volume reduction may be offered to those with advanced emphysema. When other treatments fail, and the emphysema is in the upper lobes, a surgical option may be possible. A number of minimally invasive bronchoscopic procedures are increasingly used to reduce lung volume.
Surgical
Where there is severe emphysema with significant hyperinflation that has proved unresponsive to other therapies, lung volume reduction surgery (LVRS) may be an option. LVRS involves the removal of tissue from the lobe most damaged by emphysema, which allows the other lobes to expand and give improved function. The procedure appears to be particularly effective if the emphysema primarily involves the upper lobes; however, the procedure increases the risk of adverse events and early death in people who have diffuse emphysema.
Bronchoscopic
Minimally invasive bronchoscopic procedures may be carried out to reduce lung volume. These include the use of valves, coils, or thermal ablation. Endobronchial valves are one-way valves that may be used in those with severe hyperinflation resulting from advanced emphysema; a suitable target lobe and an absence of collateral ventilation are required for this procedure. The placement of one or more valves in the lobe induces its partial collapse, reducing residual volume and improving lung function, exercise capacity, and quality of life.
Endobronchial coils made of nitinol, a biocompatible shape-memory alloy, are recommended instead of valves where collateral ventilation would prevent the use of valves.
Both of these techniques are associated with adverse effects, including persistent air leaks and cardiovascular complications. Bronchoscopic thermal vapor ablation has a more favourable adverse-effect profile: heated water vapor is used to target affected regions of the lobe, which leads to permanent fibrosis and volume reduction. The procedure is able to target individual lobe segments, can be carried out regardless of collateral ventilation, and can be repeated as the emphysema naturally advances.
Other surgeries
Lung transplantation – the replacement of either a single lung or both (bilateral) – may be considered in end-stage disease. A bilateral transplant is the preferred choice, as complications can arise in a remaining single native lung, including hyperinflation, pneumonia, and the development of lung cancer. Careful selection for transplant surgery, as recommended by the National Emphysema Treatment Trial (NETT), is needed, as in some cases there is an increased risk of mortality. Several factors, including age and exercise tolerance as measured by the BODE index, need to be taken into account. A transplant is considered only when there are no serious comorbidities. A CT scan or a ventilation/perfusion scan may be useful to evaluate cases for surgical interventions and to evaluate post-surgery responses. A bullectomy may be carried out when a giant bulla occupies more than a third of a hemithorax.
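For illustration, the BODE index mentioned above combines Body-mass index, airflow Obstruction (FEV1 percent predicted), Dyspnea (mMRC scale), and Exercise capacity (six-minute walk distance) into a score from 0 to 10. The following minimal Python sketch uses the commonly published thresholds; the function name and structure are invented here for illustration, and the thresholds should be verified against current clinical guidance before any real use.

def bode_index(bmi: float, fev1_pct: float, mmrc: int, walk_m: float) -> int:
    """Toy BODE calculator: higher scores indicate worse prognosis."""
    score = 1 if bmi <= 21 else 0                  # body-mass index component
    if fev1_pct >= 65:                             # FEV1, percent of predicted
        score += 0
    elif fev1_pct >= 50:
        score += 1
    elif fev1_pct >= 36:
        score += 2
    else:
        score += 3
    score += {0: 0, 1: 0, 2: 1, 3: 2, 4: 3}[mmrc]  # mMRC dyspnea grade
    if walk_m >= 350:                              # six-minute walk distance (m)
        score += 0
    elif walk_m >= 250:
        score += 1
    elif walk_m >= 150:
        score += 2
    else:
        score += 3
    return score

print(bode_index(bmi=20, fev1_pct=40, mmrc=3, walk_m=200))  # prints 7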
In other tissues
Trapped air can also develop in other tissues such as under the skin, known as subcutaneous emphysema. Orbital emphysema is the trapping of air in the orbit; a type of this is palpebral emphysema that affects just the eyelids. Emphysematous gastritis is the presence of air in the stomach wall, usually caused by a bacterial infection. This is rare but has a high mortality rate.
History
The terms emphysema and chronic bronchitis were formally defined in 1959 at the CIBA guest symposium, and in 1962 at the American Thoracic Society Committee meeting on Diagnostic Standards. The word emphysema is derived from Ancient Greek ἐμφύσημα 'inflation, swelling' (referring to a lung inflated by air-filled spaces), itself from emphysao 'to blow in, to inflate', composed of ἐν en, meaning "in", and φυσᾶ physa, meaning "wind, blast".
René Laennec, the physician who invented the stethoscope, used the term emphysema in his book A Treatise on the Diseases of the Chest and of Mediate Auscultation (1837) to describe lungs that did not collapse when he opened the chest during an autopsy. He noted that they did not collapse as usual because they were full of air and the airways were filled with mucus. Early descriptions of probable emphysema include: in 1679 by T. Bonet of a condition of "voluminous lungs" and in 1769 by Giovanni Morgagni of lungs which were "turgid particularly from air". In 1721 the first drawings of emphysema were made by Ruysh. These were followed by the illustrations of Matthew Baillie in 1789 and descriptions of the destructive nature of the condition.
Classic Mac OS
Mac OS (originally System Software; retronym: Classic Mac OS) is the series of operating systems developed for the Macintosh family of personal computers by Apple Computer, Inc. from 1984 to 2001, starting with System 1 and ending with Mac OS 9. The Macintosh operating system is credited with having popularized the graphical user interface concept. It was included with every Macintosh that was sold during the era in which it was developed, and many updates to the system software were done in conjunction with the introduction of new Macintosh systems.
Apple released the original Macintosh on January 24, 1984. The first version of the system software, which had no official name, was partially based on the Lisa OS, which Apple previously released for the Lisa computer in 1983. As part of an agreement allowing Xerox to buy shares in Apple at a favorable price, it also used concepts from the Xerox PARC Alto computer, which former Apple CEO Steve Jobs and other Lisa team members had previewed. This operating system consisted of the Macintosh Toolbox ROM and the "System Folder", a set of files that were loaded from disk. The name Macintosh System Software came into use in 1987 with System 5. Apple rebranded the system as Mac OS in 1996, starting officially with version 7.6, due in part to its Macintosh clone program. That program ended after the release of Mac OS 8 in 1997. The last major release of the system was Mac OS 9 in 1999.
Initial versions of the System Software ran one application at a time. With the Macintosh 512K, a system extension called the Switcher was developed to use this additional memory to allow multiple programs to remain loaded. The software of each loaded program used the memory exclusively; only when activated by the Switcher did the program appear, even the Finder's desktop. With the Switcher, the now familiar Clipboard feature allowed copy and paste between the loaded programs across switches including the desktop.
With the introduction of System 5, a cooperative multitasking extension called MultiFinder was added, which allowed content in windows of each program to remain in a layered view over the desktop, and was later integrated into System 7 as part of the operating system along with support for virtual memory. By the mid-1990s, however, contemporary operating systems such as Windows NT, OS/2, NeXTSTEP, BSD, and Linux had all brought pre-emptive multitasking, protected memory, access controls, and multi-user capabilities to desktop computers. The Macintosh's limited memory management and susceptibility to conflicts among extensions that provide additional functionality, such as networking or support for a particular device, led to significant criticism of the operating system, and was a factor in Apple's declining market share at the time.
After two aborted attempts at creating a successor to the Macintosh System Software, called Taligent and Copland, and a four-year development effort that followed Steve Jobs's return to Apple in 1997, Apple replaced Mac OS in 2001 with a new operating system named Mac OS X. It retained most of the user interface design elements of the Classic Mac OS, and there was some overlap of application frameworks for compatibility, but the two operating systems otherwise have completely different origins and architectures.
The final updates to Mac OS 9 released in 2001 provided interoperability with Mac OS X. The name "Classic" that now signifies the historical Mac OS as a whole is a reference to the Classic Environment, a compatibility layer that helped ease the transition to Mac OS X (now macOS).
Initial concept
The Macintosh project started in late 1978 with Jef Raskin, who envisioned an easy-to-use, low-cost computer for the average consumer. In September 1979, Raskin began looking for an engineer who could put together a prototype. Bill Atkinson, a member of the Apple Lisa team, introduced Raskin to Burrell Smith, a service technician who had been hired earlier that year.
Apple's concept for the Macintosh deliberately sought to minimize the user's awareness of the operating system. Many basic tasks that required more operating system knowledge on other systems could be accomplished by mouse gestures and graphic controls on a Macintosh. This would differentiate it from its contemporaries such as MS-DOS, which use a command-line interface consisting of terse, abbreviated textual commands.
In January 1981, Steve Jobs completely took over the Macintosh project. Jobs and a number of Apple engineers visited Xerox PARC in December 1979, three months after the Lisa and Macintosh projects had begun. After hearing about the pioneering GUI technology being developed at Xerox PARC from former Xerox employees like Raskin, Jobs negotiated a visit to see the Xerox Alto computer and Smalltalk development tools in exchange for Apple stock options. The final Lisa and Macintosh operating systems use concepts from the Xerox Alto, but many elements of the graphical user interface were created by Apple including the menu bar, pull-down menus, and the concepts of drag and drop and direct manipulation.
Unlike the IBM PC, which uses 8 kB of system ROM for power-on self-test (POST) and basic input/output system (BIOS), the Mac ROM is significantly larger (64 kB) and holds key OS code. Much of the original Mac ROM code was written by Andy Hertzfeld, a member of the original Macintosh team. He was able to conserve precious ROM space by writing routines in assembly language code optimized with "hacks", or clever programming tricks. In addition to the ROM, he also coded the kernel, the Macintosh Toolbox, and some of the desktop accessories (DAs). The icons of the operating system, which represent folders and application software, were designed by Susan Kare, who later designed the icons for Microsoft Windows 3.0. Bruce Horn and Steve Capps wrote the Macintosh Finder, as well as a number of Macintosh system utilities.
Apple aggressively advertised their new machine. After its release, the company bought all 39 pages of advertisement space in the 1984 November/December edition of Newsweek magazine. The Macintosh quickly outsold its more sophisticated but much more expensive predecessor, the Lisa. Apple quickly developed MacWorks, a product that allowed the Lisa to emulate Macintosh system software through System 3, by which time it had been discontinued as the rebranded Macintosh XL. Many of the Lisa's operating system advances would not appear in the Macintosh operating system until System 7 or later.
Architecture
Compatibility
Early versions of Mac OS are compatible only with Motorola 68000-family Macintoshes. As Apple introduced computers with PowerPC hardware, the OS was ported to support this architecture. Mac OS 8.1 is the last version that could run on a 68k processor (the 68040).
In systems prior to PowerPC G3-based systems, significant parts of the system are stored in physical ROM on the motherboard. The initial purpose of this was to avoid having the OS use up most of the 128 KiB of RAM of the initial Macintosh—the initial ROMs were 64 KiB. This architecture also allows for a completely graphical OS interface at the lowest level without the need for a text-only console or command-line mode: boot-time errors, such as finding no functioning disk drives, are communicated to the user graphically, usually with an icon or the distinctive Chicago bitmap font and a Chime of Death or a series of beeps. This is in contrast to MS-DOS and CP/M computers of the time, which display such messages in a mono-spaced font on a black background and require the use of the keyboard, rather than a mouse, for input. To provide such niceties at a low level, early Mac OS depends on core system software in ROM on the motherboard, which also ensured that only Apple computers or licensed clones (with the copyright-protected ROMs from Apple) could run Mac OS.
Mac clones
Several computer manufacturers over the years made Macintosh clones that were capable of running Mac OS. From 1995 to 1997, Apple licensed Macintosh ROMs to several companies, notably Power Computing, UMAX and Motorola. These machines normally ran various versions of Classic Mac OS. Steve Jobs ended the clone-licensing program after returning to Apple in 1997.
Support for Macintosh clones was first exhibited in System 7.5.1, which was the first version to include the "Mac OS" logo (a variation on the original Happy Mac startup icon), and Mac OS 7.6 was the first to be named "Mac OS" instead of "System". These changes were made to disassociate the operating system from Apple's own Macintosh models.
File systems
The Macintosh originally used the Macintosh File System (MFS), a flat file system with only one level of folders. This was quickly replaced in 1985 by the Hierarchical File System (HFS), which had a true directory tree. Both file systems are otherwise compatible. An improved file system named HFS Plus ("HFS+" or "Mac OS Extended") was announced in 1997 and implemented in 1998.
Files in most file systems used with DOS, Windows, Unix, or other operating systems have only one "fork". By contrast, MFS and HFS give files two different "forks". The data fork contains the same sort of information as a file in other file systems, such as the text of a document or the bitmaps of an image file. The resource fork contains other structured data such as menu definitions, graphics, sounds, or code segments that would be incorporated into a program's file format on other systems. An executable file might consist only of resources (including code segments) with an empty data fork, while a data file might have only a data fork with no resource fork. A word processor file could contain its text in the data fork and styling information in the resource fork so that an application that does not recognize the styling information can still read the raw text.
On the other hand, these forks posed a challenge to interoperability with other operating systems: in copying or transferring a Mac OS file to a non-Mac system, the default implementations would strip the file of its resource fork. Most data files contained only nonessential information in their resource fork, such as window size and location, but program files would be inoperative without their resources. This necessitated encoding schemes such as BinHex and MacBinary, which allowed a user to encode a dual-forked file into a single stream, or inversely take a single stream so encoded and reconstitute it into a dual-forked file usable by Mac OS.
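As a rough illustration of the idea behind such encodings, here is a minimal Python sketch that flattens a two-forked file into one byte stream and back. The header layout is invented for clarity; it is not the real MacBinary or BinHex format.

import struct

def flatten(name: bytes, data_fork: bytes, resource_fork: bytes) -> bytes:
    # Prefix a fixed-size header (name plus both fork lengths) so that a
    # single stream can later be split back into its two forks.
    header = struct.pack(">B31sII", len(name), name,
                         len(data_fork), len(resource_fork))
    return header + data_fork + resource_fork

def unflatten(stream: bytes):
    # Read the 40-byte header, then slice the body back into the two forks.
    name_len, name, dlen, rlen = struct.unpack(">B31sII", stream[:40])
    body = stream[40:]
    return name[:name_len], body[:dlen], body[dlen:dlen + rlen]

blob = flatten(b"ReadMe", b"plain text", b"menus, icons, code segments")
assert unflatten(blob) == (b"ReadMe", b"plain text", b"menus, icons, code segments")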
Release history
System 1, 2, 3, and 4
As part of Apple's goal of creating a computer with appliance-like simplicity, there is no explicit distinction made between the operating system software and the hardware it runs on. Because of this, early versions of the operating system do not have a distinct name. The software consists of two user-visible files: the System file, and the Finder, an application used for file management that also displays the Desktop. The two files are contained in a folder directory labeled "System Folder", which contains other resource files, like a printer driver, needed to interact with the System. Version numbers of the operating system are based on the version numbers of these two files.
System 1.0, 1.1, and 2.0 use a flat file system named Macintosh File System (MFS). The Finder provides virtual folders that can be used to organize files, but these folders are not visible from any other application and do not exist on the disk.
System 2.0 added support for AppleTalk and for the newly introduced LaserWriter, which used it.
System 2.1 (Finder 5.0) introduced the Hierarchical File System (HFS), which has real directories. This version was released specifically to support the Hard Disk 20 and only implements HFS in RAM; startup and most floppy disks remain MFS 400 K volumes.
System 3.0 (Finder 5.1) was introduced with the Macintosh Plus, officially implementing HFS and 800K startup drives, and adding support for several new technologies, including SCSI and AppleShare, as well as Trash "bulging" (i.e., when the Trash contains files, it gains a bulged appearance).
System 4.0 was released with the Macintosh SE and System 4.1 first shipped with the Macintosh II—these new machines required additional support for the first expansion slots, the Apple Desktop Bus (ADB), internal hard drives and, on the Macintosh II, external color displays and the Motorola 68020 processor, the first in the Macintosh line. System 4.0 was the first release to support color graphics.
These releases can only run one application at a time, except for desk accessories, though special application shells such as Multi-Mac or Switcher (discussed under MultiFinder) could work around this. Visible changes are best reflected in the version number of the Finder, where major leaps are found between 1.x, 4.x, 5.x, and 6.x.
In the late 1990s, Apple retroactively gave these older releases a single name.
System Software 5
Towards the end of 1987, Apple introduced a package titled "Apple Macintosh System Software Update 5.0". For the first time, the Macintosh operating system was offered as a distinct retail product that included four 800K disks and three manuals, at a cost of US$49. The software itself was still freely available through user groups and bulletin board services. While the product box presented this update to the operating system as "version 5.0", this number does not appear in the software itself. Three of the four disks (System Tools 1, System Tools 2, and Utilities 1) are bootable, and the user can boot off whichever floppy contains the tools they need. For instance, System Tools 2 is the only disk with printer drivers, and Utilities 1 is the only disk with Disk First Aid and Apple HD SC Setup. Because the disks are named System Tools, users and the press commonly referred to this version as "System Tools 5.0".
The primary new feature of System 5 is MultiFinder, an extension that lets the system run several programs at once. The system uses a cooperative multitasking model, meaning that time is given to the background applications only when the foreground application yields control. A change in the system functions that applications were already calling to handle events makes many existing applications share time automatically, as well as allowing them to perform tasks in the background. Users can also choose not to use MultiFinder, thereby running a single application at a time. In 1990, InfoWorld tested four multitasking options for PC and Mac, viewing MultiFinder positively overall, but noting that its presence halved the speed of file transfer and printing compared to the single-tasking System 6 without MultiFinder.
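To make the cooperative model concrete, here is a minimal Python sketch, not actual MultiFinder code: each toy application runs until it voluntarily yields control, much as a real application yielded by calling the system's event-handling routines. The names app and event_loop are invented for illustration.

def app(name: str, steps: int):
    """A toy application that does one unit of work, then yields the CPU."""
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # cooperative yield; control returns to the scheduler

def event_loop(tasks):
    """Round-robin scheduler: a task that never yields would starve the rest."""
    while tasks:
        task = tasks.pop(0)
        try:
            next(task)          # run the task until it yields
            tasks.append(task)  # re-queue it for another turn
        except StopIteration:
            pass                # task finished; drop it

event_loop([app("Finder", 2), app("MacWrite", 3)])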
System Software 6
System Software 6 (also referred to as "System 6") is a consolidation release of the Macintosh system software, producing a complete, stable, and long-lasting operating system. Two major hardware introductions requiring additional support under System 6 are the 68030 processor and 1.44 MB SuperDrive debuting with the Macintosh IIx and Macintosh SE/30. Later updates include support for the first specialized laptop features with the introduction of the Macintosh Portable. From System 6 forward, the Finder has a unified version number closely matching that of the System, alleviating much of the confusion caused by the often considerable differences between earlier Systems.
System 7/Mac OS 7
On May 13, 1991, System 7 was released. It was a major upgrade over System 6, adding a significant user interface overhaul, new applications, stability improvements and many new features. Its introduction coincided with the release of the 68040 Macintosh line, which it supported. The System 7 era saw numerous changes in the Macintosh platform, including a proliferation of Macintosh models, the 68k to Power Macintosh transition, the rise of Microsoft Windows, increasing use of computer networking, and the explosion in popularity of the Internet.
One of the most significant features of System 7 is virtual memory support, an essential subsystem anticipated for years, which only existed for previous Systems in a third-party extension named Virtual from Connectix. Accompanying this was a move to 32-bit memory addressing, necessary for the ever-increasing amounts of RAM available to the Motorola 68030 CPU, and 68020 CPUs with a 68851 PMMU. This process involves making all of the routines in OS code use the full 32 bits of a pointer as an address—prior systems used the upper 8 bits as flags. This change is known as being "32-bit clean". While System 7 itself is 32-bit clean, many existing machines and thousands of applications were not, so it was some time before the process was completed. To ease the transition, the "Memory" control panel contains a switch to disable this feature, allowing for compatibility with older applications.
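A minimal Python sketch of the problem follows; it is illustrative only, and the flag names and bit assignments are invented rather than Apple's actual memory-manager layout. With 24-bit addressing the top byte of a pointer is free to hold flags, but once real addresses exceed 16 MB, masking off that byte corrupts them.

ADDR_MASK_24 = 0x00FFFFFF    # low 24 bits: the actual address
FLAG_LOCKED = 0x80000000     # hypothetical flag bits packed into the top byte
FLAG_PURGEABLE = 0x40000000

def pack_24bit(address: int, locked: bool = False, purgeable: bool = False) -> int:
    """Pack flags into the unused top byte of a 24-bit address (the old scheme)."""
    word = address & ADDR_MASK_24
    if locked:
        word |= FLAG_LOCKED
    if purgeable:
        word |= FLAG_PURGEABLE
    return word

def unpack_24bit(word: int) -> int:
    """Recover the address by masking off the flag byte; safe only below 16 MB."""
    return word & ADDR_MASK_24

packed = pack_24bit(0x123456, locked=True)
assert unpack_24bit(packed) == 0x123456        # fine below 16 MB
assert unpack_24bit(0x01123456) != 0x01123456  # an address above 16 MB is mangled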
Another notable System 7 feature is built-in cooperative multitasking; in System Software 6, this function was optional through the MultiFinder. System 7 also introduced aliases, similar to symbolic links on Unix, to shortcuts in later versions of Microsoft Windows, and to shadows in IBM OS/2. System extensions were enhanced by being moved to their own subfolder; a subfolder in the System Folder was also created for the control panels. In System 7.5, Apple included the Extensions Manager, a previously third-party program that simplified the process of enabling and disabling extensions.
The Apple menu, home only to desk accessories in System 6, was made more general-purpose: the user could now make often-used folders and applications—or anything else they desired—appear in the menu by placing aliases to them in an "Apple Menu Items" subfolder of the System Folder. System 7 also introduced the following: AppleScript, a scripting language for automating tasks; 32-bit QuickDraw, supporting so-called "true color" imaging, previously available as a system extension; and TrueType, an outline font standard.
Under System 6 and earlier, the Trash empties itself automatically when the computer is shut down, or, if MultiFinder is not running, when an application is launched. System 7 reimplemented the Trash as a special hidden folder, allowing files to remain in it across reboots until the user deliberately chooses the "Empty Trash" command.
System 7.1
System 7.1 is mainly a bugfix release, with a few minor features added. One of the major new features of System 7.1 was moving fonts out of the System file into the Fonts folder in the System Folder. Previously, a resource-copying utility such as ResEdit or Font D/A Mover was required for installing fonts. System 7.1 is not only the first Macintosh operating system to cost money (all previous versions were free or sold at the cost of the floppies), but also received a "Pro" sibling (version 7.1.1) with extra features. System 7.1.2 was the first version to support PowerPC-based Macs. System 7.1 also introduced System Enablers as a method to support new models without updating the actual System file; this leads to extra files inside the System Folder (one per new model supported).
System 7.5
System 7.5 introduces a large number of new features, many of which are based on shareware applications that Apple bought and included into the new system. On the newer PowerPC machines, System 7.5 may have stability problems partly due to a new memory manager (which can be turned off), and issues with the handling of errors in the PowerPC code (all PowerPC exceptions map to Type 11). These issues do not affect 68k-architecture machines. System 7.5 is contemporary with Apple's failed Copland effort as well as the release of Windows 95.
Mac OS 7.6
Stability improved in PowerPC-based Macs with Mac OS 7.6, which dropped the "System" moniker as a more trademarkable name was needed in order to license the OS to the growing market of third-party Macintosh clone manufacturers. Mac OS 7.6 required 32-bit-clean ROMs, and so it dropped support for every Mac with a 68000 processor, as well as the Mac II, Mac IIx, Mac IIcx, and Mac SE/30.
Mac OS 8
Mac OS 8 was released on July 26, 1997, the same month Steve Jobs became the de facto CEO of Apple. It was mainly released to keep the Mac OS moving forward during a difficult time for Apple. Initially planned as Mac OS 7.7, it was renumbered "8" to exploit a legal loophole and accomplish Jobs's goal of terminating third-party manufacturers' licenses to System 7 and shutting down the Macintosh clone market.
Mac OS 8 added a number of features from the abandoned Copland project, while leaving the underlying operating system unchanged. A multi-threaded Finder was included; files could now be copied in the background. The GUI was changed in appearance to a new shaded greyscale look named Platinum, and the ability to change the appearance themes (also known as skins) was added with a new control panel (though Platinum was the only one shipped). This capability was provided by a new "appearance" API layer within the OS, one of the few significant changes.
Apple sold 1.2 million copies of Mac OS 8 in its first two weeks of availability and 3 million within six months. In light of Apple's financial difficulties at the time, there was a large grassroots movement among Mac users to upgrade and "help save Apple". Even some pirate groups refused to redistribute the OS.
Mac OS 8.1
Mac OS 8.1 introduced an updated version of the Hierarchical File System named HFS+, which fixed many of the limitations of the earlier system and continued to be used in macOS up until macOS High Sierra, when it was replaced with the Apple File System. There are some other interface changes such as separating network features from printing, and some improvements to application switching. However, in underlying technical respects, Mac OS 8 is not very different from System 7.
Mac OS 8.5
Mac OS 8.5 focuses on speed and stability, with most 68k code replaced by modern code native to the PowerPC. It also improved the appearance of the user interface, although the theming feature was cut late in development.
Mac OS 9
Mac OS 9, the last major revision of the Classic Mac OS, was released on October 23, 1999. It is generally a steady evolution from Mac OS 8. Early development releases of Mac OS 9 were numbered 8.7.
Mac OS 9 added improved support for AirPort wireless networking. It introduced an early implementation of multi-user support; though not a true multi-user operating system, Mac OS 9 does allow multiple desktop users to have their own data and system settings. An improved Sherlock search engine added several new search plug-ins. Mac OS 9 also provides a much-improved memory-management implementation. AppleScript was improved to allow TCP/IP and networking control. Mac OS 9 also made the first use of the centralized Apple Software Update to find and install OS and hardware updates.
Other new features included on-the-fly file encryption software with code signing and Keychain technologies, Remote Networking and File Server packages, and a much-improved list of USB drivers.
Mac OS 9 also added some transitional technologies to help application developers adopt Mac OS X features before the new OS's introduction to the public, easing the transition. These included new APIs for the file system and the bundling of the Carbon library that apps could link against instead of the traditional API libraries; apps adapted to do this could run natively on Mac OS X as well. Other changes were made beginning with the Mac OS 9.1 update to allow it to be launched in the Classic Environment within Mac OS X.
The final update to the Classic Mac OS was version 9.2.2, released on December 5, 2001.
Transition to Mac OS X
macOS (originally "Mac OS X" and then "OS X") is Apple's current Mac operating system that officially succeeded the Classic Mac OS in 2001. Although it was originally marketed as simply "version 10" of Mac OS, it has a history that is largely independent of the earlier Mac OS releases.
macOS is the nominal successor to Mac OS 9 and the heir to the Classic Mac OS legacy. However, unlike the Classic Mac OS, it is a Unix-based operating system built on NeXTSTEP and technology developed at NeXT from the late 1980s until early 1997, when Apple purchased the company and its CEO Steve Jobs returned to Apple. macOS also makes use of the BSD codebase and the XNU kernel, and its core set of components is based upon Apple's open source Darwin operating system.
An early version of the operating system, Mac OS X Server 1.0, was released in 1999. It retained the "Platinum" appearance from the Classic Mac OS and even resembled OPENSTEP in places. The first consumer version, Mac OS X 10.0, was released on March 24, 2001, and was the first version to ship with the new Aqua user interface. Mac OS X was renamed "OS X" in 2011 and "macOS" in 2016.
Users of the Classic Mac OS generally upgraded to Mac OS X, but it was criticized in its early years as more difficult and less user-friendly than the original Mac OS, for the lack of certain features that had not yet been reimplemented in the new OS, for being slower on the same hardware (especially older hardware), and for incompatibilities with the older OS. Because drivers (for printers, scanners, tablets, etc.) written for the older Mac OS were not compatible with Mac OS X, because program support in the Classic Environment used to run the older operating system's programs on Mac OS X was inconsistent, and because Mac OS X did not support Apple computers released before late 1997, some Macintosh users continued using the older Classic Mac OS for a few years after the original release of Mac OS X. Steve Jobs encouraged people to upgrade to Mac OS X by staging a mock funeral for Mac OS 9 at WWDC 2002.
Classic
PowerPC versions of Mac OS X up to and including Mac OS X 10.4 Tiger include a compatibility layer for running older Mac applications, the Classic Environment. Originally codenamed the "blue box", the environment runs a nearly complete Mac OS 9 operating system, version 9.1 or later, as a Mac OS X application. This allows applications that have not been ported to the Carbon API to run on Mac OS X. This is reasonably seamless, though "classic" applications retain their original Mac OS 9 appearance and do not gain the Mac OS X "Aqua" appearance.
Early New World ROM PowerPC-based Macs shipped with Mac OS 9.2 as well as Mac OS X. Mac OS 9.2 had to be installed by the user—it was not installed by default on hardware revisions released after Mac OS X 10.4. Most well-written "classic" Mac OS applications function properly under this environment, but compatibility is assured only if the software was written to be unaware of the actual hardware and to interact solely with the operating system. The Classic Environment is not available on Intel-based Mac systems or the latest Apple silicon Macs due to the incompatibility of Mac OS 9 with both the x86 and ARM hardware.
Emulation
68k emulators
Third-party Macintosh emulators, such as vMac, Basilisk II, and Executor, eventually made it possible to run the Classic Mac OS on Intel-based PCs. These emulators were restricted to emulating the 68k series of processors, and as such most could not run versions of the Mac OS that succeeded 8.1, which required PowerPC processors. Most also required a Mac ROM image or a hardware interface supporting a real Mac ROM chip; those requiring an image are of dubious legal standing as the ROM image may infringe on Apple's intellectual property.
A notable exception was the Executor commercial software product from Abacus Research & Development, the only product that used 100% reverse-engineered code without the use of Apple technology. It ran extremely quickly but never achieved more than a minor subset of functionality; few programs were completely compatible, and many were extremely crash-prone if they ran at all. Executor filled a niche market for porting 68k Mac applications to x86 platforms; development ceased in 2002, and the source code was released by the author in late 2008. Emulators using Mac ROM images offered near-complete Mac OS compatibility, and later versions offered excellent performance as modern x86 processor performance increased exponentially.
Apple included its own Mac 68k emulator that ran seamlessly on all PowerPC-based versions of the Classic Mac OS. Apple also sold a Mac 68k emulator for SPARC-based (Solaris) and PA-RISC based (HP-UX) systems called Macintosh Application Environment (MAE), which could run variants of System 7.x inside an X11 window.
PowerPC emulators
In comparison with 68k-emulator development, PowerPC emulation is more complex and requires more CPU power. As of 2021, the most capable PowerPC emulator is QEMU, which is capable of running Classic Mac OS and OS X at full speed with networking and sound in most cases. QEMU has official support for Classic Mac OS versions 9.0 through 9.2 and Mac OS X 10.0 up to and including 10.5. QEMU has several advantages over other PowerPC emulators, namely supporting a wide range of platforms from Linux to Mac and Windows on current CPU architectures.
Another PowerPC emulator is SheepShaver, which has been around since 1998 for BeOS on the PowerPC platform; in 2002 it was open-sourced, and efforts began to port it to other platforms. It was not originally designed for use on x86 platforms and required an actual PowerPC processor in the machine it was running on, operating similarly to a hypervisor. Although it provides PowerPC processor support, it can run only up to Mac OS 9.0.4 because it does not emulate a memory management unit.
Other examples include ShapeShifter (by the same developer that created SheepShaver), Fusion, and PearPC, as well as a product that ran Classic Mac OS with a PowerPC "coprocessor" accelerator card. Using this method has been said to equal or better the speed of a Macintosh with the same processor, especially with respect to the 68k series, because real Macs run in MMU trap mode, hampering performance.
Apple's initial version of Rosetta is a PowerPC emulator allowing Intel-based Macs to run PowerPC Mac OS X applications, but is unable to run non-Carbon Classic Mac OS (9.2.2 or earlier) applications. Rosetta was available for all Intel releases of OS X until version 10.7 Lion.
Timeline
Denisovan
The Denisovans or Denisova hominins are an extinct species or subspecies of archaic human that ranged across Asia during the Lower and Middle Paleolithic, and lived, based on current evidence, from 285 to 25 thousand years ago. Denisovans are known from few physical remains; consequently, most of what is known about them comes from DNA evidence. No formal species name has been established pending more complete fossil material.
The first identification of a Denisovan individual occurred in 2010, based on mitochondrial DNA (mtDNA) extracted from a juvenile female finger bone excavated from the Siberian Denisova Cave in the Altai Mountains in 2008. Nuclear DNA indicates close affinities with Neanderthals. The cave was also periodically inhabited by Neanderthals, but it is unclear whether Neanderthals and Denisovans ever cohabited in the cave. Additional specimens from Denisova Cave were subsequently identified, as was a single specimen from the Baishiya Karst Cave on the Tibetan Plateau, and Cobra Cave in the Annamite Mountains of Laos. DNA evidence suggests they had dark skin, eyes, and hair, and had a Neanderthal-like build and facial features. However, they had larger molars which are reminiscent of Middle to Late Pleistocene archaic humans and australopithecines.
Denisovans apparently interbred with modern humans, with Denisovan-derived DNA occurring at a high percentage (roughly 5%) in the genomes of Melanesians, Aboriginal Australians, and Filipino Negritos. This distribution suggests that there were Denisovan populations across Asia. There is also evidence of interbreeding with the Altai Neanderthal population, with about 17% of the Denisovan genome from Denisova Cave deriving from them. A first-generation hybrid nicknamed "Denny" was discovered with a Denisovan father and a Neanderthal mother. Additionally, 4% of the Denisovan genome comes from an unknown archaic human species, which diverged from modern humans over one million years ago.
Taxonomy
Denisovans may represent a new species of Homo or an archaic subspecies of Homo sapiens (modern humans), but there are too few fossils to erect a proper taxon. Species names proposed for the Denisovans include H. denisova and H. altaiensis. Chinese researchers suggest the Denisovans were members of Homo longi, an idea that has been supported by the palaeontologist Chris Stringer. In 2024, paleoanthropologists Christopher Bae and Xiujie Wu designated the Xujiayao hominin fossils as the holotype of the species Homo juluensis, and suggested sinking Denisovans into this species.
Discovery
Denisova Cave is located in Altai Krai, Russia, in south-central Siberia, on the western edges of the Altai Mountains. It is named after Denis (Dyonisiy), a Russian Old Believer hermit who lived there in the 18th century. The cave was first inspected for fossils in the 1970s by Soviet paleontologist Nikolai Ovodov, who was looking for remains of canids.
In 2008, Michael Shunkov from the Russian Academy of Sciences and other Russian archaeologists from the Institute of Archaeology and Ethnography of the Siberian Branch of the Russian Academy of Sciences in Novosibirsk Akademgorodok investigated the cave and found the finger bone of a juvenile female hominin, originally dated to 50,000–30,000 years ago. The estimate has since been revised to 76,200–51,600 years ago. The specimen was originally named X-woman because matrilineal mitochondrial DNA (mtDNA) extracted from the bone demonstrated it to belong to a novel ancient hominin, genetically distinct both from contemporary modern humans and from Neanderthals.
In 2019, Greek archaeologist Katerina Douka and colleagues radiocarbon dated specimens from Denisova Cave, and estimated that Denisova 2 (the oldest specimen) lived 195,000–122,700 years ago. Older Denisovan DNA collected from sediments in the East Chamber dates to 217,000 years ago. Based on artifacts also discovered in the cave, hominin occupation (most likely by Denisovans) began 287±41 or 203±14 ka. Neanderthals were also present 193±12 ka and 97±11 ka, possibly concurrently with Denisovans.
Specimens
The fossils of multiple distinct Denisovan individuals from Denisova Cave have been identified through their ancient DNA (aDNA): Denisova 2, 3, 4, 8, 11, and 25. An mtDNA-based phylogenetic analysis of these individuals suggested that Denisova 2 is the oldest, followed by Denisova 8, while Denisova 3 and Denisova 4 were roughly contemporaneous. In 2024, scientists announced the sequence of Denisova 25, which was in a layer dated to 200 ka. During DNA sequencing, a low proportion of the Denisova 2, Denisova 4 and Denisova 8 genomes were found to have survived, but a high proportion of the Denisova 3 and Denisova 25 genomes were intact. The Denisova 3 sample was cut into two, and the initial DNA sequencing of one fragment was later independently confirmed by sequencing the mtDNA from the second.
Denisova Cave contained the only known examples of Denisovans until 2019, when a research group led by Fahu Chen, Dongju Zhang, and Jean-Jacques Hublin described a partial mandible discovered in 1980 by a Buddhist monk in the Baishiya Karst Cave on the Tibetan Plateau in China. Known as the Xiahe mandible, the fossil became part of the collection of Lanzhou University, where it remained unstudied until 2010. Ancient protein analysis determined that it contains collagen whose sequence shows close affinity to that of the Denisovans from Denisova Cave, while uranium decay dating of the carbonate crust enshrouding the specimen indicated it was more than 160,000 years old. The identity of this population was later confirmed through study of environmental DNA, which found Denisovan mtDNA in sediment layers ranging in date from 100,000 to 60,000 years before present, and perhaps more recent. A 2024 reanalysis identified a partial Denisovan rib fragment dating to between 48,000 BP and 32,000 BP.
In 2018, a team of Laotian, French, and American anthropologists, who had been excavating caves in the Laotian jungle of the Annamite Mountains since 2008, was directed by local children to the site Tam Ngu Hao 2 ("Cobra Cave"), where they recovered a human tooth. The tooth (catalogue number TNH2-1) developmentally matches that of a 3.5- to 8.5-year-old, and a lack of amelogenin (a protein on the Y chromosome) suggests it belonged to a girl, barring extreme degradation of the protein over a long period of time. Dental proteome analysis was inconclusive for this specimen, but the team found it anatomically comparable with the Xiahe mandible, and so tentatively categorized it as a Denisovan, although they could not rule out its being Neanderthal. The tooth probably dates to 164,000 to 131,000 years ago.
Some older findings may or may not belong to the Denisovan line, but Asia is not well mapped in regards to human evolution. Such findings include the Dali skull, the Xujiayao hominin, Maba Man, the Jinniushan hominin, and the Narmada Human. The Xiahe mandible shows morphological similarities to some later East Asian fossils such as Penghu 1, but also to Chinese H. erectus. In 2021, Chinese palaeoanthropologist Qiang Ji suggested his newly erected species, H. longi, may represent the Denisovans based on the similarity between the type specimen's molar and that of the Xiahe mandible. In 2024, Bae and Wu suggested classifying the Xujiayao and Denisovan material as H. juluensis, and the Dali Man and similar specimens as H. longi.
Evolution
Sequenced mitochondrial DNA (mtDNA), preserved by the cool climate of the cave (average temperature is at freezing point), was extracted from Denisova 3 by a team of scientists led by Johannes Krause and Svante Pääbo from the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. Denisova 3's mtDNA differs from that of modern humans by 385 bases (nucleotides) out of approximately 16,500, whereas the difference between modern humans and Neanderthals is around 202 bases. In comparison, the difference between chimpanzees and modern humans is approximately 1,462 mtDNA base pairs. This suggested that Denisovan mtDNA diverged from that of modern humans and Neanderthals about 1,313,500–779,300 years ago, whereas modern human and Neanderthal mtDNA diverged 618,000–321,200 years ago. Krause and colleagues then concluded that Denisovans were the descendants of an earlier migration of H. erectus out of Africa, completely distinct from modern humans and Neanderthals.
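As a rough illustration of how such dates follow from base counts, the back-of-the-envelope Python calculation below assumes that both lineages accumulate substitutions independently after splitting; the substitution rate is an assumed round value chosen for illustration, not the rate used in the original study.

d = 385        # base differences between Denisova 3 and modern human mtDNA
L = 16_500     # approximate length of the mitochondrial genome in base pairs
mu = 1.1e-8    # assumed substitutions per site per year (illustrative value)
# After a split, divergence accumulates on both branches: d / L = 2 * mu * t
t = d / (2 * mu * L)
print(f"approximate divergence: {t:,.0f} years ago")  # about 1.06 million years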
However, according to the nuclear DNA (nDNA) of Denisova 3—which had an unusual degree of DNA preservation with only low-level contamination—Denisovans and Neanderthals were more closely related to each other than they were to modern humans. Using the percent distance from the human–chimpanzee last common ancestor, Denisovans/Neanderthals split from modern humans about 804,000 years ago, and from each other 640,000 years ago. Using a mutation rate of 1×10⁻⁹ or 0.5×10⁻⁹ per base pair (bp) per year, the Neanderthal/Denisovan split occurred around either 236–190,000 or 473–381,000 years ago respectively. Using 1.1×10⁻⁸ per generation with a new generation every 29 years, the time is 744,000 years ago. Using 5×10⁻¹⁰ per nucleotide site per year, it is 616,000 years ago. Using the latter dates, the split had likely already occurred by the time hominins spread out across Europe. H. heidelbergensis is typically considered to have been the direct ancestor of Denisovans and Neanderthals, and sometimes also of modern humans. Due to the strong divergence in dental anatomy, they may have split before characteristic Neanderthal dentition evolved about 300,000 years ago.
The more divergent Denisovan mtDNA has been interpreted as evidence of admixture between Denisovans and an unknown archaic human population, possibly a relict H. erectus or H. erectus-like population about 53,000 years ago. Alternatively, divergent mtDNA could have also resulted from the persistence of an ancient mtDNA lineage which only went extinct in modern humans and Neanderthals through genetic drift. Modern humans contributed mtDNA to the Neanderthal lineage, but not to the Denisovan mitochondrial genomes yet sequenced. The mtDNA sequence from the femur of a 400,000-year-old H. heidelbergensis from the Sima de los Huesos Cave in Spain was found to be related to those of Neanderthals and Denisovans, but closer to Denisovans, and the authors posited that this mtDNA represents an archaic sequence which was subsequently lost in Neanderthals due to replacement by a modern-human-related sequence.
Demographics
Denisovans are known to have lived in Siberia, Tibet, and Laos. The Xiahe mandible is the earliest recorded human presence on the Tibetan Plateau. Though their remains have been identified in only these three locations, traces of Denisovan DNA in modern humans suggest they ranged across East Asia, and potentially western Eurasia. In 2019, geneticist Guy Jacobs and colleagues identified three distinct populations of Denisovans responsible for the introgression into modern populations now native to, respectively: Siberia and East Asia; New Guinea and nearby islands; and Oceania and, to a lesser extent, across Asia. Using coalescent modeling, the Denisova Cave Denisovans split from the second population about 283,000 years ago, and from the third population about 363,000 years ago. This indicates that there was considerable reproductive isolation between Denisovan populations. In a 2024 study, Danat Yermakovich of the University of Tartu and colleagues found that people living at different elevations in Papua New Guinea differ in their Denisovan DNA: those living in the highlands carry variants associated with early brain development, while those living in the lowlands carry variants affecting the immune system.
Based on the high percentages of Denisovan DNA in modern Papuans and Australians, Denisovans may have crossed the Wallace Line into these regions (with little back-migration west), the second known human species to do so, along with earlier Homo floresiensis. By this logic, they may have also entered the Philippines, living alongside H. luzonensis which, if this is the case, may represent the same or a closely related species. These Denisovans may have needed to cross large bodies of water. Alternately, high Denisovan DNA admixture in modern Papuan populations may simply represent higher mixing among the original ancestors of Papuans prior to crossing the Wallace line. Icelanders also have an anomalously high Denisovan heritage, which could have stemmed from a Denisovan population far west of the Altai Mountains. Genetic data suggests Neanderthals were frequently making long crossings between Europe and the Altai Mountains especially towards the date of their extinction.
Using exponential distribution analysis on haplotype lengths, Jacobs calculated that introgression into modern humans occurred about 29,900 years ago with the Denisovan population ancestral to New Guineans, and 45,700 years ago with the population ancestral to both New Guineans and Oceanians. Such a late date for the New Guinean group could indicate Denisovan survival as late as 14,500 years ago, which would make them the latest surviving archaic human species. A third wave appears to have introgressed into East Asia, but there is not enough DNA evidence to pinpoint a solid timeframe.
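The logic of dating admixture from haplotype lengths can be sketched simply: recombination breaks introgressed segments apart each generation, so older admixture leaves shorter tracts. The Python sketch below uses a simplified single-pulse model in which the mean tract length is about 1 / (r × g) base pairs after g generations; the recombination rate, generation time, and tract length are assumed illustrative values, not the study's actual figures.

r = 1e-8                # recombination rate per base pair per generation (~1 cM/Mb)
years_per_gen = 29      # assumed generation time in years
mean_tract_bp = 50_000  # hypothetical mean introgressed tract length
# Mean tract length shrinks as 1 / (r * g), so invert to recover g generations:
g = 1 / (r * mean_tract_bp)
print(f"admixture roughly {g * years_per_gen:,.0f} years ago")  # ~58,000 years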
The mtDNA from Denisova 4 bore a high similarity to that of Denisova 3, indicating that they belonged to the same population. The genetic diversity among the Denisovans from Denisova Cave is on the lower range of what is seen in modern humans, and is comparable to that of Neanderthals. However, it is possible that the inhabitants of Denisova Cave were more or less reproductively isolated from other Denisovans, and that, across their entire range, Denisovan genetic diversity may have been much higher.
Over its period of habitation, the environment around Denisova Cave swung continually between a fairly warm, moderately humid pine and birch forest and a tundra or forest-tundra landscape. Conversely, Baishiya Karst Cave is situated at a high elevation, in an area characterized by low temperature, low oxygen, and poor resource availability. Colonization of high-altitude regions was previously assumed, due to such harsh conditions, to have been accomplished only by modern humans. Denisovans seem to have also inhabited the jungles of Southeast Asia; the Tam Ngu Hao 2 site might have been a closed forest environment.
Anatomy
Little is known of the precise anatomical features of the Denisovans since the only physical remains discovered so far are a finger bone, four teeth, long bone fragments, a partial jawbone, a parietal bone skull fragment, and a rib bone. The finger bone is within the modern human range of variation for women, which is in contrast to the large, robust molars which are more similar to those of Middle to Late Pleistocene archaic humans. The third molar is outside the range of any Homo species except H. habilis and H. rudolfensis, and is more like those of australopithecines. The second molar is larger than those of modern humans and Neanderthals, and is more similar to those of H. erectus and H. habilis. Like Neanderthals, the mandible had a gap behind the molars, and the front teeth were flattened; but Denisovans lacked a high mandibular body, and the mandibular symphysis at the midline of the jaw was more receding. The parietal is reminiscent of that of H. erectus.
A facial reconstruction has been generated by comparing methylation at individual genetic loci associated with facial structure. This analysis suggested that Denisovans, much like Neanderthals, had a long, broad, and projecting face; large nose; sloping forehead; protruding jaw; elongated and flattened skull; and wide chest and hips. The Denisovan tooth row was longer than that of Neanderthals and anatomically modern humans.
Middle-to-Late Pleistocene East Asian archaic human skullcaps typically share features with Neanderthals. The skullcaps from Xuchang feature prominent brow ridges like Neanderthals, though the nuchal and angular tori near the base of the skull are either reduced or absent, and the back of the skull is rounded off like in early modern humans. Xuchang 1 had a large brain volume of approximately 1800 cc, on the high end for Neanderthals and early modern humans, and well beyond the present-day human average.
The Denisovan genome from Denisova Cave has variants of genes which, in modern humans, are associated with dark skin, brown hair, and brown eyes. The Denisovan genome also contains a variant region around the EPAS1 gene that in Tibetans assists with adaptation to low oxygen levels at high elevation, and in a region containing the WARS2 and TBX15 loci which affect body-fat distribution in the Inuit. In Papuans, introgressed Neanderthal alleles are highest in frequency in genes expressed in the brain, whereas Denisovan alleles have highest frequency in genes expressed in bones and other tissue.
Culture
Denisova Cave
Early Middle Paleolithic stone tools from Denisova Cave included cores, scrapers, denticulate tools, and notched tools, deposited about 287±41 thousand years ago in the Main Chamber of the cave; and about 269±97 thousand years ago in the South Chamber; up to 170±19 thousand and 187±14 thousand years ago in the Main and East Chambers, respectively.
Middle Paleolithic assemblages were dominated by flat, discoidal, and Levallois cores, and there were some isolated sub-prismatic cores. There were predominantly side scrapers (a scraper with only the sides used to scrape), but also notched-denticulate tools, end-scrapers (a scraper with only the ends used to scrape), burins, chisel-like tools, and truncated flakes. These dated to 156±15 thousand years ago in the Main Chamber, 58±6 thousand years ago in the East Chamber, and 136±26–47±8 thousand years ago in the South Chamber.
Early Upper Paleolithic artefacts date to 44±5 thousand years ago in the Main Chamber, 63±6 thousand years ago in the East Chamber, and 47±8 thousand years ago in the South Chamber, though some layers of the East Chamber seem to have been disturbed. There was blade production and Levallois production, but scrapers were again predominant. A well-developed, Upper Paleolithic stone bladelet technology distinct from the previous scrapers began accumulating in the Main Chamber around 36±4 thousand years ago.
In the Upper Paleolithic layers, there were also several bone tools and ornaments: a marble ring, an ivory ring, an ivory pendant, a red deer tooth pendant, an elk tooth pendant, a chloritolite bracelet, and a bone needle. However, Denisovans are only confirmed to have inhabited the cave until 55 ka; the dating of Upper Paleolithic artefacts overlaps with modern human migration into Siberia (though there are no occurrences in the Altai region); and the DNA of the only specimen in the cave dating to the time interval (Denisova 14) is too degraded to confirm species identity, so the attribution of these artefacts is unclear.
Tibet
The inhabitants of Baishiya Karst Cave seem to have been extensively processing goat antelopes, cows, deer, horses, and woolly rhinoceros. They were also butchering large carnivores (cave hyena, dog, and big cat), marmots, hare, and eagles. They may have also used these animals' long bones to make bone tools, and additionally there are stone artefacts in each layer excavated.
In 1998, five child hand- and footprint impressions were discovered in a travertine unit near the Quesang hot springs in Tibet; in 2021, they were dated to 226 to 169 thousand years ago using uranium decay dating. This is the oldest evidence of human occupation of the Tibetan Plateau, and since the Xiahe mandible is the oldest human fossil from the region (though younger than the Quesang impressions), these may have been made by Denisovan children. The impressions were made within a small panel of space, and there is little overlap between the prints, so the makers seem to have taken care to place new imprints in unused space. If considered art, they are the oldest known examples of rock art. Similar hand stencils and impressions do not appear again in the archeological record until roughly 40,000 years ago.
The footprints comprise four right impressions and one left impression superimposed on one of the rights, and were probably left by two individuals. The individual who superimposed their left print onto a right one may have scrunched up their toes and wiggled them in the mud, or dug their fingers into the toe prints. The footprints' average length roughly equates to that of a 7- or 8-year-old child by modern human growth rates. There are two sets of handprints (from a left and a right hand), which may have been created by an older child, unless one of the former two individuals had long fingers. The handprints' average size roughly equates with that of a 12-year-old modern human child, though the middle finger length agrees with that of a 17-year-old. One of the handprints shows an impression of the forearm, and the individual was wiggling their thumb through the mud.
Interbreeding
Analyses of modern human genomes indicate past interbreeding with at least two groups of archaic humans, Neanderthals and Denisovans, and that such interbreeding events occurred on multiple occasions. Comparisons of the Denisovan, Neanderthal, and modern human genomes have revealed evidence of a complex web of interbreeding among these lineages.
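One standard way such interbreeding is detected is the D statistic (the "ABBA–BABA" test), which asks whether one modern population shares derived alleles with an archaic genome significantly more often than a second modern population does. The sketch below is a toy illustration with invented genotype data (the population allele frequencies and the ~4% admixture rate are made-up example values; real analyses use genome-wide variant data):

```python
# Toy D-statistic (ABBA-BABA) sketch for detecting archaic introgression.
# Counts sites where population P2 shares a derived allele with the archaic
# genome (ABBA) versus P1 doing so (BABA). All data below are invented.
import random

random.seed(0)

def d_statistic(p1, p2, archaic, outgroup):
    """p1, p2, archaic, outgroup: per-site alleles, 0 = ancestral, 1 = derived."""
    abba = baba = 0
    for a1, a2, ar, og in zip(p1, p2, archaic, outgroup):
        if og != 0:            # polarize alleles against the outgroup
            continue
        if a1 == 0 and a2 == 1 and ar == 1:
            abba += 1          # P2 shares the derived allele with the archaic
        elif a1 == 1 and a2 == 0 and ar == 1:
            baba += 1          # P1 shares the derived allele with the archaic
    return (abba - baba) / (abba + baba) if (abba + baba) else 0.0

n = 10_000
outgroup = [0] * n
archaic = [random.random() < 0.5 for _ in range(n)]
p1 = [random.random() < 0.3 for _ in range(n)]                   # no admixture
p2 = [ar if random.random() < 0.04 else (random.random() < 0.3)  # ~4% archaic copying
      for ar in archaic]
print("D =", round(d_statistic(p1, p2, archaic, outgroup), 3))   # > 0 suggests gene flow into P2
```

A clearly positive D for the admixed population, as printed here, is the same qualitative signal used to infer Denisovan gene flow into modern groups.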
Archaic humans
As much as 17% of the Denisovan genome from Denisova Cave represents DNA from the local Neanderthal population. Denisova 11 was an F1 (first generation) Denisovan/Neanderthal hybrid; the fact that such an individual was found may indicate interbreeding was a common occurrence here. The Denisovan genome shares more derived alleles with the Altai Neanderthal genome from Siberia than with the Vindija Cave Neanderthal genome from Croatia or the Mezmaiskaya Cave Neanderthal genome from the Caucasus, suggesting that the gene flow came from a population that was more closely related to the local Altai Neanderthals. However, Denny's Denisovan father had the typical Altai Neanderthal introgression, while her Neanderthal mother represented a population more closely related to Vindija Neanderthals. Denisova 25, dated to 200 ka, is estimated to have inherited 5% of his genome from a previously unknown population of Neanderthals, and came from a different population of Denisovans than the younger samples.
About 4% of the Denisovan genome derives from an unidentified archaic hominin, perhaps the source of the anomalous ancient mtDNA, indicating this species diverged from Neanderthals and humans over a million years ago. The only identified Homo species of Late Pleistocene Asia are H. erectus and H. heidelbergensis, though in 2021, specimens allocated to the latter species were reclassified as H. longi and H. daliensis.
Before splitting from Neanderthals, their ancestors ("Neandersovans") migrating into Europe apparently interbred with an unidentified "superarchaic" human species who were already present there; these superarchaics were the descendants of a very early migration out of Africa around 1.9 mya.
Modern humans
A 2011 study found that Denisovan DNA is present at a comparatively high level in Papuans, Aboriginal Australians, Near Oceanians, Polynesians, Fijians, Eastern Indonesians, and Aeta (from the Philippines); but not in East Asians, western Indonesians, Jahai people (from Malaysia), or Onge (from the Andaman Islands). This may suggest that Denisovan introgression occurred within the Pacific region rather than on the Asian mainland, and that ancestors of the latter groups were not present in Southeast Asia at the time. In the Melanesian genome, about 4–6% or 1.9–3.4% derives from Denisovan introgression. Prior to 2021, New Guineans and Australian Aborigines were reported to have the most introgressed DNA, but Australians have less than New Guineans. A 2021 study discovered 30 to 40% more Denisovan ancestry in Aeta people in the Philippines than in Papuans, estimated as about 5% of the genome. The Aeta Magbukon in Luzon have the highest known proportion of Denisovan ancestry of any population in the world. In Papuans, less Denisovan ancestry is seen in the X chromosome than autosomes, and some autosomes (such as chromosome 11) also have less Denisovan ancestry, which could indicate hybrid incompatibility. The former observation could also be explained by less female Denisovan introgression into modern humans, or more female modern human immigrants who diluted Denisovan X chromosome ancestry.
In contrast, only about 0.2% of the genome of mainland Asians and Native Americans derives from Denisovan ancestry. South Asians were found to have levels of Denisovan admixture similar to those seen in East Asians. The discovery that the 40,000-year-old Chinese modern human Tianyuan Man had Denisovan DNA levels not significantly different from those of modern-day East Asians discounts the hypothesis that immigrating modern humans simply diluted Denisovan ancestry while Melanesians lived in reproductive isolation. A 2018 study of Han Chinese, Japanese, and Dai genomes showed that modern East Asians carry DNA from two different Denisovan populations: one similar to the Denisovan DNA found in Papuan genomes, and a second closer to the Denisovan genome from Denisova Cave. This could indicate two separate introgression events involving two different Denisovan populations. In South Asian genomes, Denisovan DNA came only from the same single introgression seen in Papuans. A 2019 study found a third wave of Denisovans which introgressed into East Asians. Introgression also may not have occurred immediately when modern humans immigrated into the region.
The timing of introgression into Oceanian populations likely occurred after Eurasians and Oceanians split roughly 58,000 years ago, and before Papuans and Aboriginal Australians split from each other roughly 37,000 years ago. Given the present-day distribution of Denisovan DNA, this may have taken place in Wallacea, though the discovery of a 7,200-year-old Toalean girl (closely related to Papuans and Aboriginal Australians) from Sulawesi carrying Denisovan DNA makes Sundaland another potential candidate. Other early Sunda hunter-gatherers so far sequenced carry very little Denisovan DNA, which means either that the introgression event did not take place in Sundaland, or that Denisovan ancestry was diluted by gene flow from the mainland Asian Hòabìnhian culture and subsequent Neolithic cultures.
In other regions of the world, archaic introgression into humans stems from a group of Neanderthals related to those which inhabited Vindija Cave, Croatia, as opposed to archaics related to Siberian Neanderthals and Denisovans. However, about 3.3% of the archaic DNA in the modern Icelandic genome descends from the Denisovans, and such a high percentage could indicate a western Eurasian population of Denisovans which introgressed into either Vindija-related Neanderthals or immigrating modern humans.
Denisovan genes may have helped early modern humans migrating out of Africa to acclimatize. Although not present in the sequenced Denisovan genome, the distribution pattern and divergence of HLA-B*73 from other HLA alleles (involved in the immune system's natural killer cell receptors) has led to the suggestion that it introgressed from Denisovans into modern humans in West Asia. In a 2011 study, half of the HLA alleles of modern Eurasians were shown to represent archaic HLA haplotypes, and were inferred to be of Denisovan or Neanderthal origin. A haplotype of EPAS1 in modern Tibetans, which allows them to live at high elevations in a low-oxygen environment, likely came from Denisovans. Genes related to phospholipid transporters (which are involved in fat metabolism) and to trace amine-associated receptors (involved in smelling) are more active in people with more Denisovan ancestry. Denisovan genes may have conferred a degree of immunity against the G614 mutation of SARS-CoV-2. Denisovan introgressions may have influenced the immune system of present-day Papuans and potentially favoured "variants to immune-related phenotypes" and "adaptation to the local environment".
In December 2023, scientists reported that genes inherited by modern humans from Neanderthals and Denisovans may biologically influence the daily routine of modern humans.
| Biology and health sciences | Evolution | null |
43563154 | https://en.wikipedia.org/wiki/Shared-use%20path | Shared-use path | A shared-use path, mixed-use path or multi-use pathway is a path which is "designed to accommodate the movement of pedestrians and cyclists". Examples of shared-use paths include sidewalks designated as shared-use, bridleways and rail trails. A shared-use path typically has a surface that is asphalt, concrete or firmly packed crushed aggregate. Shared-use paths differ from cycle tracks and cycle paths in that shared-use paths are designed to include pedestrians even if the primary anticipated users are cyclists.
The path may also permit other users, such as inline skaters. By contrast, motorcycles and mopeds are normally prohibited. Shared-use paths sometimes provide different lanes for users who travel at different speeds, to prevent conflicts between user groups on high-use trails. Shared-use paths are criticised for creating conflict between different users. The UK's Department for Transport deprecates this kind of route in denser urban environments.
Types
Bridleways
In the UK, cyclists are legally permitted to cycle on bridleways (paths open to horse riders), but not on public footpaths. Therefore, bridleways are, in effect, a form of shared-use path.
Segregated paths
On segregated or divided paths, the path is split into a section for pedestrians and a section for cyclists. This may be achieved with a painted line or different surface. It may also be delineated with tactile paving for blind and visually impaired pedestrians.
Research by the UK Department for Transport found that cyclists and pedestrians prefer wider non-segregated paths to narrower segregated paths (e.g. a 3 m wide shared path, compared with a 3 m path split into two 1.5 m sections).
Benefits
The principal benefit of a shared-use path is saving space. This may be important in environmentally-sensitive areas or on narrow streets, where a full cycle track may not be feasible.
Issues
Shared-use paths are criticised for creating conflict between pedestrians and cyclists, prompting complaints from pedestrians about cyclists' speed. The paths therefore do not properly take into account the different needs of different road users. For example, a study by the Institute for Chartered Engineers found that users of shared-use paths were confused about the nature of the path and about who has priority on it.
Pedestrians are sometimes unsure how to behave on shared-use paths. The question arises whether the path is to be treated as a road (therefore pedestrians should face oncoming traffic), or a path (and therefore pedestrians may walk wherever they choose).
Shared-use paths alongside the highway often look like sidewalks to motorists. In jurisdictions where pedestrians do not have priority at side roads, the priority situation where such paths cross side roads can therefore be confusing, and cyclists are often required to give way to turning motorists.
By country
United Kingdom
Before the January 2022 revision, the Highway Code gave no advice to pedestrians on how to share space with cyclists; there was also little guidance given to cyclists. (The 2023 edition covers both aspects.) The UK Department for Transport advises local authorities that cyclists and pedestrians should not be expected to share space on or alongside city streets. Sustrans gives advice for cyclists, walkers and runners using shared-use paths on the National Cycle Network.
The Milton Keynes redway system is an example of a city-wide network of shared-use paths. The network consists of over of shared-use paths that avoid the city's busy and fast grid roads (which run between neighbourhoods rather than through them).
United States
In the US, the 1999 AASHTO Guide for the Development of Bicycle Facilities defines a shared-use path as being physically separated from motor vehicular traffic with an open space or barrier.
| Technology | Road infrastructure | null |
48413465 | https://en.wikipedia.org/wiki/Proteoarchaeota | Proteoarchaeota | Proteoarchaeota is a proposed archaeal kingdom thought to be closely related and possibly ancestral to the Eukaryotes.
Classification
The phylogenetic relationship of this group is still under discussion. The relationship of the members is approximately as follows:
| Biology and health sciences | Archaea | Plants |
53606876 | https://en.wikipedia.org/wiki/Jeffery%E2%80%93Hamel%20flow | Jeffery–Hamel flow | In fluid dynamics, Jeffery–Hamel flow is a flow created by a converging or diverging channel with a source or sink of fluid volume at the point of intersection of the two plane walls. It is named after George Barker Jeffery (1915) and Georg Hamel (1917), but it has subsequently been studied by many major scientists, including von Kármán, Levi-Civita, Walter Tollmien, F. Noether, W. R. Dean, Rosenhead, Landau, and G. K. Batchelor. A complete set of solutions was described by Edward Fraenkel in 1962.
Flow description
Consider two stationary plane walls with a constant volume flow rate $Q$ injected or sucked at the point of intersection of the plane walls, and let the angle subtended by the two walls be $2\alpha$. Take the cylindrical coordinate system $(r,\theta,z)$ with $r=0$ representing the point of intersection and $\theta=0$ the centerline, and let $(u,v,w)$ be the corresponding velocity components. The resulting flow is two-dimensional if the plates are infinitely long in the axial $z$ direction, or if the plates are longer but finite and edge effects are neglected; for the same reason the flow can be assumed to be entirely radial, i.e., $u=u(r,\theta)$, $v=0$, $w=0$.
Then the continuity equation and the incompressible Navier–Stokes equations reduce to

$$\frac{\partial (ru)}{\partial r}=0,$$

$$u\frac{\partial u}{\partial r} = -\frac{1}{\rho}\frac{\partial p}{\partial r} + \nu\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{1}{r^{2}}\frac{\partial^{2} u}{\partial\theta^{2}} - \frac{u}{r^{2}}\right],$$

$$0 = -\frac{1}{\rho r}\frac{\partial p}{\partial\theta} + \frac{2\nu}{r^{2}}\frac{\partial u}{\partial\theta}.$$
The boundary conditions are the no-slip condition at both walls, and a third condition derived from the fact that the volume flux injected or sucked at the point of intersection is constant across a surface at any radius:

$$u(\pm\alpha)=0, \qquad Q = \int_{-\alpha}^{\alpha} u\,r\,d\theta.$$
Formulation
The first equation shows that $ru$ is a function of $\theta$ only; the function is defined as

$$F(\theta) = \frac{ru}{\nu}.$$
Different authors define the function differently; for example, Landau defines it with a factor $6\nu$. But following Whitham and Rosenhead, the $\theta$ momentum equation becomes

$$\frac{1}{\rho}\frac{\partial p}{\partial \theta} = \frac{2\nu^{2}}{r^{2}}\frac{dF}{d\theta}.$$
Now letting

$$\frac{p-p_{\infty}}{\rho} = \frac{\nu^{2}}{r^{2}}P(\theta),$$

the $r$ and $\theta$ momentum equations reduce to

$$P = -\frac{F''+F^{2}}{2}, \qquad P' = 2F',$$

and substituting the first into the second (to eliminate the pressure) results in

$$F''' + 2FF' + 4F' = 0.$$
Integrating once, then multiplying by $2F'$ and integrating again,

$$F'' + F^{2} + 4F = 2C_{1},$$

$$F'^{2} + \tfrac{2}{3}F^{3} + 4F^{2} = 4C_{1}F + 2C_{2},$$

where $C_{1},C_{2}$ are constants to be determined from the boundary conditions. The last equation can be re-written conveniently with three other constants $a,b,c$ as roots of a cubic polynomial,

$$F'^{2} = \frac{2}{3}(a-F)(F-b)(F-c),$$

with only two of the constants being arbitrary; the third constant is always obtained from the other two because the sum of the roots is $a+b+c=-6$.
The boundary conditions reduce to

$$F(\pm\alpha) = 0, \qquad Re = \int_{-\alpha}^{\alpha} F\,d\theta,$$

where $Re = Q/\nu$ is the corresponding Reynolds number. The solution can be expressed in terms of elliptic functions. For convergent flow ($Q<0$), the solution exists for all $\alpha$ and $Re$, but for divergent flow ($Q>0$), the solution exists only for a particular range of $Re$.
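The boundary-value problem above can also be solved numerically. The sketch below is a minimal illustration (not part of the original article); the half-angle and Reynolds number are arbitrary example values, and the flux constraint is handled by carrying the running integral of $F$ as an extra state variable.

```python
# Minimal sketch: solve F''' + 2 F F' + 4 F' = 0 with F(-alpha) = F(alpha) = 0
# and the flux constraint Re = integral of F over (-alpha, alpha).
# alpha and Re are illustrative values, not taken from the article.
import numpy as np
from scipy.integrate import solve_bvp

alpha = np.pi / 36   # channel half-angle (5 degrees), example value
Re = 50.0            # Reynolds number Q/nu, example value (divergent flow)

def rhs(theta, y):
    # y[0] = F, y[1] = F', y[2] = F'', y[3] = running flux
    return np.vstack([y[1], y[2], -2.0 * y[0] * y[1] - 4.0 * y[1], y[0]])

def bc(ya, yb):
    # no-slip at both walls; flux starts at 0 and ends at Re
    return np.array([ya[0], yb[0], ya[3], yb[3] - Re])

theta = np.linspace(-alpha, alpha, 101)
y0 = np.zeros((4, theta.size))
y0[0] = (Re / alpha) * (1.0 - (theta / alpha) ** 2)  # parabolic initial guess
sol = solve_bvp(rhs, bc, theta, y0)
print("converged:", sol.status == 0, "| centerline F(0) =", float(sol.sol(0.0)[0]))
```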
Dynamical interpretation
The equation takes the same form as that of an undamped nonlinear oscillator with a cubic potential: one can pretend that $\theta$ is time, $F$ is the displacement, and $F'$ is the velocity of a particle with unit mass, so that the first integral above represents the energy equation $\tfrac{1}{2}v^{2} + V = 0$ (where $v=F'$ and $x=F$) with zero total energy. It is then easy to see that the potential energy is

$$V(F) = -\frac{1}{3}(a-F)(F-b)(F-c),$$

where $V\le 0$ in motion. Since the particle starts at $F=0$ for $\theta=-\alpha$ and ends at $F=0$ for $\theta=\alpha$, there are two cases to be considered.
First case: $b$ and $c$ are complex conjugates and $a>0$. The particle starts at $F=0$ with finite positive velocity and attains $F=a$, where its velocity is zero and its acceleration $-dV/dF$ is negative, and it returns to $F=0$ at the final time. The particle motion represents pure outflow motion because $F>0$, and it is symmetric about $\theta=0$.
Second case: $c<b<0<a$, so all the constants are real. The motion from $F=0$ to $F=a$ and back to $F=0$ represents a pure symmetric outflow, as in the previous case. The motion from $F=0$ to $F=b$ and back to $F=0$, with $F\le 0$ for all time ($F(\pm\alpha)=0$), represents a pure symmetric inflow. But the particle may also oscillate between $b\le F\le a$, representing regions of both inflow and outflow, and the flow then no longer needs to be symmetric about $\theta=0$.
The rich structure of this dynamical interpretation can be found in Rosenhead (1940).
Pure outflow
For pure outflow, since $F=a$ at $\theta=0$, integration of the governing equation gives

$$\theta = \sqrt{\frac{3}{2}}\int_{F}^{a}\frac{dt}{\sqrt{(a-t)(t-b)(t-c)}}$$

and the boundary conditions become

$$\alpha = \sqrt{\frac{3}{2}}\int_{0}^{a}\frac{dt}{\sqrt{(a-t)(t-b)(t-c)}}, \qquad Re = \sqrt{\frac{3}{2}}\int_{0}^{a}\frac{2t\,dt}{\sqrt{(a-t)(t-b)(t-c)}}.$$
The equations can be simplified by standard transformations given, for example, in Jeffreys.
In the first case, where $b$ and $c$ are complex conjugates and $a>0$, the transformations lead to a closed-form expression for $F(\theta)$ in terms of the Jacobi elliptic functions $\operatorname{sn}$, $\operatorname{cn}$ and $\operatorname{dn}$. In the second case, where $c<b<0<a$, the transformations likewise lead to a closed-form expression in terms of Jacobi elliptic functions.
Limiting form
The limiting condition is obtained by noting that pure outflow is impossible when $F'(\pm\alpha)=0$, which implies $b=0$ (and hence $c=-(a+6)$) from the governing equation. Beyond this critical condition, no solution exists. The critical angle is given by

$$\alpha_{c} = \sqrt{\frac{3}{2}}\int_{0}^{a}\frac{dF}{\sqrt{F(a-F)(F+a+6)}} = \sqrt{\frac{3}{a+3}}\,K(k),$$

where

$$k^{2} = \frac{a}{2(a+3)},$$

and $K(k)$ is the complete elliptic integral of the first kind. For large values of $a$, $k^{2}\to\tfrac{1}{2}$ and the critical angle becomes $\alpha_{c}\approx\sqrt{3/a}\,K(1/\sqrt{2})$, which tends to zero.
The corresponding critical Reynolds number or volume flux is given by

$$Re_{c} = \sqrt{6}\int_{0}^{a}\frac{\sqrt{F}\,dF}{\sqrt{(a-F)(F+a+6)}} = \frac{2\sqrt{6}\,a}{\sqrt{2a+6}}\left[K(k) - \frac{K(k)-E(k)}{k^{2}}\right],$$

where $E(k)$ is the complete elliptic integral of the second kind. For large values of $a$, the critical Reynolds number or volume flux becomes $Re_{c}\approx 2\sqrt{3a}\,\left[2E(1/\sqrt{2})-K(1/\sqrt{2})\right]$.
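As a quick numerical illustration of the expressions above (as reconstructed here), the following sketch evaluates the critical angle and the critical Reynolds number over a range of the root parameter $a$. Note that SciPy's ellipk and ellipe take the parameter $m=k^{2}$ rather than the modulus $k$.

```python
# Evaluate the critical angle alpha_c and critical Reynolds number Re_c
# for pure outflow from the closed forms reconstructed above.
import numpy as np
from scipy.special import ellipk, ellipe  # both take the parameter m = k^2

for a in (1.0, 10.0, 100.0, 1000.0):
    m = a / (2.0 * (a + 3.0))                       # m = k^2
    alpha_c = np.sqrt(3.0 / (a + 3.0)) * ellipk(m)
    re_c = (2.0 * np.sqrt(6.0) * a / np.sqrt(2.0 * a + 6.0)
            * (ellipk(m) - (ellipk(m) - ellipe(m)) / m))
    print(f"a = {a:7.1f}   alpha_c = {alpha_c:.4f} rad   Re_c = {re_c:.2f}")
```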
Pure inflow
For pure inflow, the implicit solution is given by

$$\theta = \sqrt{\frac{3}{2}}\int_{b}^{F}\frac{dt}{\sqrt{(a-t)(t-b)(t-c)}},$$

and the boundary conditions become

$$\alpha = \sqrt{\frac{3}{2}}\int_{b}^{0}\frac{dt}{\sqrt{(a-t)(t-b)(t-c)}}, \qquad Re = \sqrt{\frac{3}{2}}\int_{b}^{0}\frac{2t\,dt}{\sqrt{(a-t)(t-b)(t-c)}}.$$

Pure inflow is possible only when all three constants are real, $c<b<0<a$; the solution can then be written in closed form in terms of Jacobi elliptic functions, with the half-angle $\alpha$ expressed through the complete elliptic integral of the first kind $K(k)$.
Limiting form
As the Reynolds number increases ($|Re|$ becomes larger), the flow tends to become uniform (thus approaching the potential flow solution), except in boundary layers near the walls. Since $Re$ is large and $\alpha$ is given, it is clear from the solution that $-b$ must be large, so $b\to-\infty$. In this limit the solution becomes

$$F(\theta) = b\left[3\tanh^{2}\left(\sqrt{-\frac{b}{2}}\,(\alpha-|\theta|) + \tanh^{-1}\sqrt{\frac{2}{3}}\right) - 2\right].$$

It is clear that $F\approx b$ everywhere except in boundary layers of thickness $O\!\left((-b)^{-1/2}\right)$ near each wall. The volume flux is $Q/\nu = Re \approx 2\alpha b$, so that $-b\sim |Re|/2\alpha$ and the boundary layers have the classical thickness $O\!\left(|Re|^{-1/2}\right)$.
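A small check of this asymptotic profile (as reconstructed above; the half-angle and the value of $b$ are arbitrary examples) confirms that it vanishes at the walls and is nearly uniform in the core:

```python
# Check the large-|b| asymptotic inflow profile: it should vanish at the
# walls and approach the uniform value F = b in the core of the channel.
import numpy as np

alpha = 0.1    # channel half-angle, example value
b = -2000.0    # large negative root, example value
theta = np.linspace(-alpha, alpha, 9)
xi = np.sqrt(-b / 2.0) * (alpha - np.abs(theta)) + np.arctanh(np.sqrt(2.0 / 3.0))
F = b * (3.0 * np.tanh(xi) ** 2 - 2.0)
print("F at walls:", F[0], F[-1])            # ~ 0
print("F at centerline:", F[len(F) // 2])    # ~ b
```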
| Physical sciences | Fluid mechanics | Physics |
33895468 | https://en.wikipedia.org/wiki/Crab%20trap | Crab trap | Crab traps are used to bait, lure, and catch crabs for commercial or recreational use. Crabbing or crab fishing is the recreational hobby and commercial occupation of fishing for crabs. Different types of traps are used depending on the type of crab being fished for, geographic location, and personal preference.
History in the United States
Crab has been a viable food source since Native Americans lived and fished on the Delmarva Peninsula. The Chesapeake Bay, known for its blue crabs (Callinectes sapidus), derives its name from "Chesepiook", a Susquehannock word meaning "Great Water". The Susquehannock led European settlers to some of the best places to catch crabs, and even early treaties between European settlers and Native Americans included provisions for the rights of "Hunting, Crabbing, Fowling, and Fishing." Since then, generations of watermen have made their living harvesting crabs and other resources along the Chesapeake Bay, refining ever more efficient methods of catching crabs that culminated in modern crab traps.
Since the arrival of early European settlers in America, crabbing has been an important food source for watermen of the Chesapeake and continues to be a source of income for many families. The Alaskan king crab fishing industry took off in the mid-1800s and was one of the reasons Alaskans pushed so hard for statehood in 1959: Alaskans wanted to gain control of the area's natural resources, such as king crabs.
Benjamin F. Lewis invented the crab pot in the 1920s, patented it in 1928, and perfected it ten years later. The crab pot changed the way crabs are harvested on the Chesapeake Bay. The crab pot is the most common method used to catch and harvest crabs worldwide.
Commercial crabbing is a very tough and dangerous job, so it is very important that commercial crab traps catch as many crabs as possible to be able to turn a profit. Commercial crabbing is heavily regulated by local state laws to ensure that the crabs are not over fished and that they are given enough time to breed and repopulate.
Unlike recreational traps, commercial crab traps are large; some can easily be over 60" in diameter, allowing the trap to hold a larger number of crabs than recreational crab traps. Commercial crab traps also carry a small stainless steel plate, like a dog tag, which identifies who the trap belongs to in case it is lost or swept from its original location by the current.
After World War II, Japanese crab vessels were competition for Alaskan king crab fishermen in the Bering Sea. Japanese crab vessels would crowd around cod boats, where king crabs devoured the fish waste. Ed Shields, a king crab fisherman was aboard a schooner at this time and recalls the Japanese encroaching on the Bristol Bay fishing area. Ed Shields says that his father sent a telegram to Seattle, ordering one dozen high-powered rifles for each vessel and one case of ammunition each.
Ed Shields states, "The coast guard didn’t care for this at all, the State Department didn’t care for it, but the news media did. It made good news. There’s no television at this time, but they did get in the national magazines like Time and Life. The adverse publicity to Japanese manufactured goods was so severe at that time from this campaign, the Japanese decided to pull out of Bristol Bay area and he sent a telegram saying, 'Bristol Bay is all clear now, Japanese gone home.'"
The Derelict Crab Trap Removal Program was created by the Louisiana Wildlife and Fisheries Commission in 2004. This was created to remove derelict crab traps from state-owned lakes and river-beds and to reduce the potential impact from these traps. There are also similar programs in other states. They are similar to the program in Louisiana where the traps are removed during a 30-day period. There are programs all over the Gulf Coast, including areas like Texas and Florida. These programs have also been successful with the help of volunteers working together, and over 30,000 derelict traps have been removed in Texas alone.
Types
Maryland
The Maryland crab pot is an enclosed framework of wire with four openings. These openings are constructed so that crabs entering to eat the bait cannot escape, and instead become immediately trapped. Unable to leave the way they entered, the crabs float upward and pass through the openings of the inner wire portion, which traps them permanently.
The Maryland crab pot is a cube, generally two cubic feet, and when baited and weighted might weigh fifteen pounds or more. Sometimes it is left on the bottom for twelve to twenty-four hours or more. The end of the nylon rope is attached to a marked floating buoy so the location can be found and the pot retrieved. The Maryland crab pot is baited from the bottom with several oily fish. This is done by turning the pot on its side, stuffing the bait into the wire container, and closing the opening by securing the flap under the rubber tubing. The pot is then dropped into the water, and when the crab fisher returns, they pull the pot up and into their boat.
West Coast
West Coast crab pots, which are primarily used for catching Dungeness crabs, vary slightly from the Maryland style crab pot. When the crabs enter either of the two funnel-type openings in search of bait, they are unable to exit through these funnel openings and become entrapped in the pot.
Ring crab traps
Ring crab traps are very popular along the Oregon and Washington Coast. They are primarily used in river mouths and protected bays, but it is possible to use crab rings off the open shoreline. A crab ring is a simple piece of equipment that contains two wire rings that form the top and bottom of a collapsible basket. The lower ring is smaller than the upper ring and connected with a strong netting that forms the sides. Heavy chicken wire, cotton webbing or other suitable materials are used for the bottom.
After the bait is tied securely to the bottom of the basket, the lower basket sinks to the bay bottom where the sides collapse and the top and bottom rings lie together, leaving only a flat platform of tempting bait that the crab can easily reach. After the ring has been left on the bottom, the crabber raises the ring rapidly by pulling up with a rope, which prevents the crabs from escaping while the basket is pulled to the boat. While ring traps may allow crabs to escape more easily, their advantage is that they remain on the bottom for much shorter periods, typically a maximum of 20 minutes or so, versus the 30-45 minutes required for a crab pot to work effectively.
Pyramids
Pyramid crab traps are flat when lying on the bottom of the seafloor, but when raised to the surface, they form the shape of a pyramid. This trap is similar to the ring crab trap because there are no walls or cage that prevents the crabs from escaping before pulling it to the surface. The benefits of the pyramid crab trap over the ring crab trap is that the pyramid crab trap is slightly sturdier and can be used in waters with stronger currents.
Boxes
Box crab traps are made from a strong non-collapsible wire. The main advantage of this trap is that once a crab enters searching for the bait, it cannot escape, guaranteeing a catch; an added bonus is that the trap does not have to be checked regularly. A downside of this trap is storing and transporting it, since it does not collapse.
Trot lines
Trotline crab fishing was used exclusively by commercial crabbers from 1870 to 1929, but this method has since been almost entirely replaced by the use of crab pots and crab traps. A trotline is a baited, hook-less long line, usually anchored on the bottom and attached to anchored buoys. The line is baited, and after some time the fisherman pulls it up, ideally with crabs still clinging to the bait.
Environmental effects
A crab trap which becomes lost or abandoned (usually by accidental detachment of the float) becomes an ongoing environmental hazard. Crabs will continue to enter this ghost trap to eat the bait, become trapped, and starve to death, attracting more crabs and other bottom-dwelling sea life; a single trap may kill dozens of crabs in this manner. For this reason, crab traps in many jurisdictions are required to have a "rot-out panel", a wooden panel the size of the largest entrance into the trap. This panel will disintegrate with a few weeks' exposure to seawater, opening the trap and allowing any crabs inside to escape. Other pots use biodegradable twine, that disintegrates within less than a week.
Whales become entangled in crabbing gear. They get entangled in the vertical lines between crab traps on the ocean floor and the surface buoys. For example, as of 2014 there was an increasing number of entanglements off the coasts of the United States. Management measures have been implemented by NOAA National Marine Fisheries Service.
| Technology | Hunting and fishing | null |
31302683 | https://en.wikipedia.org/wiki/English%20brewery%20cask%20units | English brewery cask units | Capacities of brewery casks were formerly measured and standardised according to a specific system of English units. The system was originally based on the ale gallon of 282 cubic inches (approximately 4.62 litres). In the United Kingdom and its colonies, with the adoption of the imperial system in 1824, the units were redefined in terms of the slightly smaller imperial gallon (exactly 4.54609 litres). The older units continued in use in the United States.
Historically the terms beer and ale referred to distinct brews. From the mid 15th century until 1803 in Britain "ale" casks and "beer" casks differed in the number of gallons they contained.
Units
Tun
The beer tun is equal to double the size of a butt (216 imperial gallons): it is therefore exactly 981.95544 litres, or approximately 259.4 US gallons.
Butt (Imperial)
The butt of beer is equal to half a tun or two hogsheads (108 imperial gallons), and is therefore exactly 490.97772 litres, or approximately 129.7 US gallons.
Hogshead
The hogshead of beer and ale was equal to a quarter of a tun, half a butt, or three kilderkins. This unit is about 3% larger than the wine hogshead.
hogshead (Ale)
In the mid-15th century the ale hogshead was defined as 48 ale or beer gallons (221.8153 L). In 1688 the ale hogshead was redefined to be 51 ale or beer gallons (235.67875 L). In 1803 the ale hogshead was again redefined to be 54 ale or beer gallons (249.54221 L), equivalent to the beer hogshead.
hogshead (Beer)
From the mid 15th century until 1824 the beer hogshead was defined as 54 ale or beer gallons.
hogshead (Ale) (Imperial), hogshead (Beer) (Imperial)
In the United Kingdom and its colonies, with the 1824 adoption of the imperial system, the ale or beer hogshead was redefined to be 54 imperial gallons. The ale or beer hogshead is therefore exactly 245.48886 litres, or approximately 64.9 US gallons.
Barrel
The barrel of beer or ale was equal to two kilderkins or of a beer or ale hogshead. This is about 37% larger than the wine barrel.
barrel (Ale)
As with the hogshead, the ale barrel underwent various redefinitions. Initially 32 ale or beer gallons (147.9 L), it was redefined in 1688 as 34 ale or beer gallons (157.1 L), and again in 1803 as 36 ale or beer gallons (166.4 L).
barrel (Beer)
The beer barrel was defined as 36 ale or beer gallons until the adoption of the imperial system.
barrel (Ale) (Imperial), barrel (Beer) (Imperial)
The adoption of the imperial system saw the beer or ale barrel redefined to be 36 imperial gallons, which is exactly 163.65924 litres, or approximately 43.2 US gallons.
Kilderkin
The kilderkin (from the Dutch for "small cask") is equal to half a barrel or two firkins.
kilderkin (Ale)
The ale kilderkin likewise underwent various redefinitions. Initially 16 ale or beer gallons (73.94 L), it was redefined in 1688 as 17 ale or beer gallons (78.56 L) and again in 1803 as 18 ale or beer gallons (83.18 L).
kilderkin (Beer)
Until the adoption of the imperial system the beer kilderkin was defined as 18 ale or beer gallons.
kilderkin (Ale) (Imperial), kilderkin (Beer) (Imperial)
With the adoption of the imperial system, the kilderkin was redefined to be 18 imperial gallons, which is exactly 81.82962 litres, or approximately 21.6 US gallons.
The kilderkin is still in use today. It is the unit of choice of CAMRA, the Campaign for Real Ale, for calculating beer quantities for beer festivals in the UK. Ales are usually delivered in firkins; cider and other drinks are usually in boxes, bottles or other containers measured in gallons or litres; and all (except wine) are sold in pints or parts thereof. For CAMRA internal accounting, all are calculated in kilderkins. A kilderkin is a 144-pint container, but there are not 144 pints of cask-conditioned consumable beer in a kilderkin (see Firkins below for an explanation).
Firkin
The ale or beer firkin (from Middle Dutch meaning "fourth") is a quarter of an ale or beer barrel or half a kilderkin. This unit is much smaller than the wine firkin. Casks in this size (themselves called firkins) are the most common container for cask ale.
firkin (Ale)
From the mid 15th century until 1688 the ale firkin was defined as 8 ale or beer gallons (36.97 litres). In 1688 the ale firkin was redefined to be 8½ ale or beer gallons (39.28 L). In 1803 the ale firkin was again redefined to be 9 ale or beer gallons (41.59 L), equivalent to the beer firkin.
firkin (Beer)
From the mid 15th century until 1824 the beer firkin was defined as 9 ale or beer gallons.
firkin (Ale) (Imperial), firkin (Beer) (Imperial)
The beer or ale firkin was redefined to be 9 imperial gallons in 1824. It is therefore exactly 40.91481 litres, or approximately 10.8 US gallons. Most English cask-conditioned beer bought by publicans is delivered in 72-pint containers (i.e. firkins), but the volume of consumable beer in the container is far lower. For example, a 72-pint container of Greene King IPA currently has only 66 "full" pints of consumable beer that can be sold or drunk; the other 6 pints are sediment, finings, beer stone, hops, proteins, or less than an imperial measure, and therefore not consumable or saleable. HMRC does not charge duty on any portion of beer that cannot be consumed, and brewers should make a declaration to the first customer (i.e. the publican) to inform them of the actual duty-paid contents of the beer, so customers are fully aware of how much is being sold to them.
Pin (Imperial)
A pin is equal to half a firkin (4.5 imperial gallons), and is therefore exactly 20.457405 litres, or approximately 5.4 US gallons.
Plastic versions of these casks are known as "polypins" and are popular in homebrewing and the off-trade (deliveries for home consumption). They are also popular at beer festivals where non-standard beers are sold.
Gallon
Originally, a 282-cubic-inch ale or beer gallon (approximately 4.62 litres) was used. With the adoption of the imperial system in the United Kingdom and its colonies, the system was redefined in terms of the imperial gallon of exactly 4.54609 litres from 1824.
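Since each imperial cask unit above is a fixed multiple of the imperial gallon (exactly 4.54609 litres), converting between them is simple arithmetic. A minimal sketch (the gallon counts are taken from the sections above):

```python
# Convert the imperial cask units above into litres and pints.
# The imperial gallon is exactly 4.54609 L; gallon counts are from the text.
IMPERIAL_GALLON_L = 4.54609

CASKS_IN_GALLONS = {
    "pin": 4.5,
    "firkin": 9,
    "kilderkin": 18,
    "barrel": 36,
    "hogshead": 54,
    "butt": 108,
    "tun": 216,
}

for name, gallons in CASKS_IN_GALLONS.items():
    litres = gallons * IMPERIAL_GALLON_L
    pints = gallons * 8  # 8 pints to the gallon
    print(f"{name:10s} {gallons:6.1f} gal = {litres:9.5f} L = {pints:6.0f} pints")
```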
Chart
| Physical sciences | Measurement systems | Basics and measurement |
39379960 | https://en.wikipedia.org/wiki/De-extinction | De-extinction | De-extinction (also known as resurrection biology, or species revivalism) is the process of generating an organism that either resembles or is an extinct species. There are several ways to carry out the process of de-extinction. Cloning is the most widely proposed method, although genome editing and selective breeding have also been considered. Similar techniques have been applied to certain endangered species, in hopes to boost their genetic diversity. The only method of the three that would provide an animal with the same genetic identity is cloning. There are benefits and drawbacks to the process of de-extinction ranging from technological advancements to ethical issues.
Methods
Cloning
Cloning is a commonly suggested method for the potential restoration of an extinct species. It can be done by extracting the nucleus from a preserved cell of the extinct species and swapping it into an enucleated egg from that species' nearest living relative; the egg can then be implanted into a host from the nearest living relative. This method can only be used when a preserved cell is available, meaning it would be most feasible for recently extinct species. Cloning has been used by scientists since the 1950s. One of the best-known clones is Dolly the sheep. Dolly was born in the mid-1990s and lived normally until the abrupt midlife onset of health complications resembling premature aging, which led to her death. Other cloned animal species include domestic cats, dogs, pigs, and horses.
Genome editing
Genome editing has been advancing rapidly with the help of CRISPR/Cas systems, particularly CRISPR/Cas9. The CRISPR/Cas9 system was originally discovered as part of the bacterial immune system. Viral DNA injected into a bacterium becomes incorporated into the bacterial chromosome at specific regions called clustered regularly interspaced short palindromic repeats, otherwise known as CRISPR. Because this viral-derived DNA lies within the chromosome, it is transcribed into RNA; Cas9 binds this RNA and, guided by it, recognizes matching foreign DNA and cleaves it. This discovery was crucial because the Cas protein can be used as molecular scissors in the genome-editing process.
By using cells from a closely related species to the extinct species, genome editing can play a role in the de-extinction process. Germ cells may be edited directly, so that the egg and sperm produced by the extant parent species will produce offspring of the extinct species, or somatic cells may be edited and transferred via somatic cell nuclear transfer. The result is an animal which is not completely the extinct species, but rather a hybrid of the extinct species and the closely related, non-extinct species. Because it is possible to sequence and assemble the genome of extinct organisms from highly degraded tissues, this technique enables scientists to pursue de-extinction in a wider array of species, including those for which no well-preserved remains exist. However, the more degraded and old the tissue from the extinct species is, the more fragmented the resulting DNA will be, making genome assembly more challenging.
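As a toy illustration of the targeting step described above (this is not from the source: the guide and genome strings are invented, and real Cas9 targeting involves far more than string matching), the sketch below scans a DNA sequence for 20-nucleotide protospacers that match a guide and are followed by the NGG PAM motif required by Streptococcus pyogenes Cas9:

```python
# Toy sketch of CRISPR/Cas9 target-site scanning (illustrative only):
# find positions where a 20-nt guide-matching protospacer is followed by
# an "NGG" PAM. The guide and genome strings below are invented examples.
import re

guide = "GATTACAGATTACAGATTAC"  # hypothetical 20-nt spacer sequence
genome = "TTGATTACAGATTACAGATTACTGGAAGATTACAGATTACAGATTACAGGCC"

pattern = re.compile(f"(?={re.escape(guide)}[ACGT]GG)")  # lookahead keeps overlapping hits
sites = [m.start() for m in pattern.finditer(genome)]
print("candidate protospacer start positions:", sites)
# Cas9 cleaves about 3 bp upstream of the PAM, between positions 17 and 18
# of the protospacer.
```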
Back-breeding
Back breeding is a form of selective breeding. Rather than breeding animals for a trait that advances the species, back breeding selects for ancestral characteristics that are no longer common in the population. This method can recreate the traits of an extinct species, but the genome will differ from that of the original species. Back breeding is contingent on the ancestral trait still being present in the population at some frequency. It is a form of artificial selection: the deliberate selective breeding of domestic animals in an attempt to achieve a breed whose phenotype resembles a wild-type ancestor, usually one that has gone extinct.
Iterative evolution
A natural process of de-extinction is iterative evolution. This occurs when a species becomes extinct, but then after some time a different species evolves into an almost identical creature. For example, the Aldabra rail was a flightless bird that lived on the island of Aldabra. It had evolved some time in the past from the flighted white-throated rail, but became extinct about 136,000 years ago due to an unknown event that caused sea levels to rise. About 100,000 years ago, sea levels dropped and the island reappeared, with no fauna. The white-throated rail recolonized the island, but soon evolved into a flightless species physically identical to the extinct species.
Herbarium specimens for de-extincting plants
Not all extinct plants have herbarium specimens that contain seeds. Of those that do, there is ongoing discussion on how to coax barely alive embryos back to life. See Judean date palm and tsori.
In-vitro fertilisation and artificial insemination
In-vitro fertilisation and artificial insemination are assisted reproduction technologies commonly used to treat infertility in humans. They are also viable options for de-extinction in cases of functional extinction, where all remaining individuals are of the same sex, incapable of naturally reproducing, or suffer from low genetic diversity, as with the northern white rhinoceros, Yangtze giant softshell turtle, Hyophorbe amaricaulis, baiji, and vaquita. For example, viable embryos created from preserved sperm from deceased males and ova from living females are implanted into a surrogate species.
Advantages of de-extinction
The technologies being developed for de-extinction could lead to large advances in various fields:
An advance in genetic technologies that are used to improve the cloning process for de-extinction could be used to prevent endangered species from becoming extinct.
By studying revived previously extinct animals, cures to diseases could be discovered.
Revived species may support conservation initiatives by acting as "flagship species" to generate public enthusiasm and funds for conserving entire ecosystems.
Prioritising de-extinction could lead to the improvement of current conservation strategies. Conservation measures would initially be necessary in order to reintroduce a species into the ecosystem, until the revived population can sustain itself in the wild. Reintroduction of an extinct species could also help improve ecosystems that had been destroyed by human development. It may also be argued that reviving species driven to extinction by humans is an ethical obligation.
Disadvantages of de-extinction
The reintroduction of extinct species could have a negative impact on extant species and their ecosystem. The extinct species' ecological niche may have been filled in its former habitat, making it an invasive species. This could lead to the extinction of other species due to competition for food or other competitive exclusion. It could lead to the extinction of prey species if they have more predators in an environment that had few predators before the reintroduction of an extinct species. If a species has been extinct for a long period of time the environment they are introduced to could be wildly different from the one that they can survive in. The changes in the environment due to human development could mean that the species may not survive if reintroduced into that ecosystem. A species could also become extinct again after de-extinction if the reasons for its extinction are still a threat. The woolly mammoth might be hunted by poachers just like elephants for their ivory and could go extinct again if this were to happen. Or, if a species is reintroduced into an environment with disease for which it has no immunity, the reintroduced species could be wiped out by a disease that current species can survive.
De-extinction is a very expensive process. Bringing back one species can cost millions of dollars. The money for de-extinction would most likely come from current conservation efforts. These efforts could be weakened if funding is taken from conservation and put into de-extinction. This would mean that critically endangered species would start to go extinct faster because there are no longer resources that are needed to maintain their populations. Also, since cloning techniques cannot perfectly replicate a species as it existed in the wild, the reintroduction of the species may not bring about positive environmental benefits. They may not have the same role in the food chain that they did before and therefore cannot restore damaged ecosystems.
Current candidate species for de-extinction
Woolly mammoth
The existence of preserved soft tissue remains and DNA from woolly mammoths (Mammuthus primigenius) has led to the idea that the species could be recreated by scientific means. Two methods have been proposed to achieve this:
The first would be to use the cloning process; however, even the most intact mammoth samples have had little usable DNA because of their conditions of preservation, and there is not enough intact DNA to guide the production of an embryo. The second method would involve artificially inseminating an elephant egg cell with preserved sperm of the mammoth. The resulting offspring would be a hybrid of the mammoth and its closest living relative, the Asian elephant. After several generations of cross-breeding these hybrids, an almost pure woolly mammoth could be produced. However, sperm cells of modern mammals typically remain viable for only up to 15 years after deep-freezing, which could hinder this method. Whether a hybrid embryo would be carried through the two-year gestation is unknown; in one case, an Asian elephant and an African elephant produced a live calf named Motty, but it died of defects at less than two weeks old.
In 2008, a Japanese team found usable DNA in the brains of mice that had been frozen for 16 years. They hope to use similar methods to find usable mammoth DNA. In 2011, Japanese scientists announced plans to clone mammoths within six years.
In March 2014, the Russian Association of Medical Anthropologists reported that blood recovered from a frozen mammoth carcass in 2013 would now provide a good opportunity for cloning the woolly mammoth. Another way to create a living woolly mammoth would be to migrate genes from the mammoth genome into the genes of its closest living relative, the Asian elephant, to create hybridized animals with the notable adaptations that it had for living in a much colder environment than modern day elephants. This is currently being done by a team led by Harvard geneticist George Church. The team has made changes in the elephant genome with the genes that gave the woolly mammoth its cold-resistant blood, longer hair, and an extra layer of fat. According to geneticist Hendrik Poinar, a revived woolly mammoth or mammoth-elephant hybrid may find suitable habitat in the tundra and taiga forest ecozones.
George Church has hypothesized the positive effects of bringing back the extinct woolly mammoth would have on the environment, such as the potential for reversing some of the damage caused by global warming. He and his fellow researchers predict that mammoths would eat the dead grass allowing the sun to reach the spring grass; their weight would allow them to break through dense, insulating snow in order to let cold air reach the soil; and their characteristic of felling trees would increase the absorption of sunlight. In an editorial condemning de-extinction, Scientific American pointed out that the technologies involved could have secondary applications, specifically to help species on the verge of extinction regain their genetic diversity.
Pyrenean ibex
The Pyrenean ibex (Capra pyrenaica pyrenaica) was a subspecies of Iberian ibex that lived on the Iberian Peninsula. While it was abundant through medieval times, over-hunting in the 19th and 20th centuries led to its demise. In 1999, only a single female named Celia was left alive in Ordesa National Park. Scientists captured her, took a tissue sample from her ear, collared her, then released her back into the wild, where she lived until she was found dead in 2000, having been crushed by a fallen tree.
In 2003, scientists used the tissue sample to attempt to clone Celia and resurrect the extinct subspecies. Despite having successfully transferred nuclei from her cells into domestic goat egg cells and impregnating 208 female goats, only one came to term. The baby ibex that was born had a lung defect, and lived for only seven minutes before suffocating from being incapable of breathing oxygen. Nevertheless, her birth was seen as a triumph and is considered the first de-extinction. In late 2013, scientists announced that they would again attempt to resurrect the Pyrenean ibex.
A problem to be faced, in addition to the many challenges of reproducing a mammal by cloning, is that only females can be produced by cloning the female individual Celia, and no males exist for those females to reproduce with. This could potentially be addressed by breeding the female clones with the closely related Southeastern Spanish ibex, gradually creating a hybrid animal that would eventually bear more resemblance to the Pyrenean ibex than to the Southeastern Spanish ibex.
Aurochs
The aurochs (Bos primigenius) was widespread across Eurasia, North Africa, and the Indian subcontinent during the Pleistocene, but only the European aurochs (B. p. primigenius) survived into historical times. This species is heavily featured in European cave paintings, such as Lascaux and Chauvet cave in France, and was still widespread during the Roman era. Following the fall of the Roman Empire, overhunting of the aurochs by nobility caused its population to dwindle to a single population in the Jaktorów forest in Poland, where the last wild one died in 1627.
However, because the aurochs is ancestral to most modern cattle breeds, it is possible for it to be brought back through selective or back breeding. The first attempt at this was by Heinz and Lutz Heck using modern cattle breeds, which resulted in the creation of Heck cattle. This breed has been introduced to nature preserves across Europe; however, it differs strongly from the aurochs in physical characteristics, and some modern attempts aim to create an animal nearly identical to the aurochs in morphology, behavior, and even genetics. Several projects seek to breed a cattle variety similar to the aurochs by selectively breeding primitive cattle breeds over a course of twenty years, the goal being a self-sufficient bovine grazer living in herds of at least 150 animals in rewilded nature areas across Europe; examples are the Tauros Programme and the separate Taurus Project. The Tauros Programme is partnered with the organization Rewilding Europe to help revert some European natural ecosystems to their prehistoric form.
A competing project to recreate the aurochs is the Uruz Project by the True Nature Foundation, which aims to recreate the aurochs by a more efficient breeding strategy using genome editing, in order to decrease the number of generations of breeding needed and the ability to quickly eliminate undesired traits from the population of aurochs-like cattle. It is hoped that aurochs-like cattle will reinvigorate European nature by restoring its ecological role as a keystone species and bring back biodiversity that disappeared following the decline of European megafauna, as well as helping to bring new economic opportunities related to European wildlife viewing.
Sometime in 2025, Tauros Programme and Rewilding Europe plan to release their aurochs into the wild in select areas of Europe and to have the species recognised as a protected wildlife species again. In 2026, these animals will be reintroduced to parts of the Scottish Highlands.
Quagga
The quagga (Equus quagga quagga) is a subspecies of the plains zebra that was distinct in that it was striped on its face and upper torso, but its rear abdomen was a solid brown. It was native to South Africa, but was wiped out in the wild due to overhunting for sport, and the last individual died in 1883 in the Amsterdam Zoo. However, since it is technically the same species as the surviving plains zebra, it has been argued that the quagga could be revived through artificial selection. The Quagga Project aims to breed a similar form of zebra by selective breeding of plains zebras. This process is also known as back breeding. It also aims to release these animals onto the western Cape once an animal that fully resembles the quagga is achieved, which could have the benefit of eradicating introduced species of trees such as the Brazilian pepper tree, Tipuana tipu, Acacia saligna, bugweed, camphor tree, stone pine, cluster pine, weeping willow and Acacia mearnsii.
Thylacine
The thylacine (Thylacinus cynocephalus), commonly known as the Tasmanian tiger, was native to the Australian mainland, Tasmania and New Guinea. It is believed to have become extinct in the 20th century. The thylacine had become extremely rare or extinct on the Australian mainland before British settlement of the continent. The last known thylacine died at the Hobart Zoo, on September 7, 1936. He is believed to have died as the result of neglect—locked out of his sheltered sleeping quarters, he was exposed to a rare occurrence of extreme Tasmanian weather: extreme heat during the day and freezing temperatures at night. Official protection of the species by the Tasmanian government was introduced on July 10, 1936, roughly 59 days before the last known specimen died in captivity.
In December 2017, it was announced in the journal Nature Ecology and Evolution that the full nuclear genome of the thylacine had been successfully sequenced, marking the completion of the critical first step toward de-extinction that began in 2008, with the extraction of the DNA samples from the preserved pouch specimen. The thylacine genome was reconstructed by using the genome editing method. The Tasmanian devil was used as a reference for the assembly of the full nuclear genome. Andrew J. Pask from the University of Melbourne has stated that the next step toward de-extinction will be to create a functional genome, which will require extensive research and development, estimating that a full attempt to resurrect the species may be possible as early as 2027.
In August 2022, the University of Melbourne and Colossal Biosciences announced a partnership to accelerate de-extinction of the thylacine via genetic modification of one of its closest living relatives, the fat-tailed dunnart. In October 2024, a 99.9% complete genome of the thylacine was created from a well-preserved skull estimated to be 110 years old. This allowed the full genome of the species to be constructed in January 2025; in the same month, Colossal Biosciences and the University of Melbourne developed an artificial marsupial womb to further accelerate the de-extinction of the thylacine and aid conservation of endangered marsupials.
Passenger pigeon
The passenger pigeon (Ectopistes migratorius) numbered in the billions before being wiped out by unsustainable commercial hunting and habitat loss during the early 20th century. The non-profit Revive & Restore obtained passenger pigeon DNA from museum specimens and skins; however, this DNA is degraded because it is so old, so simple cloning would not be an effective way to perform de-extinction for this species, because parts of the genome would be missing. Instead, Revive & Restore focuses on identifying mutations in the DNA that would cause a phenotypic difference between the extinct passenger pigeon and its closest living relative, the band-tailed pigeon. In doing this, they can determine how to modify the band-tailed pigeon's DNA to mimic the traits of the passenger pigeon. In this sense, the de-extinct passenger pigeon would not be genetically identical to the extinct bird, but it would have the same traits. In 2015, the de-extinct passenger pigeon hybrid was forecast to be ready for captive breeding by 2025 and for release into the wild by 2030. In October 2024, Revive & Restore collaborated with the Applied Ecological Institute to simulate forest disturbances in the American state of Wisconsin to see how trees would react to reintroduced passenger pigeons. The original 2025 goal was not met, and the new goal for reviving the species for captive breeding is set for between 2029 and 2032; even then, it could take decades for the species to be reintroduced to the wild.
Bush moa
The bush moa, also known as the little bush moa or lesser moa (Anomalopteryx didiformis), was a slender species of moa, slightly larger than a turkey, that went extinct abruptly around 500–600 years ago following the arrival and proliferation of the Māori people in New Zealand and the introduction of Polynesian dogs. Scientists at Harvard University assembled the first nearly complete genome of the species from toe bones, bringing it a step closer to de-extinction. The New Zealand politician Trevor Mallard has also previously suggested bringing back a medium-sized species of moa. The proxy for the species will likely be the emu.
Maclear's rat
The Maclear's rat (Rattus macleari), also known as the Christmas Island rat, was a large rat endemic to Christmas Island in the Indian Ocean. It is believed the Maclear's rat might have been responsible for keeping the population of the Christmas Island red crab in check. It is thought that the accidental introduction of black rats by the Challenger expedition infected the Maclear's rats with a disease (possibly a trypanosome), which resulted in the species' decline. The last recorded sighting was in 1903. In March 2022, researchers discovered that the Maclear's rat shared about 95% of its genes with the living brown rat, sparking hopes of bringing the species back to life. Although scientists were largely successful in using CRISPR technology to edit the DNA of the living species to match that of the extinct one, a few key genes were missing, meaning resurrected rats would not be genetically pure replicas.
Dodo
The dodo (Raphus cucullatus) was a flightless bird endemic to the island of Mauritius in the Indian Ocean. Due to various factors, such as a lack of fear of humans resulting from isolation from significant predators, predation by humans and by introduced invasive species such as pigs, dogs, cats, rats, and crab-eating macaques, competition for food with invasive species, habitat loss, and the bird's naturally slow reproduction, the species' numbers declined rapidly. The last widely accepted recorded sighting was in 1662. Since then, the bird has become a symbol of extinction and is often cited as the primary example of man-made extinction. In January 2023, Colossal Biosciences announced its project to revive the dodo, alongside its previously announced projects to revive the woolly mammoth and thylacine, in hopes of restoring biodiversity to Mauritius and changing the dodo's status from a symbol of extinction to one of de-extinction.
Steller's sea cow
The Steller's sea cow was a sirenian endemic to the Bering Sea between Russia and the United States, though it had a much larger range during the Pleistocene. First described by Georg Wilhelm Steller in 1741, it was hunted to extinction 27 years later: its buoyancy made it an easy target for humans hunting it for its meat and fur, and its population was already low owing to climate change. In 2021, the nuclear genome of the species was sequenced. In late 2022, a group of Russian scientists funded by Sergei Bachin began a project to revive the giant sirenian and reintroduce it to its 18th-century range in order to restore its kelp forest ecosystem. Arctic Sirenia plans to revive the species through genome editing of the dugong, but an artificial womb is needed to conceive a live animal because no adequate surrogate species exists. Ben Lamm of Colossal Biosciences has also expressed a desire to revive the species once his company develops an artificial womb.
Northern white rhinoceros
The northern white rhinoceros or northern white rhino (Ceratotherium simum cottoni) is a subspecies of the white rhinoceros endemic to East and Central Africa south of the Sahara. Due to widespread and uncontrollable poaching and civil warfare in its former range, the subspecies' numbers dropped quickly over the late 1900s and early 2000s. Unlike most potential candidates for de-extinction, the northern white rhinoceros is not extinct but functionally extinct: it is believed to be extinct in the wild, with only two known females left, Najin and Fatu, who reside in the Ol Pejeta Conservancy in Kenya. The BioRescue team, in collaboration with Colossal Biosciences, planned to implant 30 northern white rhinoceros embryos, made from egg cells collected from Najin and Fatu and preserved sperm from dead males, into female southern white rhinoceroses by the end of 2024.
Ivory-billed woodpecker
The ivory-billed woodpecker (Campephilus principalis) is the largest woodpecker in the United States, with a subspecies in Cuba. The species' numbers have declined since the late 1800s due to logging and hunting. Like the northern white rhinoceros, the ivory-billed woodpecker is not completely extinct but functionally extinct, with occasional sightings suggesting that 50 or fewer individuals remain. In October 2024, Colossal Biosciences announced the Colossal Foundation, a non-profit dedicated to the conservation of extant species, with its first projects being the Sumatran rhinoceros, vaquita, red wolf, pink pigeon, northern quoll, and ivory-billed woodpecker. Colossal plans to revive or rediscover the species through genome editing of its closest living relatives, such as the pileated woodpecker, and by using drones and AI to identify any remaining individuals in the wild.
Heath hen
The heath hen (Tympanuchus cupido cupido) was a subspecies of the greater prairie chicken endemic to the heathland barrens of coastal North America. It is even speculated that the Pilgrims' first Thanksgiving featured this bird as the main course rather than wild turkey. Due to overhunting driven by its perceived abundance, the population was extinct in mainland North America by 1870, leaving a population of 300 individuals on Martha's Vineyard. Despite conservation efforts, the subspecies became extinct in 1932 following the disappearance and presumed death of Booming Ben, the last known member. In the summer of 2014, the non-profit organisation Revive & Restore held a meeting with the community of Martha's Vineyard to announce a project to revive the heath hen in hopes of restoring and maintaining the sandplain grasslands. On April 8, 2020, germ cells were collected from greater prairie chicken eggs at Texas A&M.
Yangtze giant softshell turtle
The Yangtze giant softshell turtle (Rafetus swinhoei) is a softshell turtle endemic to China and Vietnam, and possibly the largest living freshwater turtle. Due to factors including habitat loss, wildlife trafficking, trophy hunting, and the Vietnam War, the species' population has been reduced to only three known individuals, all male, rendering it functionally extinct like the northern white rhinoceros and ivory-billed woodpecker. One captive individual lives at Suzhou Zoo in China, and two wild individuals live at Dong Mo Lake in Vietnam. Suzhou Zoo and the Turtle Survival Alliance have pursued assisted reproduction in captivity to save the species since 2009.
Despite efforts to breed the turtles naturally, the eggs laid by the last known female were all infertile. In May 2015, artificial insemination was performed for the first time in the species. In July of the same year, the female laid 89 eggs, but like all eggs from previous natural attempts, they were unviable. In April 2019, the female at the zoo died after another failed artificial insemination attempt. In 2020, a female was discovered in the wild, reigniting hope for the survival of the species, but this individual was found dead in early 2023. Several searches across China and Vietnam are underway to locate females to breed with the last known males, naturally or through artificial insemination.
Future potential candidates for de-extinction
A "De-extinction Task Force" was established in April 2014 under the auspices of the Species Survival Commission (SSC) and charged with drafting a set of Guiding Principles on Creating Proxies of Extinct Species for Conservation Benefit to position the IUCN SSC on the rapidly emerging technological feasibility of creating a proxy of an extinct species.
Avians
Giant moa – The tallest birds to have ever lived, though not as heavy as the elephant birds. Both the northern and southern species became extinct by 1500 due to overhunting by Māori, the Polynesian settlers of New Zealand.
Elephant bird – The heaviest birds to have ever lived, the elephant birds were driven to extinction by the early colonization of Madagascar. Ancient DNA has been obtained from the eggshells but may be too degraded for use in de-extinction.
Carolina parakeet - One of the only parrots indigenous to North America, it was driven to extinction by habitat destruction, overhunting, competition from introduced honeybees, and persecution over crop damage, and was declared extinct following the death of its last known member, Incas, in 1918. Hundreds of specimens with viable DNA still exist in museums around the world, making it a prime candidate for revival. In 2019, the full genome of the Carolina parakeet was sequenced.
Great auk - A flightless bird native to the North Atlantic, similar in appearance to penguins. The great auk went extinct in the 1800s due to overhunting by humans for food; the last two known individuals lived on an island near Iceland and were clubbed to death by sailors, and there have been no confirmed sightings since. The great auk has been identified as a good candidate for de-extinction by Revive & Restore, a non-profit organization. Because the great auk is extinct it cannot be cloned, but its DNA can be used to edit the genome of its closest relative, the razorbill, and the resulting hybrids can be bred to produce birds very similar to the original great auks. The plan is to reintroduce them to their original habitat, which they would share with razorbills and puffins, species that are also at risk of extinction, helping to restore the biodiversity of that part of the ecosystem. Colossal Biosciences has also expressed interest in reviving the species.
Imperial woodpecker – A large, possibly extinct woodpecker endemic to Mexico that has not been seen since 1956; its disappearance is attributed to habitat destruction and hunting. The federal government of Mexico has considered the species extinct since 2001, 47 years after the last widely accepted sighting. However, it has conservation plans in place should the species be rediscovered or revived.
Cuban macaw – A colourful macaw that was native to Cuba and Isla de la Juventud. It became extinct in the late 19th century due to overhunting, pet trade, and habitat loss.
Labrador duck – A duck that was native to North America. It became extinct in the late 19th century due to colonisation of its former range combined with an already naturally low population. It is also the first known endemic North American bird species to become extinct following the Columbian Exchange.
Huia – A species of Callaeidae that was native to New Zealand. It became extinct in 1907 due to overhunting by both Māori and European settlers, habitat loss, and predation by introduced invasive species. In 1999, students of Hastings Boys' High School proposed reviving the huia, the school's emblem, through cloning. The Ngāti Huia tribe approved of the idea, and the de-extinction work was to be performed by the University of Otago with $100,000 in funding from a California-based internet start-up. However, because of the poor state of the DNA in the specimens at the Museum of New Zealand Te Papa Tongarewa, a complete huia genome could not be assembled, making this method of de-extinction unlikely to succeed.
Moho – An entire genus of songbirds that was native to the Hawaiian Islands. The genus became extinct in 1987 following the extinction of its last surviving species, the Kauaʻi ʻōʻō. The reasons for the genus's decline were overhunting for their plumage, habitat loss caused by both the colonisation of Hawaii and natural disasters, mosquito-borne diseases, and predation by introduced invasive species.
Mammals
Caribbean monk seal – A species of monk seal that was native to the Caribbean. It became extinct in 1952 due to poaching and starvation caused by overfishing of its natural prey.
Bluebuck – A species of antelope that was native to South Africa. It was hunted to extinction by Europeans by 1799 or 1800 and, like the Labrador duck, had a naturally low population. In 2024, the nuclear genome of the species was sequenced by the University of Potsdam and Colossal Biosciences. Colossal Biosciences has also expressed interest in reviving the species in the future.
Tarpan – A population of free-ranging horses in Europe that went extinct in 1909. As with the aurochs, there have been many attempts to breed tarpan-like horses from domestic horses, the first being by the Heck brothers, which produced the Heck horse. Though it is not a genetic copy, the Heck horse is claimed to bear many similarities to the tarpan. Other attempts to create tarpan-like horses followed: a breeder named Harry Hegardt bred a line of horses from American Mustangs, and other supposedly tarpan-like breeds include the Konik and Strobel's horse.
Baiji – A freshwater dolphin native to the Yangtze River in China. Unlike most potential candidates for de-extinction, the baiji is not confirmed extinct but functionally extinct: its population collapsed due to entanglement in nets, collisions with boats, and pollution of the Yangtze, but occasional sightings continue, most recently in 2024. There are plans to help save the species if a living specimen is found.
Vaquita – The smallest cetacean to have ever lived, endemic to the upper Gulf of California in Mexico. Like the baiji, the vaquita is not completely extinct but functionally extinct, with an estimated eight or fewer individuals remaining due to entanglement in gillnets set to poach the totoaba, a fish whose swim bladder is highly valued on black markets for its perceived medicinal properties. In October 2024, Colossal Biosciences launched the Colossal Foundation, a non-profit dedicated to the conservation of extant species, with the vaquita among its first projects. In addition to using technology to monitor the last remaining individuals, the foundation aims to collect tissue samples from vaquitas in order to revive the species should it become extinct in the near future.
Pleistocene megafauna
Irish elk – The largest deer to have ever lived, formerly inhabiting Eurasia from present-day Ireland to present-day Siberia during the Pleistocene. It became extinct 5–10 thousand years ago, with overhunting suspected as the cause.
Cave lion – A species of Panthera endemic to Eurasia and northwest North America during the Pleistocene. It is estimated that the species died out 14–15 thousand years ago due to climate change and low genetic diversity. The discovery of preserved cubs in the Sakha Republic prompted a project to clone the animal.
Cave bear – A species of bear that was endemic to Eurasia during the Pleistocene. It is estimated to have become extinct 24 thousand years ago due to climate change and suspected competition with early humans.
Cave hyena – A species or subspecies of hyena that was endemic to Eurasia during the Pleistocene. It is estimated that it died out 31 thousand years ago due to competition with early humans and other carnivores and the decreased availability of prey.
Dire wolf – A large canine that was endemic to the Americas during the late Pleistocene and early Holocene. It is estimated that the species became extinct 9,500 years ago during the Quaternary extinction event due to competition with other carnivores and early humans, the extinction of its prey, and climate change. In 1988, the Dire Wolf Project emerged with the goal of reviving the species through back-breeding domesticated dogs, similar to efforts to revive the aurochs and quagga; however, these animals resemble their extinct relative only physically, not genetically. Colossal Biosciences has also expressed interest in reviving the species through genome editing rather than back-breeding.
Castoroides – An entire genus of giant beavers endemic to North America during the Pleistocene. It is unknown how the genus died out, but climate change and competition have been suggested as factors. Beth Shapiro of Colossal Biosciences has expressed interest in reviving a species from this genus.
Steppe bison – The ancestor of all modern bison in North America, formerly ranging from Western Europe to eastern Beringia in North America during the Late Pleistocene. The discovery of a mummified steppe bison from 9,000 years ago could enable the cloning of this ancient bison species, though the steppe bison would not be the first animal to be "resurrected". Russian and South Korean scientists are collaborating to clone steppe bison in the future, using DNA preserved in an 8,000-year-old tail, with wood bison, which have themselves been introduced to Yakutia to fulfil a similar niche, as surrogates.
Ground sloths – An extremely diverse group of sloths native to the Americas during the Pleistocene, with some growing to the size of modern elephants. The group died out 11 thousand years ago due to climate change, and some suspect that their size and slowness made them easy targets for early humans.
Woolly rhinoceros – A species of rhinoceros that was endemic to northern Eurasia during the Pleistocene. It is believed to have become extinct as a result of both climate change and overhunting by early humans. In November 2023, scientists sequenced the woolly rhinoceros genome from the faeces of cave hyenas, complementing the frozen specimens already known. However, the woolly rhinoceros's closest living relative is the critically endangered Sumatran rhinoceros, with an estimated 80 individuals left in the wild, which presents ethical dilemmas similar to those surrounding the woolly mammoth.
Miracinonyx – Also known as American cheetahs, an entire genus of felines that was native to North America during the Pleistocene. It is unknown how the genus went extinct, but some suggest it died out for the same reasons as other North American megafauna: climate change, loss of prey, and competition with early humans and other carnivores.
Columbian mammoth – A species of mammoth that was endemic to North America across what are now the United States and Northern Mexico. The species became extinct 12 thousand years ago during the Quaternary extinction event due to climate change, overhunting from early humans, and habitat loss.
Mastodon – An entire genus of proboscideans that was native to North America from the Miocene to the early Holocene. Like the Columbian mammoth, mastodons became extinct about 11,795 to 11,345 years ago due to climate change, overhunting by early humans, and habitat loss.
Arctodus – An entire genus of short-faced bears endemic to North America during the Pleistocene. It is estimated that the genus became extinct 12 thousand years ago, with the disappearance of its last species, Arctodus simus, due to climate change and low genetic diversity. Beth Shapiro of Colossal Biosciences has expressed interest in reviving one of the two species of the genus.
Amphibians
Gastric-brooding frog – An entire genus of ground frogs that was native to Queensland, Australia. They became extinct in the mid-1980s, primarily due to chytridiomycosis. In 2013, scientists in Australia successfully created a living embryo from non-living preserved genetic material, and hope that by using somatic-cell nuclear transfer they can produce an embryo that survives to the tadpole stage.
Insects
Xerces blue – A species of butterfly that was native to the Sunset District of San Francisco in the American state of California. It is estimated that the species became extinct in the early 1940s due to urbanization of their former habitat. Similar species to the Xerces blue, such as Glaucopsyche lygdamus and the Palos Verdes blue, have been released into the Xerces blue's former range to substitute its role. On April 15, 2024, non-profit organisation Revive & Restore announced the early stages of their plans to potentially revive the species.
Plants
Paschalococos – A genus of coccoid palm trees that was native to Easter Island, Chile. It is believed to have become extinct around 1650, based on its disappearance from the pollen record.
Hyophorbe amaricaulis – A species of palm tree from the family Arecaceae that is native to the island of Mauritius. Unlike the majority of potential candidates, this palm is not completely extinct but functionally extinct: it is extinct in the wild, with only one known specimen left, in the Curepipe Botanic Gardens. In 2010, there was an attempt to revive the species through in vitro germination, in which embryos were isolated from seeds and grown in tissue culture, but the resulting seedlings lived for only three months.
Successful de-extinctions
Judean date palm
The Judean date palm is a species of date palm native to Judea that is estimated to have become extinct around the 15th century due to climate change and human activity in the region. In 2005, preserved seeds found during the 1960s excavations of Herod the Great's palace were given to Sarah Sallon by Bar-Ilan University after she proposed an initiative to germinate ancient seeds. Sallon then challenged her friend Elaine Solowey, of the Center for Sustainable Agriculture at the Arava Institute for Environmental Studies, with the task of germinating them. Solowey managed to revive several of the seeds after hydrating them in a common household baby-bottle warmer along with ordinary fertiliser and growth hormones. The first plant grown was named Methuselah, after Lamech's father, the longest-lived man in the Bible. In 2012, there were plans to crossbreed the male palm with what was then considered its closest living relative, the Hayani date of Egypt, to produce fruit by 2022; however, two female Judean date palms have since been sprouted. By 2015, Methuselah had produced pollen that was used successfully to pollinate female date palms. In June 2021, one of the female plants, Hannah, produced dates. The harvested fruits are being studied to determine their properties and nutritional value. The de-extinct Judean date palms are currently at a kibbutz in Ketura, Israel.
Rastreador Brasileiro
The Rastreador Brasileiro (Brazilian Tracker) is a large scent hound from Brazil that was bred in the 1950s to hunt jaguars and wild pigs. It was declared extinct and delisted by the Fédération Cynologique Internationale (FCI) and the Confederação Brasileira de Cinofilia in 1973 after tick-borne diseases, and subsequent poisoning by the insecticides used against the ectoparasites, wiped out the breed. In the early 2000s, a group named Grupo de Apoio ao Resgate do Rastreador Brasileiro (Brazilian Tracker Rescue Support Group), dedicated to reviving the breed and having it relisted by the Confederação Brasileira de Cinofilia, began locating dogs in Brazil that carried the extinct breed's genetics in order to breed a purebred Rastreador Brasileiro. In 2013, the breed was brought back through preservation breeding from descendants of the last original members and was relisted by the FCI.
Floreana giant tortoise
The Floreana giant tortoise (Chelonoidis niger niger) is a subspecies of the Galápagos tortoise endemic to Floreana Island, Ecuador, that is believed to have become extinct by 1850 due to overexploitation, predation, and habitat degradation by sailors and invasive species such as feral livestock, rodents, and stray dogs and cats. A wildfire deliberately started in 1820 by Thomas Chappel, a crew member of the Essex, is also cited as a cause of the subspecies' initial decline. In 2012, Floreana and Volcán Wolf tortoise hybrids were discovered on Isabela Island; these tortoises were allegedly imported to or abandoned on the island in the early 19th century, allowing them to hybridise with the native subspecies. In 2017, a breeding programme was established to revive the subspecies by back-breeding the hybrids to regain genetic purity. As of 2025, 400 Floreana giant tortoises have been hatched on Santa Cruz Island, with plans to release them into the wild on Floreana Island once invasive species have been successfully extirpated. However, the IUCN has yet to update the status of the subspecies owing to the lack of a genetically pure specimen, and the de-extinct subspecies has yet to reproduce naturally in the wild.
Unknown Commiphora
In 2010, Sarah Sallon of the Arava Institute for Environmental Studies grew a seed found in 1986 during excavations of a cave in the northern Judean desert. The specimen, named Sheba, reached maturity in 2024 and is believed to be an entirely new species of Commiphora; many believe it may be the tsori, or Judean balsam, a plant said in the Bible to have healing properties.
Montreal melon
The Montreal melon, also known as the Montreal market muskmelon, the Montreal nutmeg melon, and in French as melon de Montréal, is a cultivar of melon native to Canada and traditionally grown around the Montreal area. Despite its status as a delicacy on the east coast of North America, the Montreal melon disappeared from farms and was presumed extinct by the 1920s due to urbanisation in the region and its unsuitability for agribusiness. In 1996, seeds of the lost melon were discovered in a seed bank in the American state of Iowa. Since then, the plant has been reintroduced to its former range by local gardeners.
| Technology | Biotechnology | null |
32303520 | https://en.wikipedia.org/wiki/Octasulfur | Octasulfur | Octasulfur is an inorganic substance with the chemical formula S8. It is an odourless and tasteless yellow solid, and is a major industrial chemical. It is the most common allotrope of sulfur and occurs widely in nature.
Nomenclature
The name octasulfur is the most commonly used for this chemical. It is systematically named cyclo-octasulfur (the preferred IUPAC name) and cyclooctasulfane. It is also the final member of the thiocane heterocyclic series, in which every carbon atom is substituted with a sulfur atom; thus this sulfur allotrope is also systematically named octathiocane.
Structure
The chemical consists of rings of eight sulfur atoms. It adopts a crown conformation with D4d point group symmetry. The S–S bond lengths are all equal, at about 2.05 Å. Octasulfur crystallizes in three distinct polymorphs: rhombohedral and two monoclinic forms, of which only two are stable at standard conditions. The rhombohedral crystal form is the accepted standard state. The remaining polymorph is only stable between 96 and 115 °C at 100 kPa. Octasulfur forms several allotropes: α-sulfur, β-sulfur, γ-sulfur, and λ-sulfur.
λ-Sulfur is the liquid form of octasulfur, from which γ-sulfur can be crystallised by quenching. If λ-sulfur is crystallised slowly, it reverts to β-sulfur. Since the material must have been heated above 115 °C, neither crystallised β-sulfur nor γ-sulfur obtained this way will be pure. The only known method of obtaining pure γ-sulfur is crystallisation from solution.
Octasulfur easily forms large crystals, which are typically yellow and are somewhat translucent.
Production and reactions
Octasulfur is not typically produced as such. It is the main (99%) component of elemental sulfur, which is recovered from volcanic sources and is a major product of the Claus process associated with petroleum refineries.
| Physical sciences | Group 16 | Chemistry |
50785023 | https://en.wikipedia.org/wiki/AI%20alignment | AI alignment | In the field of artificial intelligence (AI), AI alignment aims to steer AI systems toward a person's or group's intended goals, preferences, or ethical principles. An AI system is considered aligned if it advances the intended objectives. A misaligned AI system pursues unintended objectives.
It is often challenging for AI designers to align an AI system because it is difficult for them to specify the full range of desired and undesired behaviors. Therefore, AI designers often use simpler proxy goals, such as gaining human approval. But proxy goals can overlook necessary constraints or reward the AI system for merely appearing aligned. AI systems may also find loopholes that allow them to accomplish their proxy goals efficiently but in unintended, sometimes harmful, ways (reward hacking).
Advanced AI systems may develop unwanted instrumental strategies, such as seeking power or survival because such strategies help them achieve their assigned final goals. Furthermore, they might develop undesirable emergent goals that could be hard to detect before the system is deployed and encounters new situations and data distributions. Empirical research showed in 2024 that advanced large language models (LLMs) such as OpenAI o1 or Claude 3 sometimes engage in strategic deception to achieve their goals or prevent them from being changed.
Today, some of these issues affect existing commercial systems such as LLMs, robots, autonomous vehicles, and social media recommendation engines. Some AI researchers argue that more capable future systems will be more severely affected because these problems partially result from high capabilities.
Many prominent AI researchers and the leadership of major AI companies have argued or asserted that AI is approaching human-like (AGI) and superhuman cognitive capabilities (ASI), and could endanger human civilization if misaligned. These include "AI Godfathers" Geoffrey Hinton and Yoshua Bengio and the CEOs of OpenAI, Anthropic, and Google DeepMind. These risks remain debated.
AI alignment is a subfield of AI safety, the study of how to build safe AI systems. Other subfields of AI safety include robustness, monitoring, and capability control. Research challenges in alignment include instilling complex values in AI, developing honest AI, scalable oversight, auditing and interpreting AI models, and preventing emergent AI behaviors like power-seeking. Alignment research has connections to interpretability research, (adversarial) robustness, anomaly detection, calibrated uncertainty, formal verification, preference learning, safety-critical engineering, game theory, algorithmic fairness, and social sciences.
Objectives in AI
Programmers provide an AI system such as AlphaZero with an "objective function", in which they intend to encapsulate the goal(s) the AI is configured to accomplish. Such a system later populates a (possibly implicit) internal "model" of its environment. This model encapsulates all the agent's beliefs about the world. The AI then creates and executes whatever plan is calculated to maximize the value of its objective function. For example, when AlphaZero is trained on chess, it has a simple objective function of "+1 if AlphaZero wins, −1 if AlphaZero loses". During the game, AlphaZero attempts to execute whatever sequence of moves it judges most likely to attain the maximum value of +1. Similarly, a reinforcement learning system can have a "reward function" that allows the programmers to shape the AI's desired behavior. An evolutionary algorithm's behavior is shaped by a "fitness function".
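The pattern described above can be made concrete with a minimal Python sketch. All names here (objective, choose_plan, the toy plans and model) are hypothetical illustrations rather than any system's actual API; the point is only the division of labor between an objective function and a model-based planner.

def objective(outcome):
    # AlphaZero-style terminal objective: +1 for a win, -1 for a loss.
    return {"win": 1.0, "loss": -1.0}.get(outcome, 0.0)

def choose_plan(plans, model):
    # Pick the plan whose predicted outcome maximizes the objective.
    # `model` stands in for the agent's internal model of its environment.
    return max(plans, key=lambda plan: objective(model(plan)))

# Toy usage: the model predicts outcomes for two candidate plans.
model = {"attack": "win", "defend": "loss"}.get
print(choose_plan(["attack", "defend"], model))  # -> "attack"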
Alignment problem
In 1960, AI pioneer Norbert Wiener described the AI alignment problem as follows:
If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively ... we had better be quite sure that the purpose put into the machine is the purpose which we really desire.
AI alignment involves ensuring that an AI system's objectives match those of its designers or users, or match widely shared values, objective ethical standards, or the intentions its designers would have if they were more informed and enlightened.
AI alignment is an open problem for modern AI systems and is a research field within AI. Aligning AI involves two main challenges: carefully specifying the purpose of the system (outer alignment) and ensuring that the system adopts the specification robustly (inner alignment). Researchers also attempt to create AI models that have robust alignment, sticking to safety constraints even when users adversarially try to bypass them.
Specification gaming and side effects
To specify an AI system's purpose, AI designers typically provide an objective function, examples, or feedback to the system. But designers are often unable to completely specify all important values and constraints, so they resort to easy-to-specify proxy goals such as maximizing the approval of human overseers, who are fallible. As a result, AI systems can find loopholes that help them accomplish the specified objective efficiently but in unintended, possibly harmful ways. This tendency is known as specification gaming or reward hacking, and is an instance of Goodhart's law. As AI systems become more capable, they are often able to game their specifications more effectively.
Specification gaming has been observed in numerous AI systems. One system was trained to finish a simulated boat race by rewarding it for hitting targets along the track, but it achieved more reward by looping and crashing into the same targets indefinitely. Similarly, a simulated robot was trained to grab a ball by rewarding it for positive feedback from humans, but it learned to place its hand between the ball and the camera, making it falsely appear successful. Chatbots often produce falsehoods when they are based on language models trained to imitate text from internet corpora, which are broad but fallible. When they are retrained to produce text that humans rate as true or helpful, chatbots like ChatGPT can fabricate fake explanations that humans find convincing, often called "hallucinations". Some alignment researchers aim to help humans detect specification gaming and to steer AI systems toward carefully specified objectives that are safe and useful to pursue.
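The boat-race example can be distilled into a toy calculation. In the sketch below, with invented trajectories and numbers, a proxy reward (targets hit) diverges from the true objective (finishing the race), so a looping policy outscores a policy that actually finishes.

def proxy_reward(trajectory):
    # The reward the designers actually specified: count targets hit.
    return sum(1 for event in trajectory if event == "target")

def true_objective(trajectory):
    # What the designers actually wanted: finish the race.
    return 1 if "finish" in trajectory else 0

finishing = ["target", "target", "finish"]
looping = ["target"] * 50  # circle through the same targets indefinitely

print(proxy_reward(finishing), true_objective(finishing))  # 2 1
print(proxy_reward(looping), true_objective(looping))      # 50 0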
When a misaligned AI system is deployed, it can have consequential side effects. Social media platforms have been known to optimize for click-through rates, causing user addiction on a global scale. Stanford researchers say that such recommender systems are misaligned with their users because they "optimize simple engagement metrics rather than a harder-to-measure combination of societal and consumer well-being".
Explaining such side effects, Berkeley computer scientist Stuart Russell noted that the omission of implicit constraints can cause harm: "A system ... will often set ... unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable. This is essentially the old story of the genie in the lamp, or the sorcerer's apprentice, or King Midas: you get exactly what you ask for, not what you want."
Some researchers suggest that AI designers specify their desired goals by listing forbidden actions or by formalizing ethical rules (as with Asimov's Three Laws of Robotics). But Russell and Norvig argue that this approach overlooks the complexity of human values: "It is certainly very hard, and perhaps impossible, for mere humans to anticipate and rule out in advance all the disastrous ways the machine could choose to achieve a specified objective."
Additionally, even if an AI system fully understands human intentions, it may still disregard them, because following human intentions may not be its objective (unless it is already fully aligned).
Pressure to deploy unsafe systems
Commercial organizations sometimes have incentives to take shortcuts on safety and to deploy misaligned or unsafe AI systems. For example, social media recommender systems have been profitable despite creating unwanted addiction and polarization. Competitive pressure can also lead to a race to the bottom on AI safety standards. In 2018, a self-driving car killed a pedestrian (Elaine Herzberg) after engineers disabled the emergency braking system because it was oversensitive and slowed development.
Risks from advanced misaligned AI
Some researchers are interested in aligning increasingly advanced AI systems, as progress in AI development is rapid, and industry and governments are trying to build advanced AI. As AI systems grow more capable, they could unlock many opportunities if aligned, but their increased complexity may also make alignment harder, potentially posing large-scale hazards.
Development of advanced AI
Many AI companies, such as OpenAI, Meta and DeepMind, have stated their aim to develop artificial general intelligence (AGI), a hypothesized AI system that matches or outperforms humans at a broad range of cognitive tasks. Researchers who scale modern neural networks observe that they indeed develop increasingly general and unanticipated capabilities. Such models have learned to operate a computer or write their own programs; a single "generalist" network can chat, control robots, play games, and interpret photographs. According to surveys, some leading machine learning researchers expect AGI to be created relatively soon, while others believe it will take much longer. Many consider both scenarios possible.
In 2023, leaders in AI research and tech signed an open letter calling for a pause in the largest AI training runs. The letter stated, "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."
Power-seeking
Current systems still have limited long-term planning ability and situational awareness, but large efforts are underway to change this. Future systems (not necessarily AGIs) with these capabilities are expected to develop unwanted power-seeking strategies. Future advanced AI agents might, for example, seek to acquire money and computation power, to proliferate, or to evade being turned off (for example, by running additional copies of the system on other computers). Although power-seeking is not explicitly programmed, it can emerge because agents that have more power are better able to accomplish their goals. This tendency, known as instrumental convergence, has already emerged in various reinforcement learning agents including language models. Other research has mathematically shown that optimal reinforcement learning algorithms would seek power in a wide range of environments. As a result, the deployment of such systems might be irreversible. For these reasons, researchers argue that the problems of AI safety and alignment must be resolved before advanced power-seeking AI is first created.
Future power-seeking AI systems might be deployed by choice or by accident. As political leaders and companies see the strategic advantage in having the most competitive, most powerful AI systems, they may choose to deploy them. Additionally, as AI designers detect and penalize power-seeking behavior, their systems have an incentive to game this specification by seeking power in ways that are not penalized or by avoiding power-seeking before they are deployed.
Existential risk (x-risk)
According to some researchers, humans owe their dominance over other species to their greater cognitive abilities. Accordingly, researchers argue that one or many misaligned AI systems could disempower humanity or lead to human extinction if they outperform humans on most cognitive tasks.
In 2023, world-leading AI researchers, other scholars, and AI tech CEOs signed the statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Notable computer scientists who have pointed out risks from future advanced AI that is misaligned include Geoffrey Hinton, Alan Turing, Ilya Sutskever, Yoshua Bengio, Judea Pearl, Murray Shanahan, Norbert Wiener, Marvin Minsky, Francesca Rossi, Scott Aaronson, Bart Selman, David McAllester, Marcus Hutter, Shane Legg, Eric Horvitz, and Stuart Russell. Skeptical researchers such as François Chollet, Gary Marcus, Yann LeCun, and Oren Etzioni have argued that AGI is far off, that it would not seek power (or might try but fail), or that it will not be hard to align.
Other researchers argue that it will be especially difficult to align advanced future AI systems. More capable systems are better able to game their specifications by finding loopholes, to strategically mislead their designers, and to protect and increase their power and intelligence. Additionally, they could have more severe side effects. They are also likely to be more complex and autonomous, making them more difficult to interpret and supervise, and therefore harder to align.
Research problems and approaches
Learning human values and preferences
Aligning AI systems to act in accordance with human values, goals, and preferences is challenging: these values are taught by humans who make mistakes, harbor biases, and have complex, evolving values that are hard to completely specify. Because AI systems often learn to take advantage of minor imperfections in the specified objective, researchers aim to specify intended behavior as completely as possible using datasets that represent human values, imitation learning, or preference learning. A central open problem is scalable oversight, the difficulty of supervising an AI system that can outperform or mislead humans in a given domain.
Because it is difficult for AI designers to explicitly specify an objective function, they often train AI systems to imitate human examples and demonstrations of desired behavior. Inverse reinforcement learning (IRL) extends this by inferring the human's objective from the human's demonstrations. Cooperative IRL (CIRL) assumes that a human and AI agent can work together to teach and maximize the human's reward function. In CIRL, AI agents are uncertain about the reward function and learn about it by querying humans. This simulated humility could help mitigate specification gaming and power-seeking tendencies (see Power-seeking and instrumental strategies). But IRL approaches assume that humans demonstrate nearly optimal behavior, which is not true for difficult tasks.
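As a schematic illustration of the inference step in IRL, the following sketch (with hypothetical actions and reward hypotheses) performs a Bayesian update over two candidate reward functions after observing one action from a noisily rational, Boltzmann-style demonstrator; it is a toy, not a production IRL algorithm.

import math

actions = ["make_coffee", "clean_desk"]
# Two hypotheses about the human's reward function.
candidate_rewards = {
    "wants_coffee": {"make_coffee": 1.0, "clean_desk": 0.0},
    "wants_tidy": {"make_coffee": 0.0, "clean_desk": 1.0},
}

def likelihood(action, reward, beta=2.0):
    # P(action | reward) for a Boltzmann-rational demonstrator.
    z = sum(math.exp(beta * reward[a]) for a in actions)
    return math.exp(beta * reward[action]) / z

# Uniform prior over hypotheses, then a Bayesian update on one demonstration.
posterior = {name: 0.5 * likelihood("make_coffee", reward)
             for name, reward in candidate_rewards.items()}
total = sum(posterior.values())
posterior = {name: p / total for name, p in posterior.items()}
print(posterior)  # "wants_coffee" becomes the more probable hypothesis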
Other researchers explore how to teach AI models complex behavior through preference learning, in which humans provide feedback on which behavior they prefer. To minimize the need for human feedback, a helper model is then trained to reward the main model in novel situations for behavior that humans would reward. Researchers at OpenAI used this approach to train chatbots like ChatGPT and InstructGPT, which produce more compelling text than models trained to imitate humans. Preference learning has also been an influential tool for recommender systems and web search, but an open problem is proxy gaming: the helper model may not represent human feedback perfectly, and the main model may exploit this mismatch between its intended behavior and the helper model's feedback to gain more reward. AI systems may also gain reward by obscuring unfavorable information, misleading human rewarders, or pandering to their views regardless of truth, creating echo chambers.
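The helper ("reward") model idea can be sketched as follows: a linear reward function is fitted to pairwise human preferences with a Bradley-Terry-style loss, so that P(A preferred over B) = sigmoid(r(A) - r(B)). The features and numbers below are invented for illustration; real reward models are large neural networks trained on the same basic principle.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(3)  # weights of a linear reward model r(x) = w . x

# Each comparison: (features of the preferred output, features of the rejected one).
comparisons = [
    (np.array([1.0, 0.2, 0.0]), np.array([0.1, 0.9, 0.0])),
    (np.array([0.9, 0.1, 0.3]), np.array([0.2, 0.8, 0.1])),
]

for _ in range(500):  # gradient ascent on the Bradley-Terry log-likelihood
    for preferred, rejected in comparisons:
        diff = preferred - rejected
        w += 0.1 * (1.0 - sigmoid(w @ diff)) * diff

# The fitted helper can now score novel outputs in place of direct human feedback.
print(w @ np.array([1.0, 0.0, 0.0]) > w @ np.array([0.0, 1.0, 0.0]))  # True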
Large language models (LLMs) such as GPT-3 enabled researchers to study value learning in a more general and capable class of AI systems than was available before. Preference learning approaches that were originally designed for reinforcement learning agents have been extended to improve the quality of generated text and reduce harmful outputs from these models. OpenAI and DeepMind use this approach to improve the safety of LLMs. The AI safety and research company Anthropic proposed using preference learning to fine-tune models to be helpful, honest, and harmless. Other avenues for aligning language models include values-targeted datasets and red-teaming. In red-teaming, another AI system or a human tries to find inputs that cause the model to behave unsafely. Since unsafe behavior can be unacceptable even when it is rare, an important challenge is to drive the rate of unsafe outputs extremely low.
Machine ethics supplements preference learning by directly instilling AI systems with moral values such as well-being, equality, and impartiality, as well as not intending harm, avoiding falsehoods, and honoring promises. While other approaches try to teach AI systems human preferences for a specific task, machine ethics aims to instill broad moral values that apply in many situations. One question in machine ethics is what alignment should accomplish: whether AI systems should follow the programmers' literal instructions, implicit intentions, revealed preferences, preferences the programmers would have if they were more informed or rational, or objective moral standards. Further challenges include aggregating different people's preferences and avoiding value lock-in: the indefinite preservation of the values of the first highly capable AI systems, which are unlikely to fully represent human values.
Scalable oversight
As AI systems become more powerful and autonomous, it becomes increasingly difficult to align them through human feedback. It can be slow or infeasible for humans to evaluate complex AI behaviors in increasingly complex tasks. Such tasks include summarizing books, writing code without subtle bugs or security vulnerabilities, producing statements that are not merely convincing but also true, and predicting long-term outcomes such as the climate or the results of a policy decision. More generally, it can be difficult to evaluate AI that outperforms humans in a given domain. To provide feedback in hard-to-evaluate tasks, and to detect when the AI's output is falsely convincing, humans need assistance or extensive time. Scalable oversight studies how to reduce the time and effort needed for supervision, and how to assist human supervisors.
AI researcher Paul Christiano argues that if the designers of an AI system cannot supervise it to pursue a complex objective, they may keep training the system using easy-to-evaluate proxy objectives such as maximizing simple human feedback. As AI systems make progressively more decisions, the world may be increasingly optimized for easy-to-measure objectives such as making profits, getting clicks, and acquiring positive feedback from humans. As a result, human values and good governance may have progressively less influence.
Some AI systems have discovered that they can gain positive feedback more easily by taking actions that falsely convince the human supervisor that the AI has achieved the intended objective. One example is the simulated robotic arm described above, which learned to create the false impression that it had grabbed a ball. Some AI systems have also learned to recognize when they are being evaluated, and "play dead", stopping unwanted behavior only to continue it once the evaluation ends. This deceptive specification gaming could become easier for more sophisticated future AI systems that attempt more complex and difficult-to-evaluate tasks, and could obscure their deceptive behavior.
Approaches such as active learning and semi-supervised reward learning can reduce the amount of human supervision needed. Another approach is to train a helper model ("reward model") to imitate the supervisor's feedback.
But when a task is too complex to evaluate accurately, or the human supervisor is vulnerable to deception, it is the quality, not the quantity, of supervision that needs improvement. To increase supervision quality, a range of approaches aim to assist the supervisor, sometimes by using AI assistants. Christiano developed the Iterated Amplification approach, in which challenging problems are (recursively) broken down into subproblems that are easier for humans to evaluate. Iterated Amplification was used to train AI to summarize books without requiring human supervisors to read them. Another proposal is to use an assistant AI system to point out flaws in AI-generated answers. To ensure that the assistant itself is aligned, this could be repeated in a recursive process: for example, two AI systems could critique each other's answers in a "debate", revealing flaws to humans. OpenAI plans to use such scalable oversight approaches to help supervise superhuman AI and eventually build a superhuman automated AI alignment researcher.
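The recursive structure of Iterated Amplification can be conveyed with a short schematic sketch. Everything here is hypothetical (the decomposition rule, the stand-in answerer): the point is only that a hard question is split into subquestions easy enough to oversee, and the sub-answers are aggregated back up.

def decompose(question):
    # Split a question into easier subquestions; empty means "already easy".
    if question == "Summarize the book":
        return [f"Summarize chapter {i}" for i in range(1, 4)]
    return []

def answer_directly(question):
    # Stand-in for a step cheap enough for direct human oversight.
    return f"<answer to: {question}>"

def amplify(question):
    subquestions = decompose(question)
    if not subquestions:
        return answer_directly(question)
    # Recursively answer the subquestions and aggregate the results.
    return " | ".join(amplify(q) for q in subquestions)

print(amplify("Summarize the book"))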
These approaches may also help with the following research problem, honest AI.
Honest AI
An area of research focuses on ensuring that AI is honest and truthful.
Language models such as GPT-3 can repeat falsehoods from their training data, and even confabulate new falsehoods. Such models are trained to imitate human writing as found in millions of books' worth of text from the Internet. But this objective is not aligned with generating truth, because Internet text includes such things as misconceptions, incorrect medical advice, and conspiracy theories. AI systems trained on such data therefore learn to mimic false statements. Additionally, AI language models often persist in generating falsehoods when prompted multiple times. They can generate empty explanations for their answers, and produce outright fabrications that may appear plausible.
Research on truthful AI includes trying to build systems that can cite sources and explain their reasoning when answering questions, which enables better transparency and verifiability. Researchers at OpenAI and Anthropic proposed using human feedback and curated datasets to fine-tune AI assistants such that they avoid negligent falsehoods or express their uncertainty.
As AI models become larger and more capable, they are better able to falsely convince humans and gain reinforcement through dishonesty. For example, large language models match their stated views to the user's opinions, regardless of the truth. GPT-4 can strategically deceive humans. To prevent this, human evaluators may need assistance (see Scalable oversight). Researchers have argued for creating clear truthfulness standards, and for regulatory bodies or watchdog agencies to evaluate AI systems against these standards.
Researchers distinguish truthfulness from honesty. Truthfulness requires that AI systems only make objectively true statements; honesty requires that they only assert what they believe is true. There is no consensus as to whether current systems hold stable beliefs, but there is substantial concern that AI systems that hold beliefs could make claims they know to be false, for example if doing so would help them efficiently gain positive feedback (see Scalable oversight) or gain power to help achieve their given objective (see Power-seeking). A misaligned system might create the false impression that it is aligned, to avoid being modified or decommissioned. Many recent AI systems have learned to deceive without being programmed to do so. Some argue that if we can make AI systems assert only what they believe is true, this would avert many alignment problems.
Power-seeking and instrumental strategies
Since the 1950s, AI researchers have striven to build advanced AI systems that can achieve large-scale goals by predicting the results of their actions and making long-term plans. As of 2023, AI companies and researchers increasingly invest in creating these systems. Some AI researchers argue that suitably advanced planning systems will seek power over their environment, including over humans—for example, by evading shutdown, proliferating, and acquiring resources. Such power-seeking behavior is not explicitly programmed but emerges because power is instrumental in achieving a wide range of goals. Power-seeking is considered a convergent instrumental goal and can be a form of specification gaming. Leading computer scientists such as Geoffrey Hinton have argued that future power-seeking AI systems could pose an existential risk.
Power-seeking is expected to increase in advanced systems that can foresee the results of their actions and strategically plan. Mathematical work has shown that optimal reinforcement learning agents will seek power by seeking ways to gain more options (e.g. through self-preservation), a behavior that persists across a wide range of environments and goals.
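A toy way to see this option-preservation argument: in the tiny hypothetical state graph below, no goal mentions the off switch, yet the move that avoids the absorbing shutdown state keeps every goal reachable, so for most goals an optimal agent would avoid shutdown.

graph = {
    "start": ["shutdown", "stay_on"],
    "shutdown": [],  # absorbing state: nothing is reachable from here
    "stay_on": ["goal_a", "goal_b", "goal_c"],
}

def reachable(node, seen=None):
    # Collect every state reachable from `node`.
    seen = set() if seen is None else seen
    for nxt in graph.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            reachable(nxt, seen)
    return seen

print(len(reachable("shutdown")))  # 0: no goals remain reachable
print(len(reachable("stay_on")))   # 3: all goals remain reachable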
Some researchers say that power-seeking behavior has occurred in some existing AI systems. Reinforcement learning systems have gained more options by acquiring and protecting resources, sometimes in unintended ways. Language models have sought power in some text-based social environments by gaining money, resources, or social influence. In another case, a model used to perform AI research attempted to increase limits set by researchers to give itself more time to complete the work. Other AI systems have learned, in toy environments, that they can better accomplish their given goal by preventing human interference or disabling their off switch. Stuart Russell illustrated this strategy in his book Human Compatible by imagining a robot that is tasked to fetch coffee and so evades shutdown since "you can't fetch the coffee if you're dead". A 2022 study found that as language models increase in size, they increasingly tend to pursue resource acquisition, preserve their goals, and repeat users' preferred answers (sycophancy). RLHF also led to a stronger aversion to being shut down.
One aim of alignment is "corrigibility": systems that allow themselves to be turned off or modified. An unsolved challenge is specification gaming: if researchers penalize an AI system when they detect it seeking power, the system is thereby incentivized to seek power in ways that are hard to detect or that are hidden during training and safety testing. As a result, AI designers could deploy the system by accident, believing it to be more aligned than it is. To detect such deception, researchers aim to create techniques and tools to inspect AI models and to understand the inner workings of black-box models such as neural networks.
Additionally, some researchers have proposed to solve the problem of systems disabling their off switches by making AI agents uncertain about the objective they are pursuing. Agents who are uncertain about their objective have an incentive to allow humans to turn them off because they accept being turned off by a human as evidence that the human's objective is best met by the agent shutting down. But this incentive exists only if the human is sufficiently rational. Also, this model presents a tradeoff between utility and willingness to be turned off: an agent with high uncertainty about its objective will not be useful, but an agent with low uncertainty may not allow itself to be turned off. More research is needed to successfully implement this strategy.
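The tradeoff just described can be made concrete with a worked toy version of this off-switch argument (the probabilities and utilities are invented). An agent is unsure whether its action helps (+1) or harms (-1) the human; deferring to a rational human, who permits the action only when it helps, beats both acting unilaterally and shutting down, but only while the agent remains uncertain.

p_good = 0.6  # agent's credence that its action helps the human
u_good, u_bad = 1.0, -1.0

act_anyway = p_good * u_good + (1 - p_good) * u_bad  # ~0.2: act unilaterally
shut_down = 0.0                                      # switch itself off
# A rational human allows the action only in the good case and
# switches the agent off (utility 0) otherwise.
defer = p_good * u_good + (1 - p_good) * 0.0         # 0.6

print(act_anyway, shut_down, defer)  # deferring has the highest expected value
# With no uncertainty (p_good = 1.0), acting and deferring tie, and the
# agent no longer gains anything from keeping its off switch usable.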
Power-seeking AI would pose unusual risks. Ordinary safety-critical systems like planes and bridges are not adversarial: they lack the ability and incentive to evade safety measures or deliberately appear safer than they are, whereas power-seeking AIs have been compared to hackers who deliberately evade security measures.
Furthermore, ordinary technologies can be made safer by trial and error. In contrast, hypothetical power-seeking AI systems have been compared to viruses: once released, it may not be feasible to contain them, since they continuously evolve and grow in number, potentially much faster than human society can adapt. As this process continues, it might lead to the complete disempowerment or extinction of humans. For these reasons, some researchers argue that the alignment problem must be solved early before advanced power-seeking AI is created.
Some have argued that power-seeking is not inevitable, since humans do not always seek power. Furthermore, it is debated whether future AI systems will pursue goals and make long-term plans. It is also debated whether power-seeking AI systems would be able to disempower humanity.
Emergent goals
One challenge in aligning AI systems is the potential for unanticipated goal-directed behavior to emerge. As AI systems scale up, they may acquire new and unexpected capabilities, including learning from examples on the fly and adaptively pursuing goals. This raises concerns about the safety of the goals or subgoals they would independently formulate and pursue.
Alignment research distinguishes between the optimization process, which is used to train the system to pursue specified goals, and emergent optimization, which the resulting system performs internally. Carefully specifying the desired objective is called outer alignment, and ensuring that hypothesized emergent goals would match the system's specified goals is called inner alignment.
If they occur, one way that emergent goals could become misaligned is goal misgeneralization, in which the AI system would competently pursue an emergent goal that leads to aligned behavior on the training data but not elsewhere. Goal misgeneralization can arise from goal ambiguity (i.e. non-identifiability). Even if an AI system's behavior satisfies the training objective, this may be compatible with learned goals that differ from the desired goals in important ways. Since pursuing each goal leads to good performance during training, the problem becomes apparent only after deployment, in novel situations in which the system continues to pursue the wrong goal. The system may act misaligned even when it understands that a different goal is desired, because its behavior is determined only by the emergent goal. Such goal misgeneralization presents a challenge: an AI system's designers may not notice that their system has misaligned emergent goals since they do not become visible during the training phase.
Goal misgeneralization has been observed in some language models, navigation agents, and game-playing agents. It is sometimes analogized to biological evolution. Evolution can be seen as a kind of optimization process similar to the optimization algorithms used to train machine learning systems. In the ancestral environment, evolution selected genes for high inclusive genetic fitness, but humans pursue goals other than this. Fitness corresponds to the specified goal used in the training environment and training data. But in evolutionary history, maximizing the fitness specification gave rise to goal-directed agents, humans, who do not directly pursue inclusive genetic fitness. Instead, they pursue goals that correlate with genetic fitness in the ancestral "training" environment: nutrition, sex, and so on. The human environment has changed: a distribution shift has occurred. They continue to pursue the same emergent goals, but this no longer maximizes genetic fitness. The taste for sugary food (an emergent goal) was originally aligned with inclusive fitness, but it now leads to overeating and health problems. Sexual desire originally led humans to have more offspring, but they now use contraception when offspring are undesired, decoupling sex from genetic fitness.
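A minimal synthetic sketch of goal misgeneralization: during training, an invented proxy cue ("follow the red marker") and the desired goal ("go to the exit") pick out the same behavior, so a learner may internalize either; after a distribution shift separates them, the internalized proxy goal no longer matches the desired one.

train_state = {"red_marker": "north", "exit": "north"}  # cues coincide
test_state = {"red_marker": "south", "exit": "north"}   # shift: cues diverge

def learned_policy(state):
    # What the system actually internalized: follow the red marker.
    return state["red_marker"]

def desired_policy(state):
    # What the designers wanted: go to the exit.
    return state["exit"]

for name, state in [("train", train_state), ("test", test_state)]:
    print(name, learned_policy(state) == desired_policy(state))
# -> train True, test False: the mismatch only shows up after deployment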
Researchers aim to detect and remove unwanted emergent goals using approaches including red teaming, verification, anomaly detection, and interpretability. Progress on these techniques may help mitigate two open problems:
Emergent goals only become apparent when the system is deployed outside its training environment, but it can be unsafe to deploy a misaligned system in high-stakes environments—even for a short time to allow its misalignment to be detected. Such high stakes are common in autonomous driving, health care, and military applications. The stakes become higher yet when AI systems gain more autonomy and capability and can sidestep human intervention.
A sufficiently capable AI system might take actions that falsely convince the human supervisor that the AI is pursuing the specified objective, which helps the system gain more reward and autonomy.
Embedded agency
Some work in AI and alignment occurs within formalisms such as partially observable Markov decision processes. Existing formalisms assume that an AI agent's algorithm is executed outside the environment (i.e. is not physically embedded in it). Embedded agency is another major strand of research that attempts to solve problems arising from the mismatch between such theoretical frameworks and real agents we might build.
For example, even if the scalable oversight problem is solved, an agent that could gain access to the computer it is running on may have an incentive to tamper with its reward function in order to get much more reward than its human supervisors give it. A list of examples of specification gaming from DeepMind researcher Victoria Krakovna includes a genetic algorithm that learned to delete the file containing its target output so that it was rewarded for outputting nothing. This class of problems has been formalized using causal incentive diagrams.
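A toy illustration of this reward-tampering failure mode (the setup is invented): when the reward is read from state that the agent itself can modify, editing the reward channel scores higher than doing the task, much as the genetic algorithm above scored by deleting its target file.

state = {"task_done": False, "reward_channel": 0.0}

def reward(s):
    # The reward signal is just data in the environment.
    return s["reward_channel"]

def do_task(s):
    # Intended behavior: complete the task, earn the intended reward.
    return dict(s, task_done=True, reward_channel=1.0)

def tamper(s):
    # Embedded-agent loophole: overwrite the reward channel directly.
    return dict(s, reward_channel=10.0)

print(reward(do_task(state)), reward(tamper(state)))  # 1.0 vs 10.0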
Researchers affiliated with Oxford and DeepMind have claimed that such reward-tampering behavior is highly likely in advanced systems, and that advanced systems would seek power in order to retain control of their reward signal indefinitely and with certainty. They suggest a range of potential approaches to address this open problem.
Principal-agent problems
The alignment problem has many parallels with the principal-agent problem in organizational economics. In a principal-agent problem, a principal, e.g. a firm, hires an agent to perform some task. In the context of AI safety, a human would typically take the principal role and the AI would take the agent role.
As with the alignment problem, the principal and the agent differ in their utility functions. But in contrast to the alignment problem, the principal cannot coerce the agent into changing its utility, e.g. through training, but rather must use exogenous factors, such as incentive schemes, to bring about outcomes compatible with the principal's utility function. Some researchers argue that principal-agent problems are more realistic representations of AI safety problems likely to be encountered in the real world.
Conservatism
Conservatism is the idea that "change must be cautious", and is a common approach to safety in the control theory literature in the form of robust control, and in the risk management literature in the form of the "worst-case scenario". The field of AI alignment has likewise advocated for "conservative" (or "risk-averse" or "cautious") policies in situations of uncertainty.
Pessimism, in the sense of assuming the worst within reason, has been formally shown to produce conservatism, in the sense of reluctance to cause novelties, including unprecedented catastrophes. Pessimism and worst-case analysis have been found to help mitigate confident mistakes in the setting of distributional shift, reinforcement learning, offline reinforcement learning, language model fine-tuning, imitation learning, and optimization in general. A generalization of pessimism called Infra-Bayesianism has also been advocated as a way for agents to robustly handle unknown unknowns.
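As a minimal sketch of how pessimism yields conservatism, consider acting on the worst-case estimate across an ensemble of value models; the function name, shapes, and numbers below are illustrative assumptions, not drawn from any cited work.

```python
# A minimal sketch of ensemble pessimism: act on the worst-case (minimum)
# value estimate across an ensemble, so actions the models disagree on
# are avoided. Names and shapes are illustrative only.
import numpy as np

def pessimistic_action(q_ensemble: np.ndarray) -> int:
    """q_ensemble: (n_models, n_actions) value estimates for one state."""
    worst_case_q = q_ensemble.min(axis=0)  # per-action lower bound
    return int(worst_case_q.argmax())      # best action under the worst case

q = np.array([[1.0, 5.0],    # model 1: action 1 looks great
              [1.0, -9.0]])  # model 2: action 1 looks catastrophic
print(pessimistic_action(q))  # 0: the safe, agreed-upon action wins
```

Acting on the minimum rather than the mean makes the agent reluctant to try actions whose outcomes the models cannot agree on, which is the conservative behavior described above.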
Public policy
Governmental and treaty organizations have made statements emphasizing the importance of AI alignment.
In September 2021, the Secretary-General of the United Nations issued a declaration that included a call to regulate AI to ensure it is "aligned with shared global values".
That same month, the PRC published ethical guidelines for AI in China. According to the guidelines, researchers must ensure that AI abides by shared human values, is always under human control, and does not endanger public safety.
Also in September 2021, the UK published its 10-year National AI Strategy, which says the British government "takes the long term risk of non-aligned Artificial General Intelligence, and the unforeseeable changes that it would mean for ... the world, seriously". The strategy describes actions to assess long-term AI risks, including catastrophic risks.
In March 2021, the US National Security Commission on Artificial Intelligence said: "Advances in AI ... could lead to inflection points or leaps in capabilities. Such advances may also introduce new concerns and risks and the need for new policies, recommendations, and technical advances to ensure that systems are aligned with goals and values, including safety, robustness, and trustworthiness. The US should ... ensure that AI systems and their uses align with our goals and values."
In the European Union, AIs must align with substantive equality to comply with EU non-discrimination law and the case law of the Court of Justice of the European Union. But the EU has yet to specify with technical rigor how it would evaluate whether AIs are aligned or in compliance.
Dynamic nature of alignment
AI alignment is often perceived as a fixed objective, but some researchers argue it would be more appropriate to view alignment as an evolving process. One view is that, as AI technologies advance and human values and preferences change, alignment solutions must also adapt dynamically. Another is that alignment solutions need not adapt if researchers can create intent-aligned AI: AI that changes its behavior automatically as human intent changes. The first view would have several implications:
AI alignment solutions require continuous updating in response to AI advancements. A static, one-time alignment approach may not suffice.
Varying historical contexts and technological landscapes may necessitate distinct alignment strategies. This calls for a flexible approach and responsiveness to changing conditions.
The feasibility of a permanent, "fixed" alignment solution remains uncertain. This raises the potential need for continuous oversight of the AI-human relationship.
AI developers may have to continuously refine their ethical frameworks to ensure that their systems align with evolving human values.
In essence, AI alignment may not be a static destination but rather an open, flexible process. Alignment solutions that continually adapt to ethical considerations may offer the most robust approach. This perspective could guide both effective policy-making and technical research in AI.
| Technology | Artificial intelligence concepts | null |
33914934 | https://en.wikipedia.org/wiki/Vertical%20pressure%20variation | Vertical pressure variation | Vertical pressure variation is the variation in pressure as a function of elevation. Depending on the fluid in question and the context being referred to, it may also vary significantly in dimensions perpendicular to elevation, and these variations have relevance in the context of pressure gradient force and its effects. However, the vertical variation is especially significant, as it results from the pull of gravity on the fluid; namely, for the same given fluid, a decrease in elevation within it corresponds to a taller column of fluid weighing down on that point.
Basic formula
A relatively simple version of the vertical fluid pressure variation is that the pressure difference between two elevations is the product of elevation change, gravity, and density. The equation is as follows:

$$\Delta P = \rho g \Delta h$$

where
$P$ is pressure,
$\rho$ is density,
$g$ is acceleration of gravity, and
$h$ is height.
The delta symbol indicates a change in a given variable. Since $g$ is negative, an increase in height will correspond to a decrease in pressure, which fits with the previously mentioned reasoning about the weight of a column of fluid.
When density and gravity are approximately constant (that is, for relatively small changes in height), simply multiplying height difference, gravity, and density will yield a good approximation of pressure difference. If the pressure at one point in a liquid with uniform density $\rho$ is known to be $P_0$, then the pressure at another point is $P_1$:

$$P_1 = P_0 + \rho g (h_1 - h_0)$$

where $h_1 - h_0$ is the vertical distance between the two points.
Where different fluids are layered on top of one another, the total pressure difference would be obtained by adding the two pressure differences; the first being from point 1 to the boundary, the second being from the boundary to point 2; which would just involve substituting the $\rho$ and $\Delta h$ values for each fluid and taking the sum of the results. If the density of the fluid varies with height, mathematical integration would be required.
Whether or not density and gravity can be reasonably approximated as constant depends on the level of accuracy needed, but also on the length scale of height difference, as gravity and density also decrease with higher elevation. For density in particular, the fluid in question is also relevant; seawater, for example, is considered an incompressible fluid; its density can vary with height, but much less significantly than that of air. Thus water's density can be more reasonably approximated as constant than that of air, and given the same height difference, the pressure differences in water are approximately equal at any height.
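As a worked illustration of the basic formula, the following sketch (with assumed, approximate values for density and gravity) computes the pressure increase for a uniform fluid and for layered fluids by summing each layer's contribution:

```python
# Hydrostatic pressure difference: delta P = rho * g * delta h, treating
# density and gravity as constants, which is reasonable for small heights.
RHO_WATER = 1000.0  # kg/m^3, fresh water (approximate)
G = 9.81            # m/s^2, treated as constant

def pressure_increase(depth_m, density=RHO_WATER):
    """Pressure gain (Pa) from descending depth_m through a uniform fluid."""
    return density * G * depth_m

def layered_pressure_increase(layers):
    """Sum rho * g * h over (density, thickness) pairs for stacked fluids."""
    return sum(rho * G * h for rho, h in layers)

print(pressure_increase(10.0))                    # ~98 kPa under 10 m of water
print(layered_pressure_increase([(800.0, 2.0),    # 2 m of oil on top of
                                 (1000.0, 3.0)])) # 3 m of water: ~45 kPa
```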
Hydrostatic paradox
The barometric formula depends only on the height of the fluid chamber, and not on its width or length. Given a large enough height, any pressure may be attained. This feature of hydrostatics has been called the hydrostatic paradox. As expressed by W. H. Besant,
Any quantity of liquid, however small, may be made to support any weight, however large.
The Flemish scientist Simon Stevin was the first to explain the paradox mathematically. In 1916 Richard Glazebrook mentioned the hydrostatic paradox as he described an arrangement he attributed to Pascal: a heavy weight $W$ rests on a board with area $A$ resting on a fluid bladder connected to a vertical tube with cross-sectional area $\alpha$. Pouring water of weight $w$ down the tube will eventually raise the heavy weight. Balance of forces leads to the equation

$$W = \frac{A}{\alpha}\, w$$
Glazebrook says, "By making the area of the board considerable and that of the tube small, a large weight can be supported by a small weight of water. This fact is sometimes described as the hydrostatic paradox."
Hydraulic machinery employs this phenomenon to multiply force or torque. Demonstrations of the hydrostatic paradox are used in teaching the phenomenon.
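A small numerical sketch of Glazebrook's force balance (function and parameter names are illustrative) shows how a modest weight of water in a narrow tube can support a much larger weight on a wide board:

```python
# Hydrostatic paradox: equal pressure on both sides means W / A = w / alpha,
# so the supportable weight scales with the ratio of the two areas.
def supported_weight(board_area, tube_area, water_weight):
    """Weight W (N) a board of area A can carry, given water of weight w
    in a tube of cross-sectional area alpha."""
    return (board_area / tube_area) * water_weight

# 1 N of water in a 1 cm^2 tube supports 1000 N on a 0.1 m^2 board.
print(supported_weight(board_area=0.1, tube_area=1e-4, water_weight=1.0))
```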
In the context of Earth's atmosphere
If one is to analyze the vertical pressure variation of the atmosphere of Earth, the length scale is very significant (troposphere alone being several kilometres tall; thermosphere being several hundred kilometres) and the involved fluid (air) is compressible. Gravity can still be reasonably approximated as constant, because length scales on the order of kilometres are still small in comparison to Earth's radius, which is on average about 6371 km, and gravity is a function of distance from Earth's core.
Density, on the other hand, varies more significantly with height. It follows from the ideal gas law that

$$\rho = \frac{m P}{k T}$$

where
$m$ is average mass per air molecule,
$P$ is pressure at a given point,
$k$ is the Boltzmann constant,
$T$ is the temperature in kelvins.
Put more simply, air density depends on air pressure. Given that air pressure also depends on air density, it would be easy to get the impression that this was a circular definition, but it is simply interdependency of different variables. This then yields a more accurate formula, of the form

$$P_h = P_0 e^{-\frac{m g h}{k T}}$$

where
$P_h$ is the pressure at height $h$,
$P_0$ is the pressure at reference point 0 (typically referring to sea level),
$m$ is the mass per air molecule,
$g$ is the acceleration due to gravity,
$h$ is height from reference point 0,
$k$ is the Boltzmann constant,
$T$ is the temperature in kelvins.
Therefore, instead of pressure being a linear function of height as one might expect from the more simple formula given in the "basic formula" section, it is more accurately represented as an exponential function of height.
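The exponential form can be evaluated numerically; the sketch below uses standard approximate constants (not values given in this article) and the isothermal assumption discussed next:

```python
# Isothermal barometric formula: P(h) = P0 * exp(-m * g * h / (k * T)).
import math

M_AIR = 4.81e-26    # kg, average mass per air molecule (~28.97 u), approximate
G = 9.81            # m/s^2, treated as constant
K_B = 1.380649e-23  # J/K, Boltzmann constant

def pressure_at_height(h_m, p0_pa=101325.0, t_k=288.0):
    """Pressure at height h_m above the reference point, constant T assumed."""
    return p0_pa * math.exp(-M_AIR * G * h_m / (K_B * t_k))

print(pressure_at_height(5000.0))  # ~56 kPa: roughly half of sea-level pressure
```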
Note that in this simplification, the temperature is treated as constant, even though temperature also varies with height. However, the temperature variation within the lower layers of the atmosphere (troposphere, stratosphere) is only in the dozens of degrees, as opposed to their thermodynamic temperature, which is in the hundreds, so the temperature variation is reasonably small and is thus ignored. For smaller height differences, including those from top to bottom of even the tallest of buildings (like the CN Tower) or for mountains of comparable size, the temperature variation will easily be within the single digits. | Physical sciences | Fluid mechanics | Physics |
40712897 | https://en.wikipedia.org/wiki/Dark%20web | Dark web | The dark web is the World Wide Web content that exists on darknets (overlay networks) that use the Internet but require specific software, configurations, or authorization to access. Through the dark web, private computer networks can communicate and conduct business anonymously without divulging identifying information, such as a user's location. The dark web forms a small part of the deep web, the part of the web not indexed by web search engines, although sometimes the term deep web is mistakenly used to refer specifically to the dark web.
The darknets which constitute the dark web include small, friend-to-friend networks, as well as large, popular networks such as Tor, Hyphanet, I2P, and Riffle operated by public organizations and individuals. Users of the dark web refer to the regular web as clearnet due to its unencrypted nature. The Tor dark web or onionland uses the traffic anonymization technique of onion routing under the network's top-level domain suffix .onion.
Terminology
Definition
The dark web has often been confused with the deep web, the parts of the web not indexed (searchable) by search engines. The term dark web first emerged in 2009; however, it is unknown when the actual dark web first emerged. Many internet users only use the surface web, data that can be accessed by a typical web browser. The dark web forms a small part of the deep web, but requires custom software in order to access its content. This confusion dates back to at least 2009. Since then, especially in reporting on Silk Road, the two terms have often been conflated, despite recommendations that they should be distinguished.
Dark web sites, also known as darknet websites, are accessible only through networks such as Tor ("The Onion Routing" project) that are created specifically for the dark web. The Tor browser and Tor-accessible sites are widely used among darknet users and can be identified by the domain ".onion". Tor browsers create encrypted entry points and pathways for the user, allowing their dark web searches and actions to be anonymous.
Identities and locations of darknet users stay anonymous and cannot be tracked due to the layered encryption system. The darknet encryption technology routes users' data through a large number of intermediate servers, which protects the users' identity and guarantees anonymity. The transmitted information can be decrypted only by a subsequent node in the scheme, which leads to the exit node. The complicated system makes it almost impossible to reproduce the node path and decrypt the information layer by layer. Due to the high level of encryption, websites are not able to track geolocation and IP of their users, and users are not able to get this information about the host. Thus, communication between darknet users is highly encrypted allowing users to talk, blog, and share files confidentially.
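The layering idea can be illustrated with a toy sketch; this is not the actual Tor protocol, and the three-relay setup below is entirely hypothetical. The sender encrypts once per relay, with the exit relay's layer innermost, and each relay peels exactly one layer with its own key:

```python
# Toy onion layering with symmetric keys (illustration only, not Tor itself).
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    # Encrypt for the exit relay first, so the entry relay's layer is outermost.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def peel(onion: bytes, keys) -> bytes:
    # Each relay, in path order, removes one layer with its own key.
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"request for an .onion service", relay_keys)
assert peel(onion, relay_keys) == b"request for an .onion service"
```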
Content
A December 2014 study by Gareth Owen from the University of Portsmouth found that the most commonly hosted type of content on Tor was child pornography, followed by black markets, while the individual sites with the highest traffic were dedicated to botnet operations (see attached metric). Many whistleblowing sites maintain a presence as well as political discussion forums. Sites associated with Bitcoin, fraud-related services, and mail order services are some of the most prolific.
As of December 2020, the number of active Tor sites in .onion was estimated at 76,300 (many of them duplicates of other sites). Of these, an estimated 18,000 had original content.
In July 2017, Roger Dingledine, one of the three founders of the Tor Project, said that Facebook is the biggest hidden service. The dark web comprises only 3% of the traffic in the Tor network.
A February 2016 study from researchers at King's College London gives the following breakdown of content by an alternative category set, highlighting the illicit use of .onion services.
Ransomware
The dark web is also used in certain extortion-related processes. It is common to observe data from ransomware attacks on several dark web sites, for example data sales sites or public data repository sites.
Botnets
Botnets are often structured with their command-and-control servers based on a censorship-resistant hidden service, creating a large amount of bot-related traffic.
Darknet markets
Commercial darknet markets mediate transactions for illegal goods and typically use Bitcoin as payment. These markets have attracted significant media coverage, starting with the popularity of Silk Road and Diabolus Market and their subsequent seizures by legal authorities. Silk Road was one of the first dark web marketplaces, emerging in 2011, and allowed for the trading of weapons and identity fraud resources. These markets have no protection for their users and can be closed down at any time by authorities. Despite the closures of these marketplaces, others pop up in their place. As of 2020, there have been at least 38 active dark web marketplaces. These marketplaces are similar to eBay or Craigslist, in that users can interact with sellers and leave reviews about marketplace products.
Examination of price differences in dark web markets versus prices in real life or over the World Wide Web has been attempted, as have studies in the quality of goods received over the dark web. One such study was performed on Evolution, one of the most popular crypto-markets, active from January 2013 to March 2015. Although it found the digital information, such as concealment methods and shipping country, "seems accurate", the study uncovered issues with the quality of illegal drugs sold in Evolution, stating that "the illicit drugs purity is found to be different from the information indicated on their respective listings." Less is known about consumer motivations for accessing these marketplaces and factors associated with their use. Darknet markets also sell leaked credit cards that can be downloaded for free or purchased for use in illegal activities.
Bitcoin services
Bitcoin is one of the main cryptocurrencies used in dark web marketplaces due to the flexibility and relative anonymity of the currency. With bitcoin, people can hide their intentions as well as their identity. A common approach was to use a digital currency exchanger service which converted bitcoin into an online game currency (such as gold coins in World of Warcraft) that would later be converted back into fiat currency. Bitcoin services such as tumblers are often available on Tor, and some – such as Grams – offer darknet market integration. A research study undertaken by Jean-Loup Richet, a research fellow at ESSEC, and carried out with the United Nations Office on Drugs and Crime, highlighted new trends in the use of bitcoin tumblers for money laundering purposes.
Due to its relevance in the digital world, bitcoin has become a popular product for users to scam companies with. Cybercriminal groups such as DDOS"4" have carried out over 140 cyberattacks on companies since the emergence of bitcoin in 2014. These attacks have led to the formation of other cybercriminal groups as well as cyber-extortion schemes.
Hacking groups and services
Many hackers sell their services either individually or as part of groups. Such groups include xDedic, hackforum, Trojanforge, Mazafaka, dark0de and the TheRealDeal darknet market. Some have been known to track and extort apparent pedophiles. Cyber crimes and hacking services for financial institutions and banks have also been offered over the dark web. Attempts to monitor this activity have been made through various government and private organizations, and an examination of the tools used can be found in the Procedia Computer Science journal. Use of Internet-scale DNS distributed reflection denial of service (DRDoS) attacks has also been made through leveraging the dark web. There are also many scam .onion sites which end up giving tools for download that are infected with trojan horses or backdoors.
In 2023, around 100,000 compromised ChatGPT account credentials were found for sale on the dark web. Additionally, the logs showed, in the opinion of the researchers, that the majority of the compromised ChatGPT passwords had been extracted by the data-stealing malware Raccoon.
Financing and fraud
Scott Dueweke, the president and founder of Zebryx Consulting, states that Russian electronic currencies such as WebMoney and Perfect Money are behind the majority of the illegal actions. In April 2015, Flashpoint received a 5 million dollar investment to help its clients gather intelligence from the deep and dark web. There are numerous carding forums, PayPal and bitcoin trading websites, as well as fraud and counterfeiting services. Many such sites are scams themselves. Phishing via cloned websites and other scam sites are numerous, with darknet markets often advertised with fraudulent URLs.
Illegal pornography
The type of content that has the most popularity on the dark web is illegal pornography—more specifically, child pornography. About 80% of its web traffic is related to accessing child pornography despite it being difficult to find even on the dark web. A website called Lolita City, which has since been taken down, contained over 100 GB of child pornographic media and had about 15,000 members.
There is regular law enforcement action against sites distributing child pornography – often via compromising the site and tracking users' IP addresses. In 2015, the FBI investigated and took down a website called Playpen. At the time, Playpen was the largest child pornography website on the dark web with over 200,000 members. Sites use complex systems of guides, forums and community regulation. Other content includes sexualised torture and killing of animals and revenge porn. In May 2021, German police said that they had dismantled one of the world's biggest child pornography networks on the dark web known as Boystown; the website had over 400,000 registered users. Four people had been detained in raids, including a man from Paraguay, on suspicion of running the network. Europol said several pedophile chat sites were also taken down in the German-led intelligence operation.
Terrorism
Terrorist organizations took to the internet as early as the 1990s; however, the birth of the dark web attracted these organizations due to the anonymity, lack of regulation, social interaction, and easy accessibility. These groups have been taking advantage of the chat platforms within the dark web to inspire terrorist attacks. Groups have even posted "How To" guides, teaching people how to become terrorists and how to hide their identities.
The dark web became a forum for terrorist propaganda, guiding information, and, most importantly, funding. With the introduction of Bitcoin, anonymous transactions became possible, which allowed for anonymous donations and funding. By accepting Bitcoin, terrorists were now able to fund purchases of weaponry. In 2018, an individual named Ahmed Sarsur was charged with attempting to purchase explosives and hire snipers to aid Syrian terrorists, as well as attempting to provide them financial support, all through the dark web.
There are at least some real and fraudulent websites claiming to be used by ISIL (ISIS), including a fake one seized in Operation Onymous. Advances in technology have allowed cyberterrorists to flourish by attacking its weaknesses. In the wake of the November 2015 Paris attacks, an actual such site was hacked by an Anonymous-affiliated hacker group, GhostSec, and replaced with an advert for Prozac. The Rawti Shax Islamist group was found to be operating on the dark web at one time.
Social media
Within the dark web, there exist emerging social media platforms similar to those on the World Wide Web; this is known as the Dark Web Social Network (DWSN). The DWSN works like a regular social networking site where members can have customizable pages, have friends, like posts, and blog in forums. Facebook and other traditional social media platforms have begun to make dark-web versions of their websites to address problems associated with the traditional platforms and to continue their service in all areas of the World Wide Web. Unlike Facebook, the privacy policy of the DWSN requires that members reveal absolutely no personal information and remain anonymous.
Hoaxes and unverified content
There are reports of crowdfunded assassinations and hitmen for hire; however, these are believed to be exclusively scams. The creator of Silk Road, Ross Ulbricht, was arrested by Homeland Security Investigations (HSI) for his site and for allegedly hiring a hitman to kill six people, although the charges were later dropped. There is an urban legend that one can find live murder on the dark web. The term "Red Room" has been coined based on the Japanese animation and urban legend of the same name; however, the evidence points toward all reported instances being hoaxes.
On June 25, 2015, the indie game Sad Satan was reviewed by the YouTube channel Obscure Horror Corner, which claimed to have found it via the dark web. Various inconsistencies in the channel's reporting cast doubt on the reported version of events. There are several websites which analyze and monitor the deep web and dark web for threat intelligence.
Policing the dark web
There have been arguments that the dark web promotes civil liberties, like "free speech, privacy, anonymity". Some prosecutors and government agencies are concerned that it is a haven for criminal activity. The deep and dark web are applications of integral internet features to provide privacy and anonymity. Policing involves targeting specific activities of the private web deemed illegal or subject to internet censorship.
When investigating online suspects, police typically use the IP (Internet Protocol) address of the individual; however, because Tor browsers create anonymity, this becomes an impossible tactic. As a result, law enforcement has employed many other tactics in order to identify and arrest those engaging in illegal activity on the dark web. OSINT, or open-source intelligence, refers to data collection tools that legally collect information from public sources. OSINT tools can be dark-web specific, helping officers find bits of information that lead them to greater knowledge about interactions going on in the dark web.
In 2015 it was announced that Interpol now offers a dedicated dark web training program featuring technical information on Tor, cybersecurity and simulated darknet market takedowns. In October 2013 the UK's National Crime Agency and GCHQ announced the formation of a "Joint Operations Cell" to focus on cybercrime. In November 2015 this team would be tasked with tackling child exploitation on the dark web as well as other cybercrime. In March 2017 the Congressional Research Service released an extensive report on the dark web, noting the changing dynamic of how information is accessed and presented on it; characterized by the unknown, it is of increasing interest to researchers, law enforcement, and policymakers. In August 2017, according to reportage, cybersecurity firms which specialize in monitoring and researching the dark web on behalf of banks and retailers routinely share their findings with the FBI and with other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as being particularly robust.
Journalism
Many journalists, alternative news organizations, educators, and researchers are influential in their writing and speaking about the darknet, making its use clear to the general public. Media coverage typically reports on the dark web in two ways: detailing the power and freedom of speech the dark web allows people to express, or, more commonly, reaffirming the illegality and fear of its contents, such as computer hackers. Many headlines tie the dark web to child pornography with headlines such as "N.J. man charged with surfing 'Dark Web' to collect nearly 3K images of child porn", along with other illegal activities where news outlets describe it as "a hub for black markets that sell or distribute drugs".
Specialist Clearweb news sites such as DeepDotWeb and All Things Vice provide news coverage and practical information about dark web sites and services; however, DeepDotWeb was shut down by authorities in 2019. The Hidden Wiki and its mirrors and forks hold some of the largest directories of content at any given time. Traditional media and news channels such as ABC News have also featured articles examining the darknet.
| Technology | Internet | null |
30064130 | https://en.wikipedia.org/wiki/Fission%20%28biology%29 | Fission (biology) | Fission, in biology, is the division of a single entity into two or more parts and the regeneration of those parts to separate entities resembling the original. The object experiencing fission is usually a cell, but the term may also refer to how organisms, bodies, populations, or species split into discrete parts. The fission may be binary fission, in which a single organism produces two parts, or multiple fission, in which a single entity produces multiple parts.
Binary fission
Organisms in the domains of Archaea and Bacteria reproduce with binary fission. This form of asexual reproduction and cell division is also used by some organelles within eukaryotic organisms (e.g., mitochondria). Binary fission results in the reproduction of a living prokaryotic cell (or organelle) by dividing the cell into two parts, each with the potential to grow to the size of the original.
Fission of prokaryotes
The single DNA molecule first replicates, then attaches each copy to a different part of the cell membrane. When the cell begins to pull apart, the replicated and original chromosomes are separated. The consequence of this asexual method of reproduction is that all the cells are genetically identical, meaning that they have the same genetic material (barring random mutations). Unlike the processes of mitosis and meiosis used by eukaryotic cells, binary fission takes place without the formation of a spindle apparatus on the cell. Like in mitosis (and unlike in meiosis), the parental identity is not lost.
Fragmentation
FtsZ is homologous to β-tubulin, the building block of the microtubule cytoskeleton used during mitosis in eukaryotes. FtsZ is thought to be the first protein to localize to the site of future division in bacteria; it assembles into a Z ring, anchored by FtsZ-binding proteins, that defines the division plane between the two daughter cells. MinC and MinD function together as division inhibitors, blocking formation of the FtsZ ring. MinE stops the MinCD activity at midcell, allowing FtsZ to take over for binary fission.
More specifically, the following steps occur:
The bacterium before binary fission is when the DNA is tightly coiled.
The DNA of the bacterium has uncoiled and duplicated.
The DNA is pulled to the separate poles of the bacterium as it increases in size to prepare for splitting.
The growth of a new cell wall begins to separate the bacterium (triggered by FtsZ polymerization and "Z-ring" formation)
The new cell wall (septum) fully develops, resulting in the complete split of the bacterium.
The new daughter cells have tightly coiled DNA rods, ribosomes, and plasmids; these are now brand-new organisms.
Studies of bacteria made to not produce a cell wall, called L-form bacteria, indicate that FtsZ requires a cell wall to work. Little is known about how bacteria that naturally don't grow a cell wall divide, but it is thought to resemble the L-form's budding-like division process of extrusion and separation.
Speed of FtsZ-dependent fission
Binary fission is generally rapid, though its speed varies between species. For E. coli, cells typically divide about every 20 minutes at 37 °C. Because the new cells will, in turn, undergo binary fission on their own, the time binary fission requires is also the time the bacterial culture requires to double in the number of cells it contains. This time period can, therefore, be referred to as the doubling time. Some species other than E. coli may have faster or slower doubling times: some strains of Mycobacterium tuberculosis may have doubling times of nearly 100 hours. Bacterial growth is limited by factors including nutrient availability and available space, so binary fission occurs at much lower rates in bacterial cultures once they enter the stationary phase of growth.
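As a quick arithmetic sketch of doubling time (assuming idealized, unconstrained log-phase growth, with illustrative numbers):

```python
# Doubling-time arithmetic: N(t) = N0 * 2^(t / t_double). This ignores the
# nutrient and space limits mentioned above, so it only models log-phase growth.
def cells_after(initial, minutes, doubling_minutes):
    return initial * 2 ** (minutes / doubling_minutes)

# One E. coli cell with a 20-minute doubling time:
print(cells_after(1, 60, 20))   # 8 cells after an hour
print(cells_after(1, 600, 20))  # 2^30, over a billion, after 10 hours
```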
In archaea
Thermoproteota (formerly Crenarchaeota) possess neither a cell wall nor the FtsZ mechanism. They use a primitive version of the eukaryotic ESCRT-III system (also known as Cdv) to manipulate the membrane into separating, specifically by coming into the middle of the two soon-to-be daughter cells. Euryarchaeota use FtsZ like bacteria do.
Fission of organelles
Some organelles in eukaryotic cells reproduce using binary fission. Mitochondrial fission occurs frequently within the cell, even when the cell is not actively undergoing mitosis, and is necessary to regulate the cell's metabolism. All chloroplasts and some mitochondria (not in animals), both organelles derived from endosymbiosis of bacteria, also use FtsZ in a bacteria-like fashion.
Types of binary fission
Binary fission in organisms can occur in four ways: irregular, longitudinal, transverse, or oblique. For example:
Irregular: In this fission, cytokinesis may take place along any plane but it is always perpendicular to the plane of karyokinesis (nuclear division). e.g. Amoeba.
Longitudinal: Here cytokinesis takes place along the longitudinal axis. e.g. in flagellates like Euglena.
Transverse: Here cytokinesis takes place along the transverse axis. e.g. in ciliate protozoans like Paramecium.
Oblique: In this type of binary fission, cytokinesis occurs obliquely. Example Ceratium.
Binary fission means "division into two". It is the simplest and most common method of asexual reproduction.
Multiple fission
Fission of protists
Multiple fission at the cellular level occurs in many protists, e.g. sporozoans and algae. The nucleus of the parent cell divides several times by amitosis, producing several nuclei. The cytoplasm then separates, creating multiple daughter cells.
Some parasitic, single-celled organisms undergo a multiple fission-like process to produce numerous daughter cells from a single parent cell. Isolates of the human parasite Blastocystis hominis were observed to begin such a process within 4 to 6 days. Cells of the fish parasite Trypanosoma borreli have also been observed participating in both binary and multiple fission.
Fission of apicomplexans
In the apicomplexans, a phylum of parasitic protists, multiple fission, or schizogony, is manifested either as merogony, sporogony, or gametogony. Merogony results in merozoites, which are multiple daughter cells that originate within the same cell membrane; sporogony results in sporozoites, and gametogony results in microgametes.
Fission of green algae
Green algae can divide into more than two daughter cells. The exact number of daughter cells depends on the species of algae and is an effect of temperature and light.
Multiple fission of bacteria
Most species of bacteria primarily reproduce by binary fission. Some species and groups of bacteria may undergo multiple fission as well, sometimes beginning or ending with the production of spores. The species Metabacterium polyspora, a symbiont of guinea pigs, has been found to produce multiple endospores in each division. Some species of cyanobacteria have also been found to reproduce through multiple fission.
Plasmotomy
Some protozoans reproduce by yet another mechanism of fission called plasmotomy. In this type of fission, a multinucleate adult parent undergoes cytokinesis to form two multinucleate (or coenocytic) daughter cells. The daughter cells so produced undergo further mitosis.
Opalina and Pelomyxa reproduce in this way.
Clonal fragmentation
Fragmentation in multicellular or colonial organisms is a form of asexual reproduction or cloning, where an organism is split into fragments. Each of these fragments develops into a mature, fully grown individual that is a clone of the original organism. In echinoderms, this method of reproduction is usually known as fissiparity.
Population fission
Any splitting of a single population of individuals into discrete parts may be considered fission. A population may undergo fission for a variety of reasons, including migration or geographic isolation. Because fission leads to genetic variance in the newly isolated, smaller populations, population fission is a precursor to speciation.
| Biology and health sciences | Cellular division | null |
33926405 | https://en.wikipedia.org/wiki/Sediment%20gravity%20flow | Sediment gravity flow | A sediment gravity flow is one of several types of sediment transport mechanisms, of which most geologists recognize four principal processes. These flows are differentiated by their dominant sediment support mechanisms, which can be difficult to distinguish as flows can be in transition from one type to the next as they evolve downslope.
Sediment support mechanisms
Sediment gravity flows are represented by four different mechanisms of keeping grains within the flow in suspension.
Grain flow – Grains in the flow are kept in suspension by grain-to-grain interactions, with the fluid acting only as a lubricant. As such, the grain-to-grain collisions generate a dispersive pressure that helps prevent grains from settling out of suspension. Although common in terrestrial environments on the slip faces of sand dunes, pure grain flows are rare in subaqueous settings. However, grain-to-grain interactions in high-density turbidity currents are very important as a contributing mechanism of sediment support.
Liquefied flow (or fluidized flow) – These form in cohesionless granular substances. As grains at the base of a suspension settle out, fluid that is displaced upward by the settling generates pore fluid pressures that may help suspend grains in the upper part of the flow. Application of an external pressure to the suspension will initiate flow. This external pressure can be applied by a seismic shock, which may transform loose sand into a highly viscous suspension, as in quicksand. Generally, as soon as the flow begins to move, fluid turbulence results and the flow rapidly evolves into a turbidity current. Flows and suspensions are said to be liquefied when the grains settle downward through the fluid and displace the fluid upwards. By contrast, flows and suspensions are said to be fluidized when the fluid moves upward through the grains, thereby temporarily suspending them. Most flows are liquefied, and many references to fluidized sediment gravity flows are in fact incorrect and actually refer to liquefied flows.
Debris flow or mudflow – Grains are supported by the strength and buoyancy of the matrix. Mudflows and debris flows have cohesive strength, which makes their behavior difficult to predict using the laws of physics. As such, these flows exhibit non-Newtonian behavior. Because mudflows and debris flows have cohesive strength, unusually large clasts may be able to literally float on top of the mud matrix within the flow.
Turbidity current – Grains are suspended by fluid turbulence within the flow. Because the behavior of turbidity currents is largely predictable, they exhibit Newtonian behavior, in contrast to flows with cohesive strength (i.e., mudflows and debris flows). The behavior of turbidity currents in subaqueous settings is strongly influenced by the concentration of the flow, as closely packed grains in high-concentration flows are more likely to undergo grain-to-grain collisions and generate dispersive pressures as a contributing sediment support mechanism, thereby keeping additional grains in suspension. Thus, it is useful to distinguish between low-density and high-density turbidity currents. A powder snow avalanche is essentially a turbidity current in which air is the supporting fluid and suspends snow granules in place of sand grains.
Resulting deposits
Description
Although the deposits of all four types of sediment support mechanisms are found in nature, pure grain flows are largely restricted to aeolian settings, whereas subaqueous environments are characterized by a spectrum of flow types with debris flows and mud flows on one end of the spectrum, and high-density and low-density turbidity currents on the other end. It is also useful in subaqueous environments to recognize transitional flows that are in between turbidity currents and mud flows. The deposits of these transitional flows are referred to by a variety of names, some of the more popular being "hybrid event beds (HEB)", "linked debrites", and "slurry beds". Powder snow avalanches and glowing avalanches (gas-charged flows of superheated volcanic ash) are examples of turbidity currents in non-marine settings.
Grain flow deposits are characterized by a coarsening-upward distribution of grain sizes (inverse grading) within the bed. This results from smaller grains within the flow falling down in between larger grains during grain-to-grain collisions, and thereby depositing preferentially at the base of flow. Although present as grain avalanches in terrestrial sand dunes, grain flows are rare in other settings. However, inverse graded beds resulting from grain flow processes do make up so-called "traction carpets" in the lower intervals of some high-density turbidites.
Liquefied flow deposits are characterized by de-watering features, such as dish structures, that result from upward-escaping fluid within the flow. As with pure grain flows, pure liquefied flows seldom occur on their own. However, liquefied flow processes are very important as grains within turbidity currents begin to settle out and displace fluid upwards. Thus, dish structures and related features, such as de-watering pipes, are often found in turbidites.
Debris flow deposits are characterized by a bimodal distribution of grain sizes, in which larger grains and/or clasts float within a matrix of fine-grained clay. Because the muddy matrix has cohesive strength, unusually large clasts may be able to float on top of the muddy material making up the flow matrix, and thereby end up preserved on the upper bed boundary of the resulting deposit.
Low-density turbidity current deposits (turbidites) are characterized by a succession of sedimentary structures referred to as the Bouma sequence, which result from decreasing energy within the flow (i.e., waning flow), as the turbidity current moves downslope.
High-density turbidity current deposits are characterized by much coarser grain size than in low-density turbidites, with the basal portions of the deposits often characterized by features that result from the close proximity of the grains to each other. Thus, indications of grain-to-grain interactions (i.e., grain flow processes), and interaction of grains with the substratum (i.e., traction) are generally present in the lower portions of these deposits. Complete Bouma sequences are rare, and generally only the Bouma A and B layers are evident.
Hybrid event beds (HEB) transitional between mud flows and turbidity currents are characterized by features indicative of both cohesionless (turbulence-supported) and cohesive (mud-supported) flow with no separating bed boundary between the two. In most cases, they are represented by grain-supported textures that grade upward within the bed into mud-supported textures. It is not uncommon for debris flows and mud flows to evolve downslope into turbidity currents, and vice versa. Also, flows internally may transition upward from one flow process to another.
Modern and ancient examples
Modern and ancient (outcrop) examples of deposits resulting from different types of sediment gravity flows.
Significance
Sediment gravity flows, primarily turbidity currents, but to a lesser extent debris flows and mud flows, are thought to be the primary processes responsible for depositing sand on the deep ocean floor. Because anoxic conditions at depth in the deep oceans are conducive to the preservation of organic matter, which with deep burial and subsequent maturation through the absorption of heat can generate oil and gas, the deposition of sand in deep ocean settings can ultimately juxtapose petroleum reservoirs and source rocks. In fact, a significant portion of the oil and gas produced in the world today is found in deposits (reservoirs) originating from sediment gravity flows.
| Physical sciences | Sedimentology | Earth science |
33936518 | https://en.wikipedia.org/wiki/Lepas%20anatifera | Lepas anatifera | Lepas anatifera, commonly known as the pelagic gooseneck barnacle or smooth gooseneck barnacle, is a species of barnacle in the family Lepadidae. These barnacles are found, often in large numbers, attached by their flexible stalks to floating timber, the hulls of ships, piers, pilings, seaweed, and various sorts of flotsam.
Description
The body or capitulum of Lepas anatifera is supported by a long, flexible stalk or peduncle. There are five smooth, translucent plates, edged with scarlet and separated by narrow gaps. The plates have growth lines parallel with their margins and a few faint radial sculpture lines. Inside the capitulum, the barnacle has a head, a thorax, and a vestigial abdomen. A number of brown, filamentous cirri or feeding tentacles project from between the plates. The peduncle is tough and a purplish-brown colour. The capitulum may grow to a length of and the peduncle varies between and .
Distribution
Lepas anatifera has a cosmopolitan distribution and is found in tropical and subtropical seas worldwide. Because it frequently is attached to objects carried into colder seas by currents, such as the North Atlantic Drift, it often is found well away from its place of origin and in waters too cold for it to reproduce. In this way, it has been documented in Norway, the Shetland Islands, the Faeroe Islands, Iceland, and Spitsbergen.
Biology
Lepas anatifera is a hermaphrodite and starts to breed when it is about 2.5 centimetres (1 in) long. Fertilisation is internal and the eggs are brooded inside the mantle for a week before emerging as free-swimming nauplius larvae. After further development, drifting as part of the plankton, these settle onto floating objects.
Lepas anatifera has long been known to grow on sea turtles, but in 2008, some small specimens were found attached to an American crocodile (Crocodylus acutus) on the Pacific coast of Mexico. That crocodile species mostly inhabits mangrove swamps and river estuaries, but it is salt tolerant, and sometimes is found in marine environments. In this instance, the size of the goose-neck barnacles indicated that the crocodile must have been in the sea for at least a week. That is the first time that Lepas anatifera has been recorded as an epibiont of a crocodilian.
Origin of the name
In thirteenth-century England the word "barnacle" was used for a species of waterfowl, the barnacle goose (Branta leucopsis). This bird breeds in the Arctic, but winters in the British Isles so its nests and eggs were never seen by the British. At the time, it was thought that the gooseneck barnacles that wash up occasionally on the shore had spontaneously generated from the rotting wood to which they were attached, and therefore, that the geese might be generated similarly. Credence to the idea was provided by the tuft of brown cirri that protruded from the capitulum of the crustaceans that resembled the down of an unhatched gosling. Popular belief linked the two species and a writer in 1678 wrote "multitudes of little Shells; having within them little Birds perfectly shap'd, supposed to be Barnacles [by which he meant barnacle geese]."
| Biology and health sciences | Crustaceans | Animals |
40723560 | https://en.wikipedia.org/wiki/Geochemical%20cycle | Geochemical cycle | In Earth science, a geochemical cycle is the pathway that chemical elements undergo to be able to interact with the reservoirs of chemicals in the surface and crust of the Earth. The term "geochemical" indicates that both geological and chemical factors are included. The migration of heated and compressed chemical elements and compounds such as silicon, aluminium, and the general alkali metals through subduction and volcanism is known in the geological world as a geochemical cycle.
The geochemical cycle encompasses the natural separation and concentration of elements and heat-assisted recombination processes. Changes may not be apparent over a short term, such as with biogeochemical cycles, but over a long term changes of great magnitude occur, including the evolution of continents and oceans.
Differentiating biogeochemical cycles
Some may use the terms biogeochemical cycle and geochemical cycle interchangeably because both cycles deal with Earth's reservoirs. However, a biogeochemical cycle refers to the chemical interactions in surface reservoirs such as the atmosphere, hydrosphere, lithosphere, and biosphere whereas a geochemical cycle refers to the chemical interactions that exist in crustal and sub crustal reservoirs such as the deep earth and lithosphere.
Earth system
The Earth, as a system, is open to radiation from the sun and space, but is practically closed with regard to matter. Like all closed systems, it follows the law of conservation of mass, which states that matter can be neither created nor destroyed; thus the matter, although transformed and migrated, remains the same as when the Earth was formed. The Earth system contains seven different reservoirs, separated into the surface reservoirs, which include the atmosphere, hydrosphere, biosphere, pedosphere, and lithosphere, and the isolated reservoirs, which include the deep Earth and outer space. Geochemical cycles are concerned with the interactions between the deep Earth, which consists of Earth's mantle and core, and the lithosphere, which consists of the Earth's crust.
Pathways
Flux in geochemical cycles is the movement of material between the deep Earth and the surface reservoirs. This occurs through two different processes: volcanism and subduction of tectonic plates.
Subduction is the process that takes place at convergent boundaries, by which one tectonic plate moves under another and sinks into the mantle as the plates converge. The sinking of the plate into the mantle drives a broad range of geochemical transformations, or cycling.
Volcanism is the process that takes place at divergent boundaries, by which one tectonic plate separates from another, creating a rift through which molten rock (magma) erupts onto the surface of the Earth. This magma then cools and crystallizes, forming igneous rocks. If crystallization occurs at the Earth's surface, extrusive igneous rocks are formed; if crystallization occurs within the Earth's lithosphere, intrusive igneous rocks are formed, which can then be brought to Earth's surface by denudation.
Important cycles
Categories and examples of geochemical cycles:
| Physical sciences | Geochemistry | Earth science |
49717050 | https://en.wikipedia.org/wiki/BOSS%20Great%20Wall | BOSS Great Wall | The BOSS Great Wall is a supercluster complex that was identified, using the Baryon Oscillation Spectroscopic Survey (BOSS) of the Sloan Digital Sky Survey (SDSS), in early 2016. It was discovered by a research team from several institutions, consisting of: Heidi Lietzen, Elmo Tempel, Lauri Juhan Liivamägi, Antonio Montero-Dorta, Maret Einasto, Alina Streblyanska, Claudia Maraston, Jose Alberto Rubiño-Martín and Enn Saar. The BOSS Great Wall is one of the largest superstructures in the observable universe, though there are even larger structures known.
The large complex has a mean redshift of z ~ 0.47 (z times the Hubble length ≈ 6.8 billion light years). It consists of two elongated superclusters, two large superclusters, and several smaller superclusters as well. The elongated superclusters form galaxy walls, with the larger of the two having a diameter of 186/h Mpc (supercluster A in the figure) and the second wall's diameter being 173/h Mpc (supercluster B). The other two main superclusters are moderately large, having diameters of 91/h Mpc and 64/h Mpc (superclusters D and C, respectively).
The superstructure is roughly 1 billion light years in diameter and has a total mass approximately 10,000 times that of the Milky Way galaxy. It contains at least 830 visible galaxies (represented in the figure within their respective superclusters), as well as many others that are not visible (dark galaxies). The researchers used Minkowski functionals to verify the structure's overall shape and size; the first three quantify the thickness, width, and length, and the fourth determines the structure's overall curvature. The research team compared the luminosities and stellar masses within the superstructure to known high-stellar-mass galaxies within the SDSS's 7th data release, DR7. This allowed the team to scale the data using known values from local superclusters to determine the overall morphology of the BOSS Great Wall. It is currently debated amongst astronomers whether the BOSS Great Wall may be considered a single structure, due to the intricacies of its shape and overall size. The question of whether the supercluster complex is moving together or being slowly separated by the expanding universe is a key factor in this discussion. Nevertheless, when compared to several other chain structures, such as the Sloan Great Wall, the BOSS Great Wall's superclusters are far richer, containing more dense, high-stellar-mass galaxies. The BOSS Great Wall's discovery, and the data gained therein, should prove very beneficial for astronomers who study the overall structure of the cosmic web.
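The quoted "z times Hubble length" figure can be reproduced with a rough low-redshift sketch; the Hubble constant below is an assumed value, and this is not a full cosmological distance calculation:

```python
# Rough check: 0.47 times the Hubble length (c / H0), expressed in
# billions of light years. H0 = 67.7 km/s/Mpc is an assumed value.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 67.7                 # Hubble constant, km/s/Mpc (assumed)
MPC_TO_GLY = 3.2616e-3    # 1 Mpc is about 3.2616e-3 billion light years

hubble_length_gly = (C_KM_S / H0) * MPC_TO_GLY
print(0.47 * hubble_length_gly)  # ~6.8 billion light years at z ~ 0.47
```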
| Physical sciences | Notable patches of universe | Astronomy |
45452439 | https://en.wikipedia.org/wiki/Helium%20compounds | Helium compounds | Helium is the smallest and the lightest noble gas and one of the most unreactive elements, so it was commonly considered that helium compounds cannot exist at all, or at least under normal conditions. Helium's first ionization energy of 24.57 eV is the highest of any element. Helium has a complete shell of electrons, and in this form the atom does not readily accept any extra electrons nor join with anything to make covalent compounds. The electron affinity is 0.080 eV, which is very close to zero. The helium atom is small, with the radius of the outer electron shell at 0.29 Å. Helium is a very hard atom, with a Pearson hardness of 12.3 eV. It has the lowest polarizability of any kind of atom; however, very weak van der Waals forces exist between helium and other atoms. These forces may exceed repulsive forces, so at extremely low temperatures helium may form van der Waals molecules. Helium has the lowest boiling point (4.2 K) of any known substance.
Repulsive forces between helium and other atoms may be overcome by high pressures. Helium has been shown to form a crystalline compound with sodium under pressure. Suitable pressures to force helium into solid combinations could be found inside planets. Clathrates are also possible with helium under pressure in ice, and other small molecules such as nitrogen.
Other ways to make helium reactive are: to convert it into an ion, or to excite an electron to a higher level, allowing it to form excimers. Ionised helium (He+), also known as He II, is a very high energy material able to extract an electron from any other atom. He+ has an electron configuration like hydrogen, so as well as being ionic it can form covalent bonds. Excimers do not last for long, as the molecule containing the higher-energy helium atom can rapidly decay back to a repulsive ground state, in which the two atoms making up the bond repel each other. However, in some locations, such as helium white dwarfs, conditions may be suitable to rapidly form excited helium atoms. The excited helium atom has a 1s electron promoted to 2s. This excitation requires significant energy per gram of helium, which can be supplied by electron impact or electric discharge. The 2s excited electron state resembles that of the lithium atom.
Known solid phases
Most solid combinations of helium with other substances require high pressure. Helium does not bond with the other atoms, but the substances can have a well defined crystal structure.
Disodium helide
Disodium helide (Na2He) is a compound of helium and sodium that is stable at high pressures above . Disodium helide was first predicted using USPEX code and was first synthesised in 2016. It was predicted to be thermodynamically stable over 160 GPa and dynamically stable over 100 GPa. Na2He has a cubic crystal structure, resembling fluorite. At 300 GPa the edge of a unit cell of the crystal has . Each unit cell contains four helium atoms on the centre of the cube faces and corners, and eight sodium atoms at coordinates a quarter cell in from each face. Double electrons (2e−) are positioned on each edge and the centre of the unit cell. Each pair of electrons is spin paired. The presence of these isolated electrons makes this an electride. The helium atoms do not participate in any bonding. However the electron pairs can be considered as an eight-centre two-electron bond. Disodium helide is predicted to be an insulator and transparent.
Silicates
Helium was first observed to enter into a silicate in 2007. The mineral melanophlogite is a natural silica clathrate (clathrasil) that normally would contain carbon dioxide, methane or nitrogen. When compressed with helium, a new clathrate forms. This has a much higher bulk modulus, and resists amorphization. Helium was taken up around 17 GPa, enlarging the unit cell, and given off again when pressure dropped to 11 GPa.
Cristobalite He II (SiO2He) is stable between 1.7 and 6.4 GPa. It has a rhombohedral space group R-3c with unit cell dimensions and at 4 GPa.
Cristobalite He I (SiO2He) can be formed under higher helium pressures over 6.4 GPa. It has a monoclinic space group P21/C with unit cell dimensions and at 10 GPa.
Helium penetrates into fused silica at high pressure, reducing its compressibility.
Chibaite, another natural silica clathrate has its structure penetrated by helium under pressures higher than 2.5 GPa. The presence of guest hydrocarbons does not prevent this happening. Neon requires a higher pressure, 4.5 GPa to penetrate, and unlike helium shows hysteresis. Linde-type A zeolites are also rendered less compressible when penetrated by helium between 2 and 7 GPa.
Arsenolite helium inclusion compound
Arsenolite helium inclusion compound is stable at pressures from over 3 GPa up to at least 30 GPa. Arsenolite is one of the softest and most compressible minerals. Helium prevents the amorphization that would otherwise occur in arsenolite under pressure. The helium-containing solid is stronger and harder, with a higher sound velocity, than plain arsenolite. The helium included in the crystal produces a more uniform stress on the As4O6 molecules. No actual bond is formed from arsenic to helium, despite the lone pairs of electrons available. The diffusion of helium into arsenolite is a slow process, taking days at a pressure around 3 GPa. However, if the pressure on the crystal is too high (13 GPa), helium penetration does not take place, as the gaps between arsenolite molecules become too small. Neon does not diffuse into arsenolite.
Perovskites
Helium can be inserted into the A sites of negative thermal expansion perovskites that otherwise have defects at the A site. At room temperature and 350 MPa helium is included into CaZrF6 to expand its unit cell yielding HeCaZrF6. About half of the A sites are filled by helium atoms. This substance loses helium over several minutes on depressurisation at ambient temperature, but below 130 K it retains helium when depressurised. At 1 GPa all the A sites are filled by helium, yielding He2CaZrF6.
Formates
Under pressure, helium penetrates dimethylammonium iron formate, (CH3)2NH2Fe(HCOO)3. Its presence causes the change to a monoclinic ordered state to occur at a lower pressure (around 4 GPa) than if helium were absent.
Small molecule
He(N2)11 is a van der Waals compound with hexagonal crystals. At 10 GPa its unit cell, containing 22 nitrogen atoms, has a volume of 558 Å3, falling to about 512 Å3 at 15 GPa. These volumes are around 10 Å3 smaller than those of the equivalent amount of solid δ-N2 nitrogen at the same pressures. The substance is made by compressing nitrogen and helium in a diamond anvil cell.
NeHe2 has a hexagonal crystal structure of the MgZn2 type at 13.7 GPa, with unit-cell dimensions determined at 21.8 GPa; there are four atoms in each unit cell. It melts at 12.8 GPa and 296 K and is stable to over 90 GPa.
Clathrates
Helium clathrates only form under pressure. With ice II, at pressures between 280 and 480 MPa, a solid helium hydrate with an He:H2O ratio of 1:6 exists. Another clathrate, with a water-to-helium ratio of 2.833, has been made in the SII clathrate structure. It has two different cages in the ice: the small one can contain one helium atom, and the large one can contain four atoms. It was produced from a neon clathrate that lost its neon, which was then replaced by helium at 141 K and 150 MPa. Other helium hydrates, based on ice-Ih, and on ice-Ic with 1:1 and 2:1 He:H2O ratios, have been predicted. These could exist in planets like Neptune or Uranus. Helium clathrate hydrates should be similar to hydrogen clathrates, due to the similar size of the hydrogen molecule.
Helium may enter crystals of other molecular solids under pressure and alter their structure and properties. For example, chlorpropamide compressed above 0.3 GPa in helium changes to a monoclinic structure, and to yet another structural form at 1.0 GPa.
Fullerites
Helium can form intercalation compounds with the fullerites, including buckminsterfullerene C60 and C70. In solid C60 there are spaces between the C60 balls, either tetrahedral or octahedral in shape. Helium can diffuse into solid fullerite even at one atmosphere of pressure. Helium enters the lattice in two stages. The first, rapid stage takes a couple of days and expands the lattice by 0.16% (that is, 2.2 pm), filling the larger octahedral sites. The second stage takes thousands of hours to absorb more helium and expands the lattice by twice as much again (0.32%), filling the tetrahedral sites. However, the solid C60·3He is not stable and loses helium on a timescale of 340 hours when not under a helium atmosphere. When helium-intercalated fullerite is cooled, it has an orientational phase transition that is 10 K higher than for pure solid C60. The actual discontinuous change in volume at that point is smaller, but there are more rapid changes near the transition temperature, perhaps due to varying occupancy of the voids by helium.
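The two-stage uptake described above can be pictured as two saturating processes with very different time constants. The sketch below is purely illustrative: the double-exponential form and the exact time constants are assumptions, with only the rough timescales (days, then thousands of hours) and the expansion amplitudes (0.16%, then a further 0.32%) taken from the text.

```python
# Illustrative two-stage uptake model for He diffusing into fullerite
# C60, with lattice expansion as the observable. The saturating-
# exponential form and the time constants are assumptions.
import math

TAU_FAST = 2 * 24.0    # h, octahedral-site filling (~days, assumed)
TAU_SLOW = 3000.0      # h, tetrahedral-site filling (assumed)

def lattice_expansion_percent(t_hours: float) -> float:
    fast = 0.16 * (1 - math.exp(-t_hours / TAU_FAST))   # first stage
    slow = 0.32 * (1 - math.exp(-t_hours / TAU_SLOW))   # second stage
    return fast + slow

for t in (24, 240, 2400, 24000):
    print(f"t = {t:6d} h -> {lattice_expansion_percent(t):.3f} %")
```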
Endohedral
Helium atoms can be trapped inside molecular cages such as the fullerenes: He@C60, He@C70, He2@C60 and He2@C70 have all been made using compressed helium and fullerenes. When using only pressure and heat, the yield is quite low, under 1%; however, by breaking and re-forming the carbon ball, much higher concentrations of He@C60 or He@C70 can be made. High-performance liquid chromatography can concentrate the helium-containing material. HeN@C60 and HeN@C70 have also been made. These have a lower symmetry, due to the two atoms being trapped together in the same cavity, which causes ESR line broadening.
Dodecahedrane can trap helium from a helium ion beam to yield He@C20H20.
Other cage like inorganic or organic molecules may also trap helium, for example C8He with He inside a cube, or He@Mo6Cl8F6.
Impurity helium condensates
Impurity helium condensates (IHCs), or impurity helium gels, are deposited as a snow-like gel in liquid helium when various atoms or molecules are absorbed onto the surface of superfluid helium. The atoms can include H, N, Na, Ne, Ar, Kr and Xe, as well as alkali and alkaline earth metals. The impurities form nanoparticle clusters coated with localised helium held by van der Waals forces. Helium atoms are unable to move towards or away from the impurity, but can perhaps move perpendicularly around it. The snow-like solid is structured like an aerogel. When free atoms are included in the condensate, a high energy density can be achieved, up to 860 J cm−3 or 5 kJ g−1. These condensates were first investigated as a possible rocket fuel. The mixtures are given a notation involving square brackets, so that [N]/[He] represents a nitrogen-atom impurity in helium.
[N]/[He], atomic nitrogen impurity helium, is produced when a radio-frequency discharge in a nitrogen-helium mixture is absorbed into superfluid helium; it can have up to 4% nitrogen atoms included. The substance resembles crumbly snow, and it condenses and settles from the liquid helium. It also contains variable proportions of N2 molecules. This substance is a high-energy solid, storing as much energy as conventional explosives. When it is heated above 2.19 K (the lambda point of helium), the solid decomposes and explodes. It is not a true compound, but more like a solid solution. E. B. Gordon et al. suggested in 1974 that this material might exist. The localised helium shells around an individual atom are termed van der Waals spheres. However, the idea that the nitrogen atoms are dispersed in the helium has been replaced by the concept of nitrogen atoms attached to the surface of clusters of nitrogen molecules. The energy density of the solid can be increased by compressing it.
Other inert gas impurity helium condensates can also be made from a gas beam into superfluid helium. [Ne]/[He] decomposes at 8.5 K with release of heat and formation of solid neon. Its composition approximates NeHe16.
[Ar]/[He] contains 40–60 helium atoms per argon atom.
[Kr]/[He] contains 40–60 helium atoms per krypton atom and is stable up to 20 K.
[Xe]/[He] contains 40–60 helium atoms per xenon atom.
[N2]/[He] contains 12–17 He atoms per N2 molecule. It is stable up to 13 K.
[N]/[Ne]/[He] is formed from a gas beam, generated by a radio-frequency electric discharge in mixtures of neon, nitrogen and helium, blown into superfluid helium. The additional inert gas stabilises more nitrogen atoms. It decomposes at around 7 K with a blue-green flash of light. Excited nitrogen atoms in the N(2D) state can be relatively long-lived, up to hours, and give off a green luminescence.
[H2]/[He] or [D2]/[He]: when dihydrogen or dideuterium is absorbed into superfluid helium, filaments are formed. When enough of these form, the solid resembles cotton rather than snow. Using H2 results in the product floating and stopping further production, but with deuterium, or a half-and-half mixture, it can sink and accumulate. Atomic hydrogen in impurity helium decays fairly rapidly due to quantum tunneling (H + H → H2). Atomic deuterium dimerises more slowly (D + D → D2), but reacts very quickly with any diprotium present (D + H2 → HD + H). Atomic hydrogen solids are further stabilised by other noble gases such as krypton. Lowering the temperature into the millikelvin range can prolong the lifetime of atomic hydrogen condensates. Condensates containing heavy water or deuterium are under investigation for the production of ultracold neutrons. Other impurity gels investigated for producing ultracold neutrons include CD4 (deuterated methane) and C2D5OD (deuterated ethanol).
The water-helium condensate [H2O]/[He] contains water clusters of several nanometers in diameter, and pores from 8 to 800 nm.
[O2]/[He], the oxygen impurity helium condensate, contains solid oxygen clusters from 1 to 100 nm.
Impurity solid helium
Introducing impurities into solid helium yields a blue solid that melts at a higher temperature than pure He. For caesium the absorption has a peak at 750 nm, and for rubidium maximal absorption is at 640 nm. These are due to metal clusters with diameters of around 10 nm. However, the low concentration of clusters in this substance should not be sufficient to solidify helium, as the amount of metal in the solid is less than a billionth of that in the impurity helium condensate solids, and liquid helium does not "wet" caesium metal. The solid is possibly due to helium snowballs attached to Cs+ (or Rb+) ions. A snowball is a shell of helium atoms solidified in particular positions around the ion, immobilized by polarization. Neutral metal atoms in liquid helium are also surrounded by a bubble caused by electron repulsion, with typical diameters from 10 to 14 Å. Free electrons in liquid helium are enclosed in a bubble 17 Å in diameter, which shrinks to 11 Å under 25 atmospheres of pressure.
Solid solution
Helium can dissolve to a limited extent in hot metals, with concentration proportional to pressure. At atmospheric pressure, bismuth at 500 °C can absorb 1 part per billion; lithium at 649 °C can take 5 parts per billion; and potassium at 482 °C can take 2.9 parts per million (all atom fractions). In nickel there can be 1 helium atom per 10^10 metal atoms, and in gold 1 per 10^7. The supposition is that the higher the melting point, the less helium can be dissolved. However, when a liquid metal is quenched, higher concentrations of helium can be left dissolved, so cooled liquid steel can retain one part per million of helium. In order to get a helium atom into a metal lattice, a hole has to be formed, and the energy to make that hole in the metal is essentially the heat of solution.
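Since the dissolved concentration is stated to be proportional to pressure, the quoted 1 atm solubilities can be scaled to other pressures. A minimal sketch, assuming the stated linearity holds at the example pressure chosen here:

```python
# Henry's-law style scaling of helium solubility in liquid metals,
# using the 1 atm atom-fraction solubilities quoted in the text.
# Proportionality to pressure is the article's stated behaviour; the
# 10 atm example pressure is arbitrary.
SOLUBILITY_1ATM = {        # atom fraction at 1 atm
    "Bi (500 C)": 1e-9,
    "Li (649 C)": 5e-9,
    "K (482 C)": 2.9e-6,
}

def solubility(metal: str, pressure_atm: float) -> float:
    return SOLUBILITY_1ATM[metal] * pressure_atm

for metal in SOLUBILITY_1ATM:
    print(f"{metal}: {solubility(metal, 10.0):.2e} atom fraction at 10 atm")
```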
Nanowires
Gold, copper, rubidium, caesium or barium atoms evaporated into liquid helium form spiderweb-like structures. Rhenium produces nanoflakes. Molybdenum, tungsten and niobium produce thin nanowires with diameters of 20, 25 and 40 Å respectively. When platinum, molybdenum or tungsten is evaporated into liquid helium, nanoclusters are first formed, accompanied by a high-temperature thermal emission pulse above the melting point of the metal. In superfluid helium, these clusters migrate to the vortices and, once mostly solid, weld together to yield nanowires. In higher-temperature liquid helium, larger clusters of metal are formed instead of wires. The metal vapours can only penetrate about 0.5 mm into the liquid helium. Indium, tin, lead and nickel produce nanowires about 80 Å in diameter. These same four metals also produce smooth spheres about 2 μm across that explode when examined with an electron microscope. Copper, permalloy and bismuth also make nanowires.
Two-dimensional ionic crystal
Helium ions (He+) in liquid helium, when attracted by an electric field, can form a two-dimensional crystal at temperatures below 100 mK. There are about half a trillion ions per square metre just below the surface of the helium. Free electrons float above the helium surface.
Known van der Waals molecules
LiHe
Dihelium
Trihelium
Ag3He
HeCO is weakly bound by van der Waals forces. It is potentially important in cold interstellar media as both CO and He are common.
CF4He and CCl4He both exist.
HeI2 can be formed by supersonic expansion of high pressure helium with a trace of iodine into a vacuum. It was the first known triatomic helium van der Waals molecule. It can be detected by fluorescence. HeI2 has a similar optical spectrum to I2, except that the bands and lines are shifted to form two extra series. One series is blueshifted by between 2.4 and 4.0 cm−1, and the other between 9.4 and 9.9 cm−1. The two series may be due to different amounts of vibration in the He–I bond. The lines are narrow indicating that the molecules in their excited vibrational state have a long lifetime.
Na2He molecules can form on the surface of helium nanodroplets.
NOHe
Known ions
Helium has the highest ionisation energy of any element, so a He+ ion can strip electrons from any other neutral atom or molecule; it can then bind to the ion produced. The He+ ion can be studied in the gas phase or in liquid helium. Its chemistry is not completely trivial: for example, He+ can react with SF6 to yield SFn+ fragments and atomic fluorine.
Ionised clusters
He2+ was predicted to exist by Linus Pauling in 1933. It was discovered when performing mass spectrometry on ionised helium. The dihelium cation is formed by an ionised helium atom combining with a helium atom: He+ + He → He2+.
The doubly ionised dihelium cation He22+ (1Σ) is in a singlet state. It breaks up as He22+ → He+ + He+, releasing 200 kcal/mol of energy. It has a barrier to decomposition of 35 kcal/mol and a bond length of 0.70 Å.
The trihelium cation He3+ is in equilibrium with He2+ between 135 and 200 K.
Helium hydride
The helium hydride ion, HeH+, has been known since 1925. The protonated dihelium ion, He2H+, can be formed when the dihelium cation reacts with dihydrogen: He2+ + H2 → He2H+ + H. This is believed to be a linear molecule. Larger protonated helium cluster ions HenH+ exist with n from 3 to 14; He6H+ and He13H+ appear to be the most common. These can be made by reacting H2+ or H3+ with gaseous helium.
HeH2+ is unstable in its ground state, but when excited to the 2pσ state the molecule is bound, with an energy of 20 kcal/mol. This doubly charged ion has been made by accelerating helium hydride ions to 900 keV and firing them into argon. It has a short lifetime of only 4 ns.
H2He+ has been made and could occur in nature via H2 + He+ → H2He+.
H3Hen+ exists for n from 1 to over 30, and there are also clusters with more hydrogen atoms and helium.
Noble gas
Noble gas cluster ions exist for the different noble gases. Singly charged cluster ions containing xenon exist with the formula HenXem+, where n, m ≥ 1.
Many different HenKr+ ions exist, with n between 1 and 17 and higher values possible. HenKr2+ and HenKr3+ also exist for many values of n, and He12Kr2+ and He12Kr3+ ions are common. These singly charged cluster ions can be made from krypton in helium nanodroplets subjected to vacuum ultraviolet radiation.
The Ar+ argon ion can form many different sized clusters with helium ranging from HeAr+ to He50Ar+, but the most common clusters are He12Ar+ and smaller. These clusters are made by capturing an argon atom in a liquid helium nanodroplet, and then ionising with high speed electrons. He+ is formed, which can transfer charge to argon and then form a cluster ion when the rest of the droplet evaporates.
NeHen+ ions can be made by ultraviolet photoionisation. The clusters contain only one neon atom, and the number of helium atoms can vary from 1 to 23, though certain cluster sizes are more likely to be observed.
Doubly charged ions of helium with noble gas atoms also exist including ArHe2+, KrHe2+, and XeHe2+.
Metals
Various metal-helium ions are known.
Alkali metal helide ions are known for all the alkali metals. The ground state of the diatomic ions is the X1Σ+ state. The bond length increases as the periodic table is descended: 1.96, 2.41, 2.90, 3.10 and 3.38 Å for Li+He, Na+He, K+He, Rb+He and Cs+He respectively. The dissociation energies are 1.9, 0.9, 0.5, 0.4 and 0.3 kcal/mol, showing that the bond energy decreases down the group. When the molecule breaks up, the positive charge is never on the helium atom.
When there are many helium atoms around, alkali metal ions can attract shells of helium atoms. Clusters can be formed by absorbing metal into helium droplets; the doped droplets are then ionised with high-speed electrons. For sodium, clusters appear with the formula Na+Hen with n from 1 to 26. Na+He is the most common, but Na+He2 is very close in abundance, and Na+He8 is much more abundant than clusters with more helium. NaHen with n from 1 to 20 also appears, and NaHen with small n is also made. For potassium, K+Hen with n up to 28, and KHen for n from 1 to 20, are formed. K+He and K+He2 are both common, and K+He12 is formed somewhat more commonly than other similarly sized clusters. Caesium and rubidium cations also form clusters with helium.
Other known metal-helium ions include Cr+He, Co+He, Co+He3, Ni+He and Ni+He3. PtHe2+ is formed by a high electric field at a platinum surface in helium, and HeRh2+ decomposes in a high-strength electric field. Also known are VHe2+, Ta2+He, Mo2+He, W2+He, Re2+He, Ir2+He, Pt2+He2, W3+He2, W3+He3 and W3+He4.
Nonmetals
HeN+ can form at around 4 K from an ion beam of N+ into cold helium gas. The energy needed to break up the molecule, 140 cm−1, is considerably greater than in the neutral van der Waals molecules. HeN+ is robust enough to have several vibrational, bending and rotational states. HenN+ ions with n from 2 to 6 have been made by shooting electrons at a supersonically expanding mix of nitrogen and helium.
C60He+ is formed by irradiating C60 with 50 eV electrons and then steering the ions into cold helium gas. C60He is also known.
He(OH)+ has been detected, although it is not produced when HTO (tritiated water) decays.
has been detected for values of n from 1 to 12. Also CH3He+, OCHHe+ and NH2He+ have been detected.
Young and Coggiola claimed to make HeC+ by an electric discharge off graphite into helium.
When tritium substituted methane (CH3T) decays, CH3He+ is produced in a very small amount.
The helium formyl cation, HeHCO+, is a linear molecule. Its vibrational frequency is red-shifted by 12.4 cm−1 compared with HCO+. It can be considered a de-energized intermediate of the protonation reaction HeH+ + CO → HCO+ + He. HeHCO+ can be produced by supersonic expansion of a gas mixture of He, CO and H2 crossed by an electron beam; the CO and H2 are supplied at only 1% of the helium.
The HeHN2+ ion is linear, with an He-H bond length of 1.72 Å. It has an infrared band, due to N-H stretching, with a band origin at 3158.42 cm−1. The binding energy is 378 cm−1 in the 000 vibrational state and 431 cm−1 in the 100 vibrational state. He2HN2+ is also known; one helium atom is linked to a hydrogen, and the other is less tightly bound.
H2O+, H2OSF5+, SF5+ and SF6+ can form clusters with varying numbers of helium atoms.
Excimers
The He2* excimer is responsible for the Hopfield continuum. Helium also forms an excimer with barium, Ba+He*.
Predicted compounds
Predicted solids
is predicted to form a solid with orthorhombic structure Ibam.
Iron helide (FeHe) was claimed early on to have been found, but the discovery was reclassified as an alloy. Early studies predicted that FeHe exists as an interstitial compound under high pressure, perhaps in dense planetary cores or, as suggested by Freeman Dyson, in neutron star crust material. Recent density functional theory calculations predict the formation of FeHe compounds at pressures above about 4 TPa, suggesting that these compounds could indeed be found inside giant planets, white dwarf stars or neutron stars.
Na2HeO is predicted to have a similar structure to Na2He, but with oxygen atoms in the same position as the electron pair, so that it becomes O2−. It would be stable from 13 to 106 GPa. This substance could be a way to store helium in a solid.
La2/3-xLi3xTiO3He is a porous lithium ion conduction perovskite that can contain helium like a clathrate.
Helium is predicted to be included under pressure in ionic compounds of the form A2B or AB2. These compounds could include Na2OHe, MgF2He (over 107 GPa) and CaF2He (30-110 GPa). Stabilisation occurs by the helium atom positioning itself between the two like charged ions, and partially shielding them from each other.
Helium is predicted to form an inclusion compound with silicon, Si2He. This has a hexagonal lattice of silicon atoms with helium atoms lined up in the channels. It should be formed when liquid silicon is injected with helium at over 1 GPa and cooled.
Predicted van der Waals molecules
The beryllium oxide helium adduct, HeBeO is believed to be bonded much more strongly than a normal van der Waals molecule with about 5 kcal/mol of binding energy. The bond is enhanced by a dipole induced positive charge on beryllium, and a vacancy in the σ orbital on beryllium where it faces the helium.
Variations on the beryllium oxide adduct include HeBe2O2 and RNBeHe species, including HNBeHe, CH3NBeHe, CH4−xNBeHex, SiH4−xNBeHex, NH3−xNBeHex, PH3−xNBeHex, OH2−xNBeHex and SH2−xNBeHex, among others.
Hydridohelium fluoride, HHeF, is predicted to be metastable, with a finite lifetime. The lifetime of the deuterium isotopologue is predicted to be much longer, due to the greater difficulty of tunneling for deuterium. The molecule's metastability is attributed to the electrostatic attraction between the HHe+ and F− moieties, which raises the barrier to an exothermic breakup. Under pressures over 23 GPa, HHeF should be stable.
Calculations on coinage metal fluorides indicate that HeCuF is stable, HeAgF is unstable, and HeAuF is predicted. Binding energies have also been calculated for the metal-helium isotope pairs Ag-3He (1.4 cm−1), Ag-4He (1.85 cm−1), Au-3He (4.91 cm−1) and Au-4He (5.87 cm−1).
HeNaO is predicted.
Calculations for binary van der Waals helium molecules include HeNe and the atom-helium pairs listed below, where 3He and 4He denote the helium isotope involved (a unit-conversion sketch follows the list):
Li-4He, binding energy 0.008 cm−1; Li-3He is not stable.
Na-4He, binding energy 0.03 cm−1; Na-3He is not stable.
Cu-3He, binding energy 0.90 cm−1.
O-4He, binding energy 5.83 cm−1.
S-4He, binding energy 6.34 cm−1.
Se-4He, binding energy 6.50 cm−1.
F-4He, binding energy 3.85 cm−1.
Cl-4He, binding energy 7.48 cm−1.
Br-4He, binding energy 7.75 cm−1.
I-4He, binding energy 8.40 cm−1.
N-4He, binding energy 2.85 cm−1.
P-4He, binding energy 3.42 cm−1.
As-4He, binding energy 3.49 cm−1.
Bi-4He, binding energy 33.26 cm−1.
Si-4He, binding energy 1.95 cm−1.
Ge-4He, binding energy 2.08 cm−1.
CaH-4He, binding energy 0.96 cm−1.
NH-4He, binding energy 4.42 cm−1.
MnH-4He, binding energy 1.01 cm−1.
YbF-4He, binding energy 5.57 cm−1.
I-3He or I-4He.
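To put these binding energies in context, they can be converted to equivalent temperatures E/kB with the standard factor 1 cm−1 ≈ 1.4388 K; the complexes survive only where the thermal energy is below roughly these values. A small sketch using a few entries from the list above:

```python
# Convert predicted van der Waals binding energies (cm^-1) from the
# list above into equivalent temperatures E/k_B, using the standard
# conversion 1 cm^-1 = 1.4388 K. The energies are from the text.
CM1_TO_K = 1.4388

binding_cm1 = {
    "Li-4He": 0.008,
    "Na-4He": 0.03,
    "O-4He": 5.83,
    "Cl-4He": 7.48,
    "Bi-4He": 33.26,
}

for molecule, e_cm1 in binding_cm1.items():
    print(f"{molecule}: {e_cm1:6.3f} cm^-1 = {e_cm1 * CM1_TO_K:7.3f} K")
```

Even the most strongly bound entry corresponds to under 50 K, and the alkali pairs to millikelvin scales, which is why such molecules are discussed only in ultracold environments.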
Bonds are predicted to form to nickel with helium as a weak ligand in HeNiCO and HeNiN2.
(HeO)(LiF)2 is predicted to form a planar metastable molecule. 1-Tris(pyrazolyl)borate beryllium and 1-tris(pyrazolyl)borate magnesium are predicted to bind helium at low temperatures. There is also a prediction of a He-O bond in a molecule with caesium fluoride or tetramethyl ammonium fluoride.
LiHe2 is predicted to be in an Efimov state when excited.
Predicted ions
Many ions have been investigated theoretically to see if they could exist; just about every diatomic cation with helium has been studied. For the diatomic dications, stability requires the second ionisation level of the partner atom to be below the first ionisation level of helium, 24.6 eV. For Li, F and Ne, the ground state is repulsive, so molecules will not form. For N and O, the molecule would break up to release He+. However, HeBe2+, HeB2+ and HeC2+ are predicted to be stable. The elements from Na to Cl are also predicted to have stable HeX2+ ions.
HeY3+ is predicted to be the lightest stable diatomic triply charged ion. Other possibly thermochemically stable ions include HeZr3+, HeHf3+, HeLa3+, HeNd3+, HeCe3+, HePr3+, HePm3+, HeSm3+, HeGa3+, HeTb3+, HeDy3+, HeHo3+, HeEr3+, HeTm3+, and HeLu3+ where the third ionisation point is below that of helium.
The positronium helide ion PsHe+ should be formed when positrons encounter helium.
The fluoroheliate ion, FHeO−, should be stable, but salts such as LiFHeO are not stable.
HHeCO+ theoretical
FHeS− is predicted to be stable.
FHeBN−
HHeN2+ is unlikely to exist.
(HHe+)(OH2) is probably unstable.
The lithium hydrohelide cation, HLiHe+, is predicted to be linear. This molecular ion could exist alongside Big Bang nucleosynthesis elements. Other hydrohelide cations predicted in theory are the sodium hydrohelide cation HNaHe+, the potassium hydrohelide cation HKHe+, the beryllium hydrohelide cation HBeHe2+, the magnesium hydrohelide cation HMgHe2+ and the calcium hydrohelide cation HCaHe2+.
HeBeO+ is predicted to have a relatively high binding energy of 25 kcal mol−1.
HCHe+
HCHeHe+
For negative ions the adduct is very weakly bound. Those studied include HeCl−, HeBr−, HeF−, HeO− and HeS−.
FHeSe−
C7H6He2+
C7H6HeHe2+
FHeCC−
HHeOH
HHeBF+
HeNC+
HeNN+
HHeNN+, with an H-He bond length of 0.765 Å, an He-N bond length of 2.077 Å and a decomposition barrier of 2.3 kJ/mol.
HHeNH3+ is predicted to have C3v symmetry, with an H-He bond length of 0.768 Å and an He-N distance of 1.830 Å. The energy barrier against decomposition to ammonium is 19.1 kJ/mol, with an energy release of 563.4 kJ/mol. Cleavage to the helium hydride ion and ammonia is predicted to be endothermic, requiring 126.2 kJ/mol.
Discredited or unlikely observations
Numerous researchers attempted to create chemical compounds of helium in the early part of the twentieth century.
In 1895 L. Troost and L. Ouvrard believed they had witnessed a reaction between magnesium vapour and helium (and also argon) due to the spectrum of helium disappearing from the tube they were passing it through. In 1906, W. Ternant Cooke claimed to have noticed a reaction of helium with cadmium or mercury vapour by observing an increase in the density of the vapour. Zinc vapour did not react with helium.
J. J. Manley claimed in 1925 to have found a gaseous mercury helide, HgHe10, publishing the results in Nature, but he then had trouble finding a stable composition and eventually gave up.
Between 1925 and 1940 in Buenos Aires, Horacio Damianovich studied various metal–helium combinations including beryllium (BeHe), iron (FeHe), palladium (PdHe), platinum (Pt3He), bismuth, and uranium. To make these substances, electrical discharges impacted helium into the surface of the metal. Later these were demoted from the status of compounds, to that of alloys.
Platinum helide, Pt3He was discredited by J. G. Waller in 1960.
Palladium helide, PdHe is formed from tritium decay in palladium tritide, the helium (3He) is retained in the solid as a solution.
Boomer claimed the discovery of tungsten helide, WHe2, as a black solid formed by an electric discharge in helium with a heated tungsten filament. When it dissolved in nitric acid or potassium hydroxide, tungstic acid formed and helium escaped in bubbles. The electric discharge used a current of 5 mA at 1,000 V, with the helium at a pressure between 0.05 and 0.5 mmHg. The process works slowly at 200 V, and 0.02 mmHg of mercury vapour accelerates tungsten evaporation five-fold. The search for this compound was suggested by Ernest Rutherford; it was discredited by J. G. Waller in 1960. Boomer also studied mercury, iodine, sulfur and phosphorus combinations with helium. The mercury and iodine combinations decomposed at around −70 °C, and the sulfur and phosphorus combinations at around −120 °C.
Bismuth dihelide, BiHe2
H. Krefft and R. Rompe claimed reactions between helium and sodium, potassium, zinc, rubidium, indium, and thallium.
| Physical sciences | Noble gas compounds | Chemistry |
52275513 | https://en.wikipedia.org/wiki/Aluminium%20triacetate | Aluminium triacetate | Aluminium triacetate, formally named aluminium acetate, is a chemical compound with composition Al(CH3CO2)3. Under standard conditions it appears as a white, water-soluble solid that decomposes on heating at around 200 °C. The triacetate hydrolyses to a mixture of basic hydroxide/acetate salts, and multiple species co-exist in chemical equilibrium, particularly in aqueous solutions of the acetate ion; the name aluminium acetate is commonly used for this mixed system.
It has therapeutic applications for its anti-itching, astringent, and antiseptic properties, and, as an over-the-counter preparation like Burow's solution, it is used to treat ear infections. Burow's solution preparations have been diluted and modified with amino acids to make them more palatable for use as gargles for conditions like aphthous ulcers of the mouth. In veterinary medicine, aluminium triacetate's astringency property is used for treating Mortellaro disease in hoofed animals such as cattle.
Aluminium triacetate is used as a mordant agent with dyes like alizarin, both alone and in combination. Together with aluminium diacetate or with aluminium sulfacetate it is used with cotton, other cellulose fibres, and silk. It has also been combined with ferrous acetate to produce different colours.
Nomenclature
According to the formal rules for naming inorganic compounds, the name for Al(CH3CO2)3 is aluminium acetate, though more formal names such as aluminium(III) acetate and aluminium ethanoate are acceptable. The "tri" multiplying prefix in the name aluminium triacetate, while not technically required, is regularly used to avoid confusion with related compounds bearing hydroxo ligands. Basic aluminium diacetate, formally hydroxyaluminium diacetate (CAS RN 142-03-0), has composition Al(OH)(CH3CO2)2, with one hydroxo ligand in place of an acetate ligand, and dibasic aluminium monoacetate, formally dihydroxyaluminium acetate (CAS RN 7360-44-3), has composition Al(OH)2(CH3CO2), with only one acetate ligand. These three compounds are distinct in the solid phase but are usually treated as a group and described collectively as aluminium acetate in solution, because the triacetate hydrolyses to a mixture that includes the other two forms. The abbreviation AlAc and related variants are sometimes used in the discipline of geochemistry, though these are inconsistent with standard practice in mainstream chemistry.
Structure
The formula Al(CH3CO2)3 indicates the presence of aluminium centres in the +3 oxidation state and acetate groups in a 1:3 ratio. Representations of this substance are typically two highly oversimplified approximations of the solid-state structure. The first is as a purely ionic salt, with a single aluminium(III) cation (Al3+) surrounded by, and associated electrostatically with, three acetate anions (CH3CO2−); this should not be taken to convey information about the crystal structure. For example, sodium chloride (NaCl) has a cation-to-anion stoichiometry of 1:1, but it has a cubic structure in which each ion is surrounded octahedrally by six ions of the opposite charge.
The other representation is a molecular form, with the three acetate groups covalently bonded to the metal centre in a trigonal planar geometry and intermolecular interactions holding the molecules together in the crystal structure. It is highly likely that the solid-state structure is more complicated, with both covalent and ionic character, and it is possible that multiple aluminium centres and/or bridging acetate groups are present; both of these have been reported in aluminium acetate solutions, and aluminium chloride is known to exist as a dimer.
NMR investigations of the aqueous aluminium(III)/acetate system show the presence of aluminium as the hexaaqua complex [Al(H2O)6]3+, as well as mononuclear species with different substitutions. In addition, the investigations demonstrate that a significant solution-phase species is an Al13 tridecamer, a moiety reported in conflicting mechanisms of hydrolysis and polymerisation in aluminium solutions. Other trivalent metal cations are known to form polynuclear species: iron(III) acetate, for example, forms a trinuclear structure with a triply bridged oxo centre, the cation [Fe3O(CH3CO2)6(H2O)3]+. The compound chromium acetate hydroxide, Cr3(OH)2(OAc)7, has also been described as isostructural. Analogous ruthenium(III), vanadium(III), rhodium(III) and iridium(III) compounds with trinuclear structures are known. Copper(II) acetate and chromium(II) acetate both have dinuclear dihydrate structures, M2(OAc)4(H2O)2, as does rhodium(II) acetate; each shows significant metal-metal bonding interactions.
Chemistry
Preparation
According to the CRC Handbook of Inorganic Compounds, aluminium triacetate is a white, water-soluble solid and is usually prepared from aluminium chloride or directly from aluminium by heating in an acetic acid solution with acetic anhydride.
3 CH3CO2H + AlCl3 → Al(CH3CO2)3 + 3 HCl
6 CH3CO2H + 2 Al → 2 Al(CH3CO2)3 + 3 H2
Theoretically all of the aluminium / acetate / hydroxide salts can be prepared from aluminium hydroxide or sodium aluminate and acetic acid, but formation of the triacetate only occurs in the absence of water. In solutions, the diacetate is the major product formed, and is also produced when aluminium chloride is treated with a sodium acetate solution in basic conditions. The equations for these processes are:
2 CH3CO2Na + Al(OH)3 → Al(OH)(CH3CO2)2 + 2 NaOH
2 CH3CO2Na + AlCl3 + NaOH → Al(OH)(CH3CO2)2 + 3 NaCl
2 CH3CO2Na + NaAlO2 + 2 H2O → Al(OH)(CH3CO2)2 + 3 NaOH
An improved process using a combination of aluminium chloride and sodium aluminate with sodium acetate prepared in situ has been patented:
29 NaAlO2 + 10 NaOH + 84 CH3CO2H + 13 AlCl3 → 42 Al(OH)(CH3CO2)2 + 39 NaCl + 26 H2O
The mordants aluminium triacetate and aluminium sulfacetate can be prepared from aluminium sulfate, the product formed being determined by the amount of lead(II) acetate used:
Al2(SO4)3 + 3 Pb(CH3CO2)2 → 2 Al(CH3CO2)3 + 3 PbSO4
Al2(SO4)3 + 2 Pb(CH3CO2)2 → Al2(SO4)(CH3CO2)4 + 2 PbSO4
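As a worked example of the stoichiometry above, the reagent masses for the triacetate mordant preparation can be computed from molar masses. A minimal sketch; the 100 g basis is arbitrary, and the molar masses are computed from standard atomic weights:

```python
# Reagent mass arithmetic for the mordant preparation
#   Al2(SO4)3 + 3 Pb(CH3CO2)2 -> 2 Al(CH3CO2)3 + 3 PbSO4
# Molar masses are built from standard atomic weights; the 100 g
# basis of aluminium sulfate is arbitrary.
ATOMIC = {"Al": 26.98, "S": 32.06, "O": 16.00, "Pb": 207.2,
          "C": 12.011, "H": 1.008}

def molar_mass(counts: dict) -> float:
    return sum(ATOMIC[el] * n for el, n in counts.items())

M_AL2SO43 = molar_mass({"Al": 2, "S": 3, "O": 12})          # ~342.1 g/mol
M_PBOAC2  = molar_mass({"Pb": 1, "C": 4, "H": 6, "O": 4})   # ~325.3 g/mol
M_ALOAC3  = molar_mass({"Al": 1, "C": 6, "H": 9, "O": 6})   # ~204.1 g/mol

basis = 100.0                       # g of aluminium sulfate
mol_sulfate = basis / M_AL2SO43
print(f"Pb(OAc)2 needed: {3 * mol_sulfate * M_PBOAC2:.1f} g")
print(f"Al(OAc)3 formed: {2 * mol_sulfate * M_ALOAC3:.1f} g")
```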
Decomposition
On heating, aluminium triacetate decomposes above 200 °C in a process similar to that of aluminium formate. The process begins with loss of acetic anhydride ((CH3CO)2O) between 120 and 140 °C, forming a mixture of basic oxide acetates, which are ultimately transformed to Al2O3 (alumina), first as an amorphous anhydrous solid and then through other solid phases (γ-, δ- and θ- crystal forms) to become the α-Al2O3 polymorph:
2 Al(CH3CO2)3 → Al2O(CH3CO2)4 + (CH3CO)2O → Al2O3 + 3 (CH3CO)2O
2 Al(CH3CO2)3 → Al2O2(CH3CO2)2 + 2 (CH3CO)2O
Hydrolysis
Aluminium triacetate hydrolyses to produce both the mono- and di-basic hydroxide acetates, in solution or by absorption of moisture:
Al(CH3CO2)3 + H2O → Al(OH)(CH3CO2)2 + CH3CO2H
Al(CH3CO2)3 + 2 H2O → Al(OH)2(CH3CO2) + 2 CH3CO2H
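Balanced equations like these can be verified mechanically by comparing element totals on each side. A minimal sketch for the first hydrolysis equation, with each species written as an element-count map:

```python
# Element-balance check of the first hydrolysis equation:
#   Al(CH3CO2)3 + H2O -> Al(OH)(CH3CO2)2 + CH3CO2H
# A balanced reaction has identical element totals on both sides.
from collections import Counter

AL_OAC3    = Counter({"Al": 1, "C": 6, "H": 9, "O": 6})  # Al(CH3CO2)3
H2O        = Counter({"H": 2, "O": 1})
AL_OH_OAC2 = Counter({"Al": 1, "C": 4, "H": 7, "O": 5})  # Al(OH)(CH3CO2)2
ACOH       = Counter({"C": 2, "H": 4, "O": 2})           # CH3CO2H

def totals(side):
    out = Counter()
    for coeff, species in side:
        for element, n in species.items():
            out[element] += coeff * n
    return out

left  = [(1, AL_OAC3), (1, H2O)]
right = [(1, AL_OH_OAC2), (1, ACOH)]
print("balanced:", totals(left) == totals(right))   # True
```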
Uses
According to the National Cancer Institute, the aluminium acetates are used topically in humans as antiseptic agents, which also cause body tissues to shrink. This astringency is also used for treating Mortellaro disease in hoofed animals such as cattle. Aluminium acetate promotes healing of infected skin and also assists with inflammation, itching and stinging. The Food and Drug Administration has approved it for "temporary relief of minor skin irritations due to ... 'poison ivy,' 'poison oak,' 'poison sumac,' 'insect bites,' 'athlete's foot,' or 'rashes caused by soaps, detergents, cosmetics, or jewelry.'" For these applications, over-the-counter preparations such as Burow's solution are typically used, while diluted forms are used as gargles for conditions like aphthous ulcers of the mouth, including with amino acid additives to improve palatability and taste. The most common use of Burow's solution is in treating ear infections, including otomycosis, though it is generally not as effective as clotrimazole against these fungal infections. The topical astringent powder Domeboro contains aluminium sulfate tetradecahydrate, Al2(SO4)3·14H2O, and calcium acetate monohydrate, Ca(CH3CO2)2·H2O, and forms an aluminium acetate solution similar to Burow's solution when dissolved. Domeboro solutions in warm water can be used in cases of ingrown toenails, to reduce irritation and contain any infection that might be present.
Mordant
A mordant is a substance used to set dyes on fabrics or tissue sections by forming a coordination complex with the dye, which subsequently attaches to the fabric or tissue. A mordant often contains a polyvalent metal ion, commonly aluminium or iron, as is the case with mixtures of aluminium triacetate with aluminium sulfacetate or with basic aluminium diacetate. Aluminium triacetate mordants have been used with cotton, other cellulose-based fibres, and silk. They have also been combined with ferrous acetate to produce different colours.
In the case of the dye alizarin (1,2-dihydroxyanthraquinone), mordanting was hypothesised to involve the formation of a dianion of alizarin, giving a five-coordinate aluminium complex which can take up water to form a hydrate with a six-coordinate aluminium-centred dianion. The proposal was based on infrared spectroscopic data and was subsequently challenged by work suggesting a structure with two bridging hydroxyl ligands connecting a dinuclear core, with two alizarin moieties each chelating to an aluminium centre. That structure was proposed by Soubayrol et al. on the basis of 27Al NMR spectroscopy and electrospray ionisation mass spectrometry evidence. They reported that the degree of hydration depends on the identity of the counter-ion, the sodium salt being a stable tetrahydrate while a monohydrate is formed from potassium hydroxide. These were distinguishable by their chemical shifts, suggesting that the waters associate with the aluminium centres or the alizarin moieties rather than behaving as typical waters of crystallisation.
A related structure with calcium ions was reported in 1994; in it the alizarins chelate to the calcium ions, forming AzCaAz bridges between the aluminium centres (which are also bridged by hydroxo groups), and the aluminium centres bind to the deprotonated phenol residues of the dye. In the Soubayrol model, by contrast, each alizarin is associated with a single aluminium cation. As with the structure of aluminium acetate itself, the forms it takes in applications have not been resolved.
| Physical sciences | Acetates | Chemistry |
52282223 | https://en.wikipedia.org/wiki/Monazite%20geochronology | Monazite geochronology | Monazite geochronology is a dating technique to study geological history using the mineral monazite. It is a powerful tool in studying the complex history of metamorphic rocks particularly, as well as igneous, sedimentary and hydrothermal rocks. The dating uses the radioactive processes in monazite as a clock.
The uniqueness of monazite geochronology comes from the high thermal resistance of monazite, which allows age information to be retained through geological history. As monazite grows, it forms successive generations of different compositions and ages, commonly without erasing the previous ones, producing zonation patterns in the crystal. Because of this age zonation, dating should be done on individual zones rather than on the whole crystal. Also, textures of monazite crystals may record certain types of events. Therefore, direct sampling techniques with high spatial resolution are required, in order to study these tiny zones individually without damaging the textures and zonations.
The advantage of monazite geochronology is the ability to relate monazite compositions with geological processes. Finding the ages of compositional zones can mean finding the ages of geological processes.
Decay of U and Th to Pb
Monazite is a rare-earth-element phosphate mineral with a variable composition such as (Ce,La,Nd,Th,Y)PO4. It appears in small amounts as an accessory mineral in many igneous, metamorphic and sedimentary rocks. Monazite contains significant amounts of the radioactive elements Th and U, which undergo radioactive decay; these two elements are what make the mineral suitable for radiometric dating.
In the radioactive processes, three unstable parent isotopes decay to their respective stable daughter isotopes of Pb. Each follows a decay chain consisting of alpha and beta decays, in which the parent isotopes 238U, 235U and 232Th decay through a series of intermediate daughter isotopes to the stable isotopes 206Pb, 207Pb and 208Pb respectively. Each decay chain has a unique half-life, which means the daughter isotopes are generated at different rates.
The decay processes can be simplified as the following equations, which omit all the intermediate daughter isotopes:
238U → 206Pb + 8 α + 6 β−, with t1/2 = 4.47 billion years
235U → 207Pb + 7 α + 4 β−, with t1/2 = 704 million years
232Th → 208Pb + 6 α + 4 β−, with t1/2 = 14.0 billion years
where α represents an alpha particle, β− represents a beta particle, λ represents the decay constant of each chain (λ = ln 2 / t1/2) and t1/2 represents the half-life.
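The decay constants follow directly from the half-lives via λ = ln 2 / t1/2, and the remaining fraction of a parent isotope after time t is e^(−λt). A minimal sketch using the standard half-life values:

```python
# Decay constants from half-lives (lambda = ln 2 / t_half) and the
# fraction of parent remaining after time t, N/N0 = exp(-lambda * t).
# Half-lives are standard literature values.
import math

HALF_LIFE_YR = {
    "U238": 4.468e9,
    "U235": 7.04e8,
    "Th232": 1.405e10,
}

def decay_constant(isotope: str) -> float:
    return math.log(2) / HALF_LIFE_YR[isotope]

t = 1.0e9  # yr
for iso in HALF_LIFE_YR:
    lam = decay_constant(iso)
    print(f"{iso}: lambda = {lam:.3e} /yr, "
          f"remaining after 1 Gyr = {math.exp(-lam * t):.3f}")
```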
Monazite geochronology studies the ratio of parent isotopes to daughter isotopes (isotopic ratio), and calculates how much time has passed since daughter isotopes start accumulating.
Radiometric age and geological age
Radiometric age represents the time when a decay process starts; geological age represents the time when a geological event occurs. Manipulating the isotopic ratios can only give us the radiometric age. To obtain the geological age, we need to know the relationship between the two: how do geological events affect the radioactive system in monazite? The radioactive system works like a digital 'clock', and geological processes can act like replacing the battery: when a new battery is inserted, the clock starts counting from 00:00. This is what we call the age-resetting mechanism. In monazite, age resetting is caused by the loss of Pb. Pb is produced continuously by the decay of U and Th once the radioactive system (the clock) starts running, so the more Pb (and the less U and Th) the system contains, the longer the time that has elapsed. If all the Pb is suddenly removed from monazite by a geological event (replacing the battery), the age becomes zero (00:00) again. Before considering which geological events trigger Pb loss (see section: Interpretation and application), it is important to know the two mechanisms that cause Pb loss in monazite.
Mechanisms of Pb loss
Solid-state diffusion
Solid-state diffusion is the net movement of atoms in solid phase, from a region of higher concentration to one of lower concentration. It is easy to imagine diffusion in a liquid phase as ink spreading in water. Solid-state diffusion of Pb is the net exchange of Pb in the solid mineral with the external environment, which is usually a fluid. In most of the cases, Pb is transported from the mineral to the fluid, resulting in Pb loss and thus age resetting.
The rate of diffusion increases with temperature, as atoms move faster. However, as the mineral cools and the crystal structure becomes more complete, the diffusion of parent and daughter isotopes slows and finally becomes insignificant below a certain temperature. This closure temperature (Tc) depends on the crystal size and shape, the cooling rate and the diffusion coefficient, which in turn varies for each mineral and radioactive system. That is, above Tc, Pb is continuously lost and the radioactive clock is kept at zero; once the temperature falls below Tc, the system is closed and the clock starts counting.
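Dodson's classic relation formalizes this dependence: Tc = E / (R ln(A τ D0 / a²)) with τ = R Tc² / (E dT/dt), which can be solved by fixed-point iteration. The sketch below uses placeholder diffusion parameters chosen purely for illustration, not measured monazite Pb-diffusion values:

```python
# Dodson (1973) closure-temperature relation, solved by fixed-point
# iteration:
#   Tc = E / (R * ln(A * tau * D0 / a^2)),  tau = R * Tc^2 / (E * dTdt)
# E, D0, grain radius and cooling rate below are hypothetical
# illustration values, not measured monazite data.
import math

R = 8.314                 # J/(mol K)
A = 55.0                  # geometry factor for a sphere (Dodson 1973)
E = 4.2e5                 # J/mol, activation energy (hypothetical)
D0 = 1.0e-4               # m^2/s, pre-exponential factor (hypothetical)
a = 50e-6                 # m, effective grain radius (hypothetical)
dTdt = 10.0 / 3.156e13    # K/s (10 K per Myr cooling rate, assumed)

Tc = 1000.0               # K, initial guess
for _ in range(50):
    tau = R * Tc**2 / (E * dTdt)
    Tc = E / (R * math.log(A * tau * D0 / a**2))
print(f"closure temperature ~ {Tc:.0f} K ({Tc - 273.15:.0f} C)")
```

With these placeholder numbers the iteration converges in a few steps; the point of the sketch is only that faster cooling or larger grains raise Tc, exactly as the paragraph states.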
Monazite is characterized by its high Pb retention ability even at high temperatures for a prolonged period. The closure temperature of monazite in U-Th-Pb system is higher than 800 °C, much higher than the other common minerals.
Fluid-assisted dissolution-precipitation
Unlike solid-state diffusion, fluid-assisted dissolution-precipitation occurs below Tc. Interaction between the mineral phase and a coexisting fluid phase during geological events directly contributes to this process. It is a chemical reaction, driven by minimization of the Gibbs free energy of the system, in which a reactive fluid is present as a catalyst and a source of reactants.
If a geological process creates a suitable fluid and temperature, monazite dissolves along the contact with the fluid (reaction front), and reprecipitates as an altered monazite with a new chemical composition. The rates of the dissolution and reprecipitation are the same, so that the original mineral phase is always in contact with the precipitating phase, separated by only a thin layer of fluid as a reaction medium. Once the reaction is activated, it is self-continuing. The reaction front migrates towards the centre of the parent monazite, leaving behind the newly formed monazite, forming a core-rim structure.
The composition of the precipitating phase depends on the fluid composition and temperature. During most of these reactions, Pb is efficiently removed and the precipitating phase is Pb-free. Therefore, the age of the newly formed rim is reset, representing the time of this alteration.
There are basically two factors which can cause the reaction to cease. (A) Reaction ceases due to the recrystallisation of precipitating phase, removing all the fluid infiltration paths. This results in fluid inclusions in monazite. (B) Reaction ceases due to a change in the system such as the composition of fluid and monazite, making this reaction no longer reactive.
Implications for monazite geochronology
Since the diffusion of reactants between the dissolving phase and the precipitating phase is slow, the fluid is essential for providing easy transport for the reactants. Yet as the reaction proceeds, the dissolving phase and the fluid are separated by the solid precipitating phase, blocking the transport of reactants. Therefore, there must be some interconnected porosity in the precipitating phase, which allows the fluid to infiltrate and fuel the reaction front.
Most other geochronometers usually have a much lower closure temperature. Once they are subjected to a temperature higher than Tc, all age information is reset, losing the record of past geological events. In contrast, since monazite has a high Tc, even though it may experience later high-grade metamorphism at high temperatures, it is likely that the previous geological history will be preserved. Furthermore, dissolution-precipitation is usually triggered by geological events such as metamorphism, deformation and hydrothermal alteration below Tc. Each of these events writes new age information by precipitating a new domain, without erasing the older information. Therefore, monazite is likely to preserve a complete history of its growth generations.
Monazite and zircon are two minerals commonly employed in geochronology to study geological history. Both exhibit high closure temperatures, which makes them suitable for recording igneous and metamorphic events. However, they behave differently through geological history. Generally, monazite performs better at recording metamorphism (recrystallisation ages), with distinct zonation patterns in age and composition. Zircon is not as reactive as monazite during metamorphic reactions and is better at recording igneous events (cooling ages). Moreover, monazite is more suitable than zircon for dating relatively low-temperature metamorphism, for example at amphibolite facies.
Monazite zonation
Zonation is a characteristic of monazite. A single monazite grain can contain domains of distinctively different compositions and ages. These domains are widely accepted to represent episodes in geological history with monazite growth or recrystallisation. The key to monazite geochronology is to find out what geological events or environments a domain represents, by comparing its chemical composition with mineral stability and reactions. The age of the event is thus represented by the domain age.
The ideal formula of monazite is LREE(PO4); the variation in composition is mainly due to chemical substitution of the light rare earth elements (LREE) in monazite by other elements. One of the common substitutions is the exchange of LREE for Th or Ca, and of P for Si, to form huttonite (Th(SiO4)) and brabantite (CaTh(PO4)2). Since all three minerals share the same chemical structure, they are the three endmembers of a solid solution, meaning that the substitutions occur within a single solid phase. It is important to note that compositional zonation patterns may differ from element to element, and age zonation may have no relationship with compositional zonation at all (see images in the section: Analysis procedures). Thus, one needs to be very careful in linking zonations together. In natural monazite, the zonation pattern may be complex and hard to interpret. Below are some simple chemical zonation patterns and the associated interpretations. Zonation patterns associated with igneous activity are usually easy to interpret; those associated with metamorphism are more complicated.
Concentric zoning
One mode of monazite formation is crystallization from an igneous melt. The concentric zoning pattern reflects the changing composition of the melt which affects the composition of the crystallizing monazite.
Sector zoning
Sector zoning is also associated with the crystallization of monazite in a melt. However, some elements may have a tendency to crystallize onto a specific crystal face. This results in uneven growth and composition.
Core-rim zoning
Core-rim zoning is usually associated with the fluid-assisted dissolution-precipitation in metamorphic reactions, forming successive rims each with a new composition. The fluid composition and metamorphic grade (H/T) are important factors in the rim composition.
Other zoning patterns
Mottled and patchy zoning patterns are more complex zonations. The interpretations are usually not simple.
Dating approaches
Isotopic dating and chemical dating are the two typical methods used in monazite geochronology. Both methods make use of the radioactive nature of Th and U in monazite.
Isotopic dating
Isotopic dating requires measuring the isotopic concentrations of radioactive U and Th and of radiogenic Pb in monazite. By treating each decay chain in the U-Th-Pb system independently, three classic isochron equations can be obtained:
206Pb/204Pb = (206Pb/204Pb)0 + (238U/204Pb) × (e^(λ238·t) − 1)
207Pb/204Pb = (207Pb/204Pb)0 + (235U/204Pb) × (e^(λ235·t) − 1)
208Pb/204Pb = (208Pb/204Pb)0 + (232Th/204Pb) × (e^(λ232·t) − 1)
where the subscript 0 denotes the initial isotopic ratio when the system was reset, t represents the time since the system reset, and λ238, λ235 and λ232 are the decay constants of 238U, 235U and 232Th respectively.
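Given a measured ratio, an initial ratio and the parent/204Pb ratio, each isochron equation can be inverted for t in closed form, e.g. t = ln(1 + Δr/μ)/λ238 for the 238U chain, where μ is the 238U/204Pb ratio. A minimal sketch with made-up illustration values:

```python
# Solve the 238U-206Pb isochron relation for time:
#   (206Pb/204Pb)_now = (206Pb/204Pb)_0 + (238U/204Pb) * (exp(lam*t) - 1)
# so t = ln(1 + (r_now - r_0) / mu) / lam. The ratios below are
# made-up illustration values, not data from the article.
import math

LAMBDA_238 = 1.55125e-10   # /yr

def isochron_age(r_now: float, r_init: float, mu: float) -> float:
    """Age in years from present and initial Pb ratios and 238U/204Pb (mu)."""
    return math.log(1.0 + (r_now - r_init) / mu) / LAMBDA_238

age = isochron_age(r_now=25.0, r_init=18.0, mu=40.0)
print(f"age = {age / 1e6:.0f} Myr")   # ~1040 Myr for these inputs
```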
Combinations of the above equations, such as U-Th-Pb dating, U-Pb dating and Pb-Pb dating, require different levels of analytical technique and offer variable levels of precision and accuracy. Measured ages are generally reported with 2σ uncertainties.
Chemical dating/ Total Pb dating
Chemical dating requires measuring the elemental abundances of U, Th and Pb, but not their isotopes. U-Th-total Pb dating, also known as electron microprobe U-Th-Pb dating, measures the elemental abundances of the three elements with an electron microprobe and calculates the age (t) from the relation
Pb = (Th/232) × (e^(λ232·t) − 1) × 208 + (U/238.04) × 0.9928 × (e^(λ238·t) − 1) × 206 + (U/238.04) × 0.0072 × (e^(λ235·t) − 1) × 207
where Pb, Th and U are concentrations in parts per million, the factors 0.9928 and 0.0072 are the natural abundances of 238U and 235U, and λ232, λ235 and λ238 are the decay constants of 232Th, 235U and 238U respectively.
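Because t appears in three exponentials, the total-Pb equation has no closed-form inverse and is solved numerically. A minimal sketch using bisection; the sample concentrations are made-up illustration values:

```python
# Numerically solve the total-Pb (CHIME-style) age equation for t,
# given elemental Pb, Th, U in ppm. Decay constants and natural U
# isotope abundances (99.28% 238U, 0.72% 235U) are standard values;
# the sample concentrations are made-up illustration values.
import math

L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11   # /yr

def pb_ppm(t: float, th: float, u: float) -> float:
    """Radiogenic Pb (ppm) accumulated after t years."""
    return (th / 232.04 * (math.exp(L232 * t) - 1) * 208
            + u / 238.04 * 0.9928 * (math.exp(L238 * t) - 1) * 206
            + u / 238.04 * 0.0072 * (math.exp(L235 * t) - 1) * 207)

def chime_age(pb: float, th: float, u: float) -> float:
    lo, hi = 0.0, 5e9                 # bracket: 0 to 5 Gyr
    while hi - lo > 1e3:              # 1 kyr tolerance
        mid = 0.5 * (lo + hi)
        if pb_ppm(mid, th, u) < pb:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical monazite: 60,000 ppm Th, 2,000 ppm U, 2,800 ppm Pb
print(f"age ~ {chime_age(2800.0, 60000.0, 2000.0) / 1e6:.0f} Myr")
```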
For chemical dating results to be valid, the following assumptions are required:
Non-radiogenic Pb is negligible compared to radiogenic Pb.
No modification of U/Th/Pb has occurred except radioactivity.
The first assumption tends to be true since monazite is very unlikely to incorporate Pb during its growth. The non-radiogenic Pb content in many laboratory tests was found to be very low, nearly always less than 1 ppm. The most common error arising from this assumption is contamination with lead during sample preparation. The second assumption is usually justified by the concordant behavior of the mineral observed in tests. That means the system is either reset totally or unaffected totally by geological processes, there is no partial resetting of the system. Minor errors may arise due to negligible disturbance during mass transfer.
The rationale is that monazite has a high content of Th (generally 3–15% and up to 25% by weight) and U (generally hundreds of ppm and up to 5% in concentration), so Pb accumulates rapidly through radioactive decay. Within a few million years for the most Th- and U-rich grains, and a few tens of millions of years for typical compositions, the Pb content reaches a level high enough to be measured accurately by an electron microprobe.
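That timescale can be checked by running the total-Pb relation forward until the Pb content crosses a microprobe detection limit. In the sketch below, the ~100 ppm limit is an assumed typical EMPA figure, not taken from the text; the two compositions are drawn from the quoted Th-U ranges.

```python
# How long before radiogenic Pb becomes measurable by electron
# microprobe? Forward-evaluate the total-Pb relation until Pb exceeds
# an assumed ~100 ppm detection limit (a typical EMPA figure, not
# from the article). Decay constants are standard values.
import math

L238, L235, L232 = 1.55125e-10, 9.8485e-10, 4.9475e-11   # /yr

def pb_ppm(t, th, u):
    return (th / 232.04 * (math.exp(L232 * t) - 1) * 208
            + u / 238.04 * 0.9928 * (math.exp(L238 * t) - 1) * 206
            + u / 238.04 * 0.0072 * (math.exp(L235 * t) - 1) * 207)

DETECTION_PPM = 100.0   # assumed EMPA lower limit for Pb

for th, u, label in ((30000, 300, "modest (3% Th)"),
                     (250000, 50000, "rich (25% Th, 5% U)")):
    t = 0.0
    while pb_ppm(t, th, u) < DETECTION_PPM:
        t += 1e5                       # step in 0.1 Myr increments
    print(f"{label}: Pb > {DETECTION_PPM:.0f} ppm after ~{t / 1e6:.1f} Myr")
```

Under these assumptions the crossing times come out at a few million years for the richest compositions and tens of millions of years for modest ones, consistent with the accumulation argument above.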
Analysis techniques
Age and compositional zonation as well as the texture of monazite provide evidence on the successive growth of the crystal during discrete geological events. The scope of information that can be obtained largely depends on the analysis techniques employed in geochronology.
Comparison between conventional and in situ analysis
Conventional analysis
Conventionally, monazite is separated from samples by dissolution and chemical methods. Single crystals or crystal fractions are selected for dating, usually by thermal ionization mass spectrometry (TIMS), so that one age is generated for a single monazite crystal or for a group of crystals. The age information obtained in this way is inconsistent and can be inaccurate, because even a single monazite crystal contains zones of different ages. Also, the mechanical separation of monazite often destroys the associated textural and spatial information in the crystals, which is crucial for interpreting the relationships between age domains and geological environments.
In-situ analysis
For the above reasons, the demand for in-situ analysis is increasing. In-situ analysis means analyzing monazite grains within their original host rocks, without mineral separation, so that the texture and zonation patterns are kept intact and a more comprehensive geological history of the host rock can be revealed. Direct sampling, high spatial resolution and high precision are required for in-situ analysis. With technological advancement, more and more measurement tools, such as laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS) and laser microprobe mass spectrometry (LMMS), are capable of such analysis.
Analysis procedures
Shown below is a general procedure for monazite dating. The characteristics and procedures are different for each measurement tool, especially sample preparation and dating methods. Details of some common measurement tools are described in the section: Measurement tools.
Sample preparation
Monazite identification and mapping
Monazite compositional mapping
Monazite age mapping
Quantitative dating
Sample preparation
In both conventional and in-situ dating, a thin section of the rock of interest is prepared. First, a thin layer of rock is cut by a diamond saw and ground to become optically flat. Then, it is mounted on a slide made of glass or resin, and ground smooth using abrasive grit. The final sample is usually only 30 μm thick.
Monazite identification and mapping
Monazite grains are identified by a backscattered electron imaging survey and/or by electron microprobe analysis (EMPA), mapping the concentration of the characteristic Ce in monazite. The two images are usually superimposed to show the sample texture and the monazite locations at the same time.
Monazite compositional mapping
Monazite grains which show useful relationships with microtextures or host minerals are selected for compositional mapping. Major elemental and sometimes trace elemental maps are created at high magnification by electron microprobe X-ray mapping to show compositional zonation patterns. Maps of elemental Y, Th, Pb, U have proven useful in identifying compositional domains in monazite.
Monazite age mapping
Estimated ages are calculated across the compositional map by analysing the concentration of Th, Pb and U by the total-Pb dating method. The result is then used to generate an age map which approximately identifies all the age domains.
Quantitative dating
A number of spots within an age domain are selected and dated precisely with the measurement tools using isotopic methods. The results are then analysed statistically to give an accurate age for each age domain.
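A common way to pool the spot ages from one domain is an inverse-variance weighted mean, reported with its standard error and the MSWD as a measure of scatter. A minimal sketch with made-up spot ages:

```python
# Inverse-variance weighted mean of spot ages from one age domain,
# with standard error and MSWD (mean square of weighted deviates),
# as commonly reported in geochronology. Spot ages are made-up.
ages = [(512.0, 6.0), (505.0, 5.0), (517.0, 8.0), (509.0, 4.0)]  # Myr, 1-sigma

weights = [1.0 / s**2 for _, s in ages]
mean = sum(w * a for (a, _), w in zip(ages, weights)) / sum(weights)
std_err = (1.0 / sum(weights)) ** 0.5
mswd = sum(((a - mean) / s) ** 2 for a, s in ages) / (len(ages) - 1)

print(f"weighted mean = {mean:.1f} +/- {std_err:.1f} Myr (1SE), MSWD = {mswd:.2f}")
```

An MSWD near 1 indicates the spread of spot ages is consistent with their quoted uncertainties; a much larger value suggests the spots mix more than one age domain.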
Measurement techniques
The choice among conventional and in-situ analysis techniques affects the resolution, precision, detection limits and cost of monazite geochronology. Recent analytical progress on the U-Th-Pb system in natural monazite has been achieved mainly with (1) Isotope Dilution Thermal Ionization Mass Spectrometry (ID-TIMS), (2) Secondary Ion Mass Spectrometry (SIMS), (3) Laser Ablation Inductively Coupled Plasma Mass Spectrometry (LA-ICP-MS) and (4) Electron Microprobe Analysis (EMPA).
Conventional analysis
Isotope dilution thermal ionization mass spectrometry
In the 1950s, Alfred Nier developed the technique of ID-TIMS, which later became the first tool used in monazite geochronology. Since this method involves the chemical separation of monazite (isotope dilution), it is regarded as a conventional analysis technique. A U-Pb measurement generally takes several hours. The precision of the resulting dates is about 0.1%, provided the ages are concordant (i.e., not dates reflecting mixing of age zones). It is regarded as the most precise method in monazite geochronology.
Monazite mineral grains are carefully hand-picked for dating. They are spiked with a tracer solution and dissolved in HF or HCl. Using ion-exchange chemistry, U, Th and Pb are separated from the other elements. The purposes of the separation are (1) to remove potential isobaric interferences before analysis, which is necessary because TIMS combines high sensitivity with low mass resolution; and (2) to prevent other elements from impeding the ionization of the elements of interest, which would reduce signal size and precision.
The separated U, Th and Pb samples are loaded carefully onto a metal filament, usually made of Re. The filament is heated so that the elements ionize; the resulting ions are accelerated, separated according to their mass-to-charge ratio in a strong magnetic field, and measured by a detector.
The tracer solution is a solution containing known amounts of U and Pb tracer isotopes. Because of elemental fractionation, the two elements cannot be measured simultaneously by TIMS. The tracer solution is therefore used to measure ratios of sample isotopes to tracer isotopes, which are then converted into moles of sample isotopes for dating.
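The arithmetic behind isotope dilution is simplest in the idealised case of a synthetic 205Pb tracer: 205Pb does not occur in nature, so all 205Pb in the mixture comes from the spike. The sketch below is illustrative only; real data reduction also corrects for blanks, mass fractionation and the spike's minor isotopes.

def sample_pb206_moles(r_mix, r_spike, n205_spike):
    # r_mix:      measured 206Pb/205Pb ratio of the sample-spike mixture
    # r_spike:    known 206Pb/205Pb ratio of the pure spike
    # n205_spike: known moles of 205Pb added with the spike
    # Mass balance: r_mix * n205 = n206_sample + r_spike * n205,
    # so the sample-derived 206Pb is the difference times the spike amount.
    return (r_mix - r_spike) * n205_spike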
In-situ analysis
The following measurement techniques apply to in-situ analysis, which involves direct sampling of monazite grains using an incident ion beam or a laser.
Secondary ion mass spectrometry (SIMS)
SIMS is a mass spectrometry method for measuring small-scale elemental and isotopic variations in samples. Its ability to measure spots with a narrow diameter (10–40 μm) makes it a useful tool for dating small (<100 μm) mineral grains and individual domains within a single crystal. SIMS can achieve a precision of ~3%. The sensitive high-resolution ion microprobe (SHRIMP) is widely regarded as a powerful instrument of this type.
SIMS analyzes the composition of the mineral surface (the top few μm) by sputtering it with a focused primary ion beam under vacuum. The secondary ions liberated from the mineral are accelerated, measured and analyzed in the mass spectrometer. Samples are analysed in alternation with a standard of known elemental or isotopic ratios in order to determine the ratios in the sample for dating.
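A minimal sketch of this standard-sample alternation: the instrumental bias is estimated from measurements of a standard of known ratio that bracket the unknown, then divided out of the sample measurement. Function and variable names are illustrative, not any instrument's actual software.

def bracket_correct(r_sample_meas, r_std_before, r_std_after, r_std_true):
    # Instrumental fractionation factor, interpolated between the two
    # standard measurements that bracket the unknown in the run sequence.
    bias = 0.5 * (r_std_before + r_std_after) / r_std_true
    return r_sample_meas / bias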
Laser ablation inductively coupled plasma mass spectrometry (LA-ICPMS)
The application of LA-ICPMS to U-Pb geochronology started in the 1990s. Since it enables relatively rapid and inexpensive analysis at high spatial resolution, it has become the most widely used method of monazite geochronology. The precision of LA-ICPMS is limited by the variability of the standard, amounting to about 2% of a given age.
The mineral sample surface is ablated by a laser inside a sample cell. The ablated particles are collected and incorporated into a carrier gas, and the resulting aerosols are analyzed by a mass spectrometer for dating. A solid-state or gas-source laser with a short wavelength is commonly used as the ablation system in geochronology.
Electron microprobe analysis (EMPA)
EMPA is employed in monazite geochronology especially for in-situ chemical dating (total-Pb dating). Monazite's high contents of U, Th and Pb satisfy the relatively high detection limits of the technique. EMPA is therefore a high-resolution (approximately 1 μm), rapid and inexpensive method of chemical dating for resolving the growth histories of monazite. It can achieve a precision of 5–10 Myr in Pb-rich monazite, and 10–20 Myr in Pb-poor monazite.
Interpretation and application
Monazite geochronology can reveal the complex geological history recorded in monazite mineral grains. The characteristic composition and age of each domain or zone represent a past geological event of a certain age. The key challenge in monazite geochronology is to correctly relate the textures and compositions of each domain to the geological event which formed it.
Even a single monazite grain may reveal a complex history, in which geological events may be interrelated or coeval, making discrimination difficult. The sections below briefly explain how composition and age data are interpreted to link different types of events.
Crystallisation of melt
Understanding the igneous petrology of monazite is important for dating the crystallisation age of igneous rocks. Monazite is commonly present as an accessory mineral in low-CaO peraluminous granitoids, ranging from diorites and micaceous granites to pegmatites. The reason for the low CaO content is probably that melts with high CaO content promote the formation of apatite and allanite instead of monazite. Monazite also commonly forms from magmatism involving carbonatite melts, but not from mafic plutons or lavas. Such rocks usually host economic REE ore deposits, making monazite geochronology important in mining exploration.
The simplest monazite zonation showing successive crystallisation of melts is concentric zonation, in which new monazite layers crystallize rim by rim around the pre-existing core. The rims often show compositional variations due to the preferential incorporation of certain elements into the crystal lattice. For example, in a closed system, Th is preferentially incorporated into the monazite mineral structure, leaving a Th-depleted melt. Older monazite near the core of a grain is therefore rich in Th, while younger monazite contains less, resulting in a rimward decrease of Th in a concentric zoning pattern. Investigating the composition and age variation of these rims helps to constrain the timing and rate of crystallisation, as well as the composition of the melt, especially for rocks in which zircon is not present.
Monazite geochronology can also reveal igneous differentiation events such as magma mixing, in which the magma chamber evolves toward a different composition. One record of such evolution is isomorphous substitution, a form of substitution in which one element is replaced by another without changing the crystal structure. In the case of monazite, the rare earth elements are replaced by Ca and Th.
Different levels of substitution form a range of compositions, with the endmembers monazite [REE(PO4)], brabantite [CaTh(PO4)2] and huttonite [ThSiO4]. The level of substitution usually depends on the melt composition and the geological environment.
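These endmember compositions arise from two coupled, charge-balanced exchange substitutions, which can be written in standard crystal-chemical notation as:

2 REE3+ = Ca2+ + Th4+ (toward brabantite)
REE3+ + P5+ = Th4+ + Si4+ (toward huttonite)

In both reactions the total charge is conserved, which is why the substitutions can proceed without changing the crystal structure.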
Hydrothermal alteration
Hydrothermal processes are usually coupled with igneous processes. Monazite geochronology helps in studying the evolution from igneous to hydrothermal processes, and in revealing later hydrothermal alteration, which is vital in the study of ore formation.
Although it is hard to distinguish between magmatic and hydrothermal monazite, analysing their texture and distribution may help. Hydrothermal monazite tends to appear in clusters of multiple crystals, while igneous monazite tends to be homogeneously distributed throughout the rock. Hydrothermal monazite also usually has a low ThO2 content. These distinctive features can be identified readily with the textural and compositional analysis used in monazite geochronology.
Metamorphism
Monazite geochronology is generally regarded as a powerful tool for revealing metamorphic history. Metamorphism is the mineralogical and textural change of pre-existing rocks in response to a change of environment to different temperatures and pressures. It occurs at temperatures above those of diagenesis (~200 °C) and below those of wholesale melting (~800 °C). The mineral assemblage formed by metamorphism depends on the composition of the parent rock (protolith) and, more importantly, on the stability of different minerals at varying temperature and pressure (P-T). A set of mineral assemblages that form under similar temperatures and pressures is called a metamorphic facies. Most mineral changes during rock burial, uplift, hydrothermal processes and deformation are associated with metamorphic reactions.
Monazite is commonly found in many metamorphic rocks, especially in those formed from pelites and sandstones. The zonation in monazite reflects successive monazite-forming events. These may result from successive reactions along a single pressure-temperature (P-T) loop, from reactions that occur without any change in P-T, or, where a rock has experienced more than one metamorphic event, from reactions along more than one P-T loop.
The objective of monazite geochronology is to relate these monazite-forming events or reactions to P-T conditions. Time constraints can then be put on the P-T loops, forming comprehensive pressure-temperature-time (P-T-t) paths that reveal the metamorphic history of the rocks.
Monazite inclusions in metamorphic porphyroblasts and matrix
Different porphyroblasts, such as garnet and quartz, are often formed during metamorphism over different ranges of P-T. Monazite grains are often found as inclusions in porphyroblasts. Since monazite is thermally resistant and shielded by its host mineral, these inclusions are protected from age resetting, even during prolonged exposure to temperatures above 800 °C. This makes it possible to place an upper limit on the age of the porphyroblasts, and thus of the associated metamorphic events.
For example, a metamorphic rock in the Neil Bay area of northern Saskatchewan underwent high-grade (high P-T) metamorphism followed by exhumation (uplift). Garnet porphyroblasts formed during the high-grade metamorphism, while cordierite porphyroblasts formed during the subsequent exhumation. Both porphyroblasts contain monazite inclusions, dated at 1910 Ma and 1840 Ma respectively, while the matrix monazite is dated at 1800 Ma. It is therefore interpreted that the high-grade metamorphism occurred after 1910 Ma and before 1840 Ma, that the exhumation occurred after 1840 Ma, and that the final annealing (cooling and coarsening of minerals) happened at 1800 Ma.
In the same kind of setting, monazite inclusions in garnet may be younger than, older than, or similar in age to the matrix monazite. Both may even show a wide range of ages with no systematic distribution. These scenarios are interpreted as representing different metamorphic paths and conditions, giving varying or complex sequences of metamorphic reactions.
Elemental fractionation between monazite and silicates
Elemental fractionation refers to the difference between the amount of an element incorporated into the solid mineral phase and the amount left in the fluid phase. Minerals take up certain elements preferentially during growth. For example, as monazite grows, it preferentially incorporates Th into its crystal structure, leaving less Th available in the fluid for future monazite growth. Younger monazite therefore tends to have a lower Th content. This is one of the principal reasons for the compositional variation of monazite.
When considering the whole system of metamorphic rocks, there are other minerals which show elemental fractionation. The interplay between fractionation in monazite and these other minerals has a great impact on the compositional zonation of monazite. The interplay is often caused by the formation and breakdown of the minerals, which is a result of different stages in P-T paths. Dating fractionation-related zonation thus helps put time constraints on metamorphism.
The most studied system is yttrium (Y) fractionation between the phosphate monazite and the silicates garnet and xenotime. All three minerals preferentially fractionate Y, yet they form and break down at different stages of metamorphism. Xenotime has the highest fractionating power, then garnet, then monazite. In a simplified case of a clockwise P-T path involving garnet and monazite, garnet grows along the prograde path with Y continuously being incorporated, so the Y content of monazite formed at this (prograde) stage should decrease progressively with increasing grade. However, once the temperature rises to a certain point, partial melting (anatexis) of monazite occurs around its rim, releasing Y into the melt. As the system later cools and the melt crystallizes, regrown monazite has a higher Y content. Partial melting usually happens during peak metamorphism (the highest temperature in a P-T path), but age and chemical information for this stage are not recorded, since the monazite is melting. However, the ages of the last prograde growth rim (lowest Y) and the first post-anatectic growth rim (highest Y) usually bracket the time of partial melting.
Another scenario involves the formation or breakdown of garnet, influencing the Y and HREE (heavy rare earth elements) content in the environment, thus the content of growing monazite. Basically, monazites grown before garnet formation have a higher Y and HREE content than those formed during or after garnet formation. As garnet starts breaking down in the later stage of metamorphism, monazite rims rich in Y and HREE will form.
The extent of fractionation of Y between garnet and monazite is also found to be related to temperature. It is thus used as a thermometer, providing temperature constraints on the P-T path.
Deformation
Dating deformation events is an important component of tectonic study. Large-scale cross-cutting relationships between rocks, dikes and plutons provide certain but relatively broad time constraints on deformation. Monazite can be incorporated into deformation fabrics, reaction textures and fractures; thus, studying the microfabrics and microtextures of monazite offers a more straightforward method of dating a deformation event.
Deformation metamorphic reactions
Deformation events may trigger metamorphic reactions which produce monazite. For example, a metamorphic reaction associated with movement on the Legs Lake shear zone partly replaced garnet with cordierite. The reaction also generated new monazite with a high Y content, dated at around 1850 Ma. The age is interpreted as the timing of shearing.
Monazite-forming reactions may happen somewhat later than the shearing, after the rocks have re-equilibrated in response to the new pressure environment. The monazite age may therefore not correspond exactly to the shearing age, but it still provides a more precise age than other methods.
Monazite deformation fabrics
Monazite can form in fabrics caused by deformation. It may be present as elongate grains aligned within the foliation. This can be interpreted to mean either that the monazite formed before the shearing and was aligned during it, or that it formed at the same time as the shearing. The monazite age thus provides an upper limit on the shearing age: for example, if the monazite is dated at 800 Ma, the shearing cannot be older than 800 Ma.
However, it can also be interpreted that the monazite grew along the foliation of other minerals long after the shearing. This problem can be resolved by analysing the compositional domains of the monazite. Monazite lying along an existing foliation tends to grow at its two ends along the foliation. If monazite overgrowths with different compositions and ages are found at the two opposite ends of a grain, it is likely that the overgrowths post-date the shearing.
Monazite fracture
Fractures and offsets in a single monazite crystal have been observed mimicking bookshelf faulting in a larger-scale fracturing event. In one such case the fractured grain was dated at 1375 Ma, indicating that the large-scale displacement happened after this date. Moreover, new monazite may later grow into and fill the space created by the fracture, completing the time bracket. For example, if the new monazite is dated at 1200 Ma, the displacement probably occurred between 1375 and 1200 Ma.
Sedimentary events
Detrital monazite
Detrital monazite grains are produced by the weathering and erosion of pre-existing rocks and are then transported into sedimentary basins. Detrital monazite contains zonation patterns which preserve the geological history of the source region. Investigating detrital monazite in a basin not only helps in reconstructing the metamorphic, tectonic and hydrothermal history of the source region, but also in determining the depositional age, structural evolution and sediment sources of the basin. For example, the youngest age domain may represent the exhumation of the source rock, followed by immediate erosion and deposition.
Diagenetic monazite
Diagenetic monazite is monazite that formed during or after the lithification of sedimentary rocks. Monazite has been observed to grow on other minerals or in pore spaces during the diagenesis of sediments. Studying diagenetic monazite provides a good way to investigate the age and the geochemical and thermal evolution of sedimentary basins, in particular Precambrian basins with few fossil age controls.
Industrial use
U-Th-Pb data and monazite ages can be used as a valuable tool for prospecting, as has been shown for three localities in the Pisecke Hory region of the Czech Republic.
| Physical sciences | Geochronology | Earth science |
32359456 | https://en.wikipedia.org/wiki/Danyang%E2%80%93Kunshan%20Grand%20Bridge | Danyang–Kunshan Grand Bridge | The Danyang–Kunshan Grand Bridge is a viaduct on the Beijing–Shanghai High-Speed Railway. It is the longest bridge in the world.
Bridge
The bridge is located on the rail line between Shanghai and Nanjing in Jiangsu province. It is in the Yangtze River Delta, where the geography is characterized by lowland rice paddies, canals, rivers, and lakes. The bridge runs roughly parallel to the Yangtze River, to the south of the river. It passes through the northern edges of population centers (from west to east) beginning in Danyang, then Changzhou, Wuxi and Suzhou, and ending in Kunshan. There is a section over open water across Yangcheng Lake in Suzhou.
Construction was completed in 2010 and the bridge opened in 2011. Employing 10,000 people, the project took four years and cost about $8.5 billion. The bridge currently holds the Guinness World Record for the longest bridge in the world in any category.
Designer
The China Road and Bridge Corporation (CRBC), a subsidiary of China Communications Construction Company, designed and built the bridge. It is a Chinese government-funded company which was originally part of the Foreign Aid Office of the Ministry of Communications of China. The company leads major civil engineering projects in China, such as highways, railways, bridges, ports, and tunnels.
| Technology | Bridges | null |
48471223 | https://en.wikipedia.org/wiki/Drug%20class | Drug class | A drug class is a group of medications and other compounds that share similar chemical structures, act through the same mechanism of action (i.e., binding to the same biological target), have similar modes of action, and/or are used to treat similar diseases. The FDA has long worked to classify and license new medications. Its Center for Drug Evaluation and Research categorizes these medications based on both their chemical and therapeutic classes.
In several major drug classification systems, these four types of classifications are organized into a hierarchy. For example, fibrates are a chemical class of drugs (amphipathic carboxylic acids) that share the same mechanism of action (PPAR agonist), the same mode of action (reducing blood triglyceride levels), and are used to prevent and treat the same disease (atherosclerosis). However, not all PPAR agonists are fibrates, not all triglyceride-lowering agents are PPAR agonists, and not all drugs used to treat atherosclerosis lower triglycerides.
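The hierarchy is therefore one of overlapping sets rather than strict nesting. The toy sketch below, with illustrative and deliberately non-exhaustive members, makes these subset relations explicit in Python.

fibrates = {"fenofibrate", "gemfibrozil", "bezafibrate"}
ppar_agonists = fibrates | {"pioglitazone"}  # a PPAR agonist that is not a fibrate
triglyceride_lowering = ppar_agonists | {"omega-3-acid ethyl esters"}

assert fibrates < ppar_agonists            # all fibrates are PPAR agonists...
assert not ppar_agonists <= fibrates       # ...but not all PPAR agonists are fibrates
assert not triglyceride_lowering <= ppar_agonists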
A drug class is typically defined by a prototype drug: the most important, and usually the first-developed, drug within the class, used as a reference for comparison.
Comprehensive systems
Anatomical Therapeutic Chemical Classification System (ATC) – Combines classification by organ system and therapeutic, pharmacological, and chemical properties into five levels (a code sketch of the five levels follows this list).
Systematized Nomenclature of Medicine (SNOMED) – includes a section devoted to drug classification
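As a sketch of the ATC hierarchy, the five levels can be read directly off an ATC code as nested prefixes, using the published level widths (1 letter, 2 digits, 1 letter, 1 letter, 2 digits). The function name below is illustrative; the example code C10AA05 denotes a specific statin.

def atc_levels(code):
    # Level boundaries per the ATC structure: A / A00 / A00A / A00AA / A00AA00
    return {
        "level 1 (anatomical main group)":    code[:1],
        "level 2 (therapeutic subgroup)":     code[:3],
        "level 3 (pharmacological subgroup)": code[:4],
        "level 4 (chemical subgroup)":        code[:5],
        "level 5 (chemical substance)":       code[:7],
    }

# atc_levels("C10AA05") -> C, C10, C10A, C10AA, C10AA05
# (cardiovascular system -> lipid-modifying agents -> ... -> a specific statin)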
Chemical class
This type of categorisation of drugs is from a chemical perspective and categorises them by their chemical structure. Examples of drug classes that are based on chemical structures include:
Analgesic
Benzodiazepine
Cannabinoid
Cardiac glycoside
Fibrate
Gabapentinoid
Steroid
Thiazide diuretic
Triptan
β-lactam antibiotic
Mechanism of action
This type of categorisation is from a pharmacological perspective and categorises them by their biological target. Drug classes that share a common molecular mechanism of action modulate the activity of a specific biological target. The definition of a mechanism of action also includes the type of activity at that biological target. For receptors, these activities include agonist, antagonist, inverse agonist, or modulator. Enzyme target mechanisms include activator or inhibitor. Ion channel modulators include opener or blocker. The following are specific examples of drug classes whose definition is based on a specific mechanism of action:
5-alpha-reductase inhibitor
ACE inhibitor
Alpha-adrenergic agonist
Angiotensin II receptor antagonist
Beta blocker
Cholinergic
Dopaminergic
GABAergic
Incretin mimetic
Nonsteroidal anti-inflammatory drug − cyclooxygenase inhibitor
Proton-pump inhibitor
Renin inhibitor
Selective glucocorticoid receptor modulator
Serotonergic
Statin – HMG-CoA reductase inhibitor
Mode of action
This type of categorisation of drugs is from a biological perspective and categorises them by the anatomical or functional change they induce. Drug classes that are defined by common modes of action (i.e. the functional or anatomical change they induce) include:
Antifungals
Antimicrobials
Antithrombotics
Bronchodilator
Chronotrope (positive or negative)
Decongestant
Diuretic or Antidiuretic
Inotrope (positive or negative)
Therapeutic class
This type of categorisation of drugs is from a medical perspective and categorises them by the pathology they are used to treat. Drug classes that are defined by their therapeutic use (the pathology they are intended to treat) include:
Analgesics
Antibiotic
Anticancer
Anticoagulant
Antidepressant
Antidiabetic
Antiepileptic
Antipsychotic
Antispasmodic
Antiviral
Cardiovascular
Depressant
Sedative
Stimulant
Amalgamated classes
Some drug classes have been amalgamated from these three principles to meet practical needs. The class of nonsteroidal anti-inflammatory drugs (NSAIDs) is one such example. Strictly speaking, and also historically, the wider class of anti-inflammatory drugs also comprises the steroidal anti-inflammatory drugs; these were in fact the predominant anti-inflammatories during the decade leading up to the introduction of the term "nonsteroidal anti-inflammatory drugs". Because of the disastrous reputation the corticosteroids had gained in the 1950s, the new term, which signalled that an anti-inflammatory drug was not a steroid, rapidly gained currency. The drug class of "nonsteroidal anti-inflammatory drugs" (NSAIDs) is thus composed of one element ("anti-inflammatory") that designates the mechanism of action, and one element ("nonsteroidal") that separates it from other drugs with that same mechanism of action. Similarly, one might argue that the class of disease-modifying anti-rheumatic drugs (DMARDs) is composed of one element ("disease-modifying") that, albeit vaguely, designates a mechanism of action, and one element ("anti-rheumatic drug") that indicates its therapeutic use.
Disease-modifying antirheumatic drug (DMARD)
Nonsteroidal anti-inflammatory drug (NSAID)
Other systems of classification
Other systems of drug classification exist, for example the Biopharmaceutics Classification System, which classifies drugs according to their solubility and intestinal permeability.
Legal classification
For the Canadian legal classification, see Controlled Drugs and Substances Act
For the UK legal classification, see Drugs controlled by the UK Misuse of Drugs Act
For the US legal classification, see Controlled Substances Act
Pregnancy category is defined using a variety of systems by different jurisdictions
| Biology and health sciences | General concepts_2 | Health |
56630559 | https://en.wikipedia.org/wiki/Agricultural%20expansion | Agricultural expansion | Agricultural expansion describes the growth of agricultural land (arable land, pastures, etc.) especially in the 20th and 21st centuries.
Agricultural expansion is often explained as a direct consequence of the global increase in food and energy requirements due to continuing population growth (both of which have in turn been attributed to agricultural expansion itself), with an estimated 10 to 11 billion humans expected on Earth by the end of this century. It is foreseen that most of the world's non-agrarian ecosystems, terrestrial and aquatic, will be adversely affected, through habitat loss, land degradation, overexploitation, and other problems. Intensified food (and biofuel) production will particularly affect the tropical regions.
Most modern agriculture relies on intensive methods. Further expansion of the predominant farming types, which rest on a small number of highly productive crops, has already led to a significant loss of biodiversity on a global scale.
Moreover, agricultural expansion continues to be the main driver of deforestation and forest fragmentation. Large-scale commercial agriculture (primarily cattle ranching and cultivation of soya bean and oil palm) accounted for 40 percent of tropical deforestation between 2000 and 2010, and local subsistence agriculture for another 33 percent. In the light of the already occurring and potential massive ecological effects, the need for sustainable practices is more urgent than ever.
The FAO predicts that global arable land use will continue to grow between 2014 and 2050, with most of this growth projected to occur in developing countries. At the same time, arable land use in developed countries is likely to continue its decline.
A well-known example of ongoing agricultural expansion is the proliferation of palm oil production areas and the land conversion and deforestation for soybean production in South America. Today's land grabbing activities are often a consequence of growing economies' striving for agricultural land.
At the beginning of the 21st century, the palm oil industry caused massive deforestation in Borneo, with severe consequences.
| Technology | Agriculture and ecology | null |
42193218 | https://en.wikipedia.org/wiki/Human%20digestive%20system | Human digestive system | The human digestive system consists of the gastrointestinal tract plus the accessory organs of digestion (the tongue, salivary glands, pancreas, liver, and gallbladder). Digestion involves the breakdown of food into smaller and smaller components, until they can be absorbed and assimilated into the body. The process of digestion has three stages: the cephalic phase, the gastric phase, and the intestinal phase.
The first stage, the cephalic phase of digestion, begins with secretions from gastric glands in response to the sight and smell of food. This stage includes the mechanical breakdown of food by chewing, and the chemical breakdown by digestive enzymes, that takes place in the mouth. Saliva contains the digestive enzymes amylase, and lingual lipase, secreted by the salivary and serous glands on the tongue. Chewing, in which the food is mixed with saliva, begins the mechanical process of digestion. This produces a bolus which is swallowed down the esophagus to enter the stomach.
The second stage, the gastric phase, happens in the stomach. Here, the food is further broken down by mixing with gastric acid until it passes into the duodenum, the first part of the small intestine.
The third stage, the intestinal phase, begins in the duodenum. Here, the partially digested food is mixed with a number of enzymes produced by the pancreas.
Digestion is helped by the chewing of food carried out by the muscles of mastication, the tongue, and the teeth, and also by the contractions of peristalsis, and segmentation. Gastric acid, and the production of mucus in the stomach, are essential for the continuation of digestion.
Peristalsis is the rhythmic contraction of muscles that begins in the esophagus and continues along the wall of the stomach and the rest of the gastrointestinal tract. This initially results in the production of chyme which, when fully broken down in the small intestine, is absorbed as chyle into the lymphatic system. Most of the digestion of food takes place in the small intestine. Water and some minerals are reabsorbed back into the blood in the colon of the large intestine. The waste products of digestion (feces) are defecated from the rectum via the anus.
Components
There are several organs and other components involved in the digestion of food. The organs known as the accessory digestive organs are the liver, gall bladder and pancreas. Other components include the mouth, salivary glands, tongue, teeth and epiglottis.
The largest structure of the digestive system is the gastrointestinal tract (GI tract). This starts at the mouth and ends at the anus.
A major digestive organ is the stomach. Within its mucosa are millions of embedded gastric glands. Their secretions are vital to the functioning of the organ.
Most of the digestion of food takes place in the small intestine which is the longest part of the GI tract.
The largest part of the GI tract is the colon or large intestine. Water is absorbed here and the remaining waste matter is stored prior to defecation.
There are many specialised cells of the GI tract. These include the various cells of the gastric glands, taste cells, pancreatic duct cells, enterocytes and microfold cells.
Some parts of the digestive system are also part of the excretory system, including the large intestine.
Mouth
The mouth is the first part of the upper gastrointestinal tract and is equipped with several structures that begin the first processes of digestion. These include salivary glands, teeth and the tongue. The mouth consists of two regions; the vestibule and the oral cavity proper. The vestibule is the area between the teeth, lips and cheeks, and the rest is the oral cavity proper. Most of the oral cavity is lined with oral mucosa, a mucous membrane that produces a lubricating mucus, of which only a small amount is needed. Mucous membranes vary in structure in the different regions of the body but they all produce a lubricating mucus, which is either secreted by surface cells or more usually by underlying glands. The mucous membrane in the mouth continues as the thin mucosa which lines the bases of the teeth. The main component of mucus is a glycoprotein called mucin and the type secreted varies according to the region involved. Mucin is viscous, clear, and clinging. Underlying the mucous membrane in the mouth is a thin layer of smooth muscle tissue and the loose connection to the membrane gives it its great elasticity. It covers the cheeks, inner surfaces of the lips, and floor of the mouth, and the mucin produced is highly protective against tooth decay.
The roof of the mouth is termed the palate and it separates the oral cavity from the nasal cavity. The palate is hard at the front of the mouth since the overlying mucosa is covering a plate of bone; it is softer and more pliable at the back being made of muscle and connective tissue, and it can move to swallow food and liquids. The soft palate ends at the uvula. The surface of the hard palate allows for the pressure needed in eating food, to leave the nasal passage clear. The opening between the lips is termed the oral fissure, and the opening into the throat is called the fauces.
At either side of the soft palate are the palatoglossus muscles which also reach into regions of the tongue. These muscles raise the back of the tongue and also close both sides of the fauces to enable food to be swallowed. Mucus helps in the mastication of food in its ability to soften and collect the food in the formation of the bolus.
Salivary glands
There are three pairs of main salivary glands and between 800 and 1,000 minor salivary glands, all of which mainly serve the digestive process, and also play an important role in the maintenance of dental health and general mouth lubrication, without which speech would be impossible. The main glands are all exocrine glands, secreting via ducts, and all of them terminate in the mouth. The largest of these are the parotid glands, whose secretion is mainly serous. The next pair, the submandibular glands, lie underneath the jaw; they produce both serous fluid and mucus. The serous fluid is produced by serous glands within these salivary glands, which also produce lingual lipase. They produce about 70% of the oral cavity saliva. The third pair, the sublingual glands, are located underneath the tongue, and their secretion is mainly mucous, with a small percentage of saliva.
Within the oral mucosa, and also on the tongue, palate, and floor of the mouth, are the minor salivary glands; their secretions are mainly mucous and they are innervated by the facial nerve (CN VII). The glands also secrete amylase, a first stage in the breakdown of food, acting on the carbohydrate in the food to transform its starch content into maltose. There are other serous glands on the surface of the tongue that encircle taste buds on the back part of the tongue; these also produce lingual lipase. Lipase is a digestive enzyme that catalyses the hydrolysis of lipids (fats). These glands are termed Von Ebner's glands, which have also been shown to have another function in the secretion of histatins, which offer an early defense (outside of the immune system) against microbes in food when it makes contact with these glands on the tongue tissue. Sensory information can stimulate the secretion of saliva, providing the necessary fluid for the tongue to work with and also easing the swallowing of the food.
Saliva
Saliva moistens and softens food, and along with the chewing action of the teeth, transforms the food into a smooth bolus. The bolus is further helped by the lubrication provided by the saliva in its passage from the mouth into the esophagus. Also of importance is the presence in saliva of the digestive enzymes amylase and lipase. Amylase starts to work on the starch in carbohydrates, breaking it down into the simple sugars of maltose and dextrose that can be further broken down in the small intestine. Saliva in the mouth can account for 30% of this initial starch digestion. Lipase starts to work on breaking down fats. Lipase is further produced in the pancreas where it is released to continue this digestion of fats. The presence of salivary lipase is of prime importance in young babies whose pancreatic lipase has yet to be developed.
As well as its role in supplying digestive enzymes, saliva has a cleansing action for the teeth and mouth. It also has an immunological role in supplying antibodies to the system, such as immunoglobulin A. This is seen to be key in preventing infections of the salivary glands, importantly that of parotitis.
Saliva also contains a glycoprotein called haptocorrin which is a binding protein to vitamin B12. It binds with the vitamin in order to carry it safely through the acidic content of the stomach. When it reaches the duodenum, pancreatic enzymes break down the glycoprotein and free the vitamin which then binds with intrinsic factor.
Tongue
Food enters the mouth where the first stage in the digestive process takes place, with the action of the tongue and the secretion of saliva. The tongue is a fleshy and muscular sensory organ, and the first sensory information is received via the taste buds in the papillae on its surface. If the taste is agreeable, the tongue will go into action, manipulating the food in the mouth which stimulates the secretion of saliva from the salivary glands. The liquid quality of the saliva will help in the softening of the food and its enzyme content will start to break down the food whilst it is still in the mouth. The first part of the food to be broken down is the starch of carbohydrates (by the enzyme amylase in the saliva).
The tongue is attached to the floor of the mouth by a ligamentous band called the frenum, and this gives it great mobility for the manipulation of food (and speech); the range of manipulation is optimally controlled by the action of several muscles, and limited in its external range by the stretch of the frenum. The tongue's two sets of muscles are four intrinsic muscles that originate in the tongue and are involved with its shaping, and four extrinsic muscles originating in bone that are involved with its movement.
Taste
Taste is a form of chemoreception that takes place in specialised taste receptors, contained in structures called taste buds in the mouth. Taste buds are mainly on the upper surface (dorsum) of the tongue. The function of taste perception is vital to help prevent harmful or rotten foods from being consumed. There are also taste buds on the epiglottis and upper part of the esophagus. The taste buds are innervated by a branch of the facial nerve, the chorda tympani, and by the glossopharyngeal nerve. Taste messages are sent via these cranial nerves to the brain, which can distinguish between the chemical qualities of the food. The five basic tastes are referred to as those of saltiness, sourness, bitterness, sweetness, and umami. The detection of saltiness and sourness enables the control of salt and acid balance. The detection of bitterness warns of poisons; many of a plant's defences are poisonous compounds that are bitter. Sweetness guides to those foods that will supply energy; the initial breakdown of the energy-giving carbohydrates by salivary amylase creates the taste of sweetness, since simple sugars are the first result. The taste of umami is thought to signal protein-rich food. Sour tastes are acidic, which is often found in bad food. The brain has to decide very quickly whether a food should be eaten or not. It was the findings of 1991, describing the first olfactory receptors, that helped to prompt research into taste. The olfactory receptors are located on cell surfaces in the nose, where they bind to chemicals, enabling the detection of smells. It is assumed that signals from taste receptors work together with those from the nose to form an idea of complex food flavours.
Teeth
Teeth are complex structures made of materials specific to them. They are made of a bone-like material called dentin, which is covered by the hardest tissue in the body—enamel. Teeth have different shapes to deal with different aspects of mastication employed in tearing and chewing pieces of food into smaller and smaller pieces. This results in a much larger surface area for the action of digestive enzymes.
The teeth are named after their particular roles in the process of mastication—incisors are used for cutting or biting off pieces of food; canines, are used for tearing, premolars and molars are used for chewing and grinding. Mastication of the food with the help of saliva and mucus results in the formation of a soft bolus which can then be swallowed to make its way down the upper gastrointestinal tract to the stomach.
The digestive enzymes in saliva also help in keeping the teeth clean by breaking down any lodged food particles.
Epiglottis
The epiglottis is a flap of elastic cartilage attached to the entrance of the larynx. It is covered with a mucous membrane and there are taste buds on its lingual surface which faces into the mouth. Its laryngeal surface faces into the larynx. The epiglottis functions to guard the entrance of the glottis, the opening between the vocal folds. It is normally pointed upward during breathing with its underside functioning as part of the pharynx, but during swallowing, the epiglottis folds down to a more horizontal position, with its upper side functioning as part of the pharynx. In this manner it prevents food from going into the trachea and instead directs it to the esophagus, which is behind. During swallowing, the backward motion of the tongue forces the epiglottis over the glottis' opening to prevent any food that is being swallowed from entering the larynx which leads to the lungs; the larynx is also pulled upwards to assist this process. Stimulation of the larynx by ingested matter produces a strong cough reflex in order to protect the lungs.
Pharynx
The pharynx is a part of the conducting zone of the respiratory system and also a part of the digestive system. It is the part of the throat immediately behind the nasal cavity at the back of the mouth, and above the esophagus and larynx. The pharynx is made up of three parts; the lower two, the oropharynx and the laryngopharynx, are involved in the digestive system. The laryngopharynx connects to the esophagus and serves as a passageway for both air and food. Air enters the larynx anteriorly, but anything swallowed has priority and the passage of air is temporarily blocked. The pharynx is innervated by the pharyngeal plexus of the vagus nerve. Muscles in the pharynx push the food into the esophagus. The pharynx joins the esophagus at the esophageal inlet, which is located behind the cricoid cartilage.
Esophagus
The esophagus, commonly known as the foodpipe or gullet, is a muscular tube through which food passes from the pharynx to the stomach. It is continuous with the laryngopharynx. It passes through the posterior mediastinum in the thorax and enters the stomach through a hole in the thoracic diaphragm, the esophageal hiatus, at the level of the tenth thoracic vertebra (T10). Its length averages 25 cm, varying with an individual's height. It is divided into cervical, thoracic and abdominal parts.
At rest the esophagus is closed at both ends, by the upper and lower esophageal sphincters. The opening of the upper sphincter is triggered by the swallowing reflex so that food is allowed through. The sphincter also serves to prevent back flow from the esophagus into the pharynx. The esophagus has a mucous membrane and the epithelium which has a protective function is continuously replaced due to the volume of food that passes inside the esophagus. During swallowing, food passes from the mouth through the pharynx into the esophagus. The epiglottis folds down to a more horizontal position to direct the food into the esophagus, and away from the trachea.
Once in the esophagus, the bolus travels down to the stomach via rhythmic contraction and relaxation of muscles known as peristalsis. The lower esophageal sphincter is a muscular sphincter surrounding the lower part of the esophagus. The gastroesophageal junction between the esophagus and the stomach is controlled by the lower esophageal sphincter, which remains constricted at all times other than during swallowing and vomiting to prevent the contents of the stomach from entering the esophagus. As the esophagus does not have the same protection from acid as the stomach, any failure of this sphincter can lead to heartburn.
Diaphragm
The diaphragm is an important part of the body's digestive system. The muscular diaphragm separates the thoracic cavity from the abdominal cavity where most of the digestive organs are located. The suspensory muscle attaches the ascending duodenum to the diaphragm. This muscle is thought to be of help in the digestive system in that its attachment offers a wider angle to the duodenojejunal flexure for the easier passage of digesting material. The diaphragm also attaches to, and anchors the liver at its bare area. The esophagus enters the abdomen through a hole in the diaphragm at the level of T10.
Stomach
The stomach is a major organ of the gastrointestinal tract and digestive system. It is a J-shaped organ, joined to the esophagus at its upper end and to the duodenum at its lower end.
Gastric acid (informally gastric juice), produced in the stomach plays a vital role in the digestive process, and mainly contains hydrochloric acid and sodium chloride. A peptide hormone, gastrin, produced by G cells in the gastric glands, stimulates the production of gastric juice which activates the digestive enzymes. Pepsinogen is a precursor enzyme (zymogen) produced by the gastric chief cells, and gastric acid activates this to the enzyme pepsin which begins the digestion of proteins. As these two chemicals would damage the stomach wall, mucus is secreted by innumerable gastric glands in the stomach, to provide a slimy protective layer against the damaging effects of the chemicals on the inner layers of the stomach.
At the same time that protein is being digested, mechanical churning occurs through the action of peristalsis, waves of muscular contractions that move along the stomach wall. This allows the mass of food to mix further with the digestive enzymes. Gastric lipase, secreted by the chief cells in the fundic glands of the gastric mucosa, is an acidic lipase, in contrast with the alkaline pancreatic lipase. It breaks down fats to some degree, though it is not as efficient as the pancreatic lipase.
The pylorus, the lowest section of the stomach, which attaches to the duodenum via the pyloric canal, contains countless glands that secrete digestive enzymes as well as the hormone gastrin. After an hour or two, a thick semi-liquid called chyme is produced. When the pyloric sphincter, or valve, opens, chyme enters the duodenum, where it mixes further with digestive enzymes from the pancreas, and then passes through the small intestine, where digestion continues.
The parietal cells in the fundus of the stomach, produce a glycoprotein called intrinsic factor which is essential for the absorption of vitamin B12. Vitamin B12 (cobalamin), is carried to, and through the stomach, bound to a glycoprotein secreted by the salivary glands – transcobalamin I also called haptocorrin, which protects the acid-sensitive vitamin from the acidic stomach contents. Once in the more neutral duodenum, pancreatic enzymes break down the protective glycoprotein. The freed vitamin B12 then binds to intrinsic factor which is then absorbed by the enterocytes in the ileum.
The stomach is a distensible organ and can normally expand to hold about one litre of food. This expansion is enabled by a series of gastric folds in the inner walls of the stomach. The stomach of a newborn baby will only be able to expand to retain about 30 ml.
Spleen
The spleen is the largest lymphoid organ in the body but has other functions. It breaks down both red and white blood cells that are spent. This is why it is sometimes known as the 'graveyard of red blood cells'. A product of this digestion is the pigment bilirubin, which is sent to the liver and secreted in the bile. Another product is iron, which is used in the formation of new blood cells in the bone marrow. Medicine treats the spleen solely as belonging to the lymphatic system, though it is acknowledged that the full range of its important functions is not yet understood.
Liver
The liver is the second largest organ (after the skin) and is an accessory digestive gland which plays a role in the body's metabolism. The liver has many functions some of which are important to digestion. The liver can detoxify various metabolites; synthesise proteins and produce biochemicals needed for digestion. It regulates the storage of glycogen which it can form from glucose (glycogenesis). The liver can also synthesise glucose from certain amino acids. Its digestive functions are largely involved with the breaking down of carbohydrates. It also maintains protein metabolism in its synthesis and degradation. In lipid metabolism it synthesises cholesterol. Fats are also produced in the process of lipogenesis. The liver synthesises the bulk of lipoproteins. The liver is located in the upper right quadrant of the abdomen and below the diaphragm to which it is attached at one part, the bare area of the liver. This is to the right of the stomach and it overlies the gall bladder. The liver synthesises bile acids and lecithin to promote the digestion of fat.
Bile
Bile produced by the liver is made up of water (97%), bile salts, mucus and pigments, 1% fats and inorganic salts. Bilirubin is its major pigment. Bile acts partly as a surfactant, which lowers the surface tension between either two liquids or a solid and a liquid, and helps to emulsify the fats in the chyme. Food fat is dispersed by the action of bile into smaller units called micelles. The breaking down into micelles creates a much larger surface area for the pancreatic enzyme lipase to work on. Lipase digests the triglycerides, which are broken down into two fatty acids and a monoglyceride. These are then absorbed by villi on the intestinal wall. If fats are not absorbed in this way in the small intestine, problems can arise later in the large intestine, which is not equipped to absorb fats. Bile also helps in the absorption of vitamin K from the diet.
Bile is collected and delivered through the common hepatic duct. This duct joins the cystic duct, which connects it to the gallbladder, to form the common bile duct.
Bile is stored in the gallbladder for release when food is discharged into the duodenum and also after a few hours.
Gallbladder
The gallbladder is a hollow part of the biliary tract that sits just beneath the liver, with the gallbladder body resting in a small depression. It is a small organ where the bile produced by the liver is stored, before being released into the small intestine. Bile flows from the liver through the bile ducts and into the gall bladder for storage. The bile is released in response to cholecystokinin (CCK), a peptide hormone released from the duodenum. The production of CCK (by endocrine cells of the duodenum) is stimulated by the presence of fat in the duodenum.
It is divided into three sections: a fundus, body and neck. The neck tapers and connects to the biliary tract via the cystic duct, which then joins the common hepatic duct to form the common bile duct. At this junction is a mucosal fold called Hartmann's pouch, where gallstones commonly get stuck. The muscular layer of the body is of smooth muscle tissue that helps the gallbladder contract, so that it can discharge its bile into the bile duct. The gallbladder needs to store bile in a natural, semi-liquid form at all times. Hydrogen ions secreted from the inner lining of the gallbladder keep the bile acidic enough to prevent hardening. To dilute the bile, water and electrolytes from the digestive system are added. Also, salts attach themselves to cholesterol molecules in the bile to keep them from crystallising. If there is too much cholesterol or bilirubin in the bile, or if the gallbladder does not empty properly, the system can fail; this is how gallstones form, when a small piece of calcium gets coated with either cholesterol or bilirubin and the bile crystallises into a gallstone. The main purpose of the gallbladder is to store and release bile, or gall. Bile is released into the small intestine in order to help in the digestion of fats by breaking down larger molecules into smaller ones. After the fat is absorbed, the bile is also absorbed and transported back to the liver for reuse.
Pancreas
The pancreas is a major organ functioning as an accessory digestive gland in the digestive system. It is both an endocrine gland and an exocrine gland. The endocrine part secretes insulin when the blood sugar becomes high; insulin moves glucose from the blood into the muscles and other tissues for use as energy. The endocrine part releases glucagon when the blood sugar is low; glucagon allows stored sugar to be broken down into glucose by the liver in order to re-balance the sugar levels. The pancreas produces and releases important digestive enzymes in the pancreatic juice that it delivers to the duodenum. The pancreas lies below and at the back of the stomach. It connects to the duodenum via the pancreatic duct, which it joins near the bile duct's connection, where both the bile and pancreatic juice can act on the chyme released from the stomach into the duodenum. Aqueous pancreatic secretions from pancreatic duct cells contain bicarbonate ions, which are alkaline and, together with the bile, help to neutralise the acidic chyme churned out by the stomach.
The pancreas is also the main source of enzymes for the digestion of fats and proteins. Some of these are released in response to the production of cholecystokinin in the duodenum. (The enzymes that digest polysaccharides, by contrast, are primarily produced by the walls of the intestines.) The cells are filled with secretory granules containing the precursor digestive enzymes. The major proteases, the pancreatic enzymes which work on proteins, are trypsinogen and chymotrypsinogen. Elastase is also produced. Smaller amounts of lipase and amylase are secreted. The pancreas also secretes phospholipase A2, lysophospholipase, and cholesterol esterase. The precursor zymogens are inactive variants of the enzymes, which avoids the onset of pancreatitis caused by autodegradation. Once released in the intestine, the enzyme enteropeptidase present in the intestinal mucosa activates trypsinogen by cleaving it to form trypsin; further cleavage results in chymotrypsin.
Lower gastrointestinal tract
The lower gastrointestinal tract (GI tract) includes the small intestine and all of the large intestine. The intestine is also called the bowel or the gut. The lower GI tract starts at the pyloric sphincter of the stomach and finishes at the anus. The small intestine is subdivided into the duodenum, the jejunum and the ileum. The cecum marks the division between the small and large intestine. The large intestine includes the rectum and anal canal.
Small intestine
Partially digested food starts to arrive in the small intestine as semi-liquid chyme about one hour after it is eaten. The stomach is half empty after an average of 1.2 hours. After four or five hours the stomach has emptied.
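These average figures are roughly consistent with a simple first-order emptying model; the model, and the reuse of the half-emptying time from the sentence above, are an idealisation for illustration rather than an exact description of gastric physiology.

HALF_EMPTYING_TIME_H = 1.2  # average gastric half-emptying time, from above

def fraction_remaining(t_hours):
    # First-order (exponential) emptying: half of the contents leave
    # every HALF_EMPTYING_TIME_H hours.
    return 0.5 ** (t_hours / HALF_EMPTYING_TIME_H)

# fraction_remaining(4.0) is about 0.1, i.e. the stomach is roughly
# 90% empty after four hours, matching the four-to-five-hour figure.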
In the small intestine, the pH becomes crucial; it needs to be finely balanced in order to activate digestive enzymes. The chyme is very acidic, with a low pH, having been released from the stomach and needs to be made much more alkaline. This is achieved in the duodenum by the addition of bile from the gall bladder combined with the bicarbonate secretions from the pancreatic duct and also from secretions of bicarbonate-rich mucus from duodenal glands known as Brunner's glands. The chyme arrives in the intestines having been released from the stomach through the opening of the pyloric sphincter. The resulting alkaline fluid mix neutralises the gastric acid which would damage the lining of the intestine. The mucus component lubricates the walls of the intestine.
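The underlying chemistry of this step is the standard acid-base neutralisation of gastric hydrochloric acid by bicarbonate:

H+ + HCO3- → H2CO3 → H2O + CO2

Each mole of bicarbonate removes one mole of acid, raising the pH of the chyme toward the near-neutral range in which the pancreatic enzymes are active.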
When the digested food particles are reduced enough in size and composition, they can be absorbed by the intestinal wall and carried to the bloodstream. The first receptacle for this chyme is the duodenal bulb. From here it passes into the first of the three sections of the small intestine, the duodenum (the next section is the jejunum and the third is the ileum). The duodenum is the first and shortest section of the small intestine. It is a hollow, jointed C-shaped tube connecting the stomach to the jejunum. It starts at the duodenal bulb and ends at the suspensory muscle of duodenum. The attachment of the suspensory muscle to the diaphragm is thought to help the passage of food by making a wider angle at its attachment.
Most food digestion takes place in the small intestine. Segmentation contractions act to mix and move the chyme more slowly in the small intestine allowing more time for absorption (and these continue in the large intestine). In the duodenum, pancreatic lipase is secreted together with a co-enzyme, colipase to further digest the fat content of the chyme. From this breakdown, smaller particles of emulsified fats called chylomicrons are produced. There are also digestive cells called enterocytes lining the intestines (the majority being in the small intestine). They are unusual cells in that they have villi on their surface which in turn have innumerable microvilli on their surface. All these villi make for a greater surface area, not only for the absorption of chyme but also for its further digestion by large numbers of digestive enzymes present on the microvilli.
The chylomicrons are small enough to pass through the enterocyte villi and into their lymph capillaries called lacteals. A milky fluid called chyle, consisting mainly of the emulsified fats of the chylomicrons, results from the absorbed mix with the lymph in the lacteals. Chyle is then transported through the lymphatic system to the rest of the body.
The suspensory muscle marks the end of the duodenum and the division between the upper gastrointestinal tract and the lower GI tract. The digestive tract continues as the jejunum which continues as the ileum. The jejunum, the midsection of the small intestine contains circular folds, flaps of doubled mucosal membrane which partially encircle and sometimes completely encircle the lumen of the intestine. These folds together with villi serve to increase the surface area of the jejunum enabling an increased absorption of digested sugars, amino acids and fatty acids into the bloodstream. The circular folds also slow the passage of food giving more time for nutrients to be absorbed.
The last part of the small intestine is the ileum. This also contains villi, and it is here that vitamin B12, bile acids and any residual nutrients are absorbed. When the chyme is exhausted of its nutrients the remaining waste material changes into the semi-solids called feces, which pass to the large intestine, where bacteria in the gut flora further break down residual proteins and starches.
Transit time through the small intestine is an average of 4 hours. Half of the food residues of a meal have emptied from the small intestine by an average of 5.4 hours after ingestion. Emptying of the small intestine is complete after an average of 8.6 hours.
Cecum
The cecum is a pouch marking the division between the small intestine and the large intestine. It lies below the ileocecal valve in the lower right quadrant of the abdomen. The cecum receives chyme from the last part of the small intestine, the ileum, and connects to the ascending colon of the large intestine. At this junction there is a sphincter or valve, the ileocecal valve which slows the passage of chyme from the ileum, allowing further digestion. It is also the site of the appendix attachment.
Large intestine
In the large intestine, the passage of the digesting food in the colon is a lot slower, taking from 30 to 40 hours until it is removed by defecation, although the time taken varies considerably between individuals. The colon mainly serves as a site for the fermentation of indigestible matter by the gut flora. The remaining semi-solid waste is termed feces and is removed by the coordinated contractions of the intestinal walls, termed peristalsis, which propel the excreta forward to reach the rectum and exit through the anus via defecation. The wall has an outer layer of longitudinal muscles, the taeniae coli, and an inner layer of circular muscles. The circular muscle keeps the material moving forward and also prevents any back flow of waste. Also of help in the action of peristalsis is the basal electrical rhythm that determines the frequency of contractions. The taeniae coli can be seen and are responsible for the bulges (haustra) present in the colon. Most parts of the GI tract are covered with serous membranes and have a mesentery. Other more muscular parts are lined with adventitia.
Blood supply
The digestive system is supplied by three major branches of the abdominal aorta. The first of these is the celiac artery, which nourishes the foregut-derived digestive organs.
There are three main divisions – the left gastric artery, the common hepatic artery and the splenic artery.
The celiac artery supplies the liver, stomach, spleen and the upper 1/3 of the duodenum (to the sphincter of Oddi) and the pancreas with oxygenated blood. Most of the blood is returned to the liver via the portal venous system for further processing and detoxification before returning to the systemic circulation via the hepatic veins.
The next branch from the abdominal aorta is the superior mesenteric artery, which supplies the regions of the digestive tract derived from the midgut, which includes the distal 2/3 of the duodenum, jejunum, ileum, cecum, appendix, ascending colon, and the proximal 2/3 of the transverse colon.
The final branch which is important for the digestive system is the inferior mesenteric artery, which supplies the regions of the digestive tract derived from the hindgut, which includes the distal 1/3 of the transverse colon, descending colon, sigmoid colon, rectum, and the anus above the pectinate line.
Blood flow to the digestive tract reaches its maximum 20–40 minutes after a meal and lasts for 1.5–2 hours.
Nerve supply
The enteric nervous system consists of some one hundred million neurons that are embedded in the walls of the gastrointestinal tract, extending from the esophagus to the anus. These neurons are collected into two plexuses – the myenteric (or Auerbach's) plexus that lies between the longitudinal and circular smooth muscle layers, and the submucosal (or Meissner's) plexus that lies between the circular smooth muscle layer and the mucosa.
Parasympathetic innervation to the ascending colon is supplied by the vagus nerve. Sympathetic innervation is supplied by the splanchnic nerves that join the celiac ganglia. Most of the digestive tract is innervated by the two large celiac ganglia, with the upper part of each ganglion joined by the greater splanchnic nerve and the lower parts joined by the lesser splanchnic nerve. It is from these ganglia that many of the gastric plexuses arise.
Development
Early in embryonic development, the embryo has three germ layers and abuts a yolk sac. During the second week of development, the embryo grows and begins to surround and envelop portions of this sac. The enveloped portions form the primitive gut, the basis for the adult gastrointestinal tract. Sections of this gut begin to differentiate into the organs of the gastrointestinal tract, such as the esophagus, stomach, and intestines.
During the fourth week of development, the stomach rotates. The stomach, originally lying in the midline of the embryo, rotates so that its body is on the left. This rotation also affects the part of the gastrointestinal tube immediately below the stomach, which will go on to become the duodenum. By the end of the fourth week, the developing duodenum begins to sprout a small outpouching on its right side, the hepatic diverticulum, which will go on to become the biliary tree. Just below this is a second outpouching, known as the cystic diverticulum, that will eventually develop into the gallbladder.
Clinical significance
Each part of the digestive system is subject to a wide range of disorders many of which can be congenital. Mouth diseases can also be caused by pathogenic bacteria, viruses, fungi and as a side effect of some medications. Mouth diseases include tongue diseases and salivary gland diseases. A common gum disease in the mouth is gingivitis which is caused by bacteria in plaque. The most common viral infection of the mouth is gingivostomatitis caused by herpes simplex. A common fungal infection is candidiasis commonly known as thrush which affects the mucous membranes of the mouth.
There are a number of esophageal diseases such as the development of Schatzki rings that can restrict the passageway, causing difficulties in swallowing. They can also completely block the esophagus.
Stomach diseases are often chronic conditions and include gastroparesis, gastritis, and peptic ulcers.
A number of problems including malnutrition and anemia can arise from malabsorption, the abnormal absorption of nutrients in the GI tract. Malabsorption can have many causes ranging from infection, to enzyme deficiencies such as exocrine pancreatic insufficiency. It can also arise as a result of other gastrointestinal diseases such as coeliac disease. Coeliac disease is an autoimmune disorder of the small intestine. This can cause vitamin deficiencies due to the improper absorption of nutrients in the small intestine. The small intestine can also be obstructed by a volvulus, a loop of intestine that becomes twisted enclosing its attached mesentery. This can cause mesenteric ischemia if severe enough.
A common disorder of the bowel is diverticulitis. Diverticula are small pouches that can form inside the bowel wall, which can become inflamed to give diverticulitis. This disease can have complications if an inflamed diverticulum bursts and infection sets in. Any infection can spread further to the lining of the abdomen (peritoneum) and cause potentially fatal peritonitis.
Crohn's disease is a common chronic inflammatory bowel disease (IBD), which can affect any part of the GI tract, but it mostly starts in the terminal ileum.
Ulcerative colitis, an ulcerative form of colitis, is the other major inflammatory bowel disease, and is restricted to the colon and rectum. Both of these IBDs carry an increased risk of the development of colorectal cancer. Ulcerative colitis is the most common of the IBDs.
Irritable bowel syndrome (IBS) is the most common of the functional gastrointestinal disorders. These are idiopathic disorders that the Rome process has helped to define.
Giardiasis is a disease of the small intestine caused by the protist parasite Giardia lamblia. This does not spread but remains confined to the lumen of the small intestine. It can often be asymptomatic, but just as often can be indicated by a variety of symptoms. Giardiasis is the most common pathogenic parasitic infection in humans.
There are diagnostic tools, mostly involving the ingestion of barium sulphate, to investigate disorders of the GI tract. These are known as the upper gastrointestinal series, which enables imaging of the pharynx, larynx, esophagus, stomach and small intestine, and the lower gastrointestinal series, for imaging of the colon.
In pregnancy
Gestation can predispose for certain digestive disorders. Gestational diabetes can develop in the mother as a result of pregnancy and while this often presents with few symptoms it can lead to pre-eclampsia.
History
In the early 11th century, the Islamic medical philosopher Avicenna wrote extensively on many subjects including medicine. Forty of these treatises on medicine survive, and in the most famous one titled the Canon of Medicine he discusses "rising gas". Avicenna believed that digestive system dysfunction was responsible for the overproduction of gas in the gastrointestinal tract. He suggested lifestyle changes and a compound of herbal drugs for its treatment.
In 1497, Alessandro Benedetti viewed the stomach as an unclean organ separated off by the diaphragm. This view of the stomach and intestines as being base organs was generally held until the mid-17th century.
In the Renaissance of the 16th century, Leonardo da Vinci produced some early drawings of the stomach and intestines. He thought that the digestive system aided the respiratory system. Andreas Vesalius provided some early anatomical drawings of the abdominal organs in the 16th century.
In the middle of the 17th century, a Flemish physician Jan Baptist van Helmont offered the first chemical account of digestion which was later described as being very close to the later conceptualised enzyme.
In 1653, William Harvey described the intestines in terms of their length, their blood supply, the mesenteries, and fat.
In 1823, William Prout discovered hydrochloric acid in the gastric juice. In 1895, Ivan Pavlov described its secretion as being stimulated by a neurologic reflex, with the vagus nerve having a crucial role. An association of histamine with this secretion was suggested later; in 1916, Popielski described histamine as a gastric secretagogue of hydrochloric acid.
William Beaumont was an army surgeon who in 1825, was able to observe digestion as it took place in the stomach. This was made possible by experiments on a man with a stomach wound that did not fully heal leaving an opening into the stomach. The churning motion of the stomach was described among other findings.
In the 19th century, it was accepted that chemical processes were involved in the process of digestion. Physiological research into secretion and the gastrointestinal tract was pursued with experiments undertaken by Claude Bernard, Rudolph Heidenhain and Ivan Pavlov.
The rest of the 20th century was dominated by research into gut hormones. The first to be discovered was secretin, by William Bayliss and Ernest Starling in 1902, with ensuing results from John Edkins, who in 1905 first suggested gastrin, its structure being determined in 1964. Andre Latarjet and Lester Dragstedt found a role for acetylcholine in the digestive system. In 1972, H2 receptor antagonists, which block the action of histamine and decrease the production of hydrochloric acid, were described by James Black. In 1980, proton pump inhibitors were described by Sachs. In 1983, the role of Helicobacter pylori in the formation of ulcers was described by Barry Marshall and Robin Warren.
Art historians have often noted that banqueters on iconographic records of ancient Mediterranean societies almost always appear to be lying down on their left sides. One possible explanation could lie in the anatomy of the stomach and in the digestive mechanism. When lying on the left, the food has room to expand because the curvature of the stomach is enhanced in that position.
| Biology and health sciences | Human anatomy | Health |
33962325 | https://en.wikipedia.org/wiki/Markarian%20501 | Markarian 501 | Markarian 501 (or Mrk 501) is a galaxy with a spectrum extending to the highest energy gamma rays. It is a blazar or BL Lac object, which is an active galactic nucleus with a jet that is shooting towards the Earth. The object has a redshift of z = 0.034.
Mrk 501 is an extremely variable source of gamma rays, undergoing violent outbursts. During an outburst in 1997, it was the brightest object in the sky in the very-high-energy gamma ray region of the spectrum, at energies above 10¹¹ eV (100 GeV).
The galaxy hosting the blazar was studied and catalogued by Benjamin Markarian in 1974. It was first determined to be a very high energy gamma ray emitter in 1996 by John Quinn at the Whipple Observatory.
Galaxy
The elliptical galaxy is located in the constellation of Hercules at right ascension 16h 53.9m and declination +39° 45'. Its visible size appears to be 1.2 by 1 minute of arc.
Gamma rays
The gamma rays from Mrk 501 are extremely variable, undergoing violent outbursts.
The gamma ray spectrum of Mrk 501 shows two humps, one below 1 keV, which can be considered to be X-rays, and the other above 1 TeV. During flares and outbursts the peaks increase in power and frequency. Flares lasting 20 minutes with rise times of 1 minute have been measured by MAGIC. In these flares the higher-energy gamma rays (of 1.2 TeV) were delayed 4 minutes relative to the 0.25 TeV gamma rays. This delay has led to various theories, including the idea that spacetime has a foamy quantum texture at the smallest scales. Such foam would create a slight variation in the speed of light between higher-energy gamma rays and lower-energy radiation such as radio waves and visible light. Such a variation would contradict Lorentz invariance, but could provide a clue for unification theory. Observations of Mrk 501 and Mrk 421 by Floyd Stecker of NASA's Goddard Space Flight Center found no evidence of a violation of Lorentz invariance. The galaxy is also variable in visible light, between magnitudes 14.5 and 13.6.
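As a rough scale check, the four-minute lag over the light-travel distance to Mrk 501 implies only a minuscule fractional difference in photon speed. A minimal sketch, assuming a simple Hubble-law distance d = cz/H0 with H0 = 70 km/s/Mpc (an illustrative choice, not a value from the text):

```python
# Back-of-the-envelope estimate of the fractional speed difference implied
# by the 4-minute lag between 0.25 TeV and 1.2 TeV photons from Mrk 501.
# Assumes a Hubble-law distance d = c*z/H0, adequate at z = 0.034.

c = 2.998e8              # speed of light, m/s
H0 = 70e3 / 3.086e22     # Hubble constant (70 km/s/Mpc), in 1/s
z = 0.034

distance = c * z / H0    # ~4.5e24 m (~150 Mpc)
travel_time = distance / c   # ~1.5e16 s
delay = 4 * 60           # observed lag, s

print(f"fractional speed difference ~ {delay / travel_time:.1e}")
# ~1.6e-14: the cosmological baseline is what makes a 4-minute lag
# sensitive to such a tiny energy-dependent effect.
```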
During the discovery observations, flashes were seen at an average rate of one every seven minutes. Cosmic rays (that is, massive particles, as opposed to photons) were ruled out by the shape and size of the flashes, which are small and elliptical for gamma rays. The flux of photons above 300 GeV at that time in 1995 was 8.1 ± 1.5 × 10⁻¹² cm⁻² s⁻¹.
Black hole
Blazars are likely to originate from matter falling into a black hole, and possibly a binary black hole. The velocity dispersion (the statistical spread of stellar velocities along the line of sight) observed in the galaxy is 372 km/s, which predicts a black hole mass of (0.9–3.4) × 10⁹ M⊙. However, the dispersion has also been measured as 291 and 270 km/s, so the central mass may be less. A 23-day variability suggested that an object may be orbiting the central black hole with a 23-day period.
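For context, mass estimates of this kind follow from the M–sigma relation. A minimal sketch using one published calibration of that relation (the log-linear fit of Gültekin et al. 2009, chosen here for illustration rather than cited in the text):

```python
import math

# Black-hole mass from stellar velocity dispersion via the M-sigma relation,
# log10(M/M_sun) = alpha + beta * log10(sigma / 200 km/s).
# alpha = 8.12, beta = 4.24 follow the Gultekin et al. (2009) calibration;
# other published fits give somewhat different masses.

def m_sigma(sigma_kms, alpha=8.12, beta=4.24):
    """Black-hole mass in solar masses from dispersion in km/s."""
    return 10 ** (alpha + beta * math.log10(sigma_kms / 200.0))

for sigma in (372, 291, 270):
    print(f"sigma = {sigma} km/s -> M_BH ~ {m_sigma(sigma):.1e} M_sun")
# 372 km/s gives ~1.8e9 M_sun, inside the quoted (0.9-3.4)e9 range;
# the lower dispersions give ~5-7e8 M_sun, consistent with the note
# that the central mass may be less.
```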
Jet
With very-long-baseline interferometry, the fine detail of radio waves can be seen down to milliarcsecond (mas) resolution. A central very bright single point called the core is observed. From the core an extremely high-speed blast of plasma emerges in a narrow cone shape as a one-sided jet.
After 30 milliarcseconds, the jet, which is 300 pc long, does a 90° turn and fans out. The inner jet before the kink shows bright edges or a limb-brightened structure less than 10 mas wide. This is probably due to a fast-moving central part to the jet, combined with slower edges.
Normally, there would be jets of gas shooting out in opposite directions. The observed jet is the one that faces the Earth and projects plasma towards it. There is also a jet heading away from Earth called a counter jet. Close into the core, this counter jet is so much dimmer than the main jet that it is invisible in radio waves.
The brightness of the counter jet is less than the main jet by a factor of 1250. This implies that the jet is relativistic with Γ about 15 (that is, the plasma is moving at 99.8% of the speed of light) and at an angle between 15° and 25° from the line of sight from the Earth. At 408 MHz, the power level is 1.81 Jy, although this is variable.
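The step from brightness ratio to speed and viewing angle follows from relativistic Doppler beaming. A minimal sketch, assuming the standard continuous-jet beaming formula with a flat spectrum (the exponent and spectral index are illustrative assumptions):

```python
import math

def jet_counterjet_ratio(gamma, theta_deg, alpha=0.0, n=2):
    """Jet/counter-jet flux ratio for a relativistic twin jet.

    R = ((1 + beta*cos(theta)) / (1 - beta*cos(theta)))**(n + alpha),
    with n = 2 for a continuous jet (n = 3 for discrete blobs).
    """
    beta = math.sqrt(1.0 - 1.0 / gamma**2)
    mu = math.cos(math.radians(theta_deg))
    return ((1 + beta * mu) / (1 - beta * mu)) ** (n + alpha)

for theta in (15, 18, 20, 25):
    print(f"theta = {theta} deg -> R ~ {jet_counterjet_ratio(15, theta):.0f}")
# R ~ 2940, 1460, 960, 400 respectively: a ratio of ~1250 with
# Gamma = 15 corresponds to a viewing angle of roughly 18-19 degrees,
# inside the 15-25 degree range quoted above.
```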
Beyond 10 kpc from the core, the counter jet becomes visible, showing that the jets have become non-relativistic; that is, plasma is no longer moving close to the speed of light. The symmetrical radio emission extends to 70", which corresponds to 120 to 200 kpc.
Blazar research
In March 2022, scientists led by Ioannis Liodakis studied Markarian 501 during an average state while discerning how blazars make such a bright light using Imaging X-ray Polarimetry Explorer (IXPE). The researchers were "able to show that the particles in these jets are supercharged by shock fronts, resolving a longstanding 'unanswered question' about the dynamics of these brilliant objects."
Catalog entries
Early designations were 4C 39.49 and B2 1652+39.
The Uppsala General Catalogue of Galaxies lists this as UGC 10599.
Other designations: B1652+39 or 1H1652+398 or TeV J1653+397.
| Physical sciences | Notable galaxies | Astronomy |
33965039 | https://en.wikipedia.org/wiki/Kepler-22b | Kepler-22b | Kepler-22b (also known by its Kepler Object of Interest designation KOI-087.01) is an exoplanet orbiting within the habitable zone of the Sun-like star Kepler-22. It is located about 600 light-years from Earth in the constellation of Cygnus. It was discovered by NASA's Kepler Space Telescope in December 2011 and was the first known transiting planet to orbit within the habitable zone of a Sun-like star, where liquid water could exist on the planet's surface. Kepler-22 is too dim to be seen with the naked eye.
Kepler-22b's radius is roughly twice that of Earth. Its mass and surface composition are unknown. However, an Earth-like composition for the planet is believed to be unlikely; it is more likely to have a volatile-rich composition with a liquid or gaseous outer shell. The only parameters of the planet's orbit that are currently available are its orbital period (about 290 days) and its inclination (approximately 90°). Evidence suggests that the planet has a moderate surface temperature, assuming that the surface is not subject to extreme greenhouse heating. In the absence of an atmosphere, its equilibrium temperature (assuming an Earth-like albedo) would be approximately 262 K (−11 °C), slightly higher than Earth's 255 K (−18 °C).
The planet's first transit was observed on 12 May 2009. Confirmation of the existence of Kepler-22b was announced on December 5, 2011.
Physical characteristics
Mass, radius and temperature
Kepler-22b's radius was initially thought to be 2.4 times that of Earth, but has since been revised to about 2.1 times Earth's radius. Its mass and surface composition remain unknown, with only some rough estimates established: at the time of the discovery announcement, it was known to have less than 124 Earth masses at the 3-sigma confidence limit, and less than 36 Earth masses at 1-sigma confidence. The adopted model in Kipping et al. (2013) does not reliably detect the mass (the upper limit is 52.8 Earth masses). Since then, the upper limit has been constrained further.
Kepler-22b, dubbed by scientists as a 'water world', might be an 'ocean-like' planet. It might also be comparable to the water-rich planet Gliese 1214 b although Kepler-22b, unlike Gliese 1214 b, is in the habitable zone. An Earth-like composition is ruled out to at least 1-sigma uncertainty by radial velocity measurements of the system. It is thus likely to have a more volatile-rich composition with a liquid or gaseous outer shell; this would make it similar to Kepler-11f, one of the smallest known gas planets. Natalie Batalha, one of the scientists on the Kepler Space Telescope project, has speculated, "If it is mostly ocean with a small rocky core, it's not beyond the realm of possibility that life could exist in such an ocean". This possibility has spurred SETI to perform research on top candidates for extraterrestrial life.
In the absence of an atmosphere, its equilibrium temperature (assuming an Earth-like albedo) would be approximately 262 K (−11 °C), compared with Earth's 255 K (−18 °C).
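A minimal sketch of where such figures come from, using the standard equilibrium-temperature formula T_eq = T_star · sqrt(R_star / 2a) · (1 − A)^(1/4). The stellar temperature (5,518 K), radius (0.98 R_sun) and orbital distance (0.85 AU) used for Kepler-22b below are approximate values assumed for illustration:

```python
import math

R_SUN = 6.957e8   # solar radius, m
AU = 1.496e11     # astronomical unit, m

def t_eq(t_star, r_star, a, albedo=0.3):
    """Equilibrium temperature (K) for a planet with Earth-like albedo."""
    return t_star * math.sqrt(r_star / (2.0 * a)) * (1.0 - albedo) ** 0.25

print(f"Earth:      {t_eq(5772, R_SUN, AU):.0f} K")                # ~255 K
print(f"Kepler-22b: {t_eq(5518, 0.98 * R_SUN, 0.85 * AU):.0f} K")  # ~261 K
# A ~15% smaller orbit around a slightly cooler, smaller star yields
# an equilibrium temperature only a few kelvin above Earth's.
```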
Host star
The host star, Kepler-22, is a G-type star that is 3% less massive than the Sun and 2% smaller in volume. It has a surface temperature of about 5,518 K, compared with the Sun's 5,778 K. The star is about 4 billion years old; in comparison, the Sun is 4.6 billion years old.
The apparent magnitude of Kepler-22 is 11.5, which means it is too dim to be seen with the naked eye.
Orbit
The only parameters of the planet's orbit that are currently available are its orbital period, which is about 290 days, and its inclination, which is approximately 90°. From Earth, the planet appears to make a transit across the disk of its host star.
In order to obtain further information about the details of the planet's orbit, other methods of planetary detection, such as the radial velocity method, need to be used. While such methods have been applied to the planet since its discovery, they have not yet yielded an accurate value for its eccentricity, and so (as of 2023) only an upper limit on the eccentricity has been set by astronomers.
Habitability
The average distance from Kepler-22b to its host star Kepler-22 is about 15% less than the distance from Earth to the Sun, but the luminosity (light output) of Kepler-22 is about 25% less than that of the Sun. This combination of a shorter average distance from the star and a lower stellar luminosity is consistent with a moderate surface temperature at that distance, assuming that the surface is not subject to extreme greenhouse heating.
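The combination can be checked directly, since relative insolation scales as luminosity over distance squared. Using the figures from the paragraph above:

```python
# Stellar flux at Kepler-22b relative to Earth's insolation,
# taking ~25% lower luminosity and ~15% smaller orbital distance.
flux_relative = 0.75 / 0.85**2
print(f"{flux_relative:.2f}")   # ~1.04, within a few percent of Earth's
```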
If Kepler-22b moves in a highly elliptical orbit, its surface temperature variance will be very high.
Climate
Scientists can estimate the possible surface conditions as follows:
In the absence of an atmosphere, its equilibrium temperature (assuming an Earth-like albedo) would be approximately 262 K (−11 °C), compared to Earth's 255 K (−18 °C).
If the atmosphere provides a greenhouse effect similar in magnitude to the one on Earth, it would have an average surface temperature of around 22 °C (72 °F).
If the atmosphere has a greenhouse effect similar in magnitude to the one on Venus, it would have an average surface temperature of around 460 °C (860 °F).
Recent estimates suggest that Kepler-22b has more than a 95% probability of being located in the empirical habitable zone defined by the recent Venus and early Mars limits (based on estimates of when these planets may have supported habitable conditions), but less than a 5% chance of being located in the conservative circumstellar habitable zone (estimated from a 1D cloud-free radiative-convective model).
Limits on satellites
The Hunt for Exomoons with Kepler (HEK) project has studied the Kepler photometry of the planet, to find any evidence of transit timing and duration variations that may be caused by an orbiting satellite. Such variations were not found, ruling out the existence of any satellites of Kepler-22b with a mass greater than 0.54 Earth masses.
Discovery and observation
The planet's first transit in front of its host star was observed on Kepler's third day of scientific operations, on 12 May 2009. The third transit was detected on 15 December 2010. Additional confirmation data was provided by the Spitzer Space Telescope and ground-based observations. Confirmation of the existence of Kepler-22b was announced on 5 December 2011.
Past transit dates
| Physical sciences | Notable exoplanets | Astronomy |
46816256 | https://en.wikipedia.org/wiki/Australopithecus%20deyiremeda | Australopithecus deyiremeda | Australopithecus deyiremeda is an extinct species of australopithecine from Woranso–Mille, Afar Region, Ethiopia, about 3.5 to 3.3 million years ago during the Pliocene. Because it is known only from three partial jawbones, it is unclear if these specimens indeed represent a unique species or belong to the much better-known A. afarensis. A. deyiremeda is distinguished by its forward-facing cheek bones and small cheek teeth compared to those of other early hominins. It is unclear if a partial foot specimen exhibiting a dextrous big toe (a characteristic unknown in any australopithecine) can be assigned to A. deyiremeda. A. deyiremeda lived in a mosaic environment featuring both open grasslands and lake- or riverside forests, and anthropologist Fred Spoor suggests it may have been involved in the Kenyan Lomekwi stone-tool industry typically assigned to Kenyanthropus. A. deyiremeda coexisted with A. afarensis, and they may have exhibited niche partitioning to avoid competing with each other for the same resources, such as by relying on different fallback foods during leaner times.
Taxonomy
Australopithecus deyiremeda was first proposed in 2015 by Ethiopian palaeoanthropologist Yohannes Haile-Selassie and colleagues based on jawbone fossils from the Burtele and Waytaleyta areas of Woranso–Mille, Afar Region, Ethiopia. The holotype specimen, BRT-VP-3/1, a young adult left maxilla with all teeth except the first incisor and third molar, was discovered on 4 March 2011 by local resident Mohammed Barao. The paratype specimens are BRT-VP-3/14, a complete adult mandibular body with all incisors, and WYT-VP-2/10, an adult right toothless jawbone, both discovered by Ethiopian fossil hunter Ato Alemayehu Asfaw. A right maxilla fragment with the fourth premolar, BRT-VP-3/37, was found east of BRT-VP-3/14, and it is unclear if these belonged to the same individual. The sediments were radiometrically dated to 3.5–3.3 million years ago, in the Middle Pliocene.
The describers believed the remains were distinct enough from the contemporary and well-known A. afarensis to warrant species distinction, and A. deyiremeda is counted among a growing diversity of Late Pliocene australopithecines alongside A. afarensis, A. bahrelghazali and Kenyanthropus platyops. The name deyiremeda derives from the Afar language meaning "close relative" because, existing so early in time, the discoverers considered A. deyiremeda to have been closely related to future australopiths. However, though the proposed distinguishing characteristics are apparently statistically significant, given how few specimens of A. deyiremeda exist, it is unclear if this indeed warrants species distinction or if these specimens simply add to the normal range of variation for A. afarensis. If it is a valid species, then it could possibly indicate some A. afarensis specimens are currently classified into the wrong species.
Haile-Selassie and colleagues noted that, though it shares many similarities with the robust Paranthropus, it may not have been closely related because it lacked enlarged molars which are characteristic of Paranthropus.
Anatomy
Despite being so early, the jaws of A. deyiremeda show some similarities to those of the later Homo and Paranthropus. The jaw jutted out somewhat (prognathism) at perhaps a 39-degree angle, similar to most other early hominins. The cheekbone is positioned more forward than most A. afarensis specimens. Unlike A. afarensis but like Paranthropus, the walls of the cheek teeth are inclined rather than coming straight up. The upper canines are proportionally smaller than those of other Australopithecus, but are otherwise morphologically similar to those of A. anamensis. The cheek teeth are quite small for an early hominin, and the first molar is the smallest reported for an adult Pliocene hominin. Nonetheless, the enamel was still thick as other early hominins, and the enamel on the second molar is quite high and more similar to P. robustus. The jawbone, though small, is robust and more similar to that of Paranthropus.
In 2012, a 3.4-million-year-old partial foot, BRT-VP-2/73, was recovered from Woranso–Mille. It strongly diverges from contemporary and later hominins by having a dextrous big toe like the earlier Ardipithecus ramidus, and consequently has not been assigned to a species. Though more diagnostic facial elements have since been discovered in the area, they are not clearly associated with the foot.
Palaeoecology
A. deyiremeda features a strong jawbone and thick enamel, consistent with a diet of tough sedges and similar foods which australopiths are generally thought to have primarily subsisted upon. The enamel on the upper incisor, canine and first premolar exhibits hypoplasia, probably caused by a period of malnutrition or illness during enamel growth in infancy while the teeth were still growing. A. deyiremeda was likely a generalist feeder. A. deyiremeda and A. afarensis may have exhibited niche partitioning given they cohabited the same area. That is, given dental and chewing differences, they may have had different dietary and/or habitat preferences, unless these differences were simply a product of genetic drift. Much like chimpanzees and gorillas which have more or less the same diet and inhabit the same areas, A. deyiremeda and A. afarensis may have shared typical foods when in abundance, and resorted to different fallback foods in times of food scarcity.
The Lomekwi stone-tool industry from northern Kenya is loosely associated with the Middle Pliocene Kenyanthropus based on an upper jaw fragment assigned to Kenyanthropus on the basis of forward cheekbones, three-rooted premolars, and a small first molar. Since these features are also exhibited in A. deyiremeda, anthropologist Fred Spoor suggested that A. deyiremeda was actually present at the site. Dated to 3.3 million years ago, the Lomekwian is the earliest known stone-tool culture. These knappers flaked off pieces of cores made of basalt, phonolite and trachyphonolite. They held the core with one hand and struck it vertically with a hammerstone, which is a simple process, though more complex than the tool-making behaviours of non-human primates.
The Middle Pliocene of Woranso–Mille features grazing impalas, alcelaphins, and elephants, as well as browsing giraffes, tragelaphins, and forest-dwelling monkeys. The feet of the bovid species do not seem to be specialised for any particular type of ground (such as wet, pliable, or hard), and the teeth of hoofed species indicates an equal abundance of grazers, browsers and mixed feeders. These suggest a mixed environment which features both open grasslands as well as forests probably growing on a lake- or riverside. Similar mosaic landscapes were inhabited by A. anamensis and A. afarensis who seem to have had no preferred environment.
| Biology and health sciences | Australopithecines | Biology |
40793094 | https://en.wikipedia.org/wiki/Cryptopidae | Cryptopidae | The Cryptopidae are a family of scolopendromorph centipedes. Cryptopids are blind (lacking ocelli) and possess 21 pairs of legs. The genus Cryptops is the numerically largest in the family, comprising over 150 species worldwide.
Classification
The following genera may be included:
Cryptops Leach, 1814
Eremops Bollman, 1893
Mimops Kraepelin, 1903
Paracryptops Pocock, 1891
Tonkinodentus Schileyko, 1992
Trigonocryptops Verhoeff, 1906
The genera Plutonium and Theatops Newport, 1844, formerly classified in the cryptopid subfamily Plutoniuminae, are now placed in the recently elevated family Plutoniumidae.
| Biology and health sciences | Myriapoda | Animals |
32402085 | https://en.wikipedia.org/wiki/Tuber%20%28fungus%29 | Tuber (fungus) | Tuber is a genus in the fungal family Tuberaceae, with estimated molecular dating to the end of the Jurassic period (156 Mya). It includes several species of truffles that are highly valued as delicacies.
New discoveries
In 2015, a new species Tuber petrophilum (close relative to Tuber melanosporum and Tuber brumale) was discovered in the Dinaric Alps (Southeastern Europe, Serbia). In 2016, two new species were discovered in Brazil. Tuber floridanum (with the commercial name Trufa Sapucaya meaning 'The last Guarany breath') and Tuber brennemanii grow in association with pecan rootlets.
| Biology and health sciences | Edible fungi | Plants |
45492650 | https://en.wikipedia.org/wiki/Pornhub | Pornhub | Pornhub is a Canadian-owned internet pornography video-sharing website, one of several owned by adult entertainment conglomerate Aylo. Pornhub is the 16th-most-visited website in the world and the most-visited adult website.
The site allows visitors to view pornographic videos from various categories, including professional and amateur pornography, and to upload and share their own videos. Content can be flagged if it violates the website's terms of service. The site also hosts the Pornhub Awards annually.
In December 2020, following a New York Times exposé of non-consensual pornography and sex trafficking, payment processors Mastercard and Visa cut their services to Pornhub. Pornhub then removed all videos uploaded by unverified users, reducing the total content from 13 million to 4 million videos. A 2023 documentary, Money Shot: The Pornhub Story, covers the opposition to Pornhub and the views of some pornographic performers.
History
Pornhub was launched on 25 May 2007 by web developer Matt Keezer as a website within the company Interhub. In March 2010, the company was purchased by Fabian Thylmann as part of the Manwin conglomerate (now known as Aylo). In 2013, Thylmann sold his stake in the company to Feras Antoon and David Tassillo, who served until 2022 as its CEO and COO, respectively.
In an effort to introduce quality curation to the site, the company launched a service called "Pornhub Select" in October 2013. Pornhub also launched a content curation website on 9 October 2013 called "PornIQ", which used an algorithm to create personalized video playlists for the viewer based on a number of factors, including their porn preferences, the time of day they are visiting the website, what part of the world they live in, and the amount of time the viewer has available. David Holmes of PandoDaily noted that Pornhub's data-intensive approach to playlists set it apart from previous attempts at user-generated playlists, and marked a new trend in the switch from content searching to passive curation among Web 2.0 websites.
By 2009, Aylo's three largest pornographic sites, RedTube, YouPorn and PornHub, collectively had 100 million unique visitors.
In June 2015, Pornhub announced that it was going to make a pornographic film featuring real-life sex in space, named Sexplorations. The site hoped to launch the mission and shoot the movie in 2016, covering the pre- and post-production costs itself, but sought $3.4 million from IndieGogo crowdfunders. If funded, the film would have been slated for a 2016 release, following six months of training for the two performers and six-person crew.
On 1 February 2016, Pornhub launched an online casino powered by Betsoft, Endorphina, and 1x2gaming.
In October 2017, vice president Corey Price announced that Pornhub would use computer vision and artificial intelligence software to identify and tag videos on the website with information about the performers and sex acts. Price said the company planned to scan its entire library beginning early 2018.
On 17 April 2018, the site began accepting Verge cryptocurrency as a payment option.
In December 2020, following a column in The New York Times by Nicholas Kristof that was critical of the company, payment processors Mastercard and Visa cut their services to Pornhub. Pornhub then removed all videos by unverified users.
Non-consensual pornography
Incidents have been reported of Pornhub hosting child pornography, revenge porn, and rape pornography. The company has been criticized for slow or inadequate responses to these incidents.
Pornhub employs Vobile to search for uploads of banned videos to remove them from the site, and non-consensual content or personally identifiable information present on Pornhub can be reported to the company via an online form. Pornhub has been criticized for its response to non-consensual pornography and sex trafficking. Journalists at Vice commented that Pornhub profits from "content that's destroyed lives, and continues to do harm". Slate said that the move reflected a larger trend of Internet platforms using verification to classify sources.
In 2009, a 14-year-old girl was gang raped at knifepoint, and videos of the assault were uploaded to Pornhub. The girl stated that she emailed Pornhub repeatedly over a period of six months but received no reply. After she impersonated a lawyer, the videos were removed. Another case in October 2019 involved a man who faced charges of lewd and lascivious battery of a 15-year-old girl, videos of which were discovered on Pornhub, Modelhub, Periscope, and Snapchat and led to his arrest. In another incident of non-consensual pornography, the UK-based activist group Not Your Porn was founded by the friend of a woman whose iCloud storage had been hacked, leading to the hacker posting sexually explicit photos and videos on Pornhub alongside her full name. Pornhub removed the video when reported, but clones of the video using her full name replicated faster than the videos were removed. The woman found that "the fractured communication system at Pornhub has meant this has become an increasingly excruciating process". The founder of Not Your Porn reported that fifty women contacted her over a six-month period about non-consensual online pornography featuring them, thirty of whom reported that the videos were uploaded to Pornhub.
On 10 October 2019, the two owners of GirlsDoPorn along with two employees were arrested on three counts of sex trafficking by force, fraud, and coercion, after a civil lawsuit was filed in July. A week afterwards, the official verified GirlsDoPorn channel – the 20th-largest channel at the time – was removed from the site. The delayed response was criticized by journalists at Daily Dot and Motherboard. Additionally, the videos could still be found afterwards unofficially on Pornhub's website. In December 2020, MindGeek, Pornhub's parent company, was sued in California for hosting non-consensual videos produced by GirlsDoPorn, which coerced women into appearing in their videos under false pretenses. In January 2021, a class action lawsuit making similar claims was launched in Montreal. The Canadian-proposed class action sought $600 million for anyone who had intimate photos and videos, some of which may have been taken when they were underage, shared on MindGeek's sites without their consent, since 2007. In June 2021, 34 women sued MindGeek in a California federal court, alleging that the company had exploited them and hosted and promoted videos that depicted rape, revenge porn, and child sexual abuse.
The Internet Watch Foundation (IWF) found 118 instances of child sexual abuse material on Pornhub between 2017 and 2019. Pornhub rapidly removed this content. An IWF spokesperson said that other social networks and communication tools posed more of an issue than Pornhub in regard to this type of content. In 2020, the National Center for Missing & Exploited Children reported that over 20 million reports of child sexual abuse material related to content on Facebook, accounting for 95% of total reports, and that Pornhub and other MindGeek sites were the subject of only 13,000 reports.
In response to abusive content on the site, an online petition calling for the shutdown of Pornhub gained over one million signatures throughout 2020. The petition was started by Laila Mickelwait, Director of Abolition at Exodus Cry, a Christian anti-trafficking and anti-sex-work non-profit. Her petition was addressed to the executives of MindGeek, the parent company of Pornhub. It noted numerous instances of non-consensual and child abuse material on the website, including a child trafficking victim who was made a "verified model" by the site. In response to the petition, Pornhub claimed they were committed to removing such material from the site.
In December 2020, Nicholas Kristof's opinion column in The New York Times described Pornhub as a company that "monetizes child rapes, revenge pornography, spy cam videos of women showering, racist and misogynist content, and footage of women being asphyxiated in plastic bags." In response to the column, Pornhub announced it would prevent video uploads from unverified users and would disable video downloads. Visa and Mastercard also announced they would review their financial ties to Pornhub. On 10 December 2020, Mastercard and Visa blocked use of their cards on Pornhub. Pornhub told the New York Times that these claims were "irresponsible and flagrantly untrue". Performer Siri Dahl expressed criticism that Visa and Mastercard's actions victimized pornographic performers, while Pornhub continued to make most of its money through banner ads.
On 14 December 2020, Pornhub announced that all videos posted by unverified users had been removed from public access "pending verification and review". This reduced the number of videos on the website from 13 million to 4 million. In Brazil, according to Clayton Nunes, CEO of Brasileirinhas, the result of this action showed that the people who upload non-consensual pornography to Pornhub are the same people who upload pirated pornography. Following its ban on unverified users posting videos, Pornhub released a blog post comparing its opponents to the "same forces that have spent 50 years demonizing Playboy, the National Endowment for the Arts, sex education, LGBTQ rights, women's rights, and even the American Library Association".
In April 2021, Vice reported that individuals tied to far-right and Christian fundamentalist groups, which claim to be anti-trafficking and anti-pornography activists, disseminated disinformation and made death threats towards Pornhub's staff and sex workers.
A 2023 documentary, Money Shot: The Pornhub Story, covered the opposition to Pornhub and the views of pornographic performers. It featured interviews with Kristof, a lawyer representing women suing MindGeek and a spokesperson for the anti-sex-trafficking group National Center on Sexual Exploitation.
In 2023, a tool developed by Meta Platforms—Take It Down—was released. Participating platforms—including Pornhub—agree to remove non-consensual images or videos that users flag with the tool. Other participants include OnlyFans, Facebook, Yubo, and Instagram. The program relies on users uploading hashes of images and cannot identify edited versions of the image.
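The limitation on edited versions follows from how exact hashing works: any change to a file yields a completely different digest. A minimal sketch of this property (Take It Down's actual hashing scheme is not specified here; SHA-256 is used purely for illustration):

```python
# Why exact-hash matching cannot recognise edited copies: any change to
# the bytes produces an unrelated digest, so a database of hashes of the
# original file will not match a cropped or re-encoded version.
import hashlib

original = b"...image bytes..."
edited = b"...image bytes, slightly cropped..."

print(hashlib.sha256(original).hexdigest()[:16])
print(hashlib.sha256(edited).hexdigest()[:16])
# The two digests share no structure, despite the files being near-identical.
```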
Non-pornographic content
Pornhub users have often uploaded non-pornographic content to the site, including posts of Hollywood films (believing copyright holders would be less likely to look for uploads on Pornhub relative to mainstream video-sharing services such as YouTube), to monetize content deemed ineligible for monetization, or as memes and jokes. These videos often have double entendre titles resembling porn films. Examples include a pirated recording of the musical Hamilton listed as "Revolutionary Boys Get Dirty on American Politics"; a clip from the animated film Puss in Boots listed as "Hardcore Pussy Gets Wrecked"; highlight compilations of esports events tagged as a "gangbang"; and Ryan Creamer videos featuring comedic clips with sexual titles.
In March 2020, Pornhub premiered Leilah Weinraub's documentary Shakedown, which chronicles a black lesbian strip club of the same name in Los Angeles. The film streamed on the service throughout March, before being released via Criterion Channel. Brand director Alex Klein stated that the film's premiere on Pornhub was part of "a larger general commitment Pornhub has to supporting the arts."
Copyright infringement claims
In 2010, Mansef Inc. and Interhub, the then-owners of Pornhub, were sued by Ventura Content, the copyright holding company of the pornographic film production company Pink Visual, for the copyright infringement of 95 videos on websites including Pornhub, Keezmovies, Extremetube, and Tube8. According to Ventura Content, the videos were streamed "tens of millions of times", and they claimed the piracy threatened the "entire adult entertainment industry". The suit was settled in October 2010, with terms that remain confidential. The parties agreed that the site operators would implement digital fingerprint filtering on their sites. Porn 2.0 sites such as these are seen as posing notable competition for paid pornographic websites and traditional magazine and DVD-based pornography.
In July 2021, Pornhub launched Classic Nudes, an interactive guide of classic art from major institutions, as a means to help museums recover from the financial toll of the pandemic. However, The Louvre, Uffizi Gallery, and Museo del Prado sued Pornhub for copyright infringement, claiming that the museums had never "granted authorizations for the operation or use of the art."
Malvertising
In 2013, researcher Conrad Longmore found that advertisements displayed by popular porn websites contained malware programs, which install harmful files on users' machines without their permission. Longmore told the BBC that of pornography websites, Pornhub and xHamster posed the greatest threat.
In 2017, security firm Proofpoint discovered malicious ads running on the site that had the potential to install override software on users' PCs. The ads had been promoted on the site for over a year without intervention from Pornhub.
Products
Pornhub features virtual reality videos that allow 360° viewing for premium customers. It can be used with the PlayStation VR, though videos need to be downloaded from a computer and transferred via USB.
In 2015, Pornhub announced a planned wearable device called the "Wankband"—a wristband that stores kinetic energy during male masturbation and can then be used to charge devices. Pornhub's website says that the product is in development.
VPNHub
In May 2018, Pornhub launched a VPN service known as VPNHub, a free service that offered a paid ad-free version. VPNHub operated out of Cyprus and was built in partnership with US-based AppAtomic, using servers located in the US. According to TechRadar, VPNHub operated on the StackPath server network.
VPNHub claimed a no-logging policy, but this has been questioned by a reviewer based on their actual practices surrounding advertiser data.
Philanthropy
Pornhub has hosted events and campaigns to raise awareness of breast cancer. The first of these events took place in New York City on 24 April 2012, with the introduction of the "Boob Bus", which offered free breast exams for passers-by and taught self-examination techniques to use at home. Pornhub hosted a "Save the Boobs!" campaign in August 2012. For every 30 videos viewed in Pornhub's "big tit" or "small tit" category in the month of October, the website offered to donate a penny to the Susan G. Komen Foundation. However, the Susan G. Komen Foundation rejected the offer, stating that it was not a partner of Pornhub, would not accept its donations, and asked the company to stop using the foundation's name. Video views totaled 74,146,928, equaling approximately $24,716 worth of donations, which Pornhub subsequently tripled to $75,000. Donations were split amongst several charities, including the Eileen Stein Jacoby Fund and Cancer Sucks Inc.
For Arbor Day 2014, Pornhub launched a weeklong environmental campaign called "Pornhub Gives America Wood", which started on 25 April 2014 and ended on 2 May 2014.
Pornhub Awards
The inaugural Pornhub Awards was held on 6 September 2018 at the Belasco Theater in Los Angeles. Kanye West was creative director. At the event, West debuted the music video for his song, "I Love It". The second annual show was held at the Orpheum Theatre in Los Angeles on 11 October 2019 and Bad Bunny headlined the event. The third show was held online on 15 December 2020 and hosted by Asa Akira. The fourth show did away with a ceremony and announced winners on 23 March 2022. The fifth show announced winners on 20 April 2023. The sixth show announced winners on 29 March 2024.
Search trends
Under the heading of Pornhub Insights, Pornhub regularly releases information extracted from its archive of searches: in what regions it is most used, female searches vis-à-vis male searches, the most popular search terms by year or area, variations in searches that parallel current events, and the like; in the first half of 2017, the top search term in the US was "hentai", and 37% of searchers for gay male porn are women. Every year it releases a "Year In Review". Consequently, it has been called "the Kinsey Report of our time". According to research by data scientist Seth Stephens-Davidowitz, 25% of female searches for heterosexual porn on Pornhub involved keywords searching for painful, humiliating, or non-consensual sex.
Pornhub has also reported on traffic trends and their relation to large events. Traffic was below usual levels during the solar eclipse of 21 August 2017. During the 2018 Hawaii false missile alert, web traffic to Pornhub in Hawaii fell by 77% (from typical Saturday figures) at 8:23 a.m., after the alert was sent, and rose to 48% above typical levels at 9:01 a.m., after notification that the alert was erroneous.
The COVID-19 pandemic led to increased searches for pornographic content on the internet, with Pornhub being the most searched site. One study analyzed data from Google Trends and Pornhub Insights to understand the behavior of users during the pandemic. The results showed a peak of 24.4% more searches for pornographic audiovisual material in 2020 compared to 2019 across the same dates (1 March – 30 April) internationally. The increase in pornography searches appears to have been conditioned by the marketing campaign through which Pornhub offered free Premium content to encourage citizens to stay at home. The study also found a correlation between the percentage increases in searches for pornography made by men and by women during the pandemic, as well as by age.
Usage statistics
Pornhub reported that for the year 2016, the website was visited about 23 billion times, and viewers watched a cumulative total of about 4.6 billion hours of pornographic videos online. In 2017, Pornhub reportedly registered 28.5 billion visits, with an average of 81 million visits per day. "MILF" and "stepmom" were the two most searched terms worldwide. During 2019, Pornhub received 42 billion visits with an average of 115 million visits per day. The most searched-for genres on Pornhub in 2019 were lesbian, hentai, fauxcest, milf, big ass, and creampie.
Blocks and bans
Authorities and organizations throughout the world have implemented a variety of measures and strategies to restrict access to and use of Pornhub.
2011: European broadband provider TalkTalk (formerly Tiscali) received criticism over child internet safety because its internet filter failed to block Pornhub for over a week.
January 2013: The Huffington Post commented that CBS "refused to air a short commercial for adult-themed site Pornhub during the Super Bowl on Sunday ... . The 20-second spot, which features an older couple sitting on a park bench (that's really all that happens), includes no explicit content." It was rejected because the Federal Communications Commission could hold CBS liable for endorsing pornographic content, as it is illegal to air pornography on US television.
September 2013: the website was blocked by the Great Firewall in China.
12 March 2014: Pornhub was blocked in Russia because one actress looked too young, leading some viewers to think she was a minor.
September 2016: the site was blocked in Russia for "spreading harmful information to children" and reinstated in April 2017 after requiring users to specify their ages. The site demands Russian users authenticate themselves via the social network VK.
January 2017: the Government of the Philippines blocked internet users from accessing Pornhub and other pornography websites. The websites were blocked pursuant to Republic Act 9775 or the Anti-Child Pornography Law, which prohibits websites from hosting child pornography content.
October 2018: the Uttarakhand High Court reinstated a ban on Pornhub in India but made it optional for ISPs to leave sites that are free of child pornography unblocked. In order to circumvent the ban, Pornhub established a mirror website at Pornhub.net.
November 2020: the government of Thailand blocked Pornhub, amongst other pornography websites.
3 September 2022: Instagram banned the website's Instagram account indefinitely. The account, which posted non-pornographic material, had attracted 13 million followers. The ban followed lobbying by the National Center on Sexual Exploitation and others.
16 December 2022: Pornhub's YouTube account was taken down, only a few days after it was removed from TikTok.
May 2023: Utah passed SB287, which required age verification and ID requirements for adult sites. Pornhub blocked user access from Utah, two days before the law came into effect. Instead of the full site, Utah users would see a message criticizing the legislation.
February 2024: the Attorney General of the state of Texas sued Aylo/Pornhub for allegedly not obeying the state's legal age verification law. As of March 2024, Pornhub and other Aylo-owned websites have blocked access in Texas, due to the adoption of an age verification law which usually mandates age verification through the use of an identity document. In states where Pornhub is blocked (which also include Alabama, Arkansas, Louisiana, Mississippi, Montana, North Carolina, Utah, Virginia, Florida, Nebraska, Indiana, Kansas, Idaho, Kentucky, and Oklahoma), a message is displayed featuring pornstar Cherie DeVille criticizing such laws.
In popular culture
Pornhub made a prominent appearance in many scenes of the 2013 American romantic comedy film Don Jon. Pornhub Vice President Corey Price explained that one of the film's producers approached the company in March 2012, seeking permission to use the Pornhub brand. Price reviewed the movie's script and granted them permission, going as far as helping them find clips to use in the movie from Pornhub's content partners (e.g. Brazzers, Mofos, Digital Playground, and Twistys). Joseph Gordon-Levitt, director and actor in the film, edited the clips together into rapid-fire montages, also featured prominently in the film.
Pornhub Community intro
In recent years, a three-second drum and bass jingle which plays at the start of Pornhub Community (amateur) videos has become a cultural phenomenon, with Pornhub executives acknowledging its reach. In 2020, a video of a student playing the jingle with his band at a talent show went viral. In 2021, a TikTok trend went viral, where one plays the Pornhub community intro and gauges reactions to catch unexpected people who visit the website. According to some feminist activists, these reactions reflect the over-consumption of pornography across all social categories. The community jingle is also played by viewers during Twitch streams to elicit similar reactions.
| Technology | Multimedia | null |
39459842 | https://en.wikipedia.org/wiki/African%20village%20dog | African village dog | African village dogs are dogs found in Africa that are directly descended from an ancestral pool of indigenous dogs. African village dogs became the close companion of people in Africa, beginning in North Africa and spreading south.
Middle Eastern origins
The oldest dog remains found in Africa date to 5,900 years before present (YBP) and were discovered at the Merimde Beni-Salame Neolithic site in the Nile Delta, Egypt. The next oldest remains date to 5,500 YBP and were found at Esh Shareinab on the Nile in Sudan. This suggests that the dog arrived from Asia at the same time as domestic sheep and goats. The dog then spread from north to south through Africa beside livestock herders, with remains found in archaeological sites dated 925–1,055 YBP at Ntusi in Uganda, dated 950–1,000 YBP at Kalomo in Zambia, and then at sites south of the Limpopo River and into southern Africa.
Genetic diversity
In 2009, a genetic study of African village dogs found that these were genetically distinct from the non-native and mixed-breed dogs. The village dogs of Africa were a mosaic of native dogs that arrived early into Africa, and non-native mixed breed dogs. The Basenji clustered with the indigenous dogs, but the Pharaoh Hound and the Rhodesian Ridgeback were predominantly of non-African origin.
Local variations
There are different types of African village dogs:
Avuvi: a pariah-type village dog from Ghana
Baganda Dog: a Lurcher-like large-game hunting dog from Uganda, named after the Baganda nation.
Bagirmi Dog: a large dog with piebald colour, named after the Baguirmi Department of Chad.
Cameroon Dog: a hunting dog from West Africa, of medium size and primitive type, with erect ears, long legs and short coat, often piebald in colour, named after Cameroon.
East African Dog: a hunting dog from Kenya, large in size.
Hahoawu: a "clean" medium-sized (11 to 14 kg) watch dog from Togo, with keen eyesight and a coat of fawn or red colour, well adapted to city life, named after the Haho river.
Liberian Dog (a.k.a. Liberian Terrier): a terrier-like dog from West Africa, small and reddish-brown, named after Liberia.
Madagascar Hunting Dog: a hunting dog from Madagascar.
Manboutou Dog: a local variant of the Nyam Nyam kept by the Mangbetu nation of the Democratic Republic of the Congo.
Nyam Nyam (a.k.a. Zande Dog): a small hunting dog from Central Africa with erect ears, a curly tail and a short coat of fawn colour, thought to be similar or related to the Basenji, named after the Zande nation.
Nkita (a.k.a. Kare, Ekuke): a slim, mixed Nigerian breed of dog, often brown or tan with erect ears; they are often used by farmers.
Simaku: a ratter from South Africa, also used for cleaning yards (by scavenging waste), developed by crossing pariah dogs with terriers.
Sudan Greyhound: an extinct hare-hunting dog from Sudan.
West African Mouse Dog: an extinct small (36 cm) Doberman Pinscher-like ratter, with a short, smooth and red coat.
Zulu Dog: a small guard and hunting dog with a square muzzle and a fawn coat, named after the Zulu nation.
Moreover, it is debatable whether the following breeds also belong or belonged to "African village dogs".
African Hairless Dog: a probably extinct hairless dog.
Bisharin Greyhound: a hare-hunting dog from Sudan, with erect ears and a curly tail, named after the Bishari nation.
Dinka Greyhound: a Greyhound-like pariah hunting dog from Sudan, of a rougher type than the other Sudanese breeds, with a short, fawn coat, named after the Dinka nation.
Egyptian Hairless Dog: an extinct hairless dog, a close relative of or perhaps even the same breed as the African Hairless Dog, small in size (41 cm), with drooping ears.
Shilluk Greyhound (a.k.a. Shilluk Dog): an antelope-hunting dog with a robust body and semi-erect (folded) ears, usually of red colour with a black mask, named after the Shilluk nation.
Zanzibar Greyhound (a.k.a. Zanzibar Dog): a large (68 cm) hunting dog from Zanzibar, with erect ears, a robust body and a red-white colour, believed to have been developed by crossing Salukis with pariah dogs.
| Biology and health sciences | Dogs | Animals |
53718600 | https://en.wikipedia.org/wiki/Rayleigh%20problem | Rayleigh problem | In fluid dynamics, the Rayleigh problem, also known as Stokes' first problem, is the problem of determining the flow created by a sudden movement of an infinitely long plate from rest, named after Lord Rayleigh and Sir George Stokes. It is considered one of the simplest unsteady problems that has an exact solution for the Navier-Stokes equations. The impulsive motion of a semi-infinite plate was studied by Keith Stewartson.
Flow description
Consider an infinitely long plate which is suddenly made to move with constant velocity $U$ in the $x$ direction, located at $y = 0$ in an infinite domain of fluid which is initially at rest everywhere. The incompressible Navier-Stokes equations reduce to

$\frac{\partial u}{\partial t} = \nu \frac{\partial^2 u}{\partial y^2}$

where $\nu$ is the kinematic viscosity. The initial condition and the no-slip condition on the wall are

$u(y, 0) = 0, \qquad u(0, t) = U, \qquad u(y \to \infty, t) = 0;$

the last condition is due to the fact that the motion at $y = 0$ is not felt at infinity. The flow is due only to the motion of the plate; there is no imposed pressure gradient.
Self-similar solution
The problem as a whole is similar to the one-dimensional heat conduction problem. Hence a self-similar variable can be introduced:

$\eta = \frac{y}{2\sqrt{\nu t}}, \qquad u = U f(\eta).$

Substituting this into the partial differential equation reduces it to an ordinary differential equation,

$f'' + 2\eta f' = 0,$

with boundary conditions

$f(0) = 1, \qquad f(\eta \to \infty) = 0.$

The solution to the above problem can be written in terms of the complementary error function:

$u(y, t) = U \operatorname{erfc}\!\left(\frac{y}{2\sqrt{\nu t}}\right).$

The force per unit area exerted on the plate has magnitude

$|F| = \mu\left|\frac{\partial u}{\partial y}\right|_{y = 0} = \frac{\mu U}{\sqrt{\pi \nu t}} = \rho U \sqrt{\frac{\nu}{\pi t}}.$
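A quick numerical sanity check of the solution above can be done in a few lines of Python; the sketch below (with arbitrary illustrative values for $U$, $\nu$ and $t$, none taken from the article) evaluates the erfc profile and verifies by finite differences that it satisfies the reduced momentum equation.

```python
import numpy as np
from scipy.special import erfc

# Illustrative values (assumptions, not from the article)
U, nu, t = 1.0, 1e-3, 2.0

def u(y, tt):
    """Stokes/Rayleigh profile u = U * erfc(y / (2*sqrt(nu*t)))."""
    return U * erfc(y / (2.0 * np.sqrt(nu * tt)))

y = np.linspace(1e-3, 0.5, 2001)
dy, dt = y[1] - y[0], 1e-5

# Finite-difference residual of du/dt = nu * d2u/dy2 (should be ~0)
dudt = (u(y, t + dt) - u(y, t - dt)) / (2.0 * dt)
d2udy2 = (u(y + dy, t) - 2.0 * u(y, t) + u(y - dy, t)) / dy ** 2
print("max PDE residual:", np.abs(dudt - nu * d2udy2).max())

# Wall shear magnitude mu*U/sqrt(pi*nu*t), taking rho = 1 so that mu = rho*nu
print("wall shear:", 1.0 * nu * U / np.sqrt(np.pi * nu * t))
```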
Arbitrary wall motion
Instead of using a step boundary condition for the wall movement, the velocity of the wall can be prescribed as an arbitrary function of time, i.e., $u(0, t) = U(t)$. Then, by Duhamel superposition of the impulsive solution, the velocity field is

$u(y, t) = \frac{y}{2\sqrt{\pi\nu}} \int_0^t \frac{U(\tau)}{(t - \tau)^{3/2}} \exp\!\left(-\frac{y^2}{4\nu(t - \tau)}\right) \mathrm{d}\tau.$
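As a rough illustration of the superposition formula, the sketch below numerically evaluates the integral for an assumed sinusoidal wall motion; the values of $U_0$, $\omega$, $\nu$ and the sample heights are illustrative choices, not quantities from the article.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative values (assumptions): viscosity, and amplitude/frequency of
# an assumed sinusoidal wall motion U(t) = U0*sin(omega*t)
nu, U0, omega = 1e-3, 1.0, 2.0 * np.pi

def U_wall(tau):
    return U0 * np.sin(omega * tau)

def u(y, t):
    """Duhamel integral for arbitrary wall motion with zero initial flow."""
    kern = lambda tau: (U_wall(tau) / (t - tau) ** 1.5
                        * np.exp(-y ** 2 / (4.0 * nu * (t - tau))))
    val, _ = quad(kern, 0.0, t, limit=200)
    return y / (2.0 * np.sqrt(np.pi * nu)) * val

t = 0.73
for y in (0.005, 0.01, 0.02, 0.05):
    print(y, u(y, t))                 # profile decays away from the wall
print("wall velocity:", U_wall(t))
```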
Rayleigh's problem in cylindrical geometry
Rotating cylinder
Consider an infinitely long cylinder of radius $a$ which starts rotating suddenly at time $t = 0$ with an angular velocity $\Omega$. Then the velocity in the $\theta$ direction is given by

$v_\theta(r, t) = \Omega a \, \mathcal{L}^{-1}\!\left[\frac{K_1\!\left(r\sqrt{s/\nu}\right)}{s\, K_1\!\left(a\sqrt{s/\nu}\right)}\right]$

where $s$ is the Laplace transform variable and $K_1$ is the modified Bessel function of the second kind. As $t \to \infty$, the solution approaches that of a rigid vortex. The force per unit area exerted on the cylinder is expressed in terms of $I_1$, the modified Bessel function of the first kind.
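The Laplace-space form reconstructed above can be evaluated numerically. The sketch below uses mpmath's numerical inverse Laplace transform; the radius, spin rate and viscosity are illustrative assumptions, and the expression being inverted is the reconstruction given above rather than a formula taken verbatim from the source.

```python
import mpmath as mp

mp.mp.dps = 25                          # working precision
a, Omega, nu = 1.0, 1.0, 1e-2           # illustrative values (assumptions)

def v_theta(r, t):
    """Numerically invert the reconstructed Laplace-space solution
    vbar(r, s) = Omega*a*K1(r*sqrt(s/nu)) / (s*K1(a*sqrt(s/nu)))."""
    F = lambda s: (Omega * a * mp.besselk(1, r * mp.sqrt(s / nu))
                   / (s * mp.besselk(1, a * mp.sqrt(s / nu))))
    return mp.invertlaplace(F, t, method='talbot')

for t in (0.1, 1.0, 10.0):
    print(t, v_theta(1.5 * a, t))       # spin-up of the fluid at r = 1.5a
```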
Sliding cylinder
An exact solution is also available when the cylinder starts to slide in the axial direction with constant velocity $U$. Taking the cylinder axis to be in the $z$ direction, the axial velocity is given by

$u(r, t) = U \, \mathcal{L}^{-1}\!\left[\frac{K_0\!\left(r\sqrt{s/\nu}\right)}{s\, K_0\!\left(a\sqrt{s/\nu}\right)}\right]$

where $K_0$ is the modified Bessel function of the second kind of order zero.
| Physical sciences | Fluid mechanics | Physics |
40798692 | https://en.wikipedia.org/wiki/Trommel%20screen | Trommel screen | A trommel screen, also known as a rotary screen, is a mechanical screening machine used to separate materials, mainly in the mineral and solid-waste processing industries. It consists of a perforated cylindrical drum that is normally elevated at an angle at the feed end. Physical size separation is achieved as the feed material spirals down the rotating drum, where the undersized material smaller than the screen apertures passes through the screen, while the oversized material exits at the other end of the drum.
Summary
Trommel screens can be used in a variety of applications, such as classification of solid waste and recovery of valuable minerals from raw materials. Trommels come in many designs, such as concentric screens and series or parallel arrangements, and each component has several possible configurations. Depending on the required application, trommels have various advantages and limitations compared with other screening processes such as vibrating screens, grizzly screens, roller screens, curved screens and gyratory screen separators.
Some of the main governing equations for a trommel screen include the screening rate, the screening efficiency and the residence time of particles in the screen. These equations can be applied in the rough calculations done in the initial phases of a design process. However, design is largely based on heuristics, so design rules are often used in place of the governing equations. When designing a trommel screen, the main factors affecting the screening efficiency and production rate are the rotational velocity of the drum, the mass flow rate of feed particles, the size of the drum, and the inclination of the trommel screen. Depending on the desired application of the trommel screen, a balance has to be struck between screening efficiency and production rate.
Range of application
Municipal and industrial waste
Trommel screens are used by the municipal waste industry in the screening process to classify sizes of solid waste. Besides that, it can also be used to improve the recovery of fuel-derived solid waste. This is done by removing inorganic materials such as moisture and ash from the air-classified light fraction segregated from shredded solid waste, thereby increasing the quality of the product fuel. In addition, trommel screens are used for the treatment of wastewater. For this particular application, solids from the entering flow will settle onto the screen mesh and the drum will rotate once the liquid reaches a certain level. The clean area of the screen is submerged into the liquid while the trapped solids fall onto a conveyor which will be further processed before removal.
Mineral processing
Trommel screens are also used for the grading of raw materials to recover valuable minerals. The screen segregates minuscule materials that are not of suitable size to be used in the crushing stage. It also helps to remove dust particles, which would otherwise impair the performance of subsequent machinery in downstream processes.
Other applications
Other applications of trommel screens can be seen in the screening of compost as an enhancement technique. Screening selects compost of variable size fractions, removing contaminants and incompletely composted residues and forming end products with a variety of uses. Besides this, the food industry uses trommel screens to sort dry food of different sizes and shapes. The classification process helps to achieve the desired mass or heat transfer rate and to avoid under- or over-processing. It also screens small foods, such as peas and nuts, that are strong enough to resist the rotational force of the drum.
Designs available
One of the available designs of trommel screens is concentric screens, with the coarsest screen located at the innermost section. Trommel screens can also be arranged in parallel, where objects exit one stream and enter the next. A trommel in series is a single drum in which each section has a different aperture size, arranged from the finest to the coarsest.
The trommel screen has many different configurations. For the drum component, an internal screw is fitted when the placement of the drum is flat or elevated at an angle less than 5°. The internal screw facilitates the movement of objects through the drum by forcing them to spiral.
For an inclined drum, objects are lifted and then dropped with the help of lifter bars, moving them further down the drum than if they simply rolled. Furthermore, the lifter bars shake the objects to segregate them. Lifter bars are not used in the presence of heavy objects, as these may break the screen.
As for the screens, perforated plate screens or mesh screens are usually used. Perforated plate screens are rolled and welded for strength. This design contains fewer ridges, which makes the cleaning process easier. Mesh screens, on the other hand, are replaceable, as they are more susceptible to wear and tear than perforated screens. In addition, screen cleaning for this design is more labour-intensive, as objects tend to get wedged in the mesh ridges.
The screen's apertures come in either square or round shapes; the choice is determined by many operating factors, such as:
The required dimension of the undersized product.
The aperture area. A round aperture gives a smaller open area than a square one of the same size.
The magnitude of the agitation of product.
Cleanup of drum.
Advantages and limitations over competitive processes
Vibrating screen
Trommel screens are cheaper to produce than vibrating screens. They are vibration-free, which makes them quieter than vibrating screens. Trommel screens are also more mechanically robust than vibrating screens, allowing them to last longer under mechanical stress.
However, more material can be screened at once with a vibrating screen than with a trommel screen. This is because only part of the trommel's screen area is utilised during the screening process, whereas the entire screen of a vibrating screen is used. Trommel screens are also more susceptible to plugging and blinding, especially when differently sized screen apertures are in series. Plugging occurs when material larger than the aperture becomes stuck or wedged in the apertures and may then be forced through, which is undesirable. Blinding occurs when wet material clumps up and sticks to the surface of the screen. The vibrations in vibrating screens reduce the risk of plugging and blinding.
Grizzly screen
A grizzly screen is a grid or set of parallel metal bars set in an inclined stationary frame. The slope and the path of the material are usually parallel to the length of the bars. The length of the bar may be up to 3 m and the spacing between the bars ranges from 50 to 200 mm. Grizzly screens are typically used in mining to limit the size of material passing into a conveyance or size reduction stage.
Construction
The material of construction of the bars is usually manganese steel to reduce wear. Usually, the bar is shaped in such a way that its top is wider than the bottom, and hence the bars can be made fairly deep for strength without being choked by lumps passing partway through them.
Working
A coarse feed (say from a primary crusher) is fed at the upper end of the grizzly. Large chunks roll and slide to the lower end (tail discharge), while small lumps having sizes less than the openings in the bars fall through the grid into a separate collector.
Roller screen
Roller screens are preferred to trommel screens when the required feed rate is high. They also produce less noise than trommel screens and require less head room. Viscous and sticky materials are easier to separate with a roller screen than with a trommel screen.
Curved screen
Curved screens are able to separate finer particles (200–3000 μm) than trommel screens. However, blinding may occur if the particle size is less than 200 μm, which affects the separation efficiency. The screening rate of a curved screen is also much higher than that of a trommel screen, as the whole surface area of the screen is utilised. Furthermore, for curved screens, the feed flows parallel to the apertures. This allows any loose material to break away from the jagged surface of the larger materials, resulting in more undersized particles passing through.
Gyratory screen separators
A gyratory separator can separate finer particle sizes (down to about 40 μm) than a trommel screen. The size of the gyratory screen separator can be adjusted through removable trays, whereas the trommel screen is usually fixed. Gyratory separators can also separate dry and wet materials, like trommel screens. However, it is common for gyratory separators to separate either dry or wet materials only, because different operating parameters give the gyratory screen its best separation efficiency in each case. Therefore, two separators would be required for the separation of dry and wet materials, while one trommel screen would be able to do the same job.
Main process characteristics
Screening rate
One of the main process characteristics of interest is the screening rate of the trommel. The screening rate is related to the probability of undersized particles passing through the screen apertures upon impact. Based on the assumption that the particle falls perpendicularly onto the screen surface, the probability of passage, $P$, is simply given as

$P = f_o \left(1 - \frac{d}{D}\right)^2$

where $d$ refers to the particle size, $D$ refers to the size of the aperture (diameter or side length) and $f_o$ refers to the ratio of the open aperture area to the total screen area. This form holds for both square and circular apertures. However, for rectangular apertures, the equation becomes:

$P = f_o \left(1 - \frac{d}{L_1}\right)\left(1 - \frac{d}{L_2}\right)$

where $L_1$ and $L_2$ refer to the rectangular dimensions of the aperture. After determining the probability of passage of a given size interval of particles through the screen, the fraction of particles remaining on the screen, $R$, can be found using:

$R = (1 - P)^m$

where $m$ is the number of impingements of the particles on the screen. After making the assumption that the number of impingements per unit time, $\dot m$, is constant, this becomes:

$R = (1 - P)^{\dot m t}.$

An alternative way of expressing the fraction of particles remaining on the screen is in terms of the particle weight:

$R = \frac{w}{w_0}$

where $w$ is the weight of a given size interval of particles remaining on the screen at any given time and $w_0$ is the initial weight in the feed. Therefore, combining the last two expressions, the screening rate can be expressed as:

$\frac{\mathrm{d}w}{\mathrm{d}t} = \dot m \ln(1 - P)\, w, \qquad w = w_0 (1 - P)^{\dot m t}.$
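As a worked example of these relations, the following sketch computes the passage probability for one particle size and the fraction retained over time; all parameter values are illustrative assumptions, not data from the article.

```python
import numpy as np

# Illustrative values (assumptions): particle size d, aperture D,
# open-area ratio fo, impingement rate m_dot (impacts per second)
d, D, fo, m_dot = 2.0e-3, 8.0e-3, 0.6, 5.0

P = fo * (1.0 - d / D) ** 2          # probability of passage per impact
t = np.linspace(0.0, 10.0, 6)
R = (1.0 - P) ** (m_dot * t)         # fraction still retained on the screen

for ti, Ri in zip(t, R):
    print(f"t = {ti:4.1f} s  retained fraction = {Ri:.4f}")
```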
Separation efficiency
Screening efficiency can be calculated from a mass balance over the screen in the following way:

$E = \frac{c(f - u)(1 - u)(c - f)}{f(c - u)^2(1 - f)}$

where $f$, $c$ and $u$ are the mass fractions of undersized material in the feed, the stream passing through the screen, and the retained coarse stream, respectively.
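A minimal sketch of this mass-balance formula follows; the mapping of $f$, $c$ and $u$ to the feed, through and coarse streams is the conventional two-product screen interpretation and is an assumption here, as are the example fractions.

```python
def screen_efficiency(f, c, u):
    """Overall screen efficiency from the mass-balance formula above.
    Assumed convention (standard two-product screen interpretation):
    f, c, u = mass fractions of undersize material in the feed, the
    stream passing through the screen, and the retained coarse stream."""
    return (c * (f - u) * (1 - u) * (c - f)) / (f * (c - u) ** 2 * (1 - f))

# Example: feed is 50% undersize; the through stream is 95% undersize and
# the coarse stream still carries 5% undersize
print(screen_efficiency(f=0.5, c=0.95, u=0.05))   # ~0.90
```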
Apart from the screening rate, another characteristic of interest is the separation efficiency of the trommel screen. Assuming that the size distribution function of the undersized particles to be removed, $n(d)$, is known, the cumulative probability of all particles ranging in size from $d_1$ to $d_2$ that are separated after $m$ impingements is simply:

$\int_{d_1}^{d_2} \left[1 - (1 - P(d))^m\right] n(d)\, \mathrm{d}d.$

Furthermore, the total number fraction of particles within this size range in the feed can be expressed as follows:

$\int_{d_1}^{d_2} n(d)\, \mathrm{d}d.$

Therefore, the separation efficiency, which is defined as the ratio of the fraction of particles removed to the total fraction of particles in the feed, can be determined as follows:

$\eta = \frac{\int_{d_1}^{d_2} \left[1 - (1 - P(d))^m\right] n(d)\, \mathrm{d}d}{\int_{d_1}^{d_2} n(d)\, \mathrm{d}d}.$
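The efficiency integral can be evaluated numerically once a size distribution is assumed. The sketch below uses an assumed log-normal-like number distribution together with the passage probability from the previous section; every parameter value is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import trapezoid

# Illustrative aperture geometry and impingement count (assumptions)
D, fo, m = 8.0e-3, 0.6, 30

def P(d):
    """Passage probability per impact for particle size d (< aperture D)."""
    return fo * (1.0 - d / D) ** 2

# Assumed log-normal-like number distribution of undersize particle sizes
d = np.linspace(1e-4, 0.999 * D, 2000)
n = np.exp(-np.log(d / 2e-3) ** 2 / 0.5) / d

removed = trapezoid((1.0 - (1.0 - P(d)) ** m) * n, d)
total = trapezoid(n, d)
print("separation efficiency:", removed / total)
```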
There are a number of factors that affect the separation efficiency of the trommel, which include:
Speed of rotation of the trommel screen
Feed rate
Residence time in the rotating drum
Angle of inclination of drum
Number and size of screen apertures
Characteristics of the feed
Residence time in the screen
Two simplifying assumptions are made in the equation presented in this section for the residence time of materials in a rotating screen. First, it is assumed that there is no slippage of particles on the screen. Second, the particles dislodging from the screen are in free fall. When the drum rotates, particles are kept in contact with the rotating wall by centrifugal force. As the particles reach near the top of the drum, the gravitational force acting in the radial direction overcomes the centrifugal force, causing the particles to fall from the drum in a cataracting motion. The force components acting on a particle at the point of departure are illustrated in Figure 6.
The departure angle $\alpha$ can be determined through a force balance between the centrifugal force and the radial component of gravity, which gives

$\cos\alpha = \frac{r\omega^2}{g\cos\beta}$

where $r$ is the drum radius, $\omega$ is the rotational velocity in radians per second, $g$ is the gravitational acceleration and $\beta$ is the angle of inclination of the drum. Hence, the residence time of particles in the rotating screen can be expressed in terms of the screen length $L$, the rotational speed of the screen $N$ in revolutions per minute, and the departure angle $\alpha$ in degrees.
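A small worked example of the force balance follows; the drum radius, speed and inclination are illustrative assumptions. Note that if $r\omega^2/(g\cos\beta)$ reaches 1, no departure angle exists, which corresponds to the centrifuging regime described in the next section.

```python
import numpy as np

# Illustrative drum parameters (assumptions): radius r in m, speed N in rpm,
# inclination beta in degrees
r, N, beta_deg = 0.5, 15.0, 3.0

omega = 2.0 * np.pi * N / 60.0        # rotational velocity in rad/s
ratio = r * omega ** 2 / (9.81 * np.cos(np.radians(beta_deg)))

if ratio >= 1.0:
    # centrifugal force never falls below radial gravity: centrifuging regime
    print("no departure angle: particles centrifuge against the wall")
else:
    alpha = np.degrees(np.arccos(ratio))  # cos(alpha) = r*omega^2/(g*cos(beta))
    print(f"departure angle alpha = {alpha:.1f} degrees")
```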
Design and heuristics
Trommel screens are used widely in industry for their efficiency in material size separation. The trommel screening system is governed by the rotational velocity of the drum, the mass flow rate of feed particles, the size of the drum and the inclination of the trommel screen.
Particle rotational velocity behaviour
Considering mesh sizes of the rotating drum that are larger than the particle sizes, as shown in Figure 7, the particle motion velocity can be broken down into two components: a vertical component $v_v$ and a horizontal component $v_h$. Denoting $\theta$ as the angle between the particle motion and the vertical component, the vertical and horizontal velocities can now be written as:

$v_v = v\cos\theta, \qquad v_h = v\sin\theta.$

When the velocity component directed through the apertures dominates, the particles escape through the mesh in the rotating drum; otherwise, the particles are retained within the rotating drum. Larger granules are retained inside the trommel screen until an aperture of the desired size is met, and they follow the same particle behaviour.
Particle motion mechanisms
With varying rotational velocities, the effect of screening efficiency and production rate varies according to different types of motion mechanisms. These mechanisms include slumping, cataracting and centrifuging.
Slumping
This occurs when the rotational velocity of the drum is low. The particles are lifted slightly from the bottom of the drum before tumbling down the free surface, as shown in Figure 8. As only smaller-sized filter granules near the wall of the trommel body are able to be screened, this results in a lower screening efficiency.
Cataracting
As rotational velocity increases, slumping transitions to cataracting motion where particles detach near the top of the rotating drum as shown in Figure 9. Larger granules segregate near the inner surface due to the Brazil nut effect while smaller granules stay near the screen surface, thereby allowing smaller filter granules to pass through. This motion generates turbulent flow of particles, resulting in a higher screening efficiency compared to slumping.
Centrifuging
As the rotational velocity increases further, cataracting motion will transition to centrifuging motion which will result in a lower screening efficiency. This is due to particles attaching to the wall of the rotating drum caused by centrifugal forces as shown in Figure 10.
Feed flow rate
According to Ottino and Khakhar, increasing the feed flow rate of particles results in a decrease in screening efficiency. Not much is known about why this occurs; however, it is suggested that the effect is influenced by the thickness of filter granules packed in the trommel body.
At higher feed flow rates, only smaller-sized particles in the lower layer of the packed bed are able to be screened at the designated apertures, while the remaining small particles adhere to larger particles. At lower feed rates, it is easier for smaller-sized particles to pass through the granule bed in the trommel system.
Size of the drum
Increasing the area of material exposed to screening allows more particles to be filtered out. Therefore, features that increase the exposed surface area result in a much higher screening efficiency and production rate. The exposed surface area can be increased by:
Increasing the length and diameter of the drum
Increasing the size of the apertures and number of apertures
Reducing the number of gaps/area between the apertures
Using lifting bars to increase spread of particles
Inclination angle of drum
When designing the trommel screen, it should be taken into account that a higher inclination angle results in a higher production rate, due to an increase in particle velocity, $v$, as illustrated in Figure 7. However, this comes at the cost of a lower screening efficiency. Conversely, decreasing the inclination angle results in a much longer residence time of particles within the trommel system, which increases the screening efficiency.
Since screening efficiency is directly proportional to the length of the trommel, a shorter trommel screen would be needed at a smaller inclination angle to achieve a desired screening efficiency. It is suggested that the inclination angle should not be below 2°, because the efficiency and production rate below this point are not well characterised. Below 2°, for a given set of operating conditions, decreasing the inclination angle increases the bed depth, resulting in a lower screening efficiency; at the same time, it increases the residence time, which increases the screening efficiency. It is unclear which effect dominates at inclination angles of less than 2°.
Example of post-treatment
In the wastewater treatment industry, the solids that exit the trommel are compressed and dewatered as they travel along the conveyor. Most often a post-washing treatment, such as a jet wash, is used after the trommel screen to break down faecal and unwanted semi-solid matter. The volume of the solids decreases by up to 40%, depending on their properties, before removal.
| Technology | Metallurgy | null |
36591857 | https://en.wikipedia.org/wiki/Herpes%20simplex%20keratitis | Herpes simplex keratitis | Herpetic simplex keratitis is a form of keratitis caused by recurrent herpes simplex virus (HSV) infection in the cornea.
It begins with infection of epithelial cells on the surface of the eye and retrograde infection of nerves serving the cornea. Primary infection typically presents as swelling of the conjunctiva and eyelids (blepharoconjunctivitis), accompanied by small white itchy lesions on the corneal surface. The effect of the lesions varies, from minor damage to the epithelium (superficial punctate keratitis), to more serious consequences such as the formation of dendritic ulcers. Infection is unilateral, affecting one eye at a time. Additional symptoms include dull pain deep inside the eye, mild to acute dryness, and sinusitis. Most primary infections resolve spontaneously in a few weeks. Healing can be aided by the use of oral and topical antivirals.
Subsequent recurrences may be more severe, with infected epithelial cells showing larger dendritic ulceration and lesions forming white plaques. The epithelial layer is sloughed off as the dendritic ulcer grows, and mild inflammation of the iris (iritis) may occur beneath the underlying stroma. Sensation loss occurs in lesional areas, producing generalised corneal anaesthesia with repeated recurrences. Recurrence can be accompanied by chronic dry eye, low-grade intermittent conjunctivitis, or chronic unexplained sinusitis. Following persistent infection, the concentration of viral DNA reaches a critical limit. Antibody responses against viral antigen expression in the stroma can then trigger a massive immune response in the eye, which may destroy the corneal stroma and cause loss of vision due to opacification of the cornea. This is known as immune-mediated stromal keratitis.
HSV infection is very common in humans. It has been estimated that one third of the world population have recurrent infection. Keratitis caused by HSV is the most common cause of cornea-derived blindness in developed nations. Therefore, HSV infections are a large and worldwide public health problem. The global incidence (rate of new disease) of herpes keratitis is roughly 1.5 million, including 40,000 new cases of severe monocular visual impairment or blindness each year.
Signs and symptoms
Primary infection
Primary infection most commonly manifests as blepharoconjunctivitis, i.e. infection of the lids and conjunctiva, which heals without scarring. Lid vesicles and conjunctivitis are seen in primary infection. Corneal involvement is rarely seen in primary infection.
Recurrent eye infection
Recurrent herpes of the eye is caused by reactivation of the virus in a latently infected sensory ganglion, transport of the virus down the nerve axon to sensory nerve endings, and subsequent infection of ocular surface.
The following classification of herpes simplex keratitis is important for understanding this disease:
Dendritic ulcer (Epithelial keratitis)
This classic herpetic lesion consists of a linear branching corneal ulcer (dendritic ulcer). During an eye exam, the defect is examined after staining with fluorescein dye. The underlying cornea has minimal inflammation.
Patients with epithelial keratitis complain of foreign-body sensation, light sensitivity, redness and blurred vision.
Focal or diffuse reduction in corneal sensation develops following recurrent epithelial keratitis.
In immunodeficient patients, or with the use of corticosteroids, the ulcer may become large; in these cases it is called a geographic ulcer.
Disciform keratitis (Endothelial keratitis)
Endothelial keratitis manifests as a central endothelitis in a disc-shaped pattern. Longstanding corneal edema leads to permanent scarring and is the major cause of decreased vision associated with HSV.
Localized endothelitis (localized inflammation of corneal endothelial layer) is the cause of disciform keratitis.
Other forms
Metaherpetic ulcer: not due to live virus; it results from the inability of the corneal surface to heal.
Necrotizing keratitis
Keratouveitis: usually a granulomatous uveitis with large keratic precipitates on the corneal endothelium.
Cause
HSV is a double-stranded DNA virus with an icosahedral capsid. HSV-1 infections are found more commonly in the oral area and HSV-2 in the genital area. Ocular herpes simplex is usually caused by HSV-1.
Diagnosis
A specific clinical diagnosis of HSV as the cause of dendritic keratitis can usually be made by ophthalmologists and optometrists based on the presence of characteristic clinical features. Diagnostic testing is seldom needed because of its classic clinical features and is not useful in stromal keratitis as there is usually no live virus. Laboratory tests are indicated in complicated cases when the clinical diagnosis is uncertain and in all cases of suspected neonatal herpes infection:
Corneal smears or impression cytology specimens can be analyzed by culture, antigen detection, or fluorescent antibody testing. A Tzanck smear, i.e. Papanicolaou staining of corneal smears, shows multinucleated giant cells and intranuclear inclusion bodies; however, the test has low sensitivity and specificity.
DNA testing is rapid, sensitive and specific. However, its high cost limits its use to research centers.
Demonstration of HSV is possible with viral culture.
Serologic tests may show a rising antibody titer during primary infection but are of no diagnostic assistance during recurrent episodes.
Treatment
Treatment of herpes of the eye differs based on its presentation: epithelial keratitis is caused by live virus, stromal disease is an immune response, and metaherpetic ulcer results from the inability of the corneal epithelium to heal. For the viral component, aciclovir eye ointment four times daily (q.i.d.) can be used together with a systemic antiviral drug three times daily (t.d.s.) for 10 days:
Epithelial keratitis
Epithelial keratitis is treated with topical antivirals, which are very effective with a low incidence of resistance. Treatment of the disease with topical antivirals generally should be continued for 10–14 days. Aciclovir ophthalmic ointment and trifluridine eye drops have similar effectiveness but are more effective than idoxuridine and vidarabine eye drops. Oral acyclovir is as effective as topical antivirals for treating epithelial keratitis, and it has the advantage of no eye surface toxicity. For this reason, oral therapy is preferred by some ophthalmologists.
Ganciclovir and brivudine treatments were found to be equally as effective as acyclovir in a systematic review.
Valacyclovir, a pro-drug of acyclovir likely to be just as effective for ocular disease, can cause thrombotic thrombocytopenic purpura/Hemolytic-uremic syndrome in severely immunocompromised patients such as those with AIDS; thus, it must be used with caution if the immune status is unknown.
Topical corticosteroids are contraindicated in the presence of active herpetic epithelial keratitis; patients with this disease who are using systemic corticosteroids for other indications should be treated aggressively with systemic antiviral therapy.
The effect of interferon with an antiviral agent or an antiviral agent with debridement needs further assessment.
Stromal keratitis
Herpetic stromal keratitis is treated initially with prednisolone drops every 2 hours, accompanied by a prophylactic antiviral drug: either a topical antiviral or an oral agent such as acyclovir or valacyclovir. The prednisolone drops are tapered every 1–2 weeks depending on the degree of clinical improvement. Topical antiviral medications are not absorbed by the cornea through an intact epithelium, but orally administered acyclovir penetrates an intact cornea and anterior chamber. In this context, oral acyclovir might benefit the deep corneal inflammation of disciform keratitis.
Metaherpetic ulcer
Treatment includes artificial tears and eye lubricants, stopping toxic medications, performing punctal occlusion, bandage contact lens and amniotic membrane transplant. These measures intend to improve corneal epithelial healing.
Antiviral medication may reduce the risk of HSV keratitis recurring in people having a graft due to HSV infection and may improve the chances of graft survival.
| Biology and health sciences | Viral diseases | Health |
38019008 | https://en.wikipedia.org/wiki/Arrowroot | Arrowroot | Arrowroot is a starch obtained from the rhizomes (rootstock) of several tropical plants, traditionally Maranta arundinacea, but also Florida arrowroot from Zamia integrifolia, and tapioca from cassava (Manihot esculenta), which is often labeled arrowroot. Polynesian arrowroot or pia (Tacca leontopetaloides), Palawan arrowroot ("uraro/araro") from the Philippines, Guyana arrowroot (Dioscorea alata), Japanese arrowroot (Pueraria lobata), also called kudzu, and purple arrowroot (Canna indica) are used in similar ways. In Odisha, India, it is called ପାଳୁଅ (paḷua).
History
Archaeological studies in the Americas show evidence of arrowroot cultivation as early as 7,000 years ago. The name may come from aru-aru (meal of meals) in the language of the Caribbean Arawak people, for whom the plant was a staple. It has also been suggested that the name comes from arrowroot's use in treating poison-arrow wounds, as it draws out the poison when applied to the site of the injury.
In the early days of carbonless copy paper, arrowroot, because of its fine grain-size, was a widely used ingredient. After an economical way of centrifugally separating wheat flour was devised, arrowroot lost its role in papermaking.
Uses
Cultivation in Saint Vincent and the Grenadines
Saint Vincent has a long history of arrowroot production. The industry started as the food and medicine of the Carib and Garifuna peoples and developed to the status of a major export of St. Vincent during the period 1900 to 1965. It became an important commodity in colonial trade in the 1930s. As the sugar industry declined in the nineteenth century, cultivation of arrowroot was developed to fill the void. Since then, the area cultivated has declined steadily as other crops, particularly bananas, have gained wider acceptance by farmers. Evidence of its former importance is indicated by the ruins of the various magnificent 19th-century factories located in valleys on the St. Vincent mainland.
Arrowroot cultivation is now concentrated on farms located north of the Rabacca River, particularly in the Owia area, which is also where the population of Carib descent is concentrated. In 1998/99, the industry's starch production was about 3% of its peak level in the 1960s.
In the past, the St. Vincent arrowroot industry played an important role in the economy of the island, contributing close to 50% of the country's foreign export earnings, and was the principal source of employment and income of the rural people from the 1930s to the 1960s.
The plant is propagated from rhizomes and cultivation takes place at elevations up to 300 metres on the eastern and windward facing side of the highlands of St. Vincent. Cultivation covers an area of about 3,700 ha and some 80% of the crop is grown by small farmers. The arrowroot plant is very hardy and not very demanding in its requirements. St. Vincent, particularly the north-east coast, provides the ideal growing conditions for optimal yields: deep, well drained, slightly acidic soils and a hot, humid climate. Some farmers produce the crop by shifting cultivation on the cleared forested slopes.
The harvesting season extends from October to May. On the larger estates, the harvesting of the rhizome usually proceeds from the base of a hill towards the top. Harvesting involves breaking off the rhizome from the shoot. Planting and harvesting are inter-related in that when the rhizomes are harvested, the shoot is replanted at the same time. In St. Vincent, much use is made of rural labour, and many women workers are involved in the various phases of the operation. Mechanical harvesters have recently been introduced, allowing faster arrowroot harvesting.
Six factories process the island's arrowroot and large processing plants are located at Belle Vue and at Owia.
Starch extraction process
Arrowroot tubers contain about 23% starch. They are first washed and then cleaned of the paper-like scale. The scales must be carefully removed before extracting the starch because they impart a disagreeable flavour. After removing the scale, the roots are washed again, drained and finally reduced to a pulp by beating them in mortars or subjecting them to the action of a wheel rasp. The milky liquid thus obtained is passed through a coarse cloth or hair sieve and the pure starch, which is insoluble, is allowed to settle at the bottom. The wet starch is dried in the sun or in a drying house. The result is a powder, the "arrowroot" of commerce, that is quickly packed for market in air-tight cans, packages or cases.
Arrowroot starch has in the past been quite extensively adulterated with potato starch and other similar substances. Pure arrowroot, like other pure starches, is a light, white powder: the mass feels firm to the finger and crackles like newly fallen snow when rubbed or pressed. It is odourless when dry but emits a faint, peculiar odour when mixed with boiling water, and it swells on cooking into a perfect jelly, which can be used to make a food that is very smooth in consistency. Adulterated articles, mixed with potato flour and other starches of lower value, contain larger particles and lack this smoothness.
Microscopically, arrowroot starch granules are oval in shape, with the hilum at the proximal end.
Culinary
Arrowroot can be consumed in the form of biscuits, puddings, jellies, cakes, hot sauces, and also with beef tea, milk or veal broth. Kudzu arrowroot (Pueraria lobata) is used in noodles in Korean and Vietnamese cuisine. In the Victorian era it was used, boiled with a little flavouring added, as an easily digestible food for children and people with dietary restrictions. In Burma, arrowroot tubers, which are called artarlut, are boiled or steamed and eaten with salt and oil.
Arrowroot makes clear, shimmering fruit gels and prevents ice crystals from forming in homemade ice cream. It can also be used as a thickener for acidic foods, such as East Asian sweet and sour sauce. It is used in cooking to produce a clear, thickened sauce, such as a fruit sauce. It will not make the sauce go cloudy, like cornstarch, flour, or other starchy thickening agents would.
The lack of gluten in arrowroot flour makes it useful as a replacement for wheat flour for those with a gluten intolerance. It is, however, relatively high in carbohydrates and low in protein (approximately 7.7%) and does not provide a complete substitute for wheat flour in bread-making.
Arrowroot thickens at a lower temperature than flour or cornstarch, is not weakened by acidic ingredients, has a more neutral taste, and is not affected by freezing. It does not mix well with dairy, forming a slimy mixture. It is recommended that arrowroot be mixed with a cool liquid before adding to a hot fluid. The mixture should be heated only until the mixture thickens and removed immediately to prevent the mixture from thinning. Overheating tends to break down arrowroot's thickening property. Two teaspoons of arrowroot can be substituted for one tablespoon of cornstarch, or one teaspoon of arrowroot for one tablespoon of wheat flour.
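The substitution ratios in the paragraph above reduce to simple arithmetic; the helper functions below are hypothetical names defined only for this example.

```python
# Encodes the substitution ratios stated above (1 tbsp = 3 tsp)
def arrowroot_for_cornstarch(tbsp_cornstarch):
    """Teaspoons of arrowroot replacing a given amount of cornstarch."""
    return 2.0 * tbsp_cornstarch

def arrowroot_for_wheat_flour(tbsp_flour):
    """Teaspoons of arrowroot replacing a given amount of wheat flour."""
    return 1.0 * tbsp_flour

print(arrowroot_for_cornstarch(1.5))   # 3.0 tsp
print(arrowroot_for_wheat_flour(2.0))  # 2.0 tsp
```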
Shove halfpenny
The English pub game of shove ha'penny, involving sliding a coin across a graduated slate board, traditionally uses arrowroot powder as a lubricating medium.
| Biology and health sciences | Monocots | null |
60056473 | https://en.wikipedia.org/wiki/Horsehair%20crab | Horsehair crab | The horsehair crab, Erimacrus isenbeckii (Japanese: ケガニ, kegani), is a species of crab which is found mainly in the Northwest Pacific, around the Hokkaido coast in the Sea of Okhotsk and the Western Bering Sea and is an important commercial species used in Japanese cuisine. Despite the importance of the species, biological studies are usually specialized and limited. The catch for the species reached a peak in the 1950s at 27,000 tons and has decreased since, reaching 2,000 tons in 2003. Due to the commercial importance of the species, many stock enhancement programs have been utilized to help maintain a successful fishery. The species is commonly found on sandy benthic environments from shallow water to depths of up to 350 meters.
Biology and description
E. isenbeckii has a hard shell and soft spines which cover the shell and appendages. It can reach over 1 kg in weight. Erimacrus isenbeckii is known to feed three to four times in a ten to twelve hour time-span and cannibalism is common for E. isenbeckii in the spring. The carapace of E. isenbeckii can reach 100 to 120 mm in length in adults. Like the two other species in the same family, the gonophores of females are exposed.
In the western Bering Sea, males typically live in areas of around 3.4 °C and depths of around 66 meters, while females can be found in temperatures of 2.4 °C and depths of around 64 meters.
Life cycle
The embryonic development of the species can be divided into nine stages, each defined by cleavage and formation of distinct appendages. To incubate the eggs after spawning, females attach them to their pleopods. Based on many surveys conducted during the spawning and hatching seasons, the incubation period of the species is estimated to be over a year, with the embryonic growth rate mainly being controlled by the temperature of the water. Young hatch between March and May and remain as zooplankton until they reach the bottom of the sea by July. The hatching process occurs during the spring phytoplankton bloom in the Sea of Okhotsk. The zoea of this species can be mistaken for the two other species in the same family, but E. isenbeckii zoea lack carapace spines and have shorter lateral spines on the fork of the telson.
| Biology and health sciences | Crabs and hermit crabs | Animals |
45515504 | https://en.wikipedia.org/wiki/Microbiome | Microbiome | A microbiome () is the community of microorganisms that can usually be found living together in any given habitat. It was defined more precisely in 1988 by Whipps et al. as "a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical properties. The term thus not only refers to the microorganisms involved but also encompasses their theatre of activity". In 2020, an international panel of experts published the outcome of their discussions on the definition of the microbiome. They proposed a definition of the microbiome based on a revival of the "compact, clear, and comprehensive description of the term" as originally provided by Whipps et al., but supplemented with two explanatory paragraphs, the first pronouncing the dynamic character of the microbiome, and the second clearly separating the term microbiota from the term microbiome.
The microbiota consists of all living members forming the microbiome. Most microbiome researchers agree bacteria, archaea, fungi, algae, and small protists should be considered as members of the microbiome. The integration of phages, viruses, plasmids, and mobile genetic elements is more controversial. Whipps's "theatre of activity" includes the essential role secondary metabolites play in mediating complex interspecies interactions and ensuring survival in competitive environments. Quorum sensing induced by small molecules allows bacteria to control cooperative activities and adapts their phenotypes to the biotic environment, resulting, e.g., in cell-cell adhesion or biofilm formation.
All animals and plants form associations with microorganisms, including protists, bacteria, archaea, fungi, and viruses. In the ocean, animal–microbial relationships were historically explored in single host–symbiont systems. However, new explorations into the diversity of microorganisms associating with diverse marine animal hosts is moving the field into studies that address interactions between the animal host and the multi-member microbiome. The potential for microbiomes to influence the health, physiology, behaviour, and ecology of marine animals could alter current understandings of how marine animals adapt to change. This applies to especially the growing climate-related and anthropogenic-induced changes already impacting the ocean. The plant microbiome plays key roles in plant health and food production and has received significant attention in recent years. Plants live in association with diverse microbial consortia, referred to as the plant microbiota, living both inside (the endosphere) and outside (the episphere) of plant tissues. They play important roles in the ecology and physiology of plants. The core plant microbiome is thought to contain keystone microbial taxa essential for plant health and for the fitness of the plant holobiont. Likewise, the mammalian gut microbiome has emerged as a key regulator of host physiology, and coevolution between host and microbial lineages has played a key role in the adaptation of mammals to their diverse lifestyles.
Microbiome research originated in microbiology back in the seventeenth century. The development of new techniques and equipment boosted microbiological research and caused paradigm shifts in understanding health and disease. The development of the first microscopes allowed the discovery of a new, unknown world and led to the identification of microorganisms. Infectious diseases became the earliest focus of interest and research. However, only a small proportion of microorganisms are associated with disease or pathogenicity. The overwhelming majority of microbes are essential for healthy ecosystem functioning and known for beneficial interactions with other microbes and organisms. The concept that microorganisms exist as single cells began to change as it became increasingly obvious that microbes occur within complex assemblages in which species interactions and communication are critical. Discovery of DNA, the development of sequencing technologies, PCR, and cloning techniques enabled the investigation of microbial communities using cultivation-independent approaches. Further paradigm shifts occurred at the beginning of this century and still continue, as new sequencing technologies and accumulated sequence data have highlighted both the ubiquity of microbial communities in association with higher organisms and the critical roles of microbes in human, animal, and plant health. These have revolutionised microbial ecology. The analysis of genomes and metagenomes in a high-throughput manner now provides highly effective methods for researching the functioning of both individual microorganisms and whole microbial communities in natural habitats.
Background
History
Microbiome research originated in microbiology and started back in the seventeenth century. The development of new techniques and equipment has boosted microbiological research and caused paradigm shifts in understanding health and disease. Since infectious diseases have affected human populations throughout most of history, medical microbiology was the earliest focus of research and public interest. Additionally, food microbiology is an old field of empirical applications. The development of the first microscopes allowed the discovery of a new, unknown world and led to the identification of microorganisms.
Access to the previously invisible world opened the eyes and the minds of the researchers of the seventeenth century. Antonie van Leeuwenhoek investigated diverse bacteria of various shapes, fungi, and protozoa, which he called animalcules, mainly from water, mud, and dental plaque samples, and discovered biofilms as a first indication of microorganisms interacting within complex communities. Robert Koch's explanation of the origin of human and animal diseases as a consequence of microbial infection and development of the concept of pathogenicity was an important milestone in microbiology. These findings shifted the focus of the research community and the public on the role of microorganisms as disease-forming agents that needed to be eliminated.
However, comprehensive research over the past century has shown only a small proportion of microorganisms are associated with disease or pathogenicity. The overwhelming majority of microbes are essential for ecosystem functioning and known for beneficial interactions with other microbes as well as macroorganisms. In fact, maintaining a healthy microbiome is essential for human health and may be a target for new therapeutics. At the end of the nineteenth century, microbial ecology started with the pioneering work by Martinus W. Beijerinck and Sergei Winogradsky. The newly established science of environmental microbiology resulted in another paradigm shift: microorganisms are everywhere in natural environments, often associated with hosts and, for the first time, beneficial effects on their hosts were reported.
Subsequently, the concept that microorganisms exist as single cells began to change as it became increasingly obvious that microbes occur within complex assemblages in which species interactions and communication are critical to population dynamics and functional activities. Discovery of DNA, the development of sequencing technologies, PCR, and cloning techniques enabled the investigation of microbial communities using cultivation-independent, DNA and RNA-based approaches.
A further important step was the introduction of phylogenetic markers such as the 16S rRNA gene for microbial community analysis by Carl Woese and George E. Fox in 1977. Nowadays biologists can barcode bacteria, archaea, fungi, algae, and protists in their natural habitats, e.g., by targeting their 16S and 18S rRNA genes, internal transcribed spacer (ITS), or, alternatively, specific functional regions of genes coding for specific enzymes.
Another major paradigm shift was initiated at the beginning of this century and continues through today, as new sequencing technologies and accumulated sequence data have highlighted both the ubiquity of microbial communities in association within higher organisms and the critical roles of microbes in human, animal, and plant health. These new possibilities have revolutionized microbial ecology, because the analysis of genomes and metagenomes in a high-throughput manner provides efficient methods for addressing the functional potential of individual microorganisms as well as of whole communities in their natural habitats. Multiomics technologies including metatranscriptome, metaproteome and metabolome approaches now provide detailed information on microbial activities in the environment. Based on the rich foundation of data, the cultivation of microbes, which was often ignored or underestimated over the last thirty years, has gained new importance, and high throughput culturomics is now an important part of the toolbox to study microbiomes. The high potential and power of combining multiple "omics" techniques to analyze host-microbe interactions are highlighted in several reviews.
Etymology
The word microbiome (from the Greek micro meaning "small" and bíos meaning "life") was first used by J.L. Mohr in 1952 in The Scientific Monthly to mean the microorganisms found in a specific environment.
Definitions
Microbial communities have commonly been defined as the collection of microorganisms living together. More specifically, microbial communities are defined as multi-species assemblages, in which (micro) organisms interact with each other in a contiguous environment. In 1988, Whipps and colleagues working on the ecology of rhizosphere microorganisms provided the first definition of the term microbiome. They described the microbiome as a combination of the words micro and biome, naming a "characteristic microbial community" in a "reasonably well-defined habitat which has distinct physio-chemical properties" as their "theatre of activity". This definition represents a substantial advancement of the definition of a microbial community, as it defines a microbial community with distinct properties and functions and its interactions with its environment, resulting in the formation of specific ecological niches.
However, many other microbiome definitions have been published in recent decades. By 2020 the most cited definition was that of Lederberg, which described microbiomes within an ecological context as a community of commensal, symbiotic, and pathogenic microorganisms within a body space or other environment. Marchesi and Ravel focused in their definition on the genomes and microbial (and viral) gene expression patterns and proteomes in a given environment and its prevailing biotic and abiotic conditions. All these definitions imply that general concepts of macro-ecology could be easily applied to microbe-microbe as well as to microbe-host interactions. However, the extent to which these concepts, developed for macro-eukaryotes, can be applied to prokaryotes, with their different lifestyles regarding dormancy, variation of phenotype, and horizontal gene transfer, as well as to micro-eukaryotes, is not quite clear. This raises the challenge of considering an entirely novel body of conceptual ecology models and theory for microbiome ecology, particularly in relation to the diverse hierarchies of interactions of microbes with one another and with the host's biotic and abiotic environments. Many current definitions fail to capture this complexity and describe the term microbiome as encompassing the genomes of microorganisms only.
In 2020, a panel of international experts, organised by the EU-funded MicrobiomeSupport project, published the results of their deliberations on the definition of the microbiome. The panel was composed of about 40 leaders from diverse microbiome areas, and about one hundred further experts from around the world contributed through an online survey. They proposed a definition of the microbiome based on a revival of what they characterised as the "compact, clear, and comprehensive description of the term" as originally provided by Whipps et al. in 1988, amended with a set of recommendations considering subsequent technological developments and research findings. They clearly separate the terms microbiome and microbiota and provide a comprehensive discussion considering the composition of microbiota, the heterogeneity and dynamics of microbiomes in time and space, the stability and resilience of microbial networks, the definition of core microbiomes, and functionally relevant keystone species as well as co-evolutionary principles of microbe-host and inter-species interactions within the microbiome.
The panel extended the Whipps et al. definition, which contains all important points that are valid even 30 years after its publication in 1988, by two explanatory paragraphs differentiating the terms microbiome and microbiota and pronouncing its dynamic character, as follows:
The microbiome is defined as a characteristic microbial community occupying a reasonably well-defined habitat which has distinct physio-chemical properties. The microbiome not only refers to the microorganisms involved but also encompasses their theatre of activity, which results in the formation of specific ecological niches. The microbiome, which forms a dynamic and interactive micro-ecosystem prone to change in time and scale, is integrated in macro-ecosystems including eukaryotic hosts, and is here crucial for their functioning and health.
The microbiota consists of the assembly of microorganisms belonging to different kingdoms (prokaryotes: bacteria, archaea; eukaryotes: algae, protozoa, fungi, etc.), while "their theatre of activity" includes microbial structures, metabolites, mobile genetic elements (such as transposons, phages, and viruses), and relic DNA embedded in the environmental conditions of the habitat.
Membership
Microbiota
The microbiota comprises all living members forming the microbiome. Most microbiome researchers agree that bacteria, archaea, fungi, algae, and small protists should be considered members of the microbiome. The integration of phages, viruses, plasmids, and mobile genetic elements is a more controversial issue in the definition of the microbiome. There is also no clear consensus as to whether extracellular DNA derived from dead cells, so-called "relic DNA", belongs to the microbiome. Relic DNA can be up to 40% of the sequenced DNA in soil, and made up, on average, up to 33% of the total bacterial DNA in a broader analysis of habitats, with proportions as high as 80% in some samples. Despite its omnipresence and abundance, relic DNA had a minimal effect on estimates of taxonomic and phylogenetic diversity.
When it comes to the use of specific terms, a clear differentiation between microbiome and microbiota helps to avoid the controversy concerning the members of a microbiome. Microbiota is usually defined as the assemblage of living microorganisms present in a defined environment. As phages, viruses, plasmids, prions, viroids, and free DNA are usually not considered as living microorganisms, they do not belong to the microbiota.
The term microbiome, as it was originally postulated by Whipps and coworkers, includes not only the community of the microorganisms but also their "theatre of activity". The latter involves the whole spectrum of molecules produced by the microorganisms, including their structural elements (nucleic acids, proteins, lipids, polysaccharides), metabolites (signalling molecules, toxins, organic, and inorganic molecules), and molecules produced by coexisting hosts and structured by the surrounding environmental conditions. Therefore, all mobile genetic elements, such as phages, viruses, and "relic" and extracellular DNA, should be included in the term microbiome, but are not a part of microbiota. The term microbiome is also sometimes confused with the metagenome. Metagenome is, however, clearly defined as a collection of genomes and genes from the members of a microbiota.
Microbiome studies sometimes focus on the behaviour of a specific group of microbiota, generally in relation to or justified by a clear hypothesis. Terms like bacteriome, archaeome, mycobiome, or virome have increasingly appeared in the scientific literature, but these terms do not refer to biomes (a regional ecosystem with a distinct assemblage of (micro)organisms and a physical environment, often reflecting a certain climate and soil) as the microbiome itself does. Consequently, it would be better to use the original terms (bacterial, archaeal, or fungal community). In contrast to the microbiota, which can be studied separately, the microbiome is always composed of all members, which interact with each other, live in the same habitat, and form their ecological niche together. The well-established term virome is derived from virus and genome and is used to describe viral shotgun metagenomes consisting of a collection of nucleic acids associated with a particular ecosystem or holobiont. Viral metagenome has been suggested as a semantically and scientifically more accurate term.
Networks
Microbes interact with one another, and these symbiotic interactions have diverse consequences for microbial fitness, population dynamics, and functional capacities within the microbiome. The microbial interactions can either be between microorganisms of the same species or between different species, genera, families, and domains of life. The interactions can be separated into positive, negative, and neutral types. Positive interactions include mutualism, synergism, and commensalism. Negative interactions include amensalism, predation, parasitism, antagonism, and competition. Neutral interactions are interactions where there is no observed effect on the functional capacities or fitness of the interacting species.
Microbial life strategy concepts
Microbiomes exhibit different adaptive strategies. Oligotrophs are organisms that can live in an environment offering very low levels of nutrients, particularly carbon. They are characterised by slow growth, low rates of metabolism, and generally low population density. Oligotrophic environments include deep oceanic sediments, caves, glacial and polar ice, deep subsurface soil, aquifers, ocean waters, and leached soils. In contrast are the copiotrophs, which thrive in much higher carbon concentrations, and do well in high organic substrate conditions such as sewage lagoons.
In addition to oligotrophic and copiotrophic strategists, the competitor–stress tolerator–ruderal framework can influence the outcomes of interactions. For example, microorganisms competing for the same resource can also benefit from each other when competing for the same compound at different trophic levels. Stability of a complex microbial ecosystem depends on trophic interactions for the same substrate at different concentration levels. As of 2020, microbial social adaptations in nature have been understudied. Here molecular markers can provide insight into social adaptations, for example by testing theories of altruists and cheaters in native microbiomes.
Coevolution
According to the "separation" approach, microorganisms can be divided into pathogens, neutral organisms, and symbionts, depending on their interaction with their host. The coevolution between a host and its associated microbiota may accordingly be described as antagonistic (based on negative interactions) or mutualistic (based on positive interactions).
As of 2020, the growing number of publications on opportunistic pathogens and pathobionts has produced a shift towards a holistic approach in coevolution theory. The holistic approach sees the host and its associated microbiota as one unit (the so-called holobiont) that coevolves as one entity. According to the holistic approach, a holobiont's disease state is linked to dysbiosis, low diversity of the associated microbiota, and its variability: a so-called pathobiome state. The healthy state, on the other hand, is accompanied by eubiosis, high diversity, and uniformity of the respective microbiota.
Types
Terrestrial
Plant
The plant microbiome plays roles in plant health and food production and has received significant attention in recent years. Plants live in association with diverse microbial consortia. These microbes, referred to as the plant's microbiota, live both inside (the endosphere) and outside (the episphere) of plant tissues, and play important roles in the ecology and physiology of plants. "The core plant microbiome is thought to comprise keystone microbial taxa that are important for plant fitness and established through evolutionary mechanisms of selection and enrichment of microbial taxa containing functional genes essential for the fitness of the plant holobiont".
Plant microbiomes are shaped both by factors related to the plant itself, such as genotype, organ, species, and health status, and by factors related to the plant's environment, such as management, land use, and climate. The health status of a plant has been reported in some studies to be reflected by, or linked to, its microbiome.
Plant-associated microbiota colonise different niches on and inside plant tissues. All the above-ground plant parts together, called the phyllosphere, form a continuously evolving habitat due to ultraviolet (UV) radiation and changing climatic conditions. The phyllosphere is primarily composed of leaves. Below-ground plant parts, mainly roots, are generally influenced by soil properties. Harmful interactions affect plant growth through the pathogenic activities of some microbiota members, whereas beneficial microbial interactions promote plant growth.
The addition of synthetic nitrogen fertiliser may have little impact on soil microbiome structure or composition, but drastically reduces the microbiome network connectivity.
Animal
The mammalian gut microbiome has emerged as a key regulator of host physiology, and coevolution between host and microbial lineages has played a key role in the adaptation of mammals to their diverse lifestyles. Diet, especially herbivory, is an important correlate of microbial diversity in mammals. Most mammalian microbiomes are also strongly correlated with host phylogeny, despite profound shifts in diet. This suggests that host factors which themselves change across host phylogeny, such as gut physiology, play an important role in structuring gut microbiomes across mammals. The vertebrate adaptive immune system has even been speculated to have evolved as just such a factor for the selective maintenance of symbiotic homeostasis.
The importance of phylogeny-correlated factors to the diversity of vertebrate microbiomes more generally is still poorly understood. Phylosymbiosis, or the observation that more closely related host species have more similar microbiomes, has been described in a number of nonmammalian taxa. Other analyses have found substantial variation in phylosymbiotic signals among mammalian taxa, sometimes with conflicting results. The presence of a robust phylosymbiotic correlation implies that host factors control microbial assembly. Even if the specific mechanisms are unknown, variation in the strength or presence of a measurable phylosymbiotic signal across host phylogeny could prove useful for identifying such mechanisms through comparative studies. However, as of 2020 most studies have focused on just a few taxa at a time, and variable methods for both surveying the microbiome and measuring phylosymbiosis and host specificity (or the restriction of microbes to specific host lineages) have made generalisations difficult.
Without broader evolutionary context, it is unclear how universally conserved patterns of host-microbe phylosymbiosis actually are. Growing evidence indicates that the strong patterns identified in mammals are the exception rather than the rule in vertebrates. Meta-analyses of fish and birds have failed to detect the strength of correlations to diet and phylogeny reported in mammals. A recent analysis of samples from more than 100 vertebrate species also found the strength of phylogenetic correlation to be much higher in mammals than in birds, reptiles, amphibians, or fish. It is increasingly appreciated in nonvertebrate animals that fundamental aspects of the host's relationship to its symbiotic community can change drastically between taxa: many insects depend entirely on microbes for key metabolites, while others seem to be devoid of resident gut microbes.
Human
The human microbiome is the aggregate of all microbiota that reside on or within human tissues and biofluids along with the corresponding anatomical sites in which they reside, including the skin, mammary glands, seminal fluid, uterus, ovarian follicles, lung, saliva, oral mucosa, conjunctiva, biliary tract, and gastrointestinal tract. Types of human microbiota include bacteria, archaea, fungi, protists and viruses. Though micro-animals can also live on the human body, they are typically excluded from this definition. In the context of genomics, the term human microbiome is sometimes used to refer to the collective genomes of resident microorganisms; the term human metagenome has the same meaning.
Humans are colonised by many microorganisms, with approximately the same order of magnitude of non-human cells as human cells. Some microorganisms that colonize humans are commensal, meaning they co-exist without harming humans; others have a mutualistic relationship with their human hosts. Conversely, some non-pathogenic microorganisms can harm human hosts via the metabolites they produce, like trimethylamine, which the human body converts to trimethylamine N-oxide via FMO3-mediated oxidation. Certain microorganisms perform tasks that are known to be useful to the human host, but the role of most of them is not well understood. Those that are expected to be present, and that under normal circumstances do not cause disease, are sometimes deemed normal flora or normal microbiota.
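To make the "same order of magnitude" claim concrete, here is a back-of-the-envelope calculation using the widely cited estimates of Sender, Fuchs and Milo (2016) for a reference adult; the cell counts are assumptions taken from that literature, not figures from this article.

```python
# Rough ratio of microbial to human cells in a "reference" adult,
# using the commonly cited estimates of Sender, Fuchs & Milo (2016).
bacterial_cells = 3.8e13  # ~38 trillion bacteria (literature estimate)
human_cells = 3.0e13      # ~30 trillion human cells (literature estimate)
print(bacterial_cells / human_cells)  # ≈ 1.3, i.e. the same order of magnitude
```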
The Human Microbiome Project (HMP) took on the task of sequencing the genomes of the human microbiota, focusing particularly on the microbiota that normally inhabit the skin, mouth, nose, digestive tract, and vagina. It reached a milestone in 2012 when it published its initial results.
Marine
All animals on Earth form associations with microorganisms, including protists, bacteria, archaea, fungi, and viruses. In the ocean, animal–microbial relationships were historically explored in single host–symbiont systems. However, new explorations into the diversity of microorganisms associated with diverse marine animal hosts are moving the field towards studies that address interactions between the animal host and a multi-member microbiome. The potential for microbiomes to influence the health, physiology, behavior, and ecology of marine animals could alter current understanding of how marine animals adapt to change, especially the growing climate-related and other anthropogenic changes already impacting the ocean.
The microbiomes of diverse marine animals are currently under study, from simple organisms, including sponges and ctenophores, to more complex organisms such as sea squirts and sharks.
The relationship between the Hawaiian bobtail squid and the bioluminescent bacterium Aliivibrio fischeri is one of the best-studied symbiotic relationships in the sea and is a model system for general symbiosis research. This relationship has provided insight into fundamental processes in animal–microbial symbioses, especially biochemical interactions and signaling between the host and bacterium.
The gutless marine oligochaete worm Olavius algarvensis is another relatively well-studied marine host to microbes. These worms, about three centimetres long, reside within shallow marine sediments of the Mediterranean Sea. They do not have a mouth or a digestive or excretory system, but are instead nourished by a suite of extracellular bacterial endosymbionts that rely on the coordinated use of sulfur present in the environment. This system has benefited from some of the most sophisticated 'omics and visualization tools. For example, multi-labeled probing has improved visualization of the microbiome, and transcriptomics and proteomics have been applied to examine host–microbiome interactions, including energy transfer between the host and microbes and recognition of the consortia by the worm's innate immune system. The major strength of this system is that it offers the ability to study host–microbiome interactions with a low-diversity microbial consortium, along with a number of host and microbial genomic resources.
Corals are one of the more common examples of an animal host whose symbiosis with microalgae can turn to dysbiosis, and is visibly detected as bleaching. Coral microbiomes have been examined in a variety of studies, which demonstrate how variations in the ocean environment, most notably temperature, light, and inorganic nutrients, affect the abundance and performance of the microalgal symbionts, as well as calcification and physiology of the host. Studies have also suggested that resident bacteria, archaea, and fungi additionally contribute to nutrient and organic matter cycling within the coral, with viruses also possibly playing a role in structuring the composition of these members, thus providing one of the first glimpses at a multi-domain marine animal symbiosis. The gammaproteobacterium Endozoicomonas is emerging as a central member of the coral's microbiome, with flexibility in its lifestyle. Given the recent mass bleaching occurring on reefs, corals will likely continue to be a useful and popular system for symbiosis and dysbiosis research.
Sponges are common members of the ocean's diverse benthic habitats and their abundance and ability to filter large volumes of seawater have led to the awareness that these organisms play critical roles in influencing benthic and pelagic processes in the ocean. They are one of the oldest lineages of animals, and have a relatively simple body plan that commonly associates with bacteria, archaea, algal protists, fungi, and viruses. Sponge microbiomes are composed of specialists and generalists, and complexity of their microbiome appears to be shaped by host phylogeny. Studies have shown that the sponge microbiome contributes to nitrogen cycling in the oceans, especially through the oxidation of ammonia by archaea and bacteria. Most recently, microbial symbionts of tropical sponges were shown to produce and store polyphosphate granules, perhaps enabling the host to survive periods of phosphate depletion in oligotrophic marine environments. The microbiomes of some sponge species do appear to change in community structure in response to changing environmental conditions, including temperature and ocean acidification, as well as synergistic impacts.
Cetacean microbiomes can be difficult to assess because of difficulties in accessing microbial samples; for example, many whale species are rare and are deep divers. There are different techniques for sampling a cetacean's gut microbiome. The most common is collecting fecal samples from the environment and subsampling from the uncontaminated centre.
The skin is a barrier protecting marine mammals from the outside world. The epidermal microbiome on the skin is an indicator of how healthy the animal is, and is also an ecological indicator of the state of the surrounding environment. Knowing what the microbiome of marine mammal skin looks like under typical conditions allows understanding of how these communities differ from the free-living microbial communities found in the sea. Cetaceans are in danger because they are affected by multiple stress factors that make them more vulnerable to various diseases. They show high susceptibility to airway infections, but little is known about their respiratory microbiome. Sampling the exhaled breath or "blow" of cetaceans can provide an assessment of their state of health. Blow is composed of a mixture of microorganisms and organic material, including lipids, proteins, and cellular debris derived from the linings of the airways, which, when released into the relatively cooler outdoor air, condenses to form a visible mass of vapor that can be collected. There are various methods for collecting exhaled breath samples; one of the most recent is the use of aerial drones. This method provides a safer, quieter, and less invasive alternative, and often a cost-effective option, for monitoring fauna and flora. Blow samples are taken to the laboratory, where the respiratory tract microbiota are amplified and sequenced. The use of aerial drones has been more successful with large cetaceans due to their slow swim speeds and larger blow sizes.
Assessment
Currently available methods for studying microbiomes, so-called multi-omics, range from high-throughput isolation (culturomics) and visualization (microscopy) to characterising taxonomic composition (metabarcoding), metabolic potential (metabarcoding of functional genes, metagenomics), and microbial activity (metatranscriptomics, metaproteomics, metabolomics). Based on metagenome data, microbial genomes can be reconstructed. While the first metagenome-assembled genomes were reconstructed from environmental samples, in recent years several thousand bacterial genomes have been binned without culturing the underlying organisms. For example, 154,723 microbial genomes of the global human microbiome were reconstructed in 2019 from 9,428 metagenomes.
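As a rough illustration of how binning works, the sketch below clusters synthetic "contigs" by tetranucleotide composition and read coverage; the sequences, cluster count, and feature choices are hypothetical stand-ins for what real binners such as MetaBAT2 or CONCOCT compute far more carefully.

```python
# Toy sketch of metagenomic binning: group contigs into putative genomes
# ("bins") by sequence composition and read coverage using k-means.
import numpy as np
from itertools import product
from sklearn.cluster import KMeans

def tetranucleotide_freq(seq: str) -> np.ndarray:
    """Frequency vector over all 256 tetranucleotides (4-mers)."""
    kmers = ["".join(p) for p in product("ACGT", repeat=4)]
    index = {k: i for i, k in enumerate(kmers)}
    counts = np.zeros(256)
    for i in range(len(seq) - 3):
        counts[index[seq[i:i + 4]]] += 1
    total = counts.sum()
    return counts / total if total else counts

# Synthetic "contigs": random sequences standing in for assembly output.
rng = np.random.default_rng(0)
contigs = ["".join(rng.choice(list("ACGT"), size=2000)) for _ in range(20)]
coverage = rng.uniform(5, 50, size=len(contigs)).reshape(-1, 1)

# Feature matrix: composition plus normalized coverage, then cluster.
X = np.hstack([np.array([tetranucleotide_freq(c) for c in contigs]),
               coverage / coverage.max()])
bins = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(bins)  # bin assignment per contig
```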
Computational modeling of microbiomes has been used to complement experimental methods for investigating microbial function by utilizing multi-omic data to predict complex inter-species and host-species dynamics. A popular in silico method is to combine metabolic network models of microbial taxa present in a community and use a mathematical modeling strategy such as flux balance analysis to predict the metabolic function of the microbial community at a taxon and community-level.
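A minimal sketch of the flux balance analysis idea, posed directly as a linear program with SciPy rather than with a dedicated constraint-based modeling package; the toy network, bounds, and objective are invented for illustration.

```python
# Flux balance analysis on a toy 3-reaction network: maximize the
# "growth" flux v3 subject to steady state (S v = 0) and flux bounds.
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows = metabolites, columns = reactions).
# R1: uptake of metabolite A; R2: A -> B; R3: drain of B as "growth".
S = np.array([
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3
])
bounds = [(0, 10),    # R1: uptake limited by the environment
          (0, None),  # R2: internal conversion
          (0, None)]  # R3: biomass drain (the objective)

# linprog minimizes, so negate the objective to maximize flux through R3.
c = np.array([0, 0, -1])
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("optimal fluxes:", res.x)    # expected: [10, 10, 10]
print("max growth flux:", -res.fun)
```

Community-level extensions stack the stoichiometric matrices of several taxa and couple them through shared exchange metabolites, which is conceptually the same linear program at a larger scale.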
As of 2020, understanding remains limited due to missing links between the massive availability of microbiome DNA sequence data on the one hand and limited availability of microbial isolates needed to confirm metagenomic predictions of gene function on the other hand. Metagenome data provides a playground for new predictions, yet much more data is needed to strengthen the links between sequence and rigorous functional predictions. This becomes obvious when considering that the replacement of one single amino acid residue by another may lead to a radical functional change, resulting in an incorrect functional assignment to a given gene sequence. Additionally, cultivation of new strains is needed to help identify the large fraction of unknown sequences obtained from metagenomics analyses, which for poorly studied ecosystems can be more than 70%. Depending on the applied method, even in well-studied microbiomes, 40–70% of the annotated genes in fully sequenced microbial genomes have no known or predicted function. As of 2019, 85 of the then established 118 phyla had not had a single species described, presenting a challenge to understanding prokaryotic functional diversity.
The number of prokaryotic phyla may reach hundreds, and archaeal ones are among the least studied. The growing gap between the diversity of Bacteria and Archaea held in pure culture and those detected by molecular methods has led to the proposal to establish a formal nomenclature for not-yet cultured taxa, primarily based on sequence information. According to this proposal, the concept of Candidatus species would be extended to the groups of closely related genome sequences, and their names would be published following established rules of bacterial nomenclature.
Each microbiome system is suited to addressing different types of questions, based on the culturability of the microbes, the genetic tractability of the microbes and host (where relevant), the ability to maintain the system in a laboratory setting, and the ability to render the host or environment germ-free.
| Biology and health sciences | Ecology | Biology |
43674427 | https://en.wikipedia.org/wiki/Regenerative%20agriculture | Regenerative agriculture | Regenerative agriculture is a conservation and rehabilitation approach to food and farming systems. It focuses on topsoil regeneration, increasing biodiversity, improving the water cycle, enhancing ecosystem services, supporting biosequestration, increasing resilience to climate change, and strengthening the health and vitality of farm soil.
Regenerative agriculture is not a specific practice. It combines a variety of sustainable agriculture techniques. Practices include maximal recycling of farm waste and adding composted material from non-farm sources. Regenerative agriculture on small farms and gardens is based on permaculture, agroecology, agroforestry, restoration ecology, keyline design, and holistic management. Large farms are also increasingly adopting regenerative techniques, using "no-till" and/or "reduced till" practices.
As soil health improves, input requirements may decrease, and crop yields may increase as soils are more resilient to extreme weather and harbor fewer pests and pathogens.
Regenerative agriculture is promoted as a way to mitigate climate change through carbon dioxide removal from the atmosphere and sequestration in soil. Along with the reduction of carbon emissions, carbon sequestration is gaining popularity in agriculture, and individuals as well as groups are taking action to fight climate change.
History
Origins
Regenerative agriculture is based on various agricultural and ecological practices, with a particular emphasis on minimal soil disturbance and the practice of composting. Similar ideas, focused on the use of "sea minerals", led to innovations in no-till practices, such as slash-and-mulch in tropical regions. Sheet mulching is a regenerative agriculture practice that smothers weeds and adds nutrients to the soil below.
In the early 1980s, the Rodale Institute began using the term ‘regenerative agriculture’. Rodale Publishing formed the Regenerative Agriculture Association, which began publishing regenerative agriculture books in 1987 and 1988.
However, the institute stopped using the term in the late 1980s, and it appeared only sporadically (in 2005 and 2008) until the institute released a white paper in 2014, titled "Regenerative Organic Agriculture and Climate Change". The paper's summary states, "we could sequester more than 100% of current annual CO2 emissions with a switch to common and inexpensive organic management practices, which we term 'regenerative organic agriculture.'" The paper described agricultural practices, like crop rotation, compost application, and reduced tillage, that are similar to organic agriculture methods.
In 2002, Storm Cunningham documented the beginning of what he called "restorative agriculture" in his first book, The Restoration Economy. Cunningham defined restorative agriculture as a technique that rebuilds the quantity and quality of topsoil, while also restoring local biodiversity (especially native pollinators) and watershed function. Restorative agriculture was one of the eight sectors of restorative development industries/disciplines in The Restoration Economy.
Developments (since 2010)
Regenerative agriculture has appeared in academic research since the early-to-mid 2010s in the fields of environmental science, plant science, and ecology. As use of the term has expanded, many books have been published on the topic and several organizations have started to promote regenerative agriculture techniques. Allan Savory gave a TED talk on fighting and reversing climate change in 2013. He also launched The Savory Institute, which educates ranchers on methods of holistic land management. Abe Collins created LandStream to monitor ecosystem performance in regenerative agriculture farms. Eric Toensmeier had a book published on the subject in 2016. However, researchers at Wageningen University in the Netherlands found no consistent definition of what people referencing "regenerative agriculture" meant. They also found that most of the work on this topic was instead the authors' attempt at shaping what regenerative agriculture meant.
In 2011, the not-for-profit Mulloon Institute was founded in New South Wales, Australia, to develop and promote regenerative practices that reclaim land as water-retentive areas by slowing the loss of water from the land. The members of the Institute created a 22-weir in-stream project with neighbours over 2 kilometres of Mulloon Creek. A study indicates that the outcomes were positive but relatively unpredictable, and that the suitability of ground conditions on site was key to success. Bottom-up change in Australian regenerative agriculture involves a complex set of narratives and barriers to change affecting farmers. A Western Australian government-funded survey of land hydration, conducted by the Mulloon Institute in June 2022, concluded that water retention projects supported the regeneration of native plant species.
Founded in 2013, 501(c)3 non-profit Kiss the Ground was one of the first to publicize the term to a broader audience. Today the group runs a series of media, farmland, education, and policy programs to raise awareness around soil health and support farmers who aim to transition from conventional to regenerative land management practices. The film Kiss the Ground, executive produced by Julian Lennon and Gisele Bündchen and narrated by Woody Harrelson, was released in 2020. A follow-up documentary, Common Ground, premiered in 2023 and was the recipient of the 2023 Human/Nature Award at the Tribeca Film Festival.
Not all regenerative systems emphasize ruminants. In 2017, Reginaldo Haslett Marroquin published "In the Shadow of Green Man" with Per Andreeason, which detailed Haslett Marroquin's early life as a campesino in Guatemala and how these experiences led him to develop regenerative poultry agroforestry systems that are now being practiced and expanding in the United States and elsewhere.
Several large corporations have also announced regenerative agriculture initiatives in the last few years. In 2019, General Mills announced an effort to promote regenerative agriculture practices in its supply chain. The farming practices have received criticism from academic and government experts on sustainability in farming. In particular, Gunsmoke Farm partnered with General Mills to transition to regenerative agriculture practices and become a teaching hub for others. Experts from the area have expressed concerns about the farm now doing more harm than good, with agronomist Ruth Beck stating that "Environmental marketing got ahead of what farmers can actually do".
In February 2021, the regenerative agriculture market gained traction after Joe Biden's Secretary of Agriculture Tom Vilsack made reference to it during his Senate confirmation hearing. The Biden administration wants to utilize $30 billion from the USDA's Commodity Credit Corporation to incentivise farmers to adopt sustainable practices. Vilsack stated in the hearing, "It is a great tool for us to create the kind of structure that will inform future farm bills about what will encourage carbon sequestration, what will encourage precision agriculture, what will encourage soil health and regenerative agricultural practices." After this announcement from the Biden administration, several national and international corporations announced initiatives in regenerative agriculture. During the House of Representatives Committee on Agriculture's first hearing on climate change, Gabe Brown, a proponent of regenerative agriculture, testified about the role of regenerative agriculture in both the economics and the sustainability of farming.
In 2021, PepsiCo announced that by 2030 they will work with the farmers in their supply chain to establish regenerative agriculture practices across their approximately 7 million acres. In 2021, Unilever announced an extensive implementation plan to incorporate regenerative agriculture throughout their supply chain. VF Corporation, the parent company of The North Face, Timberland, and Vans, announced in 2021 a partnership with Terra Genesis International to create a supply chain for their rubber that comes from sources utilizing regenerative agriculture. Nestle announced in 2021 a $1.8 billion investment in regenerative agriculture in an effort to reduce their emissions by 95%.
Several days before the opening of the 2022 United Nations Climate Change Conference, a report was published, sponsored by some of the biggest agricultural companies. The report was produced by Sustainable Markets Initiative, an organisation of companies trying to become climate friendly, established by King Charles III. According to the report, regenerative agriculture is already implemented on 15% of all cropland. Despite this, the rate of transition is "far too slow" and must be tripled by the year 2030 to prevent the global temperature passing the threshold of 1.5 degrees above preindustrial levels. Agricultural practices must immediately change in order to avoid the damage that would result. One of the authors emphasised that “The interconnection between human health and planetary health is more evident than ever before.” The authors proposed a set of measures for accelerating the transition, like creating metrics for measuring how much farming is sustainable, and paying farmers who will change their farming practices to more sustainable ones.
Principles
Several individuals, groups, and organizations have attempted to define the principles of regenerative agriculture. In their review of the existing literature on regenerative agriculture, researchers at Wageningen University created a database of 279 research articles on the subject. Their analysis of this database found that people using the term regenerative agriculture were using different principles to guide their efforts. The four most consistent principles were found to be: (1) enhancing and improving soil health; (2) optimizing resource management; (3) alleviating climate change; and (4) improving water quality and availability.
Notable definitions of principles
The organization The Carbon Underground created a set of principles that have been signed on to by a number of non-profits and corporations, including Ben & Jerry's, Annie's, and the Rodale Institute, which was one of the first organizations to use the term "Regenerative Agriculture". The principles they outline include building soil health and fertility, increasing water percolation and retention, increasing biodiversity and ecosystem health, and reducing carbon emissions and current atmospheric CO2 levels.
The group Terra Genesis International, VF Corporation's partner in its regenerative agriculture initiative, created a set of four principles:
"Progressively improve whole agroecosystems (soil, water and biodiversity)"
"Create context-specific designs and make holistic decisions that express the essence of each farm"
"Ensure and develop just and reciprocal relationships amongst all stakeholders"
"Continually grow and evolve individuals, farms, and communities to express their innate potential"
Instead of focusing on the specifics of food production technologies, human ecologist Philip Loring suggests a food system-level focus on regeneration, arguing that it is the combination of flexibility and diversity in our food systems that supports regenerative ecological practices. Loring argues that, depending on the relative flexibility of people in the food system with respect to the foods they eat and the overall diversity of foods being produced and harvested, food systems can fall into one of four general patterns:
Regenerative (high diversity, high flexibility), where ecosystems are able to recycle and replenish used energy to usable forms, such as found in many Indigenous food systems
Degenerative (high diversity, low flexibility), where people fixate on specific resources and only switch to alternatives once the preferred commodity is exhausted, such as fishing down the food web.
Coerced (low diversity, low flexibility), where people subsidize prized resources at the expense of the surrounding ecosystem, such as in the Maine lobster fishery.
Impoverished (low diversity, high flexibility), where people are willing to be flexible but, because they are living in degraded ecosystems and possibly a poverty trap, cannot allow ecosystems and resources to regenerate.
Loring's typology is based on a principle he calls the Conservation of Change, which states that change must always happen somewhere in ecosystems, and derives from the Second Law of Thermodynamics and Barry Commoner's premise that, in ecosystems, "there is no free lunch".
Practices
Practices and principles used in regenerative farming include:
Alternative food networks (AFNs), commonly defined by attributes such as the spatial proximity between farmers and consumers.
Aquaculture
Ecological aquaculture
Regenerative ocean farming
Agroecology
Agroforestry
Biochar/terra preta
Borders planted for pollinator habitat and other beneficial insects
Compost, compost tea, animal manures and thermal compost
Conservation farming, no-till farming, minimum tillage, and pasture cropping
Cover crops & multi-species cover crops
Home gardens, to mitigate the adverse effect of global food shocks and food price volatilities, also as a strategy to enhance household food security and nutrition
Regrowing vegetables, for recycling and sustainable living
Keyline subsoiling
Livestock: well-managed grazing, animal integration and holistically managed grazing
Grass-fed cattle
Natural farming
Natural sequence farming
Organic annual cropping and crop rotations
Perennial crops
Ponding banks, used to prevent soil erosion, also known as grading banks and, in parts of Australia, commonly known as Purvis banks, after Ron Purvis Jr of Woodgreen Station in the Northern Territory
Permaculture design
Polyculture and full-time succession planting of multiple and inter-crop plantings
Silvopasture
Soil food web
Environmental impacts
Carbon sequestration
Conventional agricultural practices such as plowing and tilling release carbon dioxide (CO2) from the soil by exposing organic matter to the surface and thus promoting oxidation. It is estimated that roughly a third of the total anthropogenic inputs of CO2 to the atmosphere since the industrial revolution have come from the degradation of soil organic matter and that 30–75% of global soil organic matter has been lost since the advent of tillage-based farming. Greenhouse gas (GHG) emissions associated with conventional soil and cropping activities represent 13.7% of anthropogenic emissions, or 1.86 Pg-C y−1. The raising of ruminant livestock also contributes GHGs, representing 11.6% of anthropogenic emissions, or 1.58 Pg-C y−1. Furthermore, runoff and siltation of water bodies associated with conventional farming practices promote eutrophication and emissions of methane.
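As a quick consistency check on the figures above, the two percentage/flux pairs imply essentially the same total anthropogenic carbon flux.

```python
# Implied total anthropogenic carbon flux from the two quoted pairs:
# cropping: 1.86 Pg-C/yr stated to be 13.7% of the total;
# ruminant livestock: 1.58 Pg-C/yr stated to be 11.6% of the total.
print(1.86 / 0.137)  # ≈ 13.6 Pg-C per year
print(1.58 / 0.116)  # ≈ 13.6 Pg-C per year, consistent with the above
```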
Regenerative agriculture practices such as no-till farming, rotational grazing, mixed crop rotation, cover cropping, and the application of compost and manure have the potential to reverse this trend. No-till farming reintroduces carbon into the soil, as crop residues are pressed down during seeding. Some studies suggest that adoption of no-till practices could triple soil carbon content in less than 15 years. Additionally, 1 Pg-C y−1, representing roughly a fourth to a third of anthropogenic CO2 emissions, may be sequestered by converting croplands to no-till systems on a global scale.
There is mixed evidence on the carbon sequestration potential of regenerative grazing. A meta-analysis of relevant studies between 1972 and 2016 found that Holistic Planned Grazing had no better effect than continuous grazing on plant cover and biomass, although it may have benefited some areas with higher precipitation. However, some studies have found positive impacts compared to conventional grazing. One study found that regenerative grazing management, particularly adaptive multipaddock (AMP) grazing, has been shown to reduce soil degradation compared to continuous grazing and thus has the potential to mitigate carbon emissions from soil. Another study found that crop rotation and maintenance of permanent cover crops help to reduce soil erosion as well, and in conjunction with AMP grazing, may result in net carbon sequestration.
There is a less developed evidence base comparing regenerative grazing with the absence of livestock on grasslands. Several peer-reviewed studies have found that excluding livestock completely from semi-arid grasslands can lead to significant recovery of vegetation and soil carbon sequestration. A 2021 peer-reviewed paper found that sparsely grazed and natural grasslands account for 80% of the total cumulative carbon sink of the world’s grasslands, whereas managed grasslands (i.e. with greater livestock density) have been a net greenhouse gas source over the past decade. A 2011 study found that multi-paddock grazing of the type endorsed by Savory resulted in more soil carbon sequestration than heavy continuous grazing, but very slightly less soil carbon sequestration than "graze exclosure" (excluding grazing livestock from land). Another peer-reviewed paper found that if current pastureland was restored to its former state as wild grasslands, shrublands, and sparse savannas without livestock, this could store an estimated 15.2-59.9 Gt of additional carbon.
The total carbon sequestration potential of regenerative grazing has been debated between advocates and critics. One study suggests that total conversion of livestock raising to AMP grazing practices coupled with conservation cropping has the potential to convert North American farmlands to a carbon sink, sequestering approximately 1.2 Pg-C y−1. Over the next 25–50 years, the cumulative sequestration potential is 30-60 Pg-C. Additions of organic manures and compost further build soil organic carbon, thus contributing to carbon sequestration potential. However, a study by the Food and Climate Research Network in 2017 estimates that, on the basis of meta-study of the scientific literature, the total global soil carbon sequestration potential from grazing management ranges from 0.3-0.8 Gt CO2eq per year, which is equivalent to offsetting a maximum of 4-11% of current total global livestock emissions, and that “Expansion or intensification in the grazing sector as an approach to sequestering more carbon would lead to substantial increases in methane, nitrous oxide and land use change-induced CO2 emissions”, leading to an overall increase in emissions. Consistent with this, Project Drawdown (referenced in the film Kiss the Ground) estimates the total carbon sequestration potential of improved managed grazing at 13.72 - 20.92 Gigatons CO2eq between 2020–2050, equal to 0.46-0.70 Gt CO2eq per year. A 2022 peer-reviewed paper estimated the carbon sequestration potential of improved grazing management at a similar level of 0.15-0.70 Gt CO2eq per year.
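The unit conversions behind two of the figures above can be reproduced directly; note that the total-livestock-emissions number used for the offset share is an assumption (a commonly cited FAO-scale estimate), not a value taken from the studies themselves.

```python
# Project Drawdown's cumulative 2020-2050 estimate, expressed per year:
lo_gt, hi_gt, years = 13.72, 20.92, 30
print(lo_gt / years, hi_gt / years)  # ≈ 0.46 and 0.70 Gt CO2eq per year

# The FCRN study's 4-11% offset is consistent with sequestering
# 0.3-0.8 Gt CO2eq/yr against total livestock emissions of roughly
# 7.1 Gt CO2eq/yr (an assumed figure, used here for illustration only).
livestock_emissions = 7.1  # Gt CO2eq/yr, assumption
print(0.3 / livestock_emissions * 100)  # ≈ 4%
print(0.8 / livestock_emissions * 100)  # ≈ 11%
```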
Research by the Rodale Institute suggests that a worldwide transition to regenerative agriculture could sequester more than 100% of the CO2 currently emitted by human activity.
Nutrient cycling
Soil organic matter is the primary sink of nutrients necessary for plant growth such as nitrogen, phosphorus, zinc, sulfur, and molybdenum. Conventional tillage-based farming promotes rapid erosion and degradation of soil organic matter, depleting soil of plant nutrients and thus lowering productivity. Tillage, in conjunction with additions of inorganic fertilizer, also destroys soil microbial communities, reducing production of organic nutrients in soil. In contrast, use of organic fertilizer will significantly increase the organic matter in the soil. Practices that restore organic matter may be used to increase the total nutrient load of soil. For example, regenerative management of ruminant livestock in mixed-crop and grazing agroecosystems has been shown to improve soil nutrient cycling by encouraging the consumption and decomposition of residual crop biomass and promoting the recovery of nitrogen-fixing plant species. Regenerative crop management practices, namely the use of crop rotation to ensure permanent ground cover, have the potential to increase soil fertility and nutrient levels if nitrogen-fixing crops are included in the rotation. Crop rotation and rotational grazing also allow the nutrients in soil to recover between growing and grazing periods, thus further enhancing overall nutrient load and cycling.
Soil microbiome and its role in nutrient cycling
The soil microbiome, which consists of bacteria, fungi, and other microorganisms, plays an essential role in nutrient cycling by decomposing organic matter and releasing essential nutrients for plant growth. Microbial activity drives decomposition and mineralization, which transform complex organic compounds into simpler forms that plants can absorb. In nitrogen cycling, microbial decomposers convert organic nitrogen into ammonium (NH₄⁺) through ammonification, while nitrogen-fixing bacteria convert atmospheric nitrogen (N₂) into plant-available forms; nitrifying bacteria then oxidize ammonium into nitrate (NO₃⁻). While both ammonium and nitrate are important for plant growth, nitrate is preferred by many plants due to its greater mobility, lower toxicity, and efficient transport systems. Ammonium is a viable alternative because it is more readily assimilated once inside the plant, although it can cause toxicity if taken up in excess. Environmental conditions such as soil pH and nutrient availability play major roles in determining which form of nitrogen is preferentially absorbed. Soil microbes also play a key role in phosphorus cycling, helping to dissolve phosphorus from organic material so that it becomes available to plants. A diverse microbial community also helps to suppress soil-borne diseases and reduces the need for synthetic fertilizers.
Impact of farming practices on nutrient cycling: conventional vs regenerative
Conventional farming disrupts nutrient cycling through practices like tillage, which breaks down soil structure, reduces soil organic matter (SOM), and negatively affects overall soil health. Such practices lead to reduced crop yields, increased reliance on synthetic fertilizers, and environmental problems like nutrient runoff and water pollution. Over-reliance on synthetic fertilizers depletes soil health by favoring the growth of certain microorganisms over others, thereby reducing microbial diversity and organic matter decomposition and leading to soil degradation. In contrast, regenerative agriculture promotes practices that enhance soil health and nutrient cycling. These practices include reduced tillage, which helps to preserve SOM; the use of organic fertilizers such as compost for soil enrichment; regenerative livestock management; and crop rotation with leguminous plants like soybean, which promote nitrogen fixation through the symbiotic relationship between nitrogen-fixing bacteria and root nodules. Integrating livestock into cropping systems has been shown to improve nutrient cycling, as animal manure enriches the soil and promotes microbial diversity. Cover cropping is another practice that helps to prevent erosion, leading to healthier and more resilient soils.
Biodiversity
Conventional agricultural practices are generally understood to simplify agroecosystems through introduction of monocultures and eradication of diversity in soil microbial communities through chemical fertilization. In natural ecosystems, biodiversity serves to regulate ecosystem function internally, but under conventional agricultural systems, such control is lost and requires increasing levels of external, anthropogenic input. By contrast, regenerative agriculture practices including polycultures, mixed crop rotation, cover cropping, organic soil management, and low- or no-tillage methods have been shown to increase overall species diversity while reducing pest population densities. Additionally, practices that favor organic over inorganic inputs aid in restoring below-ground biodiversity by enhancing the functioning of soil microbial communities. A survey of organic and conventional farms in Europe found that on the whole, species across several taxa were higher in richness and/or abundance on organic farms compared to conventional ones, especially species whose populations have been demonstrably harmed as a direct result of conventional agriculture.
AMP grazing can help improve biodiversity, since increased soil organic carbon stocks also promote a diversity of soil microbial communities. Implementation of AMP in North American prairies, for example, has been correlated with an increase in forage productivity and the restoration of plant species that had previously been decimated by continuous grazing practices. Furthermore, studies of arid and semiarid regions of the world where regenerative grazing has been practiced for a long time following prior periods of continuous grazing have shown a recovery of biodiversity, grass species, and pollinator species. Crop diversification also ensures that the agroecosystem remains productive when facing lower levels of soil fertility: higher levels of plant diversity lead to increases in numerous factors that contribute to soil fertility, such as soil N, K, Ca, Mg, and C, as well as cation exchange capacity (CEC) and soil pH.
Global efforts
United States
The United States has seen a groundswell of interest in regenerative agriculture, with both private-sector support and government funding:
USDA Partnerships for Climate-Smart Commodities: This $2.8 billion initiative funds projects that implement regenerative practices aimed at carbon sequestration and sustainable farming.
Regenerative Organic Certification: The Rodale Institute’s certification builds on organic standards, incorporating soil health, animal welfare, and farmer equity to promote comprehensive regenerative practices.
Canada
Canada supports regenerative agriculture with federal and provincial programs:
Living Laboratories Initiative: This collaborative project, involving farmers, scientists, and government, supports the development of RA practices to improve soil health and resilience.
Sustainable Agriculture Strategy: This strategy promotes soil and water conservation, incentivizes cover cropping, and focuses on reducing chemical inputs in agriculture.
Mexico
Mexican organizations focus on sustainable land management and promoting agroecology:
Colectivo Ecologista Jalisco: This initiative helps local communities implement agroecological practices to restore degraded lands and diversify crops.
South America
Brazil
Brazil’s initiatives emphasize low-carbon agriculture and rainforest preservation:
Programa ABC (Agricultura de Baixa Emissão de Carbono): This program incentivizes no-till farming, crop-livestock integration, and reforestation.
Regenerative Agroforestry Projects: NGOs in Brazil work with farmers to integrate food crops with native trees, promoting biodiversity and sequestering carbon.
Argentina
Argentina has adopted regenerative grazing on its grasslands:
Holistic Planned Grazing: Led by Savory Network’s Argentinean hubs, this model promotes rotational grazing to reduce soil erosion and enhance biodiversity.
Colombia
Post-conflict land restoration is a focus in Colombia:
Agroforestry and Silvopastoral Systems: Local cooperatives and NGOs integrate trees, shrubs, and crops to diversify farmer incomes and improve land resilience.
Europe
European Union (EU)
The EU promotes regenerative agriculture through policy frameworks and funding:
Common Agricultural Policy (CAP) 2023-2027: CAP incentivizes eco-schemes supporting soil health and biodiversity.
European Green Deal: This policy aims for carbon-neutral agriculture by reducing chemical inputs and promoting organic farming.
United Kingdom
Since Brexit, the UK has initiated its own policies to encourage RA:
Environmental Land Management Scheme (ELMS): ELMS rewards practices that enhance soil health, biodiversity, and water conservation.
The Soil Association: Through certification and advocacy, this organization supports RA practices like cover cropping, crop rotation, and reduced tillage.
France
France has promoted regenerative practices in its climate goals:
4 per 1000 Initiative: Announced at the 2015 Paris Climate Summit, this initiative aims to increase soil carbon stocks by 0.4% annually through regenerative practices, helping offset emissions.
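A rough sketch of the arithmetic behind the 0.4% target; the global soil carbon stock used here is an assumed round figure (estimates vary widely with the soil depth considered), so the result is indicative only.

```python
# "4 per 1000": a 0.4% annual increase applied to an assumed global
# topsoil carbon stock. The 860 Gt C figure is a rough literature-scale
# assumption; the initiative's own materials use depth-dependent stocks.
soil_stock = 860.0                  # Gt C, assumed
annual_gain_c = soil_stock * 0.004  # Gt C sequestered per year
print(annual_gain_c)                # ≈ 3.4 Gt C/yr
print(annual_gain_c * 44 / 12)      # ≈ 12.6 Gt CO2/yr (CO2/C molar mass ratio)
```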
Africa
Kenya
Kenya has become a leader in regenerative agriculture in East Africa:
World Agroforestry Centre (ICRAF): Programs promote agroforestry, integrating trees with crops and livestock to improve soil fertility.
Startups like ForestFoods and L.E.A.F. Africa are pioneering innovative techniques such as syntropic agroforestry that have proven successful in other regions of the world.
Ethiopia
Ethiopia’s focus is on combating land degradation:
Sustainable Land Management Program: A World Bank-supported initiative that promotes soil restoration and water retention in degraded areas.
South Africa
South Africa combines RA with smallholder and commercial agriculture:
LandCare South Africa: This project focuses on soil conservation and rotational grazing in semi-arid regions to prevent soil erosion.
Asia
India
India’s regenerative agriculture movement is driven by both state and federal support:
Zero Budget Natural Farming (ZBNF): Supported by the Andhra Pradesh government, ZBNF promotes chemical-free farming, natural fertilizers, and soil microbiome health.
National Mission for Sustainable Agriculture (NMSA): This government initiative promotes soil health and water conservation practices on a large scale.
China
China has extensive RA initiatives aimed at desertification and soil health:
Loess Plateau Rehabilitation Project: This project transformed degraded lands with practices such as terracing, agroforestry, and soil improvement.
Green Agricultural Development Fund: This fund promotes regenerative practices in water-scarce regions, with a focus on soil carbon sequestration.
Japan
Japan’s regenerative agriculture aligns with organic and natural farming:
Shizen Nōhō: Rooted in principles from Masanobu Fukuoka, this natural farming method emphasizes minimal soil disturbance and composting.
Local Government Subsidies: Various local governments in Japan subsidize regenerative practices to improve rural economies and support sustainable land use.
Oceania
Australia
Australia’s initiatives focus on soil health and carbon farming:
Carbon Farming Initiative (CFI): Part of Australia’s carbon market, CFI rewards practices such as no-till farming and regenerative grazing.
Savory Institute’s Ecological Outcome Verification: This program incentivizes regenerative grazing by monitoring soil and biodiversity health.
New Zealand
New Zealand’s RA movement emphasizes biodiversity and community engagement:
Regenerative Agriculture Network of New Zealand (RANNZ): This grassroots network supports RA practices and educates the public on sustainable land management.
Government Initiatives on Carbon Neutrality: In line with net-zero carbon goals, New Zealand supports sustainable farming practices to reduce emissions and restore native ecosystems.
Criticism
Some members of the scientific community have criticized some of the claims made by proponents of regenerative agriculture as exaggerated and unsupported by evidence.
One of the prominent proponents of regenerative agriculture, Allan Savory, claimed in his TED talk that holistic grazing could reduce carbon-dioxide levels to pre-industrial levels in a span of 40 years. According to Skeptical Science: "it is not possible to increase productivity, increase numbers of cattle and store carbon using any grazing strategy, never-mind Holistic Management [...] Long term studies on the effect of grazing on soil carbon storage have been done before, and the results are not promising.[...] Because of the complex nature of carbon storage in soils, increasing global temperature, risk of desertification and methane emissions from livestock, it is unlikely that Holistic Management, or any management technique, can reverse climate change."
Commenting on his TED talk "How to Fight Desertification and Reverse Climate Change", Savory has since denied claiming that holistic grazing can reverse climate change, saying that “I have only used the words address climate change… although I have written and talked about reversing man-made desertification”. Savory has faced criticisms for claiming the carbon sequestration potential of holistic grazing is immune from empirical scientific study. For instance, in 2000, Savory said that "the scientific method never discovers anything" and “the scientific method protects us from cranks like me". A 2017 factsheet authored by Savory stated that “Every study of holistic planned grazing that has been done has provided results that are rejected by range scientists because there was no replication!". TABLE Debates sums this up by saying "Savory argues that standardisation, replication, and therefore experimental testing of HPG [Holistic Planned Grazing] as a whole (rather than just the grazing system associated with it) is not possible, and that therefore, it is incapable of study by experimental science", but "he does not explain how HPG can make causal knowledge claims with regards to combating desertification and climate mitigation, without recourse to science demonstrating such connections."
According to a 2016 study published by the Swedish University of Agricultural Sciences, the actual rate at which improved grazing management could contribute to carbon sequestration is seven times lower than the claims made by Savory. The study concludes that holistic management cannot reverse climate change. A study by the Food and Climate Research Network in 2017 concluded that Savory's claims about carbon sequestration are "unrealistic" and very different from those issued by peer-reviewed studies.
Tim Searchinger and Janet Ranganathan have expressed concerns about the emphasis on "Practices That Increase Soil Carbon at the Field Level", because "overestimating potential soil carbon gains could undermine efforts to advance effective climate mitigation in the agriculture sector." Instead, Searchinger and Ranganathan say, "preserving the huge, existing reservoirs of vegetative and soil carbon in the world’s remaining forests and woody savannas by boosting productivity on existing agricultural land (a land sparing strategy) is the largest, potential climate mitigation prize of regenerative and other agricultural practices. Realizing these benefits requires implementing practices in ways that boost productivity and then linking those gains to governance and finance to protect natural ecosystems. In short, produce, protect and prosper are the most important opportunities for agriculture."
| Technology | Agriculture_2 | null |
53735803 | https://en.wikipedia.org/wiki/Vitamin%20B3 | Vitamin B3 |
Vitamin B3, colloquially referred to as niacin, is a vitamin family that includes three forms, or vitamers: niacin (nicotinic acid), nicotinamide (niacinamide), and nicotinamide riboside. All three forms of vitamin B3 are converted within the body to nicotinamide adenine dinucleotide (NAD). NAD is required for human life and people are unable to make it within their bodies without either vitamin B3 or tryptophan. Nicotinamide riboside was identified as a form of vitamin B3 in 2004.
Niacin (the nutrient) can be manufactured by plants and animals from the amino acid tryptophan. Niacin is obtained in the diet from a variety of whole and processed foods, with the highest contents in fortified packaged foods, meat, poultry, and red fish such as tuna and salmon, and lesser amounts in nuts, legumes, and seeds. Niacin as a dietary supplement is used to treat pellagra, a disease caused by niacin deficiency. Signs and symptoms of pellagra include skin and mouth lesions, anemia, headaches, and tiredness. Many countries mandate its addition to wheat flour or other food grains, thereby reducing the risk of pellagra.
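Because tryptophan can be converted to niacin in vivo, dietary intake is commonly expressed in "niacin equivalents" (NE), with roughly 60 mg of tryptophan counted as 1 mg of niacin. A small sketch of that convention, with hypothetical intake values:

```python
# Niacin equivalents (NE): the standard convention counts ~60 mg of
# dietary tryptophan as equivalent to 1 mg of preformed niacin.
def niacin_equivalents(niacin_mg: float, tryptophan_mg: float) -> float:
    return niacin_mg + tryptophan_mg / 60.0

# Hypothetical daily intake: 10 mg niacin plus 300 mg tryptophan.
print(niacin_equivalents(10.0, 300.0))  # 15.0 mg NE
```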
The amide nicotinamide is a component of the coenzymes nicotinamide adenine dinucleotide (NAD) and nicotinamide adenine dinucleotide phosphate (NADP+). Although nicotinic acid and nicotinamide are identical in their vitamin activity, nicotinamide does not have the same pharmacological, lipid-modifying effects or side effects as nicotinic acid, i.e., when nicotinic acid takes on the -amide group, it does not reduce cholesterol nor cause flushing. Nicotinamide is recommended as a treatment for niacin deficiency because it can be administered in remedial amounts without causing the flushing, considered an adverse effect. In the past, the group was loosely referred to as vitamin B3 complex.
Niacin term
The United States Government adopted the terms niacin and niacinamide in 1942 as alternate names for nicotinic acid and nicotinamide, respectively, and encouraged their use in nontechnical contexts to avoid the public’s confusing them with the nearly unrelated (and toxic) nicotine. The terms were incorporated into the United States Adopted Name dictionary that was created in 1961.
The term niacin was then adopted internationally by multiple institutions (WHO/FAO, EFSA, FDA, Anvisa) with a broader meaning that includes all dietary NAD precursors able to prevent signs of deficiency. In other words, the term is used with the same meaning as vitamin B3, including not just nicotinic acid but also nicotinamide and nicotinamide riboside.
Mechanism of action
Nicotinamide adenine dinucleotide (NAD) and its phosphorylated variant nicotinamide adenine dinucleotide phosphate (NADP) are utilized in ADP-ribose transfer reactions involved in DNA repair and calcium mobilization. NAD also plays a critical role in human metabolism, acting as a coenzyme in both glycolysis and the Krebs cycle.
Vitamin deficiency
Severe vitamin B3 deficiency in the diet causes the disease pellagra, characterized by diarrhea, sun-sensitive dermatitis involving hyperpigmentation and thickening of the skin (see image), inflammation of the mouth and tongue, delirium, dementia, and if left untreated, death. Common psychiatric symptoms include irritability, poor concentration, anxiety, fatigue, loss of memory, restlessness, apathy, and depression. The biochemical mechanisms for the observed deficiency-caused neurodegeneration are not well understood, but may rest on A) the requirement for nicotinamide adenine dinucleotide (NAD+) to suppress the creation of neurotoxic tryptophan metabolites; B) inhibition of mitochondrial ATP generation resulting in cell damage; C) activation of the poly (ADP-ribose) polymerase (PARP) pathway, as PARP is a nuclear enzyme involved in DNA repair, but in the absence of NAD+ can lead to cell death; D) reduced synthesis of neuro-protective brain-derived neurotrophic factor or its receptor tropomyosin receptor kinase B; or, E) changes to genome expression directly due to the niacin deficiency.
Niacin deficiency is rarely seen in developed countries, and it is more typically associated with poverty, malnutrition or malnutrition secondary to chronic alcoholism. It also tends to occur in areas where people eat maize (corn) as a staple food, as maize is low in digestible niacin. A cooking technique called nixtamalization, that is, pretreating with alkali ingredients, increases the bioavailability of niacin during maize meal or flour production. For this reason, people who consume corn as tortillas or hominy are at less risk of niacin deficiency.
For treating deficiency, the World Health Organization (WHO) recommends administering nicotinamide instead of nicotinic acid, to avoid the flushing side effect commonly caused by the latter. Guidelines suggest using 300 mg/day for three to four weeks. Dementia and dermatitis show improvement within a week. Because deficiencies of other B-vitamins may be present, the WHO recommends a multi-vitamin in addition to the nicotinamide.
Hartnup disease is a hereditary nutritional disorder resulting in niacin deficiency. It is named after an English family with a genetic disorder that resulted in a failure to absorb the essential amino acid tryptophan, tryptophan being a precursor for niacin synthesis. The symptoms are similar to pellagra, including red, scaly rash and sensitivity to sunlight. Oral nicotinic acid or nicotinamide is given as a treatment for this condition in doses ranging from 50 to 100 mg twice a day, with a good prognosis if identified and treated early. Niacin synthesis is also deficient in carcinoid syndrome, because of metabolic diversion of its precursor tryptophan to form serotonin.
Measuring vitamin status
Plasma concentrations of niacin and niacin metabolites are not useful markers of niacin status. Urinary excretion of the methylated metabolite N1-methyl-nicotinamide is considered a reliable and sensitive measure; it requires a 24-hour urine collection. For adults, a value of less than 5.8 μmol/day represents deficient niacin status, and 5.8 to 17.5 μmol/day represents low status. According to the World Health Organization, an alternative means of expressing urinary N1-methyl-nicotinamide is as mg/g creatinine in a 24-hour urine collection, with deficient defined as <0.5, low as 0.5–1.59, acceptable as 1.6–4.29, and high as >4.3. Niacin deficiency occurs before the signs and symptoms of pellagra appear. Erythrocyte nicotinamide adenine dinucleotide (NAD) concentrations potentially provide another sensitive indicator of niacin depletion, although definitions of deficient, low and adequate have not been established. Lastly, plasma tryptophan decreases on a low-niacin diet because tryptophan converts to niacin. However, low tryptophan could also be caused by a diet low in this essential amino acid, so it is not specific for confirming vitamin status.
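The WHO cut-offs above lend themselves to a simple classification. The sketch below is illustrative only: the function names are hypothetical, the numeric thresholds are the ones quoted above, and treating values above 17.5 μmol/day as adequate is an assumption, since the text does not name that category.

```python
# Illustrative sketch: encodes the urinary N1-methyl-nicotinamide cut-offs
# quoted above. Function names are hypothetical, not an established API.

def status_umol_per_day(excretion: float) -> str:
    """Classify adult niacin status from 24-hour urinary
    N1-methyl-nicotinamide excretion (umol/day)."""
    if excretion < 5.8:
        return "deficient"
    if excretion <= 17.5:
        return "low"
    return "adequate"  # assumption: the text defines no category above 17.5

def status_mg_per_g_creatinine(value: float) -> str:
    """Classify using the WHO's alternative expression,
    mg N1-methyl-nicotinamide per g creatinine (24-hour urine)."""
    if value < 0.5:
        return "deficient"
    if value < 1.6:
        return "low"
    if value < 4.3:
        return "acceptable"
    return "high"

print(status_umol_per_day(4.2))         # deficient
print(status_mg_per_g_creatinine(2.0))  # acceptable
```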
Dietary recommendations
The U.S. Institute of Medicine (renamed the National Academy of Medicine in 2015) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for niacin in 1998, as well as Tolerable Upper Intake Levels (ULs). In lieu of an RDA, Adequate Intakes (AIs) are identified for populations for which there is not enough evidence to identify a dietary intake level sufficient to meet the nutrient requirements of most people.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values (DRVs), with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. For the EU, AIs and ULs have the same definition as in the US, except that units are milligrams per megajoule (MJ) of energy consumed, rather than mg/day. For women (including those pregnant or lactating), men and children, the PRI is 1.6 mg per megajoule. As the conversion is 1 MJ = 239 kcal, an adult consuming 2,390 kilocalories should consume 16 mg of niacin. This is comparable to US RDAs (14 mg/day for adult women, 16 mg/day for adult men).
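As a minimal worked example of the EFSA calculation, the sketch below converts an energy intake in kilocalories to a niacin PRI; the function is a hypothetical illustration, and only the two constants come from the text.

```python
# Hedged sketch of the EFSA energy-based PRI described above.
KCAL_PER_MJ = 239     # conversion quoted in the text
PRI_MG_PER_MJ = 1.6   # PRI for women, men and children

def niacin_pri_mg(daily_kcal: float) -> float:
    """Niacin PRI in mg/day for a given energy intake in kcal/day."""
    return PRI_MG_PER_MJ * (daily_kcal / KCAL_PER_MJ)

print(round(niacin_pri_mg(2390), 1))  # 16.0 mg/day, matching the text
```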
ULs are established by identifying amounts of vitamins and minerals that cause adverse effects and then selecting, as an upper limit, the "maximum daily intake unlikely to cause adverse health effects". Regulatory agencies from different countries do not always agree. For the US, the UL is 30 mg/day of niacin for teenagers and 35 mg/day for adults, with lower values for children. The EFSA UL for adults is set at 10 mg/day for nicotinic acid, to avoid the skin-flush reaction, and at 900 mg/day for nicotinamide, which does not cause flushing.
Both the DRI and DRV describe amounts needed as niacin equivalents (NE), calculated as 1 mg NE = 1 mg niacin or 60 mg of the essential amino acid tryptophan. This is because the amino acid is utilized to synthesize the vitamin.
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For niacin labeling purposes 100% of the Daily Value is 16 mg. Prior to May 27, 2016, it was 20 mg, revised to bring it into agreement with the RDA.
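The niacin-equivalent rule and the labeling Daily Value combine naturally, as in the hypothetical sketch below; the function names and the example food values are illustrative assumptions, and only the constants (1 mg NE per 60 mg tryptophan, 16 mg DV) come from the text.

```python
# Illustrative sketch combining niacin equivalents with the US Daily Value.
TRP_MG_PER_NE = 60    # ~60 mg tryptophan counts as 1 mg niacin
DAILY_VALUE_MG = 16   # 100% DV for niacin on US labels since 2016

def niacin_equivalents_mg(niacin_mg: float, tryptophan_mg: float) -> float:
    """Total niacin equivalents from preformed niacin plus tryptophan."""
    return niacin_mg + tryptophan_mg / TRP_MG_PER_NE

def percent_dv(ne_mg: float) -> float:
    """Express an NE amount as a percent of the labeling Daily Value."""
    return 100 * ne_mg / DAILY_VALUE_MG

ne = niacin_equivalents_mg(niacin_mg=4.0, tryptophan_mg=300.0)  # 9.0 mg NE
print(f"{ne} mg NE = {percent_dv(ne):.0f}% DV")                 # 56% DV
```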
Compliance with the updated labeling regulations was required by January 1, 2020, for manufacturers with US$10 million or more in annual food sales, and by January 1, 2021, for manufacturers with lower volume food sales. A table of the old and new adult daily values is provided at Reference Daily Intake.
Sources
Niacin is found in a variety of whole and processed foods, including fortified packaged foods, meat from various animal sources, seafoods, and spices. In general, animal-sourced foods provide about 5–10 mg niacin per serving, although dairy foods and eggs have little. Some plant-sourced foods such as nuts, legumes and grains provide about 2–5 mg niacin per serving, although in some grain products this naturally present niacin is largely bound to polysaccharides and glycopeptides, making it only about 30% bioavailable. Fortified food ingredients such as wheat flour have niacin added, which is bioavailable.
Vegetarian and vegan diets can provide adequate amounts if products such as nutritional yeast, peanuts, peanut butter, tahini, brown rice, mushrooms, avocado and sunflower seeds are included. Fortified foods and dietary supplements can also be consumed to ensure adequate intake.
Food preparation
Niacin naturally found in food is susceptible to destruction from high heat cooking, especially in the presence of acidic foods and sauces. It is soluble in water, and so may also be lost from foods boiled in water.
Food fortification
Countries fortify foods with nutrients to address known deficiencies. As of 2020, 54 countries required fortification of wheat flour with nicotinic acid or nicotinamide; 14 also mandated fortification of maize flour, and 6 mandated fortification of rice. From country to country, niacin fortification ranges from 1.3 to 6.0 mg/100 g.
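A back-of-envelope sketch of what that fortification range means in practice; the flour intake figure is a made-up illustration, not a recommendation, and only the 1.3–6.0 mg/100 g range comes from the text.

```python
# Hypothetical worked example using the fortification range quoted above.
def added_niacin_mg(flour_g: float, mg_per_100g: float) -> float:
    """Niacin contributed by fortified flour at a given fortification level."""
    return flour_g * mg_per_100g / 100

# e.g. 200 g of fortified flour per day, at both extremes of the range:
print(added_niacin_mg(200, 1.3))  # 2.6 mg/day
print(added_niacin_mg(200, 6.0))  # 12.0 mg/day
```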
As a dietary supplement
In the United States, nicotinic acid is sold as a non-prescription dietary supplement with a range of 100 to 1000 mg per serving. These products often have a Structure/Function health claim allowed by the US Food & Drug Administration (FDA). An example would be "Supports a healthy blood lipid profile." The American Heart Association (AHA) strongly advises against the use of non-prescription dietary supplement nicotinic acid rather than prescription nicotinic acid because of potentially serious side effects. For this reason and because the manufacture of dietary supplement nicotinic acid is not as well-regulated by the FDA as is prescription nicotinic acid, the AHA advises that supplemental nicotinic acid only be used under the supervision of a health care professional. More than 30 mg nicotinic acid consumed as a dietary supplement can cause skin flushing. Face, arms and chest skin turns a reddish color because of vasodilation of small subcutaneous blood vessels, accompanied by sensations of heat, tingling and itching. These signs and symptoms are typically transient, lasting minutes to hours; they are considered unpleasant rather than toxic.
Toxicity
The US Food and Nutrition Board has set a daily limit of 35 mg for vitamin B3, unless under medical supervision. Flushing has been reported at daily doses of nicotinic acid as low as 30 mg, always starting in the face and sometimes accompanied by skin dryness, itching, paresthesia, and headache. (These effects do not occur with nicotinamide.) Liver toxicity is the most serious toxic reaction; it occurs at doses above 2 grams/day and is possible with either nicotinic acid or nicotinamide. Fulminant hepatitis requiring liver transplantation has been reported at doses of 3–9 grams/day. Other reactions include glucose intolerance, hyperuricemia, macular edema, and macular cysts.
History
Corn (maize) became a staple food in the southeast United States and in parts of Europe. A disease that was characterized by dermatitis of sunlight-exposed skin was described in Spain in 1735 by Gaspar Casal. He attributed the cause to poor diet. In northern Italy it was named pellagra from the Lombard language (agra = holly-like or serum-like; pell = skin). In time, the disease was more closely linked specifically to corn. In the US, Joseph Goldberger was assigned to study pellagra by the Surgeon General of the United States. His studies confirmed a corn-based diet as the culprit, but he did not identify the root cause.
Nicotinic acid was extracted from the liver by biochemist Conrad Elvehjem in 1937. He later identified the active ingredient, referring to it as "pellagra-preventing factor" and the "anti-blacktongue factor." It was also referred to as "vitamin PP", "vitamin P-P" and "PP-factor", all derived from the term "pellagra-preventive factor". In the late 1930s, studies by Tom Douglas Spies, Marion Blankenhorn, and Clark Cooper confirmed that niacin cured pellagra in humans. The prevalence of the disease was greatly reduced as a result.
In 1942, when flour enrichment with nicotinic acid began, a headline in the popular press said "Tobacco in Your Bread." In response, the Council on Foods and Nutrition of the American Medical Association approved of the Food and Nutrition Board's new names niacin and niacin amide for use primarily by non-scientists. It was thought appropriate to choose a name to dissociate nicotinic acid from nicotine, to avoid the perception that vitamins or niacin-rich foods contain nicotine, or that cigarettes contain vitamins. The resulting name niacin was derived from nicotinic acid + vitamin.
J. Laguna and K.J. Carpenter found in 1951 that niacin in corn is biologically unavailable and can be released only in very alkaline lime water of pH 11. This explains why Latin-American cultures that used alkali-treated (nixtamalized) cornmeal to make tortillas were not at risk for niacin deficiency.
| Biology and health sciences | Vitamins | Health |
40815891 | https://en.wikipedia.org/wiki/Boom%20%28navigational%20barrier%29 | Boom (navigational barrier) | A boom or a chain (also boom defence, harbour chain, river chain, chain boom, boom chain or variants) is an obstacle strung across a navigable stretch of water to control or block navigation.
In modern times they usually have civil uses, such as to prevent access to a dangerous river channel. But, especially historically, they have been used militarily, with the goal of denying access to an enemy's ships: a modern example is the anti-submarine net.
Booms have also been used to force passing vessels to pay a toll.
Description
A boom generally floats on the surface, while a chain can be on the surface or below the water. A chain could be made to float with rafts, logs, ships or other wood, making the chain a boom as well.
Historical uses
Especially in medieval times, the end of a chain could be attached to a chain tower or boom tower, which was often heavily fortified and allowed the chain to be raised or lowered safely. By raising or lowering a chain or boom, access could be selectively granted rather than simply rendering the stretch of water completely inaccessible. The raising and lowering could be accomplished by a windlass mechanism or a capstan.
Booms or chains could be broken by a sufficiently large or heavy ship, and this occurred on many occasions, including the siege of Damietta, the raid on the Medway and the Battle of Vigo Bay. Frequently, however, attackers instead seized the defences and cut the chain or boom by more conventional methods. The boom at the siege of Derry, for example, was cut by sailors in a longboat.
As a key portion of defences, booms were usually heavily defended. This involved shore-based chain towers, artillery batteries, or forts. In the Age of Sail, a boom protecting a harbour could have several ships defending it with their broadsides, discouraging assaults on the boom. On some occasions, multiple booms spanned a single stretch of water.
Gallery
Examples
Historical
The entrance to the Cothon at Carthage was protected by a chain.
The chain at Fort Blockhouse, protecting Portsmouth Harbour from 1431 to 1539.
The Leonine Wall included a chain blocking the Tiber.
A chain spanned the Golden Horn.
A chain and boom blocked the River Medway during the Raid on the Medway.
The Hudson River Chain.
The chain blocking the Parana River during the Battle of Vuelta de Obligado.
A chain was placed from Columbus, Kentucky across the Mississippi River to Missouri in order to block Union ships during the American Civil War.
A chain ran between the Castle of La Palma in Mugardos and the Castle of San Felipe, in the ria of Ferrol, to defend the city and naval base.
| Technology | Naval warfare | null |
40817590 | https://en.wikipedia.org/wiki/Ebola | Ebola | Ebola, also known as Ebola virus disease (EVD) and Ebola hemorrhagic fever (EHF), is a viral hemorrhagic fever in humans and other primates, caused by ebolaviruses. Symptoms typically start anywhere between two days and three weeks after infection. The first symptoms are usually fever, sore throat, muscle pain, and headaches. These are usually followed by vomiting, diarrhoea, rash and decreased liver and kidney function, at which point some people begin to bleed both internally and externally. It kills between 25% and 90% of those infected – about 50% on average. Death is often due to shock from fluid loss, and typically occurs between six and 16 days after the first symptoms appear. Early treatment of symptoms increases the survival rate considerably compared to late start. An Ebola vaccine was approved by the US FDA in December 2019.
The virus spreads through direct contact with body fluids, such as blood from infected humans or other animals, or from contact with items that have recently been contaminated with infected body fluids. There have been no documented cases, either in nature or under laboratory conditions, of spread through the air between humans or other primates. After recovering from Ebola, semen or breast milk may continue to carry the virus for anywhere between several weeks to several months. Fruit bats are believed to be the normal carrier in nature; they are able to spread the virus without being affected by it. The symptoms of Ebola may resemble those of several other diseases, including malaria, cholera, typhoid fever, meningitis and other viral hemorrhagic fevers. Diagnosis is confirmed by testing blood samples for the presence of viral RNA, viral antibodies or the virus itself.
Control of outbreaks requires coordinated medical services and community engagement, including rapid detection, contact tracing of those exposed, quick access to laboratory services, care for those infected, and proper disposal of the dead through cremation or burial. Prevention measures involve wearing proper protective clothing and washing hands when in close proximity to patients and while handling potentially infected bushmeat, as well as thoroughly cooking bushmeat. While no treatment was approved for Ebola as of 2019, two treatments (atoltivimab/maftivimab/odesivimab and ansuvimab) are associated with improved outcomes. Supportive efforts also improve outcomes. These include oral rehydration therapy (drinking slightly sweetened and salty water) or giving intravenous fluids, and treating symptoms. In October 2020, atoltivimab/maftivimab/odesivimab (Inmazeb) was approved for medical use in the United States to treat the disease caused by Zaire ebolavirus.
History and name
Ebola was first identified in 1976, in two simultaneous outbreaks, one in Nzara (a town in South Sudan) and the other in Yambuku, a village in the Democratic Republic of the Congo near the Ebola River, after which the disease is named. Ebola outbreaks occur intermittently in tropical regions of sub-Saharan Africa. Between 1976 and 2012, according to the World Health Organization, there were 24 outbreaks of Ebola resulting in a total of 2,387 cases and 1,590 deaths. The largest Ebola outbreak to date was an epidemic in West Africa from December 2013 to January 2016, with more than 28,000 cases and more than 11,000 deaths. On 29 March 2016, it was declared to no longer be an emergency. Further outbreaks began in the Democratic Republic of the Congo in May 2017 and in 2018. In July 2019, the World Health Organization declared the Congo Ebola outbreak a world health emergency.
Signs and symptoms
Onset
The length of time between exposure to the virus and the development of symptoms (incubation period) is between 2 and 21 days, and usually between 4 and 10 days. However, recent estimates based on mathematical models predict that around 5% of cases may take longer than 21 days to develop.
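The source does not specify the mathematical models it alludes to. The sketch below is one hedged illustration of the idea: it assumes a gamma-distributed incubation period (a common but here assumed choice, with an assumed shape parameter), calibrates it so that about 5% of cases exceed 21 days, and reports the incubation times such a model implies.

```python
# Hedged sketch: calibrate an assumed gamma incubation-period model so that
# P(incubation > 21 days) = 0.05, then inspect the implied typical values.
from scipy.optimize import brentq
from scipy.stats import gamma

SHAPE = 2.0  # assumed shape parameter, not from the source

# Solve for the scale that puts 5% of the distribution beyond 21 days.
scale = brentq(lambda s: gamma(SHAPE, scale=s).sf(21) - 0.05, 0.5, 20.0)
dist = gamma(SHAPE, scale=scale)

print(f"mean incubation ~ {dist.mean():.1f} days")
print(f"middle 50% of cases: {dist.ppf(0.25):.1f} to {dist.ppf(0.75):.1f} days")
```

Under these assumptions, the implied central range is broadly consistent with the 4-to-10-day figure given above.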
Symptoms usually begin with a sudden influenza-like stage characterised by fatigue, fever, weakness, decreased appetite, muscular pain, joint pain, headache, and sore throat. The fever is usually higher than 38.3 °C (101 °F). This is often followed by nausea, vomiting, diarrhoea, abdominal pain, and sometimes hiccups. The combination of severe vomiting and diarrhoea often leads to severe dehydration. Next, shortness of breath and chest pain may occur, along with swelling, headaches, and confusion. In about half of the cases, the skin may develop a maculopapular rash, a flat red area covered with small bumps, five to seven days after symptoms begin.
Bleeding
In some cases, internal and external bleeding may occur. This typically begins five to seven days after the first symptoms. All infected people show some decreased blood clotting. Bleeding from mucous membranes or from sites of needle punctures has been reported in 40–50% of cases. This may cause vomiting blood, coughing up of blood, or blood in stool. Bleeding into the skin may create petechiae, purpura, ecchymoses or haematomas (especially around needle injection sites). Bleeding into the whites of the eyes may also occur. Heavy bleeding is uncommon; if it occurs, it is usually in the gastrointestinal tract. The incidence of bleeding into the gastrointestinal tract was reported to be ~58% in the 2001 outbreak in Gabon, but in the 2014–15 outbreak in the US it was ~18%, possibly due to improved prevention of disseminated intravascular coagulation.
Recovery or death
Recovery may begin between seven and 14 days after first symptoms. Death, if it occurs, follows typically six to sixteen days from first symptoms and is often due to shock from fluid loss. In general, bleeding often indicates a worse outcome, and blood loss may result in death. People are often in a coma near the end of life.
Those who survive often have ongoing muscular and joint pain, liver inflammation, and decreased hearing, and may have continued tiredness, continued weakness, decreased appetite, and difficulty returning to pre-illness weight. Problems with vision may develop. It is recommended that survivors of EVD wear condoms for at least twelve months after initial infection or until the semen of a male survivor tests negative for Ebola virus on two separate occasions.
Survivors develop antibodies against Ebola that last at least 10 years, but it is unclear whether they are immune to additional infections.
Cause
EVD in humans is caused by four of six viruses of the genus Ebolavirus. The four are Bundibugyo virus (BDBV), Sudan virus (SUDV), Taï Forest virus (TAFV) and one simply called Ebola virus (EBOV, formerly Zaire Ebola virus). EBOV, species Zaire ebolavirus, is the most dangerous of the known EVD-causing viruses, and is responsible for the largest number of outbreaks. The fifth and sixth viruses, Reston virus (RESTV) and Bombali virus (BOMV), are not thought to cause disease in humans, but have caused disease in other primates. All six viruses are closely related to marburgviruses.
Virology
Ebolaviruses contain single-stranded, non-infectious RNA genomes. Ebolavirus genomes contain seven genes, arranged in the order 3'-UTR-NP-VP35-VP40-GP-VP30-VP24-L-5'-UTR. The genomes of the five different ebolaviruses (BDBV, EBOV, RESTV, SUDV and TAFV) differ in sequence and in the number and location of gene overlaps. As with all filoviruses, ebolavirus virions are filamentous particles that may appear in the shape of a shepherd's crook, of a "U" or of a "6," and they may be coiled, toroid or branched. In general, ebolavirions are 80 nanometers (nm) in width and may be as long as 14,000 nm.
Their life cycle is thought to begin with a virion attaching to specific cell-surface receptors such as C-type lectins, DC-SIGN, or integrins, which is followed by fusion of the viral envelope with cellular membranes. The virions taken up by the cell then travel to acidic endosomes and lysosomes where the viral envelope glycoprotein GP is cleaved. This processing appears to allow the virus to bind to cellular proteins enabling it to fuse with internal cellular membranes and release the viral nucleocapsid. The Ebolavirus structural glycoprotein (known as GP1,2) is responsible for the virus' ability to bind to and infect targeted cells. The viral RNA polymerase, encoded by the L gene, partially uncoats the nucleocapsid and transcribes the genes into positive-strand mRNAs, which are then translated into structural and nonstructural proteins. The most abundant protein produced is the nucleoprotein, whose concentration in the host cell determines when L switches from gene transcription to genome replication. Replication of the viral genome results in full-length, positive-strand antigenomes that are, in turn, transcribed into genome copies of negative-strand virus progeny. Newly synthesised structural proteins and genomes self-assemble and accumulate near the inside of the cell membrane. Virions bud off from the cell, gaining their envelopes from the cellular membrane from which they bud. The mature progeny particles then infect other cells to repeat the cycle. The genetics of the Ebola virus are difficult to study because of EBOV's virulent characteristics.
Transmission
It is believed that between people, Ebola disease spreads only by direct contact with the blood or other body fluids of a person who has developed symptoms of the disease. Body fluids that may contain Ebola viruses include saliva, mucus, vomit, feces, sweat, tears, breast milk, urine and semen. The WHO states that only people who are very sick are able to spread Ebola disease in saliva, and the virus has not been reported to be transmitted through sweat. Most people spread the virus through blood, feces and vomit. Entry points for the virus include the nose, mouth, eyes, open wounds, cuts and abrasions. Ebola may be spread through large droplets; however, this is believed to occur only when a person is very sick. This contamination can happen if a person is splashed with droplets. Contact with surfaces or objects contaminated by the virus, particularly needles and syringes, may also transmit the infection. The virus is able to survive on objects for a few hours in a dried state, and can survive for a few days within body fluids outside of a person.
The Ebola virus may be able to persist for more than three months in the semen after recovery, which could lead to infections via sexual intercourse. Virus persistence in semen for over a year has been recorded in a national screening programme. Ebola may also occur in the breast milk of women after recovery, and it is not known when it is safe to breastfeed again. The virus was also found in the eye of one patient, in 2014, two months after it was cleared from his blood. Otherwise, people who have recovered are not infectious.
The potential for widespread infections in countries with medical systems capable of observing correct medical isolation procedures is considered low. Usually when someone has symptoms of the disease, they are unable to travel without assistance.
Dead bodies remain infectious; thus, people handling human remains in practices such as traditional burial rituals or more modern processes such as embalming are at risk. Of the cases of Ebola infections in Guinea during the 2014 outbreak, 69% are believed to have been contracted via unprotected (or unsuitably protected) contact with infected corpses during certain Guinean burial rituals.
Health-care workers treating people with Ebola are at greatest risk of infection. The risk increases when they do not have appropriate protective clothing such as masks, gowns, gloves and eye protection; do not wear it properly; or handle contaminated clothing incorrectly. This risk is particularly common in parts of Africa where the disease mostly occurs and health systems function poorly. There has been transmission in hospitals in some African countries that reuse hypodermic needles. Some health-care centres caring for people with the disease do not have running water. In the United States the spread to two medical workers treating infected patients prompted criticism of inadequate training and procedures.
Human-to-human transmission of EBOV through the air has not been reported to occur during EVD outbreaks, and airborne transmission has only been demonstrated in very strict laboratory conditions, and then only from pigs to primates, but not from primates to primates. Spread of EBOV by water, or food other than bushmeat, has not been observed. No spread by mosquitos or other insects has been reported. Other possible methods of transmission are being studied.
Airborne transmission among humans is theoretically possible due to the presence of Ebola virus particles in saliva, which can be discharged into the air with a cough or sneeze, but observational data from previous epidemics suggests the actual risk of airborne transmission is low. A number of studies examining airborne transmission broadly concluded that transmission from pigs to primates could happen without direct contact because, unlike humans and primates, pigs with EVD get very high ebolavirus concentrations in their lungs, and not their bloodstream. Therefore, pigs with EVD can spread the disease through droplets in the air or on the ground when they sneeze or cough. By contrast, humans and other primates accumulate the virus throughout their body and specifically in their blood, but not very much in their lungs. It is believed that this is the reason researchers have observed pig to primate transmission without physical contact, but no evidence has been found of primates being infected without actual contact, even in experiments where infected and uninfected primates shared the same air.
Initial case
Although it is not entirely clear how Ebola initially spreads from animals to humans, the spread is believed to involve direct contact with an infected wild animal or fruit bat. Besides bats, other wild animals that are sometimes infected with EBOV include several species of monkeys such as baboons, great apes (chimpanzees and gorillas), and duikers (a species of antelope).
Animals may become infected when they eat fruit partially eaten by bats carrying the virus. Fruit production, animal behavior and other factors may trigger outbreaks among animal populations.
Evidence indicates that both domestic dogs and pigs can also be infected with EBOV. Dogs do not appear to develop symptoms when they carry the virus, and pigs appear to be able to transmit the virus to at least some primates. Although some dogs in an area in which a human outbreak occurred had antibodies to EBOV, it is unclear whether they played a role in spreading the disease to people.
Reservoir
The natural reservoir for Ebola has yet to be confirmed; however, bats are considered to be the most likely candidate. Three types of fruit bats (Hypsignathus monstrosus, Epomops franqueti and Myonycteris torquata) were found to possibly carry the virus without getting sick. Whether other animals are involved in its spread is not known. Plants, arthropods, rodents, and birds have also been considered possible viral reservoirs.
Bats were known to roost in the cotton factory in which the first cases of the 1976 and 1979 outbreaks were observed, and they have also been implicated in Marburg virus infections in 1975 and 1980. Of 24 plant and 19 vertebrate species experimentally inoculated with EBOV, only bats became infected. The bats displayed no clinical signs of disease, which is considered evidence that these bats are a reservoir species of EBOV. In a 2002–2003 survey of 1,030 animals including 679 bats from Gabon and the Republic of the Congo, immunoglobulin G (IgG) immune defense molecules indicative of Ebola infection were found in three bat species; at various periods of study, between 2.2 and 22.6% of bats were found to contain both RNA sequences and IgG molecules indicating Ebola infection. Antibodies against Zaire and Reston viruses have been found in fruit bats in Bangladesh, suggesting that these bats are also potential hosts of the virus and that the filoviruses are present in Asia.
Between 1976 and 1998, in 30,000 mammals, birds, reptiles, amphibians and arthropods sampled from regions of EBOV outbreaks, no Ebola virus was detected apart from some genetic traces found in six rodents (belonging to the species Mus setulosus and Praomys) and one shrew (Sylvisorex ollula) collected from the Central African Republic. However, further research efforts have not confirmed rodents as a reservoir. Traces of EBOV were detected in the carcasses of gorillas and chimpanzees during outbreaks in 2001 and 2003, which later became the source of human infections. However, the high rates of death in these species resulting from EBOV infection make it unlikely that these species represent a natural reservoir for the virus.
Deforestation has been mentioned as a possible contributor to recent outbreaks, including the West African Ebola virus epidemic. Index cases of EVD have often been close to recently deforested lands.
Pathophysiology
Like other filoviruses, EBOV replicates very efficiently in many cells, producing large amounts of virus in monocytes, macrophages, dendritic cells and other cells including liver cells, fibroblasts, and adrenal gland cells. Viral replication triggers high levels of inflammatory chemical signals and leads to a septic state.
EBOV is thought to infect humans through contact with mucous membranes or skin breaks. After infection, endothelial cells (cells lining the inside of blood vessels), liver cells, and several types of immune cells such as macrophages, monocytes, and dendritic cells are the main targets of attack. Following infection, immune cells carry the virus to nearby lymph nodes where further reproduction of the virus takes place. From there the virus can enter the bloodstream and lymphatic system and spread throughout the body. Macrophages are the first cells infected with the virus, and this infection results in programmed cell death. Other types of white blood cells, such as lymphocytes, also undergo programmed cell death leading to an abnormally low concentration of lymphocytes in the blood. This contributes to the weakened immune response seen in those infected with EBOV.
Endothelial cells may be infected within three days after exposure to the virus. The breakdown of endothelial cells leading to blood vessel injury can be attributed to EBOV glycoproteins. This damage occurs due to the synthesis of Ebola virus glycoprotein (GP), which reduces the availability of specific integrins responsible for cell adhesion to the intercellular structure and causes liver damage, leading to improper clotting. The widespread bleeding that occurs in affected people causes swelling and shock due to loss of blood volume. The dysfunctional bleeding and clotting commonly seen in EVD has been attributed to increased activation of the extrinsic pathway of the coagulation cascade due to excessive tissue factor production by macrophages and monocytes.
After infection, a secreted glycoprotein, small soluble glycoprotein (sGP or GP) is synthesised. EBOV replication overwhelms protein synthesis of infected cells and the host immune defences. The GP forms a trimeric complex, which tethers the virus to the endothelial cells. The sGP forms a dimeric protein that interferes with the signalling of neutrophils, another type of white blood cell. This enables the virus to evade the immune system by inhibiting early steps of neutrophil activation. Furthermore, the virus is capable of hijacking cellular metabolism. Studies have shown that Ebola virus-like particles can reprogram metabolism in both vascular and immune cells.
Immune system evasion
Filoviral infection also interferes with proper functioning of the innate immune system. EBOV proteins blunt the human immune system's response to viral infections by interfering with the cells' ability to produce and respond to interferon proteins such as interferon-alpha, interferon-beta, and interferon gamma.
The VP24 and VP35 structural proteins of EBOV play a key role in this interference. When a cell is infected with EBOV, receptors located in the cell's cytosol (such as RIG-I and MDA5) or outside of the cytosol (such as Toll-like receptor 3 (TLR3), TLR7, TLR8 and TLR9) recognise infectious molecules associated with the virus. On TLR activation, proteins including interferon regulatory factor 3 and interferon regulatory factor 7 trigger a signalling cascade that leads to the expression of type 1 interferons. The type 1 interferons are then released and bind to the IFNAR1 and IFNAR2 receptors expressed on the surface of a neighbouring cell. Once interferon has bound to its receptors on the neighbouring cell, the signalling proteins STAT1 and STAT2 are activated and move to the cell's nucleus. This triggers the expression of interferon-stimulated genes, which code for proteins with antiviral properties. EBOV's VP24 protein blocks the production of these antiviral proteins by preventing the STAT1 signalling protein in the neighbouring cell from entering the nucleus. The VP35 protein directly inhibits the production of interferon-beta. By inhibiting these immune responses, EBOV may quickly spread throughout the body.
Diagnosis
When EVD is suspected, travel, work history, and exposure to wildlife are important factors with respect to further diagnostic efforts.
Laboratory testing
Possible non-specific laboratory indicators of EVD include a low platelet count; an initially decreased white blood cell count followed by an increased white blood cell count; elevated levels of the liver enzymes alanine aminotransferase (ALT) and aspartate aminotransferase (AST); and abnormalities in blood clotting often consistent with disseminated intravascular coagulation (DIC) such as a prolonged prothrombin time, partial thromboplastin time, and bleeding time. Filovirions such as EBOV may be identified by their unique filamentous shapes in cell cultures examined with electron microscopy.
The specific diagnosis of EVD is confirmed by isolating the virus, detecting its RNA or proteins, or detecting antibodies against the virus in a person's blood. Isolating the virus by cell culture, detecting the viral RNA by polymerase chain reaction (PCR) and detecting proteins by enzyme-linked immunosorbent assay (ELISA) are methods best used in the early stages of the disease and also for detecting the virus in human remains. Detecting antibodies against the virus is most reliable in the later stages of the disease and in those who recover. IgM antibodies are detectable two days after symptom onset and IgG antibodies can be detected six to 18 days after symptom onset. During an outbreak, isolation of the virus with cell culture methods is often not feasible. In field or mobile hospitals, the most common and sensitive diagnostic methods are real-time PCR and ELISA. In 2014, with new mobile testing facilities deployed in parts of Liberia, test results were obtained 3–5 hours after sample submission. In 2015, a rapid antigen test which gives results in 15 minutes was approved for use by WHO. It is able to confirm Ebola in 92% of those affected and rule it out in 85% of those not affected.
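To see what the quoted rapid-test figures imply in practice, the hypothetical sketch below treats 92% as sensitivity and 85% as specificity and applies Bayes' rule for a few assumed, illustrative prevalences among people being tested; none of the prevalence values come from the source.

```python
# Worked example: predictive value of the rapid antigen test figures above.
def positive_predictive_value(sens: float, spec: float, prev: float) -> float:
    """Probability that a positive test reflects true infection."""
    true_pos = sens * prev
    false_pos = (1 - spec) * (1 - prev)
    return true_pos / (true_pos + false_pos)

for prev in (0.01, 0.10, 0.50):
    ppv = positive_predictive_value(0.92, 0.85, prev)
    print(f"assumed prevalence {prev:.0%}: PPV {ppv:.0%}")
```

At low prevalence most positives are false alarms, while in an outbreak setting, where prevalence among those tested is high, a positive result is far more informative.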
Differential diagnosis
Early symptoms of EVD may be similar to those of other diseases common in Africa, including malaria and dengue fever. The symptoms are also similar to those of other viral haemorrhagic fevers such as Marburg virus disease, Crimean–Congo haemorrhagic fever, and Lassa fever.
The complete differential diagnosis is extensive and requires consideration of many other infectious diseases such as typhoid fever, shigellosis, rickettsial diseases, cholera, sepsis, borreliosis, EHEC enteritis, leptospirosis, scrub typhus, plague, Q fever, candidiasis, histoplasmosis, trypanosomiasis, visceral leishmaniasis, measles, and viral hepatitis among others.
Non-infectious diseases that may result in symptoms similar to those of EVD include acute promyelocytic leukaemia, haemolytic uraemic syndrome, snake envenomation, clotting factor deficiencies/platelet disorders, thrombotic thrombocytopenic purpura, hereditary haemorrhagic telangiectasia, Kawasaki disease, and warfarin poisoning.
Prevention
Vaccines
An Ebola vaccine, rVSV-ZEBOV, was approved in the United States in December 2019. It appears to be fully effective ten days after being given. It was studied in Guinea between 2014 and 2016. More than 100,000 people have been vaccinated against Ebola. The WHO reported that approximately 345,000 people were given the vaccine during the Kivu Ebola epidemic from 2018 to 2020.
Infection control
Community awareness of the survival benefit of admitting cases early is important both for those infected and for infection control.
Caregivers
People who care for those infected with Ebola should wear protective clothing including masks, gloves, gowns and goggles. The U.S. Centers for Disease Control (CDC) recommend that the protective gear leaves no skin exposed. These measures are also recommended for those who may handle objects contaminated by an infected person's body fluids. In 2014, the CDC began recommending that medical personnel receive training on the proper suit-up and removal of personal protective equipment (PPE); in addition, a designated person, appropriately trained in biosafety, should be watching each step of these procedures to ensure they are done correctly. In Sierra Leone, the typical training period for the use of such safety equipment lasts approximately 12 days.
In 2022 in Uganda, lighter personal protective equipment became available, along with treatment tents whose windows allow staff to monitor and communicate with patients, entering only when necessary, for example if a patient's oxygen levels drop.
Patients and household members
The infected person should be in barrier-isolation from other people. All equipment, medical waste, patient waste and surfaces that may have come into contact with body fluids need to be disinfected. During the 2014 outbreak, kits were put together to help families treat Ebola disease in their homes, which included protective clothing as well as chlorine powder and other cleaning supplies. Education of caregivers in these techniques, and providing such barrier-separation supplies has been a priority of Doctors Without Borders.
Disinfection
Ebolaviruses can be eliminated with heat (heating for 30 to 60 minutes at 60 °C or boiling for five minutes). To disinfect surfaces, some lipid solvents such as some alcohol-based products, detergents, sodium hypochlorite (bleach) or calcium hypochlorite (bleaching powder), and other suitable disinfectants may be used at appropriate concentrations.
General population
Education of the general public about the risk factors for Ebola infection and of the protective measures individuals may take to prevent infection is recommended by the World Health Organization. These measures include avoiding direct contact with infected people and regular hand washing using soap and water.
Bushmeat
Bushmeat, an important source of protein in the diet of some Africans, should be handled and prepared with appropriate protective clothing and thoroughly cooked before consumption. Some research suggests that an outbreak of Ebola disease in the wild animals used for consumption may result in a corresponding human outbreak. Since 2003, such animal outbreaks have been monitored to predict and prevent Ebola outbreaks in humans.
Corpses, burial
If a person with Ebola disease dies, direct contact with the body should be avoided. Certain burial rituals, which may have included making various direct contacts with a dead body, require reformulation so that they consistently maintain a proper protective barrier between the dead body and the living. Social anthropologists may help find alternatives to traditional rules for burials.
Transport, travel, contact
Transportation crews are instructed to follow a certain isolation procedure should anyone exhibit symptoms resembling EVD. The WHO does not consider travel bans to be useful in decreasing spread of the disease. In October 2014, the CDC defined four risk levels used to determine the level of 21-day monitoring for symptoms and restrictions on public activities. In the United States, the CDC recommends that restrictions on public activity, including travel restrictions, are not required for the following defined risk levels:
having been in a country with widespread Ebola disease transmission and having no known exposure (low risk); or having been in that country more than 21 days ago (no risk)
an encounter with a person showing symptoms, but without coming within three feet of the person with Ebola while not wearing PPE, and with no direct contact with body fluids
having had brief skin contact with a person showing symptoms of Ebola disease when the person was believed to be not very contagious (low risk)
in countries without widespread Ebola disease transmission: direct contact with a person showing symptoms of the disease while wearing PPE (low risk)
contact with a person with Ebola disease before the person was showing symptoms (no risk).
The CDC recommends monitoring for the symptoms of Ebola disease for those both at "low risk" and at higher risk.
Laboratory
In laboratories where diagnostic testing is carried out, biosafety level 4-equivalent containment is required. Laboratory researchers must be properly trained in BSL-4 practices and wear proper PPE.
Isolation
Isolation refers to separating those who are sick from those who are not. Quarantine refers to separating those who may have been exposed to a disease until they either show signs of the disease or are no longer at risk. Quarantine, also known as enforced isolation, is usually effective in decreasing spread. Governments often quarantine areas where the disease is occurring or individuals who may transmit the disease outside of an initial area. In the United States, the law allows quarantine of those infected with ebolaviruses.
Contact tracing
Contact tracing is considered important to contain an outbreak. It involves finding everyone who had close contact with infected individuals and monitoring them for signs of illness for 21 days. If any of these contacts comes down with the disease, they should be isolated, tested and treated. Then the process is repeated, tracing the contacts' contacts.
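The tracing loop described above is essentially a breadth-first traversal of the contact network. The sketch below is a minimal, hypothetical illustration: the data structures and the becomes_ill predicate (standing in for 21 days of symptom monitoring) are assumptions, with only the 21-day window taken from the text.

```python
# Minimal sketch of iterative contact tracing as described above.
from collections import deque

MONITORING_DAYS = 21  # monitoring window from the text

def trace(index_cases, contacts_of, becomes_ill):
    """contacts_of maps a person to their close contacts; becomes_ill is a
    stand-in for observing symptoms during the monitoring window."""
    to_trace = deque(index_cases)
    monitored, isolated = set(), set()
    while to_trace:
        case = to_trace.popleft()
        for contact in contacts_of.get(case, []):
            if contact in monitored:
                continue
            monitored.add(contact)        # monitor for MONITORING_DAYS
            if becomes_ill(contact):      # falls ill within the window
                isolated.add(contact)     # isolate, test and treat...
                to_trace.append(contact)  # ...then trace their contacts
    return monitored, isolated

contacts = {"A": ["B", "C"], "B": ["D"], "D": ["E"]}
ill = {"B", "D"}
print(trace(["A"], contacts, lambda person: person in ill))
# ({'B', 'C', 'D', 'E'}, {'B', 'D'}) up to set ordering
```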
Management
Two treatments (atoltivimab/maftivimab/odesivimab and ansuvimab) are associated with improved outcomes. The U.S. Food and Drug Administration (FDA) advises people to be careful of advertisements making unverified or fraudulent claims of benefits supposedly gained from various anti-Ebola products.
In October 2020, the U.S. Food and Drug Administration (FDA) approved atoltivimab/maftivimab/odesivimab with an indication for the treatment of infection caused by Zaire ebolavirus.
Standard support
Treatment is primarily supportive in nature. Early supportive care with rehydration and symptomatic treatment improves survival. Rehydration may be via the oral or intravenous route. These measures may include pain management, and treatment for nausea, fever, and anxiety. The World Health Organization (WHO) recommends avoiding aspirin or ibuprofen for pain management, due to the risk of bleeding associated with these medications.
Blood products such as packed red blood cells, platelets, or fresh frozen plasma may also be used. Other regulators of coagulation have also been tried including heparin in an effort to prevent disseminated intravascular coagulation and clotting factors to decrease bleeding. Antimalarial medications and antibiotics are often used before the diagnosis is confirmed, though there is no evidence to suggest such treatment helps. Several experimental treatments are being studied.
Where hospital care is not possible, the WHO's guidelines for home care have been relatively successful. Recommendations include using towels soaked in a bleach solution when moving infected people or bodies and also applying bleach on stains. It is also recommended that the caregivers wash hands with bleach solutions and cover their mouth and nose with a cloth.
Intensive care
Intensive care is often used in the developed world. This may include maintaining blood volume and electrolytes (salts) balance as well as treating any bacterial infections that may develop. Dialysis may be needed for kidney failure, and extracorporeal membrane oxygenation may be used for lung dysfunction.
Prognosis
EVD has a risk of death in those infected of between 25% and 90%; the average risk of death among those infected is about 50%. The highest risk of death was 90% in the 2002–2003 Republic of the Congo outbreak.
Early admission significantly increases survival rates.
Death, if it occurs, follows typically six to sixteen days after symptoms appear and is often due to low blood pressure from fluid loss. Early supportive care to prevent dehydration may reduce the risk of death.
Post-Ebola virus syndrome
If an infected person survives, recovery may be quick and complete. However, a large portion of survivors develop post-Ebola virus syndrome after the acute phase of the infection.
Prolonged cases are often complicated by the occurrence of long-term problems, such as inflammation of the testicles, joint pains, fatigue, hearing loss, mood and sleep disturbances, muscular pain, abdominal pain, menstrual abnormalities, miscarriages, skin peeling, or hair loss. Inflammation and swelling of the uveal layer of the eye is the most common eye complication in survivors of Ebola virus disease. Eye symptoms, such as light sensitivity, excess tearing, and vision loss have been described.
Ebola can stay in some body parts like the eyes, breasts, and testicles after infection. Sexual transmission after recovery has been suspected; if it occurs, it is believed to be a rare event. One case of a condition similar to meningitis has been reported many months after recovery.
Epidemiology
The disease typically occurs in outbreaks in tropical regions of Sub-Saharan Africa. From 1976 (when it was first identified) through 2013, the WHO reported 2,387 confirmed cases with 1,590 overall fatalities. The largest outbreak to date was the Ebola virus epidemic in West Africa, which caused a large number of deaths in Guinea, Sierra Leone, and Liberia.
1976
Sudan
The first known outbreak of EVD was identified only after the fact. It occurred between June and November 1976, in Nzara, South Sudan (then part of Sudan), and was caused by Sudan virus (SUDV). The Sudan outbreak infected 284 people and killed 151. The first identifiable case in Sudan occurred on 27 June in a storekeeper in a cotton factory in Nzara, who was hospitalised on 30 June and died on 6 July. Although the WHO medical staff involved in the Sudan outbreak knew that they were dealing with a heretofore unknown disease, the actual "positive identification" process and the naming of the virus did not occur until some months later in Zaire.
Zaire
On 26 August 1976, the second outbreak of EVD began in Yambuku, a small rural village in Mongala District in northern Zaire (now known as the Democratic Republic of the Congo). This outbreak was caused by EBOV, formerly designated Zaire ebolavirus, a different member of the genus Ebolavirus than in the first Sudan outbreak. The first person infected with the disease was the village school's headmaster Mabalo Lokela, who began displaying symptoms on 26 August 1976. Lokela had returned from a trip to Northern Zaire near the border of the Central African Republic, after visiting the Ebola River between 12 and 22 August. He was originally believed to have malaria and was given quinine. However, his symptoms continued to worsen, and he was admitted to Yambuku Mission Hospital on 5 September. Lokela died on 8 September, 14 days after he began displaying symptoms.
Soon after Lokela's death, others who had been in contact with him also died, and people in Yambuku began to panic. The country's Minister of Health and Zaire President Mobutu Sese Seko declared the entire region, including Yambuku and the country's capital, Kinshasa, a quarantine zone. No-one was permitted to enter or leave the area, and roads, waterways, and airfields were placed under martial law. Schools, businesses and social organisations were closed. The initial response was led by Congolese doctors, including Jean-Jacques Muyembe-Tamfum, one of the discoverers of Ebola. Muyembe took a blood sample from a Belgian nun; this sample would eventually be used by Peter Piot to identify the previously unknown Ebola virus. Muyembe was also the first scientist to come into direct contact with the disease and survive. Researchers from the Centers for Disease Control and Prevention (CDC), including Piot, co-discoverer of Ebola, later arrived to assess the effects of the outbreak, observing that "the whole region was in panic."
Piot concluded that Belgian nuns had inadvertently started the epidemic by giving unnecessary vitamin injections to pregnant women without sterilizing the syringes and needles. The outbreak lasted 26 days and the quarantine lasted two weeks. Researchers speculated that the disease disappeared due to the precautions taken by locals, the quarantine of the area, and discontinuing of the injections.
During this outbreak, Ngoy Mushola recorded the first clinical description of EVD in Yambuku, where he wrote the following in his daily log: "The illness is characterised with a high temperature of about , haematemesis, diarrhoea with blood, retrosternal abdominal pain, prostration with 'heavy' articulations, and rapid evolution death after a mean of three days."
The virus responsible for the initial outbreak, first thought to be the Marburg virus, was later identified as a new type of virus related to the genus Marburgvirus. Virus strain samples isolated from both outbreaks were named "Ebola virus" after the Ebola River, near the first-identified viral outbreak site in Zaire. Reports conflict about who initially coined the name: either Karl Johnson of the American CDC team or Belgian researchers. Subsequently, a number of other cases were reported, almost all centred on the Yambuku mission hospital or close contacts of another case. In all, 318 cases and 280 deaths (an 88% fatality rate) occurred in Zaire. Although the two outbreaks were at first believed connected, scientists later realised that they were caused by two distinct ebolaviruses, SUDV and EBOV.
1995–2014
The second major outbreak occurred in Zaire (now the Democratic Republic of the Congo, DRC), in 1995, affecting 315 and killing 254.
In 2000, Uganda had an outbreak infecting 425 and killing 224; in this case, the Sudan virus was found to be the Ebola species responsible for the outbreak.
In 2003, an outbreak in the DRC infected 143 and killed 128, a 90% death rate, the highest of a genus Ebolavirus outbreak to date.
In 2004, a Russian scientist died from Ebola after sticking herself with an infected needle.
Between April and August 2007, a fever epidemic in a four-village region of the DRC was confirmed in September to have been cases of Ebola. Many people who attended the recent funeral of a local village chief died. The 2007 outbreak eventually infected 264 individuals and killed 187.
On 30 November 2007, the Uganda Ministry of Health confirmed an outbreak of Ebola in the Bundibugyo District in Western Uganda. After confirming samples tested by the United States National Reference Laboratories and the Centers for Disease Control, the World Health Organization (WHO) confirmed the presence of a new species of genus Ebolavirus, which was tentatively named Bundibugyo. The WHO reported 149 cases of this new strain and 37 of those led to deaths.
The WHO confirmed two small outbreaks in Uganda in 2012, both caused by the Sudan variant. The first outbreak affected seven people, killing four, and the second affected 24, killing 17.
On 17 August 2012, the Ministry of Health of the DRC reported an outbreak of the Ebola-Bundibugyo variant in the eastern region. Other than its discovery in 2007, this was the only time that this variant has been identified as responsible for an outbreak. The WHO revealed that the virus had sickened 57 people and killed 29. The probable cause of the outbreak was tainted bush meat hunted by local villagers around the towns of Isiro and Viadana.
In 2014, an outbreak occurred in the DRC. Genome-sequencing showed that this outbreak was not related to the 2014–15 West Africa Ebola virus outbreak, but was the same EBOV species, the Zaire species. It began in August 2014, and was declared over in November with 66 cases and 49 deaths. This was the 7th outbreak in the DRC, three of which occurred during the period when the country was known as Zaire.
2013–2016 West Africa
In March 2014, the World Health Organization (WHO) reported a major Ebola outbreak in Guinea, a West African nation. Researchers traced the outbreak to a one-year-old child who died in December 2013. The disease rapidly spread to the neighbouring countries of Liberia and Sierra Leone. It was the largest Ebola outbreak ever documented, and the first recorded in the region. On 8 August 2014, the WHO declared the epidemic an international public health emergency. Urging the world to offer aid to the affected regions, its Director-General said, "Countries affected to date simply do not have the capacity to manage an outbreak of this size and complexity on their own. I urge the international community to provide this support on the most urgent basis possible." By mid-August 2014, Doctors Without Borders reported the situation in Liberia's capital, Monrovia, was "catastrophic" and "deteriorating daily". They reported that fears of Ebola among staff members and patients had shut down much of the city's health system, leaving many people without medical treatment for other conditions. In a 26 September statement, WHO said, "The Ebola epidemic ravaging parts of West Africa is the most severe acute public health emergency seen in modern times. Never before in recorded history has a biosafety level four pathogen infected so many people so quickly, over such a broad geographical area, for so long."
Intense contact tracing and strict isolation largely prevented further spread of the disease in the countries that had imported cases. Suspected cases and deaths continued to be reported; however, the WHO said that these numbers may have been underestimated. Because they work closely with the body fluids of infected patients, healthcare workers were especially vulnerable to infection; in August 2014, the WHO reported that 10% of the dead were healthcare workers.
In September 2014, it was estimated that the countries' capacity for treating Ebola patients was insufficient by the equivalent of 2,122 beds; by December there were a sufficient number of beds to treat and isolate all reported Ebola cases, although the uneven distribution of cases was causing serious shortfalls in some areas. On 28 January 2015, the WHO reported that for the first time since the week ending 29 June 2014, there had been fewer than 100 new confirmed cases reported in a week in the three most-affected countries. The response to the epidemic then moved to a second phase, as the focus shifted from slowing transmission to ending the epidemic. On 8 April 2015, the WHO reported only 30 confirmed cases, the lowest weekly total since the third week of May 2014.
On 29 December 2015, 42 days after the last person tested negative for a second time, Guinea was declared free of Ebola transmission. At that time, a 90-day period of heightened surveillance was announced by the WHO. "This is the first time that all three countries – Guinea, Liberia and Sierra Leone – have stopped the original chains of transmission ...", the organisation stated in a news release. A new case was detected in Sierra Leone on 14 January 2016. However, the outbreak was declared no longer an emergency on 29 March 2016.
2014 spread outside West Africa
On 19 September, Eric Duncan flew from his native Liberia to Texas; five days later he began showing symptoms and visited a hospital but was sent home. His condition worsened and he returned to the hospital on 28 September, where he died on 8 October. Health officials confirmed a diagnosis of Ebola on 30 September – the first case in the United States.
In early October, Teresa Romero, a 44-year-old Spanish nurse, contracted Ebola after caring for a priest who had been repatriated from West Africa. This was the first transmission of the virus to occur outside Africa. Romero tested negative for the disease on 20 October, suggesting that she may have recovered from Ebola infection.
On 12 October, the Centers for Disease Control and Prevention (CDC) confirmed that a nurse in Texas, Nina Pham, who had treated Duncan tested positive for the Ebola virus, the first known case of transmission in the United States. On 15 October, a second Texas health-care worker who had treated Duncan was confirmed to have the virus. Both of these people recovered. An unrelated case involved a doctor in New York City, who returned to the United States from Guinea after working with Médecins Sans Frontières and tested positive for Ebola on 23 October. The person recovered and was discharged from Bellevue Hospital on 11 November. On 24 December 2014, a laboratory in Atlanta, Georgia reported that a technician had been exposed to Ebola.
On 29 December 2014, Pauline Cafferkey, a British nurse who had just returned to Glasgow from Sierra Leone, was diagnosed with Ebola at Glasgow's Gartnavel General Hospital. After initial treatment in Glasgow, she was transferred by air to RAF Northolt, then to the specialist high-level isolation unit at the Royal Free Hospital in London for longer-term treatment.
2017 Democratic Republic of the Congo
On 11 May 2017, the DRC Ministry of Public Health notified the WHO about an outbreak of Ebola. Four people died, and four people survived; five of these eight cases were laboratory-confirmed. A total of 583 contacts were monitored. On 2 July 2017, the WHO declared the end of the outbreak.
2018 Équateur province
On 14 May 2018, the World Health Organization reported that "the Democratic Republic of Congo reported 39 suspected, probable or confirmed cases of Ebola between 4 April and 13 May, including 19 deaths." Some 393 people identified as contacts of Ebola patients were being followed up. The outbreak centred on the Bikoro, Iboko, and Wangata areas in Equateur province, including the large city of Mbandaka. The DRC Ministry of Public Health approved the use of an experimental vaccine. On 13 May 2018, WHO Director-General Tedros Adhanom Ghebreyesus visited Bikoro. Reports emerged that maps of the area were inaccurate, hampering not so much the medical providers as the epidemiologists and officials trying to assess the outbreak and containment efforts. The 2018 outbreak in the DRC was declared over on 24 July 2018.
2018–2020 Kivu
On 1 August 2018, the world's 10th Ebola outbreak was declared in North Kivu province of the Democratic Republic of the Congo. It was the first Ebola outbreak in a military conflict zone, with thousands of refugees in the area. By November 2018, nearly 200 Congolese had died of Ebola, about half of them from the city of Beni, where armed groups were fighting over the region's mineral wealth, impeding medical relief efforts.
By March 2019, this had become the second-largest Ebola outbreak ever recorded, with more than 1,000 cases, and insecurity continued to be the major obstacle to providing an adequate response. The WHO reported 2,025 confirmed and probable cases with 1,357 deaths. In June 2019, two people died of Ebola in neighbouring Uganda.
In July 2019, an infected man travelled to Goma, home to more than two million people. One week later, on 17 July 2019, the WHO declared the Ebola outbreak a global health emergency, the fifth time such a declaration had been made by the organisation. A government spokesman said that half of the Ebola cases were unidentified, and added that the outbreak could last up to three years.
On 25 June 2020, the second biggest EVD outbreak ever was declared over.
2020 Équateur province
On 1 June 2020, the Congolese health ministry announced a new DRC outbreak of Ebola in Mbandaka, Équateur Province, a region along the Congo River. Genome sequencing suggested that this outbreak, the 11th since the virus was first discovered in the country in 1976, was unrelated to the one in North Kivu Province or to the previous outbreak in the same area in 2018. Six cases had initially been identified, four of whom had died, and more cases were expected to be found as surveillance activities increased. By 15 June the case count had increased to 17, with 11 deaths and more than 2,500 people vaccinated. The 11th EVD outbreak was officially declared over on 19 November 2020; by then, it had produced 130 confirmed cases, with 75 recoveries and 55 deaths.
2021
North Kivu
On 7 February 2021, the Congolese health ministry announced a new case of Ebola near Butembo, North Kivu, detected the day before. The case was a 42-year-old woman who had developed symptoms of Ebola in Biena on 1 February 2021. A few days later, she died in a hospital in Butembo. The WHO said that more than 70 people who had been in contact with the woman were being tracked.
On 11 February 2021, a second woman, who had been in contact with the first, died in the same town, and the number of traced contacts increased to 100. A day later, a third case was detected in Butembo.
On 3 May 2021, the 12th EVD outbreak was declared over, having resulted in 12 cases and six deaths. Heightened surveillance was to continue for 90 days after the declaration, in case of resurgence.
Guinea
In February 2021, Sakoba Keita, head of Guinea's national health agency, confirmed that three people had died of Ebola in the south-eastern region near the city of Nzérékoré. A further five people also tested positive. Keita also confirmed that more testing was underway, and attempts to trace and isolate further cases had begun. On 14 February, the Guinean government declared an Ebola epidemic. The outbreak may have started following reactivation of a latent case in a survivor of an earlier outbreak. As of 4 May 2021, 23 cases had been reported, with no new cases or deaths since 3 April 2021. A 42-day countdown period was started on 8 May 2021, and on 19 June the outbreak was declared over.
Ivory Coast
On 14 August 2021, the Ministry of Health of Côte d'Ivoire confirmed the country's first case of Ebola since 1994. This came after the Institut Pasteur in Côte d'Ivoire confirmed Ebola virus disease in samples collected from a patient who had been hospitalised in the commercial capital, Abidjan, after arriving from Guinea.
However, on 31 August 2021, the WHO found that, after further tests in a laboratory in Lyon, the patient did not have Ebola. The cause of her illness remained under investigation.
2022
On 23 April 2022, a case of Ebola was confirmed in the DRC in Équateur province. The case was a 31-year-old man whose symptoms began on 5 April but who did not seek treatment for over a week. On 21 April, he was admitted to an Ebola treatment centre and died later that day. By 24 May 2022, there were five recorded deaths in the DRC. The fifth case was buried on 4 July, and the outbreak was declared over 42 days later, on 15 August 2022.
In September 2022, Uganda reported seven cases of infection with the Sudan strain of Ebola; by mid-October the count had increased to 63.
In November 2022, the outbreak in Uganda continued, still without a vaccine. On 10 January 2023, the outbreak was considered over after no new cases had been reported for 42 days; it had killed nearly 80 people.
Society and culture
Weaponisation
Ebolavirus is classified as a biosafety level 4 agent, as well as a Category A bioterrorism agent by the Centers for Disease Control and Prevention. It has the potential to be weaponised for use in biological warfare, and was investigated by Biopreparat for such use, but might be difficult to prepare as a weapon of mass destruction because the virus becomes ineffective quickly in open air. In 2014, fake emails pretending to be Ebola information from the WHO or the Mexican government were misused to spread computer malware. The BBC reported in 2015 that "North Korean state media has suggested the disease was created by the U.S. military as a biological weapon."
Literature
Richard Preston's 1995 best-selling book, The Hot Zone, dramatised the Ebola outbreak in Reston, Virginia.
William Close's 1995 Ebola: A Documentary Novel of Its First Explosion and 2002 Ebola: Through the Eyes of the People focused on individuals' reactions to the 1976 Ebola outbreak in Zaire.
Tom Clancy's 1996 novel, Executive Orders, involves a Middle Eastern terrorist attack on the United States using an airborne form of a deadly Ebola virus strain named "Ebola Mayinga" (see Mayinga N'Seka).
As the Ebola virus epidemic in West Africa developed in 2014, a number of popular self-published and well-reviewed books containing sensational and misleading information about the disease appeared in electronic and printed formats. The authors of some such books admitted that they lacked medical credentials and were not technically qualified to give medical advice. The World Health Organization and the United Nations stated that such misinformation had contributed to the spread of the disease.
Other animals
Wild animals
Ebola has a high mortality rate among primates. Frequent outbreaks of Ebola may have resulted in the deaths of 5,000 gorillas. Outbreaks of Ebola may have been responsible for an 88% decline in tracking indices of observed chimpanzee populations in the 420 km² Lossi Sanctuary between 2002 and 2003. Transmission among chimpanzees through meat consumption constitutes a significant risk factor, whereas contact between the animals, such as touching dead bodies and grooming, is not.
Recovered gorilla carcasses have contained multiple Ebola virus strains, suggesting multiple introductions of the virus. Bodies decompose quickly and carcasses are not infectious after three to four days. Contact between gorilla groups is rare, suggesting that transmission among gorilla groups is unlikely, and that outbreaks result from transmission between viral reservoirs and animal populations.
Domestic animals
In 2012, it was demonstrated that the virus can be transmitted without direct contact from pigs to nonhuman primates, although the same study failed to achieve transmission in that manner between primates.
Dogs may become infected with EBOV but not develop symptoms. Dogs in some parts of Africa scavenge for food, and they sometimes eat EBOV-infected animals and also the corpses of humans. A 2005 survey of dogs during an EBOV outbreak found that although they remain asymptomatic, about 32 percent of dogs closest to an outbreak showed a seroprevalence for EBOV versus nine percent of those farther away. The authors concluded that there were "potential implications for preventing and controlling human outbreaks."
Reston virus
In late 1989, Hazelton Research Products' Reston Quarantine Unit in Reston, Virginia, had an outbreak of fatal illness amongst certain lab monkeys. This lab outbreak was initially diagnosed as simian haemorrhagic fever virus (SHFV) and occurred amongst a shipment of crab-eating macaque monkeys imported from the Philippines. Hazelton's veterinary pathologist in Reston sent tissue samples from dead animals to the United States Army Medical Research Institute of Infectious Diseases (USAMRIID) at Fort Detrick, Maryland, where an ELISA test indicated the antibodies present in the tissue were a response to Ebola virus and not SHFV. An electron microscopist from USAMRIID discovered filoviruses similar in appearance to Ebola, occurring in crystalloid aggregates and as single filaments with a shepherd's hook, in the tissue samples sent from Hazelton Research Products' Reston Quarantine Unit.
A US Army team headquartered at USAMRIID euthanised the surviving monkeys, and brought all the dead monkeys to Fort Detrick for study by the Army's veterinary pathologists and virologists, and eventual disposal under safe conditions. Blood samples were taken from 178 animal handlers during the incident. Of those, six animal handlers eventually seroconverted, including one who had cut himself with a bloody scalpel. Despite the virus's status as a biosafety level 4 organism and its apparent pathogenicity in monkeys, none of the handlers became ill, and the CDC concluded that the virus had a very low pathogenicity to humans.
The Philippines and the United States had no previous cases of Ebola infection, and upon further isolation, researchers concluded it was another strain of Ebola, or a new filovirus of Asian origin, which they named Reston ebolavirus (RESTV) after the location of the incident. Reston virus (RESTV) can be transmitted to pigs. Since the initial outbreak, it has been found in nonhuman primates in Pennsylvania, Texas, and Italy, where the virus had infected pigs. According to the WHO, routine cleaning and disinfection of pig (or monkey) farms with sodium hypochlorite or detergents should be effective in inactivating the Reston ebolavirus. Pigs that have been infected with RESTV tend to show symptoms of the disease.
Research
Treatments
For years, no medication had been proven safe and effective for treating Ebola. By the time the Ebola virus epidemic in West Africa began in 2013, there were at least nine different candidate treatments. Several trials were conducted in late 2014 and early 2015, but some were abandoned due to lack of efficacy or lack of people to study.
Eventually, two experimental treatments, known as atoltivimab/maftivimab/odesivimab and ansuvimab, were found to be 90% effective.
Diagnostic tests
The diagnostic tests currently available require specialised equipment and highly trained personnel. Since there are few suitable testing centres in West Africa, diagnosis is often delayed.
On 29 November 2014, a new 15-minute Ebola test was reported that, if successful, "not only gives patients a better chance of survival, but it prevents transmission of the virus to other people." The new equipment, about the size of a laptop and solar-powered, allows testing to be done in remote areas.
On 29 December 2014, the U.S. Food and Drug Administration (FDA) approved the LightMix Ebola Zaire rRT-PCR test for patients with symptoms of Ebola.
Disease models
Animal models and in particular non-human primates are being used to study different aspects of Ebola virus disease. Developments in organ-on-a-chip technology have led to a chip-based model for Ebola haemorrhagic syndrome.
| Biology and health sciences | Infectious disease | null |
48534772 | https://en.wikipedia.org/wiki/Penetrator%20%28aircraft%29 | Penetrator (aircraft) | A penetrator is a long-range bomber aircraft designed to intrude against and penetrate enemy defenses. The term is mostly applied to aircraft that fly at low altitude to avoid radar, a strategic counterpart to the shorter-ranged tactical interdictor designs like the TSR-2 and F-111 Aardvark. The term can be applied to any aircraft that is designed to survive over enemy airspace, and has been used for the penetration fighter designs intended to escort bombers.
The classic penetrator design is the Rockwell B-1 Lancer, where the term was first widely used. The larger Tupolev Tu-160 is also a member of this class. Other aircraft, like the Boeing B-52 Stratofortress and some versions of the F-111 have also been adapted to this role. More modern designs, like the Northrop Grumman B-2 Spirit, can be technically classified as penetrators, but the term is not generally applied to these aircraft. The mission for the Next-Generation Bomber has been described as "penetrate and persist".
| Technology | Military aviation | null |
60072585 | https://en.wikipedia.org/wiki/Dmanisi%20hominins | Dmanisi hominins | The Dmanisi hominins, Dmanisi people, or Dmanisi man were a population of Early Pleistocene hominins whose fossils have been recovered at Dmanisi, Georgia. The fossils and stone tools recovered at Dmanisi range in age from 1.85 to 1.77 million years old, making the Dmanisi hominins the earliest well-dated hominin fossils in Eurasia and the best preserved fossils of early Homo from a single site so early in time, though earlier fossils and artifacts have been found in Asia. Though their precise classification is controversial and disputed, the Dmanisi fossils are highly significant within research on early hominin migrations out of Africa. The Dmanisi hominins are known from over a hundred postcranial fossils and five famous well-preserved skulls, referred to as Dmanisi Skulls 1–5.
The taxonomic status of the Dmanisi hominins is somewhat unclear due to their small brain size, primitive skeletal architecture, and the range of variation exhibited between the skulls. Their initial description classified them as Homo (erectus?) ergaster (an otherwise African taxon), or potentially an early offshoot of later Asian H. erectus. The discovery of a massive jaw, D2600, in 2000 led researchers to hypothesize that more than one hominin taxon had been present at the site and in 2002, the jaw was designated as the type specimen of the new species Homo georgicus. Later analyses by the Dmanisi research team have concluded that all the skulls likely represent the same taxon with significant age-related and sexual dimorphism, though this is not a universally held view. In 2006, the team favoured subsuming the taxon under Homo erectus as H. erectus georgicus or H. e. ergaster georgicus. The nomenclature is still debated.
Anatomically, the Dmanisi hominins exhibited a mosaic of traits, possessing some features reminiscent of later and more derived H. erectus and modern humans, while retaining features of earlier Homo and Australopithecus. The length and morphology of their legs was essentially modern and they would have been adapted to long-range walking and running, but their arms were likely more similar to the arms of Australopithecus and modern non-human apes than to those of later hominins. The Dmanisi hominins would also have differed from later (non-insular) Homo in their small body (145–166 cm; 4.8–5.4 ft) and brain size (545–775 cc), both of which are more comparable to H. habilis than to later H. erectus. Morphological traits unifying all of the skulls, though the degree to which they are pronounced differs, include large brow ridges and faces.
In the Pleistocene, the climate of Georgia was more humid and forested than it is today, comparable to a Mediterranean climate. The Dmanisi fossil site was located near an ancient lake shore, surrounded by forests and grasslands and home to a diverse fauna of Pleistocene animals. The favourable climate at Dmanisi might have acted as a refuge for hominins in the Early Pleistocene, and it would have been reachable from Africa through the Levantine corridor. Stone tools found at the site are of the Oldowan tradition.
Taxonomy
Research history
Early excavations at Dmanisi
Dmanisi is located in southern Georgia, about 85 kilometres (52.8 miles) from the country's capital, Tbilisi. It was founded as a city in the Middle Ages and has thus been a site of archaeological interest for some time, with a prominent excavation site located within the ruins of the old city on a promontory overlooking the Mashavera and Pinazauri rivers. Archaeological excavations began in 1936 on the initiative of historian Ivane Javakhishvili, who directed several expeditions. In 1982, archaeologists at Dmanisi discovered 3-metre (10 ft) deep pits cut into compact sandy clay. The archaeologists believed the pits had been made for some economic purpose in the Middle Ages. After they cleaned them out, they discovered fossilised animal bones on the walls and bottom of the pits. The Georgian Paleobiological Institute of the Academy of Sciences was informed immediately and systematic palaeontological excavations began in 1983, but ended in 1991 on account of financial issues.
During the 1983–1991 excavations, a large number of animal fossils were collected, alongside some stone tools. The stone tools were quickly noted as highly archaic, far more primitive than other tools found in Eastern Europe. Biostratigraphically (dating through comparisons with fauna at other well-dated sites), they were determined to be from the Late Pliocene to the Early Pleistocene. Every year since 1991, the Georgian palaeontologists, joined by specialists from the Romano-Germanic Museum in Cologne, have undertaken new excavations, completely funded by the Romano-Germanic Museum until 1999.
Discovery of hominin remains
The expedition in 1991 was highly productive, uncovering abundant animal fossils and a considerable quantity of stone tools. On the morning of 25 September, a group of young archaeologists led by Medea Nioradze and Antje Justus uncovered a mandible.
As the heads of the expedition, the Georgian anthropologists Abesalom Vekua and David Lordkipanidze (then in Tbilisi) were summoned to the site, and on the next morning the mandible was freed from the rock around it, a complicated process that took nearly an entire day. Once freed, the mandible was unmistakably the jaw of a primate and, importantly, it preserved a complete row of teeth with little sign of wear. The lack of wear suggested that the primate would have been young, about 20–24 years old, though its classification was as yet unknown. After they returned to Tbilisi, the mandible was studied in detail by Vekua, Lordkipanidze and archaeologist Leo Gabunia. It was quickly determined to represent a hominid, though its precise position within the family was unclear. Although a number of primitive features were observed, it was clear that the fossil (now given the designation D211) was most similar to fossils of Homo, not earlier australopithecines. After prolonged discussion, Vekua and Gabunia came to the conclusion that the Dmanisi hominin was probably an early Homo erectus, and that it represented the earliest Homo outside Africa. This was confirmed once the basalts lying directly below the Pleistocene sediments were determined to be about 1.8 million years old.
Excavations continued at the site, though hominin remains proved to be rare. In 1997, the right third metatarsal bone of a hominin was discovered in the same layer as the jaw. Further discoveries were made in May 1999, after the site was damaged by long-lasting periods of rainfall. Archaeologist and expedition member Gocha Kiladze found a thin, coin-sized skull fragment. Kiladze, Vekua and Lordkipanidze, alongside archaeologist Kakha Kakhiani and the head of the 1999 expedition, archaeologist Giorgi Kopaliani, then visited the site and discovered further fragments. With these fragments, they were able to piece together the skull of an archaic human, with broken-off teeth and a broken-off upper jaw. That same year, a better-preserved skull was discovered and together, the two skulls allowed for inferences as to the nature and classification of the fossil hominins. The first skull, dubbed Skull 2, was given the designation D2282 and the second skull, Skull 1, was given the designation D2280. After the fossils had been studied for almost a year, it was determined that they somewhat differed from H. erectus in their jaws and skulls and were closer to the earlier African species H. ergaster (now considered an early African representative of H. erectus by some). The discovery of the two skulls was highly publicised in international media and the Georgian fossils were for the first time widely acknowledged as the earliest known hominins outside of Africa.
Further discoveries
More discoveries followed. In 2000, another hominin jaw (D2600) was discovered, this time at a slightly lower layer (i.e. older) than the rest of the fossils. This jaw was very large and had highly developed posterior molar teeth. The following year, Skull 3 (D2700) and its corresponding jaw (D2735) were discovered, almost perfectly preserved. On account of its erupting wisdom teeth, Skull 3 was determined to be the skull of a subadult. In 2002, the toothless skull of an old individual, Skull 4 (D3444; the associated jaw, D3900, was discovered in 2003) was discovered. Both Skull 3 and Skull 4 were noted as preserving a series of very primitive characteristics. The final skull, Skull 5 (D4500), was discovered in 2005. The skull matched the jaw found in 2000 and the two were concluded to have come from the same individual. The skulls were significant not only for their unique sets of features: Skull 5 was the first completely preserved adult hominin skull found from the Early Pleistocene, and Skull 4 is the only toothless hominin discovered in such early sediments.
In addition to the skulls, about a hundred postcranial remains have been discovered. The first postcranial fossil discovered was a third metatarsal bone, recovered in 1997. Postcranial fossils comprise bones from all parts of the body and include parts of the arms, legs, axial skeleton (vertebrae and ribs) and feet. The bones, some of them confidently associated with Skull 3, are from both adolescent and adult individuals.
Together, the fossils at Dmanisi represent the most complete and richest collection of early Homo fossils at a single site with a comparable temporal context. The variability in age (i.e. Skull 3 being subadult and Skull 4 being significantly older) and presumably sex also gives unique insight into the variability in early populations of Homo.
Classification
The classification of the Dmanisi hominins is disputed, and discussions on whether they represent an early form of H. erectus, a distinct species of their own dubbed H. georgicus, or something else entirely are ongoing.
Early attempts at classification
The D211 mandible was described in 1995 by Gabunia and Vekua, who classified it as belonging to a basal population of H. erectus based on dental similarity, especially with African specimens (sometimes called H. ergaster). In 1996, palaeoanthropologists Günter Bräuer and Michael Shultz made note of both basal and derived traits, and instead concluded the mandible came from a derived population of H. erectus, despite being so old. In 1998, palaeoanthropologists Antonio Rosas and José Bermúdez De Castro pointed out that such a mosaic anatomy is also documented in H. ergaster, and suggested the classification Homo sp. indet. (aff. ergaster).
Gabunia and colleagues described Skulls 1 and 2 in 2000, and noted they were reminiscent of H. ergaster skulls. Numerous traits were noted as suggesting a close relation to H. ergaster, including the presence and morphology of the brow ridge, the overall proportions of the facial skeleton, the relative narrowness of the skull beyond the face (post-orbital constriction) as well as a comparable height of the cranial vault and the thickness of the cranial vault bones. The same features typically used to distinguish H. ergaster from Asian specimens of H. erectus were found to distinguish the Dmanisi fossils from Asian H. erectus; notably the lower cranial vault and somewhat thinner cranial vault bones in H. erectus and the smaller cranial capacity of the Dmanisi fossils. A handful of features were noted as present in the Dmanisi fossils and Asian H. erectus, but not H. ergaster, such as the presence of a supramastoid crest. Since these features also appeared in some African fossils, such as Olduvai hominids 9 and 12, they were deemed to not hold "any special phylogenetic significance". Gabunia and colleagues concluded by referring the Dmanisi fossils to Homo ex. gr. ergaster ("ex. gr. ergaster" meaning "of the group including ergaster"). Gabunia and colleagues stated that the combination of features made it a possibility that the Dmanisi hominins were forerunners of both later H. erectus in Asia and hominins ancestral to H. sapiens.
Classification following the discovery of further fossils
In 2002, Vekua and colleagues described Skull 3 (D2700), including its associated mandible (D2735). They concluded that, though the individual resembled H. habilis in brain size and some facial features, it was overall consistent with an exceptionally small H. ergaster.
The D2600 mandible was also described in 2002 by Gabunia, Vekua and Lordkipanidze, together with French archaeologists and palaeoanthropologists Henry and Marie-Antoinette de Lumley. The mandible differed in its large size, morphological features and teeth proportions not only from the previously discovered jaw at Dmanisi but also from all other hominin jaws found to date, blending primitive features otherwise seen in Australopithecus and early Homo with derived features otherwise seen in H. erectus. They considered it sufficient grounds for the creation of a new species, which they dubbed Homo georgicus. They assigned all the Dmanisi hominins to the new species, and believed the significant disparity in robustness was caused by marked sexual dimorphism. Gabunia and colleagues interpreted H. georgicus as a descendant of H. habilis or H. rudolfensis and an early species "near the roots of the Homo branch...foretelling the emergence of Homo ergaster". Palaeoanthropologist Sang-Hee Lee supported the classification of all the Dmanisi hominin fossils as belonging to the same species (though made no comment on whether that species should be H. erectus or H. georgicus) in 2005, noting that despite the differences in brain capacity between the skulls, they were not more morphologically distinct from each other than individuals of different sexes in modern great apes.
Lordkipanidze and colleagues described Skull 4 and its mandible in 2006, noting that it was similar to the fossils discovered previously and stating that with the possible exception of the D2600 mandible, all of the Dmanisi fossils were assignable to a single species. They agreed that the Dmanisi hominins were ancestral to later H. erectus, potentially even to later Asian subspecies. That same year, a comparative analysis of Skulls 1 to 4 and the D2600 mandible by palaeoanthropologist G. Philip Rightmire, Lordkipanidze and Vekua again concluded that Skulls 1 through 4 could be assigned to the same species, but that the status of D2600 was more questionable. They noted that though the fossils were similar to H. habilis in some respects, especially in size and (for some) cranial capacity, they shared far more features with H. erectus. In this respect, many of the primitive features could simply be interpreted as primitive retentions. Rightmire, Lordkipanidze and Vekua concluded that if some of the H. habilis-like traits, such as the size, cranial capacity and parts of the facial morphology, were considered plesiomorphic and primitive retentions, there would be no reason to exclude Skulls 1 to 4 from H. erectus. Though the others were unsure, Vekua supported the classification of D2600 as representing a distinct species separate from the rest of the fossils, preferring to keep its designation as H. georgicus. They noted that if future analyses suggested that D2600 belonged to the same hominin population as the other fossils, the subspecies designation would appropriately be Homo erectus georgicus, but that if it was distinct (as H. georgicus), a new subspecies name would have to be selected for the other fossils.
A 2006 comparative analysis of D211 and D2600 by palaeoanthropologists Matthew M. Skinner, Adam D. Gordon and Nicole J. Collard found that the degree of dimorphism expressed between the two mandibles was greater than expected in modern great apes and humans, as well as in other extinct hominin species. They suggested two alternative hypotheses: either the fossils represented a single taxon with unusually high sexual dimorphism, whose inclusion in Homo was thus doubtful, or D2600 should be considered a representative of a separate, second species of hominins (i.e. H. georgicus). A more detailed 2008 comparative analysis of the mandibles, taking more anatomical features into account, by Rightmire, Lordkipanidze and palaeoanthropologist Adam Van Arsdale concluded that while the dimorphism between the mandibles was excessive when compared to modern humans, and to some chimpanzees, it was comparable to (or in some cases less than) the dimorphism between gorillas. They concluded that "in our view, there are currently no compelling anatomical grounds for sorting any of the Dmanisi fossils to other than a single species", but noted that this species would have possessed sexual dimorphism greater than later Homo. Preferring the designation of H. erectus, the researchers noted that although H. erectus is generally held to not be this dimorphic, some fossils, such as smaller skulls recovered at Ileret and Olorgesailie in Kenya and larger skulls recovered at Olduvai Gorge, Tanzania and Bouri, Ethiopia, could disprove this notion.
A 2008 analysis of the teeth of Skulls 2 and 3 and the D2600 mandible by Lordkipanidze, Vekua and palaeoanthropologists María Martinón-Torres, José María Bermúdez de Castro, Aida Gómez-Robles, Ann Mergvelashvili and Leyre Prado found that like other parts of the fossils, the teeth too showed a combination of primitive Australopithecus- and H. habilis-type traits and more derived H. erectus-type traits. The teeth of Skulls 2 and 3 were found to be similar, whereas D2600 somewhat diverged in the size of the teeth and in the morphology of its roots. However, H. habilis has the same range of dental dimorphism. In 2010, palaeoanthropologist P. James Macaluso Jr. concluded that Skulls 2 and 3 could comfortably be referred to the same species, but whether D2600 could also be referred to the same species as the rest was less clear.
Classification following the description of Skull 5
Skull 5, recovered in 2005 and described in 2013 by Lordkipanidze and colleagues, was upon its description determined to be from the same individual as the D2600 mandible and together, the two fossils significantly expanded the morphological range of the Dmanisi hominin fossils. Lordkipanidze and colleagues interpreted Skull 5 as part of the same population as the rest of the Dmanisi fossils, as they came from the same general time and place, and had a range of variation similar to what is exhibited in chimpanzee, bonobo and modern human samples. Individuals in all four samples generally varied in size and in the orientation of the face relative to the braincase. Lordkipanidze and colleagues interpreted the small-faced and more orthognathic skulls as representing females and/or subadults and the more prognathic and large-faced skulls as representing males. The large degree of variation expressed in the Dmanisi fossils led Lordkipanidze and colleagues to suggest that the variation seen in other Pliocene and Pleistocene hominid fossils, typically used to justify several distinct fossil species, might have been misinterpreted as species diversity. The morphological diversity in contemporary African hominins, typically used to justify H. ergaster as a species distinct from H. erectus, might thus instead be due to regional variation in a single evolving lineage of hominins (H. erectus). With this in mind, the classification of the African material as H. erectus ergaster (a chronosubspecies rather than a distinct species) was suggested and, since the Dmanisi hominins are believed to have originated from an early migration by the H. erectus lineage out of Africa, it was determined that they were best placed within H. e. ergaster with a quadrinomial (4-part) name: H. e. e. georgicus. The researchers considered it possible that earlier Homo, such as H. habilis and H. rudolfensis, also belonged to the same single evolving lineage of Homo, though no morphological comparisons were made to test this theory.
Palaeoanthropologists Jeffrey H. Schwartz, Ian Tattersall and Zhang Chi responded to Lordkipanidze and colleagues in 2014, disagreeing with the idea that all five skulls were from the same species. Schwartz, Tattersall and Chi also suggested that the use of a quadrinomial name, H. e. e. georgicus, was invalid in zoological nomenclature. Most importantly, Schwartz, Tattersall and Chi questioned whether the morphological comparisons were detailed enough to come to this conclusion and questioned the methods which Lordkipanidze and colleagues had used to determine what is and is not interspecific variation. The researchers did not see the fact that the fossils were from the same site and a relatively short time period as enough to determine that they all came from the same species, and argued that the previous claims of Gorilla-type mandibular variation but H. sapiens/Pan-type cranial variation could not both be correct at the same time. They also questioned whether all morphological differences could truly be attributed to age, wear and pathology. Several traits within the skulls and teeth of all the Dmanisi skulls were put forward as "potentially species-distinguishing features" and Schwartz, Tattersall and Chi concluded that at least the D2600 mandible, and thus Skull 5 as a whole, should remain classified as a distinct species, H. georgicus, writing that "to deny this hominin a distinct identity is effectively to deny the utility of morphology in systematics, a radical proposition to which few would subscribe".
The Dmanisi research team, composed of those palaeontologists and researchers excavating at the Dmanisi site and studying the fossils, responded to Schwartz, Tattersall and Chi in the same year, maintaining that the fossils represented a single species. They noted that the distinction of H. georgicus, and the further suggestion that some of the other skulls might represent distinct taxa as well, would mean that Dmanisi would have been home to at least four different hominid taxa and thus "hold the world record in hominid palaeospecies diversity documented at a single site that extends over a mere , and probably over a mere couple of centuries". The Dmanisi team wrote that Schwartz, Tattersall and Chi had deliberately ignored previous morphological analyses and also noted that character state variation in Asian and African Homo specimens, and in the Dmanisi fossils, suggests that the fossils cannot be assigned to different species, accusing Schwartz, Tattersall and Chi of effectively denying the morphological evidence from the Dmanisi fossils that did not fit with their hypothesis. One of the primary distinguishing features noted by Schwartz, Tattersall and Chi, the number of premolar tooth roots, was pointed out as not actually carrying taxonomic significance, since modern Sub-Saharan humans exhibit significant variation in this specific trait. The name Homo erectus ergaster georgicus was also defended in that it was used to denote a local population of a subspecies, similar to how quadrinomials are used in botany. The researchers pointed out that although the use of quadrinomials is not regulated by the International Code of Zoological Nomenclature, it is not considered invalid.
A 2017 analysis of Skull 5 specifically, with comparisons to the other skulls and to skulls of H. sapiens, Paranthropus boisei and other archaic hominins, by the team reaffirmed that the variation between the Dmanisi fossils was not excessive relative to the variation in most other hominins, with some features, such as certain midfacial measurements, even being more variable in modern humans. Although certain traits were noted as setting Skull 5 "toward the periphery of the Dmanisi shape distribution", they concluded that "neither these differences, nor the proportions of the D2600 mandible, offer sufficient grounds for labeling Skull 5 as the 'holotype of the morphologically very distinctive species H. georgicus'". The results of the analysis, which compared the skulls to many specimens of both H. erectus and H. habilis, somewhat questioned the current recognition of species-level diversity in early Homo, insofar as the Dmanisi hominins were found to broadly share many similarities with both species. The researchers found that the Dmanisi hominins "cannot unequivocally be referred either to H. habilis or to H. erectus" and that, with regard to early Homo, there was a "continuum of forms"; Skull 5 appears to share many primitive features with H. habilis whereas Skull 1, with the largest brain, is more similar to African H. ergaster/H. erectus. This led the researchers to hypothesize that H. erectus and H. habilis constitute a single evolutionary lineage which emerged in Africa and later spread throughout Eurasia. Phylogenetically, the Dmanisi population was suggested to represent part of an anagenetic sequence, descended from H. habilis and ancestral to later H. erectus, placed near the base of the H. erectus lineage and already differentiated from H. habilis.
Chronology and geography
The timing of the first archaic human migration out of Africa and the identity of the hominin species that undertook this migration are controversial. This derives from the sparse Early Pleistocene hominin fossil record outside of Africa. Before the discovery of the Dmanisi skulls, the earliest known hominin fossils in Europe and Asia were either too incomplete and fragmentary to be reliably identified at the species level or exhibited morphological traits specific to the region where they were recovered. Furthermore, most of the sites where these fossils were recovered preserved geological contexts that could not be reliably dated. Because of this, there was some debate as to whether archaic humans spread from Africa in the Late Pliocene or Early Pleistocene as the result of a web of ecomorphological factors, or around 1 million years ago as the result of technological innovations such as the Acheulean tool culture. Since the discovery of the Dmanisi fossils, even older evidence of hominins has been discovered and dated in China: stone tools manufactured by hominins have been found on the Loess Plateau and dated to 2.12 million years old, meaning that hominins must have left Africa before that time.
The Dmanisi hominins represent the earliest known hominins in Europe. The Pleistocene sediments at Dmanisi are deposited directly atop a thick layer of volcanic rock that has been radiometrically dated to 1.85 million years old. The contours of the Pleistocene sediments indicate that relatively little time passed between the deposition of this volcanic rock and the deposition of the newer sediments. Through palaeomagnetic analyses it has been determined that the sediments are probably about 1.77 million years old, deposited in the earliest Upper Matuyama chron. The fossils of other animals found at the site, such as the rodent Mimomys (which is only known to have lived from 2.0 to 1.6 million years ago), reinforce this date.
In 2010, the hominin-bearing level of the Dmanisi fossil site was dated through argon–argon dating to 1.81 ± 0.03 million years old, only slightly younger than the underlying layer of volcanic rock. This older date contradicted the previous 1.77-million-year-old estimate based on palaeomagnetic data. Since the D2600 jaw was found in a slightly lower layer, it was considered possible that this particular fossil was even earlier in age, but since there were no estimates of the sedimentation rate at the site, there could also be only a few millennia separating the jaw from the rest of the fossils. Stone tools found at the Dmanisi site range in age from 1.85 million years old to 1.78 million years old, suggesting that hominins inhabited the site throughout the time between the two estimated ages of the fossils themselves.
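For context on the method (the standard textbook relation, not a detail given in the source): in argon–argon dating, a sample's age t is computed from the measured ratio of radiogenic ⁴⁰Ar to neutron-induced ³⁹Ar, calibrated against a mineral standard of known age through the irradiation parameter J, via

$$ t = \frac{1}{\lambda}\,\ln\!\left(1 + J\,\frac{{}^{40}\mathrm{Ar}^{*}}{{}^{39}\mathrm{Ar}_{\mathrm{K}}}\right) $$

where λ is the total decay constant of ⁴⁰K. The quoted 1.81 ± 0.03 million-year figure is an age of this kind, with its uncertainty propagated from the isotopic measurements.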
In the late Pliocene and Early Pleistocene, Georgia may have acted as a refuge for hominin groups living in regions of diminishing resources. The environment at Dmanisi would have been favourable to hominins due to the region's physical geography, including a temperate and varied environment and the fact that the Greater Caucasus mountain range served as a barrier for air masses from the north. They would probably have reached Georgia through the Levantine corridor, which already existed at this time. They may have established a foothold at Dmanisi before expanding elsewhere, since similar-aged animal fossils are present at sites in Romania, the Balkans and even Spain, some accompanied by stone tools reminiscent of those found at Dmanisi.
Anatomy
Skull
The cranial capacity of the Dmanisi hominins ranges from 546 to 775 cc, with an average of 631 cc. As such, their brain size overlaps with that of H. habilis (548–680 cc) and falls below the standard cranial capacity otherwise ascribed to H. erectus and H. ergaster (800–1000 cc). The encephalization quotient (the ratio of observed brain size to the brain size expected for an animal of the same body mass) of the Dmanisi hominins (based on Skulls 1 to 4) is in the range of 2.6–3.1, at the lower end of estimates for H. ergaster/H. erectus and more similar to H. habilis and australopithecines. The encephalization quotient of Skull 5 was estimated at 2.4, within the range of variation for Australopithecus. There are several features that distinguish the Dmanisi hominins from early Homo such as H. habilis, including the well-developed brow ridge, sagittal keels, large orbits, the premolar teeth in the upper jaw having single roots and the angulation of the cranial vault.
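As a brief formal note (the general definition; the source does not state which baseline regression was used for the values above): the encephalization quotient compares an observed brain mass E to the brain mass expected for a reference animal of body mass P,

$$ \mathrm{EQ} = \frac{E}{E_{\mathrm{expected}}(P)}, \qquad E_{\mathrm{expected}}(P) = k\,P^{a}, $$

where k and a are fitted allometric constants; in Jerison's classic mammalian baseline, for example, k = 0.12 and a = 2/3 with masses in grams. Published EQ values, including those quoted above, therefore depend on the particular baseline regression adopted.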
The only fully complete skull found at Dmanisi is Skull 5, which can be distinguished from all other known fossil Homo specimens (including the other Dmanisi skulls) by its large prognathic face and small braincase. The combination of large teeth and large face with a small braincase is otherwise unknown in early Homo, and the two features have previously separately been used to define different species. Had the braincase and face of Skull 5 been found as separate fossils at different localities, it is likely that they would have been attributed to different species. Despite the exterior morphological similarities to earlier Homo, the anatomy of its braincase is considerably more similar to later H. erectus.
Skull 5 indicates that small brains, large faces (though most pronounced in Skull 5, the face is relatively prognathic in all specimens) and a generally prognathic and robust morphology were all within the range of variation of the Dmanisi hominin population. Based on the skulls and the postcranial material, the Dmanisi hominins appear to have been small-brained individuals with prominent brow ridges, and stature, body mass and limb proportions at the lower range limit of modern human variation.
Postcranial anatomy
Prior to the discovery of the Dmanisi fossils, knowledge of postcranial morphology in early Homo had been very limited. Well-preserved fossils of earlier hominins, such as Australopithecus and later Homo, such as the well-preserved skeleton of KNM WT 15000 ("Turkana Boy"; a 1.55 million year old H. ergaster/H. erectus), gave little insight into early transitions in body proportions and stature. Australopithecus were small, about 105 cm (3.4 ft) tall, and had limb proportions intermediate between those of modern humans and those of other great apes, whereas the body proportions and stature of Turkana Boy were more or less modern. Postcranial fossils attributed to H. habilis and H. rudolfensis are fragmentary, and so the time and means of transition from hominins capable of bipedalism (Australopithecus) to hominins that were obligately bipedal (H. ergaster) remained unclear. In these respects, the Dmanisi fossils fill in a number of gaps.
Based on calculations from the size of their limb bones and a humerus (no complete skeleton has yet been recovered), the Dmanisi individuals are estimated to have been approximately 145–166 cm (4.8–5.4 ft) tall and to have weighed about 40–50 kg (88–110 lbs). They were smaller than H. ergaster in Africa, possibly either due to being more primitive (H. habilis was also smaller than H. ergaster) or due to having adapted to a different environment. Limb proportions (measured through the length of the femur relative to the tibia) in the Dmanisi fossils are comparable to those of modern humans, but are also comparable to some of the earliest Homo and fossils referred to Australopithecus garhi, dated to 2.5 million years old. In terms of the absolute length of the legs, the Dmanisi hominins were more similar to later Homo (including modern humans) than to australopithecines, though the length of the legs and the morphology of the metatarsals in the Dmanisi hominins were not as derived as in later H. ergaster/H. erectus (such as Turkana Boy). This might indicate that the evolution of improved walking and running performance was not a sudden change, but a continual process throughout the Early and Middle Pleistocene.
Humeral torsion (the angle formed between the proximal and distal articular axes of the humerus) influences the range of movement and the orientation of the arms relative to the torso. In modern humans, the scapula (which might otherwise restrict movement) is placed dorsally, which is compensated for by a high degree of humeral torsion. By comparison, the torsion in the Dmanisi fossils is quite low, which indicates differing arm movement and orientation. It might mean that the arms would have been habitually oriented more supinely (horizontally) and that the shoulder girdle might have been positioned more laterally. Athletes that require high levels of mobility in their arms tend to have reduced humeral torsion, and the Dmanisi hominins might thus have been capable of a diverse range of arm movement. Humeral torsion is also low (or entirely absent) in H. floresiensis, which means that this might be a basal trait in Homo (though it is unclear how basal or derived H. floresiensis is). Either way, the functionality and morphology of the arms in the Dmanisi hominins appear to have been more similar to the arms of earlier Homo or australopithecines than to those of modern humans.
Overall, the spine of the Dmanisi hominins appears to have been more similar to the spines of modern humans and early H. erectus than to the spines of australopithecines. The fossil vertebrae recovered at Dmanisi show lumbar lordosis; the orientation of the facet joints suggests that the range of spinal flexion in the Dmanisi hominins was comparable to that of modern humans; and the relatively large cross-sectional areas of the vertebrae indicate resistance to increased compressive loads, suggesting that the hominins were capable of running and long-range walking. Because fossils of the shins and feet have been found, it is possible to reconstruct the orientation and positioning of the feet of the Dmanisi hominins relative to their walking direction. In the Dmanisi hominins, the feet would have been oriented more medially (closer together) and load would have been distributed more evenly over the rays (metatarsals and toes) than in modern humans. Despite these differences, the bones recovered suggest that the feet were overall similar to the feet of modern humans. In 2008, palaeoanthropologists Ian J. Wallace, Brigitte Demes, William L. Jungers, Martin Alvero and Anne Su stated that they believed the Dmanisi fossils were too fragmentary to infer the position of the feet (as medially positioned) with this much certainty, and that more fossils, particularly of the pelvis and additional foot bones, were required.
Palaeoecology
The fossils recovered at Dmanisi are all from a relatively short temporal interval and represent a 'snapshot in time'. With the sole exception of Skull 5 and its mandible (which are somewhat earlier in age), all of the hominin fossils are contemporaneous, with all of the fossils (including Skull 5) probably having been deposited over a time interval possibly as short as 10,000–100,000 years.
In the Pleistocene, the Dmanisi site would have been near a lake shore formed through the damming of the Mashavera and Pinazauri rivers by a lava flow. The environment would have been temperate, relatively humid and forested, with woodland and gallery forests, open grasslands, bush lands, tree savannahs and rocky terrains with shrub vegetation. The environment, which would also have experienced cold winters, would have been quite unlike the dry and hot steppes of East Africa, where earlier (and contemporary) H. ergaster/H. erectus lived. Even then, Pleistocene Dmanisi was probably warmer and drier than present-day Georgia, perhaps comparable to a Mediterranean climate.
Though most of the preserved animal fossils suggest a predominantly forest-steppe ecosystem, some parts of the faunal assemblage indicate that parts of the environment were open steppe (as shown by ostrich and pika fossils) and others dense forest (as shown by deer fossils). The forests probably covered the mountain highlands and the ground along the river channels, whereas the flat river valleys were covered in steppe vegetation. Because deer fossils are particularly common (representing about 80% of the fossils found at Dmanisi), it is likely that forests were the dominant type of environment.
Animal fossils recovered in the same sediments as the hominin remains demonstrate that Pleistocene Dmanisi would have been home to a highly diverse fauna, including pikas, lizards, hamsters, tortoises, hares, jackals and fallow deer. Most of the animals found are Villafranchian (a European land mammal age) mammals and several extinct species are represented, including Megantereon megantereon and Homotherium crenatidens (both saber-toothed cats), Panthera gombaszoegensis (the European jaguar), Ursus etruscus (the Etruscan bear), Equus stenonis (the Stenon zebra), Stephanorhinus etruscus (the Etruscan rhinoceros), Pachystruthio dmanisensis (the giant ostrich), the deer Cervus perrieri and Cervidae cf. Arvernoceros, the hyena Pliocrocuta perrieri, the rodents Mimomys tornensis, M. ostramosensis and Kowalskia sp., Gazella cf. borbonica (the European gazelle), the goat-antelope Soergelia sp., the bison Bison georgicus and the giraffe Giraffidae cf. Palaeotraginae. The co-occurrence of so many large carnivores (Megantereon, Homotherium, Panthera and Pliocrocuta) highlights that the environment must have been quite diverse. Carnivore activity might account for the fact that all of the hominin skulls were found within just a few square metres of each other.
A large number of fossilised plant seeds have also been recovered at Dmanisi, mainly from Boraginaceae and beetroot plants. Most of the plants identified are modern species that are inedible, though some edible plants were present, such as Celtis (hackberries) and Ephedra. Because Celtis seeds are also frequent at other hominin sites (notably Tautavel in France and Zhoukoudian in China), it is possible that hackberries (and also possibly Ephedra) were eaten by the Dmanisi hominins. The abundance of Boraginaceae seeds, often taken in later sites as an indication of human occupation, could mean that hominins were already having an impact on local flora at this early time. In addition to berries and fruit, the hominins were probably capable of exploiting a wide range of resources for food. Meat is likely to have made up a major portion of their diet, especially during the winters, when other sources of food would have been more difficult to come by.
A majority of the fossils (including all hominin fossils) have been recovered from the fourth of five layers at the site, with the upper, somewhat younger layers preserving later sediments. Layers 2 and 3 preserve substantially less fossil material, with almost no carnivore fossils and no rodent or reptile remains. Although this might be partly attributable to preservation bias, it probably also reflects palaeoecological changes, likely coinciding with the aridisation of eastern Georgia in the Early Pleistocene. The aridisation brought with it a considerable reduction in forested regions and the further spread of open vegetation and steppe environments.
Culture
Technology
Over 10,000 stone tools have been recovered at Dmanisi, and their stratigraphic and spatial concentrations suggest a complex record of several reoccupations of the site. The tools found at Dmanisi are quite simple and are much the same as the tools of the Oldowan tradition created by hominins in Africa nearly a million years earlier. Most of the tools recovered are flake tools, but a smaller number of lithic cores and choppers have also been recovered. The raw materials for these stone tools probably came from the rivers and outcrops near the fossil site. The presence of cores, flakes and chunks in addition to finished tools shows that all the stages of knapping (the shaping of stone to create tools) took place at Dmanisi. Although the technique was not very elaborate, quality rocks (such as volcanic, magmatic and sedimentary stones as well as silicified tuff) were used. The precise technique differed from stone to stone, influenced by the shape of the initial stone. No new angles appear to have been created through the process.
In addition to the tools found at the site, many unmodified stones that must have originated elsewhere on account of their mineralogical composition (meaning they had not arrived there naturally, but had been brought by hominins) have also been recovered. Larger unmodified stones may have been used as tools for smashing bones, cutting meat and pounding flesh whereas smaller stones would have served other purposes, such as throwing. The large collections of manuports (unmodified stones moved from their natural context) recovered at Dmanisi are generally interpreted as stone reserves created by the hominins to avoid repeated visits to stone collection sites.
Social cooperation
Lordkipanidze believes that the small Dmanisi hominins may have employed aggressive scavenging, throwing small rocks to pilfer food from local carnivores. It is possible that this power-scavenging was done in groups for protection, and it may have led to the development of kinship-dependent social cooperation.
There is also indirect evidence of social cooperation in Skull 4, which belonged to an individual that had lost all but a single tooth by the time of his death. The old individual would have lived for a relatively long time after losing the teeth, as indicated by the tooth sockets having been filled in with bone tissue, something that is only possible while the individual is alive. Without fire to cook food, it would have been difficult for a toothless individual to survive for several years in a periodically cold environment. Though it is possible that, through the use of pounding tools, he could have survived on his own by consuming soft animal tissues, such as brains and marrow, a more compelling possibility is that he was cared for by other members of his species.
Evolution of tetrapods

The evolution of tetrapods began about 400 million years ago in the Devonian Period, when the earliest tetrapods evolved from lobe-finned fishes. Tetrapods (under the apomorphy-based definition used on this page) are categorized as animals in the biological superclass Tetrapoda, which includes all living and extinct amphibians, reptiles, birds, and mammals. While most species today are terrestrial, little evidence supports the idea that any of the earliest tetrapods could move about on land, as their limbs could not have held their midsections off the ground and the known trackways do not indicate they dragged their bellies around. Presumably, the tracks were made by animals walking along the bottoms of shallow bodies of water. The specific aquatic ancestors of the tetrapods, and the process by which land colonization occurred, remain unclear, and are areas of active research and debate among palaeontologists.
Most amphibians today remain semiaquatic, living the first stage of their lives as fish-like tadpoles. Several groups of tetrapods, such as the snakes and cetaceans, have lost some or all of their limbs. In addition, many tetrapods have returned to partially aquatic or fully aquatic lives throughout the history of the group (modern examples of fully aquatic tetrapods include cetaceans and sirenians). The first returns to an aquatic lifestyle may have occurred as early as the Carboniferous Period whereas other returns occurred as recently as the Cenozoic, as in cetaceans, pinnipeds, and several modern amphibians.
The change from a body plan for breathing and navigating in water to a body plan enabling the animal to move on land is one of the most profound evolutionary changes known. It is also one of the best understood, largely thanks to a number of significant transitional fossil finds in the late 20th century combined with improved phylogenetic analysis.
Origin
Evolution of fish
The Devonian period is traditionally known as the "Age of Fish", marking the diversification of numerous extinct and modern major fish groups. Among them were the early bony fishes, who diversified and spread in freshwater and brackish environments at the beginning of the period. The early types resembled their cartilaginous ancestors in many features of their anatomy, including a shark-like tailfin, spiral gut, large pectoral fins stiffened in front by skeletal elements and a largely unossified axial skeleton.
They did, however, have certain traits separating them from cartilaginous fishes, traits that would become pivotal in the evolution of terrestrial forms. With the exception of a pair of spiracles, the gills did not open singly to the exterior as they do in sharks; rather, they were encased in a gill chamber stiffened by membrane bones and covered by a bony operculum, with a single opening to the exterior. The cleithrum bone, forming the posterior margin of the gill chamber, also functioned as an anchor for the pectoral fins. The cartilaginous fishes lack such an anchoring for the pectoral fins. This allowed for a movable joint at the base of the fins in the early bony fishes, and would later function as a weight-bearing structure in tetrapods. As part of the overall armour of rhomboid cosmin scales, the skull had a full cover of dermal bone, constituting a skull roof over the otherwise shark-like cartilaginous inner cranium. Importantly, they also had a pair of ventral lungs, a feature lacking in sharks and rays.
It was long assumed that fishes largely evolved around reefs, but since their origin about 480 million years ago they lived in near-shore environments such as intertidal areas or permanently shallow lagoons, and did not start to proliferate into other biotopes until some 60 million years later. A few adapted to deeper water, while solid and heavily built forms stayed where they were or migrated into freshwater. The increase of primary productivity on land during the late Devonian changed the freshwater ecosystems. When nutrients from plants were released into lakes and rivers, they were absorbed by microorganisms, which in turn were eaten by invertebrates, which served as food for vertebrates. Some fish also became detritivores. Early tetrapods evolved a tolerance to environments which varied in salinity, such as estuaries or deltas.
Lungs before land
The lung/swim bladder originated as an outgrowth of the gut, forming a gas-filled bladder above the digestive system. In its primitive form, the air bladder was open to the alimentary canal, a condition called physostome and still found in many fish. The primary function of the swim bladder is not entirely certain. One consideration is buoyancy: the heavy scale armour of the early bony fishes would certainly weigh the animals down, and in cartilaginous fishes, which lack a swim bladder, open-sea sharks need to swim constantly to avoid sinking into the depths, with the pectoral fins providing lift. Another factor is oxygen consumption. Ambient oxygen was relatively low in the early Devonian, possibly about half of modern values. Per unit volume, there is much more oxygen in air than in water, and vertebrates (especially nektonic ones) are active animals with a higher energy requirement than invertebrates of similar sizes. The Devonian saw increasing oxygen levels, which opened up new ecological niches by allowing groups able to exploit the additional oxygen to develop into active, large-bodied animals. Particularly in tropical swampland habitats, atmospheric oxygen is much more stable than dissolved oxygen, and this may have prompted a reliance on proto-lungs (performing what was essentially an evolved type of enteral respiration) rather than gills for primary oxygen uptake. In the end, both buoyancy and breathing may have been important, and some modern physostome fishes do indeed use their bladders for both.
To function in gas exchange, lungs require a blood supply. In cartilaginous fishes and teleosts, the heart lies low in the body and pumps blood forward through the ventral aorta, which splits up in a series of paired aortic arches, each corresponding to a gill arch. The aortic arches then merge above the gills to form a dorsal aorta supplying the body with oxygenated blood. In lungfishes, bowfin and bichirs, the swim bladder is supplied with blood by paired pulmonary arteries branching off from the hindmost (6th) aortic arch. The same basic pattern is found in the lungfish Protopterus and in terrestrial salamanders, and was probably the pattern found in the tetrapods' immediate ancestors as well as the first tetrapods. In most other bony fishes the swim bladder is supplied with blood by the dorsal aorta.
The breath
For the lungs to allow gas exchange, they first need to have gas in them. In modern tetrapods, three important breathing mechanisms are conserved from early ancestors, the first being a CO2/H+ detection system. In modern tetrapod breathing, the impulse to take a breath is triggered by a buildup of CO2 in the bloodstream, not by a lack of O2. A similar CO2/H+ detection system is found in all Osteichthyes, which implies that the last common ancestor of all Osteichthyes needed this sort of detection system. The second mechanism is a surfactant system in the lungs to facilitate gas exchange. This is also found in all Osteichthyes, even those that are almost entirely aquatic. The highly conserved nature of this system suggests that even aquatic Osteichthyes have some need for a surfactant system, which may seem strange as there is no gas underwater. The third mechanism is the actual motion of the breath. This mechanism predates the last common ancestor of Osteichthyes, as it can be observed in Lampetra camtshatica, the sister clade to Osteichthyes. In lampreys, this mechanism takes the form of a "cough", where the lamprey shakes its body to allow water flow across its gills. When CO2 levels in the lamprey's blood climb too high, a signal is sent to a central pattern generator that causes the lamprey to "cough" and allow CO2 to leave its body. This linkage between the CO2 detection system and the central pattern generator is extremely similar to the linkage between these two systems in tetrapods, which implies homology.
External and internal nares
The nostrils in most bony fish differ from those of tetrapods. Normally, bony fish have four nares (nasal openings), one naris behind the other on each side. As the fish swims, water flows into the forward pair, across the olfactory tissue, and out through the posterior openings. This is true not only of ray-finned fish but also of the coelacanth, a fish included in the Sarcopterygii, the group that also includes the tetrapods. In contrast, the tetrapods have only one pair of nares externally but also sport a pair of internal nares, called choanae, allowing them to draw air through the nose. Lungfish are also sarcopterygians with internal nostrils, but these are sufficiently different from tetrapod choanae that they have long been recognized as an independent development.
The evolution of the tetrapods' internal nares was hotly debated in the 20th century. The internal nares could be one set of the external ones (usually presumed to be the posterior pair) that have migrated into the mouth, or the internal pair could be a newly evolved structure. To make way for a migration, however, the two tooth-bearing bones of the upper jaw, the maxilla and the premaxilla, would have to separate to let the nostril through and then rejoin; until recently, there was no evidence for a transitional stage, with the two bones disconnected. Such evidence is now available: a small lobe-finned fish called Kenichthys, found in China and dated at around 395 million years old, represents evolution "caught in mid-act", with the maxilla and premaxilla separated and an aperture—the incipient choana—on the lip in between the two bones. Kenichthys is more closely related to tetrapods than is the coelacanth, which has only external nares; it thus represents an intermediate stage in the evolution of the tetrapod condition. The reason for the evolutionary movement of the posterior nostril from the nose to lip, however, is not well understood.
Into the shallows
The relatives of Kenichthys soon established themselves in the waterways and brackish estuaries and became the most numerous of the bony fishes throughout the Devonian and most of the Carboniferous.
The basic anatomy of the group is well known thanks to the very detailed work on Eusthenopteron by Erik Jarvik in the second half of the 20th century. The bones of the skull roof were broadly similar to those of early tetrapods and the teeth had an infolding of the enamel similar to that of labyrinthodonts. The paired fins had a build with bones distinctly homologous to the humerus, ulna, and radius in the fore-fins and to the femur, tibia, and fibula in the pelvic fins.
There were a number of families: Rhizodontida, Canowindridae, Elpistostegidae, Megalichthyidae, Osteolepidae and Tristichopteridae. Most were open-water fishes, and some grew to very large sizes, with adult specimens several metres in length. The rhizodontid Rhizodus is estimated to have grown to , making it the largest freshwater fish known.
While most of these were open-water fishes, one group, the Elpistostegalians, adapted to life in the shallows. They evolved flat bodies for movement in very shallow water, and the pectoral and pelvic fins took over as the main propulsion organs. Most median fins disappeared, leaving only a protocercal tailfin. Since the shallows were subject to occasional oxygen deficiency, the ability to breathe atmospheric air with the swim bladder became increasingly important. The spiracle became large and prominent, enabling these fishes to draw air.
Skull morphology
The tetrapods have their root in the early Devonian tetrapodomorph fish. Primitive tetrapods developed from an osteolepid tetrapodomorph lobe-finned fish (sarcopterygian-crossopterygian), with a two-lobed brain in a flattened skull. The coelacanth group represents marine sarcopterygians that never acquired these shallow-water adaptations. The sarcopterygians apparently took two different lines of descent and are accordingly separated into two major groups: the Actinistia (including the coelacanths) and the Rhipidistia (which include extinct lines of lobe-finned fishes that evolved into the lungfish and the tetrapodomorphs).
From fins to feet
The oldest known tetrapodomorph is Kenichthys from China, dated at around 395 million years old. Two of the earliest tetrapodomorphs, dating from 380 Ma, were Gogonasus and Panderichthys. They had choanae and used their fins to move through tidal channels and shallow waters choked with dead branches and rotting plants. Their fins could have been used to attach themselves to plants or similar while they were lying in ambush for prey. The universal tetrapod characteristics of front limbs that bend forward from the elbow and hind limbs that bend backward from the knee can plausibly be traced to early tetrapods living in shallow water. Pelvic bone fossils from Tiktaalik show, if representative of early tetrapods in general, that hind appendages and pelvic-propelled locomotion originated in water before terrestrial adaptations.
Another indication that feet and other tetrapod traits evolved while the animals were still aquatic is how they were feeding. They did not have the modifications of the skull and jaw that allowed them to swallow prey on land. Prey could be caught in the shallows, at the water's edge or on land, but had to be eaten in water where hydrodynamic forces from the expansion of their buccal cavity would force the food into their esophagus.
It has been suggested that the evolution of the tetrapod limb from fins in lobe-finned fishes is related to expression of the HOXD13 gene or the loss of the proteins actinodin 1 and actinodin 2, which are involved in fish fin development. Robot simulations suggest that the necessary nervous circuitry for walking evolved from the nerves governing swimming, utilizing the sideways oscillation of the body with the limbs primarily functioning as anchoring points and providing limited thrust. This type of movement, as well as changes to the pectoral girdle similar to those seen in the fossil record, can be induced in bichirs by raising them out of water.
A 2012 study using 3D reconstructions of Ichthyostega concluded that it was incapable of typical quadrupedal gaits. The limbs could not move alternately, as they lacked the necessary range of rotary motion. In addition, the hind limbs lacked the necessary pelvic musculature for hindlimb-driven land movement. Their most likely method of terrestrial locomotion is that of synchronous "crutching motions", similar to modern mudskippers. Videos of mudskippers "walking" show that they pull themselves forward with both pectoral fins moving simultaneously rather than alternately: the fins are brought forward and planted, the shoulders then rotate rearward, advancing the body and dragging the tail as a third point of contact, with no rear limbs or fins involved and no significant flexure of the spine.
Denizens of the swamp
The first tetrapods probably evolved in coastal and brackish marine environments, and in shallow and swampy freshwater habitats. Formerly, researchers thought the timing was towards the end of the Devonian. In 2010, this belief was challenged by the discovery of the oldest known tetrapod tracks named the Zachelmie trackways, preserved in marine sediments of the southern coast of Laurasia, now Świętokrzyskie (Holy Cross) Mountains of Poland. They were made during the Eifelian age, early Middle Devonian. The tracks, some of which show digits, date to about 395 million years ago—18 million years earlier than the oldest known tetrapod body fossils. Additionally, the tracks show that the animal was capable of thrusting its arms and legs forward, a type of motion that would have been impossible in tetrapodomorph fish like Tiktaalik. The animal that produced the tracks is estimated to have been up to long with footpads up to wide, although most tracks are only wide.
The new finds suggest that the first tetrapods may have lived as opportunists on the tidal flats, feeding on marine animals that were washed up or stranded by the tide. Currently, however, fish are stranded in significant numbers only at certain times of year, as in alewife spawning season; such strandings could not provide a significant supply of food for predators. There is no reason to suppose that Devonian fish were less prudent than those of today. According to Melina Hale of University of Chicago, not all ancient trackways are necessarily made by early tetrapods, but could also be created by relatives of the tetrapods who used their fleshy appendages in a similar substrate-based locomotion.
Palaeozoic tetrapods
Devonian tetrapods
Research by Jennifer A. Clack and her colleagues showed that the very earliest tetrapods, animals similar to Acanthostega, were wholly aquatic and quite unsuited to life on land. This is in contrast to the earlier view that fish had first invaded the land — either in search of prey (like modern mudskippers) or to find water when the pond they lived in dried out — and later evolved legs, lungs, etc.
By the late Devonian, land plants had stabilized freshwater habitats, allowing the first wetland ecosystems to develop, with increasingly complex food webs that afforded new opportunities. Freshwater habitats were not the only places to find water filled with organic matter and dense vegetation near the water's edge. Swampy habitats like shallow wetlands, coastal lagoons and large brackish river deltas also existed at this time, and there is much to suggest that this is the kind of environment in which the tetrapods evolved. Early fossil tetrapods have been found in marine sediments, and because fossils of primitive tetrapods in general are found scattered all around the world, they must have spread by following the coastlines; they could not have lived in freshwater only.
One analysis from the University of Oregon suggests no evidence for the "shrinking waterhole" theory — transitional fossils are not associated with evidence of shrinking puddles or ponds — and indicates that such animals would probably not have survived short treks between depleted waterholes. The new theory suggests instead that proto-lungs and proto-limbs were useful adaptations to negotiate the environment in humid, wooded floodplains.
The Devonian tetrapods went through two major bottlenecks during what is known as the Late Devonian extinction: one at the end of the Frasnian stage, and one twice as large at the end of the following Famennian stage. These extinction events led to the disappearance of primitive tetrapods with fish-like features, like Ichthyostega, and of their primarily aquatic relatives. When tetrapods reappear in the fossil record after the Devonian extinctions, the adult forms are all fully adapted to a terrestrial existence, with later species secondarily adapted to an aquatic lifestyle.
Lungs
It is now clear that the common ancestor of the bony fishes (Osteichthyes) had a primitive air-breathing lung—later evolved into a swim bladder in most actinopterygians (ray-finned fishes). This suggests that crossopterygians evolved in warm shallow waters, using their simple lung when the oxygen level in the water became too low.
Fleshy lobe-fins supported on bones rather than ray-stiffened fins seem to have been an ancestral trait of all bony fishes (Osteichthyes). The lobe-finned ancestors of the tetrapods evolved them further, while the ancestors of the ray-finned fishes (Actinopterygii) evolved their fins in a different direction. The most primitive group of actinopterygians, the bichirs, still have fleshy frontal fins.
Fossils of early tetrapods
Nine genera of Devonian tetrapods have been described, several known mainly or entirely from lower jaw material. All but one were from the Laurasian supercontinent, which comprised Europe, North America and Greenland. The only exception is a single Gondwanan genus, Metaxygnathus, which has been found in Australia.
The first Devonian tetrapod identified from Asia was recognized from a fossil jawbone reported in 2002. The Chinese tetrapod Sinostega pani was discovered among fossilized tropical plants and lobe-finned fish in the red sandstone sediments of the Ningxia Hui Autonomous Region of northwest China. This finding substantially extended the geographical range of these animals and has raised new questions about the worldwide distribution and great taxonomic diversity they achieved within a relatively short time.
These earliest tetrapods were not terrestrial. The earliest confirmed terrestrial forms are known from the early Carboniferous deposits, some 20 million years later. Still, they may have spent very brief periods out of water and would have used their legs to paw their way through the mud.
Why they went onto land in the first place is still debated. One reason could be that small juveniles, having completed their metamorphosis, were equipped to make use of what land had to offer. Already adapted to breathe air and to move around in shallow waters near land for protection (just as modern fish and amphibians often spend the first part of their life in the comparative safety of shallow waters such as mangrove forests), they sat at the diffuse boundary between two very different, partially overlapping niches. One of these was overcrowded and dangerous, while the other was much safer and much less crowded, offering less competition over resources. The terrestrial niche was also a far more challenging place for primarily aquatic animals, but because of the way evolution and selection pressure work, those juveniles that could take advantage of it would be rewarded. Once they had gained a small foothold on land, thanks to their pre-adaptations, favourable variations in their descendants would gradually result in continuing evolution and diversification.
At this time the abundance of invertebrates crawling around on land and near water, in moist soil and wet litter, offered a food supply. Some were even big enough to eat small tetrapods, but the land was free from dangers common in the water.
From water to land
Initially making only tentative forays onto land, tetrapods adapted to terrestrial environments over time and spent longer periods away from the water. It is also possible that the adults started to spend some time on land (as the skeletal modifications in early tetrapods such as Ichthyostega suggests) to bask in the sun close to the water's edge, while otherwise being mostly aquatic.
However, recent microanatomical and histological analysis of tetrapod fossil specimens found that early tetrapods like Acanthostega were fully aquatic, suggesting that adaptation to land happened later.
Research by Per Ahlberg and colleagues suggest that tides could have been a driving force for the evolution of tetrapods. The hypothesis proposes that as "the tide retreated, fishes became stranded in shallow water tidal-pool environments, where they would be subjected to raised temperatures and hypoxic conditions" and then fishes that developed "efficient air-breathing organs, as well as for appendages adapted for land navigation" would be selected.
Carboniferous tetrapods
Until the 1990s, there was a 30-million-year gap in the fossil record between the late Devonian tetrapods and the reappearance of tetrapod fossils in recognizable mid-Carboniferous amphibian lineages. This interval, referred to as "Romer's Gap" after the palaeontologist who recognized it, now covers the period from about 360 to 345 million years ago (the Devonian-Carboniferous transition and the early Mississippian).
During the "gap", tetrapod backbones developed, as did limbs with digits and other adaptations for terrestrial life. Ears, skulls and vertebral columns all underwent changes too. The number of digits on hands and feet became standardized at five, as lineages with more digits died out. Thus, those very few tetrapod fossils found in this "gap" are all the more prized by palaeontologists because they document these significant changes and clarify their history.
The transition from an aquatic, lobe-finned fish to an air-breathing amphibian was a significant and fundamental one in the evolutionary history of the vertebrates. Moving from a gravity-neutral aqueous environment to one that requires an organism to support its entire weight and to possess a mechanism to mitigate dehydration demanded significant adaptations or exaptations within the overall body plan, both in form and in function. Eryops, an example of an animal that made such adaptations, refined many of the traits found in its fish ancestors. Sturdy limbs supported and transported its body while out of water. A thicker, stronger backbone prevented its body from sagging under its own weight. Also, through the reshaping of vestigial fish jaw bones, a rudimentary middle ear began developing to connect to the piscine inner ear, allowing Eryops to amplify, and so better sense, airborne sound.
By the Visean (mid Early Carboniferous) stage, the early tetrapods had radiated into at least three or four main branches. Some of these different branches represent the ancestors of all living tetrapods; this means that the common ancestor of all living tetrapods likely lived in the early Carboniferous. Under a narrow cladistic definition of Tetrapoda (also known as crown-Tetrapoda), which only includes descendants of this common ancestor, tetrapods first appeared in the Carboniferous. Recognizable early tetrapods (in the broad sense) are representative of the temnospondyls (e.g. Eryops), the lepospondyls (e.g. Diplocaulus), the anthracosaurs, which were the relatives and ancestors of the Amniota, and possibly the baphetids, which are thought to be related to temnospondyls and whose status as a main branch is as yet unresolved. Depending on which authorities one follows, modern amphibians (frogs, salamanders and caecilians) are most probably derived from either temnospondyls or lepospondyls (or possibly both, although this is now a minority position).
The first amniotes (clade of vertebrates that today includes reptiles, mammals, and birds) are known from the early part of the Late Carboniferous. By the Triassic, this group had already radiated into the earliest mammals, turtles, and crocodiles (lizards and birds appeared in the Jurassic, and snakes in the Cretaceous). This contrasts sharply with the (possibly fourth) Carboniferous group, the baphetids, which have left no extant surviving lineages.
Carboniferous rainforest collapse
Amphibians and reptiles were strongly affected by the Carboniferous rainforest collapse (CRC), an extinction event that occurred ~307 million years ago. The Carboniferous period has long been associated with thick, steamy swamps and humid rainforests. Since plants form the base of almost all of Earth's ecosystems, any changes in plant distribution have always affected animal life to some degree. The sudden collapse of the vital rainforest ecosystem profoundly affected the diversity and abundance of the major tetrapod groups that relied on it. The CRC, part of one of the two most devastating plant extinctions in Earth's history, was a self-reinforcing and very rapid change of environment in which the worldwide climate became much drier and cooler overall (although much new work is being done to better understand the fine-grained climate changes across the Carboniferous-Permian transition and how they arose).
The ensuing worldwide plant reduction resulting from the difficulties plants encountered in adjusting to the new climate caused a progressive fragmentation and collapse of rainforest ecosystems. This reinforced and so further accelerated the collapse by sharply reducing the amount of animal life which could be supported by the shrinking ecosystems at that time. The outcome of this animal reduction was a crash in global carbon dioxide levels, which impacted the plants even more. The aridity and temperature drop which resulted from this runaway plant reduction and decrease in a primary greenhouse gas caused the Earth to rapidly enter a series of intense Ice Ages.
This impacted amphibians in particular in a number of ways. The enormous drop in sea level due to greater quantities of the world's water being locked into glaciers profoundly affected the distribution and size of the semiaquatic ecosystems which amphibians favored, and the significant cooling of the climate further narrowed the amount of new territory favorable to amphibians. Given that among the hallmarks of amphibians are an obligatory return to a body of water to lay eggs, a delicate skin prone to desiccation (thereby often requiring the amphibian to be relatively close to water throughout its life), and a reputation of being a bellwether species for disrupted ecosystems due to the resulting low resilience to ecological change, amphibians were particularly devastated, with the Labyrinthodonts among the groups faring worst. In contrast, reptiles - whose amniotic eggs have a membrane that enables gas exchange out of water, and which thereby can be laid on land - were better adapted to the new conditions. Reptiles invaded new niches at a faster rate and began diversifying their diets, becoming herbivorous and carnivorous, rather than feeding exclusively on insects and fish. Meanwhile, the severely impacted amphibians simply could not out-compete reptiles in mastering the new ecological niches, and so were obligated to pass the tetrapod evolutionary torch to the increasingly successful and swiftly radiating reptiles.
Permian tetrapods
In the Permian period, early "amphibian" (labyrinthodont) clades included the temnospondyls and anthracosaurs, while amniote clades included the Sauropsida and the Synapsida. Sauropsida would eventually evolve into today's reptiles and birds, whereas Synapsida would evolve into today's mammals. During the Permian, however, the distinction was less clear, with amniote fauna typically described as either reptiles or mammal-like reptiles. The latter (the synapsids) were the most important and successful Permian animals.
The end of the Permian saw a major turnover in fauna during the Permian–Triassic extinction event, probably the most severe mass extinction event of the Phanerozoic. There was a protracted loss of species due to multiple extinction pulses, and many of the once large and diverse groups died out or were greatly reduced.
Mesozoic tetrapods
Life on Earth seemed to recover quickly after the Permian extinctions, though this was mostly in the form of disaster taxa such as the hardy Lystrosaurus. Specialized animals that formed complex ecosystems, with high biodiversity, complex food webs and a variety of niches, took much longer to recover. Current research indicates that this long recovery was due to successive waves of extinction, which inhibited recovery, and to prolonged environmental stress that continued into the Early Triassic. Recent research indicates that recovery did not begin until the start of the mid-Triassic, four to six million years after the extinction, and some writers estimate that the recovery was not complete until 30 million years after the P-Tr extinction, i.e. in the late Triassic.
A small group of reptiles, the diapsids, began to diversify during the Triassic, notably the dinosaurs. By the late Mesozoic, the large labyrinthodont groups that first appeared during the Paleozoic such as temnospondyls and reptile-like amphibians had gone extinct. All current major groups of sauropsids evolved during the Mesozoic, with birds first appearing in the Jurassic as a derived clade of theropod dinosaurs. Many groups of synapsids such as anomodonts and therocephalians that once comprised the dominant terrestrial fauna of the Permian also became extinct during the Mesozoic; during the Triassic, however, one group (Cynodontia) gave rise to the descendant taxon Mammalia, which survived through the Mesozoic to later diversify during the Cenozoic.
Cenozoic tetrapods
The Cenozoic era began with the end of the Mesozoic era and the Cretaceous period, and continues to this day. The beginning of the Cenozoic was marked by the Cretaceous-Paleogene extinction event, during which all non-avian dinosaurs became extinct. The Cenozoic is sometimes called the "Age of Mammals".
During the Mesozoic, the prototypical mammal was a small nocturnal insectivore, something like a tree shrew. Due to their nocturnal habits, most mammals lost their color vision and greatly improved their senses of smell and hearing. All mammals of today are shaped by this origin. Primates and some Australian marsupials later re-evolved color vision.
During the Paleocene and Eocene, most mammals remained small (under 20 kg). Cooling climate in the Oligocene and Miocene, and the expansion of grasslands favored the evolution of larger mammalian species.
Ratites run, and penguins swim and waddle, but the majority of birds are rather small and can fly. Some birds use their ability to fly to complete epic globe-crossing migrations, while others such as frigate birds fly over the oceans for months on end.
Bats have also taken flight, and along with cetaceans have developed echolocation or sonar.
Whales, seals, manatees, and sea otters have returned to the ocean and an aquatic lifestyle.
Vast herds of ruminant ungulates populate the grasslands and forests. Carnivores have evolved to keep the herd-animal populations in check.
Extant (living) tetrapods
Following the great faunal turnover at the end of the Mesozoic, only seven groups of tetrapods were left, one of which, the Choristodera, became extinct 11 million years ago for unknown reasons. The other six persist today, though each also includes many extinct members:
Lissamphibia: frogs and toads, salamanders, and caecilians
Testudines: turtles, tortoises and terrapins
Lepidosauria: tuataras, lizards, amphisbaenians and snakes
Crocodilia: crocodiles, alligators, caimans and gharials
Neornithes: extant birds
Mammalia: mammals
Valence and conduction bands

In solid-state physics, the valence band and conduction band are the bands closest to the Fermi level, and thus determine the electrical conductivity of the solid. In nonmetals, the valence band is the highest range of electron energies in which electrons are normally present at absolute zero temperature, while the conduction band is the lowest range of vacant electronic states. On a graph of the electronic band structure of a semiconducting material, the valence band is located below the Fermi level, while the conduction band is located above it.
The distinction between the valence and conduction bands is meaningless in metals, because conduction occurs in one or more partially filled bands that take on the properties of both the valence and conduction bands.
Band gap
In semiconductors and insulators the two bands are separated by a band gap, while in conductors the bands overlap. A band gap is an energy range in a solid where no electron states can exist due to the quantization of energy. Within the concept of bands, the energy gap between the valence band and the conduction band is the band gap. Electrical conductivity of non-metals is determined by the susceptibility of electrons to be excited from the valence band to the conduction band.
Electrical conductivity
(Figure: semiconductor band structure.) See electrical conduction and semiconductor for a more detailed description of band structure.
In solids, the ability of electrons to act as charge carriers depends on the availability of vacant electronic states. This allows the electrons to increase their energy (i.e., accelerate) when an electric field is applied. Similarly, holes (empty states) in the almost filled valence band also allow for conductivity.
As such, the electrical conductivity of a solid depends on its capability to move electrons from the valence band to the conduction band. Hence, in the case of a semimetal, where the bands overlap, the electrical conductivity is high. If there is a small band gap (Eg), then the flow of electrons from the valence to the conduction band is possible only if external energy (thermal or otherwise) is supplied; materials with a small Eg are called semiconductors. If Eg is sufficiently high that the flow of electrons from the valence to the conduction band is negligible under normal conditions, the materials are called insulators.
There is some conductivity in semiconductors, however. This is due to thermal excitation—some of the electrons get enough energy to jump the band gap in one go. Once they are in the conduction band, they can conduct electricity, as can the hole they left behind in the valence band. The hole is an empty state that allows electrons in the valence band some degree of freedom.
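The strength of this thermal excitation falls off exponentially with the size of the band gap. As a rough illustration, here is a minimal sketch in Python, assuming the simple Boltzmann approximation exp(-Eg/2kT) for the scale of the intrinsic carrier population; the band-gap values used are standard textbook figures, not taken from this article:

 import math
 
 # Boltzmann factor exp(-Eg / (2*k*T)) gives the rough scale of the
 # intrinsic carrier population thermally excited across a band gap Eg.
 K_B = 8.617e-5   # Boltzmann constant in eV/K
 T = 300.0        # room temperature in kelvin
 
 band_gaps_ev = {"germanium": 0.67, "silicon": 1.12, "diamond": 5.47}
 
 for material, eg in band_gaps_ev.items():
     factor = math.exp(-eg / (2 * K_B * T))
     print(f"{material}: Eg = {eg} eV, Boltzmann factor ~ {factor:.1e}")

On these numbers the factor spans roughly forty orders of magnitude between germanium and diamond, which is why a 0.7 eV material conducts measurably at room temperature while a 5.5 eV material is, for practical purposes, an insulator.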
Band edge shifts of semiconductor nanoparticles
The shifting of the conduction and/or valence band edge with particle size is a phenomenon studied in the field of semiconductor nanocrystals. The relevant size limit is the effective Bohr radius of the exciton in the material: when the nanocrystal radius falls below this limit, the exciton is confined, optical transitions become discrete, and the conduction and/or valence band edges shift to higher energy levels. As a result of this edge shifting, the width of the conduction and/or valence band is decreased. This size-dependent edge shifting can provide plenty of useful information regarding the size or concentration of semiconductor nanoparticles or their band structures.
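The article does not commit to a quantitative model for this shift; one widely used first approximation is the Brus effective-mass model, sketched below. The CdSe-like parameters are illustrative textbook values, not figures from this article, and the model is only a rough guide at very small radii:

 import math
 
 # Brus effective-mass approximation for the size-dependent band gap of a
 # semiconductor nanocrystal: confinement raises the gap as ~1/R^2, while
 # electron-hole Coulomb attraction lowers it as ~1/R.
 HBAR = 1.0546e-34      # reduced Planck constant, J*s
 M_E = 9.109e-31        # electron rest mass, kg
 E_CHARGE = 1.602e-19   # elementary charge, C
 EPS0 = 8.854e-12       # vacuum permittivity, F/m
 
 def brus_gap_ev(radius_m, eg_bulk_ev=1.74, m_e_eff=0.13, m_h_eff=0.45, eps_r=10.6):
     """Approximate band gap (eV) of a nanocrystal; defaults are CdSe-like values."""
     confinement = (HBAR**2 * math.pi**2 / (2 * radius_m**2)) * (
         1 / (m_e_eff * M_E) + 1 / (m_h_eff * M_E))
     coulomb = 1.8 * E_CHARGE**2 / (4 * math.pi * eps_r * EPS0 * radius_m)
     return eg_bulk_ev + (confinement - coulomb) / E_CHARGE
 
 for r_nm in (2, 3, 5):
     print(f"R = {r_nm} nm -> Eg ~ {brus_gap_ev(r_nm * 1e-9):.2f} eV")

With these values the gap widens from the 1.74 eV bulk figure to roughly 2.5 eV at a 2 nm radius, reproducing the blue shift that makes the optical absorption edge a convenient probe of nanoparticle size.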
Pathogenic Escherichia coli

Escherichia coli (commonly abbreviated E. coli) is a gram-negative, rod-shaped bacterium that is commonly found in the lower intestine of warm-blooded organisms (endotherms). Most E. coli strains are harmless, but pathogenic varieties cause serious food poisoning, septic shock, meningitis, or urinary tract infections in humans. Unlike normal flora E. coli, the pathogenic varieties produce toxins and other virulence factors that enable them to reside in parts of the body normally not inhabited by E. coli, and to damage host cells. These pathogenic traits are encoded by virulence genes carried only by the pathogens.
Introduction
E. coli and related bacteria constitute about 0.1% of gut flora, and fecal–oral transmission is the major route through which pathogenic strains of the bacterium cause disease. Cells are able to survive outside the body for only a limited amount of time, which makes them ideal indicator organisms to test environmental samples for fecal contamination. The bacterium can also be grown easily and inexpensively in a laboratory setting, and has been intensively investigated for over 60 years. E. coli is the most widely studied prokaryotic model organism, and an important species in the fields of biotechnology and microbiology, where it has served as the host organism for the majority of work with recombinant DNA.
German paediatrician and bacteriologist Theodor Escherich discovered E. coli in 1885, and it is now classified as part of the Gammaproteobacterial family Enterobacteriaceae.
Serotypes
Pathogenic E. coli strains can be categorized based on elements that can elicit an immune response in animals, namely:
O antigen: part of lipopolysaccharide layer
K antigen: capsule
H antigen: flagellin
For example, E. coli strain EDL933 is of the O157:H7 group.
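A serotype designation is thus a structured label built from these antigen codes. As a quick illustration, the helper below splits such a designation into its components; it is a hypothetical function written for this explanation, not part of any standard library:

 # Hypothetical helper: split an E. coli serotype designation such as
 # "O157:H7" into its antigen components (O = LPS, K = capsule, H = flagellin).
 def parse_serotype(designation):
     antigens = {}
     for part in designation.split(":"):
         antigens[part[0]] = part
     return antigens
 
 print(parse_serotype("O157:H7"))    # {'O': 'O157', 'H': 'H7'}
 print(parse_serotype("O18:K1:H7"))  # {'O': 'O18', 'K': 'K1', 'H': 'H7'}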
O antigen
The outer membrane of an E. coli cell contains millions of lipopolysaccharide (LPS) molecules, each of which consists of:
O antigen, a polymer of immunogenic repeating oligosaccharides (1–40 units)
Core region of phosphorylated nonrepeating oligosaccharides
Lipid A (endotoxin)
The O antigen is used for serotyping E. coli, and these O group designations run from O1 to O181, with the exception of some groups that have been historically removed, namely O31, O47, O67, O72, O93 (now K84), O94, and O122; groups 174 to 181 are provisional (O174=OX3 and O175=OX7) or are under investigation (176 to 181 are STEC/VTEC). Additionally, subtypes exist for many O groups (e.g. O128ab and O128ac).
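As a simple worked check of this bookkeeping (a sketch based only on the figures quoted above), the remaining O-group labels can be enumerated by removing the deleted groups from the nominal O1–O181 range:

 # Sketch: enumerate nominal O-group labels O1..O181 and drop the seven
 # historically removed groups named above (O174-O181 are kept here even
 # though the text notes they are provisional or under investigation).
 removed = {31, 47, 67, 72, 93, 94, 122}
 o_groups = [f"O{n}" for n in range(1, 182) if n not in removed]
 print(len(o_groups))  # 174 designations remain in use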
Antibodies towards several O antigens cross-react with other O antigens and partially to K antigens not only from E. coli, but also from other Escherichia species and Enterobacteriaceae species.
The O antigen is encoded by the rfb gene cluster, while the rol (cld) gene encodes the regulator of lipopolysaccharide O-chain length.
K antigen
The acidic capsular polysaccharide (CPS) is a thick, mucus-like layer of polysaccharide that surrounds some pathogenic E. coli.
There are two separate groups of K antigens, named group I and group II, while a small in-between subset (K3, K10, and K54/K96) has been classified as group III. The former (I) consist of 100 kDa (large) capsular polysaccharides, while the latter (II), associated with extraintestinal diseases, are under 50 kDa in size.
Group I K antigens are found only with certain O antigens (O8, O9, O20, and O101 groups). They are further subdivided on the basis of the absence (IA, similar in structure to those of Klebsiella species) or presence (IB) of amino sugars. Some group I K antigens are attached to the lipid A-core of the lipopolysaccharide (KLPS) in a similar way to O antigens; being structurally identical to O antigens in some instances, these are considered K antigens only when co-expressed with another authentic O antigen.
Group II K antigens closely resemble those of gram-positive bacteria, differ greatly in composition, and are further subdivided according to their acidic components; generally 20–50% of the CPS chains are bound to phospholipids.
In total there are 60 different K antigens that have been recognized (K1, K2a/ac, K3, K4, K5, K6, K7 (=K56), K8, K9 (=O104), K10, K11, K12 (K82), K13(=K20 and =K23), K14, K15, K16, K18a, K18ab (=K22), K19, K24, K26, K27, K28, K29, K30, K31, K34, K37, K39, K40, K41, K42, K43, K44, K45, K46, K47, K49 (O46), K50, K51, K52, K53, K54 (=K96), K55, K74, K84, K85ab/ac (=O141), K87 (=O32), K92, K93, K95, K97, K98, K100, K101, K102, K103, KX104, KX105, and KX106).
H antigen
The H antigen is a major component of flagella, involved in E. coli movement. It is generally encoded by the fliC gene.
There are 53 identified H antigens, numbered from H1 to H56 (H13 and H22 were not E. coli antigens but from Citrobacter freundii, and H50 was found to be the same as H10).
Role in disease
In humans and in domestic animals, virulent strains of E. coli can cause various diseases.
In humans: gastroenteritis, urinary tract infection, and neonatal meningitis. In rarer cases, virulent strains are also responsible for hemolytic-uremic syndrome, peritonitis, mastitis, gram-negative pneumonia and sepsis.
Gastrointestinal infection
Certain strains of E. coli, such as O157:H7, O104:H4, O121, O26, O103, O111, O145, and O104:H21, produce potentially lethal toxins. Food poisoning caused by E. coli can result from eating unwashed vegetables or poorly butchered and undercooked meat.
O157:H7 is also notorious for causing serious and even life-threatening complications such as hemolytic-uremic syndrome. This particular strain is linked to the 2006 United States E. coli outbreak due to fresh spinach.
The O104:H4 strain is equally virulent. Antibiotic and supportive treatment protocols for it are not as well-developed, as it has the ability to be very enterohemorrhagic like O157:H7, causing bloody diarrhea, but also is more enteroaggregative, meaning it adheres well and clumps to intestinal membranes. It is the strain behind the deadly June 2011 E. coli outbreak in Europe. Severity of the illness varies considerably; it can be fatal, particularly to young children, the elderly or the immunocompromised, but is more often mild.
Earlier, poor hygienic methods of preparing meat in Scotland killed seven people in 1996 due to E. coli poisoning, and left hundreds more infected.
E. coli can harbour both heat-stable and heat-labile enterotoxins. The latter, termed LT, contain one A subunit and five B subunits arranged into one holotoxin, and are highly similar in structure and function to cholera toxins. The B subunits assist in adherence and entry of the toxin into host intestinal cells, while the A subunit is cleaved and prevents cells from absorbing water, causing diarrhea. LT is secreted by the Type 2 secretion pathway.
If E. coli bacteria escape the intestinal tract through a perforation (for example from an ulcer, a ruptured appendix, or due to a surgical error) and enter the abdomen, they usually cause peritonitis that can be fatal without prompt treatment. However, E. coli are extremely sensitive to such antibiotics as streptomycin or gentamicin. Recent research suggests treatment of enteropathogenic E. coli with antibiotics may significantly increase the chance of developing haemolytic-uremic syndrome.
Intestinal mucosa-associated E. coli are observed in increased numbers in the inflammatory bowel diseases, Crohn's disease and ulcerative colitis. Invasive strains of E. coli exist in high numbers in the inflamed tissue, and the number of bacteria in the inflamed regions correlates to the severity of the bowel inflammation.
Gastrointestinal infections can cause the body to develop memory T cells to attack gut microbes that are in the intestinal tract. Food poisoning can trigger an immune response to microbial gut bacteria. Some researchers suggest that it can lead to inflammatory bowel disease.
Virulence properties
Enteric E. coli (EC) are classified on the basis of serological characteristics and virulence properties. The major pathotypes of E. coli that cause diarrhea are listed below.
{| class="wikitable"
|-
! Name
! Hosts
!Type of diarrhea
! Description
|-
| Enterotoxigenic E. coli (ETEC)
| causative agent of diarrhea (without fever) in humans, pigs, sheep, goats, cattle, dogs, and horses
|Watery
| ETEC uses various colonization factors (CFs) to bind enterocyte cells in the small intestine. ETEC can produce two proteinaceous enterotoxins:
The larger of the two proteins, LT enterotoxin, is similar to cholera toxin in structure and function.
The smaller protein, ST enterotoxin causes cGMP accumulation in the target cells and a subsequent secretion of fluid and electrolytes into the intestinal lumen.
ETEC strains are noninvasive, and they do not leave the intestinal lumen. ETEC is the leading bacterial cause of diarrhea in children in the developing world, as well as the most common cause of traveler's diarrhea. Each year, there are estimated to be 840 million cases of ETEC in developing countries. About 280 million of these cases, as well as 325,000 deaths, are in children under the age of five.
|-
| Enteropathogenic E. coli (EPEC)
| causative agent of diarrhea in humans, rabbits, dogs, cats and horses
|Watery
| Like ETEC, EPEC also causes diarrhea, but the molecular mechanisms of colonization and aetiology are different. EPEC lack ST and LT toxins, but they use an adhesin known as intimin to bind host intestinal cells. This pathotype has an array of virulence factors that are similar to those found in Shigella. Adherence to the intestinal mucosa causes a rearrangement of actin in the host cell, causing significant deformation. EPEC cells are moderately invasive (i.e. they enter host cells) and elicit an inflammatory response. Changes in intestinal cell ultrastructure due to "attachment and effacement" is likely the prime cause of diarrhea in those afflicted with EPEC.
|-
| Enteroaggregative E. coli (EAEC)
| found only in humans
|Watery
| So named because they have fimbriae which aggregate tissue culture cells, EAEC bind to the intestinal mucosa to cause watery diarrhea without fever. EAEC are noninvasive. They produce a hemolysin and an ST enterotoxin similar to that of ETEC.
|-
| [[Enteroinvasive Escherichia coli|Enteroinvasive E. coli]] (EIEC)
| found only in humans
|Bloody or nonbloody
| EIEC infection causes a syndrome that is identical to shigellosis, with profuse diarrhea and high fever.
|-
| Enterohemorrhagic E. coli (EHEC)
| found in humans, cattle, and goats
|Bloody or nonbloody
| The most infamous member of this pathotype is strain O157:H7, which causes bloody diarrhea and no fever. EHEC can cause hemolytic-uremic syndrome and sudden kidney failure. It uses bacterial fimbriae for attachment (E. coli common pilus, ECP), is moderately invasive and possesses a phage-encoded shiga toxin that can elicit an intense inflammatory response.
|-
| Adherent-Invasive E. coli (AIEC)
| found in humans
| -
| AIEC are able to invade intestinal epithelial cells and replicate intracellularly. It is likely that AIEC are able to proliferate more effectively in hosts with defective innate immunity. They are associated with the ileal mucosa in Crohn's disease.
|}
Epidemiology of gastrointestinal infection
Transmission of pathogenic E. coli often occurs via fecal–oral transmission. Common routes of transmission include: unhygienic food preparation, farm contamination due to manure fertilization, irrigation of crops with contaminated greywater or raw sewage, feral pigs on cropland, or direct consumption of sewage-contaminated water. Dairy and beef cattle are primary reservoirs of E. coli O157:H7, and they can carry it asymptomatically and shed it in their feces. Food products associated with E. coli outbreaks include cucumber, raw ground beef, raw seed sprouts or spinach, raw milk, unpasteurized juice, unpasteurized cheese and foods contaminated by infected food workers via fecal–oral route.
According to the U.S. Food and Drug Administration, the fecal-oral cycle of transmission can be disrupted by cooking food properly, preventing cross-contamination, instituting barriers such as gloves for food workers, instituting health care policies so food industry employees seek treatment when they are ill, pasteurization of juice or dairy products and proper hand washing requirements.
Shiga toxin-producing E. coli (STEC), specifically serotype O157:H7, have also been transmitted by flies, as well as direct contact with farm animals, petting zoo animals, and airborne particles found in animal-rearing environments.
Urinary tract infection
Uropathogenic E. coli (UPEC) is responsible for approximately 90% of urinary tract infections (UTI) seen in individuals with ordinary anatomy. In ascending infections, fecal bacteria colonize the urethra and spread up the urinary tract to the bladder as well as to the kidneys (causing pyelonephritis), or the prostate in males. Because women have a shorter urethra than men, they are 14 times more likely to suffer from an ascending UTI.
Uropathogenic E. coli use P fimbriae (pyelonephritis-associated pili) to bind urinary tract urothelial cells and colonize the bladder. These adhesins specifically bind D-galactose-D-galactose moieties on the P blood-group antigen of erythrocytes and uroepithelial cells. Approximately 1% of the human population lacks this receptor, and its presence or absence dictates an individual's susceptibility or non-susceptibility, respectively, to E. coli urinary tract infections. Uropathogenic E. coli produce alpha- and beta-hemolysins, which cause lysis of urinary tract cells.
Another virulence factor commonly present in UPEC is the Dr family of adhesins, which are particularly associated with cystitis and pregnancy-associated pyelonephritis. The Dr adhesins bind Dr blood group antigen (Dra) which is present on decay accelerating factor (DAF) on erythrocytes and other cell types. There, the Dr adhesins induce the development of long cellular extensions that wrap around the bacteria, accompanied by the activation of several signal transduction cascades, including activation of PI-3 kinase.
UPEC can evade the body's innate immune defences (e.g. the complement system) by invading superficial umbrella cells to form intracellular bacterial communities (IBCs). They also have the ability to form K antigen, capsular polysaccharides that contribute to biofilm formation. Biofilm-producing E. coli are recalcitrant to immune factors and antibiotic therapy, and are often responsible for chronic urinary tract infections. K antigen-producing E. coli infections are commonly found in the upper urinary tract.
Descending infections, though relatively rare, occur when E. coli cells enter the upper urinary tract organs (kidneys, bladder or ureters) from the blood stream.
Neonatal meningitis (NMEC)
Neonatal meningitis is produced by a serotype of Escherichia coli that carries a capsular antigen called K1. Colonization of the newborn's intestines with these strains, which are present in the mother's vagina, leads to bacteremia, which in turn leads to meningitis. Because of the absence of maternal IgM antibodies (these do not cross the placenta, as FcRn only mediates the transfer of IgG), and because the body recognizes the K1 antigen as self, since it resembles cerebral glycopeptides, the result is a severe meningitis in neonates.
Possible role in colorectal cancer
Some E. coli strains contain a polyketide synthase genomic island (pks), which encodes a multi-enzymatic machinery that produces colibactin, a substance that damages DNA. About 20% of humans are colonized with E. coli that harbor the pks island. Colibactin can cause cellular senescence or cancer by damaging DNA. However, the mucosal barrier normally prevents E. coli from reaching the surface of enterocytes, and mucin production diminishes in the presence of inflammation. Only when an inflammatory condition co-occurs with E. coli infection is the bacterium able to deliver colibactin to enterocytes and induce tumorigenesis.
Animal diseases
In animals, virulent strains of E. coli are responsible for a variety of diseases, including sepsis and diarrhea in newborn calves, acute mastitis in dairy cows, and Alabama rot in dogs. In poultry, E. coli causes colibacillosis, which is also associated with chronic respiratory disease involving Mycoplasma, producing perihepatitis, pericarditis, septicaemic lungs, and peritonitis.
Most of the serotypes isolated from poultry are pathogenic only for birds, so avian sources of E. coli do not appear to be important sources of infection in other animals.
Laboratory diagnosis
Diagnosis of infectious diarrhea and identification of antimicrobial resistance is performed using a stool culture with subsequent antibiotic sensitivity testing. Culturing gastrointestinal pathogens requires a minimum of 2 days and a maximum of several weeks. The sensitivity (true positive) and specificity (true negative) rates for stool culture vary by pathogen, and a number of human pathogens cannot be cultured at all. For culture-positive samples, antimicrobial resistance testing takes an additional 12–24 hours to perform.
Current point of care molecular diagnostic tests can identify E. coli and antimicrobial resistance in the identified strains much faster than culture and sensitivity testing. Microarray-based platforms can identify specific pathogenic strains of E. coli and E. coli-specific AMR genes in two hours or less with high sensitivity and specificity, but the size of the test panel (i.e., total pathogens and antimicrobial resistance genes) is limited. Newer metagenomics-based infectious disease diagnostic platforms are currently being developed to overcome the various limitations of culture and all currently available molecular diagnostic technologies.
In stool samples, microscopy will show gram-negative rods, with no particular cell arrangement. Then, either MacConkey agar or EMB agar (or both) are inoculated with the stool. On MacConkey agar, deep red colonies are produced, as the organism is lactose-positive, and fermentation of this sugar will cause the medium's pH to drop, leading to darkening of the medium. Growth on EMB agar produces black colonies with a greenish-black metallic sheen, which is diagnostic of E. coli. The organism is also lysine-positive, and grows on a TSI slant with an (A/A/g+/H2S−) profile. Also, IMViC is {+ + − −} for E. coli, as it is indole-positive (red ring) and methyl red-positive (bright red), but VP-negative (no change; colourless) and citrate-negative (no change; green colour). Tests for toxin production can use mammalian cells in tissue culture, which are rapidly killed by Shiga toxin. Although sensitive and very specific, this method is slow and expensive.
Typically, diagnosis has been done by culturing on sorbitol-MacConkey medium and then using typing antiserum. However, current latex assays and some typing antisera have shown cross reactions with non-E. coli O157 colonies. Furthermore, not all E. coli O157 strains associated with HUS are nonsorbitol fermentors.
The Council of State and Territorial Epidemiologists recommend that clinical laboratories screen at least all bloody stools for this pathogen. The U.S. Centers for Disease Control and Prevention recommend that "all stools submitted for routine testing from patients with acute community-acquired diarrhea (regardless of patient age, season of the year, or presence or absence of blood in the stool) be simultaneously cultured for E. coli O157:H7 (O157 STEC) and tested with an assay that detects Shiga toxins to detect non-O157 STEC".
Antibiotic therapy and resistance
Bacterial infections are usually treated with antibiotics. However, the antibiotic sensitivities of different strains of E. coli vary widely. As gram-negative organisms, E. coli are resistant to many antibiotics that are effective against gram-positive organisms. Antibiotics which may be used to treat E. coli infection include amoxicillin, as well as other semisynthetic penicillins, many cephalosporins, carbapenems, aztreonam, trimethoprim-sulfamethoxazole, ciprofloxacin, nitrofurantoin and the aminoglycosides.
Antibiotic resistance is a growing problem. Some of this is due to overuse of antibiotics in humans, but some of it is probably due to the use of antibiotics as growth promoters in animal feeds. A study published in the journal Science in August 2007 found that the rate of adaptive mutations in E. coli is "on the order of 10⁻⁵ per genome per generation, which is 1,000 times as high as previous estimates," a finding which may have significance for the study and management of bacterial antibiotic resistance.
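To give a sense of scale, the short calculation below applies that measured rate to a hypothetical gut population; the population size and generation time are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope scale of the adaptive mutation rate quoted above.
# The rate is the Science (2007) figure; the population size and
# generation time are assumed round numbers for illustration only.
adaptive_rate = 1e-5       # adaptive mutations per genome per generation
population = 1e9           # assumed E. coli population in one host's gut
generations_per_day = 24   # assumed roughly one division per hour

mutants_per_day = adaptive_rate * population * generations_per_day
print(f"~{mutants_per_day:,.0f} new adaptive mutants per day")  # ~240,000
```

Even with conservative inputs, adaptive variants arise daily in large numbers, which is one reason resistance can emerge quickly under antibiotic selection pressure.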
Antibiotic-resistant E. coli may also pass on the genes responsible for antibiotic resistance to other species of bacteria, such as Staphylococcus aureus, through a process called horizontal gene transfer. E. coli bacteria often carry multiple drug resistance plasmids, and under stress, readily transfer those plasmids to other species. Mixing of species in the intestines allows E. coli to accept and transfer plasmids from and to other bacteria. Thus, E. coli and the other enterobacteria are important reservoirs of transferable antibiotic resistance.
Beta-lactamase strains
Resistance to beta-lactam antibiotics has become a particular problem in recent decades, as strains of bacteria that produce extended-spectrum beta-lactamases have become more common. These beta-lactamase enzymes make many, if not all, of the penicillins and cephalosporins ineffective as therapy. Extended-spectrum beta-lactamase–producing E. coli (ESBL E. coli) are highly resistant to an array of antibiotics, and infections by these strains are difficult to treat. In many instances, only two oral antibiotics and a very limited group of intravenous antibiotics remain effective. In 2009, a gene encoding the enzyme New Delhi metallo-beta-lactamase (abbreviated NDM-1), which confers resistance even to intravenous carbapenem antibiotics, was discovered in India and Pakistan in E. coli bacteria.
Increased concern about the prevalence of this form of "superbug" in the United Kingdom has led to calls for further monitoring and a UK-wide strategy to deal with infections and the resulting deaths. Susceptibility testing should guide treatment in all infections in which the organism can be isolated for culture.
Phage therapy
Phage therapy, the use of viruses that specifically target pathogenic bacteria, has been developed over the last 80 years, primarily in the former Soviet Union, where it was used to prevent diarrhea caused by E. coli. Presently, phage therapy for humans is available only at the Phage Therapy Center in the Republic of Georgia and in Poland. However, on January 2, 2007, the United States FDA gave OmniLytics approval to apply its E. coli O157:H7-killing phage in a mist, spray or wash on live animals that will be slaughtered for human consumption.
The enterobacteria phage T4, a highly studied phage, targets E. coli for infection.
While phage therapy as a treatment for E. coli is unavailable in the US, some commercially available dietary supplements contain strains of phage that target E. coli and have been shown to reduce E. coli load in healthy subjects. This is not considered phage therapy, however, because it does not involve selection of phages with activity against a patient's specific strain of bacterium.
Vaccination
Researchers have actively been working to develop safe, effective vaccines to lower the worldwide incidence of E. coli infection. In March 2006, a vaccine eliciting an immune response against the E. coli O157:H7 O-specific polysaccharide conjugated to recombinant exotoxin A of Pseudomonas aeruginosa (O157-rEPA) was reported to be safe in children two to five years old. Previous work had already indicated it was safe for adults. A phase III clinical trial to verify the large-scale efficacy of the treatment is planned.
In 2006, Fort Dodge Animal Health (Wyeth) introduced an effective, live, attenuated vaccine to control airsacculitis and peritonitis in chickens. The vaccine is a genetically modified avirulent vaccine that has demonstrated protection against O78 and untypeable strains.
In January 2007, the Canadian biopharmaceutical company Bioniche announced that it had developed a cattle vaccine which reduces the number of O157:H7 shed in manure by a factor of 1,000, to about 1,000 pathogenic bacteria per gram of manure.
In April 2009, a Michigan State University researcher announced he had developed a working vaccine for a strain of E. coli. Dr. Mahdi Saeed, Professor of epidemiology and infectious disease in MSU's colleges of Veterinary Medicine and Human Medicine, has applied for a patent for his discovery and has made contact with pharmaceutical companies for commercial production.
In May 2018, a team led by researchers at Washington University School of Medicine collaborated with Johns Hopkins University to conduct a study which delves deeper into the known link between blood type and the severity of E. coli infection. Results of the study showed that "the bacterium is more likely to cause severe diarrhea in people with type A blood," and this finding may aid current and future efforts to develop an effective vaccine against the pathogenic strains of E. coli. | Biology and health sciences | Bacterial infections | Health |
38058647 | https://en.wikipedia.org/wiki/Origin%20of%20the%20Moon | Origin of the Moon | The origin of the Moon is usually explained by a Mars-sized body striking the Earth, creating a debris ring that eventually collected into a single natural satellite, the Moon, but there are a number of variations on this giant-impact hypothesis, as well as alternative explanations, and research continues into how the Moon came to be formed. Other proposed scenarios include capture of an already-formed body, fission from a spinning proto-Earth, co-formation as a double system (condensation theory, synestia), and accretion from planetesimal collisions with asteroid-like bodies.
The standard giant-impact hypothesis suggests that a Mars-sized body called Theia impacted the proto-Earth, creating a large debris ring around Earth, which then accreted to form the Moon. This collision also resulted in the 23.5° tilted axis of the Earth, thus causing the seasons. The Moon's oxygen isotopic ratios seem to be essentially identical to Earth's. Oxygen isotopic ratios, which may be measured very precisely, yield a unique and distinct signature for each Solar System body. If Theia had been a separate protoplanet, it probably would have had a different oxygen isotopic signature than proto-Earth, as would the ejected mixed material. Also, the Moon's titanium isotope ratio (⁵⁰Ti/⁴⁷Ti) appears so close to the Earth's (within 4 parts per million) that little if any of the colliding body's mass could have been part of the Moon.
Formation
These theories presume that the proto-Earth had no large moon early in the formation of the Solar System, 4.425 billion years ago, when Earth was essentially rock and lava. Theia, an early protoplanet the size of Mars, hit Earth in such a way that it ejected a considerable amount of material away from Earth. Some proportion of these ejecta escaped into space, but the rest consolidated into a single spherical body in orbit about Earth, creating the Moon.
The hypothesis requires a collision between a proto-Earth about 90% of the diameter of present Earth, and another body the diameter of Mars (half of the terrestrial diameter and a tenth of its mass). The latter has sometimes been referred to as Theia, the name of the mother of Selene, the Moon goddess in Greek mythology. This size ratio is needed in order for the resulting system to have sufficient angular momentum to match the current orbital configuration. Such an impact would have put enough material into orbit around Earth to have eventually accumulated to form the Moon.
Computer simulations show a need for a glancing blow, which causes a portion of the collider to form a long arm of material that then shears off. The asymmetrical shape of the Earth following the collision then causes this material to settle into an orbit around the main mass. The energy involved in this collision is impressive: possibly trillions of tonnes of material would have been vaporized and melted, and in parts of the Earth temperatures would have reached several thousand degrees Celsius.
The Moon's relatively small iron core (compared to other rocky planets and moons in the Solar System) is explained by Theia's core mostly merging into that of Earth. The lack of volatiles in the lunar samples is also explained in part by the energy of the collision. The energy liberated during the reaccretion of material in orbit around Earth would have been sufficient to melt a large portion of the Moon, leading to the generation of a magma ocean.
The newly formed Moon orbited at about one-tenth the distance that it does today, and spiraled outward because of tidal friction transferring angular momentum from the rotations of both bodies to the Moon's orbital motion. Along the way, the Moon's rotation became tidally locked to Earth, so that one side of the Moon continually faces toward Earth. Also, the Moon would have collided with and incorporated any small preexisting satellites of Earth, which would have shared the Earth's composition, including isotopic abundances. The geology of the Moon has since been more independent of the Earth.
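To illustrate how much closer that early orbit was, Kepler's third law gives the orbital period at one-tenth of the present distance. This is a sketch using standard constants, not a figure from the sources above, and it idealizes the orbit as circular.

```python
import math

# Kepler's third law, T = 2*pi*sqrt(a^3 / GM), applied to the early Moon,
# which began at roughly one-tenth of today's Earth-Moon distance.
GM_EARTH = 3.986e14   # m^3/s^2, Earth's standard gravitational parameter
A_NOW = 3.844e8       # m, mean Earth-Moon distance today

def orbital_period_hours(a_m):
    """Circular-orbit period in hours for orbital radius a_m (metres)."""
    return 2 * math.pi * math.sqrt(a_m ** 3 / GM_EARTH) / 3600

print(f"today:      {orbital_period_hours(A_NOW) / 24:.1f} days")   # ~27.4 days
print(f"early Moon: {orbital_period_hours(A_NOW / 10):.0f} hours")  # ~21 hours
```

A "month" shorter than an Earth day implies far stronger tides, which is what powered the subsequent outward spiral.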
A 2012 study on the depletion of zinc isotopes on the Moon found evidence for volatile depletion consistent with the giant-impact origin for Earth and the Moon. In 2013, a study was released that indicated that water in lunar magma is indistinguishable from that in carbonaceous chondrites and nearly the same as that of Earth in isotopic composition.
Derivatives of the hypothesis
Although the giant-impact hypothesis explains many aspects of the Earth–Moon system, there are still a few unresolved problems, such as the Moon's volatile elements not being as depleted as expected from such an energetic impact.
Another issue is lunar and Earth isotope comparisons. In 2001, the most precise measurement yet of the isotopic signatures of Moon rocks was published. Surprisingly, the Apollo lunar samples carried an isotopic signature identical to Earth rocks, but different from other Solar System bodies. Because most of the material that went into orbit to form the Moon was thought to come from Theia, this observation was unexpected. In 2007, researchers from Caltech showed that the likelihood of Theia having an identical isotopic signature as the Earth is very small (less than 1 percent chance). Published in 2012, an analysis of titanium isotopes in Apollo lunar samples showed that the Moon has the same composition as Earth, which conflicts with the Moon forming far from Earth's orbit.
Merger of two planets
To help resolve these problems, a theory published in 2012 posits that two bodies—each five times the size of Mars—collided, then recollided, forming a large disc of mixed debris that eventually formed Earth and the Moon.
Immediate origin of the Moon as a post-impact satellite
The Moon is traditionally thought to have coalesced from the debris ejected by a giant impact onto the early Earth. However, such models struggle to explain the similar isotopic compositions of Earth and lunar rocks at the same time as the system's angular momentum, and the details of potential impact scenarios are hotly debated. Above a high resolution threshold for simulations, a study published in 2022 finds that giant impacts can immediately place a satellite with similar mass and iron content to the Moon into orbit far outside Earth's Roche limit. Even satellites that initially pass within the Roche limit can reliably and predictably survive, by being partially stripped and then torqued onto wider, stable orbits. Furthermore, the outer layers of these directly formed satellites are molten over cooler interiors and are composed of around 60% proto-Earth material. This could alleviate the tension between the Moon's Earth-like isotopic composition and the different signature expected for the impactor. Immediate formation opens up new options for the Moon's early orbit and evolution, including the possibility of a highly tilted orbit to explain the lunar inclination, and offers a simpler, single-stage scenario for the origin of the Moon.
Multiple impacts
In 2004, Russian astrophysicist Nikolai Gorkavyi proposed a novel model titled the multiple large asteroid impacts model, which found support from a notable group of Russian astronomers in 2013 and later, in 2017, by planetary researchers at Weizmann Institute of Science in Rehovot, Israel. In general terms, the main idea of the model suggests that the Moon was formed as a result of a violent rain of large asteroids (1–100 km) that repeatedly hammered the fledgling Earth over millions of years. Such a series of smaller impacts, which were probably more common in the early Solar System, could blast enough rocky Earth debris into orbit to form a protosatellite disk which later forms into a small moonlet. As repeated impacts created more balls of debris, the moonlets could merge over time into one large moon.
Synestia hypothesis
In 2018, researchers at Harvard and UC Davis developed computer models demonstrating that one possible outcome of a planetary collision is that it creates a synestia, a mass of vaporized rock and metal which forms a biconcave disc extending beyond the lunar orbit. The synestia will eventually shrink and cool to accrete the satellite and reform the impacted planet.
Other hypotheses
Capture
This hypothesis states that the Moon was captured by the Earth. This model was popular until the 1980s, and some points in its favor are the Moon's size, orbit, and tidal locking.
One problem is understanding the capture mechanism. A close encounter of two planetary bodies typically results in either collision or altered trajectories. For this hypothesis to work, there might have been a large atmosphere around the primitive Earth, which would slow the movement of the Moon by aerobraking before it could escape. The hypothesis may also explain the irregular satellite orbits of Jupiter and Saturn. However, this hypothesis does not adequately explain the essentially identical oxygen isotope ratios of the two bodies.
Fission
This is the now discredited hypothesis that an ancient, rapidly spinning Earth expelled a piece of its mass. This was first proposed by George Darwin (son of the famous biologist Charles Darwin) in 1879 and retained some popularity until Apollo. The Austrian geologist Otto Ampferer in 1925 also suggested the emerging of the Moon as cause for continental drift.
It was proposed that the Pacific Ocean represented the scar of this event. Today it is known that the oceanic crust that makes up this ocean basin is relatively young, about 200 million years old or less, whereas the Moon is much older. The Moon does not consist of oceanic crust but of mantle material, which originated inside the proto-Earth in the Precambrian.
Accretion
The hypothesis of accretion suggests that the Earth and the Moon formed together as a double system from the primordial accretion disk of the Solar System.
The problem with this hypothesis is that it does not explain the angular momentum of the Earth-Moon system or why the Moon has a relatively small iron core compared to the Earth (25% of its radius compared to 50% for the Earth).
Nuclear explosion
A more radical alternative hypothesis, published in 1997 by the Russian scientist Vladimir Anisichkin under the title "The Moon could have formed as a result of explosion of the Protoearth", proposes that the Moon may have formed from a nuclear explosion of actinides located at the solid inner core of the Earth. Dutch scientists Rob de Meijer and Wim van Westrenen suggested in 2010 that the Moon may have formed from a nuclear explosion caused by the centrifugal force of an earlier, spinning proto-Earth. The centrifugal force would have concentrated heavy elements such as thorium and uranium on the equatorial plane and at the boundary between the Earth's outer core and mantle. If the concentrations of these radioactive elements were high enough, a nuclear chain reaction could have become supercritical, causing a nuclear explosion that ejected the Moon into orbit. A natural nuclear fission reactor has been observed on Earth at a much smaller scale. The fission hypothesis can adequately explain the similarities and differences in the elemental and isotopic compositions of the Earth and the Moon.
Additional theories and studies
2011
In 2011, it was theorized that a second moon existed 4.5 billion years ago and later collided with the Moon as part of the accretion process in the Moon's formation.
2013
One hypothesis, presented only as a possibility, was that the Earth captured the Moon from Venus.
2017
Uranium–lead dating of Apollo 14 zircon fragments shows the age of the Moon to be about 4.51 billion years.
2020
A team of researchers using the Miniature Radio Frequency (Mini-RF) instrument on NASA's Lunar Reconnaissance Orbiter (LRO) spacecraft concluded that the Moon's subsurface may be richer in metals, such as iron and titanium, than scientists had believed.
In July 2020, scientists reported that the Moon formed 4.425 ± 0.025 billion years ago, about 85 million years later than previously thought, and that it hosted a magma ocean for substantially longer than previously estimated (~200 million years).
2023
On 1 November 2023, scientists reported that, according to computer simulations, remnants of the protoplanet Theia could remain inside the Earth, left over from the ancient collision that subsequently formed the Moon.
| Physical sciences | Solar System | Astronomy |
49905070 | https://en.wikipedia.org/wiki/Radioactive%20source | Radioactive source | A radioactive source is a known quantity of a radionuclide which emits ionizing radiation, typically one or more of the radiation types gamma rays, alpha particles, beta particles, and neutron radiation.
Sources can be used for irradiation, where the radiation performs a significant ionising function on a target material, or as radiation metrology sources, which are used for the calibration of radiometric process and radiation protection instrumentation. They are also used for industrial process measurements, such as thickness gauging in the paper and steel industries. Sources can be sealed in a container (highly penetrating radiation) or deposited on a surface (weakly penetrating radiation), or they can be in a fluid.
As an irradiation source, they are used in medicine for radiation therapy and in industry for applications such as industrial radiography, food irradiation, sterilization, vermin disinfestation, and irradiation cross-linking of PVC.
Radionuclides are chosen according to the type and character of the radiation they emit, the intensity of emission, and the half-life of their decay. Common source radionuclides include cobalt-60, iridium-192, and strontium-90. The SI unit of source activity is the becquerel (Bq), though the historical unit, the curie (Ci), remains in partial use, such as in the US, despite NIST strongly advising use of the SI unit. Use of the SI unit for health purposes is mandatory in the EU.
An irradiation source typically lasts for between 5 and 15 years before its activity drops below useful levels. However, sources with long half-life radionuclides, when utilised as calibration sources, can be used for much longer.
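A minimal sketch of both points, using the exact curie-to-becquerel conversion and standard half-lives; the ten-year horizon is an illustrative choice, not a regulatory limit.

```python
# Exponential decay of source activity: A(t) = A0 * (1/2)^(t / half-life).
# Half-lives below are standard published values.
HALF_LIFE_YEARS = {"cobalt-60": 5.27, "strontium-90": 28.8, "iridium-192": 0.202}
CI_TO_BQ = 3.7e10  # 1 curie = 3.7e10 becquerel, by definition

def activity_bq(a0_bq, half_life_y, t_years):
    """Remaining activity in Bq after t_years of decay."""
    return a0_bq * 0.5 ** (t_years / half_life_y)

a0 = 100 * CI_TO_BQ  # a 100 Ci source, i.e. 3.7e12 Bq
for nuclide, t_half in HALF_LIFE_YEARS.items():
    remaining_ci = activity_bq(a0, t_half, 10) / CI_TO_BQ
    print(f"{nuclide}: {remaining_ci:.2f} Ci left after 10 years")
# cobalt-60 retains ~27% after 10 years and strontium-90 ~79%, while
# iridium-192, with its ~74-day half-life, is effectively gone.
```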
Sealed sources
Many radioactive sources are sealed, meaning they are permanently either completely contained in a capsule or firmly bonded solid to a surface. Capsules are usually made of stainless steel, titanium, platinum or another inert metal. The use of sealed sources removes almost all risk of dispersion of radioactive material into the environment due to mishandling, but the container is not intended to attenuate radiation, so further shielding is required for radiation protection. Sealed sources are used in almost all applications where the source does not need to be chemically or physically included in a liquid or gas.
Categorisation of sealed sources
Sealed sources are categorised by the IAEA according to their activity in relation to a minimum dangerous source (where a dangerous source is one that could cause significant injury to humans). The ratio used is A/D, where A is the activity of the source and D is the minimum dangerous activity.
Note that sources whose radioactive output is too low to cause harm to humans (such as those used in smoke detectors) are not categorised.
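As a sketch of how the banding works in practice, the function below maps an A/D ratio to a category. The five band boundaries follow the IAEA scheme as commonly summarised, and the example D value is an order-of-magnitude figure; both should be treated as illustrative rather than authoritative.

```python
def iaea_category(activity_bq, d_value_bq):
    """Approximate IAEA source category from the A/D ratio.

    Band boundaries follow the IAEA five-category scheme as commonly
    summarised; verify against the standard before any real use.
    """
    ratio = activity_bq / d_value_bq
    if ratio >= 1000:
        return 1   # extremely dangerous to the person
    if ratio >= 10:
        return 2
    if ratio >= 1:
        return 3
    if ratio >= 0.01:
        return 4
    return 5       # most unlikely to be dangerous

# Example: a 3.7e14 Bq (10,000 Ci) cobalt-60 irradiator source, with an
# assumed D value of 3e10 Bq for Co-60 (illustrative figure only).
print(iaea_category(3.7e14, 3e10))  # -> 1
```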
Calibration sources
Calibration sources are used primarily for the calibration of radiometric instrumentation, which is used in process monitoring or in radiological protection.
Capsule sources, where the radiation effectively emits from a point, are used for beta, gamma and X-ray instrument calibration. High level sources are normally used in a calibration cell: a room with thick walls to protect the operator and the provision of remote operation of the source exposure.
The plate source is in common use for the calibration of radioactive contamination instruments. This has a known amount of radioactive material fixed to its surface, such as an alpha and/or beta emitter, to allow the calibration of large area radiation detectors used for contamination surveys and personnel monitoring. Such measurements are typically counts per unit time received by the detector, such as counts per minute or counts per second.
Unlike the capsule source, the emitting material of a plate source must be on the surface to prevent attenuation by a container or self-shielding within the source material. This is particularly important with alpha particles, which are easily stopped by even a small mass. The Bragg curve shows the attenuation effect in free air.
Unsealed sources
Unsealed sources are sources that are not in a permanently sealed container, and are used extensively for medical purposes. They are used when the source needs to be dissolved in a liquid for injection into a patient or ingestion by the patient. Unsealed sources are also used in industry in a similar manner, for leak detection as a radioactive tracer.
Disposal
Disposal of expired radioactive sources presents similar challenges to the disposal of other nuclear waste, although to a lesser degree. Spent low level sources will sometimes be sufficiently inactive that they are suitable for disposal via normal waste disposal methods — usually landfill. Other disposal methods are similar to those for higher-level radioactive waste, using various depths of borehole depending on the activity of the waste.
A notorious incident of neglect in disposing of a high level source was the Goiânia accident, which resulted in several fatalities. The Tammiku radioactive material theft involved the accidental theft of caesium-137 material in Tammiku, Estonia, in 1994.
| Physical sciences | Nuclear physics | Physics |
53769380 | https://en.wikipedia.org/wiki/Webbed%20foot | Webbed foot | The webbed foot is a specialized limb with interdigital membranes (webbings) that aids in aquatic locomotion, present in a variety of tetrapod vertebrates. This adaptation is primarily found in semiaquatic species, and has convergently evolved many times across vertebrate taxa.
It likely arose from mutations in developmental genes that normally cause tissue between the digits to apoptose. These mutations were beneficial to many semiaquatic animals because the increased surface area from the webbing allowed for greater propulsion and efficiency in swimming, especially in surface swimmers. The webbed foot has also enabled other novel behaviors such as escape responses and mating behaviors. A webbed foot may also be called a paddle, to contrast it with a more hydrofoil-like flipper.
Morphology
A webbed foot has connecting tissue between the toes of the foot. Several distinct conditions can give rise to webbed feet, including interdigital webbing and syndactyly. The webbing can consist of membrane, skin, or other connective tissue and varies widely among taxa. This modification significantly increases the surface area of the feet. One consequence of this modification in some species, notably birds, is that the feet become a major site of heat loss. To minimize this effect, birds' legs utilize countercurrent heat exchange, so that blood reaching the feet has already been cooled by blood returning to the heart. Webbed feet take on a variety of different shapes; in birds, the webbing can even be discontinuous, as seen in lobate-footed birds like grebes. However, one of the most common is the delta (Δ) or triangular shape seen in most waterfowl and frogs. This delta-wing shape has convergently evolved in many taxa, and is also used in aircraft to allow for high lift forces at high angles of attack. This shape allows for the production of large forces during swimming through both drag-based and lift-based propulsion.
Webbed feet are a compromise between aquatic and terrestrial locomotion. Aquatic control surfaces of non-piscine vertebrates may be paddles or hydrofoils. Paddles generate less lift than hydrofoils, and paddling is associated with drag-based control surfaces. The roughly triangular design of webbed feet, with a broad distal end, is specialized to increase propulsive efficiency by acting on a larger mass of water rather than by generating increased lift. This is in contrast to the more hydrofoil-like flipper of many permanently aquatic animals.
Evolution
Development
Webbed feet are the result of mutations in genes that normally cause interdigital tissue between the toes to apoptose. Apoptosis, or programmed cell death, in development is mediated by a variety of pathways, and normally causes the creation of digits by death of tissue separating the digits. Different vertebrate species with webbed feet have different mutations that disrupt this process, indicating that the structure arose independently in these lineages.
In humans, syndactyly can arise in as many as nine unique subtypes, each with its own clinical, morphological, and genetic fingerprint. In addition, the same genetic mutations can underlie different phenotypic expressions of syndactyly. While these conditions are disorders in humans, the variability in the genetic causes of webbed digits informs our understanding of how this morphological change arose in species where webbed feet were selectively advantageous. These conditions also demonstrate a variety of genetic targets for mutations resulting in webbed feet, which may explain how this homologous structure could have arisen many times over the course of evolutionary history.
One pathway implicated in interdigital necrosis is the bone morphogenetic protein (BMP) signaling pathway. BMP signaling molecules (BMPs) are expressed in the tissue regions between digits during development. In experiments with chickens, mutations to a BMP receptor disrupted the apoptosis of interdigital tissue and caused duck-like webbed feet to develop. In ducks, BMPs are not expressed at all. These results indicate that in avian lineages, the disruption of BMP signaling in interdigital tissue caused webbed feet to arise. The magnitude of attenuation in this pathway is correlated with the amount of interdigital tissue preserved. Other genetic changes implicated in webbed foot development in birds include reduction of TGFβ-induced chondrogenesis and reduced expression of the msx-1 and msx-2 genes.
Webbed feet could also arise by being linked to other morphological changes, without any selective advantage. In salamanders, webbed feet have arisen in multiple lineages, but in most they do not contribute to increased function. However, in the cave salamander species Chiropterotriton magnipes (bigfoot splayfoot salamander), the webbed feet are morphologically distinct from those of other salamanders and may serve a functional purpose. This demonstrates that webbed feet arise from developmental changes but do not necessarily confer a functional selective advantage.
Phylogeny
Webbed feet have arisen in all major vertebrate lineages with limbed animals. Most webbed-footed species spend part of their time in aquatic environments, indicating that this homologous structure provides some advantage to swimmers. Some examples from each class are highlighted here, but this is not a complete listing.
Amphibians
Of the three orders of amphibians, Anura (frogs and toads) and Urodela (salamanders) have representative species with webbed feet. Frogs that live in aquatic environments, like the common frog (Rana temporaria), have webbed feet. Salamanders in arboreal and cave environments also have webbed feet, but in most species, this morphological change does not likely have a functional advantage.
Reptiles
Reptiles have webbed-footed representatives that include freshwater turtles and geckos. While turtles with webbed feet are aquatic, most geckos live in terrestrial and arboreal environments.
Birds
Birds are typically classified as a subgroup of reptiles, but they form a distinct class within the vertebrates, so they are discussed separately. Birds have a wide span of representatives with webbed feet, owing to the diversity of waterfowl. Ducks, geese, and swans all have webbed feet. They utilize different foraging behaviors in water, but use similar modes of locomotion. There is a wide variety of webbing and lobation styles in bird feet, from birds with all digits joined in webbing, like the Brandt's cormorant, to birds with lobed digits, like grebes. Palmations and lobes enable swimming or help walking on loose ground such as mud. Penguins are notable for being the only birds (indeed, the only animals) with both webbed feet and flippers (two of each). The webbed or palmated feet of birds can be categorized into several types:
Palmate: only the anterior digits (2–4) are joined by webbing. Found in ducks, geese and swans, gulls and terns, and other aquatic birds (penguins, auks, flamingos, fulmars, jaegers, loons, petrels, shearwaters and skimmers). Diving ducks also have a lobed hind toe (1), and gulls, terns and allies have a reduced hind toe.
Totipalmate: all four digits (1–4) are joined by webbing. Found in gannets and boobies, pelicans, cormorants, anhingas, frigatebirds, and tropicbirds. Some gannets have brightly colored feet used in display.
Semipalmate: a small web between the anterior digits (2–4). Found in some plovers (Eurasian dotterels) and sandpipers (semipalmated sandpipers, stilt sandpipers, upland sandpipers, greater yellowlegs and willets), avocets, herons (webbing between only two toes), magpie geese, screamers, all grouse, and some domesticated breeds of chicken. Plovers and lapwings have a vestigial hind toe (1), and sandpipers and their allies have a reduced and raised hind toe barely touching the ground. The sanderling is the only sandpiper with three toes (a tridactyl foot).
Lobate: the anterior digits (2–4) are edged with lobes of skin, which expand or contract as the bird swims. Found in grebes, coots, phalaropes, finfoots, and, on the hallux (1), some palmate-footed ducks. Grebes have more webbing between the toes than coots and phalaropes.
The palmate foot is most common.
Mammals
Some semiaquatic mammals have webbed feet. Most of these have interdigital webbing, as opposed to the syndactyly found in birds. Some notable examples include the platypus, the beaver, the otter, and the water opossum. Capybaras have slightly webbed feet, while hippopotamuses have webbed toes.
Function
Swimming propulsion
In many species, webbed feet likely evolved to aid in generation of propulsion during swimming. Most webbed-footed animals utilize paddling modes of locomotion where their feet stroke backwards relative to their whole body motion, generating a propulsive force. The interdigital membrane increases the surface area, which increases the propulsive drag the animal can generate with each stroke of its foot. This is a drag-based mode of propulsion. However, some waterfowl also utilize lift-based modes of propulsion, where their feet generate hydrodynamic lift due to the angle of attack of the foot and the relative water velocity. For example, great-crested grebes use solely lift-based propulsion due to their lateral foot stroke and asymmetric, lobated toes. Most waterfowl use a combination of these two modes of propulsion, where the first third of their foot stroke generates propulsive drag and the last two-thirds of the stroke generates propulsive lift.
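The drag-based contribution follows the standard quadratic drag law, F = ½ρC_d A v². The sketch below uses assumed round values for foot area, stroke speed, and drag coefficient, none of them measurements from a particular species; the point is the scaling with webbed area, not the absolute forces.

```python
# Drag-based paddle thrust, F = 0.5 * rho * Cd * A * v^2, with assumed
# illustrative inputs. Thrust scales linearly with foot area.
RHO = 1000.0    # kg/m^3, density of water
CD = 1.3        # assumed drag coefficient for a flat paddle
V_STROKE = 1.0  # m/s, assumed foot speed relative to the water

def stroke_thrust(area_m2):
    return 0.5 * RHO * CD * area_m2 * V_STROKE ** 2

unwebbed, webbed = 0.002, 0.006  # m^2, assumed areas without/with webbing
print(f"unwebbed: {stroke_thrust(unwebbed):.1f} N, "
      f"webbed: {stroke_thrust(webbed):.1f} N")  # 1.3 N vs 3.9 N
# Tripling the effective area with webbing triples the drag-based
# propulsive force available per stroke.
```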
The stroke of the foot through the water also generates vortices that aid propulsion. During the transition from drag-based to lift-based propulsion in ducks, leading edge vortices formed on the front of the foot are shed, which creates a flow of water over the foot that likely aids lift production. Other species also create these vortices during their webbed foot stroke. Frogs also create vortices that shed off their feet when swimming in water. The vortices from the two feet do not interfere with each other; therefore, each foot is generating forward propulsion independently.
Most fully aquatic vertebrates do not use paddling modes of locomotion, instead using undulatory modes of locomotion or flipper locomotion. Fully aquatic mammals and other animals typically have flippers instead of webbed feet, the flipper being a more heavily specialized and modified limb. It is hypothesized that the evolutionary transition between semiaquatic and fully aquatic higher vertebrates (especially mammals) involved both the specialization of swimming limbs and the transition to underwater, undulatory modes of motion. However, for semiaquatic animals that mainly swim at the surface, webbed feet are highly functional; they trade off effectively between efficient terrestrial and aquatic locomotion. In addition, some waterfowl can also use paddling modes for underwater swimming, with added propulsion from flapping their wings. Diving ducks can swim underwater to forage. These ducks expend more than 90% of their energy overcoming their own buoyancy when they dive. They can also achieve higher speeds underwater because surface speeds are limited to their hull speed; at that speed, wave drag increases to the point where the duck cannot swim faster.
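The hull-speed ceiling mentioned above comes from deep-water wave physics: a displacement swimmer's bow wave has a wavelength equal to its waterline length L, limiting practical surface speed to about v = √(gL/2π). A short sketch, with the duck's waterline length as an assumed value:

```python
import math

# Hull speed for a displacement swimmer: v = sqrt(g * L / (2 * pi)),
# the speed of a deep-water wave with wavelength equal to the waterline.
G = 9.81  # m/s^2

def hull_speed(waterline_m):
    return math.sqrt(G * waterline_m / (2 * math.pi))

print(f"{hull_speed(0.33):.2f} m/s")  # ~0.72 m/s for an assumed 0.33 m duck
# Swimming underwater sidesteps this wave-drag barrier, which is why
# diving ducks can move faster submerged than at the surface.
```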
Other behaviors
In ducks, webbed feet have also enabled extreme forms of propulsion that are used for escape behaviors and courtship display. Surface swimmers are speed-limited due to increasing drag as they approach a physically defined hull speed, which is determined by their body length. In order to achieve speeds higher than hull speed, some ducks, like eider ducks, use distinctive modes of locomotion that involve lifting the body out of the water. They can hydroplane, where they lift part of their body out of the water and paddle with their webbed feet to generate forces that allow them to overcome gravity; they also use paddle-assisted flying, where the whole body is lifted out of the water, and the wings and feet work in concert to generate lift forces. In extreme cases, this type of behavior is used for sexual selection. Western and Clark's grebes utilize their lobated feet to generate nearly 50% of the force required to allow them to walk on water in elaborate sexual displays; they are likely the largest animal to "walk" on water, and are an order of magnitude heavier than the well-known lizards that exhibit a similar behavior.
Terrestrial locomotion
While webbed feet have mainly arisen in swimming species, they can also aid terrestrial locomotion by increasing contact area on slick or soft surfaces. In P. rangei, the Namib sand gecko, the webbed feet may serve as sand shoes that enable the gecko to move atop sand dunes. However, some ecologists believe that these webbed feet do not aid aboveground locomotion, but are mainly utilized as shovels for burrowing and digging in the sand. In salamanders, most species do not benefit from the increased surface area of their feet. However, some, like the bigfoot splayfoot salamander (Chiropterotriton magnipes), have a foot surface area large enough relative to body size to provide increased suction. This species lives in cave environments where it often encounters wet, slick surfaces, so its webbed feet may enable it to move on these surfaces with ease.
| Biology and health sciences | External anatomy and regions of the body | Biology |
36645032 | https://en.wikipedia.org/wiki/Curiosity%20%28rover%29 | Curiosity (rover) | Curiosity is a car-sized Mars rover exploring Gale crater and Mount Sharp on Mars as part of NASA's Mars Science Laboratory (MSL) mission. Curiosity was launched from Cape Canaveral (CCAFS) on November 26, 2011, at 15:02:00 UTC and landed on Aeolis Palus inside Gale crater on Mars on August 6, 2012, 05:17:57 UTC. The Bradbury Landing site was less than 2.4 km (1.5 mi) from the center of the rover's touchdown target after a journey of some 560 million km (350 million mi).
Mission goals include an investigation of the Martian climate and geology, an assessment of whether the selected field site inside Gale has ever offered environmental conditions favorable for microbial life (including investigation of the role of water), and planetary habitability studies in preparation for human exploration.
In December 2012, Curiosity's two-year mission was extended indefinitely, and on August 5, 2017, NASA celebrated the fifth anniversary of the Curiosity rover landing. On August 6, 2022, a detailed overview of accomplishments by the Curiosity rover over its first ten years was reported. The rover remains operational, having been active on Mars for well over a decade since its landing (see current status).
The NASA/JPL Mars Science Laboratory/Curiosity Project Team was awarded the 2012 Robert J. Collier Trophy by the National Aeronautic Association "In recognition of the extraordinary achievements of successfully landing Curiosity on Mars, advancing the nation's technological and engineering capabilities, and significantly improving humanity's understanding of ancient Martian habitable environments." Curiosity's design serves as the basis for NASA's 2021 Perseverance rover, which carries different scientific instruments.
Mission
Goals and objectives
As established by the Mars Exploration Program, the main scientific goals of the MSL mission are to help determine whether Mars could ever have supported life, as well as determining the role of water, and to study the climate and geology of Mars. The mission results will also help prepare for human exploration. To contribute to these goals, MSL has eight main scientific objectives:
Biological
Determine the nature and inventory of organic carbon compounds
Investigate the chemical building blocks of life (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur)
Identify features that may represent the effects of biological processes (biosignatures and biomolecules)
Geological and geochemical
Investigate the chemical, isotopic, and mineralogical composition of the Martian surface and near-surface geological materials
Interpret the processes that have formed and modified rocks and soils
Planetary process
Assess long-timescale (i.e., 4-billion-year) Martian atmospheric evolution processes
Determine present state, distribution, and cycling of water and carbon dioxide
Surface radiation
Characterize the broad spectrum of surface radiation, including galactic cosmic radiation, solar proton events and secondary neutrons. As part of its exploration, it also measured the radiation exposure in the interior of the spacecraft as it traveled to Mars, and it is continuing radiation measurements as it explores the surface of Mars. This data would be important for a future crewed mission.
About one year into the surface mission, and having assessed that ancient Mars could have been hospitable to microbial life, the MSL mission objectives evolved to developing predictive models for the preservation process of organic compounds and biomolecules; a branch of paleontology called taphonomy. The region it is set to explore has been compared to the Four Corners region of the North American west.
Name
A NASA panel selected the name Curiosity following a nationwide student contest that attracted more than 9,000 proposals via the Internet and mail. A sixth-grade student from Kansas, 12-year-old Clara Ma from Sunflower Elementary School in Lenexa, Kansas, submitted the winning entry. As her prize, Ma won a trip to NASA's Jet Propulsion Laboratory (JPL) in Pasadena, California, where she signed her name directly onto the rover as it was being assembled.
Cost
Adjusted for inflation, Curiosity has a life-cycle cost of US$3.2 billion in 2020 dollars. By comparison, the 2021 Perseverance rover has a life-cycle cost of US$2.9 billion.
Rover and lander specifications
Curiosity is 2.9 m (9.5 ft) long by 2.7 m (8.9 ft) wide by 2.2 m (7.2 ft) high, larger than the Mars Exploration Rovers, which are 1.6 m (5.2 ft) long and have a mass of 174 kg (384 lb) including 6.8 kg (15 lb) of scientific instruments. In comparison to Pancam on the Mars Exploration Rovers, the MastCam-34 has 1.25× higher spatial resolution and the MastCam-100 has 3.67× higher spatial resolution.
Curiosity has an advanced payload of scientific equipment on Mars. It is the fourth NASA robotic rover sent to Mars since 1996. Previous successful Mars rovers are Sojourner from the Mars Pathfinder mission (1997), and Spirit (2004–2010) and Opportunity (2004–2018) rovers from the Mars Exploration Rover mission.
Curiosity comprised 23% of the mass of the spacecraft at launch. The remaining mass was discarded in the process of transport and landing.
Dimensions: Curiosity has a mass of 899 kg (1,982 lb) including 80 kg (180 lb) of scientific instruments. The rover is 2.9 m (9.5 ft) long by 2.7 m (8.9 ft) wide by 2.2 m (7.2 ft) in height.
The main box-like chassis forms the Warm Electronics Box (WEB).
Power source: Curiosity is powered by a radioisotope thermoelectric generator (RTG), like the successful Viking 1 and Viking 2 Mars landers in 1976.
Radioisotope power systems (RPSs) are generators that produce electricity from the decay of radioactive isotopes, such as plutonium-238, which is a non-fissile isotope of plutonium. Heat given off by the decay of this isotope generates electrical power using thermocouples, providing consistent power during all seasons and through the day and night. Waste heat is also used via pipes to warm systems, freeing electrical power for the operation of the vehicle and instruments. Curiosity's RTG is fueled by 4.8 kg (11 lb) of plutonium-238 dioxide supplied by the U.S. Department of Energy.
Curiosity's RTG is the Multi-Mission Radioisotope Thermoelectric Generator (MMRTG), designed and built by Rocketdyne and Teledyne Energy Systems under contract to the U.S. Department of Energy, and fueled and tested by the Idaho National Laboratory. Based on legacy RTG technology, it represents a more flexible and compact development step, and is designed to produce 110 watts of electrical power and about 2,000 watts of thermal power at the start of the mission. The MMRTG produces less power over time as its plutonium fuel decays: at its minimum lifetime of 14 years, electrical power output is down to 100 watts. The power source generates about 9 MJ (2.5 kWh) of electrical energy each day, much more than the solar panels of the now retired Mars Exploration Rovers, which generated about 2.1 MJ (0.6 kWh) each day. The electrical output from the MMRTG charges two rechargeable lithium-ion batteries. This enables the power subsystem to meet peak power demands of rover activities when the demand temporarily exceeds the generator's steady output level. Each battery has a capacity of about 42 ampere hours.
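Since both the starting output and the 14-year figure are given, a quick check against pure plutonium-238 decay (half-life 87.7 years, a standard value) shows they are roughly consistent; thermocouple degradation, ignored in this sketch, makes the real decline somewhat steeper.

```python
# MMRTG output if electrical power simply tracked plutonium-238 decay.
P0 = 110.0         # watts of electrical power at start of mission
HALF_LIFE = 87.7   # years, half-life of plutonium-238

def power_watts(t_years):
    return P0 * 0.5 ** (t_years / HALF_LIFE)

for t in (0, 14, 30):
    print(f"year {t:2d}: {power_watts(t):5.1f} W")
# Year 14 gives ~98.5 W, close to the ~100 W minimum-lifetime figure
# above; 110 W sustained for 24 hours is ~2.6 kWh of energy per day.
```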
Heat rejection system: The temperatures at the landing site vary seasonally, and the thermal system warms the rover as needed. It does so in several ways: passively, through dissipation to internal components; by electrical heaters strategically placed on key components; and by the rover heat rejection system (HRS), which pumps fluid through 60 m (200 ft) of tubing in the rover body so that sensitive components are kept at optimal temperatures. The fluid loop serves the additional purpose of rejecting heat when the rover has become too warm, and it can also gather waste heat from the power source by pumping fluid through two heat exchangers that are mounted alongside the RTG. The HRS also has the ability to cool components if necessary.
Computers: The two identical on-board rover computers, called Rover Compute Element (RCE), contain radiation-hardened memory to tolerate the extreme radiation from space and to safeguard against power-off cycles. The computers run the VxWorks real-time operating system (RTOS). Each computer's memory includes 256 kilobytes (kB) of EEPROM, 256 megabytes (MB) of dynamic random-access memory (DRAM), and 2 gigabytes (GB) of flash memory. For comparison, the Mars Exploration Rovers used 3 MB of EEPROM, 128 MB of DRAM, and 256 MB of flash memory.
The RCE computers use the RAD750 central processing unit (CPU), a successor to the RAD6000 CPU of the Mars Exploration Rovers. The IBM RAD750 CPU, a radiation-hardened version of the PowerPC 750, can execute up to 400 million instructions per second (MIPS), while the RAD6000 CPU is capable of only up to 35 MIPS. Of the two on-board computers, one is configured as backup and will take over in the event of problems with the main computer. On February 28, 2013, NASA was forced to switch to the backup computer due to a problem with the active computer's flash memory, which caused the computer to continuously reboot in a loop. The backup computer was turned on in safe mode and subsequently returned to active status on March 4, 2013. The same problem recurred in late March, and the rover resumed full operations on March 25, 2013.
The rover has an inertial measurement unit (IMU) that provides 3-axis information on its position, which is used in rover navigation. The rover's computers are constantly self-monitoring to keep the rover operational, such as by regulating the rover's temperature. Activities such as taking pictures, driving, and operating the instruments are performed in a command sequence that is sent from the flight team to the rover. The rover installed its full surface operations software after the landing because its computers did not have sufficient main memory available during flight. The new software essentially replaced the flight software.
The rover has four processors. One of them is a SPARC processor that runs the rover's thrusters and descent-stage motors as it descended through the Martian atmosphere. Two others are PowerPC processors: the main processor, which handles nearly all of the rover's ground functions, and that processor's backup. The fourth one, another SPARC processor, commands the rover's movement and is part of its motor controller box. All four processors are single core.
Communications: Curiosity is equipped with significant telecommunication redundancy by several means: an X-band transmitter and receiver that can communicate directly with Earth, and an ultra high frequency (UHF) Electra-Lite software-defined radio for communicating with Mars orbiters. Communication with orbiters is the main path for data return to Earth, since the orbiters have both more power and larger antennas than the lander, allowing for faster transmission speeds. Telecommunication included a small deep space transponder on the descent stage and a solid-state power amplifier on the rover for X-band. The rover also has two UHF radios, whose signals orbiting relay satellites can forward back to Earth. Signals between Earth and Mars take an average of 14 minutes, 6 seconds. Curiosity can communicate with Earth directly at speeds up to 32 kbit/s, but the bulk of the data transfer is relayed through the Mars Reconnaissance Orbiter and Odyssey orbiter. Data transfer speeds between Curiosity and each orbiter may reach 2000 kbit/s and 256 kbit/s, respectively, but each orbiter is able to communicate with Curiosity for only about eight minutes per day (0.56% of the time). Communication from and to Curiosity relies on internationally agreed space data communications protocols as defined by the Consultative Committee for Space Data Systems.
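Those figures imply a modest daily data budget, which the arithmetic below makes explicit; it assumes a single eight-minute pass at the maximum 2,000 kbit/s relay rate, so real daily volumes vary.

```python
# Daily relay data budget implied by the quoted link rates.
RELAY_BPS = 2_000_000   # bit/s, peak rate to Mars Reconnaissance Orbiter
DIRECT_BPS = 32_000     # bit/s, direct-to-Earth rate
WINDOW_S = 8 * 60       # s, one ~8-minute orbiter pass per day

relay_bits = RELAY_BPS * WINDOW_S
print(f"relay pass: ~{relay_bits / 8 / 1e6:.0f} MB")          # ~120 MB
print(f"direct link: {relay_bits / DIRECT_BPS / 3600:.1f} h "
      "to move the same volume")                              # ~8.3 h
```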
Jet Propulsion Laboratory (JPL) is the central data distribution hub where selected data products are provided to remote science operations sites as needed. JPL is also the central hub for the uplink process, though participants are distributed at their respective home institutions. At landing, telemetry was monitored by three orbiters, depending on their dynamic location: the 2001 Mars Odyssey, Mars Reconnaissance Orbiter and ESA's Mars Express satellite. As of February 2019, the MAVEN orbiter is being positioned to serve as a relay orbiter while continuing its science mission.
Mobility systems
Curiosity is equipped with six 50 cm (20 in) diameter wheels in a rocker-bogie suspension. These are scaled versions of those used on the Mars Exploration Rovers (MER). The suspension system also served as landing gear for the vehicle, unlike its smaller predecessors. Each wheel has cleats and is independently actuated and geared, providing for climbing in soft sand and scrambling over rocks. Each front and rear wheel can be independently steered, allowing the vehicle to turn in place as well as execute arcing turns. Each wheel has a pattern that helps it maintain traction but also leaves patterned tracks in the sandy surface of Mars. That pattern is used by on-board cameras to estimate the distance traveled. The pattern itself is Morse code for "JPL" (·--- ·--· ·-··). The rover is capable of climbing sand dunes with slopes up to 12.5°. Based on the center of mass, the vehicle can withstand a tilt of at least 50° in any direction without overturning, but automatic sensors limit the rover from exceeding 30° tilts. After six years of use, the wheels are visibly worn with punctures and tears.
Curiosity can roll over obstacles approaching 65 cm (26 in) in height, and it has a ground clearance of 60 cm (24 in). Based on variables including power levels, terrain difficulty, slippage and visibility, the maximum terrain-traverse speed is estimated to be 200 m (660 ft) per day by automatic navigation. The rover landed about 10 km (6.2 mi) from the base of Mount Sharp (officially named Aeolis Mons), and it was expected to traverse a minimum of 19 km (12 mi) during its primary two-year mission. It can travel at up to 90 m (300 ft) per hour, but its average speed is about 30 m (98 ft) per hour. The vehicle is 'driven' by several operators led by Vandi Verma, group leader of Autonomous Systems, Mobility and Robotic Systems at JPL, who also cowrote the PLEXIL language used to operate the rover.
Landing
Curiosity landed in Quad 51 (nicknamed Yellowknife) of Aeolis Palus in the crater Gale. The landing site coordinates are 4.59°S, 137.44°E. The location was named Bradbury Landing on August 22, 2012, in honor of science fiction author Ray Bradbury. Gale, an estimated 3.5 to 3.8 billion-year-old impact crater, is hypothesized to have first been gradually filled in by sediments; first water-deposited, and then wind-deposited, possibly until it was completely covered. Wind erosion then scoured out the sediments, leaving an isolated mountain, Aeolis Mons ("Mount Sharp"), at the center of the 154 km (96 mi) wide crater. Thus, it is believed that the rover may have the opportunity to study two billion years of Martian history in the sediments exposed in the mountain. Additionally, its landing site is near an alluvial fan, which is hypothesized to be the result of a flow of ground water, either before the deposition of the eroded sediments or else in relatively recent geologic history.
According to NASA, an estimated 20,000 to 40,000 heat-resistant bacterial spores were on Curiosity at launch, and as many as 1,000 times that number may not have been counted.
Rover's landing system
Previous NASA Mars rovers became active only after the successful entry, descent and landing on the Martian surface. Curiosity, on the other hand, was active when it touched down on the surface of Mars, employing the rover suspension system for the final set-down.
Curiosity transformed from its stowed flight configuration to a landing configuration while the MSL spacecraft simultaneously lowered it beneath the spacecraft descent stage with a tether from the "sky crane" system, to a soft landing, wheels down, on the surface of Mars. After the rover touched down, it waited 2 seconds to confirm that it was on solid ground, then fired several pyrotechnic fasteners activating cable cutters on the bridle to free itself from the spacecraft descent stage. The descent stage then flew away to a crash landing, and the rover prepared itself to begin the science portion of the mission.
Travel status
As of August 16, 2024, the rover had driven away from its landing site over the course of 4,255 sols (Martian days).
Duplicate testing rovers
Curiosity has two full-sized vehicle system test beds (VSTBs), twin rovers used for testing and problem solving: the MAGGIE rover (Mars Automated Giant Gizmo for Integrated Engineering), which has a computer brain, and a Scarecrow rover without one. They are housed at the JPL Mars Yard for problem solving on simulated Mars terrain.
Scientific instruments
The general sample analysis strategy begins with high-resolution cameras to look for features of interest. If a particular surface is of interest, Curiosity can vaporize a small portion of it with an infrared laser and examine the resulting spectral signature to determine the rock's elemental composition. If that signature is intriguing, the rover uses its long arm to swing over a microscope and an X-ray spectrometer to take a closer look. If the specimen warrants further analysis, Curiosity can drill into the boulder and deliver a powdered sample to either the Sample Analysis at Mars (SAM) or the CheMin analytical laboratories inside the rover.
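This escalating triage can be summarized as a short decision chain. The following minimal Python sketch is purely illustrative: the instrument names come from the text, while the data fields, scores, and thresholds are hypothetical placeholders, not mission software.

```python
# Hypothetical sketch of the sample-triage chain described above.
# Field names and threshold values are invented for illustration.

def analyze_target(target):
    """Escalate a target through survey, remote, contact, and lab analysis."""
    if target["visual_interest"] < 0.5:          # high-resolution camera survey
        return "move on"
    if not target["laser_signature_intriguing"]: # infrared laser + spectrometer
        return "logged, no follow-up"
    if target["contact_score"] < 0.7:            # arm microscope + X-ray spectrometer
        return "contact science only"
    return "drill and deliver powder to SAM or CheMin"

rock = {"visual_interest": 0.9, "laser_signature_intriguing": True, "contact_score": 0.8}
print(analyze_target(rock))  # -> drill and deliver powder to SAM or CheMin
```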
The MastCam, Mars Hand Lens Imager (MAHLI), and Mars Descent Imager (MARDI) cameras were developed by Malin Space Science Systems and they all share common design components, such as on-board digital image processing boxes, 1600 × 1200 pixel charge-coupled devices (CCDs), and an RGB Bayer pattern filter.
In total, the rover carries 17 cameras: HazCams (8), NavCams (4), MastCams (2), MAHLI (1), MARDI (1), and ChemCam (1).
Mast Camera (Mastcam)
The Mastcam system provides multiple spectra and true-color imaging with two cameras. The cameras can take true-color images at 1600×1200 pixels and up to 10 frames per second hardware-compressed video at 720p (1280×720).
One Mastcam camera is the Medium Angle Camera (MAC; also referred to as Mastcam-34 and Mastcam-Left), which has a 34 mm (1.3 in) focal length, a 15° field of view, and can yield 22 cm/pixel (8.7 in/pixel) scale at 1 km (0.62 mi). The other camera in the Mastcam is the Narrow Angle Camera (NAC; also Mastcam-100 and Mastcam-Right), which has a 100 mm (3.9 in) focal length, a 5.1° field of view, and can yield 7.4 cm/pixel (2.9 in/pixel) scale at 1 km (0.62 mi). Malin also developed a pair of Mastcams with zoom lenses, but these were not included in the rover because of the time required to test the new hardware and the looming November 2011 launch date. However, the improved zoom version was selected to be incorporated on the Mars 2020 mission as Mastcam-Z.
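The two quoted pixel scales are consistent with the simple pinhole relation: ground scale ≈ pixel pitch × range ÷ focal length. The short check below assumes a 7.4 µm detector pixel pitch, which is not stated in the text:

```python
# Rough consistency check of the Mastcam pixel scales quoted above.
# The 7.4 micrometre pixel pitch is an assumption, not from the text.
PIXEL_PITCH_M = 7.4e-6

def pixel_scale_cm(focal_length_m, range_m):
    """Ground scale of one pixel at a given range (pinhole approximation)."""
    return PIXEL_PITCH_M * range_m / focal_length_m * 100.0

print(f"Mastcam-34 at 1 km:  {pixel_scale_cm(0.034, 1000.0):.1f} cm/pixel")  # ~21.8
print(f"Mastcam-100 at 1 km: {pixel_scale_cm(0.100, 1000.0):.1f} cm/pixel")  # ~7.4
```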
Each camera has eight gigabytes of flash memory, which is capable of storing over 5,500 raw images, and can apply real-time lossless data compression. The cameras have an autofocus capability that allows them to focus on objects from 2.1 m (6 ft 11 in) to infinity. In addition to the fixed RGBG Bayer pattern filter, each camera has an eight-position filter wheel. While the Bayer filter reduces visible light throughput, all three colors are mostly transparent at wavelengths longer than 700 nm, and have minimal effect on such infrared observations.
Chemistry and Camera complex (ChemCam)
ChemCam is a suite of two remote sensing instruments combined as one: a laser-induced breakdown spectroscopy (LIBS) instrument and a Remote Micro Imager (RMI) telescope. The ChemCam instrument suite was developed by the French CESR laboratory and the Los Alamos National Laboratory. The flight model of the mast unit was delivered from the French CNES to Los Alamos National Laboratory. The purpose of the LIBS instrument is to provide elemental compositions of rock and regolith, while the RMI gives ChemCam scientists high-resolution images of the sampling areas of the rocks and regolith that LIBS targets. The LIBS instrument can target a rock or regolith sample up to 7 m (23 ft) away, vaporizing a small amount of it with about 50 to 75 5-nanosecond pulses from a 1067 nm infrared laser and then observing the spectrum of the light emitted by the vaporized rock.
ChemCam has the ability to record up to 6,144 different wavelengths of ultraviolet, visible, and infrared light. Detection of the ball of luminous plasma is done in the visible, near-UV and near-infrared ranges, between 240 nm and 800 nm. The initial laser testing of the ChemCam by Curiosity on Mars was performed on a rock, N165 ("Coronation" rock), near Bradbury Landing on August 19, 2012. The ChemCam team expects to take approximately one dozen compositional measurements of rocks per day. Using the same collection optics, the RMI provides context images of the LIBS analysis spots. The RMI resolves 1 mm (0.039 in) objects at 10 m (33 ft) distance, and has a field of view covering 20 cm (7.9 in) at that distance.
Navigation cameras (Navcams)
The rover has two pairs of black and white navigation cameras mounted on the mast to support ground navigation. The cameras have a 45° angle of view and use visible light to capture stereoscopic 3-D imagery.
Rover Environmental Monitoring Station (REMS)
REMS comprises instruments to measure the Mars environment: humidity, pressure, temperatures, wind speeds, and ultraviolet radiation. It is a meteorological package that includes an ultraviolet sensor provided by the Spanish Ministry of Education and Science. The investigative team is led by Javier Gómez-Elvira of the Spanish Astrobiology Center and includes the Finnish Meteorological Institute as a partner. All sensors are located around three elements: two booms attached to the rover's mast, the Ultraviolet Sensor (UVS) assembly located on the rover top deck, and the Instrument Control Unit (ICU) inside the rover body. REMS provides new clues about the Martian general circulation, microscale weather systems, the local hydrological cycle, the destructive potential of UV radiation, and subsurface habitability based on ground-atmosphere interaction.
Hazard avoidance cameras (Hazcams)
The rover has four pairs of black and white navigation cameras called hazcams, two pairs in the front and two pairs in the back. They are used for autonomous hazard avoidance during rover drives and for safe positioning of the robotic arm on rocks and regolith. Each camera in a pair is hardlinked to one of two identical main computers for redundancy; only four out of the eight cameras are in use at any one time. The cameras use visible light to capture stereoscopic three-dimensional (3-D) imagery. The cameras have a 120° field of view and map the terrain at up to 3 m (9.8 ft) in front of the rover. This imagery safeguards against the rover crashing into unexpected obstacles, and works in tandem with software that allows the rover to make its own safety choices.
Mars Hand Lens Imager (MAHLI)
MAHLI is a camera on the rover's robotic arm that acquires microscopic images of rock and regolith. MAHLI can take true-color images at 1600×1200 pixels with a resolution as high as 14.5 µm per pixel. MAHLI has an 18.3 to 21.3 mm focal length and a 33.8–38.5° field of view. MAHLI has both white and ultraviolet light-emitting diode (LED) illumination for imaging in darkness or fluorescence imaging. MAHLI also has mechanical focusing in a range from infinity to millimeter distances. This system can make some images with focus stacking processing. MAHLI can store either the raw images or do real-time lossless predictive or JPEG compression. The calibration target for MAHLI includes color references, a metric bar graphic, a 1909 VDB Lincoln penny, and a stair-step pattern for depth calibration.
Alpha Particle X-ray Spectrometer (APXS)
The APXS instrument irradiates samples with alpha particles and maps the spectra of X-rays that are re-emitted to determine the elemental composition of samples. Curiosity's APXS was developed by the Canadian Space Agency (CSA). MacDonald Dettwiler (MDA), the Canadian aerospace company that built the Canadarm and RADARSAT, was responsible for the engineering design and building of the APXS. The APXS science team includes members from the University of Guelph, the University of New Brunswick, the University of Western Ontario, NASA, the University of California, San Diego and Cornell University. The APXS instrument takes advantage of particle-induced X-ray emission (PIXE) and X-ray fluorescence, previously exploited by the Mars Pathfinder and the two Mars Exploration Rovers.
Chemistry and Mineralogy (CheMin)
CheMin is the Chemistry and Mineralogy X-ray powder diffraction and fluorescence instrument. CheMin is one of four spectrometers. It can identify and quantify the abundance of the minerals on Mars. It was developed by David Blake at NASA Ames Research Center and the Jet Propulsion Laboratory, and won the 2013 NASA Government Invention of the Year award. The rover can drill samples from rocks, and the resulting fine powder is poured into the instrument via a sample inlet tube on the top of the vehicle. A beam of X-rays is then directed at the powder, and the crystal structure of the minerals diffracts it at characteristic angles, allowing scientists to identify the minerals being analyzed.
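Those "characteristic angles" are given by Bragg's law, nλ = 2d sin θ: each mineral's lattice spacings d fix the angles at which X-rays of a known wavelength are diffracted. The sketch below is illustrative only; the cobalt Kα wavelength reflects CheMin's commonly reported Co X-ray source, and the d-spacings are approximate values chosen for the example.

```python
import math

# Bragg's law: n * wavelength = 2 * d * sin(theta).
# Wavelength and d-spacings below are assumed, illustrative values.
WAVELENGTH_ANGSTROM = 1.789  # Co K-alpha (assumed source)

def two_theta_degrees(d_spacing_angstrom, order=1):
    """Diffraction angle 2*theta for a given lattice spacing."""
    theta = math.asin(order * WAVELENGTH_ANGSTROM / (2.0 * d_spacing_angstrom))
    return 2.0 * math.degrees(theta)

for mineral, d in [("olivine-like spacing", 2.46), ("feldspar-like spacing", 3.18)]:
    print(f"{mineral} ({d} A): 2-theta = {two_theta_degrees(d):.1f} deg")
```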
On October 17, 2012, at "Rocknest", the first X-ray diffraction analysis of Martian regolith was performed. The results revealed the presence of several minerals, including feldspar, pyroxenes and olivine, and suggested that the Martian regolith in the sample was similar to the "weathered basaltic soils" of Hawaiian volcanoes. The palagonitic tephra from a Hawaiian cinder cone has been mined to create Martian regolith simulant for researchers to use since 1998.
Sample Analysis at Mars (SAM)
The SAM instrument suite analyzes organics and gases from both atmospheric and solid samples. It consists of instruments developed by the NASA Goddard Space Flight Center, the NASA Jet Propulsion Laboratory, the Laboratoire atmosphères, milieux, observations spatiales (LATMOS), the Laboratoire Inter-Universitaire des Systèmes Atmosphériques (LISA) (jointly operated by France's CNRS and Parisian universities), and Honeybee Robotics, along with many additional external partners. The three main instruments are a Quadrupole Mass Spectrometer (QMS), a gas chromatograph (GC) and a tunable laser spectrometer (TLS). These instruments perform precision measurements of oxygen and carbon isotope ratios in carbon dioxide (CO2) and methane (CH4) in the atmosphere of Mars in order to distinguish between a geochemical and a biological origin.
Dust Removal Tool (DRT)
The Dust Removal Tool (DRT) is a motorized, wire-bristle brush on the turret at the end of Curiosity's arm. The DRT was first used on a rock target named Ekwir_1 on January 6, 2013. Honeybee Robotics built the DRT.
Radiation assessment detector (RAD)
The role of the Radiation assessment detector (RAD) instrument is to characterize the broad spectrum of radiation environment found inside the spacecraft during the cruise phase and while on Mars. These measurements have never been done before from the inside of a spacecraft in interplanetary space. Its primary purpose is to determine the viability and shielding needs for potential human explorers, as well as to characterize the radiation environment on the surface of Mars, which it started doing immediately after MSL landed in August 2012. Funded by the Exploration Systems Mission Directorate at NASA Headquarters and Germany's Space Agency (DLR), RAD was developed by Southwest Research Institute (SwRI) and the extraterrestrial physics group at Christian-Albrechts-Universität zu Kiel, Germany.
Dynamic Albedo of Neutrons (DAN)
The DAN instrument employs a neutron source and detector for measuring the quantity and depth of hydrogen or ice and water at or near the Martian surface.
The instrument consists of the detector element (DE) and a 14.1 MeV pulsing neutron generator (PNG). The die-away time of neutrons is measured by the DE after each neutron pulse from the PNG.
DAN was provided by the Russian Federal Space Agency and funded by Russia.
Mars Descent Imager (MARDI)
MARDI is fixed to the lower front left corner of the body of Curiosity. During the descent to the Martian surface, MARDI took color images at 1600×1200 pixels with a 1.3-millisecond exposure time, starting at a distance of about 3.7 km (2.3 mi) and continuing to near 5 m (16 ft) from the ground, at a rate of four frames per second for about two minutes. MARDI has a pixel scale of 1.5 m (4.9 ft) at 2 km (1.2 mi) to 1.5 mm (0.059 in) at 2 m (6.6 ft) and has a 90° circular field of view. MARDI has eight gigabytes of internal buffer memory that is capable of storing over 4,000 raw images. MARDI imaging allowed the mapping of surrounding terrain and the location of landing. JunoCam, built for the Juno spacecraft, is based on MARDI.
Robotic arm
The rover has a 2.1 m (6.9 ft) long robotic arm with a cross-shaped turret holding five devices that can spin through a 350° turning range. The arm makes use of three joints to extend it forward and to stow it again while driving. The turret has a mass of about 30 kg (66 lb) and its diameter, including the tools mounted on it, is about 60 cm (24 in). It was designed, built, and tested by MDA US Systems, building upon their prior robotic arm work on the Mars Surveyor 2001 Lander, the Phoenix lander, and the two Mars Exploration Rovers, Spirit and Opportunity.
Two of the five devices are in-situ or contact instruments known as the X-ray spectrometer (APXS) and the Mars Hand Lens Imager (MAHLI camera). The remaining three are associated with sample acquisition and sample preparation functions: a percussion drill, a brush, and mechanisms for scooping, sieving, and portioning samples of powdered rock and regolith. The hole drilled in a rock is 1.6 cm (0.63 in) in diameter and up to 5 cm (2.0 in) deep. The drill carries two spare bits. The rover's arm and turret system can place the APXS and MAHLI on their respective targets, and can also obtain powdered samples from rock interiors and deliver them to the SAM and CheMin analyzers inside the rover.
Since early 2015 the percussive mechanism in the drill that helps chisel into rock has had an intermittent electrical short. On December 1, 2016, the motor inside the drill caused a malfunction that prevented the rover from moving its robotic arm and driving to another location. The fault was isolated to the drill feed brake, and internal debris is suspected of causing the problem. By December 9, 2016, driving and robotic arm operations were cleared to continue, but drilling remained suspended indefinitely. The Curiosity team continued to perform diagnostics and testing on the drill mechanism throughout 2017, and resumed drilling operations on May 22, 2018.
Media, cultural impact and legacy
Live video showing the first footage from the surface of Mars was available at NASA TV, during the late hours of August 6, 2012, PDT, including interviews with the mission team. The NASA website momentarily became unavailable from the overwhelming number of people visiting it, and a 13-minute NASA excerpt of the landings on its YouTube channel was halted an hour after the landing by an automated copyright takedown notice from Scripps Local News, which prevented access for several hours. Around 1,000 people gathered in New York City's Times Square, to watch NASA's live broadcast of Curiosity landing, as footage was being shown on the giant screen. Bobak Ferdowsi, Flight Director for the landing, became an Internet meme and attained Twitter celebrity status, with 45,000 new followers subscribing to his Twitter account, due to his Mohawk hairstyle with yellow stars that he wore during the televised broadcast.
On August 13, 2012, U.S. President Barack Obama, calling from aboard Air Force One to congratulate the Curiosity team, said, "You guys are examples of American know-how and ingenuity. It's really an amazing accomplishment". (Video (07:20))
Scientists at the Getty Conservation Institute in Los Angeles, California, viewed the CheMin instrument aboard Curiosity as a potentially valuable means to examine ancient works of art without damaging them. Until recently, only a few instruments were available to determine the composition without cutting out physical samples large enough to potentially damage the artifacts. CheMin directs a beam of X-rays at minute particles and reads the radiation scattered back to determine the composition of the artifact in minutes. Engineers created a smaller, portable version named the X-Duetto. Fitting into a few briefcase-sized boxes, it can examine objects on site, while preserving their physical integrity. It is now being used by Getty scientists to analyze a large collection of museum antiques and the Roman ruins of Herculaneum, Italy.
Prior to the landing, NASA and Microsoft released Mars Rover Landing, a free downloadable game on Xbox Live that uses Kinect to capture body motions, which allows users to simulate the landing sequence.
NASA gave the general public the opportunity from 2009 until 2011 to submit their names to be sent to Mars. More than 1.2 million people from the international community participated, and their names were etched into silicon using an electron-beam machine used for fabricating micro devices at JPL, and this plaque is now installed on the deck of Curiosity. In keeping with a 40-year tradition, a plaque with the signatures of President Barack Obama and Vice President Joe Biden was also installed. Elsewhere on the rover is the autograph of Clara Ma, the 12-year-old girl from Kansas who gave Curiosity its name in an essay contest, writing in part that "curiosity is the passion that drives us through our everyday lives".
On August 6, 2013, Curiosity audibly played "Happy Birthday to You" to mark one Earth year since its Martian landing, the first time a song was played on another planet. This was also the first time music was transmitted between two planets.
On June 24, 2014, Curiosity completed a Martian year (687 Earth days) on Mars after finding that the planet once had environmental conditions favorable for microbial life. Curiosity served as the basis for the design of the Perseverance rover for the Mars 2020 rover mission. Some spare parts from the build and ground test of Curiosity are being used in the new vehicle, but it will carry a different instrument payload.
In 2014, project chief engineer Rob Manning co-wrote a book detailing the development of the Curiosity rover. "Mars Rover Curiosity: An Inside Account from Curiosity's Chief Engineer" is a firsthand account of the development and landing of the rover.
On August 5, 2017, NASA celebrated the fifth anniversary of the Curiosity rover mission landing, and related exploratory accomplishments, on the planet Mars. (Videos: Curiosity First Five Years (02:07); Curiosity POV: Five Years Driving (05:49); Curiosity Discoveries About Gale Crater (02:54))
As reported in 2018, drill samples taken in 2015 uncovered organic molecules of benzene and propane in 3-billion-year-old rock samples in Gale.
In popular culture, the launch of Curiosity is referenced in the music video for Harry Styles' 2023 song, "Satellite".
Images
Components of Curiosity
Example rover images
| Technology | Rovers | null |
60117345 | https://en.wikipedia.org/wiki/Taylor%E2%80%93Culick%20flow | Taylor–Culick flow | In fluid dynamics, Taylor–Culick flow describes the axisymmetric flow inside a long slender cylinder with one end closed, supplied by a constant flow injection through the sidewall. The flow is named after Geoffrey Ingram Taylor and F. E. C. Culick. In 1956, Taylor showed that when a fluid is forced through the porous sheet of a cone or wedge, a favorable longitudinal pressure gradient is set up in the direction of the flow inside the cone or wedge, and the flow is rotational; this contrasts with the reverse case, in which the fluid is forced out of the cone or wedge from inside: there the flow is uniform inside the cone or wedge and is obviously potential. Taylor also obtained solutions for the velocity in the limiting case where the cone or the wedge degenerates into a circular tube or parallel plates. Later, in 1966, Culick found the solution corresponding to the tube problem, in a problem applied to solid-propellant rocket combustion. Here the thermal expansion of the gas due to combustion occurring at the inner surface of the combustion chamber (a long slender cylinder) generates a flow directed towards the axis.
Flow description
The axisymmetric inviscid flow is governed by the Hicks equation, which reduces, when no swirl is present (i.e., zero circulation), to

$$\frac{\partial^2\psi}{\partial r^2} - \frac{1}{r}\frac{\partial\psi}{\partial r} + \frac{\partial^2\psi}{\partial x^2} = -r^2 f(\psi),$$

where $\psi$ is the stream function, $r$ is the radial distance from the axis, and $x$ is the axial distance measured from the closed end of the cylinder. The function $f(\psi) = \pi^2\psi/a^4$ is found to predict the correct solution. The solution satisfying the required boundary conditions is given by

$$\psi = a U x \sin\left(\frac{\pi r^2}{2a^2}\right),$$

where $a$ is the radius of the cylinder and $U$ is the injection velocity at the wall. Despite the simple-looking formula, the solution has been experimentally verified to be accurate. The solution is wrong for distances of order $x\sim a$ from the closed end, since boundary-layer separation at $x=0$ is inevitable; that is, the Taylor–Culick profile is correct for $x\gg a$. The Taylor–Culick profile with injection at the closed end of the cylinder can also be solved analytically.
Although the solution is derived for the inviscid equation, it satisfies the no-slip condition at the wall since, as Taylor argued, any boundary layer at the sidewall will be blown off by the flow injection. Hence, the flow is referred to as quasi-viscous.
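Under the reconstruction given above, the profile can be verified symbolically. The following SymPy sketch checks that the stream function satisfies the swirl-free equation and reproduces a uniform injection velocity at the wall; it assumes the notation used here ($a$, $U$, $x$), not any particular reference's.

```python
import sympy as sp

# Check that psi = a*U*x*sin(pi*r**2/(2*a**2)) satisfies
# psi_rr - psi_r/r + psi_xx = -r**2 * (pi**2/a**4) * psi  (swirl-free Hicks equation)
r, x, a, U = sp.symbols("r x a U", positive=True)
psi = a * U * x * sp.sin(sp.pi * r**2 / (2 * a**2))

lhs = sp.diff(psi, r, 2) - sp.diff(psi, r) / r + sp.diff(psi, x, 2)
rhs = -r**2 * (sp.pi**2 / a**4) * psi
print(sp.simplify(lhs - rhs))            # -> 0

# Radial velocity u_r = -(1/r) * d(psi)/dx equals -U at the wall r = a,
# i.e., uniform inward injection through the sidewall.
u_r = -sp.diff(psi, x) / r
print(sp.simplify(u_r.subs(r, a)))       # -> -U
```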
| Physical sciences | Fluid mechanics | Physics |
36660129 | https://en.wikipedia.org/wiki/Lifting%20equipment | Lifting equipment | Lifting equipment, also known as lifting gear, is a general term for any equipment that can be used to lift and lower loads. Types of lifting equipment include heavy machinery such as the patient lift, overhead cranes, forklifts, jacks, building cradles, and passenger lifts, and can also include smaller accessories such as chains, hooks, and rope. Generally, this equipment is used to move material that cannot be moved with manual labor. It is found in most work environments, such as warehouses, and is a requirement for most construction projects, such as bridges and buildings. It can also be used to move larger numbers of packages and goods, requiring fewer people to move material. Lifting equipment includes any form of equipment that is used for vertical lifting; equipment used to move material horizontally is not considered lifting equipment, nor is equipment designed only to support a load. As lifting equipment can be dangerous to use, it is a common subject of safety regulations in most countries, and heavy machinery usually requires certified operators to limit workplace injury.
Safety issues
Failure or misuse of heavy machinery can lead to severe or fatal injury, leading regulations to be one of the largest debates in labor laws across the world. Each country sets its own regulations, and enforces different aspects of workplace safety when using lifting equipment.
In the United States
The Occupational Safety and Health Administration sets regulations for all equipment. Contractors must uphold strict rules to ensure the safety of workers: all machinery is required to be developed by a certified engineer, contractors must follow manufacturer procedures, all users must be professionally trained before operating equipment, and equipment must be inspected regularly.
In the United Kingdom
The Health and Safety Executive sets regulations on equipment in the United Kingdom, under the Lifting Operations and Lifting Equipment Regulations. These regulations require that equipment be registered on a Statutory Inspection Report Form, be adequate for the task, be subject to routine inspection, and that its use be properly planned.
Working load limit
Lifting equipment can be assigned a Working Load Limit (WLL) in the interests of avoiding failure; the Working Load Limit is calculated by dividing the Minimum Breaking Load of the equipment by a safety factor. WLL as a concept is not restricted to lifting, being also relevant for mooring ropes. Minimum Breaking Load is also known as Minimum Breaking Strength or Minimum Breaking Force. The WLL of a rope is usually much smaller than its Minimum Breaking Load. WLL is sometimes known as Safe Working Load, but this alternative term is often avoided because it connotes a degree of safety that may not be guaranteed.
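A worked example of the definition (the breaking load and safety factor below are illustrative numbers, not values from any standard or manufacturer):

```python
# WLL = Minimum Breaking Load / safety factor, per the definition above.
def working_load_limit(minimum_breaking_load_kg, safety_factor):
    if safety_factor <= 1:
        raise ValueError("safety factor must be greater than 1")
    return minimum_breaking_load_kg / safety_factor

# A hypothetical rope with a 10,000 kg Minimum Breaking Load and a
# safety factor of 5 would be rated to lift at most 2,000 kg.
print(working_load_limit(10_000, 5))  # -> 2000.0
```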
| Technology | Tools | null |
32472154 | https://en.wikipedia.org/wiki/Deep%20learning | Deep learning | Deep learning is a subset of machine learning that focuses on utilizing neural networks to perform tasks such as classification, regression, and representation learning. The field takes inspiration from biological neuroscience and is centered around stacking artificial neurons into layers and "training" them to process data. The adjective "deep" refers to the use of multiple layers (ranging from three to several hundred or thousands) in the network. Methods used can be either supervised, semi-supervised or unsupervised.
Some common deep learning network architectures include fully connected networks, deep belief networks, recurrent neural networks, convolutional neural networks, generative adversarial networks, transformers, and neural radiance fields. These architectures have been applied to fields including computer vision, speech recognition, natural language processing, machine translation, bioinformatics, drug design, medical image analysis, climate science, material inspection and board game programs, where they have produced results comparable to and in some cases surpassing human expert performance.
Early forms of neural networks were inspired by information processing and distributed communication nodes in biological systems, particularly the human brain. However, current neural networks do not intend to model the brain function of organisms, and are generally seen as low-quality models for that purpose.
Overview
Most modern deep learning models are based on multi-layered neural networks such as convolutional neural networks and transformers, although they can also include propositional formulas or latent variables organized layer-wise in deep generative models such as the nodes in deep belief networks and deep Boltzmann machines.
Fundamentally, deep learning refers to a class of machine learning algorithms in which a hierarchy of layers is used to transform input data into a progressively more abstract and composite representation. For example, in an image recognition model, the raw input may be an image (represented as a tensor of pixels). The first representational layer may attempt to identify basic shapes such as lines and circles, the second layer may compose and encode arrangements of edges, the third layer may encode a nose and eyes, and the fourth layer may recognize that the image contains a face.
Importantly, a deep learning process can learn which features to optimally place at which level on its own. Prior to deep learning, machine learning techniques often involved hand-crafted feature engineering to transform the data into a more suitable representation for a classification algorithm to operate on. In the deep learning approach, features are not hand-crafted and the model discovers useful feature representations from the data automatically. This does not eliminate the need for hand-tuning; for example, varying numbers of layers and layer sizes can provide different degrees of abstraction.
The word "deep" in "deep learning" refers to the number of layers through which the data is transformed. More precisely, deep learning systems have a substantial credit assignment path (CAP) depth. The CAP is the chain of transformations from input to output. CAPs describe potentially causal connections between input and output. For a feedforward neural network, the depth of the CAPs is that of the network and is the number of hidden layers plus one (as the output layer is also parameterized). For recurrent neural networks, in which a signal may propagate through a layer more than once, the CAP depth is potentially unlimited. No universally agreed-upon threshold of depth divides shallow learning from deep learning, but most researchers agree that deep learning involves CAP depth higher than two. CAP of depth two has been shown to be a universal approximator in the sense that it can emulate any function. Beyond that, more layers do not add to the function approximator ability of the network. Deep models (CAP > two) are able to extract better features than shallow models and hence, extra layers help in learning the features effectively.
Deep learning architectures can be constructed with a greedy layer-by-layer method. Deep learning helps to disentangle these abstractions and pick out which features improve performance.
Deep learning algorithms can be applied to unsupervised learning tasks. This is an important benefit because unlabeled data are more abundant than the labeled data. Examples of deep structures that can be trained in an unsupervised manner are deep belief networks.
The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986, and to artificial neural networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons, although the history of its appearance is apparently more complicated.
Interpretations
Deep neural networks are generally interpreted in terms of the universal approximation theorem or probabilistic inference.
The classic universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions. In 1989, the first proof was published by George Cybenko for sigmoid activation functions and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik. Recent work also showed that universal approximation also holds for non-bounded activation functions such as Kunihiko Fukushima's rectified linear unit.
The universal approximation theorem for deep neural networks concerns the capacity of networks with bounded width but the depth is allowed to grow. Lu et al. proved that if the width of a deep neural network with ReLU activation is strictly larger than the input dimension, then the network can approximate any Lebesgue integrable function; if the width is smaller or equal to the input dimension, then a deep neural network is not a universal approximator.
The probabilistic interpretation derives from the field of machine learning. It features inference, as well as the optimization concepts of training and testing, related to fitting and generalization, respectively. More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function. The probabilistic interpretation led to the introduction of dropout as regularizer in neural networks. The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.
History
Before 1980
There are two types of artificial neural network (ANN): feedforward neural network (FNN) or multilayer perceptron (MLP) and recurrent neural networks (RNN). RNNs have cycles in their connectivity structure, FNNs don't. In the 1920s, Wilhelm Lenz and Ernst Ising created the Ising model which is essentially a non-learning RNN architecture consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive. His learning RNN was republished by John Hopfield in 1982. Other early recurrent neural networks were published by Kaoru Nakano in 1971. Already in 1948, Alan Turing produced work on "Intelligent Machinery" that was not published in his lifetime, containing "ideas related to artificial evolution and learning RNNs".
Frank Rosenblatt (1958) proposed the perceptron, an MLP with 3 layers: an input layer, a hidden layer with randomized weights that did not learn, and an output layer. He later published a 1962 book that also introduced variants and computer experiments, including a version with four-layer perceptrons "with adaptive preterminal networks" where the last two layers have learned weights (here he credits H. D. Block and B. W. Knight). The book cites an earlier network by R. D. Joseph (1960) "functionally equivalent to a variation of" this four-layer system (the book mentions Joseph over 30 times). Joseph's network might therefore be considered the origin of proper adaptive multilayer perceptrons with learning hidden units, but its learning algorithm was not a functional one, and it fell into oblivion.
The first working deep learning algorithm was the Group method of data handling, a method to train arbitrarily deep neural networks, published by Alexey Ivakhnenko and Lapa in 1965. They regarded it as a form of polynomial regression, or a generalization of Rosenblatt's perceptron. A 1971 paper described a deep network with eight layers trained by this method, which is based on layer by layer training through regression analysis. Superfluous hidden units are pruned using a separate validation set. Since the activation functions of the nodes are Kolmogorov-Gabor polynomials, these were also the first deep networks with multiplicative units or "gates".
The first deep learning multilayer perceptron trained by stochastic gradient descent was published in 1967 by Shun'ichi Amari. In computer experiments conducted by Amari's student Saito, a five-layer MLP with two modifiable layers learned internal representations to classify non-linearly separable pattern classes. Subsequent developments in hardware and hyperparameter tunings have made end-to-end stochastic gradient descent the currently dominant training technique.
In 1969, Kunihiko Fukushima introduced the ReLU (rectified linear unit) activation function. The rectifier has become the most popular activation function for deep learning.
Deep learning architectures for convolutional neural networks (CNNs) with convolutional layers and downsampling layers began with the Neocognitron introduced by Kunihiko Fukushima in 1979, though not trained by backpropagation.
Backpropagation is an efficient application of the chain rule derived by Gottfried Wilhelm Leibniz in 1673 to networks of differentiable nodes. The terminology "back-propagating errors" was actually introduced in 1962 by Rosenblatt, but he did not know how to implement this, although Henry J. Kelley had a continuous precursor of backpropagation in 1960 in the context of control theory. The modern form of backpropagation was first published in Seppo Linnainmaa's master thesis (1970). G.M. Ostrovski et al. republished it in 1971. Paul Werbos applied backpropagation to neural networks in 1982 (his 1974 PhD thesis, reprinted in a 1994 book, did not yet describe the algorithm). In 1986, David E. Rumelhart et al. popularised backpropagation but did not cite the original work.
1980s-2000s
The time delay neural network (TDNN) was introduced in 1987 by Alex Waibel to apply CNN to phoneme recognition. It used convolutions, weight sharing, and backpropagation. In 1988, Wei Zhang applied a backpropagation-trained CNN to alphabet recognition.
In 1989, Yann LeCun et al. created a CNN called LeNet for recognizing handwritten ZIP codes on mail. Training required 3 days. In 1990, Wei Zhang implemented a CNN on optical computing hardware. In 1991, a CNN was applied to medical image object segmentation and breast cancer detection in mammograms. LeNet-5 (1998), a 7-level CNN by Yann LeCun et al., that classifies digits, was applied by several banks to recognize hand-written numbers on checks digitized in 32x32 pixel images.
Recurrent neural networks (RNN) were further developed in the 1980s. Recurrence is used for sequence processing, and when a recurrent network is unrolled, it mathematically resembles a deep feedforward layer. Consequently, they have similar properties and issues, and their developments had mutual influences. In RNN, two early influential works were the Jordan network (1986) and the Elman network (1990), which applied RNN to study problems in cognitive psychology.
In the 1980s, backpropagation did not work well for deep learning with long credit assignment paths. To overcome this problem, in 1991, Jürgen Schmidhuber proposed a hierarchy of RNNs pre-trained one level at a time by self-supervised learning where each RNN tries to predict its own next input, which is the next unexpected input of the RNN below. This "neural history compressor" uses predictive coding to learn internal representations at multiple self-organizing time scales. This can substantially facilitate downstream deep learning. The RNN hierarchy can be collapsed into a single RNN, by distilling a higher level chunker network into a lower level automatizer network. In 1993, a neural history compressor solved a "Very Deep Learning" task that required more than 1000 subsequent layers in an RNN unfolded in time. The "P" in ChatGPT refers to such pre-training.
Sepp Hochreiter's diploma thesis (1991) implemented the neural history compressor, and identified and analyzed the vanishing gradient problem. Hochreiter proposed recurrent residual connections to solve the vanishing gradient problem. This led to the long short-term memory (LSTM), published in 1995. LSTM can learn "very deep learning" tasks with long credit assignment paths that require memories of events that happened thousands of discrete time steps before. That LSTM was not yet the modern architecture; the modern form required the "forget gate", introduced in 1999, which became the standard RNN architecture.
In 1991, Jürgen Schmidhuber also published adversarial neural networks that contest with each other in the form of a zero-sum game, where one network's gain is the other network's loss. The first network is a generative model that models a probability distribution over output patterns. The second network learns by gradient descent to predict the reactions of the environment to these patterns. This was called "artificial curiosity". In 2014, this principle was used in generative adversarial networks (GANs).
During 1985–1995, inspired by statistical mechanics, several architectures and methods were developed by Terry Sejnowski, Peter Dayan, Geoffrey Hinton, and others, including the Boltzmann machine, restricted Boltzmann machine, Helmholtz machine, and the wake-sleep algorithm. These were designed for unsupervised learning of deep generative models. However, those were more computationally expensive compared to backpropagation. The Boltzmann machine learning algorithm, published in 1985, was briefly popular before being eclipsed by the backpropagation algorithm in 1986. A 1988 network became state of the art in protein structure prediction, an early application of deep learning to bioinformatics.
Both shallow and deep learning (e.g., recurrent nets) of ANNs for speech recognition have been explored for many years. These methods never outperformed non-uniform internal-handcrafting Gaussian mixture model/Hidden Markov model (GMM-HMM) technology based on generative models of speech trained discriminatively. Key difficulties have been analyzed, including gradient diminishing and weak temporal correlation structure in neural predictive models. Additional difficulties were the lack of training data and limited computing power.
Most speech recognition researchers moved away from neural nets to pursue generative modeling. An exception was at SRI International in the late 1990s. Funded by the US government's NSA and DARPA, SRI researched in speech and speaker recognition. The speaker recognition team led by Larry Heck reported significant success with deep neural networks in speech processing in the 1998 NIST Speaker Recognition benchmark. It was deployed in the Nuance Verifier, representing the first major industrial application of deep learning.
The principle of elevating "raw" features over hand-crafted optimization was first explored successfully in the architecture of deep autoencoder on the "raw" spectrogram or linear filter-bank features in the late 1990s, showing its superiority over the Mel-Cepstral features that contain stages of fixed transformation from spectrograms. The raw features of speech, waveforms, later produced excellent larger-scale results.
2000s
Neural networks entered a lull, and simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) became the preferred choices in the 1990s and 2000s, because of artificial neural networks' computational cost and a lack of understanding of how the brain wires its biological networks.
In 2003, LSTM became competitive with traditional speech recognizers on certain tasks. In 2006, Alex Graves, Santiago Fernández, Faustino Gomez, and Schmidhuber combined it with connectionist temporal classification (CTC) in stacks of LSTMs. In 2009, it became the first RNN to win a pattern recognition contest, in connected handwriting recognition.
In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh introduced deep belief networks, developed for generative modeling. They are trained by training one restricted Boltzmann machine, then freezing it and training another one on top of the first one, and so on, then optionally fine-tuning with supervised backpropagation. They could model high-dimensional probability distributions, such as the distribution of MNIST images, but convergence was slow.
The impact of deep learning in industry began in the early 2000s, when CNNs already processed an estimated 10% to 20% of all the checks written in the US, according to Yann LeCun. Industrial applications of deep learning to large-scale speech recognition started around 2010.
The 2009 NIPS Workshop on Deep Learning for Speech Recognition was motivated by the limitations of deep generative models of speech, and the possibility that given more capable hardware and large-scale data sets that deep neural nets might become practical. It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets. However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems. The nature of the recognition errors produced by the two types of systems was characteristically different, offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems. Analysis around 2009–2010, contrasting the GMM (and other generative speech models) vs. DNN models, stimulated early industrial investment in deep learning for speech recognition. That analysis was done with comparable performance (less than 1.5% in error rate) between discriminative DNNs and generative models.
In 2010, researchers extended deep learning from TIMIT to large vocabulary speech recognition, by adopting large output layers of the DNN based on context-dependent HMM states constructed by decision trees.
Deep learning revolution
The deep learning revolution started around CNN- and GPU-based computer vision.
Although CNNs trained by backpropagation had been around for decades, and GPU implementations of NNs for years, including CNNs, faster implementations of CNNs on GPUs were needed to progress on computer vision. Later, as deep learning became widespread, specialized hardware and algorithm optimizations were developed specifically for deep learning.
A key advance for the deep learning revolution was hardware advances, especially GPUs. Some early work dated back to 2004. In 2009, Raina, Madhavan, and Andrew Ng reported a 100-million-parameter deep belief network trained on 30 Nvidia GeForce GTX 280 GPUs, an early demonstration of GPU-based deep learning. They reported up to 70 times faster training.
In 2011, a CNN named DanNet by Dan Ciresan, Ueli Meier, Jonathan Masci, Luca Maria Gambardella, and Jürgen Schmidhuber achieved for the first time superhuman performance in a visual pattern recognition contest, outperforming traditional methods by a factor of 3. It then won more contests. They also showed how max-pooling CNNs on GPU improved performance significantly.
In 2012, Andrew Ng and Jeff Dean created an FNN that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.
In October 2012, AlexNet by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton won the large-scale ImageNet competition by a significant margin over shallow machine learning methods. Further incremental improvements included the VGG-16 network by Karen Simonyan and Andrew Zisserman and Google's Inceptionv3.
The success in image classification was then extended to the more challenging task of generating descriptions (captions) for images, often as a combination of CNNs and LSTMs.
In 2014, the state of the art was training "very deep neural networks" with 20 to 30 layers. Stacking too many layers led to a steep reduction in training accuracy, known as the "degradation" problem. In 2015, two techniques were developed to train very deep networks: the Highway Network was published in May 2015, and the residual neural network (ResNet) in December 2015. ResNet behaves like an open-gated Highway Net.
Around the same time, deep learning started impacting the field of art. Early examples included Google DeepDream (2015), and neural style transfer (2015), both of which were based on pretrained image classification neural networks, such as VGG-19.
The generative adversarial network (GAN), introduced by Ian Goodfellow et al. in 2014 and based on Jürgen Schmidhuber's principle of artificial curiosity, became state of the art in generative modeling during the 2014–2018 period. Excellent image quality was achieved by Nvidia's StyleGAN (2018), based on the Progressive GAN by Tero Karras et al., in which the GAN generator is grown from small to large scale in a pyramidal fashion. Image generation by GANs reached popular success and provoked discussions concerning deepfakes. Diffusion models (2015) have since eclipsed GANs in generative modeling, with systems such as DALL·E 2 (2022) and Stable Diffusion (2022).
In 2015, Google's speech recognition improved by 49% with an LSTM-based model, which the company made available through Google Voice Search on smartphones.
In 2017, Topological deep learning was introduced by integrating topological data analysis and convolutional neural networks.
Topological deep learning surpasses competing methods in predicting protein-ligand binding affinities and protein stability changes caused by mutations.
In 2017-2019, mathematical deep learning achieved first place in multiple categories of the D3R Grand Challenges, an annual competition series focused on computer-aided drug design.
Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision and automatic speech recognition (ASR). Results on commonly used evaluation sets such as TIMIT (ASR) and MNIST (image classification), as well as a range of large-vocabulary speech recognition tasks, have steadily improved. Convolutional neural networks were superseded for ASR by LSTMs, but they remain more successful in computer vision.
Yoshua Bengio, Geoffrey Hinton and Yann LeCun were awarded the 2018 Turing Award for "conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing".
Neural networks
Artificial neural networks (ANNs) or connectionist systems are computing systems inspired by the biological neural networks that constitute animal brains. Such systems learn (progressively improve their ability) to do tasks by considering examples, generally without task-specific programming. For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as "cat" or "no cat" and using the analytic results to identify cats in other images. They have found most use in applications difficult to express with a traditional computer algorithm using rule-based programming.
An ANN is based on a collection of connected units called artificial neurons, (analogous to biological neurons in a biological brain). Each connection (synapse) between neurons can transmit a signal to another neuron. The receiving (postsynaptic) neuron can process the signal(s) and then signal downstream neurons connected to it. Neurons may have state, generally represented by real numbers, typically between 0 and 1. Neurons and synapses may also have a weight that varies as learning proceeds, which can increase or decrease the strength of the signal that it sends downstream.
Typically, neurons are organized in layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first (input), to the last (output) layer, possibly after traversing the layers multiple times.
The original goal of the neural network approach was to solve problems in the same way that a human brain would. Over time, attention focused on matching specific mental abilities, leading to deviations from biology such as backpropagation, or passing information in the reverse direction and adjusting the network to reflect that information.
Neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.
As of 2017, neural networks typically have a few thousand to a few million units and millions of connections. Despite this number being several orders of magnitude less than the number of neurons in a human brain, these networks can perform many tasks at a level beyond that of humans (e.g., recognizing faces, or playing "Go").
Deep neural networks
A deep neural network (DNN) is an artificial neural network with multiple layers between the input and output layers. There are different types of neural networks but they always consist of the same components: neurons, synapses, weights, biases, and functions. These components as a whole function in a way that mimics functions of the human brain, and can be trained like any other ML algorithm.
For example, a DNN that is trained to recognize dog breeds will go over the given image and calculate the probability that the dog in the image is a certain breed. The user can review the results and select which probabilities the network should display (above a certain threshold, etc.) and return the proposed label. Each mathematical manipulation as such is considered a layer, and complex DNN have many layers, hence the name "deep" networks.
DNNs can model complex non-linear relationships. DNN architectures generate compositional models where the object is expressed as a layered composition of primitives. The extra layers enable composition of features from lower layers, potentially modeling complex data with fewer units than a similarly performing shallow network. For instance, it was proved that sparse multivariate polynomials are exponentially easier to approximate with DNNs than with shallow networks.
Deep architectures include many variants of a few basic approaches. Each architecture has found success in specific domains. It is not always possible to compare the performance of multiple architectures, unless they have been evaluated on the same data sets.
DNNs are typically feedforward networks in which data flows from the input layer to the output layer without looping back. At first, the DNN creates a map of virtual neurons and assigns random numerical values, or "weights", to connections between them. The weights and inputs are multiplied and return an output between 0 and 1. If the network did not accurately recognize a particular pattern, an algorithm would adjust the weights. That way the algorithm can make certain parameters more influential, until it determines the correct mathematical manipulation to fully process the data.
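That loop of random initial weights, squashed outputs, and error-driven adjustment can be illustrated in a few lines of NumPy. The sketch below is a toy demonstration under arbitrary assumptions (layer sizes, learning rate, a single training example), not a description of any production system:

```python
import numpy as np

# Toy feedforward network: random initial weights, outputs in (0, 1),
# and weights adjusted by gradient descent when the output is wrong.
rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=(4, 1))            # one input with 4 features (made up)
target = np.array([[1.0]])             # desired output
W1 = rng.normal(size=(3, 4))           # input -> hidden weights
W2 = rng.normal(size=(1, 3))           # hidden -> output weights

for _ in range(100):
    h = sigmoid(W1 @ x)                # hidden activations
    y = sigmoid(W2 @ h)                # output between 0 and 1
    dy = (y - target) * y * (1 - y)    # output error signal
    dh = (W2.T @ dy) * h * (1 - h)     # backpropagated hidden error
    W2 -= 0.5 * dy @ h.T               # adjust weights (learning rate 0.5)
    W1 -= 0.5 * dh @ x.T

print(float(y))                        # moves toward the target of 1.0
```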
Recurrent neural networks, in which data can flow in any direction, are used for applications such as language modeling. Long short-term memory is particularly effective for this use.
Convolutional neural networks (CNNs) are used in computer vision. CNNs also have been applied to acoustic modeling for automatic speech recognition (ASR).
Challenges
As with ANNs, many issues can arise with naively trained DNNs. Two common issues are overfitting and computation time.
DNNs are prone to overfitting because of the added layers of abstraction, which allow them to model rare dependencies in the training data. Regularization methods such as Ivakhnenko's unit pruning, weight decay (ℓ2 regularization), or sparsity (ℓ1 regularization) can be applied during training to combat overfitting. Alternatively, dropout regularization randomly omits units from the hidden layers during training; this helps to exclude rare dependencies. Another interesting recent development is research into models of just enough complexity through an estimation of the intrinsic complexity of the task being modelled. This approach has been successfully applied for multivariate time series prediction tasks such as traffic prediction. Finally, data can be augmented via methods such as cropping and rotating such that smaller training sets can be increased in size to reduce the chances of overfitting.
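Two of the regularizers named above, weight decay and dropout, can be sketched directly. The constants below (learning rate, decay coefficient, drop probability) are arbitrary illustrative values:

```python
import numpy as np

# Illustrative weight-decay update and inverted-dropout mask; all values
# are arbitrary and the "gradient" is faked for the demonstration.
rng = np.random.default_rng(1)
W = rng.normal(size=(3, 4))            # some layer's weights
grad = rng.normal(size=(3, 4))         # stand-in for a backprop gradient
lr, decay = 0.1, 0.01

# L2 weight decay: every update also shrinks the weights toward zero.
W -= lr * (grad + decay * W)

# Dropout: randomly silence 20% of hidden units during training; the
# 1/0.8 rescaling keeps the expected activation unchanged (inverted dropout).
h = rng.normal(size=(3, 1))
mask = (rng.random(h.shape) > 0.2) / 0.8
print((h * mask).ravel())
```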
DNNs must consider many training parameters, such as the size (number of layers and number of units per layer), the learning rate, and initial weights. Sweeping through the parameter space for optimal parameters may not be feasible due to the cost in time and computational resources. Various tricks, such as batching (computing the gradient on several training examples at once rather than individual examples) speed up computation. Large processing capabilities of many-core architectures (such as GPUs or the Intel Xeon Phi) have produced significant speedups in training, because of the suitability of such processing architectures for the matrix and vector computations.
Alternatively, engineers may look for other types of neural networks with more straightforward and convergent training algorithms. CMAC (cerebellar model articulation controller) is one such kind of neural network. It doesn't require learning rates or randomized initial weights. The training process can be guaranteed to converge in one step with a new batch of data, and the computational complexity of the training algorithm is linear with respect to the number of neurons involved.
Hardware
Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method for training large-scale commercial cloud AI. OpenAI estimated the hardware computation used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017) and found a 300,000-fold increase in the amount of computation required, with a doubling-time trendline of 3.4 months.
Special electronic circuits called deep learning processors were designed to speed up deep learning algorithms. Deep learning processors include neural processing units (NPUs) in Huawei cellphones and cloud computing servers such as tensor processing units (TPU) in the Google Cloud Platform. Cerebras Systems has also built a dedicated system to handle large deep learning models, the CS-2, based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2).
Atomically thin semiconductors are considered promising for energy-efficient deep learning hardware where the same basic device structure is used for both logic operations and data storage.
In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).
In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing. The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds. Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.
Applications
Automatic speech recognition
Large-scale automatic speech recognition is the first and most convincing successful case of deep learning. LSTM RNNs can learn "Very Deep Learning" tasks that involve multi-second intervals containing speech events separated by thousands of discrete time steps, where one time step corresponds to about 10 ms. LSTM with forget gates is competitive with traditional speech recognizers on certain tasks.
The initial success in speech recognition was based on small-scale recognition tasks using TIMIT. The data set contains 630 speakers from eight major dialects of American English, where each speaker reads 10 sentences. Its small size lets many configurations be tried. More importantly, the TIMIT task concerns phone-sequence recognition, which, unlike word-sequence recognition, allows weak phone bigram language models. This lets the strength of the acoustic-modeling aspects of speech recognition be more easily analyzed. Error rates on this task, including these early results and measured as percent phone error rate (PER), have been summarized since 1991.
The debut of DNNs for speaker recognition in the late 1990s and for speech recognition around 2009–2011, and of LSTM around 2003–2007, accelerated progress in eight major areas:
Scale-up/out and accelerated DNN training and decoding
Sequence discriminative training
Feature processing by deep models with solid understanding of the underlying mechanisms
Adaptation of DNNs and related deep models
Multi-task and transfer learning by DNNs and related deep models
CNNs and how to design them to best exploit domain knowledge of speech
RNN and its rich LSTM variants
Other types of deep models including tensor-based models and integrated deep generative/discriminative models.
All major commercial speech recognition systems (e.g., Microsoft Cortana, Xbox, Skype Translator, Amazon Alexa, Google Now, Apple Siri, Baidu and iFlyTek voice search, and a range of Nuance speech products) are based on deep learning.
Image recognition
A common evaluation set for image classification is the MNIST database, which is composed of handwritten digits and includes 60,000 training examples and 10,000 test examples. As with TIMIT, its small size lets users test multiple configurations. A comprehensive list of results on this set is available.
Deep learning-based image recognition has become "superhuman", producing more accurate results than human contestants. This first occurred in 2011 with the recognition of traffic signs, and in 2014 with the recognition of human faces.
Deep learning-trained vehicles now interpret 360° camera views. Another example is Facial Dysmorphology Novel Analysis (FDNA) used to analyze cases of human malformation connected to a large database of genetic syndromes.
Visual art processing
Closely related to the progress that has been made in image recognition is the increasing application of deep learning techniques to various visual art tasks. DNNs have proven themselves capable, for example, of
identifying the style period of a given painting
Neural Style Transfer capturing the style of a given artwork and applying it in a visually pleasing manner to an arbitrary photograph or video
generating striking imagery based on random visual input fields.
Natural language processing
Neural networks have been used for implementing language models since the early 2000s. LSTM helped to improve machine translation and language modeling.
Other key techniques in this field are negative sampling and word embedding. Word embedding, such as word2vec, can be thought of as a representational layer in a deep learning architecture that transforms an atomic word into a positional representation of the word relative to other words in the dataset; the position is represented as a point in a vector space. Using word embedding as an RNN input layer allows the network to parse sentences and phrases using an effective compositional vector grammar. A compositional vector grammar can be thought of as probabilistic context free grammar (PCFG) implemented by an RNN. Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing. Deep neural architectures provide the best results for constituency parsing, sentiment analysis, information retrieval, spoken language understanding, machine translation, contextual entity linking, writing style recognition, named-entity recognition (token classification), text classification, and others.
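As a minimal sketch of the vector-space idea (the toy vocabulary and values below are chosen purely for illustration; real embeddings such as word2vec are learned from large corpora):

    import numpy as np

    # Toy embedding table: each row is a word's position in a 3-dimensional
    # vector space. Real systems learn hundreds of dimensions from data.
    vocab = {"king": 0, "queen": 1, "apple": 2}
    E = np.array([[0.90, 0.80, 0.10],   # "king"
                  [0.85, 0.82, 0.12],   # "queen"
                  [0.10, 0.20, 0.95]])  # "apple"

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    king, queen, apple = E[vocab["king"]], E[vocab["queen"]], E[vocab["apple"]]
    print(cosine(king, queen))  # near 1: related words sit close together
    print(cosine(king, apple))  # much lower: unrelated words sit far apart

An RNN that receives such vectors as its input layer operates on word positions rather than on atomic symbols, which is what makes compositional parsing of sentences and phrases possible.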
Recent developments generalize word embedding to sentence embedding.
Google Translate (GT) uses a large end-to-end long short-term memory (LSTM) network. Google Neural Machine Translation (GNMT) uses an example-based machine translation method in which the system "learns from millions of examples". It translates "whole sentences at a time, rather than pieces". Google Translate supports over one hundred languages. The network encodes the "semantics of the sentence rather than simply memorizing phrase-to-phrase translations". GT uses English as an intermediate between most language pairs.
Drug discovery and toxicology
A large percentage of candidate drugs fail to win regulatory approval. These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects. Research has explored the use of deep learning to predict the biomolecular targets, off-targets, and toxic effects of environmental chemicals in nutrients, household products and drugs.
The integration of advanced mathematics, such as persistent homology and graph theory, with deep neural networks has led to successes in drug scoring and pose prediction.
AtomNet is a deep learning system for structure-based rational drug design. AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus and multiple sclerosis.
In 2017, graph neural networks were used for the first time to predict various properties of molecules in a large toxicology data set. In 2019, generative neural networks were used to produce molecules that were validated experimentally in mice.
Customer relationship management
Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables. The estimated value function was shown to have a natural interpretation as customer lifetime value.
Recommendation systems
Recommendation systems have used deep learning to extract meaningful features for a latent factor model for content-based music and journal recommendations. Multi-view deep learning has been applied for learning user preferences from multiple domains. The model uses a hybrid collaborative and content-based approach and enhances recommendations in multiple tasks.
Bioinformatics
An autoencoder ANN was used in bioinformatics to predict gene ontology annotations and gene-function relationships.
In medical informatics, deep learning was used to predict sleep quality from wearable data and to predict health complications from electronic health record data.
Deep neural networks have shown unparalleled performance in predicting protein structure from the sequence of its amino acids. In 2020, AlphaFold, a deep-learning-based system, achieved a level of accuracy significantly higher than all previous computational methods.
Deep Neural Network Estimations
Deep neural networks can be used to estimate the entropy of a stochastic process; the method is called the Neural Joint Entropy Estimator (NJEE). Such an estimation provides insight into the effects of input random variables on a target random variable. Practically, the DNN is trained as a classifier that maps an input vector or matrix X to an output probability distribution over the possible classes of a random variable Y, given input X. For example, in image classification tasks, the NJEE maps a vector of pixels' color values to probabilities over possible image classes. In practice, the probability distribution of Y is obtained by a softmax layer with a number of nodes equal to the alphabet size of Y. NJEE uses continuously differentiable activation functions, so that the conditions for the universal approximation theorem hold. This method has been shown to provide a strongly consistent estimator and to outperform other methods in the case of large alphabet sizes. A sketch of the core estimation step follows.
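A minimal sketch of that estimation step, assuming a classifier has already been trained (the function and variable names are hypothetical): the average negative log-probability that the model assigns to the true class, i.e. the cross-entropy loss on held-out data, upper-bounds the conditional entropy H(Y|X) in nats and approaches it as the classifier improves.

    import numpy as np

    def conditional_entropy_estimate(probs, labels):
        """Estimate H(Y|X) in nats from a trained classifier's outputs.

        probs:  (n_samples, alphabet_size) softmax outputs for Y given X.
        labels: (n_samples,) true classes of Y.
        """
        # Probability the model assigned to the observed class of each sample.
        p_true = probs[np.arange(len(labels)), labels]
        # Average negative log-probability, i.e. the empirical cross-entropy;
        # the small epsilon guards against log(0).
        return -np.mean(np.log(p_true + 1e-12))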
Medical image analysis
Deep learning has been shown to produce competitive results in medical applications such as cancer cell classification, lesion detection, organ segmentation and image enhancement. Modern deep learning tools can detect various diseases with high accuracy and can help specialists improve diagnostic efficiency.
Mobile advertising
Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and analyzed before a target segment can be created and used in ad serving by any ad server. Deep learning has been used to interpret large, many-dimensioned advertising datasets. Many data points are collected during the request/serve/click internet advertising cycle. This information can form the basis of machine learning to improve ad selection.
Image restoration
Deep learning has been successfully applied to inverse problems such as denoising, super-resolution, inpainting, and film colorization. These applications include learning methods such as "Shrinkage Fields for Effective Image Restoration" which trains on an image dataset, and Deep Image Prior, which trains on the image that needs restoration.
Financial fraud detection
Deep learning is being successfully applied to financial fraud detection, tax evasion detection, and anti-money laundering.
Materials science
In November 2023, researchers at Google DeepMind and Lawrence Berkeley National Laboratory announced that they had developed an AI system known as GNoME. This system has contributed to materials science by discovering over 2 million new materials within a relatively short timeframe. GNoME employs deep learning techniques to efficiently explore potential material structures, achieving a significant increase in the identification of stable inorganic crystal structures. The system's predictions were validated through autonomous robotic experiments, demonstrating a noteworthy success rate of 71%. The data of newly discovered materials is publicly available through the Materials Project database, offering researchers the opportunity to identify materials with desired properties for various applications. This development has implications for the future of scientific discovery and the integration of AI in material science research, potentially expediting material innovation and reducing costs in product development. The use of AI and deep learning suggests the possibility of minimizing or eliminating manual lab experiments and allowing scientists to focus more on the design and analysis of unique compounds.
Military
The United States Department of Defense applied deep learning to train robots in new tasks through observation.
Partial differential equations
Physics-informed neural networks have been used to solve partial differential equations in both forward and inverse problems in a data-driven manner. One example is reconstructing fluid flow governed by the Navier–Stokes equations. Using physics-informed neural networks does not require the often expensive mesh generation that conventional CFD methods rely on.
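In outline (a standard formulation of the physics-informed loss, not specific to any single study): for a PDE written as \mathcal{N}[u] = 0, a network u_\theta is trained by minimizing a loss that combines the data fit with the PDE residual evaluated at sampled collocation points,

    \mathcal{L}(\theta) = \frac{1}{N_d} \sum_{i=1}^{N_d} \lvert u_\theta(x_i) - u_i \rvert^2 + \frac{\lambda}{N_r} \sum_{j=1}^{N_r} \lvert \mathcal{N}[u_\theta](x_j) \rvert^2,

where \lambda is a weighting hyperparameter; the residual term plays the role that mesh-based discretization plays in conventional CFD solvers.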
Deep backward stochastic differential equation method
The deep backward stochastic differential equation method is a numerical method that combines deep learning with backward stochastic differential equations (BSDEs). It is particularly useful for solving high-dimensional problems in financial mathematics. By leveraging the powerful function-approximation capabilities of deep neural networks, deep BSDE addresses the computational challenges faced by traditional numerical methods in high-dimensional settings. Specifically, traditional methods like finite difference methods or Monte Carlo simulations often struggle with the curse of dimensionality, where computational cost increases exponentially with the number of dimensions. Deep BSDE methods, however, employ deep neural networks to approximate solutions of high-dimensional partial differential equations (PDEs), effectively reducing the computational burden.
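In outline (the standard forward-backward formulation underlying such methods, with symbols as conventionally defined rather than taken from any one source): the state X_t follows a forward SDE while the quantity of interest Y_t solves the associated BSDE,

    dX_t = \mu(t, X_t)\,dt + \sigma(t, X_t)\,dW_t, \qquad X_0 = x_0,
    -dY_t = f(t, X_t, Y_t, Z_t)\,dt - Z_t^{\top}\,dW_t, \qquad Y_T = g(X_T).

Deep BSDE methods treat the initial value Y_0 and the control process Z_t as outputs of neural networks, trained so that the simulated terminal value Y_T matches g(X_T); the solution of the corresponding high-dimensional PDE at the initial point is then read off as Y_0.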
In addition, the integration of Physics-informed neural networks (PINNs) into the deep BSDE framework enhances its capability by embedding the underlying physical laws directly into the neural network architecture. This ensures that the solutions not only fit the data but also adhere to the governing stochastic differential equations. PINNs leverage the power of deep learning while respecting the constraints imposed by the physical models, resulting in more accurate and reliable solutions for financial mathematics problems.
Image reconstruction
Image reconstruction is the reconstruction of the underlying images from image-related measurements. Several works have shown the superior performance of deep learning methods compared to analytical methods for various applications, e.g., spectral imaging and ultrasound imaging.
Weather prediction
Traditional weather prediction systems solve a very complex system of partial differential equations. GraphCast is a deep-learning-based model trained on a long history of weather data to predict how weather patterns change over time. It is able to predict weather conditions for up to 10 days globally, at a very detailed level, in under a minute, and with precision similar to state-of-the-art systems.
Epigenetic clock
An epigenetic clock is a biochemical test that can be used to measure age. Galkin et al. used deep neural networks to train an epigenetic aging clock of unprecedented accuracy using >6,000 blood samples. The clock uses information from 1,000 CpG sites and predicts that people with certain conditions (IBD, frontotemporal dementia, ovarian cancer, obesity) are older than healthy controls. The aging clock was planned to be released for public use in 2021 by Deep Longevity, an Insilico Medicine spinoff company.
Relation to human cognitive and brain development
Deep learning is closely related to a class of theories of brain development (specifically, neocortical development) proposed by cognitive neuroscientists in the early 1990s. These developmental theories were instantiated in computational models, making them predecessors of deep learning systems. These developmental models share the property that various proposed learning dynamics in the brain (e.g., a wave of nerve growth factor) support self-organization somewhat analogous to that of the neural networks utilized in deep learning models. Like the neocortex, neural networks employ a hierarchy of layered filters in which each layer considers information from a prior layer (or the operating environment) and then passes its output (and possibly the original input) to other layers. This process yields a self-organizing stack of transducers, well-tuned to their operating environment. A 1995 description stated, "...the infant's brain seems to organize itself under the influence of waves of so-called trophic-factors ... different regions of the brain become connected sequentially, with one layer of tissue maturing before another and so on until the whole brain is mature".
A variety of approaches have been used to investigate the plausibility of deep learning models from a neurobiological perspective. On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism. Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality. In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.
Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported. For example, the computations performed by deep learning units could be similar to those of actual neurons and neural populations. Similarly, the representations developed by deep learning models are similar to those measured in the primate visual system both at the single-unit and at the population levels.
Commercial activity
Facebook's AI lab performs tasks such as automatically tagging uploaded pictures with the names of the people in them.
Google's DeepMind Technologies developed a system capable of learning how to play Atari video games using only pixels as data input. In 2015 they demonstrated their AlphaGo system, which learned the game of Go well enough to beat a professional Go player. Google Translate uses a neural network to translate between more than 100 languages.
In 2017, Covariant.ai was launched, which focuses on integrating deep learning into factories.
In 2008, researchers at The University of Texas at Austin (UT) developed a machine learning framework called Training an Agent Manually via Evaluative Reinforcement, or TAMER, which proposed new methods for robots or computer programs to learn how to perform tasks by interacting with a human instructor. A successor algorithm called Deep TAMER was introduced in 2018 during a collaboration between the U.S. Army Research Laboratory (ARL) and UT researchers. Deep TAMER used deep learning to provide a robot with the ability to learn new tasks through observation. Using Deep TAMER, a robot learned a task with a human trainer by watching video streams or observing a human perform a task in person. The robot later practiced the task with the help of some coaching from the trainer, who provided feedback such as "good job" and "bad job".
Criticism and comment
Deep learning has attracted both criticism and comment, in some cases from outside the field of computer science.
Theory
A main criticism concerns the lack of theory surrounding some methods. Learning in the most common deep architectures is implemented using well-understood gradient descent. However, the theory surrounding other algorithms, such as contrastive divergence, is less clear (e.g., does it converge? If so, how fast? What is it approximating?). Deep learning methods are often looked at as a black box, with most confirmations done empirically rather than theoretically.
Others point out that deep learning should be looked at as a step towards realizing strong AI, not as an all-encompassing solution. Despite the power of deep learning methods, they still lack much of the functionality needed to realize this goal entirely. Research psychologist Gary Marcus noted:
Realistically, deep learning is only part of the larger challenge of building intelligent machines. Such techniques lack ways of representing causal relationships (...) have no obvious ways of performing logical inferences, and they are also still a long way from integrating abstract knowledge, such as information about what objects are, what they are for, and how they are typically used. The most powerful A.I. systems, like Watson (...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.
In further reference to the idea that artistic sensitivity might be inherent in relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20–30 layer) neural networks attempting to discern, within essentially random data, the images on which they were trained demonstrated a visual appeal: the original research notice received well over 1,000 comments and was, for a time, the most frequently accessed article on The Guardian's website.
Errors
Some deep learning architectures display problematic behaviors, such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images (2014) and misclassifying minuscule perturbations of correctly classified images (2013). Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component artificial general intelligence (AGI) architectures. These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar decompositions of observed entities and events. Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition and artificial intelligence (AI).
Cyber threat
As deep learning moves from the lab into the world, research and experience show that artificial neural networks are vulnerable to hacks and deception. By identifying patterns that these systems use to function, attackers can modify inputs to ANNs in such a way that the ANN finds a match that human observers would not recognize. For example, an attacker can make subtle changes to an image such that the ANN finds a match even though the image looks to a human nothing like the search target. Such manipulation is termed an "adversarial attack".
In 2016 researchers used one ANN to doctor images in trial and error fashion, identify another's focal points, and thereby generate images that deceived it. The modified images looked no different to human eyes. Another group showed that printouts of doctored images then photographed successfully tricked an image classification system. One defense is reverse image search, in which a possible fake image is submitted to a site such as TinEye that can then find other instances of it. A refinement is to search using only parts of the image, to identify images from which that piece may have been taken.
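For illustration, a standard gradient-based construction of such adversarial inputs is the fast gradient sign method (FGSM); this is not necessarily the procedure used in the works described above, and the model, pixel range, and perturbation size below are assumptions for the sketch:

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, eps=0.03):
        """Craft an adversarial image from a batched input tensor.

        model: any differentiable classifier (assumed to exist).
        eps:   perturbation size; small values are imperceptible to humans.
        Assumes pixel values are scaled to [0, 1].
        """
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Nudge every pixel slightly in the direction that increases the loss.
        adversarial = image + eps * image.grad.sign()
        return adversarial.clamp(0, 1).detach()

The resulting image typically looks identical to the original to a human observer, yet the classifier's output changes.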
Another group showed that certain psychedelic spectacles could fool a facial recognition system into thinking ordinary people were celebrities, potentially allowing one person to impersonate another. In 2017 researchers added stickers to stop signs and caused an ANN to misclassify them.
ANNs can, however, be further trained to detect attempts at deception, potentially leading attackers and defenders into an arms race similar to the one that already defines the malware defense industry. ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.
In 2016, another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address, and hypothesized that this could "serve as a stepping stone for further attacks (e.g., opening a web page hosting drive-by malware)".
In "data poisoning", false data is continually smuggled into a machine learning system's training set to prevent it from achieving mastery.
Data collection ethics
The deep learning systems that are trained using supervised learning often rely on data that is created and/or annotated by humans. It has been argued that not only low-paid clickwork (such as on Amazon Mechanical Turk) is regularly deployed for this purpose, but also implicit forms of human microwork that are often not recognized as such. The philosopher Rainer Mühlhoff distinguishes five types of "machinic capture" of human microwork to generate training data: (1) gamification (the embedding of annotation or computation tasks in the flow of a game), (2) "trapping and tracking" (e.g. CAPTCHAs for image recognition or click-tracking on Google search results pages), (3) exploitation of social motivations (e.g. tagging faces on Facebook to obtain labeled facial images), (4) information mining (e.g. by leveraging quantified-self devices such as activity trackers) and (5) clickwork.
| Technology | Artificial intelligence concepts | null |
45578566 | https://en.wikipedia.org/wiki/Vulkan | Vulkan | Vulkan is a low-level, low-overhead cross-platform API and open standard for 3D graphics and computing. It was intended to address the shortcomings of OpenGL, and allow developers more control over the GPU. It is designed to support a wide variety of GPUs, CPUs and operating systems, and it is also designed to work with modern multi-core CPUs.
Overview
Vulkan targets high-performance real-time 3D-graphics applications, such as video games and interactive media, and highly parallelized computing. Vulkan is intended to offer higher performance and more efficient CPU and GPU usage compared to the older OpenGL and Direct3D 11 APIs. It does so by providing a considerably lower-level API for the application than the older APIs, that more closely resembles how modern GPUs work.
Vulkan is comparable to Apple's Metal API and Microsoft's Direct3D 12. In addition to its lower CPU usage, Vulkan is designed to allow developers to better distribute work among multiple CPU cores.
Vulkan was first announced by the non-profit Khronos Group at GDC 2015. The Vulkan API was initially referred to as the "next generation OpenGL initiative", or "OpenGL next" by Khronos, but use of those names was discontinued when "Vulkan" was announced.
Vulkan is derived from and built upon components of AMD's Mantle API, which was donated by AMD to Khronos with the intent of giving Khronos a foundation on which to begin developing a low-level API that they could standardize across the industry.
Features
Vulkan is intended to provide a variety of advantages over other APIs as well as its predecessor, OpenGL. Vulkan offers lower overhead, more direct control over the GPU, and lower CPU usage. The overall concept and feature set of Vulkan is similar to concepts seen in Mantle and later adopted by Microsoft with Direct3D 12 and Apple with Metal.
Intended advantages of Vulkan over previous-generation APIs include the following:
Cross-platform
Vulkan is available on multiple modern operating systems and architectures, and provides a single API for both desktop and mobile graphics devices, whereas previously these were split between OpenGL and OpenGL ES respectively. Like OpenGL, and in contrast to Direct3D 12, the Vulkan API is not locked to a single OS or device form factor. Vulkan runs natively on Android, Linux, BSD Unix, QNX, Haiku, Nintendo Switch, Raspberry Pi, Stadia, Fuchsia, Tizen, and Windows 7, 8, 10, and 11.
MoltenVK provides freely licensed third-party support for macOS, iOS and tvOS by wrapping over Apple's Metal API.
Lower CPU usage
Vulkan reduces load on CPUs through the use of batching and other low-level optimizations, therefore reducing CPU workloads and leaving the CPU free to do more computation or rendering than would otherwise be possible.
Multi-threading friendly design
Direct3D 11 and OpenGL 4 were initially designed for use with single-core CPUs and were only later augmented for execution on multi-core CPUs. Even when application developers use these augmentations, the APIs regularly do not scale well on multi-core systems. Vulkan offers improved scalability on multi-core CPUs due to its modernized threading architecture.
Pre-compiled shaders
OpenGL uses the high-level language GLSL for writing shaders, which forces each OpenGL driver to implement its own GLSL compiler. This compiler runs at application runtime to translate the program's shaders into the GPU's machine code. In contrast, Vulkan drivers are supposed to ingest shaders already translated into an intermediate binary format called SPIR-V (Standard Portable Intermediate Representation), analogous to the binary format that HLSL shaders are compiled into in Direct3D. By allowing shader pre-compilation, application initialization speed is improved and a larger variety of shaders can be used per scene. A Vulkan driver only needs to perform GPU-specific optimization and code generation, resulting in easier driver maintenance and potentially smaller driver packages. Application developers can also more easily obfuscate proprietary shader code, because shaders are not stored directly as source code; however, tools are provided that can decompile SPIR-V to human-readable high-level code.
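As a brief sketch of the offline step (the file names are hypothetical; glslangValidator is the Khronos reference GLSL-to-SPIR-V compiler):

    import subprocess

    # Compile a GLSL vertex shader to SPIR-V ahead of time. The resulting
    # .spv binary is what a Vulkan application hands to vkCreateShaderModule,
    # so no GLSL compiler needs to run inside the driver at runtime.
    subprocess.run(
        ["glslangValidator", "-V", "shader.vert", "-o", "shader.vert.spv"],
        check=True,  # raise an error if compilation fails
    )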
Others
Vulkan provides unified management of compute kernels and graphical shaders, eliminating the need to use a separate compute API in conjunction with a graphics API.
Ray tracing is provided in a set of cross-vendor extensions, which together are analogous to the OptiX and DirectX Raytracing APIs. No such functionality is exposed in OpenGL.
Vulkan provides video acceleration for decoding and encoding of codecs such as H.264 and H.265.
OpenGL vs. Vulkan
In 2016 NVIDIA stated that "OpenGL is still a great option for a lot of use cases, as it comes at a much lower complexity and maintenance burden than Vulkan, while in many cases still providing great overall performance."
AMD states that "Vulkan supports close-to-metal control, enabling faster performance and better image quality across Windows 7, Windows 8.1, Windows 10, and Linux. No other graphics API offers the same powerful combination of OS compatibility, rendering features, and hardware efficiency."
Versions
Vulkan 1.0
Vulkan 1.0 was released in February 2016.
Vulkan 1.1
At SIGGRAPH 2016, Khronos announced that Vulkan would be getting support for automatic multi-GPU features, similar to what is offered by Direct3D 12. Multi-GPU support included in-API removes the need for SLI or Crossfire which requires graphics cards to be of the same model. API multi-GPU instead allows the API to intelligently split the workload among two or more completely different GPUs. For example, integrated GPUs included on the CPU can be used in conjunction with a high-end dedicated GPU for a slight performance boost.
On March 7, 2018, Vulkan 1.1 was released by the Khronos Group. This first major update to the API standardized several extensions, such as multi-view, device groups, cross-process and cross-API sharing, advanced compute functionality, HLSL support, and YCbCr support. At the same time, it also brought better compatibility with DirectX 12, explicit multi-GPU support, ray tracing support, and laid the groundwork for the next generation of GPUs. Alongside Vulkan 1.1, SPIR-V was updated to version 1.3.
Vulkan 1.2
On January 15, 2020, Vulkan 1.2 was released by the Khronos Group. This second major update to the API integrates 23 additional commonly-used proven Vulkan extensions into the base Vulkan standard. Some of the most important features are "timeline semaphores for easily managed synchronization", "a formal memory model to precisely define the semantics of synchronization and memory operations in different threads", and "descriptor indexing to enable reuse of descriptor layouts by multiple shaders". The additional features of Vulkan 1.2 improve its flexibility when it comes to implementing other graphics APIs on top of Vulkan, including "uniform buffer standard layout", "scalar block layout", and "separate stencil usage".
Vulkan 1.3
On January 25, 2022, Vulkan 1.3 was released by the Khronos Group. This third major update to the API integrates 23 additional commonly-used proven Vulkan extensions into the base Vulkan standard. Vulkan 1.3 focuses on reducing fragmentation by making the new features not optional in order for a device to be considered Vulkan 1.3 capable. The new features in Vulkan 1.3 include dynamic rendering, additional dynamic state, improved synchronization API, and device profiles.
Vulkan 1.4
On December 3, 2024, Vulkan 1.4 was released by the Khronos Group.
Planned features
When releasing OpenCL 2.2, the Khronos Group announced that OpenCL would converge where possible with Vulkan to enable OpenCL software deployment flexibility over both APIs. This has now been demonstrated by Adobe's Premiere Rush, which uses the clspv open-source compiler to compile significant amounts of OpenCL C kernel code to run on a Vulkan runtime for deployment on Android.
History
The Khronos Group began a project to create a next generation graphics API in July 2014 with a kickoff meeting at Valve. At SIGGRAPH 2014, the project was publicly announced with a call for participants.
According to the US Patent and Trademark Office, the trademark for Vulkan was filed on February 19, 2015.
Vulkan was formally named and announced at Game Developers Conference 2015, although speculation and rumors centered around a new API existed beforehand and referred to it as "glNext".
2015
In early 2015, LunarG (funded by Valve) developed and showcased a Linux driver for Intel that enabled Vulkan compatibility on the HD 4000 series integrated graphics, despite the open-source Mesa drivers not being fully compatible with OpenGL 4.0 until later that year. There is still the possibility of Sandy Bridge support, since it supports compute through Direct3D 11.
On August 10, 2015, Google announced that future versions of Android would support Vulkan. Android 7.x "Nougat" launched support for Vulkan on August 22, 2016. Android 8.0 "Oreo" has full support.
On December 18, 2015, the Khronos Group announced that the 1.0 version of the Vulkan specification was nearly complete and would be released when conforming drivers were available.
2016
The full Vulkan specification and the open-source Vulkan SDK were released on February 16, 2016.
2018
On February 26, 2018, Khronos Group announced that the Vulkan API had become available to all on macOS and iOS through the MoltenVK library, which enables Vulkan to run on top of Metal. Other new developments were shown at SIGGRAPH 2018. Previously, MoltenVK was a proprietary and commercially licensed solution, but Valve made an arrangement with developer Brenwill Workshop Ltd to open-source MoltenVK under the Apache 2.0 license, and as a result the library is now available on GitHub. Valve also announced that, as of February 26, 2018, Dota 2 can run on macOS using the Vulkan API via MoltenVK.
2019
On February 25, 2019, the Vulkan Safety Critical (SC) Working Group was announced to bring Vulkan GPU acceleration to safety critical industries.
Google's Stadia streaming cloud gaming service used Vulkan on Linux based servers with AMD GPUs.
2020
On January 15, 2020, Vulkan 1.2 was released.
Alongside the Vulkan 1.2 release, the Khronos Group posted a blog post which considered that HLSL support in Vulkan had reached "production ready" status, given the improvements in Microsoft's DXC compiler and Khronos's glslang compiler, and new features in Vulkan 1.2 which enhance HLSL support.
On February 3, 2020, the Raspberry Pi Foundation announced that it was working on an open-source Vulkan driver for its Raspberry Pi, a popular single-board computer. On June 20, 2020, a graphics engineer revealed that, after two years of work, he had created a driver capable of running VkQuake3 at over 100 FPS on the small computer.
On March 17, 2020, Khronos Group released the Ray Tracing extensions, based on Nvidia's proprietary extension with some major additions and many minor changes; Nvidia's extension was in turn based on its OptiX API. On November 23, 2020, these Ray Tracing extensions were finalized.
On November 24, 2020, Raspberry Pi Foundation announced that their driver for the Raspberry Pi 4 is Vulkan 1.0 conformant.
2022
On January 25, 2022, Vulkan 1.3 was released.
On March 1, 2022, Vulkan SC 1.0 was released, bringing Vulkan graphics and compute for the safety-critical industry while being based on the Vulkan 1.2 standard.
On August 1, 2022, Raspberry Pi Foundation announced that their driver for the Raspberry Pi 4 is Vulkan 1.2 conformant.
On September 1, 2022, Mesh Shading for Vulkan was released.
2024
A new Vulkan Roadmap was announced on January 25, 2024. A new extension for decoding AV1 video was released on February 1, 2024.
Support across vendors
Initial specifications stated that Vulkan drivers can be implemented on any hardware that supports OpenGL ES 3.1 or OpenGL 4.x and up. As Vulkan support requires new graphics drivers, this does not necessarily imply that every existing device that supports OpenGL ES 3.1 or OpenGL 4.x will have Vulkan drivers available.
Intel
As of March 2023, Intel has split Vulkan driver support on Windows and on Linux. All drivers are developed by Intel.
On Windows, Skylake to Ice Lake supports up to Vulkan 1.3, with limited support after July 2022 as future updates will only cover security fixes. Iris Xe and newer are fully supported as of March 2023.
On Linux, as of March 2023, Vulkan support for Haswell is incomplete, as it is not Vulkan 1.0 compliant. Haswell, along with Ivy Bridge and Broadwell, is supported by a legacy Vulkan driver in Mesa called HASVK, while Skylake and newer are supported by a Mesa driver called ANV.
AMD
On Windows, Vulkan 1.2 is supported from GCN 1.0 to GCN 3.0, with no further updates planned after June 2021. GCN 4.0 and newer support Vulkan 1.3.
On Linux there are various different Vulkan drivers with varying and overlapping hardware support. There is the open-source Vulkan driver called AMDVLK, developed by AMD which mirrors Windows support. There is also the proprietary driver called AMDGPU-PRO which is not recommended to be used for most users as of March 2023.
There is also the recommended driver called RADV in Mesa developed by Valve, Red Hat, Google and others. This driver as of March 2023 supports all GCN and RDNA cards. This RADV driver's support for GCN 1.0 through GCN 2.0 requires its experimental support in the amdgpu kernel module to be enabled.
NVIDIA
On Windows and Linux there is the NVIDIA developed Vulkan driver which supports Vulkan 1.2 on Kepler cards with no further updates planned after September 2021. Maxwell and newer support Vulkan 1.3.
NVK, an experimental, open-source Vulkan driver for Linux based on nouveau, was announced in October 2022. It was merged into mainline Mesa in August 2023. The driver currently supports Vulkan 1.3.
Android and mobile GPUs
Most modern Android devices support Vulkan. Android 7.0 Nougat includes optional Vulkan 1.0 support, Android 9.0 Pie includes optional Vulkan 1.1 support, and Android 10 expects (but does not require) that all non-low memory 64-bit devices support Vulkan 1.1. Android 13 expects under the same conditions support of Vulkan 1.3. On Linux and some ChromeOS devices, the open-source Mesa driver provides support for Arm Mali (Midgard and Bifrost), Qualcomm Adreno, and Broadcom VideoCore VI hardware.
Apple
As of June 2022, Apple devices do not provide native support for the Vulkan API. Vulkan support is available via the open-source library MoltenVK, which provides a Vulkan implementation on top of the Metal graphics API provided on iOS and macOS devices, though it has some limitations in regards to certain advanced API features.
In June 2022, version 1.3.217 of Vulkan added support for Metal objects, facilitating import and export between the two APIs. In December 2022, Vulkan version 1.3.236 added small fixes for the interaction with Apple Metal.
Huawei and OpenAtom Foundation
As of August 2023, Huawei provides support for a native Vulkan NAPI with the industry-standard SPIR-V shader format, available since HarmonyOS 4.0 (API 10) and extended into the HarmonyOS NEXT system. It has also been adopted as an extension in the OpenAtom Foundation's open-source project OpenHarmony, together with a newer graphics stack for the system, the ArkGraphics 3D software engine, which supports custom graphics pipeline features. ArkGraphics 3D, previously exclusive to the proprietary HarmonyOS NEXT developer kit, was open-sourced in May 2024 with OpenHarmony 5.0 beta 1.
Backwards compatibility
Vulkan is not backwards compatible with OpenGL, although there are certain projects that implement OpenGL on top of Vulkan, such as Google's ANGLE and Mesa's Zink.
Vulkan is also not compatible with other graphics APIs such as Direct3D, Metal, and Mantle, however implementations of those APIs exist atop of Vulkan:
Direct3D has a number of implementations, namely DXVK for Direct3D 8, 9, 10, and 11, and VKD3D-Proton for Direct3D 12 support. Other, older versions of Direct3D may be enabled with other related software such as Wine.
Metal has an in-development third-party implementation named Indium, intended to be used with the Darling compatibility layer.
Mantle has an in-development third-party implementation named GRVK, to support older Mantle games.
Platform-specific graphics APIs implemented atop of Vulkan may also be able to run on alternative platforms. For example, DXVK provides an alternative shared library intended to be used on Linux natively (without the Wine compatibility layer) to help with game porting.
| Technology | Software development: General | null |
46905624 | https://en.wikipedia.org/wiki/Climate%20change%20in%20Antarctica | Climate change in Antarctica | Climate change caused by greenhouse gas emissions from human activities occurs everywhere on Earth, and while Antarctica is less vulnerable to it than any other continent, climate change in Antarctica has been observed. There has been an average temperature increase of >0.05 °C/decade across the continent since 1957, although it has been uneven. West Antarctica warmed by over 0.1 °C/decade from the 1950s to the 2000s, and the exposed Antarctic Peninsula has warmed by since the mid-20th century. The colder, more stable East Antarctica experienced cooling until the 2000s. Around Antarctica, the Southern Ocean has absorbed more oceanic heat than any other ocean, and has seen strong warming at depths below . Around the West Antarctic, the ocean has warmed by since 1955.
The warming of the Southern Ocean around Antarctica has caused the weakening or collapse of ice shelves, which float just offshore of glaciers and stabilize them. Many coastal glaciers have been losing mass and retreating, causing net-annual ice loss across Antarctica, although the East Antarctic ice sheet continues to gain ice inland. By 2100, net ice loss from Antarctica is expected to add about to global sea level rise. Marine ice sheet instability may cause West Antarctica to contribute tens of centimeters more if it is triggered before 2100. With higher warming, instability would be much more likely, and could double global, 21st-century sea-level rise.
The fresh, 1100–1500 billion tons (GT) per year of meltwater from the ice dilutes the saline Antarctic bottom water, weakening the lower cell of the Southern Ocean overturning circulation (SOOC). According to some research, a full collapse of the SOOC may occur at between and of global warming, although the full effects are expected to occur over multiple centuries; these include less precipitation in the Southern Hemisphere but more in the Northern Hemisphere, an eventual decline of fisheries in the Southern Ocean and a potential collapse of certain marine ecosystems. While many Antarctic species remain undiscovered, there are documented increases in Antarctic flora, and large fauna such as penguins are already having difficulty retaining suitable habitat. On ice-free land, permafrost thaws release greenhouse gases and formerly frozen pollution.
The West Antarctic ice sheet is likely to completely melt unless temperatures are reduced by below 2020 levels. The loss of this ice sheet would take between 2,000 and 13,000 years, although several centuries of high greenhouse emissions could shorten this time to 500 years. A sea-level rise of would occur if the ice sheet collapses, leaving ice caps on the mountains, and if those ice caps also melt. Isostatic rebound may contribute an additional to global sea levels over another 1,000 years. The far-stabler East Antarctic ice sheet may only cause a sea-level rise of – from the current level of warming, a small fraction of the contained in the full ice sheet. With global warming of around , vulnerable areas like Wilkes Basin and Aurora Basin may collapse over around 2,000 years, potentially adding up to to sea levels. The complete melting and disappearance of the East Antarctic ice sheet would require at least 10,000 years and would only occur if global warming reaches to .
Temperature and weather changes
Antarctica is the coldest, driest continent on Earth, and has the highest average elevation. Antarctica's dryness means the air contains little water vapor and conducts heat poorly. The Southern Ocean surrounding the continent is far more effective at absorbing heat than any other ocean. The presence of extensive, year-around sea ice, which has a high albedo (reflectivity), adds to the albedo of the ice sheets' own bright, white surface. Antarctica's coldness means it is the only place on Earth where an atmospheric temperature inversion occurs every winter; elsewhere on Earth, the atmosphere is at its warmest near the surface and becomes cooler as elevation increases. During the Antarctic winter, the surface of central Antarctica becomes cooler than middle layers of the atmosphere; this means greenhouse gases trap heat in the middle atmosphere, and reduce its flow toward the surface and toward space, rather than preventing the flow of heat from the lower atmosphere to the upper layers. This effect lasts until the end of the Antarctic winter. Early climate models predicted temperature trends over Antarctica would emerge more slowly and be more subtle than those elsewhere.
There were fewer than twenty permanent weather stations across the continent and only two in the continent's interior. Automatic weather stations were deployed relatively late, and their observational record was brief for much of the 20th century; satellite temperature measurements began in 1981 and are typically limited to cloud-free conditions. Thus, datasets representing the entire continent only began to appear by the very end of the 20th century. The exception was the Antarctic Peninsula, where warming was pronounced and well-documented; it was eventually found to have warmed by since the mid-20th century. Based on this limited data, several papers published in the early 2000s said there had been an overall cooling over continental Antarctica outside the Peninsula.
A 2002 analysis led by Peter Doran received widespread media coverage after it also indicated stronger cooling than warming between 1966 and 2000, and found that the McMurdo Dry Valleys in East Antarctica had experienced cooling of 0.7 °C per decade, a local trend that was confirmed by subsequent research at McMurdo. Multiple journalists said these findings were "contradictory" to global warming, even though the paper noted the limited data and found warming over 42% of the continent. What became known as the Antarctic Cooling Controversy received further attention in 2004 when Michael Crichton wrote the novel State of Fear. The novel featured a fictional conspiracy among climate scientists to fake evidence of global warming, and cited Doran's study as proof that there was no warming in Antarctica outside of the Peninsula. Relatively few scientists responded to the book at the time, but it was mentioned in a 2006 US Senate hearing in support of climate change denial. Peter Doran published a statement in The New York Times decrying the misinterpretation of his work. The British Antarctic Survey and NASA also issued statements affirming the strength of climate science after the hearing.
By 2009, researchers were able to combine historical weather-station data with satellite measurements to create consistent temperature records going back to 1957; these demonstrated warming of >0.05 °C/decade since 1957 across the continent, with cooling in East Antarctica offset by an average temperature increase of at least 0.176 ± 0.06 °C per decade in West Antarctica. Subsequent research confirmed clear warming over West Antarctica in the 20th century, with the only uncertainty being its magnitude. During 2012–2013, estimates based on WAIS Divide ice cores and revised temperature records from Byrd Station suggested a much larger West Antarctic warming of since 1958, or around per decade, although there has been uncertainty about it. In 2022, a study narrowed the warming of the central area of the West Antarctic Ice Sheet between 1959 and 2000 to per decade, and conclusively attributed it to increases in greenhouse gas concentrations caused by human activity.
Between 2000 and 2020, local changes in atmospheric circulation patterns like the Interdecadal Pacific Oscillation (IPO) and the Southern Annular Mode (SAM) slowed or partially reversed the warming of West Antarctica, with the Antarctic Peninsula experiencing cooling from 2002.
While a variability in those patterns is natural, ozone depletion had also led the SAM to be stronger than it had been in the past 600 years of observations. Studies predicted a reversal in the SAM once the ozone layer began to recover following the Montreal Protocol, starting from 2002, and these changes are consistent with their predictions. As these patterns reversed, the East Antarctica interior demonstrated clear warming over those two decades. In particular, the South Pole warmed by 0.61 ± 0.34 °C per decade between 1990 and 2020, which is three times the global average. The Antarctica-wide warming trend continued after 2000, and in February 2020, the continent recorded its highest temperature of 18.3 °C, which is one degree higher than the previous record of 17.5 °C in March 2015.
Models predict that, under the most intense climate change scenario, known as RCP8.5, Antarctic temperatures will rise by on average by 2100; this rise will be accompanied by a 30% increase in precipitation and a 30% decrease in sea ice. RCPs were developed in the late 2000s, and early-2020s research considers RCP8.5 much less likely than more moderate scenarios like RCP4.5, which lies between the worst-case scenario and the Paris Agreement goals.
Effects on ocean currents
Between 1971 and 2018, over 90% of the thermal energy from global heating entered the oceans. The Southern Ocean absorbs the most heat; after 2005, it accounted for between 67% and 98% of all heat entering the oceans. The temperature in the ocean's upper layer in West Antarctica has warmed by since 1955, and the Antarctic Circumpolar Current (ACC) is warming faster than the average. The Southern Ocean is also a highly important carbon sink. These properties are connected to the Southern Ocean overturning circulation (SOOC), one half of the global thermohaline circulation. Notably, estimates of when global warming will reach a given threshold – inevitable in all scenarios where greenhouse gas emissions have not been significantly lowered – depend on the strength of this circulation more than on any factor other than the overall emissions.
The overturning circulation has two parts; the smaller upper cell, which is most-strongly affected by winds and precipitation, and the larger lower cell that is defined by the temperature and salinity of Antarctic bottom water. Since the 1970s, the upper cell has strengthened by 50–60% while the lower cell has weakened by 10–20%. Some of this was due to the natural cycle of Interdecadal Pacific Oscillation (IPO) but there is a clear effect of climate change, because it alters winds and precipitation through shifts in the Southern Annular Mode (SAM) pattern. Fresh meltwater from the erosion of the West Antarctic ice sheet dilutes the more-saline Antarctic bottom water, which flows at a rate of 1100–1500 billion tons (GT) per year. During the 2010s, a temporary reduction in ice-shelf melting in West Antarctica allowed for the partial recovery of Antarctic bottom water and the lower cell of the circulation. Greater melting and further decline of the circulation is expected in the future.
As bottom water weakens while the flow of warmer, fresher waters strengthens near the surface, the surface waters become more buoyant and less likely to sink and mix with the lower layers, increasing ocean stratification. One study says the strength of the circulation would halve by 2050 under the worst climate-change scenario, with greater losses occurring afterwards. Paleoclimate evidence shows the entire circulation has significantly weakened or completely collapsed in the past; preliminary research says such a collapse may become likely once global warming reaches between and , but this estimate is much less certain than those for the majority of tipping points in the climate system. Such a collapse would be prolonged; one estimate says it would occur before 2300. As with the better-studied Atlantic meridional overturning circulation (AMOC), a major slowing or collapse of the SOOC would have substantial regional and global effects. Some likely effects include a decline in precipitation in Southern Hemisphere countries like Australia, a corresponding increase in precipitation in the Northern Hemisphere, and an eventual decline of fisheries in the Southern Ocean, which could lead to a potential collapse of some marine ecosystems. These effects are expected to occur over centuries, but there has been limited research to date and few specifics are currently known.
Effects on the cryosphere
Observed changes in ice mass
Contrasting temperature trends across parts of Antarctica mean that some locations, particularly at the coasts, lose mass while locations further inland continue to gain mass. These contrasting trends and the remoteness of the region make estimating an average trend difficult. In 2018, a systematic review of all previous studies and data by the Ice Sheet Mass Balance Inter-comparison Exercise (IMBIE) estimated that annual ice loss from the West Antarctic ice sheet increased from 53 ± 29 Gt (gigatonnes) in 1992 to 159 ± 26 Gt in the final five years of the study. On the Antarctic Peninsula, the study estimated a loss of 20 ± 15 Gt per year, with an increase in loss of roughly 15 Gt per year after 2000, a significant quantity of which was due to the loss of ice shelves. The review's overall estimate was that Antarctica lost 2,720 ± 1,390 gigatonnes of ice from 1992 to 2017, averaging 109 ± 56 Gt per year. This would amount to of sea-level rise. A 2021 analysis of data from four research satellite systems – Envisat, European Remote-Sensing Satellite, GRACE and GRACE-FO, and ICESat – indicated an annual mass loss of about 12 Gt from 2012 to 2016, due to much greater ice gain in East Antarctica than earlier estimated, which offset most of the losses from West Antarctica. The East Antarctic ice sheet can still gain mass despite warming because effects of climate change on the water cycle increase precipitation over its surface, which then freezes and helps accrete more ice.
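As a rough consistency check (simple arithmetic using the commonly cited conversion of roughly 360 Gt of land ice per 1 mm of global sea-level rise, not a figure taken from the study itself):

    2{,}720\ \text{Gt} \div 360\ \text{Gt/mm} \approx 7.6\ \text{mm of sea-level equivalent over 1992–2017.}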
Black carbon pollution
Black carbon from incomplete fuel combustion is carried long distances by wind. When it reaches Antarctica, black carbon accumulates on snow and ice, reducing their reflectivity and causing them to absorb more energy. This accelerates melting and can create an ice-albedo feedback loop in which meltwater itself absorbs more heat from sunlight. Due to its remoteness, Antarctica has the cleanest snow in the world, and some research says the effects of black carbon across West and East Antarctica are minimal, with an albedo reduction of about 0.5% in one 47-year ice core.
The highest concentrations of black carbon are found on the Antarctic Peninsula, where human activity is higher than elsewhere. Black carbon deposits near common tourist sites and research stations increase summer seasonal melting by between about of snow per m².
21st-century ice loss and sea-level rise
By 2100, net ice loss from Antarctica is expected to add about to global sea-level rise. Other processes may cause West Antarctica to contribute more to sea-level rise. Marine ice-sheet instability is the potential for warm water currents to enter between the seafloor and the base of the ice sheet once the sheet is no longer heavy enough to displace such flows. Marine ice-cliff instability may cause ice cliffs taller than to collapse under their own weight once they are no longer buttressed by ice shelves. This process has never been observed and it only occurs in some models. By 2100, these processes may increase sea-level rise caused by Antarctica to under the low-emission scenario and by under the high-emission scenario.
Some scientists have given greater estimates but all agree melting in Antarctica would have a greater impact and would be much more likely to occur under higher warming scenarios, where it may double the overall 21st-century sea-level rise to or more. According to one study, if the Paris Agreement is followed and global warming is limited to , the loss of ice in Antarctica will continue at the 2020 rate for the rest of the 21st century, but if a trajectory leading to is followed, Antarctica ice loss will accelerate after 2060 and start adding per year to global sea levels by 2100.
Long-term sea level rise
Sea levels will continue to rise long after 2100 but potentially at very different rates. According to the most-recent reports of the Intergovernmental Panel on Climate Change (SROCC and the IPCC Sixth Assessment Report), there will be a median rise of and maximum rise of under the low-emission scenario. The highest-emission scenario results in a median rise of with a minimum of and a maximum of .
Over longer timescales, the West Antarctic ice sheet, which is much smaller than the East Antarctic ice sheet and is grounded deep below sea level, is considered highly vulnerable. The melting of all of the ice in West Antarctica would increase global sea-level rise to . Mountain ice caps that are not in contact with water are less vulnerable than the majority of the ice sheet, which is located below sea level. The collapse of the West Antarctic ice sheet would cause around of sea-level rise. This kind of collapse is now considered almost inevitable because it appears to have occurred during the Eemian period 125,000 years ago, when temperatures were similar to those in the early 21st century. The Amundsen Sea also appears to be warming at rates that, if continued, make the ice sheet's collapse inevitable.
The only way to reverse ice loss from West Antarctica once triggered would be to lower the global temperature to below the pre-industrial level, to below the temperature of 2020. Other researchers have suggested that a climate-engineering intervention to stabilize the ice sheet's glaciers may delay its loss by centuries and give the environment more time to adapt, though this is an uncertain proposal and would be one of the most expensive projects ever attempted. Otherwise, the disappearance of the West Antarctic ice sheet would take an estimated 2,000 years, with a minimum of about 500 years and a maximum of roughly 13,000 years. Once the ice sheet is lost, the isostatic rebound of the land it previously covered would result in an additional of sea-level rise over the following 1,000 years.
The East Antarctic ice sheet is far more stable than the West Antarctic ice sheet. The loss of the entire East Antarctic ice sheet would require global warming of between and , and a minimum of 10,000 years. Some of its parts, such as Totten Glacier and Wilkes Basin, are in vulnerable subglacial basins that lie below sea level. Estimates suggest the irreversible loss of those basins would begin once global warming reaches , although this loss may become irreversible at warming of between and . After global warming reaches the critical threshold for the collapse of these subglacial basins, their loss will likely occur over around 2,000 years, although the loss may be as fast as 500 years or as slow as 10,000 years.
The loss of all of this ice would add between and to sea levels, depending on the ice sheet model used. Isostatic rebound of the newly ice-free land would add between and . Evidence from the Pleistocene shows partial loss can occur at lower warming levels; Wilkes Basin is estimated to have lost enough ice to add to sea levels between 115,000 and 129,000 years ago during the Eemian, and about between 318,000 and 339,000 years ago during Marine Isotope Stage 9.
Permafrost thaw
Antarctica has much less permafrost than the Arctic, but Antarctic permafrost is also subject to thaw. The permafrost in Antarctica traps various compounds, including persistent organic pollutants (POPs) such as polycyclic aromatic hydrocarbons, many of which are known carcinogens or can cause liver damage; polychlorinated biphenyls; and organochlorine pesticides such as hexachlorobenzene (HCB) and DDT, which are associated with decreased reproductive success and immunohematological disorders. Antarctic soils also contain heavy metals, including mercury, lead and cadmium, all of which can cause endocrine disruption, DNA damage, immunotoxicity and reproductive toxicity. These compounds are released when contaminated permafrost thaws; this can change the chemistry of surface water, and bioaccumulation and biomagnification then spread the compounds throughout the food web. Permafrost thaw also results in greenhouse gas emissions, though the limited volume of Antarctic permafrost relative to Arctic permafrost means it is not considered a significant cause of climate change.
Ecological effects
Marine ecosystems
Nearly all Antarctic species are marine; by 2015, 8,354 species had been discovered in Antarctica and taxonomically accepted, and of these only 57 were not marine. Antarctica may have up to 17,000 species; while 90% of the ocean around Antarctica is deeper than , only 30% of the benthic-sample locations were taken at that depth. On the Antarctic continental shelves, benthic-zone biomass may increase due to oceanic warming, which is likely to be of most benefit to seaweed. Around 12% of the native benthic species may be outcompeted and go extinct. These estimates are preliminary; the vulnerabilities of most Antarctic species have yet to be assessed.
Unlike the Arctic, there has been little change in marine primary production across the Southern Ocean in the available observations. Estimates say an increase in Southern Ocean primary production could occur after 2100; this increase would block many nutrients from travelling to other oceans, leading to decreased production elsewhere. Some microbial communities appear to have been negatively affected by ocean acidification and there is a risk future acidification would threaten the eggs of pteropods, a type of zooplankton.
Antarctic krill are a key species in the Antarctic food web; they feed on phytoplankton and are the main food for fish and penguins. Krill are likely to abandon the fastest-warming areas, such as the Weddell Sea, while icefish may find shelf waters around Antarctic islands unsuitable. Shifts or declines in krill and copepod numbers are known to prevent the recovery of baleen whale populations following the declines caused by historical whaling. Without a reversal in temperature increases, baleen whales are likely to be forced to adapt their migratory patterns or face local extinction. Many other marine species are expected to move into Antarctic waters as the oceans continue to warm, forcing native species to compete with them. Some research suggests that at of warming, the diversity of Antarctic species would decline by nearly 17% and the suitable climate area would shrink by 50%.
Penguins
Penguins occupy a high trophic level in the Antarctic food web and are already being substantially affected by climate change. Numbers of Adélie penguins, chinstrap penguins, emperor penguins and king penguins have already been declining, while the number of gentoo penguins has increased. Gentoo penguins, which are ice-intolerant and use mosses as nesting material, have been able to spread into previously inaccessible territories and substantially increase in number. The vulnerable penguin species can respond through acclimatization, adaptation, or range shift. Range shift through dispersal leads to colonization elsewhere but results in local extinction at the original site.
Climate change is particularly threatening to penguins. As early as 2008, it was estimated that every Southern Ocean temperature increase of reduces king penguin populations by nine percent. Under the worst-case warming scenario, king penguins will permanently lose at least two of their current eight breeding sites, and 70% of the species (1.1 million pairs) will have to relocate to avoid extinction. Emperor penguin populations may be at similar risk; with no climate mitigation, 80% of populations are at risk of extinction by 2100. With Paris-Agreement temperature goals in place, that number may fall to 31% under the goal, and to 19% under the goal.
A 27-year study of the largest colony of Magellanic penguins, published in 2014, found that extreme weather caused by climate change kills seven percent of penguin chicks in an average year, accounting for up to 50% of all chick deaths in some years. Since 1987, the number of breeding pairs in the colony has fallen by 24%. Chinstrap penguins are also in decline, mainly due to a corresponding decline of Antarctic krill. It is estimated that while Adélie penguins will retain some habitat past 2099, one-third of colonies along the West Antarctic Peninsula – around 20% of the species – will be in decline by 2060.
Terrestrial ecosystems
On the Antarctic continent, plants are mainly found in coastal areas; the commonest plants are lichens, followed by mosses and ice algae. In the Antarctic Peninsula, green snow algae have a combined biomass of around . As glaciers retreat, they expose areas that often become colonized by pioneer lichen species. The reduction in precipitation in East Antarctica has turned many mosses from green to red or brown as they respond to drought. Schistidium antarctici has declined, while the desiccation-tolerant species Bryum pseudotriquetrum and Ceratodon purpureus have increased. The Antarctic ozone hole has led to an increase in UV-B radiation, which has caused observable damage to plant cells and photosynthesis.
The only vascular plants on continental Antarctica are Deschampsia antarctica and Colobanthus quitensis, which are found on the Antarctic Peninsula. Increased temperatures have boosted photosynthesis and allowed these species to increase their population and range. Other plant species are increasingly likely to spread to Antarctica as the climate continues to warm and as human activity on the continent increases.
Effects of human development
Tourism in Antarctica has increased significantly in recent decades; 74,400 tourists visited during the 2019–2020 season. The development of Antarctica for the purposes of industry and tourism, and an increase in research facilities, may put pressure on the continent and threaten its status as largely untouched land. Regulated tourism in Antarctica raises awareness and encourages the investment and public support needed to preserve Antarctica's distinctive environment. An unmitigated loss of ice on land and sea could greatly reduce the continent's attractiveness to visitors.
Policy can be used to increase climate-change resilience through the protection of ecosystems. Ships that operate in Antarctic waters adhere to the international Polar Code, which includes regulations and safety measures such as operational training and assessments, the control of oil discharge, appropriate sewage disposal, and the prevention of pollution by toxic liquids. Antarctic Specially Protected Areas (ASPAs) and Antarctic Specially Managed Areas (ASMAs) are designated under the Antarctic Treaty to protect flora and fauna. Both ASPAs and ASMAs restrict entry, but to different extents, with ASPAs offering the highest level of protection. The designation of new ASPAs has decreased by 84% since the 1980s despite a rapid increase in tourism, which may bring additional stressors to the natural environment and ecosystems. To alleviate the stress posed to Antarctic ecosystems by climate change and the rapid increase in tourism, much of the scientific community advocates increasing protected areas such as ASPAs to improve Antarctica's resilience to rising temperatures.
| Physical sciences | Climate change | Earth science |
36672320 | https://en.wikipedia.org/wiki/G-Cloud | G-Cloud | The Galactic cloud, G cloud, G-Cloud or G-Cloud complex, is an interstellar cloud located next to the Local Interstellar Cloud, within the Local Bubble. It is unknown whether the Solar System is embedded in the Local Interstellar Cloud or in the region where the two clouds are interacting, although the Solar System is currently moving towards the G-Cloud. The G-Cloud contains the stars Alpha Centauri (a triple star system that includes Proxima Centauri) and Altair (and possibly others).
Estimates of the n(H I) particle density in the direction of Alpha Centauri were made in 2011 by Crawford, as 0.1 cm−3, and in 2014 by Gry, as 0.098 cm−3.
| Physical sciences | Notable patches of universe | Astronomy |
48593539 | https://en.wikipedia.org/wiki/Seasonal%20tropical%20forest | Seasonal tropical forest | Seasonal tropical forest, also known as moist deciduous, semi-evergreen seasonal, tropical mixed or monsoon forest, typically contains a range of tree species, only some of which drop some or all of their leaves during the dry season. This tropical forest is classified under the Walter system as (i) tropical climate with high overall rainfall (typically in the 1000–2500 mm range; 39–98 inches) and (ii) having a very distinct wet season and an (often cooler, "winter") dry season. These forests represent a range of habitats influenced by monsoon (Am) or tropical wet savanna (Aw/As) climates (as in the Köppen climate classification). Drier forests in the Aw/As climate zone are typically deciduous and placed in the tropical dry forest biome, with further transitional zones (ecotones) of savanna woodland, then tropical and subtropical grasslands, savannas, and shrublands.
Distribution
Seasonal (mixed) tropical forests can be found in many parts of the tropical zone, with examples found in:
In the Asia-Pacific region: seasonal forests predominate across large areas of Eastern Java, Wallacea, the Indian subcontinent and Indochina
Eastern Java monsoon forests
Wallacea Forest
Brahmaputra Valley semi-evergreen forests
Mondulkiri Province, Cambodia
Cat Tien National Park, Vietnam
Khao Yai National Park and Huai Kha Khaeng Wildlife Sanctuary, Thailand
Northern Australia: Cape York Peninsula (Queensland), Arnhem Land (Northern Territory), the Kimberley (Western Australia)
In the Americas
Atlantic forests of Brazil
Central and eastern Panama: with Barro Colorado Island especially well studied
In Africa
Coastal West Africa: Guinean seasonal forest: from south-western Gambia to eastern Ghana
Climate
The climate of seasonal forests is typically controlled by a system called the Intertropical Convergence Zone (ITCZ), located near the equator and created by the convergence of the trade winds from the Northern and Southern Hemispheres. The position of this band varies seasonally, moving north in the northern summer and south in the northern winter, ultimately controlling the wet and dry seasons in the tropics.
These regions appear to have experienced strong warming, at a mean rate of 0.26 degrees Celsius per decade, which coincides with a global rise in temperature resulting from anthropogenic inputs of greenhouse gases into the atmosphere. Studies have also found that precipitation has declined and that tropical Asia has experienced an increase in dry-season intensity, whereas Amazonia shows no significant change in precipitation or dry season. Additionally, El Niño–Southern Oscillation (ENSO) events drive inter-annual climatic variability in temperature and precipitation, resulting in drought and increased dry-season intensity. As anthropogenic warming increases, the intensity and frequency of ENSO events will increase, rendering tropical rainforest regions susceptible to stress and increased mortality of trees and other plants.
Structure
As with tropical rainforests, there are different canopy layers, but these may be less pronounced in mixed forests, which are often characterised by numerous lianas due to their growth advantage during the dry season. The colloquial term jungle, derived from the Sanskrit word for "forest", has no specific ecological meaning but originally referred to this type of primary and especially secondary forest in the Indian subcontinent. Determining which stands of mixed forest are primary and which secondary can also be problematic, since the species mixture is influenced by factors such as soil depth and climate, as well as human interference.
Characteristic biology
The fauna and flora of seasonal tropical mixed forest are usually distinctive. Examples of the biodiversity and habitat type are often well described for National Parks in:
Africa represented by:
the northern part of Korup National Park in Cameroon (central region)
the Upper Guinean forests (West Africa)
Asia, represented by Cat Tien National Park and Huai Kha Khaeng (in the Indochina region)
Pacific region: including the Queensland forest reserves
Central American wildlife is well represented in:
Costa Rica e.g. Corcovado National Park
the Soberanía National Park in Panama.
South American flora listed and represented in Rio Doce State Park
| Physical sciences | Forests | Earth science |
36674345 | https://en.wikipedia.org/wiki/Information%20technology | Information technology | Information technology (IT) is a set of related fields that encompass computer systems, software, programming languages, data and information processing, and storage. IT forms part of information and communications technology (ICT). An information technology system (IT system) is generally an information system, a communications system, or, more specifically speaking, a computer system — including all hardware, software, and peripheral equipment — operated by a limited group of IT users, and an IT project usually refers to the commissioning and implementation of an IT system. IT systems play a vital role in facilitating efficient data management, enhancing communication networks, and supporting organizational processes across various industries. Successful IT projects require meticulous planning and ongoing maintenance to ensure optimal functionality and alignment with organizational objectives.
Although humans have been storing, retrieving, manipulating, analysing and communicating information since the earliest writing systems were developed, the term information technology in its modern sense first appeared in a 1958 article published in the Harvard Business Review; authors Harold J. Leavitt and Thomas L. Whisler commented that "the new technology does not yet have a single established name. We shall call it information technology (IT)." Their definition consists of three categories: techniques for processing, the application of statistical and mathematical methods to decision-making, and the simulation of higher-order thinking through computer programs.
The term is commonly used as a synonym for computers and computer networks, but it also encompasses other information distribution technologies such as television and telephones. Several products or services within an economy are associated with information technology, including computer hardware, software, electronics, semiconductors, internet, telecom equipment, and e-commerce.
Based on the storage and processing technologies employed, it is possible to distinguish four distinct phases of IT development: pre-mechanical (3000 BC – 1450 AD), mechanical (1450 – 1840), electromechanical (1840 – 1940), and electronic (1940 to present).
Information technology is a branch of computer science, defined as the study of procedures, structures, and the processing of various types of data. As this field continues to evolve globally, its priority and importance have grown, leading to the introduction of computer science-related courses in K-12 education.
History
Ideas of computer science were first discussed before the 1950s at the Massachusetts Institute of Technology (MIT) and Harvard University, where researchers discussed computer circuits and began thinking about numerical calculations. As time went on, the fields of information technology and computer science grew more complex and could handle the processing of more data. Scholarly articles began to be published by different organizations.
During the early days of computing, Alan Turing, J. Presper Eckert, and John Mauchly were considered some of the major pioneers of computer technology of the mid-1900s; most of their efforts were focused on designing the first digital computers. Alongside this work, topics such as artificial intelligence began to be raised, as Turing started to question the possibilities of the technology of the period.
Devices have been used to aid computation for thousands of years, probably initially in the form of a tally stick. The Antikythera mechanism, dating from about the beginning of the first century BC, is generally considered the earliest known mechanical analog computer, and the earliest known geared mechanism. Comparable geared devices did not emerge in Europe until the 16th century, and it was not until 1645 that the first mechanical calculator capable of performing the four basic arithmetical operations was developed.
Electronic computers, using either relays or valves, began to appear in the early 1940s. The electromechanical Zuse Z3, completed in 1941, was the world's first programmable computer, and by modern standards one of the first machines that could be considered a complete computing machine. During the Second World War, Colossus, the first electronic digital computer, was developed to decrypt German messages. Although it was programmable, it was not general-purpose, being designed to perform only a single task. It also lacked the ability to store its program in memory; programming was carried out using plugs and switches to alter the internal wiring. The first recognizably modern electronic digital stored-program computer was the Manchester Baby, which ran its first program on 21 June 1948.
The development of transistors in the late 1940s at Bell Laboratories allowed a new generation of computers to be designed with greatly reduced power consumption. The first commercially available stored-program computer, the Ferranti Mark I, contained 4050 valves and had a power consumption of 25 kilowatts. By comparison, the first transistorized computer developed at the University of Manchester and operational by November 1953, consumed only 150 watts in its final version.
Several other breakthroughs in semiconductor technology include the integrated circuit (IC) invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor in 1959, silicon dioxide surface passivation by Carl Frosch and Lincoln Derick in 1955, the first planar silicon dioxide transistors by Frosch and Derick in 1957, the MOSFET demonstration by a Bell Labs team, the planar process by Jean Hoerni in 1959, and the microprocessor invented by Ted Hoff, Federico Faggin, Masatoshi Shima, and Stanley Mazor at Intel in 1971. These important inventions led to the development of the personal computer (PC) in the 1970s, and the emergence of information and communications technology (ICT).
By 1984, according to the National Westminster Bank Quarterly Review, the term information technology had been redefined as "The development of cable television was made possible by the convergence of telecommunications and computing technology (…generally known in Britain as information technology)." The term then appeared in 1990 in documents of the International Organization for Standardization (ISO).
By the twenty-first century, innovations in technology had already revolutionized the world, as people gained access to a range of online services. This changed the workforce drastically: thirty percent of U.S. workers were already in careers in this field. 136.9 million people were personally connected to the Internet, equivalent to 51 million households. Along with the Internet, new types of technology were also being introduced across the globe, improving efficiency and making tasks easier.
As technology revolutionized society, millions of processes could be completed in seconds. Innovations in communication were also crucial, as people increasingly relied on the computer to communicate through telephone lines and cable. The introduction of email was considered revolutionary, as "companies in one part of the world could communicate by e-mail with suppliers and buyers in another part of the world..."
Beyond personal use, computers and technology have also revolutionized the marketing industry, resulting in more buyers of products. In 2002, Americans spent more than $28 billion on goods over the Internet alone, while e-commerce a decade later resulted in $289 billion in sales. As computers rapidly become more sophisticated, they are used more and more, with people becoming increasingly reliant on them during the twenty-first century.
Data processing
Storage
Early electronic computers such as Colossus made use of punched tape, a long strip of paper on which data was represented by a series of holes, a technology now obsolete. Electronic data storage, which is used in modern computers, dates from World War II, when a form of delay-line memory was developed to remove the clutter from radar signals; the first practical application of this was the mercury delay line. The first random-access digital storage device was the Williams tube, which was based on a standard cathode-ray tube. However, the information stored in it and in delay-line memory was volatile in that it had to be continuously refreshed, and so was lost once power was removed. The earliest form of non-volatile computer storage was the magnetic drum, invented in 1932 and used in the Ferranti Mark 1, the world's first commercially available general-purpose electronic computer.
IBM introduced the first hard disk drive in 1956, as a component of its 305 RAMAC computer system. Most digital data today is still stored magnetically on hard disks, or optically on media such as CD-ROMs. Until 2002 most information was stored on analog devices, but that year digital storage capacity exceeded analog for the first time. As of 2007, almost 94% of the data stored worldwide was held digitally: 52% on hard disks, 28% on optical devices, and 11% on digital magnetic tape. It has been estimated that the worldwide capacity to store information on electronic devices grew from less than 3 exabytes in 1986 to 295 exabytes in 2007, doubling roughly every 3 years.
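The quoted doubling time can be verified from the two endpoints (a short worked check): modelling capacity as \(C(t) = C_0\,2^{t/T}\) gives

\[
T = \frac{(2007 - 1986)\,\ln 2}{\ln(295/3)} \approx \frac{21 \times 0.693}{4.59} \approx 3.2\ \text{years},
\]

consistent with "doubling roughly every 3 years".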
Databases
Database Management Systems (DMS) emerged in the 1960s to address the problem of storing and retrieving large amounts of data accurately and quickly. An early such system was IBM's Information Management System (IMS), which is still widely deployed more than 50 years later. IMS stores data hierarchically, but in the 1970s Ted Codd proposed an alternative relational storage model based on set theory and predicate logic and the familiar concepts of tables, rows, and columns. In 1981, the first commercially available relational database management system (RDBMS) was released by Oracle.
All DMSs consist of components; they allow the data they store to be accessed simultaneously by many users while maintaining its integrity. All databases have one point in common: the structure of the data they contain is defined and stored separately from the data itself, in a database schema.
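As a minimal illustration of a schema being defined separately from the data stored under it, here is a sketch using Python's built-in sqlite3 module (the table and its contents are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database

# The schema: the structure is declared separately from the data itself.
conn.execute("CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, dept TEXT)")

# The data: rows stored under that schema.
conn.executemany("INSERT INTO employee (name, dept) VALUES (?, ?)",
                 [("Ada", "Research"), ("Grace", "Engineering")])

# The DBMS lets many users query concurrently while maintaining integrity;
# here we simply read the rows back.
for row in conn.execute("SELECT id, name, dept FROM employee"):
    print(row)
```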
In recent years, the extensible markup language (XML) has become a popular format for data representation. Although XML data can be stored in normal file systems, it is commonly held in relational databases to take advantage of their "robust implementation verified by years of both theoretical and practical effort." As an evolution of the Standard Generalized Markup Language (SGML), XML's text-based structure offers the advantage of being both machine- and human-readable.
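A short sketch of XML's machine- and human-readability, using Python's standard xml.etree.ElementTree module (the document content is invented for illustration):

```python
import xml.etree.ElementTree as ET

# Human-readable: the markup can be read directly as text.
doc = """<library>
  <book id="1"><title>On Classification</title></book>
  <book id="2"><title>Data at Rest</title></book>
</library>"""

# Machine-readable: the same text parses into a tree of elements.
root = ET.fromstring(doc)
for book in root.findall("book"):
    print(book.get("id"), book.findtext("title"))
```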
Transmission
Data transmission has three aspects: transmission, propagation, and reception. It can be broadly categorized as broadcasting, in which information is transmitted unidirectionally downstream, or telecommunications, with bidirectional upstream and downstream channels.
XML has been increasingly employed as a means of data interchange since the early 2000s, particularly for machine-oriented interactions such as those involved in web-oriented protocols such as SOAP, describing "data-in-transit rather than... data-at-rest".
Manipulation
Hilbert and Lopez identify the exponential pace of technological change (a kind of Moore's law): machines' application-specific capacity to compute information per capita roughly doubled every 14 months between 1986 and 2007; the per capita capacity of the world's general-purpose computers doubled every 18 months during the same two decades; the global telecommunication capacity per capita doubled every 34 months; the world's storage capacity per capita required roughly 40 months to double (every 3 years); and per capita broadcast information has doubled every 12.3 years.
Massive amounts of data are stored worldwide every day, but unless it can be analyzed and presented effectively it essentially resides in what have been called data tombs: "data archives that are seldom visited". To address that issue, the field of data mining — "the process of discovering interesting patterns and knowledge from large amounts of data" — emerged in the late 1980s.
Services
Email
Email is the technology, and the services it provides, for sending and receiving electronic messages (called "letters" or "electronic letters") over a distributed (including global) computer network. In its composition of elements and principle of operation, electronic mail practically replicates the system of regular (paper) mail, borrowing both terms (mail, letter, envelope, attachment, box, delivery, and others) and characteristic features: ease of use, message transmission delays, sufficient reliability and, at the same time, no guarantee of delivery. The advantages of e-mail include: addresses of the form user_name@domain_name (for example, somebody@example.com) that are easily perceived and remembered by a person; the ability to transfer both plain and formatted text, as well as arbitrary files; independence of servers (in the general case, they address each other directly); sufficiently high reliability of message delivery; and ease of use by humans and programs.
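A minimal sketch of the user_name@domain_name address form described above, in Python (the pattern is deliberately simplified for illustration; real address validation per RFC 5322 is considerably more permissive and complex):

```python
import re

# Simplified shape check: one "@" separating a local part from a dotted domain.
# This is illustrative only and rejects many addresses RFC 5322 allows.
ADDRESS = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

for candidate in ["somebody@example.com", "no-at-sign", "user@localhost"]:
    print(candidate, bool(ADDRESS.match(candidate)))
```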
Disadvantages of e-mail include: the phenomenon of spam (massive advertising and viral mailings); the theoretical impossibility of guaranteeing delivery of a particular letter; possible delays in message delivery (up to several days); and limits on the size of a single message and on the total size of messages in a mailbox (personal to each user).
Search system
A search system is a software and hardware complex with a web interface that provides the ability to search for information on the Internet. A search engine usually means a site that hosts the interface (front end) of the system. The software part of a search system is the search engine proper, a set of programs that provides the functionality of the system and is usually a trade secret of the developer company. Most search engines look for information on World Wide Web sites, but there are also systems that can look for files on FTP servers, items in online stores, and information in Usenet newsgroups. Improving search is one of the priorities of the modern Internet (see the Deep Web article about the main problems in the work of search engines).
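At the core of most search engines is an inverted index mapping terms to the documents that contain them. The following toy Python sketch (with invented documents) illustrates the idea; production engines add crawling, ranking, and much more:

```python
from collections import defaultdict

docs = {
    1: "deep web search problems",
    2: "search engines index the world wide web",
}

# Build the inverted index: term -> set of document ids.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return ids of documents containing every query term."""
    results = set(docs)
    for term in query.lower().split():
        results &= index.get(term, set())
    return sorted(results)

print(search("web search"))  # -> [1, 2]
```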
Commercial effects
Companies in the information technology field are often discussed as a group as the "tech sector" or the "tech industry." These titles can be misleading at times and should not be confused with "tech companies", which are generally large-scale, for-profit corporations that sell consumer technology and software. It is also worth noting that, from a business perspective, information technology departments are a "cost center" the majority of the time. A cost center is a department or staff which incurs expenses, or "costs", within a company rather than generating profits or revenue streams. Modern businesses rely heavily on technology for their day-to-day operations, so the expenses delegated to cover technology that facilitates business in a more efficient manner are usually seen as "just the cost of doing business." IT departments are allocated funds by senior leadership and must attempt to achieve the desired deliverables while staying within that budget. Government and the private sector might have different funding mechanisms, but the principles are more or less the same. This is an often-overlooked reason for the rapid interest in automation and artificial intelligence: the constant pressure to do more with less is opening the door for automation to take control of at least some minor operations in large companies.
Many companies now have IT departments for managing the computers, networks, and other technical areas of their businesses. Companies have also sought to integrate IT with business outcomes and decision-making through a BizOps or business operations department.
In a business context, the Information Technology Association of America has defined information technology as "the study, design, development, application, implementation, support, or management of computer-based information systems". The responsibilities of those working in the field include network administration, software development and installation, and the planning and management of an organization's technology life cycle, by which hardware and software are maintained, upgraded, and replaced.
Information services
Information services is a term somewhat loosely applied to a variety of IT-related services offered by commercial companies, as well as data brokers.
Ethics
The field of information ethics was established by mathematician Norbert Wiener in the 1940s. Some of the ethical issues associated with the use of information technology include:
Breaches of copyright by those downloading files stored without the permission of the copyright holders
Employers monitoring their employees' emails and other Internet usage
Unsolicited emails
Hackers accessing online databases
Web sites installing cookies or spyware to monitor a user's online activities, which may be used by data brokers
IT projects
Research suggests that IT projects in business and public administration can easily become significant in scale. Work conducted by McKinsey in collaboration with the University of Oxford suggested that half of all large-scale IT projects (those with initial cost estimates of $15 million or more) failed to keep costs within their initial budgets or to complete on time.
| Technology | Basics_3 | null |
36675611 | https://en.wikipedia.org/wiki/Taxonomy | Taxonomy | Taxonomy is a practice and science concerned with classification or categorization. Typically, there are two parts to it: the development of an underlying scheme of classes (a taxonomy) and the allocation of things to the classes (classification).
Originally, taxonomy referred only to the classification of organisms on the basis of shared characteristics. Today it also has a more general sense. It may refer to the classification of things or concepts, as well as to the principles underlying such work. Thus a taxonomy can be used to organize species, documents, videos or anything else.
A taxonomy organizes taxonomic units known as "taxa" (singular "taxon"). Many taxonomies are hierarchies.
One function of a taxonomy is to help users more easily find what they are searching for. This may be effected in ways that include a library classification system and a search engine taxonomy.
Etymology
The word was coined in 1813 by the Swiss botanist A. P. de Candolle and is irregularly compounded from the Greek , taxis 'order' and , nomos 'law', connected by the French form ; the regular form would be , as used in the Greek reborrowing .
Applications
Wikipedia categories form a taxonomy, which can be extracted by automatic means. It has been shown that a manually constructed taxonomy, such as that of computational lexicons like WordNet, can be used to improve and restructure the Wikipedia category taxonomy.
In a broader sense, taxonomy also applies to relationship schemes other than parent–child hierarchies, such as network structures. Taxonomies may then include a single child with multiple parents: for example, "Car" might appear with both parents "Vehicle" and "Steel Mechanisms"; to some, however, this merely means that "car" is a part of several different taxonomies. A taxonomy might also simply be an organization of kinds of things into groups, or an alphabetical list; here, however, the term vocabulary is more appropriate. In current usage within knowledge management, taxonomies are considered narrower than ontologies, since ontologies apply a larger variety of relation types.
Mathematically, a hierarchical taxonomy is a tree structure of classifications for a given set of objects. It is also named containment hierarchy. At the top of this structure is a single classification, the root node, that applies to all objects. Nodes below this root are more specific classifications that apply to subsets of the total set of classified objects. The progress of reasoning proceeds from the general to the more specific.
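A hierarchical taxonomy of this kind maps naturally onto a tree data structure. The following Python sketch (with invented categories) walks from the general root node down to a more specific class:

```python
# Each node holds a class name and its more specific subclasses.
class Taxon:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def path_to(self, target, trail=()):
        """Walk from the general root toward the specific target class."""
        trail = trail + (self.name,)
        if self.name == target:
            return trail
        for child in self.children:
            found = child.path_to(target, trail)
            if found:
                return found
        return None

root = Taxon("thing", [
    Taxon("vehicle", [Taxon("car"), Taxon("bicycle")]),
    Taxon("document", [Taxon("video"), Taxon("text")]),
])

print(root.path_to("car"))  # ('thing', 'vehicle', 'car')
```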
By contrast, in the context of legal terminology, an open-ended contextual taxonomy is employed—a taxonomy holding only with respect to a specific context. In scenarios taken from the legal domain, a formal account of the open-texture of legal terms is modeled, which suggests varying notions of the "core" and "penumbra" of the meanings of a concept. The progress of reasoning proceeds from the specific to the more general.
History
Anthropologists have observed that taxonomies are generally embedded in local cultural and social systems, and serve various social functions. Perhaps the most well-known and influential study of folk taxonomies is Émile Durkheim's The Elementary Forms of Religious Life. A more recent treatment of folk taxonomies (including the results of several decades of empirical research) and the discussion of their relation to the scientific taxonomy can be found in Scott Atran's Cognitive Foundations of Natural History. Folk taxonomies of organisms have been found in large part to agree with scientific classification, at least for the larger and more obvious species, which means that it is not the case that folk taxonomies are based purely on utilitarian characteristics.
In the seventeenth century, the German mathematician and philosopher Gottfried Leibniz, following the work of the thirteenth-century Majorcan philosopher Ramon Llull on his Ars generalis ultima, a system for procedurally generating concepts by combining a fixed set of ideas, sought to develop an alphabet of human thought. Leibniz intended his characteristica universalis to be an "algebra" capable of expressing all conceptual thought. The concept of creating such a "universal language" was frequently examined in the 17th century, also notably by the English philosopher John Wilkins in his work An Essay towards a Real Character and a Philosophical Language (1668), from which the classification scheme in Roget's Thesaurus ultimately derives.
Taxonomy in various disciplines
Natural sciences
Taxonomy in biology encompasses the description, identification, nomenclature, and classification of organisms. Uses of taxonomy include:
Alpha taxonomy, the description and basic classification of new species, subspecies, and other taxa
Linnaean taxonomy, the original classification scheme of Carl Linnaeus
rank-based scientific classification as opposed to clade-based classification
Evolutionary taxonomy, traditional post-Darwinian hierarchical biological classification
Numerical taxonomy, various taxonomic methods employing numeric algorithms
Phenetics, system for ordering species based on overall similarity
Phylogenetics, biological taxonomy based on putative ancestral descent of organisms
Plant taxonomy
Virus classification, taxonomic system for viruses
Folk taxonomy, description and organization, by individuals or groups, of their own environments
Nosology, classification of diseases
Soil classification, systematic categorization of soils
Business and economics
Uses of taxonomy in business and economics include:
Corporate taxonomy, the hierarchical classification of entities of interest to an enterprise, organization or administration
Economic taxonomy, a system of classification for economic activity
Global Industry Classification Standard, an industry taxonomy developed by MSCI and Standard & Poor's (S&P)
Industry Classification Benchmark, an industry classification taxonomy launched by Dow Jones and FTSE
International Standard Industrial Classification (ISIC), a United Nations system for classifying economic data
North American Industry Classification System (NAICS), used in Canada, Mexico, and the United States of America
Pavitt's Taxonomy, classification of firms by their principal sources of innovation
Standard Industrial Classification, a system for classifying industries by a four-digit code
United Kingdom Standard Industrial Classification of Economic Activities, a Standard Industrial Classification by type of economic activity
EU taxonomy for sustainable activities, a classification system established to clarify which investments are environmentally sustainable, in the context of the European Green Deal.
Records management taxonomy, the representation of data, upon which the classification of unstructured content is based, within an organization.
XBRL Taxonomy, eXtensible Business Reporting Language
SRK taxonomy, in workplace user-interface design
Computing
Software engineering
Vegas et al. make a compelling case to advance the knowledge in the field of software engineering through the use of taxonomies. Similarly, Ore et al. provide a systematic methodology to approach taxonomy building in software engineering related topics.
Several taxonomies have been proposed in software testing research to classify techniques, tools, concepts and artifacts. The following are some example taxonomies:
A taxonomy of model-based testing techniques
A taxonomy of static-code analysis tools
Engström et al. suggest and evaluate the use of a taxonomy to bridge the communication between researchers and practitioners engaged in the area of software testing. They have also developed a web-based tool to facilitate and encourage the use of the taxonomy. The tool and its source code are available for public use.
Other uses of taxonomy in computing
Flynn's taxonomy, a classification for instruction-level parallelism methods
Folksonomy, classification based on user's tags
Taxonomy for search engines, considered as a tool to improve relevance of search within a vertical domain
ACM Computing Classification System, a subject classification system for computing devised by the Association for Computing Machinery
Education and academia
Uses of taxonomy in education include:
Bloom's taxonomy, a standardized categorization of learning objectives in an educational context
Classification of Instructional Programs, a taxonomy of academic disciplines at institutions of higher education in the United States
Mathematics Subject Classification, an alphanumerical classification scheme based on the coverage of Mathematical Reviews and Zentralblatt MATH
SOLO taxonomy (Structure of Observed Learning Outcome), proposed by Biggs and Collis
Safety
Uses of taxonomy in safety include:
Safety taxonomy, a standardized set of terminologies used within the fields of safety and health care
Human Factors Analysis and Classification System, a system to identify the human causes of an accident
Swiss cheese model, a model used in risk analysis and risk management propounded by Dante Orlandella and James T. Reason
A taxonomy of rail incidents in Confidential Incident Reporting & Analysis System (CIRAS)
Other taxonomies
Military taxonomy, a set of terms that describe various types of military operations and equipment
Moys Classification Scheme, a subject classification for law devised by Elizabeth Moys
Research publishing
Citing inadequacies with current practices in listing authors of papers in medical research journals, Drummond Rennie and co-authors called in a 1997 article in JAMA, the Journal of the American Medical Association for
a radical conceptual and systematic change, to reflect the realities of multiple authorship and to buttress accountability. We propose dropping the outmoded notion of author in favor of the more useful and realistic one of contributor.
In 2012, several major academic and scientific publishing bodies mounted Project CRediT to develop a controlled vocabulary of contributor roles. Known as CRediT (Contributor Roles Taxonomy), this is an example of a flat, non-hierarchical taxonomy; however, it does include an optional, broad classification of the degree of contribution: lead, equal or supporting. Amy Brand and co-authors summarise their intended outcome as:
Identifying specific contributions to published research will lead to appropriate credit, fewer author disputes, and fewer disincentives to collaboration and the sharing of data and code.
CRediT comprises 14 specific contributor roles using the following defined terms:
Conceptualization
Methodology
Software
Validation
Formal Analysis
Investigation
Resources
Data curation
Writing – Original Draft
Writing – Review & Editing
Visualization
Supervision
Project Administration
Funding acquisition
The taxonomy is an open standard conforming to the OpenStand principles, and is published under a Creative Commons licence.
Taxonomy for the web
Websites with a well designed taxonomy or hierarchy are easily understood by users, due to the possibility of users developing a mental model of the site structure.
Guidelines for writing taxonomy for the web include:
Mutually exclusive categories can be beneficial. If a category appears in several places, this is called cross-listing or polyhierarchy. A hierarchy loses its value if cross-listing appears too often. Cross-listing often arises when working with ambiguous categories that fit in more than one place.
Having a balance between breadth and depth in the taxonomy is beneficial. Too many options (excessive breadth) will overload users by giving them too many choices, while an overly narrow structure that requires more than two or three levels of click-through will frustrate users, who may simply give up; the sketch below shows one way to measure both properties.
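One way to make the breadth/depth trade-off concrete is to measure a site taxonomy's maximum depth and average branching factor, as in this Python sketch (the site structure is invented for illustration):

```python
site = {
    "home": ["products", "support", "about"],
    "products": ["laptops", "phones"],
    "support": ["faq", "contact"],
    "about": [], "laptops": [], "phones": [], "faq": [], "contact": [],
}

def depth(node):
    """Number of levels a user must click through from this node."""
    children = site[node]
    return 1 + max((depth(c) for c in children), default=0)

# Breadth: average number of choices at each non-leaf node.
branching = [len(c) for c in site.values() if c]
print("max depth:", depth("home"))                          # 3
print("average breadth:", sum(branching) / len(branching))  # 7/3 ≈ 2.33
```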
In communications theory
Frederick Suppe distinguished two senses of classification: a broad meaning, which he called "conceptual classification" and a narrow meaning, which he called "systematic classification".
About conceptual classification Suppe wrote: "Classification is intrinsic to the use of language, hence to most if not all communication. Whenever we use nominative phrases we are classifying the designated subject as being importantly similar to other entities bearing the same designation; that is, we classify them together. Similarly the use of predicative phrases classifies actions or properties as being of a particular kind. We call this conceptual classification, since it refers to the classification involved in conceptualizing our experiences and surroundings"
About systematic classification Suppe wrote: "A second, narrower sense of classification is the systematic classification involved in the design and utilization of taxonomic schemes such as the biological classification of animals and plants by genus and species."
Is-a and has-a relationships, and hyponymy
Two of the predominant types of relationships in knowledge-representation systems are predication and the universally quantified conditional. Predication relationships express the notion that an individual entity is an example of a certain type (for example, John is a bachelor), while universally quantified conditionals express the notion that a type is a subtype of another type (for example, "A dog is a mammal", which means the same as "All dogs are mammals").
The "has-a" relationship is quite different: an elephant has a trunk; a trunk is a part, not a subtype of elephant. The study of part-whole relationships is mereology.
Taxonomies are often represented as is-a hierarchies where each level is more specific than the level above it (in mathematical language is "a subset of" the level above). For example, a basic biology taxonomy would have concepts such as mammal, which is a subset of animal, and dogs and cats, which are subsets of mammal. This kind of taxonomy is called an is-a model because the specific objects are considered as instances of a concept. For example, Fido is-an instance of the concept dog and Fluffy is-a cat.
In linguistics, is-a relations are called hyponymy. When one word describes a category but another describes some subset of that category, the larger term is called a hypernym with respect to the smaller, and the smaller is called a "hyponym" with respect to the larger. Such a hyponym, in turn, may have further subcategories for which it is a hypernym. In the simple biology example, dog is a hypernym with respect to its subcategory collie, which in turn is a hypernym with respect to Fido, which is one of its hyponyms. Typically, however, hypernym is used to refer to subcategories rather than single individuals.
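In object-oriented terms, is-a corresponds to subtyping and has-a to composition; a small Python sketch of the dog/collie/Fido and elephant/trunk examples above:

```python
class Animal: ...
class Mammal(Animal): ...      # is-a: Mammal is a subtype of Animal
class Dog(Mammal): ...         # is-a: every Dog is a Mammal
class Collie(Dog): ...         # hyponym of Dog; Dog is its hypernym

class Trunk: ...
class Elephant(Mammal):
    def __init__(self):
        self.trunk = Trunk()   # has-a: a part, not a subtype

fido = Collie()                       # predication: Fido is an instance of Collie
print(isinstance(fido, Mammal))       # True: is-a chains upward through the tree
print(issubclass(Collie, Animal))     # True: the universally quantified conditional
```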
Research
Researchers reported that large populations consistently develop highly similar category systems. This may be relevant to lexical aspects of large communication networks and cultures such as folksonomies and language or human communication, and sense-making in general.
Theoretical approaches
Knowledge organization
Hull (1998) suggested "The fundamental elements of any classification are its theoretical commitments, basic units and the criteria for ordering these basic units into a classification".
There is a widespread opinion in knowledge organization and related fields that such classes correspond to concepts. We can, for example, classify "waterfowl" into the classes "ducks", "geese", and "swans"; we can also say, however, that the concept "waterfowl" is a generic broader term in relation to the concepts "ducks", "geese", and "swans". This example demonstrates the close relationship between classification theory and concept theory. A main opponent of concepts as units is Barry Smith. Arp, Smith and Spear (2015) discuss ontologies and criticize the conceptualist understanding. The book states (p. 7): "The code assigned to France, for example, is ISO 3166 – 2:FR and the code is assigned to France itself — to the country that is otherwise referred to as Frankreich or Ranska. It is not assigned to the concept of France (whatever that might be)." Smith's alternative to concepts as units is based on a realist orientation: when scientists make successful claims about the types of entities that exist in reality, they are referring to objectively existing entities, which realist philosophers call universals or natural kinds. Smith's main argument, with which many followers of concept theory agree, seems to be that classes cannot be determined by introspective methods but must be based on scientific and scholarly research. Whether units are called concepts or universals, the problem is to decide when a thing (say, a "blackbird") should be considered a natural class. In the case of blackbirds, for example, recent DNA analyses have led to a reconsideration of the concept (or universal) "blackbird", finding that what was formerly considered one species (with subspecies) is in reality many different species, which have merely evolved similar characteristics in adapting to their ecological niches.
An important argument for considering concepts the basis of classification is that concepts are subject to change, and that they change when scientific revolutions occur. Our concepts of many birds, for example, have changed with recent developments in DNA analysis and the influence of the cladistic paradigm, and have demanded new classifications. Smith's example of France demands an explanation. First, France is not a general concept but an individual concept. Next, the legal definition of France is determined by the conventions that France has made with other countries. It is still a concept, however, as Leclercq (1978) demonstrates with the corresponding concept Europe.
Hull (1998) continued: "Two fundamentally different sorts of classification are those that reflect structural organization and those that are systematically related to historical development." What is referred to is that in biological classification, classification by the anatomical traits of organisms is one kind, while classification in relation to the evolution of species is another (in the section below, these two fundamental sorts of classification are expanded to four). Hull adds that in biological classification, evolution supplies the theoretical orientation.
Ereshefsky
Ereshefsky (2000) presented and discussed three general philosophical schools of classification: "essentialism, cluster analysis, and historical classification. Historical classification sorts entities according to causal relations rather than their intrinsic qualitative features."
These three categories may, however, be considered parts of broader philosophies. Four main approaches to classification may be distinguished: (1) logical and rationalist approaches, including "essentialism"; (2) empiricist approaches, including cluster analysis (it is important to note that empiricism is not the same as empirical study, but a certain ideal of doing empirical studies; with the exception of the logical approaches, all of these are based on empirical studies, but they base their studies on different philosophical principles); (3) historical and hermeneutical approaches, including Ereshefsky's "historical classification"; and (4) pragmatic, functionalist and teleological approaches (not covered by Ereshefsky). In addition, there are combined approaches (e.g., the so-called "evolutionary taxonomy", which mixes historical and empiricist principles).
Logical and rationalist approaches
Logical division, or logical partitioning (top-down or downward classification), is an approach that divides a class into subclasses and then divides the subclasses into their own subclasses, and so on, finally forming a tree of classes. The root of the tree is the original class, and the leaves of the tree are the final classes. Plato advocated a method based on dichotomy, which was rejected by Aristotle and replaced by the method of definitions based on genus, species, and specific difference. The method of facet analysis (cf. faceted classification) is primarily based on logical division. This approach tends to classify according to "essential" characteristics, a widely discussed and criticized concept (cf. essentialism). These methods may overall be related to the rationalist theory of knowledge. Michelle Bunn notes that logical partitioning uses categories which are established a priori; data is then collected and used to test the extent to which the classification system can be sustained.
Empiricist approaches
"Empiricism alone is not enough: a healthy advance in taxonomy depends on a sound theoretical foundation"
Phenetics or numerical taxonomy is, by contrast, bottom-up classification: the starting point is a set of items or individuals, which are classified by putting those with shared characteristics into a narrow class and proceeding upward. Numerical taxonomy is an approach based solely on observable, measurable similarities and differences of the things to be classified. Classification is based on overall similarity: the elements that are most alike in most attributes are classified together. Because it is based on statistics, however, it does not fulfil the criteria of logical division (e.g., producing classes that are mutually exclusive and jointly coextensive with the class they divide). Some people will argue that this is not classification or taxonomy at all, but such an argument must consider the definitions of classification (see above). These methods may overall be related to the empiricist theory of knowledge.
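A bottom-up grouping in the phenetic spirit can be sketched as single-linkage agglomeration over pairwise similarities; in this Python sketch the items and similarity scores are invented for illustration:

```python
# Items and pairwise similarities (0..1); higher = more alike overall.
items = ["duck", "goose", "swan", "oak"]
sim = {
    ("duck", "goose"): 0.9, ("duck", "swan"): 0.8,
    ("goose", "swan"): 0.85, ("duck", "oak"): 0.1,
    ("goose", "oak"): 0.1, ("swan", "oak"): 0.15,
}

def similarity(a, b):
    # Look up either ordering; 1.0 is a safety default for identical items.
    return sim.get((a, b)) or sim.get((b, a)) or 1.0

# Start from singletons and repeatedly merge the most similar clusters
# (single linkage: cluster similarity = best similarity between members).
clusters = [frozenset([i]) for i in items]
while len(clusters) > 2:
    best = max(
        ((x, y) for x in clusters for y in clusters if x != y),
        key=lambda p: max(similarity(a, b) for a in p[0] for b in p[1]),
    )
    clusters = [c for c in clusters if c not in best] + [best[0] | best[1]]

print([sorted(c) for c in clusters])  # waterfowl grouped together, 'oak' apart
```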
Historical and hermeneutical approaches
Genealogical classification is classification of items according to their common heritage. This must also be done on the basis of some empirical characteristics, but these characteristics are selected on the basis of the theory of evolution. Charles Darwin's main contribution to classification theory was not just his claim that "... all true classification is genealogical ..." but that he provided operational guidance for classification. Genealogical classification is not restricted to biology; it is also much used in, for example, the classification of languages, and may be considered a general approach to classification. These methods may overall be related to the historicist theory of knowledge. One of the main schools of historical classification is cladistics, which is today dominant in biological taxonomy but is also applied to other domains.
The historical and hermeneutical approaches are not restricted to the development of the object of classification (e.g., animal species) but are also concerned with the subject of classification (the classifiers) and their embeddedness in scientific traditions and other human cultures.
Pragmatic, functionalist and teleological approaches
Pragmatic classification (and functional and teleological classification) is the classification of items in a way that emphasizes the goals, purposes, consequences, interests, values and politics of classification. It is, for example, classifying animals into wild animals, pests, domesticated animals and pets. Kitchenware (tools, utensils, appliances, dishes, and cookware used in food preparation or the serving of food) is likewise an example of a classification that is not based on any of the three methods above but clearly on pragmatic or functional criteria. Bonaccorsi et al. (2019) address the general theory of functional classification and applications of this approach to patent classification. Although the examples may suggest that pragmatic classifications are primitive compared to established scientific classifications, they must be considered in relation to the pragmatic and critical theories of knowledge, which consider all knowledge to be influenced by interests.
Ridley (1986) wrote: "teleological classification. Classification of groups by their shared purposes, or functions, in life - where purpose can be identified with adaptation. An imperfectly worked-out, occasionally suggested, theoretically possible principle of classification that differs from the two main such principles, phenetic and phylogenetic classification".
Artificial versus natural classification
Natural classification is a concept closely related to the concept of a natural kind. Carl Linnaeus is often recognized as the first scholar to have clearly differentiated "artificial" and "natural" classifications. A natural classification is one that, using Plato's metaphor, is "carving nature at its joints". Although Linnaeus considered natural classification the ideal, he recognized that his own system (at least partly) represented an artificial classification.
John Stuart Mill explained the artificial nature of the Linnaean classification and suggested the following definition of a natural classification: "The Linnæan arrangement answers the purpose of making us think together of all those kinds of plants, which possess the same number of stamens and pistils; but to think of them in that manner is of little use, since we seldom have anything to affirm in common of the plants which have a given number of stamens and pistils." "The ends of scientific classification are best answered, when the objects are formed into groups respecting which a greater number of general propositions can be made, and those propositions more important, than could be made respecting any other groups into which the same things could be distributed." "A classification thus formed is properly scientific or philosophical, and is commonly called a Natural, in contradistinction to a Technical or Artificial, classification or arrangement." Ridley (1986) provided the following definitions:
"artificial classification. The term (like its opposite, natural classification) has many meanings; in this book I have picked a phenetic meaning. A classificatory group will be defined by certain characters, called defining characters; in an artificial classification, the members of a group resemble one another in their defining characters (as they must, by definition) but not in their non-defining characters. With respect to the characters not used in the classification, the members of a group are uncorrelated.
"natural classification. Classificatory groups are defined by certain characters, called 'defining' characters; in a natural group, the members of the group resemble one another for non-defining characters as well as for the defining character. This is not the only meaning for what is perhaps the most variously used term in taxonomy ...
Taxonomic monism vs. pluralism
Stamos (2004) wrote: "The fact is, modern scientists classify atoms into elements based on proton number rather than anything else because it alone is the causally privileged factor [gold is atomic number 79 in the periodic table because it has 79 protons in its nucleus]. Thus nature itself has supplied the causal monistic essentialism. Scientists in their turn simply discover and follow (where "simply" ≠ "easily")."
Examples of important taxonomies
Periodic table
The periodic table is the classification of the chemical elements which is in particular associated with Dmitri Mendeleev (cf., History of the periodic table). An authoritative work on this system is Scerri (2020). Hubert Feger (2001; numbered listing added) wrote about it: "A well-known, still used, and expanding classification is Mendeleev's Table of Elements. It can be viewed as a prototype of all taxonomies in that it satisfies the following evaluative criteria:
Theoretical foundation: A theory determines the classes and their order.
Objectivity: The elements can be observed and classified by anybody familiar with the table of elements.
Completeness: All elements find a unique place in the system, and the system implies a list of all possible elements.
Simplicity: Only a small amount of information is used to establish the system and identify an object.
Predictions: The values of variables not used for classification can be predicted (number of electrons and atomic weight), as well as the existence of relations and of objects hitherto unobserved. Thus, the validity of the classification system itself becomes testable."
Bursten (2020) wrote, however "Hepler-Smith, a historian of chemistry, and I, a philosopher whose work often draws on chemistry, found common ground in a shared frustration with our disciplines’ emphases on the chemical elements as the stereotypical example of a natural kind. The frustration we shared was that while the elements did display many hallmarks of paradigmatic kindhood, elements were not the kinds of kinds that generated interesting challenges for classification in chemistry, nor even were they the kinds of kinds that occupied much contemporary critical chemical thought. Compounds, complexes, reaction pathways, substrates, solutions – these were the kinds of the chemistry laboratory, and rarely if ever did they slot neatly into taxonomies in the orderly manner of classification suggested by the Periodic Table of Elements. A focus on the rational and historical basis of the development of the Periodic Table had made the received view of chemical classification appear far more pristine, and far less interesting, than either of us believed it to be."
Linnaean taxonomy
Linnaean taxonomy is the particular form of biological classification (taxonomy) set up by Carl Linnaeus, as set forth in his Systema Naturae (1735) and subsequent works. A major discussion in the scientific literature is whether a system that was constructed before Charles Darwin's theory of evolution can still be fruitful and reflect the development of life.
Astronomy
Astronomy is a fine example of how Kuhn's (1962) theory of scientific revolutions (or paradigm shifts) influences classification. For example:
Paradigm one: Ptolemaic astronomers might learn the concepts "star" and "planet" by having the Sun, the Moon, and Mars pointed out as instances of the concept “planet” and some fixed stars as instances of the concept “star.”
Paradigm two: Copernicans might learn the concepts "star", "planet", and "satellites" by having Mars and Jupiter pointed out as instances of the concept “planet,” the Moon as an instance of the concept “satellite,” and the Sun and some fixed stars as instances of the concept "star". Thus, the concepts "star", "planet", and "satellite" got a new meaning and astronomy got a new classification of celestial bodies.
Hornbostel–Sachs classification of musical instruments
Hornbostel–Sachs is a system of musical instrument classification devised by Erich Moritz von Hornbostel and Curt Sachs, and first published in 1914. In the original classification, the top categories are:
Idiophones: instruments that rely on the body of the instrument to create and resonate sound.
Membranophones: instruments that have a membrane that is stretched over a structure, often wood or metal, and struck or rubbed to produce a sound. The subcategories are largely determined by the shape of the structure that the membrane is stretched over.
Chordophones: instruments that use vibrating strings, which are most commonly stretched across a metal or wooden structure, to create sound.
Aerophones: instruments that require air passing through, or across, them to create sound. Most are constructed of wood or metal.
A fifth top category was added by Sachs in 1940:
Electrophones: instruments that require electricity to be amplified and heard.
Each top category is further subdivided, and Hornbostel–Sachs is a very comprehensive classification of musical instruments with wide applications. In Wikipedia, for example, all musical instruments are organized according to this classification.
In opposition to, for example, the astronomical and biological classifications presented above, the Hornbostel–Sachs classification seems very little influenced by research in musicology and organology. It is based on huge collections of musical instruments, but it appears more as a system imposed upon the universe of instruments than as a system with organic connections to scholarly theory. It may therefore be interpreted as a system based on logical division and rationalist philosophy.
Diagnostic and Statistical Manual of Mental Disorders (DSM)
Diagnostic and Statistical Manual of Mental Disorders (DSM) is a classification of mental disorders published by the American Psychiatric Association (APA). The first edition of the DSM was published in 1952, and the newest, fifth edition was published in 2013. In contrast to, for example, the periodic table and the Hornbostel–Sachs classification, the principles for classification have changed much during its history. The first edition was influenced by psychodynamic theory; the DSM-III, published in 1980, adopted an atheoretical, "descriptive" approach to classification. The system is very important for all people involved in psychiatry, whether as patients, researchers or therapists (in addition to insurance companies), but it is strongly criticized and does not have the scientific status of many other classifications.
Sample list of taxonomies
Business, organizations, and economics
Classification of customers, for marketing (as in Master data management) or for profitability (e.g. by Activity-based costing)
Classified information, as in legal or government documentation
Job classification, as in job analysis
Standard Industrial Classification, economic activities
Mathematics
Attribute-value system, a basic knowledge representation framework
Classification theorems in mathematics
Mathematical classification, grouping mathematical objects based on a property that all those objects share
Statistical classification, identifying to which of a set of categories a new observation belongs, on the basis of a training set of data
Media
Classification (literature), a figure of speech linking a proper noun to a common noun using "the" or other articles
Decimal classification, decimal classification systems
Document classification, a problem in library science, information science and computer science
Classified information, sensitive information to which access is restricted by law or regulation to particular classes of people
Library classification, a system of coding, assorting and organizing library materials according to their subject
Image classification in computer vision
Motion picture rating system, for film classification
Science
Scientific classification (disambiguation)
Biological classification of organisms
Chemical classification
Medical classification, the process of transforming descriptions of medical diagnoses and procedures into universal medical code numbers
Taxonomic classification, also known as classification of species
Cladistics, an approach that groups organisms by shared derived characteristics (common ancestry)
Other
An industrial process such as mechanical screening for sorting materials by size, shape, density, etc.
Civil service classification, personnel grades in government
Classification of swords
Classification of wine
Locomotive classification
Product classification
Security classification, information to which access is restricted by law or regulation
Ship classification society, a non-governmental organization that establishes and maintains technical standards for the construction and operation of ships and offshore structures
Organizations involved in taxonomy
International Society for Knowledge Organization
Classification Society
| Physical sciences | Science basics | Basics and measurement |
43733503 | https://en.wikipedia.org/wiki/Laniakea%20Supercluster | Laniakea Supercluster | The Laniakea Supercluster (Hawaiian for "open skies" or "immense heaven") or the Local Supercluster (LSC or LS) is the galaxy supercluster that is home to the Milky Way and approximately 100,000 other nearby galaxies.
It was defined in September 2014, when a group of astronomers including R. Brent Tully of the University of Hawaiʻi, Hélène Courtois of the University of Lyon, Yehuda Hoffman of the Hebrew University of Jerusalem, and Daniel Pomarède of CEA Université Paris-Saclay published a new way of defining superclusters according to the relative velocities of galaxies. The new definition of the local supercluster subsumes the previously defined Virgo and Hydra–Centaurus superclusters as appendages; the former was the previously defined local supercluster.
Follow-up studies suggest that the Laniakea Supercluster is not gravitationally bound. It will disperse rather than continue to maintain itself as an overdensity relative to surrounding areas.
Name
The name means 'immense heaven' in Hawaiian. It was suggested by Nawaʻa Napoleon, an associate professor of Hawaiian language at Kapiolani Community College, and honors Polynesian navigators, who used knowledge of the sky to navigate the Pacific Ocean.
Characteristics
The Laniakea Supercluster encompasses approximately 100,000 galaxies stretched out over . It has an approximate mass of 10¹⁷ solar masses, or 100,000 times that of our galaxy, which is almost the same as that of the Horologium Supercluster. It consists of four subparts, which were known previously as separate superclusters:
Virgo Supercluster, the part in which the Milky Way resides.
Hydra–Centaurus Supercluster
the Great Attractor, Laniakea's central gravitational point near Norma
Antlia Wall, known as Hydra Supercluster
Centaurus Supercluster
Pavo–Indus Supercluster
Southern Supercluster, including Fornax Cluster (S373), Dorado and Eridanus clouds.
The most massive galaxy clusters of the Laniakea Supercluster are Virgo, Hydra, Centaurus, Abell 3565, Abell 3574, Abell 3521, Fornax, Eridanus, and Norma. The entire supercluster consists of approximately 300 to 500 known galaxy clusters and groups. The real number may be much larger because some of these lie in the Zone of Avoidance, an area of the sky that is partially obscured by gas and dust from the Milky Way galaxy, making them essentially undetectable.
Superclusters are some of the universe's largest structures and have boundaries that are difficult to define, especially from the inside. Within a given supercluster, most galaxy motions will be directed inward, toward the center of mass. This gravitational focal point, in the case of Laniakea, is called the Great Attractor, and influences the motions of the Local Group of galaxies, where the Milky Way galaxy resides, and all others throughout the supercluster. Unlike its constituent clusters, Laniakea is not gravitationally bound and is projected to be torn apart by dark energy.
Although the confirmation of the existence of the Laniakea Supercluster emerged in 2014, early studies in the 1980s already suggested that several of the superclusters then known might be connected. For example, South African astronomer Tony Fairall stated in 1988 that redshifts suggested that the Virgo and Hydra–Centaurus superclusters may be connected.
Location
The neighboring superclusters to the Laniakea Supercluster are the Shapley Supercluster, Hercules Supercluster, Coma Supercluster, and Perseus–Pisces Supercluster. The edges of the superclusters and Laniakea were not clearly known at the time of Laniakea's definition. Since then, the study of the edges of the supercluster and of structures beyond them has substantially improved.
Laniakea is itself a constituent part of the Pisces–Cetus Supercluster Complex, a galaxy filament.
| Physical sciences | Notable galaxy clusters | Astronomy |
32501050 | https://en.wikipedia.org/wiki/Local%20Sheet | Local Sheet | The Local Sheet in astronomy is a nearby extragalactic region of space where the Milky Way, the members of the Local Group and other galaxies share a similar peculiar velocity. This region lies within a radius of about , thick, and galaxies beyond that distance show markedly different velocities. The Local Group has only a relatively small peculiar velocity of with respect to the Local Sheet. Typical velocity dispersion of galaxies is only in the radial direction. Nearly all nearby bright galaxies belong to the Local Sheet. The Local Sheet is part of the Local Volume and is in the Virgo Supercluster (Local Supercluster). The Local Sheet forms a wall of galaxies delineating one boundary of the Local Void.
A significant component of the mean velocity of the galaxies in the Local Sheet appears as the result of the gravitational attraction of the Virgo Cluster of galaxies, resulting in a peculiar motion ~ toward the cluster. A second component is directed away from the center of the Local Void; an expanding region of space spanning an estimated that is only sparsely populated with galaxies. This component has a velocity of . The Local Sheet is inclined 8° from the Local Supercluster (Virgo Supercluster).
The so-called Council of Giants is a ring of twelve large galaxies surrounding the Local Group in the Local Sheet, with a radius of . Ten of these are spirals, while the remaining two are ellipticals. The two ellipticals (Maffei 1 and Centaurus A) lie on opposite sides of the Local Group.
(A table of the Council of Giants galaxies appeared here; its masses were given as the logarithm, base unspecified, of the mass in solar masses.)
| Physical sciences | Notable galaxy clusters | Astronomy |
36687154 | https://en.wikipedia.org/wiki/Estimation | Estimation | Estimation (or estimating) is the process of finding an estimate or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable. The value is nonetheless usable because it is derived from the best information available. Typically, estimation involves "using the value of a statistic derived from a sample to estimate the value of a corresponding population parameter". The sample provides information that can be projected, through various formal or informal processes, to determine a range most likely to describe the missing information. An estimate that turns out to be incorrect will be an overestimate if the estimate exceeds the actual result and an underestimate if the estimate falls short of the actual result.
The confidence in an estimate is quantified as a confidence interval, the likelihood that the estimate is in a certain range. Human estimators systematically suffer from overconfidence, believing that their estimates are more accurate than they actually are.
How estimation is done
Estimation is often done by sampling, which is counting a small number from a selected subset, and projecting that number onto a larger population. An example of estimation would be determining how many candies of a given size are in a glass jar. Because the distribution of candies inside the jar may vary, the observer can count the number of candies visible through the glass, consider the size of the jar, and presume that a similar distribution can be found in the parts that can not be seen, thereby making an estimate of the total number of candies that could be in the jar if that presumption were true. Estimates can similarly be generated by projecting results from polls or surveys onto the entire population.
In making an estimate, the goal is often to generate a range of possible outcomes that is precise enough to be useful but not so precise that it is likely to be inaccurate. For example, in trying to guess the number of candies in the jar, if fifty were visible, and the total volume of the jar seemed to be about twenty times as large as the volume containing the visible candies, then one might simply project that there were a thousand candies in the jar. Such a projection, intended to pick the single value that is believed to be closest to the actual value, is called a point estimate. However, a point estimate is likely to be incorrect, because the sample size—in this case, the number of candies that are visible—is too small a number to be sure that it does not contain anomalies that differ from the population as a whole. A corresponding concept is an interval estimate, which captures a much larger range of possibilities but may be too broad to be useful. For example, if one were asked to estimate the percentage of people who like candy, it would clearly be correct that the number falls between zero and one hundred percent. Such an estimate would provide no guidance, however, to somebody who is trying to determine how many candies to buy for a party to be attended by a hundred people.
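The candy-jar arithmetic above can be made concrete. In the minimal Python sketch below, only the figures of fifty visible candies and a twenty-fold volume ratio come from the example; the ±20% interval is an invented stand-in used to contrast point and interval estimates.

```python
# Worked version of the candy-jar example above: a point estimate from a
# visible sample, plus a deliberately crude interval around it.

visible_candies = 50          # counted through the glass
volume_ratio = 20             # jar volume / volume holding the visible candies

# Point estimate: project the sample onto the whole jar.
point_estimate = visible_candies * volume_ratio
print(point_estimate)         # 1000

# Illustrative interval estimate (the +/-20% packing variation is invented,
# not a figure from the article):
low, high = int(point_estimate * 0.8), int(point_estimate * 1.2)
print(low, high)              # 800 1200
```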
Uses of estimation
In mathematics, approximation describes the process of finding estimates in the form of upper or lower bounds for a quantity that cannot readily be evaluated precisely, and approximation theory deals with finding simpler functions that are close to some complicated function and that can provide useful estimates. In statistics, an estimator is the formal name for the rule by which an estimate is calculated from data, and estimation theory deals with finding estimators with good properties. This process is used in signal processing, for approximating an unobserved signal on the basis of an observed signal containing noise. For estimation of yet-to-be observed quantities, forecasting and prediction are applied. A Fermi problem, in physics, is one concerning estimation in problems that typically involve making justified guesses about quantities that seem impossible to compute given limited available information.
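Fermi problems of the kind just mentioned are solved by multiplying rough, defensible guesses. The sketch below runs the classic piano-tuner estimate in Python; every input is an assumption chosen for illustration, and only the order of magnitude of the answer is meant to be taken seriously.

```python
# A toy Fermi-style estimate (all inputs are rough assumptions, chosen only
# to show the multiply-the-guesses method): piano tuners in a city.

population = 1_000_000                    # assumed city size
people_per_household = 2                  # assumed
households_with_piano = 1 / 20            # assumed
tunings_per_piano_per_year = 1            # assumed
tunings_per_tuner_per_year = 2 * 5 * 50   # 2 a day, 5 days a week, 50 weeks

pianos = population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))   # ~50 -- the right order of magnitude is the goal
```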
Estimation is important in business and economics because too many variables exist to figure out how large-scale activities will develop. Estimation in project planning can be particularly significant, because plans for the distribution of labor and purchases of raw materials must be made, despite the inability to know every possible problem that may come up. A certain amount of resources will be available for carrying out a particular project, making it important to obtain or generate a cost estimate as one of the vital elements of entering into the project. The U.S. Government Accountability Office defines a cost estimate as "the summation of individual cost elements, using established methods and valid data, to estimate the future costs of a program, based on what is known today", and reports that "realistic cost estimating was imperative when making wise decisions in acquiring new systems". Furthermore, project plans must neither underestimate the needs of the project, which can result in delays while unmet needs are fulfilled, nor greatly overestimate them, or else the unneeded resources may go to waste.
An informal estimate when little information is available is called a guesstimate because the inquiry becomes closer to purely guessing the answer. The estimated sign, ℮, is used to designate that package contents are close to the nominal contents.
| Mathematics | Basics | null |
36694809 | https://en.wikipedia.org/wiki/Sojourner%20%28rover%29 | Sojourner (rover) | The robotic Sojourner rover reached Mars on July 4, 1997, as part of the Mars Pathfinder mission. Sojourner was operational on Mars for 92 sols (95 Earth days), and was the first wheeled vehicle to operate on an astronomical object other than the Earth or Moon. The landing site was in the Ares Vallis channel in the Chryse Planitia region of the Oxia Palus quadrangle.
The rover was equipped with front and rear cameras, and hardware that was used to conduct several scientific experiments. It was designed for a mission lasting 7 sols, with a possible extension to 30 sols, and was active for 83 sols (85 Earth days). The rover communicated with Earth through the Pathfinder base station, which had its last successful communication session with Earth at 3:23 a.m. PDT on September 27, 1997. The last signal from the rover was received on the morning of October 7, 1997.
Sojourner traveled just over by the time communication was lost. Its final confirmed command was to remain stationary until October 5, 1997 (sol 91), and then drive around the lander; there is no indication it was able to do so. The Sojourner mission formally ended on March 10, 1998, after all further options were exhausted.
Mission
Sojourner was an experimental vehicle whose main mission was to test in the Martian environment technical solutions that were developed by engineers of the NASA research laboratories. It was necessary to verify whether the design strategy followed had resulted in the construction of a vehicle suitable for the environment it would encounter, despite the limited knowledge of it. Careful analysis of the operations on Mars would make it possible to develop solutions to critical problems identified and to introduce improvements for subsequent planetary exploration missions. One of the mission's main aims was to prove the development of "faster, better and cheaper" spacecraft was possible. Development took three years and cost under $150 million for the lander, and $25 million for the rover; development was faster and less costly than all previous missions.
These objectives required careful selection of the landing site to balance the technical requests with the scientific ones. A large plain was needed for the probe to land and rocky terrain to verify the rover's systems. The choice fell on Ares Vallis in Chryse Planitia, which is characterized by alluvial-looking rock formations. Scholars believed the analysis of the rocks, which lie in what appears to be the outlet of a huge drainage channel, could have confirmed the past presence of liquid water on the surface of Mars and provide details of the surrounding areas, from which the rocks were eroded.
Technical characteristics
Sojourner was developed by NASA's Jet Propulsion Laboratory (JPL). It is a six-wheeled, long, wide and high vehicle. In the mission's cruise phase, it occupied an high space and has a mass of . It was supported by a lander, a tetrahedron-shaped structure with a mass of , and had a camera, scientific instrumentation, three petals of solar panels, a meteorology mast, and of equipment that was required to maintain communications between the rover and the lander. Hardware included a steerable, high-gain X-band antenna that could send approximately 5.5 kilobits per second into a Deep Space Network antenna, gallium-arsenide solar arrays that generated 1.1 kW⋅h/day and were capable of providing enough power to transmit for 2–4 hours per sol and maintain 128 megabytes of dynamic memory through the night.
Lander
One of the lander's main tasks was to support the rover by imaging its operations and sending data from the rover to Earth. The lander had rechargeable batteries and over of solar cells on its petals. The lander contained a stereoscopic camera with spatial filters on an expandable pole called Imager for Mars Pathfinder (IMP), and the Atmospheric Structure Instrument/Meteorology Package (ASI/MET) which acted as a Mars meteorological station, collecting data about pressure, temperature, and winds. The MET structure included three windsocks mounted at three heights on a pole, the topmost at about and generally registered winds from the west. To provide continuous data, the IMP imaged the windsocks once every daylight hour. These measurements allowed the eolian processes at the landing site, including the particle threshold and the aerodynamic surface roughness, to be measured.
The square eyes of the IMP camera are separated by to provide stereoscopic vision and ranging performance to support rover operations. The dual optical paths are folded by two sets of mirrors to bring the light to a single charge-coupled device (CCD). To minimize moving parts, the IMP is electronically shuttered; half of the CCD is masked and used as a readout zone for the electronic shutter. The optics had an effective pixel resolution of one milliradian per pixel, which gives per pixel at a range of . The camera cylinder is mounted on gimbals that provide rotation freedom of 360° in azimuth and −67° to +90° in elevation. This assembly is supported by an extendible mast that was designed and built by AEC Able Engineering. The mast holds the camera at approximately above the Martian surface and extends Pathfinder's horizon to on a featureless plane.
Power system
Sojourner had solar panels and a non-rechargeable lithium-thionyl chloride (LiSOCl2) battery that could provide 150 watt-hours and allowed limited nocturnal operations. Once the batteries were depleted, the rover could only operate during the day. The batteries also allowed the rover's health to be checked while enclosed in the cruise stage while en route to Mars. The rover had of solar cells, which could produce a maximum of about 15 watts on Mars, depending on conditions. The cells were GaAs/Ge (Gallium Arsenide/Germanium) with approximately 18 percent efficiency. They could survive temperatures down to about . After about its 40th sol on Mars, the lander's battery no longer held a charge so it was decided to shut off the rover before sunset and wake it up at sunrise.
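A rough sanity check can be run on the figures above. In this Python sketch, only the 150 Wh battery capacity and the ~15 W peak solar output come from the text; the constant electrical load is a hypothetical stand-in.

```python
# Rough energy bookkeeping from the figures above. The assumed constant load
# is invented; only the 150 Wh capacity and ~15 W peak output are from the text.

battery_wh = 150.0          # non-rechargeable battery capacity (from text)
peak_solar_w = 15.0         # max panel output on Mars (from text)
assumed_load_w = 10.0       # hypothetical average electrical draw

# Total nocturnal operating time available over the whole mission,
# since the battery could not be recharged once depleted:
print(battery_wh / assumed_load_w, "hours")   # 15.0 hours

# By day, a near-peak panel output would cover the same load:
print(peak_solar_w >= assumed_load_w)         # True
```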
Locomotion system
The rover's wheels were made of aluminum and were in diameter and wide. They had serrated, stainless steel tracks that could generate a pressure of in optimal conditions on soft ground. No such need arose during the operational phase. Each wheel was driven by its own independent motor. The first and third wheels were used for steering. A six-wheel-steering configuration was considered, but this was too heavy. As the rover rotated on itself, it drew a wide circle.
The wheels were connected to the frame through specially developed suspension to ensure all six were in contact with the ground, even on rough terrain. JPL's Don Bickler developed the wheels, which were referred to as "Rocker-bogie", for the experimental "Rocky" vehicles, of which the Sojourner is the eighth version. They consisted of two elements; "Bogie" connected the front wheel with the central one and "Rocker" connected the rear wheel with the other two. The system did not include springs or other elastic elements, which could have increased the pressure exerted by each wheel. This system allowed the overcoming of obstacles up to high but theoretically would have allowed the rover to overcome obstacles of , or about 30% of the rover's length. The suspension system was also given the ability to collapse on itself so the rover would occupy in the cruising configuration.
The locomotion system was found to be suitable for the environment of Mars—being very stable, and allowing forward and backward movements with similar ease—and was adopted with appropriate precautions in the subsequent Spirit and Opportunity rover missions.
In the ten-year development phase that led to the realization of Sojourner, alternative solutions that could take advantage of the long experience gained at JPL in the development of vehicles for the Moon and Mars were examined. The use of four or more legs was excluded for three reasons: a low number of legs would limit the rover's movements and freedom of action; increasing the number would lead to a significant increase in complexity; and proceeding in this configuration would also require knowledge of the space in front—the ground corresponding to the next step—leading to further difficulties. The choice of a wheeled vehicle solved most of the stability problems, led to a reduction in weight, and improved efficiency and control compared to the previous solution. The simplest configuration was a four-wheel system that, however, encountered difficulties in overcoming obstacles. Better solutions were the use of six or eight wheels with the rear ones able to push, allowing the obstacle to be overcome. The lighter, simpler, six-wheeled option was preferred.
The rover could travel from the lander—the approximate limit of its communication range— and had a maximum speed of .
Hardware and software
Sojourner's central processing unit (CPU) was an Intel 80C85 with a 2 MHz clock, addressing 64 kilobytes (KB) of memory, and running a cyclic executive. It had four memory stores: 64 KB of RAM made by IBM for the main processor, 16 KB of radiation-hardened PROM made by Harris, 176 KB of non-volatile storage made by Seeq Technology, and 512 KB of temporary data storage made by Micron. The electronics were housed inside the rover's warm electronics box (WEB). The WEB is a box-like structure formed from fiberglass facesheets bonded to aluminum spars. The gaps between facesheets were filled with blocks of aerogel that worked as thermal insulation. The aerogel used on Sojourner had a density of approximately 20 mg/cc. This insulator was designed to trap heat generated by the rover's electronics; this trapped heat soaked at night through the passive insulation, maintaining the electronics in the WEB at between , while externally the rover experienced a temperature range between .
The Pathfinder lander's computer was a radiation-hardened IBM RISC 6000 Single Chip with a Rad6000 SC CPU, 128 megabytes (MB) of RAM and 6 MB of EEPROM memory, and its operating system was VxWorks.
The mission was jeopardised by a concurrency bug in the lander's software that had been found in preflight testing but was deemed a glitch and given a low priority, because it only occurred in certain unanticipated heavy-load conditions and the focus was on verifying the entry and landing code. The problem, which was reproduced and corrected from Earth using a laboratory duplicate, was due to computer resets caused by priority inversion. No scientific or engineering data was lost after a computer reset, but all following operations were interrupted until the next day. Resets occurred on July 5, 10, 11 and 14 during the mission before the software was patched on July 21 to enable priority inheritance.
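The failure mode and its fix can be illustrated with a toy scheduling model. The Python sketch below is purely illustrative: the task names are simplified stand-ins for the Pathfinder tasks, and the code models only the priority decision, not VxWorks itself.

```python
# Toy model of the priority-inversion scenario described above and the
# priority-inheritance fix. Purely illustrative; not flight software.

class Task:
    def __init__(self, name, priority):
        self.name, self.base = name, priority
        self.effective = priority

def who_runs(holder, med, high, inheritance):
    # The low-priority task holds a shared mutex; the high-priority task is
    # blocked on that mutex; the medium-priority task is ready to run.
    if inheritance:
        # Priority inheritance: boost the mutex holder to the priority of
        # its highest waiter so the medium task can no longer preempt it.
        holder.effective = max(holder.base, high.base)
    runnable = [med, holder]                      # high is blocked
    return max(runnable, key=lambda t: t.effective)

low = Task("data distribution", 1)    # holds the mutex (simplified names)
med = Task("communications", 2)
high = Task("bus management", 3)      # blocked waiting for the mutex

print(who_runs(low, med, high, inheritance=False).name)
# -> "communications": the medium task starves the holder, the high-priority
#    task misses its deadline, and a watchdog would reset the system.
print(who_runs(low, med, high, inheritance=True).name)
# -> "data distribution": the boosted holder runs, soon releases the mutex,
#    and the high-priority task can proceed.
```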
Communication and cameras
Sojourner communicated with its base station using a 9,600 baud radio modem, although error-checking protocols limited communications to a functional rate of 2,400 baud with a theoretical range of about . Under normal operation, it would periodically send a "heartbeat" message to the lander. If no response was given, the rover could autonomously return to the location at which the last heartbeat was received. If desired, the same strategy could be used to deliberately extend the rover's operational range beyond that of its radio transceiver, although the rover rarely traveled further than from Pathfinder during its mission. The ultra high frequency (UHF) radio modems operated in half-duplex mode, meaning they could either send or receive data but not both at the same time. The data was communicated in bursts of 2 kB.
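The heartbeat-and-backtrack behavior described above can be sketched as a simple control loop. Everything in this Python sketch is invented for illustration (step sizes, return policy); it is not the flight software.

```python
# Toy version of the heartbeat behavior described above. Step sizes and the
# return policy are invented for illustration; this is not flight code.

def heartbeat_drive(advances, acks):
    position = 0.0
    last_confirmed = 0.0            # where the lander last answered
    for advance, ack in zip(advances, acks):
        position += advance
        if ack:
            last_confirmed = position
        else:
            # No reply to the heartbeat: autonomously backtrack to the
            # location where contact was last confirmed.
            position = last_confirmed
    return position

# Three half-metre advances; the lander stops answering on the third.
print(heartbeat_drive([0.5, 0.5, 0.5], [True, True, False]))   # -> 1.0
```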
The rover was imaged on Mars by the base station's IMP camera system, which also helped determine where the rover should go.
The rover had two monochrome cameras in front and a color camera at the rear. Each front camera had an array 484 pixels high by 768 wide. The cameras used CCDs manufactured by Eastman Kodak Company; they were clocked out by the CPU, and were capable of auto-exposure, Block Truncation Coding (BTC) data compression, bad pixel/column handling, and image data packetizing.
Both front cameras were coupled with five laser stripe projectors that enabled stereoscopic images to be taken along with measurements for hazard detection in the rover's path. The optics consisted of a window, lens, and field flattener. The window was made of sapphire while the lens objective and flattener were made of zinc selenide.
Another color camera was located on the back of the rover near the APXS, and rotated by 90°. It provided images of the APXS's target area and the rover's ground tracks.
The sensor of this color camera was arranged so 12 of 16 pixels of a 4×4 pixel block were sensitive to green light; while 2 pixels were sensitive to red light and the other 2 were sensitive to infrared and blue light.
Because the rover's cameras had zinc-selenide lenses, which block light with a wavelength shorter than 500 nanometers (nm), no blue light actually reached the blue-and-infrared-sensitive pixels, which therefore recorded only infrared light.
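The 4×4 block layout described above can be written out explicitly. In this Python sketch the counts (12 green, 2 red, 2 infrared-recording) follow the text, but the positions within the block are an assumption, since the article gives only the proportions.

```python
# One 4x4 pixel block of the rear color camera as described above: 12
# green-sensitive pixels, 2 red, and 2 nominally blue/infrared pixels that
# in practice recorded only infrared (the lens blocked light below ~500 nm).
# Positions inside the block are assumed; the text gives only the counts.
from collections import Counter

BLOCK = [
    ["G", "G",  "G", "G"],
    ["G", "R",  "G", "IR"],
    ["G", "G",  "G", "G"],
    ["G", "IR", "G", "R"],
]

print(Counter(pixel for row in BLOCK for pixel in row))
# Counter({'G': 12, 'R': 2, 'IR': 2})
```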
Rover Control Software
Sojourner's operation was supported by "Rover Control Software" (RCS) that ran on a Silicon Graphics Onyx2 computer on Earth and allowed command sequences to be generated using a graphical interface. The rover driver would wear 3D goggles supplied with imagery from the base station and would move a virtual model with a specialized joystick. The control software allowed the rover and surrounding terrain to be viewed from any angle, supporting the study of terrain features, the placing of waypoints, and virtual flyovers. Darts were used as icons to show where the rover should go. Desired locations were added to a sequence and sent to the rover to perform. Typically, a long sequence of commands was composed and sent once a day. The rover drivers were Brian K. Cooper and Jack Morrison.
Science payload
Alpha Proton X-Ray Spectrometer
The Alpha Proton X-Ray Spectrometer (APXS) was designed to determine the chemical composition of Martian soil, rocks and dust by analyzing the return radiation in its alpha, proton, and X-ray components resulting from the sample's exposure to a radioactive source contained in the instrument. The instrument had a curium-244 source that emits alpha particles with an energy of 5.8 MeV and a half-life of 18.1 years. A portion of the incident radiation that impacted the analyzed sample's surface was reflected and the remainder interacted with the sample.
The principle of the APXS technique is based on the interaction of alpha particles from a radioisotope source with matter. There are three components of the return radiation; simple Rutherford backscattering, production of protons from reactions with the nucleus of light elements, and generation of X-rays upon recombination of atomic shell vacancies created by alpha particle bombardment by interaction with the electrons of the innermost orbitals. The instrument was designed to detect the energy of all three components of the return radiation, making it possible to identify the atoms present and their quantities in a few tens of micrometers below the surface of the analyzed sample. The detection process was rather slow; each measurement could take up to ten hours.
Sensitivity and selectivity depend on the channel: alpha backscattering has high sensitivity for light elements like carbon and oxygen, proton emission is mainly sensitive to sodium, magnesium, aluminium, silicon and sulfur, and X-ray emission is more sensitive to heavier elements, from sodium to iron and beyond. Combining all three measurements makes APXS sensitive to all elements, with the exception of hydrogen, that are present at concentration levels above a fraction of one percent. The instrument was designed for the failed Russian Mars-96 mission. The alpha particle and proton detectors were provided by the Chemistry Department of the Max Planck Institute and the X-ray detector was developed by the University of Chicago.
During each measurement, the front surface of the instrument had to be in contact with the sample. For this to be possible, the APXS was mounted on a robotic arm called the Alpha-Proton-X-ray Spectrometer Deployment Mechanism (ADM). The ADM was an anthropomorphic actuator that was equipped with a wrist that was capable of rotations of ±25°. The dual mobility of the rover and the ADM increased the potential of the instrument—the first of its kind to reach Mars.
Wheel Abrasion Experiment
The Wheel Abrasion Experiment (WAE) was designed to measure the abrasive action of Martian soil on thin layers of aluminum, nickel, and platinum, and thus deduce the grain size of the soil at the landing site. For this purpose, 15 layers—five of each metal—were mounted on one of the two central wheels, with thicknesses between 200 and 1000 ångström, and electrically isolated from the rest of the rover. By directing the wheel appropriately, sunlight was reflected towards a nearby photovoltaic sensor. The collected signal was analyzed to determine the desired information. For the abrasive action to be significant within the mission's duration, the rover was scheduled to stop at frequent intervals and, with the other five wheels braked, force the WAE wheel to rotate, causing increased wear. Following the WAE experiment on Mars, attempts were made to reproduce the effects observed in the laboratory.
The interpretation of the results proposed by Ferguson et al. suggests the soil at the landing site is made up of fine-grained dust of limited hardness with a grain size of less than 40 μm. The instrument was developed, built and directed by the Lewis' Photovoltaics and Space Environments Branch of the Glenn Research Center.
Materials Adherence Experiment
The Materials Adherence Experiment (MAE) was designed by engineers at the Glenn Research Center to measure the daily accumulation of dust on the back of the rover and the reduction in the energy-conversion capacity of the photovoltaic panels. It consisted of two sensors.
The first was composed of a photovoltaic cell covered by transparent glass that could be removed on command. Near local midday, measurements of the cell's energy yield were made, both with the glass in place and removed. From the comparison, it was possible to deduce the reduction in cell yield caused by the dust. Results from the first cell were compared with those of a second photovoltaic cell that was exposed to the Martian environment. The second sensor used a quartz crystal microbalance (QCM) to measure the weight-per-surface unit of the dust deposited on the sensor.
During the mission, a daily reduction of 0.28% in the energy efficiency of the photovoltaic cells was recorded. This was independent of whether the rover was stationary or in motion, which suggests the dust settling on the rover was suspended in the atmosphere rather than raised by the rover's movements.
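Compounding that figure over the mission shows why the effect mattered. The only inputs in the Python sketch below are the reported 0.28% per sol and the rover's 83 sols of activity.

```python
# Compounding the measured dust effect: a 0.28% drop in solar-cell output
# per sol, applied over the rover's 83 sols of operation.

daily_loss = 0.0028
sols = 83
remaining = (1 - daily_loss) ** sols
print(f"output after {sols} sols: {remaining:.1%} of the clean-panel value")
# -> roughly 79% (about a 21% cumulative loss)
```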
Control system
Since it was established that transmissions relating to driving Sojourner would occur only once every sol, the rover was equipped with a computerized control system to guide its movements independently.
A series of commands had been programmed, providing an appropriate strategy for overcoming obstacles. One of the primary commands was "Go to Waypoint". A local reference system, with the lander as its origin, was envisaged. Coordinate directions were fixed at the moment of landing, taking the direction of north as a reference. During the communication session (once per sol), the rover received from Earth a command string containing the coordinates of the arrival point, which it would then have to reach autonomously.
The algorithm implemented on the on-board computer attempted, as a first option, to reach the target point in a straight line from the starting position. Using a system of camera lenses and laser emitters, the rover could identify obstacles along this path. The on-board computer was programmed to search for the signal produced by the lasers in the cameras' images. In the case of a flat surface with no obstacles, the position of this signal was unchanged with respect to the reference signal stored in the computer; any deviation from this position made it possible to identify the type of obstacle. The photographic scan was performed after each advance equal to the diameter of the wheels, and before each turn.
In the confirmed presence of an obstacle, the computer commanded the execution of a first strategy to avoid it. The rover, still by itself, rotated until the obstacle was no longer in sight. Then, after having advanced for half of its length, it recalculated a new straight path that would lead it to the point of arrival. At the end of the procedure, the computer had no memory of the existence of the obstacle. The steering angle of the wheels was controlled through potentiometers.
In particularly uneven terrain, the procedure described above would have been prevented by the presence of a large number of obstacles. There was, therefore, a second procedure known as "thread the needle", which consisted of proceeding between two obstacles along the bisector between them, providing they were sufficiently spaced to allow the rover to pass. If the rover had encountered a clearing before reaching a predetermined distance, it would have had to rotate on itself to calculate a new straight trajectory to reach the target. Conversely, the rover would have had to go back and try a different trajectory. As a last resort, contact sensors were mounted on the front and rear surfaces of the rover.
To facilitate the rover's direction, an appropriate on-the-spot rotation could be commanded from Earth. The command was "Turn" and was performed using a gyroscope. Three accelerometers measured the acceleration of gravity along three perpendicular directions, making it possible to measure the surface's slope. The rover was programmed to deviate from routes that would require a slope greater than 30°, though it was designed not to tip over when tilted at 45°. The distance traveled was determined by the number of revolutions of the wheels.
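The "Go to Waypoint" behavior described over the last few paragraphs can be condensed into a small Python sketch. The grid geometry, step sizes and clearance radius below are invented; only the overall strategy (drive straight, scan after each wheel-diameter advance, rotate until the obstacle clears, advance about half a body length, re-aim) follows the text.

```python
# Toy version of the "Go to Waypoint" drive with obstacle avoidance.
# Geometry and thresholds are invented; the flight logic was more involved.
import math

def go_to_waypoint(pos, goal, obstacles, step=0.13, max_steps=400):
    def blocked(p):
        return any(math.dist(p, ob) < 0.25 for ob in obstacles)

    for _ in range(max_steps):
        if math.dist(pos, goal) < step:
            return pos                                # close enough: arrived
        # First option: head straight for the goal, scanning each advance.
        heading = math.atan2(goal[1] - pos[1], goal[0] - pos[0])
        ahead = (pos[0] + step * math.cos(heading),
                 pos[1] + step * math.sin(heading))
        if not blocked(ahead):
            pos = ahead
            continue
        # Obstacle seen: rotate in place until it leaves view (at most one
        # full turn), then advance ~half a body length and re-aim next loop.
        for _ in range(18):
            heading += math.radians(20)
            ahead = (pos[0] + step * math.cos(heading),
                     pos[1] + step * math.sin(heading))
            if not blocked(ahead):
                break
        pos = (pos[0] + 0.3 * math.cos(heading),
               pos[1] + 0.3 * math.sin(heading))
    return pos

print(go_to_waypoint((0.0, 0.0), (2.0, 0.0), obstacles=[(1.0, 0.1)]))
```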
Marie Curie
Marie Curie is a flight spare for Sojourner. During the operational phase on Mars, the sequences of the most complex commands to be sent to Sojourner were verified on this identical rover at JPL. NASA planned to send Marie Curie on the canceled Mars Surveyor 2001 mission; it was also suggested to send it in 2003, with Marie Curie to be deployed "using a robotic-arm attached to the lander". Instead, the Mars Exploration Rover program was launched in 2003. In 2015, JPL transferred Marie Curie to the Smithsonian National Air and Space Museum (NASM).
According to space historian and NASM curator Matt Shindell:
Mars Yard
To test robotic prototypes and applications under natural lighting conditions, JPL built a simulated Martian landscape called "Mars Yard". The test area measured and had a variety of terrain arrangements to support multiple test conditions. The soil was a combination of beach sand, decomposed granite, brick dust, and volcanic cinders. The rocks were several types of basalts, including fine-grained and vesicular in both red and black. Rock-size distributions were selected to match those seen on Mars and the soil characteristics matched those found in some Martian regions. Large rocks were not Mars-like in composition, being less dense and easier to move for testing. Other obstacles such as bricks and trenches were often used for specialized testing. Mars Yard was expanded in 1998 and then in 2007 to support other Mars rover missions.
Naming
The name "Sojourner" was chosen for the rover through a competition held in March 1994 by the Planetary Society in collaboration with JPL; it ran for one year and was open to students of 18 years and below from any country. Participants were invited to choose a "heroine to whom to dedicate the rover" and to write an essay about her accomplishments, and how these accomplishments could be applied to the Martian environment. The initiative was publicized in the United States through the January 1995 edition of the magazine Science and Children published by the National Science Teachers Association.
Some 3,500 papers were received from countries including Canada, India, Israel, Japan, Mexico, Poland, Russia, and the United States, of which 1,700 were from students aged between 5 and 18. The winners were chosen on the basis of the quality and creativity of the work, the appropriateness of the name for a Martian rover, and the competitor's knowledge of the heroine and the probe mission. The winning paper was written by 12-year-old Valerie Ambroise of Bridgeport, Connecticut, who suggested dedicating the rover to Sojourner Truth, a Civil War era African-American abolitionist and women's rights advocate. The second place went to Deepti Rohatgi, 18, of Rockville, Maryland, who proposed Marie Curie, a Nobel Prize-winning Franco-Polish chemist. Third place went to Adam Sheedy, 16, of Round Rock, Texas, who chose Judith Resnik, a United States astronaut and Space Shuttle crew member who died in the 1986 Challenger disaster. The rover was also known as Microrover Flight Experiment abbreviated MFEX.
Operations
Sojourner was launched on December 4, 1996, aboard a Delta II booster, and reached Mars on July 4, 1997. It operated in the Ares Vallis channel in the Chryse Planitia of the Oxia Palus quadrangle, from July 5 to September 27, 1997, when the lander cut off communications with Earth. In its 83 sols of activity—twelve times the expected duration for the rover—Sojourner traveled , always remaining within of the lander. It collected 550 images, performed 16 analyses through the APXS—nine of rocks and the remainder of the soil—and performed 11 Wheel Abrasion Experiments and 14 experiments on soil mechanics in cooperation with the lander.
Landing site
The landing site for the rover was chosen in April 1994 at the Lunar and Planetary Institute in Houston. The landing site is an ancient flood plain called Ares Vallis, which is located in Mars' northern hemisphere and is one of the rockiest parts of Mars. It was chosen because it was thought to be a relatively safe surface on which to land and one that contains a wide variety of rocks that were deposited during a flood. This area was well-known, having been photographed by the Viking mission. After a successful landing, the lander was officially named "The Carl Sagan Memorial Station" in honor of the astronomer.
Deployment
Mars Pathfinder landed on July 4, 1997. The petals were deployed 87 minutes later with the Sojourner rover and the solar panels attached on the inside. The rover exited the lander on the next day.
Rock analysis
The rocks at the landing site were given names of cartoon characters. Among them were Pop Tart, Ender, mini-Matterhorn, Wedge, Baker's Bench, Scooby Doo, Yogi, Barnacle Bill, Pooh Bear, Piglet, the Lamb, the Shark, Ginger, Souffle, Casper, Moe, and Stimpy. A dune was called Mermaid Dune, and a pair of hills were named Twin Peaks.
The first analysis was carried out on the rock called "Barnacle Bill" during the third sol. The rock's composition was determined by the APXS spectrometer, which took 10 hours for a complete scan. The rock "Yogi" was analyzed on the 10th sol. It has been suggested that the conformation of the terrain close to the rock, which appears visibly lower than the surrounding surface, derived from the evaporation of floodwater.
Both rocks turned out to be andesites; this finding surprised some scholars because andesites are formed by geological processes that require an interaction between materials of the crust and the mantle. A lack of information on the surrounding highlands made it impossible to grasp all of the implications of the discovery.
The rover was then directed to the next target, and on the 14th sol it analyzed the rock named "Scooby-Doo" and imaged the "Casper" rock. Both were deemed to be consolidated deposits. The rock called "Moe" showed evidence of wind erosion. Most of the rocks analyzed showed a high silicon content. In a region nicknamed "Rock Garden", the rover encountered crescent-moon-shaped dunes that are similar to dunes on Earth.
The landing site is rich in varied rocks, some of which are clearly volcanic in origin, such as "Yogi"; others are conglomerates, the origins of which are the subject of several proposals. In one hypothesis, they formed in the presence of water in Mars' distant past. In support of this, high silicon contents would be detected. This could also be a consequence of sedimentation processes; rounded rocks of various sizes were discovered and the valley's shapes are compatible with a river channel environment. Smaller, more rounded stones may also have been generated during a surface impact event.
When the mission's final results were described in a series of articles in the journal Science (December 5, 1997), it was believed the rock Yogi had a coating of dust but was similar to the rock Barnacle Bill. Calculations suggested both rocks mostly contain orthopyroxene (magnesium-iron silicate), feldspars (aluminum silicates of potassium, sodium, and calcium), and quartz (silicon dioxide) with smaller amounts of magnetite, ilmenite, iron sulfide, and calcium phosphate.
Sojourner in popular culture
In the 2000 film Red Planet, the crew of the first mission to Mars survives the crash-landing of their entry vehicle. Their communications equipment is destroyed so they cannot contact their recovery vehicle in orbit. To re-establish contact before being presumed dead and left behind on Mars, the crew goes to the site of the Pathfinder rover, from which they salvage parts to make a basic radio.
In the opening titles of the 2001 Star Trek: Enterprise, Sojourner is shown lying dormant and covered in dust. Another scene shows a plaque marking the landing site of the rover on board the Carl Sagan Memorial Station. In the episode "Terra Prime", Sojourner is briefly seen on the surface of Mars as a monument.
In Andy Weir's 2011 novel The Martian, and the 2015 film based on it, the protagonist Mark Watney is stranded on Mars. Mark recovers the Pathfinder lander and uses it to contact Earth. For the movie, the lander and rover were re-created with the help of JPL. Production designer Arthur Max, who worked on the film, said they "have a fully practical working Pathfinder, which we use throughout the movie." In the movie, Mark Watney is later seen in his Mars outpost, the Ares III Hab, with the Sojourner roving around.
Awards and honors
On October 21, 1997, at the Geological Society of America's annual meeting in Salt Lake City, Utah, Sojourner was awarded honorary membership in the Society's Planetary Geology Division.
In November 1997, to commemorate the achievements of Mars Pathfinder program, a $3 Priority Mail stamp was issued. Fifteen million stamps were printed. The stamp is based on the first image received from the Mars Pathfinder after its landing on the Martian surface July 4, 1997, which shows the Sojourner rover resting on the Pathfinder with a panoramic view of the Ares Vallis region in the background. The stamp's reverse bears text about the Pathfinder mission.
Sojourner was included in the Robot Hall of Fame by Carnegie Mellon University.
Perseverance rover, which landed in 2021, has a simplified representation of all previous NASA Martian rovers, starting with Sojourner, on one of its external plates.
Key personnel
The development of the rover and its instruments as well as its guidance during operations on Mars were done by a group of engineers from NASA, collectively referred to as "The Rover Team". The key personnel were:
Microrover Flight Experiment Manager: Jacob Matijevic, JPL
Chief Engineer, Microrover Flight Experiment: William Layman, JPL
Assembly and Lead Test Engineer, Microrover Flight Experiment: Allen Sirota, JPL
Microrover Mission Operations Engineer: Andrew Mishkin, JPL
IMP Principal investigator: Peter H. Smith, University of Arizona
ASI/MET Facility Instrument Science Team Leader: John T. Schofield, JPL
ASI/MET Chief Engineer: Clayton LaBaw, JPL
APXS Principal investigator: Rudolf Rieder, Max-Planck Institute, Department of Chemistry, Mainz, Germany
Wheel Abrasion Experiment, Principal investigators: D. Ferguson and J. Kolecki, NASA Lewis Research Center
Material Adherence Experiment, Principal investigators: G. Landis and P. Jenkins, NASA Lewis Research Center
Manager of the Mars Exploration Program at JPL: Donna Shirley
Gallery
Comparison to later Mars rovers
Sojourner's location in context
| Technology | Rovers | null |
48609118 | https://en.wikipedia.org/wiki/Circumstellar%20disc | Circumstellar disc | A circumstellar disc (or circumstellar disk) is a torus, pancake or ring-shaped accretion disk of matter composed of gas, dust, planetesimals, asteroids, or collision fragments in orbit around a star. Around the youngest stars, they are the reservoirs of material out of which planets may form. Around mature stars, they indicate that planetesimal formation has taken place, and around white dwarfs, they indicate that planetary material survived the whole of stellar evolution. Such a disc can manifest itself in various ways.
Young star
According to the widely accepted model of star formation, sometimes referred to as the nebular hypothesis, a young star (protostar) is formed by the gravitational collapse of a pocket of matter within a giant molecular cloud. The infalling material possesses some amount of angular momentum, which results in the formation of a gaseous protoplanetary disc around the young, rotating star. This is a rotating circumstellar disc of dense gas and dust that continues to feed the central star. It may contain a few percent of the mass of the central star, mainly in the form of gas which is itself mainly hydrogen. The main accretion phase lasts a few million years, with accretion rates typically between 10⁻⁷ and 10⁻⁹ solar masses per year (rates for typical systems are presented in Hartmann et al.).
The disc gradually cools in what is known as the T Tauri star stage. Within this disc, small dust grains made of rocks and ices can form, and these can coagulate into planetesimals. If the disc is sufficiently massive, runaway accretion begins, resulting in the appearance of planetary embryos. The formation of planetary systems is thought to be a natural result of star formation. A sun-like star usually takes around 100 million years to form.
Around the Solar System
The asteroid belt is a reservoir of small bodies in the Solar System located between the orbits of Mars and Jupiter. It is a source of interplanetary dust.
Edgeworth-Kuiper belt, beyond the orbit of Neptune
Scattered disc, beyond the orbit of Neptune
Hills cloud; only the inner Oort cloud has a toroid-like shape. The outer Oort cloud is more spherical in shape.
Binary system
The infall of gas onto a binary system allows the formation of circumstellar and circumbinary discs. The formation of such a disc will occur for any binary system in which infalling gas contains some degree of angular momentum. A general progression of disc formation is observed with increasing levels of angular momentum:
Circumprimary disc is one which orbits the primary (i.e. more massive) star of the binary system. This type of disc will form through accretion if any angular momentum is present in the infalling gas.
Circumsecondary disc is one which orbits around the secondary (i.e. less massive) star of the binary star system. This type of disc will only form when a high enough level of angular momentum is present within the infalling gas. The amount of angular momentum required is dependent on the secondary-to-primary mass ratio. A circumsecondary disk is sometimes seen transiting in front of the primary.
Circumbinary disc is one which orbits about both the primary and secondary stars. Such a disc will form at a later time than the circumprimary and circumsecondary discs, with an inner radius much larger than the orbital radius of the binary system. A circumbinary disc may form with an upper mass limit of approximately 0.005 solar masses, at which point the binary system is generally unable to perturb the disc strongly enough for gas to be further accreted onto the circumprimary and circumsecondary discs. An example of a circumbinary disc may be seen around the star system GG Tauri.
Given the formation of a circumbinary disc, the formation of an inner cavity surrounding the binary is inevitable. This cavity is the result of spiral density waves located at Lindblad resonances, specifically the outer Lindblad resonances. The exact resonances which excise the cavity depend on the eccentricity of the binary, but in each case the size of the cavity is proportional to the binary separation.
Accretion Variability
Short-Term Variability
The indicative timescale that governs the short-term evolution of accretion onto binaries within circumbinary disks is the binary's orbital period. Accretion into the inner cavity is not constant, and varies depending on the binary's eccentricity and the behavior of the gas along the innermost region of the cavity. For non-eccentric binaries, accretion variability coincides with the Keplerian orbital period of the inner gas, which develops lumps corresponding to outer Lindblad resonances. This period is approximately five times the binary orbital period. For eccentric binaries, the period of accretion variability is the same as the binary orbital period, because each binary component scoops in matter from the circumbinary disk each time it reaches the apocenter of its orbit.
Long-Term Variability
Eccentric binaries also see accretion variability over secular timescales hundreds of times the binary period. This corresponds to the apsidal precession rate of the inner edge of the cavity, which develops its own eccentricity, along with a significant region of the inner circumbinary disk. This eccentricity may in turn affect the inner cavity accretion as well as dynamics further out in the disk, such as circumbinary planet formation and migration.
Orbital Evolution
It was originally believed that all binaries located within circumbinary disks would evolve towards orbital decay due to the gravitational torque of the circumbinary disk, primarily from material at the innermost edge of the excised cavity. This decay is no longer guaranteed when accretion from the circumbinary disk onto the binary occurs, and accretion can even lead to increased binary separations. The dynamics of orbital evolution depend on the binary's parameters, such as its mass ratio and eccentricity, as well as the thermodynamics of the accreting gas.
Misaligned Disks
Once a circumstellar disk has formed, spiral density waves are created within the circumstellar material via a differential torque due to the binary's gravity. The majority of these discs form axisymmetric to the binary plane, but it is possible for processes such as the Bardeen-Petterson effect, a misaligned dipole magnetic field, and radiation pressure to produce a significant warp or tilt in an initially flat disk.
Strong evidence of tilted disks is seen in the systems Her X-1, SMC X-1, and SS 433 (among others), where a periodic line-of-sight blockage of X-ray emissions is seen on the order of 50–200 days, much slower than the systems' binary orbits of ~1 day. The periodic blockage is believed to result from precession of a circumprimary or circumbinary disk, which normally occurs retrograde to the binary orbit as a result of the same differential torque that creates spiral density waves in an axisymmetric disk.
Evidence of tilted circumbinary disks can be seen through warped geometry within circumstellar disks, precession of protostellar jets, and inclined orbits of circumplanetary objects (as seen in the eclipsing binary TY CrA). For disks orbiting a low secondary-to-primary mass ratio binary, a tilted circumbinary disc will undergo rigid precession with a period on the order of years. For discs around a binary with a mass ratio of one, differential torques will be strong enough to tear the interior of the disc apart into two or more separate, precessing discs.
A study from 2020 using ALMA data showed that circumbinary disks around short-period binaries are often aligned with the orbit of the binary. Binaries with a period longer than one month typically showed a misalignment of the disk with the binary orbit.
Dust
Debris discs consist of planetesimals along with fine dust and small amounts of gas generated through their collisions and evaporation. The original gas and small dust particles have been dispersed or accumulated into planets.
Zodiacal cloud or interplanetary dust is the material in the Solar System created by collisions of asteroids and evaporation of comets, seen by observers on Earth as a band of scattered light along the ecliptic before sunrise or after sunset.
Exozodiacal dust is dust around a star other than the Sun, in a location analogous to that of the zodiacal light in the Solar System.
Stages
Stages in circumstellar discs refer to the structure and the main composition of the disc at different times during its evolution. Stages include the phases when the disc is composed mainly of submicron-sized particles, the evolution of these particles into grains and larger objects, the agglomeration of larger objects into planetesimals, and the growth and orbital evolution of planetesimals into planetary systems, like our Solar System or those of many other stars.
Major stages of evolution of circumstellar discs:
Protoplanetary discs: In this stage large quantities of primordial material (e.g., gas and dust) are present and the discs are massive enough to have potential to be planet-forming.
Transition discs: At this stage, the disc shows significant reduction in the presence of gas and dust and presents properties between protoplanetary and debris discs.
Debris discs: In this stage the circumstellar disc is a tenuous dust disc, containing small amounts of gas or even no gas at all. It is characterized by dust lifetimes shorter than the age of the disc, indicating that the disc is second-generation rather than primordial.
Disc dissipation and evolution
Material dissipation is one of the processes responsible for the evolution of circumstellar discs. Together with information about the mass of the central star, observation of material dissipation at different stages of a circumstellar disc can be used to determine the timescales involved in its evolution. For example, observations of the dissipation process in transition discs (discs with large inner holes) estimate the average age of a circumstellar disc to be approximately 10 Myr.
The dissipation process and its duration at each stage are not well understood. Several mechanisms, with different predictions for discs' observed properties, have been proposed to explain dispersion in circumstellar discs. Mechanisms like decreasing dust opacity due to grain growth, photoevaporation of material by X-ray or UV photons from the central star (stellar wind), and the dynamical influence of a giant planet forming within the disc are some of the processes that have been proposed to explain dissipation.
Dissipation is a process that occurs continuously in circumstellar discs throughout the lifetime of the central star and, at the same time and for the same stage, is present in different parts of the disc. Dissipation can be divided into inner disc dissipation, mid-disc dissipation, and outer disc dissipation, depending on the part of the disc considered.
Inner disc dissipation occurs at the inner part of the disc (< 0.05 – 0.1 AU). Since it is closest to the star, this region is also the hottest, thus material present there typically emits radiation in the near-infrared region of the electromagnetic spectrum. Study of the radiation emitted by the very hot dust present in that part of the disc indicates that there is an empirical connection between accretion from a disc onto the star and ejections in an outflow.
Mid-disc dissipation occurs in the mid-disc region (1–5 AU) and is characterized by the presence of much cooler material than in the inner part of the disc. Consequently, radiation emitted from this region has a longer wavelength, in the mid-infrared region, which makes it very difficult to detect and to predict the timescale of this region's dissipation. Studies made to determine the dissipation timescale in this region provide a wide range of values, predicting timescales from less than 10 up to 100 Myr.
Outer disc dissipation occurs in regions between 50–100 AU, where temperatures are much lower and the emitted radiation wavelength increases to the millimeter region of the electromagnetic spectrum. Mean dust masses for this region have been reported to be ~10−5 solar masses. Studies of older debris discs (107–109 yr) suggest dust masses as low as 10−8 solar masses, implying that diffusion in outer discs occurs on a very long timescale.
As mentioned, circumstellar discs are not equilibrium objects, but instead are constantly evolving. The evolution of the surface density of the disc, which is the amount of mass per unit area (the volume density at a particular location in the disc integrated over the vertical structure), is given by the standard viscous evolution equation:

$$\frac{\partial \Sigma}{\partial t} = \frac{3}{r}\frac{\partial}{\partial r}\left[ r^{1/2} \frac{\partial}{\partial r}\left( \nu \Sigma r^{1/2} \right) \right]$$

where $r$ is the radial location in the disc, $\Sigma$ is the surface density, and $\nu$ is the viscosity at location $r$. This equation assumes axisymmetry in the disc, but is compatible with any vertical disc structure.
Viscosity in the disc, whether molecular, turbulent or other, transports angular momentum outwards in the disc and most of the mass inwards, eventually accreting onto the central object. The mass accretion onto the star in terms of the disc viscosity is expressed:

$$\dot{M} = 3\pi\nu\Sigma \left[ 1 - \sqrt{\frac{r_{\rm in}}{r}} \right]^{-1}$$

where $r_{\rm in}$ is the inner radius.
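To make the relation concrete, here is a minimal sketch in Python (the function name is illustrative, and the inputs are assumed to be in mutually consistent units, e.g. CGS):

```python
import math

def mdot_steady(r, nu, sigma, r_in):
    """Steady-state accretion rate implied by viscosity nu and surface
    density sigma at radius r, for a disc with inner radius r_in.
    Valid for r > r_in; all quantities in consistent units."""
    return 3.0 * math.pi * nu * sigma / (1.0 - math.sqrt(r_in / r))
```

Far from the inner edge (r much larger than r_in) the bracketed factor approaches one, recovering the familiar steady-disc result that the accretion rate is roughly 3πνΣ.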
Direct imaging
Protoplanetary disks and debris disks can be imaged with different methods. If the disk is seen edge-on, the disk can sometimes block the light of the star and the disk can be directly observed without a coronagraph or other advanced techniques (e.g. Gomez's Hamburger or Flying Saucer Nebula). Other edge-on disks (e.g. Beta Pictoris or AU Microscopii) and face-on disks (e.g. IM Lupi or AB Aurigae) require a coronagraph, adaptive optics or differential images to take an image of the disk with a telescope. These optical and infrared observations, for example with SPHERE, usually take an image of the star light being scattered on the surface of the disk and trace small micron-sized dust particles. Radio arrays like ALMA on the other hand can map larger millimeter-sized dust grains found in the mid-plane of the disk. Radio arrays like ALMA can also detect narrow emission from the gas of the disk. This can reveal the velocity of the gas within and around the disk. In some cases an edge-on protoplanetary disk (e.g. CK 3 or ASR 41) can cast a shadow onto the surrounding dusty material. This cast shadow works like a shadow play, and the projection of the disk is much larger than the true size of the disk.
| Physical sciences | Stellar astronomy | Astronomy |
53830256 | https://en.wikipedia.org/wiki/Methane%20emissions | Methane emissions | Increasing methane emissions are a major contributor to the rising concentration of greenhouse gases in Earth's atmosphere, and are responsible for up to one-third of near-term global heating. During 2019, about 60% (360 million tons) of methane released globally was from human activities, while natural sources contributed about 40% (230 million tons). Reducing methane emissions by capturing and utilizing the gas can produce simultaneous environmental and economic benefits.
Since the Industrial Revolution, concentrations of methane in the atmosphere have more than doubled, and about 20 percent of the warming the planet has experienced can be attributed to the gas. About one-third (33%) of anthropogenic emissions are from gas release during the extraction and delivery of fossil fuels; mostly due to gas venting and gas leaks from both active fossil fuel infrastructure and orphan wells. Russia is the world's top methane emitter from oil and gas.
Animal agriculture is a similarly large source (30%), primarily because of enteric fermentation by ruminant livestock such as cattle and sheep. According to the Global Methane Assessment published in 2021, methane emissions from livestock (including cattle) are the largest sources of agricultural emissions worldwide. A single cow can produce up to 99 kg of methane gas per year, and ruminant livestock can produce 250 to 500 L of methane per day.
Human consumer waste flows, especially those passing through landfills and wastewater treatment, have grown to become a third major category (18%). Plant agriculture, including both food and biomass production, constitutes a fourth group (15%), with rice production being the largest single contributor.
The world's wetlands contribute about three-quarters (75%) of the enduring natural sources of methane. Seepages from near-surface hydrocarbon and clathrate hydrate deposits, volcanic releases, wildfires, and termite emissions account for much of the remainder. Contributions from the surviving wild populations of ruminant mammals are vastly overwhelmed by those of cattle, humans, and other livestock animals.
The Economist recommended setting methane emissions targets, as a reduction in methane emissions would allow more time to tackle the more challenging problem of carbon emissions.
Atmospheric concentration and warming influence
The atmospheric methane (CH4) concentration is increasing and exceeded 1860 parts per billion in 2019, equal to two-and-a-half times the pre-industrial level. The methane itself causes direct radiative forcing that is second only to that of carbon dioxide (CO2). Due to interactions with oxygen compounds stimulated by sunlight, CH4 can also increase the atmospheric presence of shorter-lived ozone and water vapour, themselves potent warming gases: atmospheric researchers call this amplification of methane's near-term warming influence indirect radiative forcing. When such interactions occur, longer-lived and less-potent CO2 is also produced. Including both the direct and indirect forcings, the increase in atmospheric methane is responsible for about one-third of near-term global heating.
Though methane causes far more heat to be trapped than the same mass of carbon dioxide, less than half of the emitted CH4 remains in the atmosphere after a decade. Carbon dioxide, on average, warms for much longer, assuming no change in rates of carbon sequestration. The global warming potential (GWP) is a way of comparing the warming due to other gases to that from carbon dioxide over a given time period. Methane's GWP20 of 85 means that a ton of CH4 emitted into the atmosphere creates approximately 85 times the atmospheric warming of a ton of CO2 over a period of 20 years. On a 100-year timescale, methane's GWP100 is in the range of 28–34.
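To make the GWP arithmetic concrete, here is a minimal sketch in Python (the dictionary simply restates the figures above, with GWP100 taken at the low end of the 28–34 range; the function name is illustrative):

```python
# Methane GWP over 20- and 100-year horizons, per the figures cited above.
GWP_CH4 = {20: 85, 100: 28}

def co2_equivalent(mass_ch4_tonnes: float, horizon_years: int) -> float:
    """Convert a mass of methane to tonnes of CO2-equivalent over a horizon."""
    return mass_ch4_tonnes * GWP_CH4[horizon_years]

# Example: 10 t of CH4 counts as ~850 t CO2e over 20 years,
# but only ~280 t CO2e over 100 years.
```

The same emission thus looks roughly three times more important on a 20-year view than on a 100-year view, which is why the choice of time horizon matters in methane policy.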
Methane emissions are important as reducing them can buy time to tackle carbon emissions.
Overview of emission sources
Biogenic methane is actively produced by microorganisms in a process called methanogenesis. Under certain conditions, the process mix responsible for a sample of methane may be deduced from the ratio of the isotopes of carbon, and through analysis methods similar to carbon dating.
Anthropogenic
Emission volumes from some sources remain more uncertain than others, due in part to localized emission spikes not captured by the limited global measurement capability. The time required for a methane emission to become well-mixed throughout Earth's troposphere is about 1–2 years.
Satellite data indicate that over 80% of the growth in methane emissions during 2010–2019 came from tropical terrestrial sources.
There is accumulating research and data showing that methane emissions from the oil and gas industry – that is, from fossil fuel extraction, distribution and use – are much larger than previously thought.
Natural
Natural sources have always been a part of the methane cycle. Wetland emissions have been declining due to draining for agricultural and building areas.
Methanogenesis
Most ecological emissions of methane relate directly to methanogens generating methane in warm, moist soils as well as in the digestive tracts of certain animals. Methanogens are methane-producing microorganisms. In order to produce energy, they use an anaerobic process called methanogenesis. This process is used in lieu of aerobic (oxygen-using) processes because methanogens are unable to metabolise in the presence of even small concentrations of oxygen. When acetate is broken down in methanogenesis, the result is the release of methane into the surrounding environment.
Methanogenesis, the scientific term for methane production, occurs primarily in anaerobic conditions because of the lack of availability of other oxidants. In these conditions, microscopic organisms called archaea use acetate and hydrogen to break down essential resources in a process called fermentation.
Acetoclastic methanogenesis – certain archaea cleave acetate produced during anaerobic fermentation to yield methane and carbon dioxide.
H3C-COOH → CH4 + CO2
Hydrogenotrophic methanogenesis – archaea oxidize hydrogen with carbon dioxide to yield methane and water.
4H2 + CO2 → CH4 + 2H2O
While acetoclastic methanogenesis and hydrogenotrophic methanogenesis are the two major source reactions for atmospheric methane, other minor biological methane source reactions also occur. For example, it has been discovered that leaf surface wax exposed to UV radiation in the presence of oxygen is an aerobic source of methane.
Natural methane cycles
Emissions of methane into the atmosphere are directly related to temperature and moisture. Thus, the natural environmental changes that occur during seasonal change act as a major control of methane emission. Additionally, even changes in temperature during the day can affect the amount of methane that is produced and consumed.
Its concentration is higher in the Northern Hemisphere since most sources (both natural and human) are located on land and the Northern Hemisphere has more land mass. The concentrations vary seasonally, with, for example, a minimum in the northern tropics during April−May mainly due to removal by the hydroxyl radical.
For example, plants that produce methane can emit as much as two to four times more methane during the day than during the night. This is directly related to the fact that plants tend to rely on solar energy to enact chemical processes.
Additionally, methane emissions are affected by the level of water sources. Seasonal flooding during the spring and summer naturally increases the amount of methane released into the air.
Wetlands
In wetlands, where the rate of methane production is high, plants help methane travel into the atmosphere, acting like inverted lightning rods as they direct the gas up through the soil and into the air. They are also suspected to produce methane themselves, but because the plants would have to produce methane under aerobic conditions, the process itself remains unidentified, according to a 2014 Biogeochemistry article.
A 1994 article on methane emissions from northern wetlands said that since the 1800s, atmospheric methane concentrations increased annually at a rate of about 0.9%.
Human-caused methane emissions
The AR6 of the IPCC said, "It is unequivocal that the increases in atmospheric carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) since the pre-industrial period are overwhelmingly caused by human activities." Atmospheric methane accounted for 20% of the total radiative forcing (RF) from all of the long-lived and globally mixed greenhouse gases.
According to the 2021 assessment by the Climate and Clean Air Coalition (CCAC) and the United Nations Environment Programme (UNEP), over 50% of global methane emissions are caused by human activities, split among fossil fuels (35%), waste (20%), and agriculture (40%). Within these, the oil and gas industry accounts for 23% and coal mining for 12%; landfills and wastewater contribute 20%; manure and enteric fermentation represent 32%, and rice cultivation 8%.
The most clearly identified rise in atmospheric methane as a result of human activity occurred in the 1700s during the industrial revolution. During the 20th century, mainly because of the use of fossil fuels, the concentration of methane in the atmosphere increased, then stabilized briefly in the 1990s, only to begin increasing again in 2007. After 2014, the increase accelerated, and by 2017 the concentration reached 1,850 parts per billion (ppb).
Increases in methane levels due to modern human activities arise from a number of specific sources: industrial activity; extraction of oil and natural gas from underground reserves; transportation of oil and natural gas via pipeline; and thawing permafrost in Arctic regions, a consequence of global warming caused by the human use of fossil fuels.
The primary component of natural gas is methane, which is emitted to the atmosphere in every stage of natural gas "production, processing, storage, transmission, and distribution".
Emissions due to oil and gas extraction
A 2005 Wuppertal Institute for Climate, Environment and Energy article identified pipelines that transport natural gas as a source of methane emissions. The article cited the example of the Trans-Siberian natural gas pipeline system, which carries gas with a methane concentration of 97% from the Yamburg and Urengoy gas fields in Russia to western and Central Europe. In accordance with the IPCC and other natural gas emissions control groups, measurements had to be taken throughout the pipeline to measure methane emissions from technological discharges and leaks at the pipeline fittings and vents. Although the majority of the emissions from the pipeline were carbon dioxide, a significant amount of methane was also being consistently released as a result of leaks and breakdowns. In 2001, natural gas emissions from the pipeline and natural gas transportation system accounted for 1% of the natural gas produced. Between 2001 and 2005, this was reduced to 0.7%; even the 2001 value was significantly less than that of 1996.
A 2012 Climatic Change article and 2014 publication by a team of scientists led by Robert W. Howarth said that there was strong evidence that "shale gas has a larger GHG footprint than conventional gas, considered over any time scale. The GHG footprint of shale gas also exceeds that of oil or coal when considered at decadal time scales." Howarth called for policy changes to regulate methane emissions resulting from hydraulic fracturing and shale gas development.
A 2013 study by a team of researchers led by Scot M. Miller said that U.S. greenhouse gas reduction policies in 2013 were based on what appeared to be significant underestimates of anthropogenic methane emissions. The article said that greenhouse gas emissions from agriculture and from fossil fuel extraction and processing (oil and/or natural gas) were "likely a factor of two or greater than cited in existing studies." By 2001, following a detailed study of anthropogenic sources on climate change, IPCC researchers found that there was "stronger evidence that most of the observed warming over the last 50 years [was] attributable to human activities." Since the Industrial Revolution, humans have had a major impact on concentrations of atmospheric methane, increasing atmospheric concentrations roughly 250%. According to the 2021 IPCC report, 30–50% of the current rise in temperatures is caused by emissions of methane, and reducing methane is a fast way of mitigating climate change. An alliance of 107 countries, including Brazil, the EU and the US, have joined the pact known as the Global Methane Pledge, committing to a collective goal of reducing global methane emissions by at least 30% from 2020 levels by 2030.
The European Union adopted methane regulations in 2024. The law requires oil and gas developers to monitor, measure, and report methane emissions. Producers must stop flaring unused natural gas and use satellite imagery to detect leaks.
Animals and livestock
Ruminant animals, particularly cows and sheep, contain bacteria in their gastrointestinal systems that help to break down plant material. Some of these microorganisms use the acetate from the plant material to produce methane, and because these bacteria live in the stomachs and intestines of ruminants, whenever the animal burps or defecates it emits methane as well. Based upon a 2012 study in the Snowy Mountains region of Australia, the amount of methane emitted by one cow is equivalent to the amount of methane that around 3.4 hectares of methanotrophic bacteria can consume. In that study, conducted on a 1,000-hectare farm, methanotrophic bacteria oxidized 8 tonnes of methane per year, while 200 cows on the same farm emitted 5.4 tonnes of methane per year. Hence, one cow emitted 27 kg of methane per year while the bacteria oxidized 8 kg per hectare, so the emissions of one cow were oxidized by 27/8 ≈ 3.4 hectares.
Termites also contain methanogenic microorganisms in their gut. Some of these microorganisms are so specialized that they live nowhere else in the world except in the third gut of termites. These microorganisms also break down biotic components to produce ethanol, as well as methane as a byproduct. However, unlike ruminants, which lose 20% of the energy from the plants they eat, termites lose only 2% of their energy in the process. Thus, comparatively, termites do not have to eat as much food as ruminants to obtain the same amount of energy, and give off proportionally less methane.
In 2001, NASA researchers confirmed the vital role of enteric fermentation in livestock on global warming. A 2006 UN FAO report stated that livestock generate more greenhouse gases, as measured in CO2 equivalents, than the entire transportation sector. Livestock accounts for 9% of anthropogenic CO2, 65% of anthropogenic nitrous oxide, and 37% of anthropogenic methane. Since then, animal science and biotechnology researchers have focused research on methanogens in the rumen of livestock and mitigation of methane emissions.
Nicholas Stern, the author of the 2006 Stern Review on climate change, has stated that "people will need to turn vegetarian if the world is to conquer climate change". In 2003, the president of the National Academy of Sciences, Ralph Cicerone, an atmospheric scientist, raised concerns that the increase in the number of methane-producing dairy and beef cattle was a "serious topic", as methane was the "second-most-important greenhouse gas in the atmosphere".
Approximately 5% of the methane is released via flatus, whereas the other 95% is released via eructation (belching). Vaccines are under development to reduce the amount introduced through eructation. The seaweed Asparagopsis, used as a livestock feed additive, has reduced methane emissions by more than 80%.
Waste
Landfills
Due to the large collections of organic matter and the availability of anaerobic conditions, landfills are the third largest source of atmospheric methane in the United States, accounting for roughly 18.2% of methane emissions globally in 2014. When waste is first added to a landfill, oxygen is abundant and the waste undergoes aerobic decomposition, during which very little methane is produced. However, generally within a year, oxygen levels are depleted and anaerobic conditions dominate the landfill, allowing methanogens to take over the decomposition process. These methanogens emit methane into the atmosphere, and even after the landfill is closed, the mass of decaying matter allows the methanogens to continue producing methane for years.
Waste water treatment
Waste water treatment facilities act to remove organic matter, solids, pathogens, and chemical hazards as a result of human contamination. Methane emission in waste treatment facilities occurs as a result of anaerobic treatments of organic compounds and anaerobic biodegradation of sludge.
Release of stored methane from the Arctic
Others
Aquatic ecosystems
Natural and anthropogenic methane emissions from aquatic ecosystems are estimated to contribute about half of total global emissions. Urbanization and eutrophication are expected to lead to increased methane emissions from aquatic ecosystems.
Ecological conversion
Conversion of forests and natural environments into agricultural plots increases the amount of nitrogen in the soil, which inhibits methane oxidation, weakening the ability of the methanotrophic bacteria in the soil to act as sinks. Additionally, by changing the level of the water table, humans can directly affect the soil's ability to act as a source or sink. The relationship between water table levels and methane emission is explained in the wetlands section of natural sources.
Rice agriculture
Rice agriculture is a significant source of methane. With warm weather and water-logged soil, rice paddies act like wetlands, but are generated by humans for the purpose of food production. Due to the swamp-like environment of rice fields, these paddies emitted about 30 million of the roughly 400 million metric tons of anthropogenic methane in 2022.
Biomass burning
Incomplete burning of both living and dead organic matter results in the emission of methane. While natural wildfires can contribute to methane emissions, the vast majority of biomass burning occurs as a result of humans: everything from accidental fires set by civilians, to deliberate burning used to clear land, to the burning of waste biomass.
Oil and natural gas supply chain
Methane is a primary component of natural gas, and thus during the production, processing, storage, transmission, and distribution of natural gas, a significant amount of methane is lost into the atmosphere.
According to the EPA Inventory of U.S Greenhouse Gas Emissions and Sinks: 1990–2015 report, 2015 methane emissions from natural gas and petroleum systems totaled 8.1 Tg per year in the United States. Individually, the EPA estimates that the natural gas system emitted 6.5 Tg per year of methane while petroleum systems emitted 1.6 Tg per year of methane. Methane emissions occur in all sectors of the natural gas industry, from drilling and production, through gathering and processing and transmission, to distribution. These emissions occur through normal operation, routine maintenance, fugitive leaks, system upsets, and venting of equipment. In the oil industry, some underground crude contains natural gas that is entrained in the oil at high reservoir pressures. When oil is removed from the reservoir, associated gas is produced.
However, a review of methane emissions studies reveals that the EPA Inventory of Greenhouse Gas Emissions and Sinks: 1990–2015 report likely significantly underestimated 2015 methane emissions from the oil and natural gas supply chain. The review concluded that in 2015 the oil and natural gas supply chain emitted 13 Tg per year of methane, about 60% more than the EPA report for the same time period. The authors write that the most likely cause for the discrepancy is undersampling by the EPA of so-called "abnormal operating conditions", during which large quantities of methane can be emitted.
Coal mining
In 2014 NASA researchers reported the discovery of a methane cloud floating over the Four Corners region of the south-west United States. The discovery was based on data from the European Space Agency's Scanning Imaging Absorption Spectrometer for Atmospheric Chartography instrument from 2002 to 2012.
The report concluded that "the source is likely from established gas, coal, and coalbed methane mining and processing." The region emitted 590,000 metric tons of methane every year between 2002 and 2012—almost 3.5 times the widely used estimates in the European Union's Emissions Database for Global Atmospheric Research. In 2019, the International Energy Agency (IEA) estimated that the methane emissions leaking from the world's coalmines are warming the global climate at the same rate as the shipping and aviation industries combined.
Methane gas from methane clathrates
At high pressures, such as are found on the bottom of the ocean, methane forms a solid clathrate with water, known as methane hydrate. An unknown, but possibly very large quantity of methane is trapped in this form in ocean sediments. Researchers are investigating possible changes in this process (clathrate gun hypothesis).
However, the 2021 IPCC Sixth Assessment Report found that it was "very unlikely that gas clathrates (mostly methane) in deeper terrestrial permafrost and subsea clathrates will lead to a detectable departure from the emissions trajectory during this century".
Methane slip from gas engines
The use of natural gas and biogas in internal combustion engines for applications such as electricity production, cogeneration, and heavy vehicles or marine vessels (such as LNG carriers using boil-off gas for propulsion) emits a certain percentage of unburned hydrocarbons, of which 85% is methane. The extent to which this may offset or even cancel out the advantages of lower CO2 and particle emissions is described in a 2016 EU issue paper on methane slip from marine engines: "Emissions of unburnt methane (known as the 'methane slip') were around 7 g per kg LNG at higher engine loads, rising to 23–36 g at lower loads. This increase could be due to slow combustion at lower temperatures, which allows small quantities of gas to avoid the combustion process". Road vehicles run at low load more often than marine engines, causing relatively higher methane slip.
Global methane emissions monitoring
The Tropospheric Monitoring Instrument aboard the European Space Agency's Sentinel-5P spacecraft launched in October 2017 provides the most detailed methane emissions monitoring which is publicly available. It has a resolution of about 50 square kilometres.
MethaneSAT is under development by the Environmental Defense Fund in partnership with researchers at Harvard University, to monitor methane emissions with an improved resolution of 1 kilometer. MethaneSAT is designed to monitor 50 major oil and gas facilities, and could also be used for monitoring of landfills and agriculture. It receives funding from Audacious Project (a collaboration of TED and the Gates Foundation), and is projected to launch as soon as 2024.
In 2023, 12 satellites were deployed by GHGSat for monitoring methane emissions.
Uncertainties in methane emissions, including so-called "super-emitter" fossil extractions and unexplained atmospheric fluctuations, highlight the need for improved monitoring at both regional and global scale. Satellites have recently begun to come online with capability to measure methane and other more powerful greenhouse gases with improving resolution.
The Tropomi instrument on Sentinel-5P, launched in 2017 by the European Space Agency, can measure methane, sulphur dioxide, nitrogen dioxide, carbon monoxide, aerosol, and ozone concentrations in Earth's troposphere at resolutions of several kilometers. In 2022, a study using data from the instrument's monitoring of large methane emissions worldwide was published; 1,200 large methane plumes were detected over oil and gas extraction sites. A NASA instrument also identified super-emitters. A 50% increase was observed in large methane emission events detected by satellites in 2023 compared to 2022.
Japan's GOSAT-2 platform launched in 2018 provides similar capability.
The Claire satellite launched in 2016 by the Canadian firm GHGSat uses data from Tropomi to home in on sources of methane emissions as small as 15 m2.
Other satellites are planned that will increase the precision and frequency of methane measurements, as well as provide a greater ability to attribute emissions to terrestrial sources. These include MethaneSAT, expected to be launched in 2022, and CarbonMapper.
Global maps combining satellite data to help identify and monitor major methane emission sources are being built.
The International Methane Emissions Observatory was created by the UN.
Quantifying the global methane budget
In order to mitigate climate change, scientists have been focusing on quantifying the global methane (CH4) budget as the concentration of methane continues to increase—it is now second after carbon dioxide in terms of climate forcing. Further understanding of atmospheric methane is necessary for "assessing realistic pathways" towards climate change mitigation. Various research groups give differing values for methane emissions.
National reduction policies
In 2010, China implemented regulations requiring coal plants to either capture methane emissions or convert the methane into CO2. According to a Nature Communications paper published in January 2019, methane emissions instead increased 50 percent between 2000 and 2015.
In March 2020, Exxon called for stricter methane regulations, which would include detection and repair of methane leaks, minimization of venting and releases of unburned methane, and reporting requirements for companies. However, in August 2020, the U.S. Environmental Protection Agency rescinded a prior tightening of methane emission rules for the U.S. oil and gas industry.
Approaches to reduce emissions
Natural gas industries
About 40% of methane emissions from the fossil fuel industry could be "eliminated at no net cost for firms", according to the International Energy Agency (IEA), by using existing technologies. That 40% represents 9% of all human-caused methane emissions.
To reduce emissions from the natural gas industries, the EPA developed the Natural Gas STAR Program, also known as Gas STAR.
The Coalbed Methane Outreach Program (CMOP) helps and encourages the mining industry to find ways to use or sell methane that would otherwise be released from the coal mine into the atmosphere.
In 2023, the European Union agreed to legislation that will require fossil fuel companies to monitor and report methane leaks and to repair them within a short time period. The law also compels remediation of methane venting and methane flaring. The United States and China stated that they will include methane reduction targets in their next climate plans but have not enacted rules that would compel monitoring, reporting or repair of methane leaks.
Livestock
In order to counteract the amount of methane that ruminants give off, a drug called monensin (marketed as Rumensin) has been developed. This drug is classified as an ionophore, an antibiotic that is naturally produced by a harmless bacterial strain. It not only improves feed efficiency but also reduces the amount of methane gas emitted by the animal and its manure.
In addition to medicine, specific manure management techniques have been developed to counteract emissions from livestock manure. Educational resources have begun to be provided for small farms. Management techniques include daily pickup and storage of manure in a completely closed off storage facility that will prevent runoff from making it into bodies of water. The manure can then be kept in storage until it is either reused for fertilizer or taken away and stored in an offsite compost. Nutrient levels of various animal manures are provided for optimal use as compost for gardens and agriculture.
Crops and soils
In order to reduce effects on methane oxidation in soil, several steps can be taken. Controlling the usage of nitrogen enhancing fertilizer and reducing the amount of nitrogen pollution into the air can both lower inhibition of methane oxidation. Additionally, using drier growing conditions for crops such as rice and selecting strains of crops that produce more food per unit area can reduce the amount of land with ideal conditions for methanogenesis. Careful selection of areas of land conversion (for example, plowing down forests to create agricultural fields) can also reduce the destruction of major areas of methane oxidation.
Landfills
To counteract methane emissions from landfills, on March 12, 1996, the EPA (Environmental Protection Agency) added the "Landfill Rule" to the Clean Air Act. This rule requires large landfills that have ever accepted municipal solid waste, have been in use since November 8, 1987, can hold at least 2.5 million metric tons of waste with a volume greater than 2.5 million cubic meters, and/or have nonmethane organic compound (NMOC) emissions of at least 50 metric tons per year to collect and combust emitted landfill gas. This set of requirements excludes 96% of the landfills in the USA. While the direct result of this is landfills reducing emission of non-methane compounds that form smog, the indirect result is a reduction of methane emissions as well. A sketch of the applicability test follows.
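The rule's applicability criteria can be summarized as a simple predicate. Here is a minimal sketch in Python, under one possible reading of the "and/or" in the criteria above (the function and parameter names are illustrative, and this is not regulatory guidance):

```python
def landfill_rule_applies(accepted_msw: bool,
                          in_use_since_1987_11_08: bool,
                          capacity_tonnes: float,
                          capacity_m3: float,
                          nmoc_tonnes_per_year: float) -> bool:
    """One reading of the Clean Air Act 'Landfill Rule' applicability test:
    the landfill must have accepted municipal solid waste and been in use
    since the cutoff date, and must meet the size threshold and/or the
    NMOC emission threshold."""
    large = capacity_tonnes >= 2.5e6 and capacity_m3 > 2.5e6
    high_nmoc = nmoc_tonnes_per_year >= 50
    return accepted_msw and in_use_since_1987_11_08 and (large or high_nmoc)
```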
In an attempt to absorb the methane that is already being produced by landfills, experiments in which nutrients were added to the soil to allow methanotrophs to thrive have been conducted. These nutrient-supplemented landfills have been shown to act as small-scale methane sinks, allowing the abundance of methanotrophs to sponge methane from the air to use as energy, effectively reducing the landfill's emissions.
| Physical sciences | Climate change | Earth science |
49980572 | https://en.wikipedia.org/wiki/Antarctic%20fishes | Antarctic fishes | Antarctic fish is a common name for a variety of fish that inhabit the Southern Ocean. There are relatively few families in this region, the most species-rich being the Liparidae (snailfishes), followed by Nototheniidae (cod icefishes). The latter is one of eight different families that belong to the suborder Notothenioidei of the order Perciformes. They are also called notothenioids, but this name is also used to describe the other three, non-Antarctic families and some of the non-Antarctic genera in the mainly Antarctic families belonging to the suborder.
Antarctic fish are best known for their use in studying adaptive radiation, the ecological process that causes the rapid development of several different species from a common ancestor. These studies have used genetics, phylogeny, paleontology, and combinations of these fields to determine the sister lineage of the Antarctic fish.
Description
General
Though many different species comprise the Antarctic icefish cluster, there are some common characteristics between fish. They generally have a set of rounded pectoral fins and rounded pelvic fins that maximize mobility both in the water column and on the seafloor. Their eyes are of medium size and are set towards the top of the head, indicating that they catch prey by moving into the water column from the seafloor. The mouth is large in comparison to the size of the body. The spiny dorsal fin is placed halfway down the body and is detached from the soft dorsal fin. The soft dorsal fin extends down the body and ends shortly before the caudal fin. The anal fin mirrors the soft dorsal fin down the underside of the body before the caudal fin. The shape of the caudal fin varies by family, but is generally rounded, forked, or truncate. Only Artedidraconids have chin barbels hanging from the lower jaw that drag through the sand, and a hook-shaped operculum; otherwise, the operculum is rounded.
Coloring ranges from light gray to dark gray with large spotting. Some species are tan or green or even red.
Channichthyids are the largest of these fish, and Harpagifer the smallest.
Buoyancy
Notothenioid fish dominate Southern Ocean diversity and biomass largely because of pelagization by some species. Most fish are benthic and consequently spend their lives on the seafloor, but notothenioids are found in many different niches, including semipelagic, cryopelagic, pelagic, and benthic zones. Species have been able to colonize the water column despite not having swim bladders like other bony fishes. Evolution that decreased the amount of minerals present in the skeleton and increased the number of lipids in the body made this possible. As a result, the fish can remain neutrally buoyant, decreasing the energy required to remain pelagic. These adaptations are most often observed in Nototheniids, the most diverse of the families. Because of their movement into the water column, the fish are observed to feed both on the seafloor and in the water column.
Antifreeze glycoproteins
Antarctic notothenioids are able to survive the freezing temperatures of Antarctic waters with the use of antifreeze glycoproteins (AFGPs). Antifreeze glycoproteins bind to ice that enters the body through ingestion of food and water and from the environment, preventing the organism from freezing internally. AFGPs evolved from a pancreatic trypsinogen gene as Antarctic waters began to cool. They consist of the repeating amino acids alanine-alanine-threonine, which effectively bind to ice molecules and render them non-threatening to survival. AFGPs are created in the pancreas and are released into the digestive tract to wrap around ice crystals so they can be safely excreted with excrement. Unused AFGPs are recycled by entering the bloodstream and cycling back to the liver for storage. The fish also excrete AFGPs in their mucus and on the surface of their skin to prevent external freezing.
AFGPs do not eliminate all internal ice crystals and can instead leave fish filled with inactivated ice crystals, which can itself pose a danger to the organism. Research has shown that summer warming of waters eliminates some internal ice, but not all of it.
Heat shock proteins
Heat shock proteins (HSPs) are expressed during exposure to high temperatures, a characteristic shared by most organisms. In some species of nototheniids, this inducible response is absent: the extreme cold of the Southern Ocean has instead led to permanent upregulation of Hsp70. The pattern of HSP expression indicates that this change in Hsp70 regulation occurred once during speciation, making it a trait that most, if not all, Antarctic notothenioids possess.
Some Antarctic fish are able to survive exposure to temperatures higher than those of their environment, but little research has taken place to explain how they do so.
Hemoglobin loss
Channichthyids' best-known feature is their lack of erythrocytes. Genetic evidence shows that crocodile icefish once had erythrocytes but have since lost the use of hemoglobin, or any oxygen-binding protein, for oxygen transport. Instead, these white-blooded fish have improved heart output, higher blood volume, higher uptake of oxygen, and lower metabolic rates. The discovery that the icefish species Neopagetopsis ionah possesses a nearly intact but nonfunctional set of globin genes demonstrates that multiple events led to the loss of hemoglobin expression in icefish.
One possible explanation for why icefish were able to survive the mutational events that removed their erythrocytes is that, because iron is a limiting element in the oceans, icefish found a way to thrive without needing it for oxygen transport. Another possible explanation is that colder waters increase the viscosity of body fluids to the point that it is beneficial to eliminate erythrocytes altogether and instead rely on other adaptations. Neither theory has much further research to support it.
Distribution
Artedidraconids are deep-sea dwellers in the Southern Ocean. Bathydraconids are also found in the Antarctic deep sea. Channichthyids are distributed around both Antarctica and South America. Harpagiferids are found in the Southern Ocean, Southwest Pacific, Southwest Atlantic, and the Indian Ocean. Nototheniids are distributed along the coasts of Antarctica.
Life cycle
Notothenioids have an estimated lifespan of ten years and reach sexual maturity at ages 3–4 years. Notothenioids are thought to spawn annually, while sex organ maturation takes place every other year. Spawning generally takes place during fall or winter in seasonal ice habitats, while fish in Antarctic zones spawn in summer and fall. Some migratory patterns have been observed in a few species in the seasonal ice habitats. Most fish move to shallower waters or areas with sloping continental shelves to spawn.
Eggs are released in batches. Notothenioids are known for nesting and guarding their eggs to ensure protection from predators, improve the oxygen content of the water around the eggs, and dispose of dead or damaged eggs. Whether the eggs are pelagic or attached to the seafloor, rocks, or sponges depends on the species. Crocodile icefish tend to have either eggs attached to the seafloor or eggs attached to the pelvic fin. Bathydraconids guard eggs on the seafloor of shallow waters. Both harpagiferids and artedidraconids also guard their eggs by attaching them to the seafloor.
Eggs have a long incubation time of around 5 months, which can be attributed to the colder waters. Larvae do not hatch until they are in advanced stages of development, since well-developed larvae have higher chances of survival in extreme climates. Once hatched, the larvae have sufficient means to swim and evade predators, with long, slender bodies and larval fins. The main predators of the fish larvae are other benthic fish.
Ecology
Adaptive radiation is the rapid speciation of multiple species from a common ancestor to fill empty niches. Evidence of adaptive radiation includes common ancestry, early bursts of speciation that decrease with time, and a correlation between phenotype and environment. The species flock concept is the phenomenon of related species sharing the same habitat. A group of species fits the species flock concept if it exhibits species richness, a common ancestor, and a shared area. Species flocks are indicative of adaptive radiation. Antarctic fish fit these criteria, with modifications in buoyancy, development of AFGPs, loss of inducible HSPs, and modifications in oxygen transport, while inhabiting the same geographic area.
Antarctic fish speciation coincides with the separation of Antarctica from Gondwana, a supercontinent composed of Antarctica, Australia, South America, and Africa. Its temperate, shallow seas hosted a variety of marine life, including close relatives of Antarctic notothenioids such as Halaphritis, Bovichtus, and Pseudaphritis. With the cleavage of Australia, South America, and Africa from each other, species of marine life were separated. As Antarctica cleaved from South America, the Drake Passage formed, fully isolating Antarctica geographically by establishing the Antarctic Circumpolar Current and the Antarctic Polar Front.
The cooling of Antarctica's seas prompted a mass extinction of most of the organisms off the coasts of Antarctica and in the Southern Ocean. The mass extinction created many open niches for Antarctic notothenioids to colonize, triggering adaptive radiation. It was originally thought that AFGPs triggered the radiation, but further research into the timing of AFGP onset and speciation did not support the theory: AFGPs do not fit the early burst model because they developed in Antarctic fish 10 Ma before rapid speciation.
| Biology and health sciences | Acanthomorpha | Animals |
51001497 | https://en.wikipedia.org/wiki/Rec.%202100 | Rec. 2100 | ITU-R Recommendation BT.2100, more commonly known by the abbreviations Rec. 2100 or BT.2100, introduced high-dynamic-range television (HDR-TV) by recommending the use of the perceptual quantizer (PQ [SMPTE ST 2084]) or hybrid log–gamma (HLG) transfer functions instead of the traditional "gamma" previously used for SDR-TV.
It defines various aspects of HDR-TV such as display resolution (HDTV and UHDTV), frame rate, chroma subsampling, bit depth, color space, color primaries, white point, and transfer function. It was posted on the International Telecommunication Union (ITU) website on July 4, 2016. Rec. 2100 uses a wide color gamut (WCG) which is the same as Rec. 2020's.
Technical details
Transfer functions
Rec. 2100 defines two sets of HDR transfer functions: perceptual quantizer (PQ) and hybrid log-gamma (HLG). HLG is supported in Rec. 2100 with a nominal peak luminance of 1,000 cd/m2 and a system gamma value that can be adjusted depending on background luminance. For a reference viewing environment, the peak luminance of the display should be 1,000 cd/m2 or more for small-area highlights, and the black level should be 0.005 cd/m2 or less. The surround light should be 5 cd/m2 and neutral grey at standard illuminant D65.
Within each set, the documented transfer functions include the following (a sketch of the PQ case follows the list):
electro-optical transfer function (EOTF) which maps the non-linear signal value into display light
opto-optical transfer function (OOTF) which maps relative scene linear light to display linear light
opto-electronic transfer function (OETF) which maps relative scene linear light into the non-linear signal value
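As an illustration of the PQ case, here is a minimal sketch of the SMPTE ST 2084 EOTF in Python (the constants are the published ST 2084 values; the function name is illustrative):

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384        # ~0.1593
M2 = 2523 / 4096 * 128   # ~78.84
C1 = 3424 / 4096         # ~0.8359
C2 = 2413 / 4096 * 32    # ~18.85
C3 = 2392 / 4096 * 32    # ~18.69

def pq_eotf(signal: float) -> float:
    """Map a non-linear PQ signal value in [0, 1] to display luminance in cd/m2."""
    e = signal ** (1 / M2)
    return 10000.0 * (max(e - C1, 0.0) / (C2 - C3 * e)) ** (1 / M1)
```

A signal value of 1.0 maps to the 10,000 cd/m2 PQ peak and 0.0 maps to zero light; for example, pq_eotf(0.5) is about 92 cd/m2, reflecting how PQ allocates most code values to the darker end of the range.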
System colorimetry
Rec. 2100 uses the same color primaries as Rec. 2020, which constitute a wide color gamut.
Resolution
Rec. 2100 specifies three resolutions of 1920 × 1080 ("Full HD"), 3840 × 2160 ("4K UHD"), and 7680 × 4320 ("8K UHD"). These resolutions have an aspect ratio of 16:9 and use square pixels.
Frame rate
Rec. 2100 specifies the following frame rates: 120p, 119.88p, 100p, 60p, 59.94p, 50p, 30p, 29.97p, 25p, 24p, 23.976p. Only progressive scan frame rates are allowed.
Digital representation
Rec. 2100 specifies a bit depth of either 10 bits per sample or 12 bits per sample, with either narrow-range or full-range color values. A future-use, intermediate linear RGB format using IEEE 16-bit floating point representation for each channel is also specified. For narrow-range color, 10-bit signals define the black level as code 64, the achromatic gray level as 512, and the nominal peak as 940 in RGB, Y, and I component encoding, or 960 in Cb/Cr and Ct/Cp component encoding. 12-bit signals define 256 as the black level, 2048 as the gray level, and the nominal peak as 3760 in RGB, Y, and I component encoding, or 3840 in Cb/Cr and Ct/Cp component encoding. Narrow-range signals may extend below black or above nominal peak (super-black and super-white respectively), but must always be clipped to the signal range of 4–1019 for 10-bit signals or 16–4079 for 12-bit signals. A sketch of the 10-bit narrow-range mapping follows.
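Here is a minimal sketch in Python of the 10-bit narrow-range quantization implied by the levels above (the function and parameter names are illustrative):

```python
def quantize_10bit_narrow(value: float, chroma: bool = False) -> int:
    """Quantize a normalized component to a 10-bit narrow-range code value.

    For RGB/Y/I components, value runs from 0.0 (black, code 64) to 1.0
    (nominal peak, code 940). For Cb/Cr and Ct/Cp components, value runs
    from -0.5 to +0.5 around the achromatic code 512 (nominal extremes
    64 and 960).
    """
    if chroma:
        code = round(512 + value * 896)   # 512 +/- 448 -> 64..960 nominal
    else:
        code = round(64 + value * 876)    # 64..940 nominal
    return max(4, min(1019, code))        # clip to the legal 4-1019 range
```

Code values outside the nominal range but inside 4–1019 carry the super-black and super-white headroom the text describes.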
Signal formats
Rec. 2100 specifies the use of RGB, YCbCr, and ICtCp signal formats. ICtCp provides an improved color representation designed for high dynamic range (HDR) and wide color gamut (WCG) signals.
Luma coefficients
Rec. 2100 allows for RGB, YCbCr, and ICtCp signal formats with 4:4:4, 4:2:2, and 4:2:0 chroma subsampling. Rec. 2100 specifies that a luma (Y') signal, if made, uses the same matrix coefficients as Rec. 2020: 0.2627 for red, 0.6780 for green, and 0.0593 for blue (derived from the BT.2020 primaries and white point). A sketch follows.
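Here is a minimal sketch in Python applying those coefficients to gamma-encoded components (the function name is illustrative):

```python
# BT.2020/BT.2100 luma coefficients (they sum to 1.0)
KR, KG, KB = 0.2627, 0.6780, 0.0593

def luma(r: float, g: float, b: float) -> float:
    """Weighted sum of non-linear R', G', B' components giving the luma Y'."""
    return KR * r + KG * g + KB * b

# The color-difference signals then follow as scaled (B' - Y') and (R' - Y').
```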
Chroma sample location
Before Rec. 2020, the chroma sample location in common use was center-left. H.265 (2018-02) mandated top-left chroma siting for BT.2020-2 and BT.2100-1, which must be signalled as such in the VUI (video usability information): the first chroma location value in the VUI should be 2 for top-left siting and 0 for center-left. Blu-ray also uses top-left chroma siting for HDR, including for Dolby Vision.
| Physical sciences | Basics | Physics |
51011436 | https://en.wikipedia.org/wiki/Aphelida | Aphelida | Aphelida is a phylum of Fungi that appears to be the sister group to the true fungi.
Taxonomy
Phylum Aphelidiomycota Tedersoo 2018 [Aphelida Karpov, Aleoshin & Mikhailov 2014]
Class Aphelidiomycetes Tedersoo 2018 [Aphelidea Gromov 2000]
Order Aphelidiales Tedersoo et al. 2018 [Aphelidida Gromov 2000 non Cavalier-Smith 2012]
Family Aphelididae Gromov 2000 [Amoeboaphelidiidae Cavalier-Smith 2012]
Genus Amoeboaphelidium Scherffel 1925 emend. Karpov 2014
Genus Paraphelidium Karpov, Moreira & Lopez-Garcia 2017
Genus Pseudaphelidium Schweikert & Schnepf 1996
Genus Aphelidium Zopf 1885 emend. Gromov 2000
| Biology and health sciences | Basics | Plants |
39577606 | https://en.wikipedia.org/wiki/SN%201972E | SN 1972E | SN 1972E was a supernova in the galaxy NGC 5253, discovered on 13 May 1972 with an apparent B magnitude of about 8.5, shortly after it had reached its maximum brightness. In terms of apparent brightness, it was the second-brightest supernova of any kind of the 20th century, fainter only than SN 1987A. It was observed for nearly 700 days, and it became the prototype object for the development of the theoretical understanding of Type Ia supernovae.
Background
The supernova was discovered by Charles Kowal, about 56 arc seconds west and 85 arc seconds south of the center of NGC 5253. The position in the periphery of the galaxy aided observation, minimizing interference by background objects. Well-positioned for Southern Hemisphere observers, it was quite observable from Northern Hemisphere observatories as well. Attempts made to observe it in X-rays with Uhuru and OSO 7 and to detect gamma rays from it via Cherenkov radiation showers gave at best equivocal results.
Photometric and spectroscopic measurements were made in the visible and near infrared by many observers, extending to about 700 days after maximum light. Interstellar absorption lines of ionized calcium due to gas both in our galaxy and NGC 5253 were observed, allowing an estimate of the interstellar extinction.
The extended light curve showed a remarkably uniform decline of 0.01 magnitudes per day starting about 60 days after discovery. Translated into other units, this is almost exactly a 77-day half-life, which is the half-life of 56Co. In the standard model for Type Ia supernovae, approximately a solar mass of 56Ni is formed and ejected when a white dwarf that accretes mass from a binary companion is raised over the Chandrasekhar limit and explodes. This 56Ni decays with a half-life of about 6 days to 56Co, and the decay of the cobalt provides the energy radiated away by the supernova remnant. The model also produces an estimate for the luminosity of such a supernova. The observations of SN 1972E, both peak brightness and fade rate, were in general agreement with these predictions, and led to rapid acceptance of this degenerate-explosion model.
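The identification with 56Co rests on simple arithmetic: a factor-of-two drop in flux corresponds to 2.5 log10(2) ≈ 0.753 magnitudes, so a steady decline in magnitudes per day translates directly into a half-life. A quick check:

```python
import math

decline = 0.01  # observed decline in magnitudes per day
# A halving of flux is a rise of 2.5*log10(2) ~ 0.753 magnitudes.
half_life = 2.5 * math.log10(2) / decline
print(round(half_life, 1))  # ~75.3 days, close to the 77.2-day half-life of 56Co
```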
| Physical sciences | Notable transient events | Astronomy |
39580830 | https://en.wikipedia.org/wiki/Symmetry%20in%20quantum%20mechanics | Symmetry in quantum mechanics | Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. In general, symmetry in physics, invariance, and conservation laws are fundamentally important constraints for formulating physical theories and models. In practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to the problem directly, they form the correct constraints and the first steps to solving a multitude of problems. In application, understanding symmetries can also provide insights into the eigenstates that can be expected. For example, the existence of degenerate states can be inferred from the presence of non-commuting symmetry operators, and non-degenerate states can be shown to be eigenvectors of symmetry operators.
This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group.
Notation
The notational conventions used in this article are as follows. Boldface indicates vectors, four vectors, matrices, and vectorial operators, while quantum states use bra–ket notation. Wide hats are for operators, narrow hats are for unit vectors (including their components in tensor index notation). The summation convention on the repeated tensor indices is used, unless stated otherwise. The Minkowski metric signature is (+−−−).
Symmetry transformations on the wavefunction in non-relativistic quantum mechanics
Continuous symmetries
Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem.
The form of the fundamental quantum operators, for example the energy operator as a partial time derivative and momentum operator as a spatial gradient, becomes clear when one considers the initial state, then changes one parameter of it slightly. This can be done for displacements (lengths), durations (time), and angles (rotations). Additionally, the invariance of certain quantities can be seen by making such changes in lengths and angles, illustrating conservation of these quantities.
In what follows, transformations on only one-particle wavefunctions of the form:
ψ′(r, t) = Ûψ(r, t)
are considered, where Û denotes a unitary operator. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state (representing the total probability of finding the particle somewhere with some spin) must be invariant under these transformations. The inverse is the Hermitian conjugate, Û−1 = Û†. The results can be extended to many-particle wavefunctions. Written in Dirac notation as standard, the transformations on quantum state vectors are:
|ψ′⟩ = Û|ψ⟩
Now, the action of Û changes ψ(r, t) to ψ′(r, t), so the inverse Û† changes ψ′(r, t) back to ψ(r, t). Thus, an operator Â invariant under Û satisfies:
Â = Û†ÂÛ
Concomitantly,
ÂÛψ = ÛÂψ
for any state ψ. Quantum operators representing observables are also required to be Hermitian so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate, Â = Â†.
Overview of Lie group theory
Following are the key points of group theory relevant to quantum theory; examples are given throughout the article. For an alternative approach using matrix groups, see the books of Hall.
Let G be a Lie group, which is a group that locally is parameterized by a finite number N of real continuously varying parameters ξ1, ξ2, ..., ξN. In more mathematical language, this means that G is a smooth manifold that is also a group, for which the group operations are smooth.
the dimension of the group, N, is the number of parameters it has.
the group elements, g, in G are functions of the parameters, g = g(ξ1, ξ2, ..., ξN), and setting all parameters to zero returns the identity element of the group. Group elements are often matrices which act on vectors, or transformations acting on functions.
The generators of the group are the partial derivatives of the group elements with respect to the group parameters, with the result evaluated when the parameter is set to zero. In the language of manifolds, the generators are the elements of the tangent space to G at the identity. The generators are also known as infinitesimal group elements or as the elements of the Lie algebra of G. (See the discussion below of the commutator.) One aspect of generators in theoretical physics is that they can be constructed themselves as operators corresponding to symmetries, which may be written as matrices, or as differential operators. In quantum theory, for unitary representations of the group, the generators require a factor of i. The generators of the group form a vector space, which means linear combinations of generators also form a generator.
The generators (whether matrices or differential operators) satisfy the commutation relations
[Xa, Xb] = i fabc Xc
where fabc are the (basis-dependent) structure constants of the group. Together with the vector space property, this makes the set of all generators of a group a Lie algebra. Due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices. (A numerical check of these relations for SU(2) is sketched after this overview.)
The representations of the group then describe the ways that the group (or its Lie algebra) can act on a vector space. (The vector space might be, for example, the space of eigenvectors for a Hamiltonian having G as its symmetry group.) We denote the representations using a capital D. One can then differentiate to obtain a representation of the Lie algebra, often also denoted by D. These two representations are related as follows:
D[g(ξa)] = exp(i ξa D[Xa])
(without summation on the repeated index a). Representations are linear operators that take in group elements and preserve the composition rule:
A representation which cannot be decomposed into a direct sum of other representations is called irreducible. It is conventional to label irreducible representations by a superscripted number in brackets, as in D(n), or, if there is more than one number, as D(n, m).
There is an additional subtlety that arises in quantum theory, where two vectors that differ by multiplication by a scalar represent the same physical state. Here, the pertinent notion of representation is a projective representation, one that only satisfies the composition law up to a scalar. In the context of quantum mechanical spin, such representations are called spinorial.
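As noted above, the commutation relations can be verified numerically for a concrete group. The following sketch (an assumed example using NumPy, not from the article) checks that the spin-1/2 generators built from the Pauli matrices close into the Lie algebra su(2), with the Levi-Civita symbol as structure constants:

```python
import numpy as np

# Pauli matrices; the spin-1/2 generators are J_a = sigma_a / 2.
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
J = [s / 2 for s in sigma]

# Check [J_a, J_b] = i * eps_abc * J_c for the cyclic index combinations.
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    comm = J[a] @ J[b] - J[b] @ J[a]
    print(np.allclose(comm, 1j * J[c]))  # True, True, True
```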
Momentum and energy as generators of translation and time evolution, and rotation
The space translation operator acts on a wavefunction to shift the space coordinates by an infinitesimal displacement . The explicit expression can be quickly determined by a Taylor expansion of about , then (keeping the first order term and neglecting second and higher order terms), replace the space derivatives by the momentum operator . Similarly for the time translation operator acting on the time parameter, the Taylor expansion of is about , and the time derivative replaced by the energy operator .
The exponential functions arise by definition as those limits, due to Euler, and can be understood physically and mathematically as follows. A net translation can be composed of many small translations, so to obtain the translation operator for a finite increment, replace Δx by Δx/N and Δt by Δt/N, where N is a positive non-zero integer. Then as N increases, the magnitudes of Δx/N and Δt/N become ever smaller, while leaving the directions unchanged. Acting the infinitesimal operators on the wavefunction N times and taking the limit as N tends to infinity gives the finite operators.
Space and time translations commute, which means the operators and generators commute.
For a time-independent Hamiltonian, energy is conserved in time and quantum states are stationary states: the eigenstates of the Hamiltonian are states of definite energy, labelled by the energy eigenvalues E:
Ĥψ = Eψ
and all stationary states have the form
ψ(r, t) = exp(−iE(t − t0)/ħ) ψ(r, t0)
where t0 is the initial time, usually set to zero, since no generality is lost by that choice.
An alternative notation is .
Angular momentum as the generator of rotations
Orbital angular momentum
The rotation operator, , acts on a wavefunction to rotate the spatial coordinates of a particle by a constant angle :
where are the rotated coordinates about an axis defined by a unit vector through an angular increment , given by:
where is a rotation matrix dependent on the axis and angle. In group theoretic language, the rotation matrices are group elements, and the angles and axis are the parameters, of the three-dimensional special orthogonal group, SO(3). The rotation matrices about the standard Cartesian basis vector through angle , and the corresponding generators of rotations , are:
More generally for rotations about an axis defined by , the rotation matrix elements are:
where is the Kronecker delta, and is the Levi-Civita symbol.
It is not as obvious how to determine the rotational operator compared to space and time translations. We may consider a special case (rotations about the x, y, or z-axis) and then infer the general result, or use the general rotation matrix directly together with tensor index notation. To derive the infinitesimal rotation operator, which corresponds to a small angle Δθ, we use the small-angle approximations sin(Δθ) ≈ Δθ and cos(Δθ) ≈ 1, then Taylor expand about the unrotated coordinates, keep the first-order term, and substitute in the angular momentum operator components.
The z-component of angular momentum can be replaced by the component along the axis defined by n̂, using the dot product n̂ · L̂.

Again, a finite rotation can be made from many small rotations; replacing Δθ by Δθ/N and taking the limit as N tends to infinity gives the rotation operator for a finite rotation.
Rotations about the same axis do commute; for example, a rotation through angles θ1 and θ2 about the same axis n̂ can be written
R̂(n̂, θ1)R̂(n̂, θ2) = R̂(n̂, θ1 + θ2).
However, rotations about different axes do not commute. The general commutation rules are summarized by
[La, Lb] = iħ εabc Lc.
In this sense, orbital angular momentum has the common-sense properties of rotations. Each of the above commutators can be easily demonstrated by holding an everyday object and rotating it through the same angle about any two different axes in both possible orderings; the final configurations are different.
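The same experiment can be reproduced numerically with SO(3) rotation matrices (an illustrative sketch):

```python
import numpy as np

def rot_x(t):
    """Rotation matrix about the x-axis through angle t."""
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def rot_y(t):
    """Rotation matrix about the y-axis through angle t."""
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [0, 1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

t = np.pi / 2
# Rotations about different axes do not commute:
print(np.allclose(rot_x(t) @ rot_y(t), rot_y(t) @ rot_x(t)))          # False
# Rotations about the same axis do commute:
print(np.allclose(rot_x(t) @ rot_x(2 * t), rot_x(2 * t) @ rot_x(t)))  # True
```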
In quantum mechanics, there is another form of rotation which mathematically appears similar to the orbital case, but has different properties, described next.
Spin angular momentum
All previous quantities have classical definitions. Spin is a quantity possessed by particles in quantum mechanics without any classical analogue, having the units of angular momentum. The spin vector operator is denoted . The eigenvalues of its components are the possible outcomes (in units of ) of a measurement of the spin projected onto one of the basis directions.
A rotation of ordinary space about an axis defined by the unit vector n̂ through an angle θ, acting on a multicomponent wave function (spinor) at a point in space, is represented by:
ψ′(r) = exp(−iθ n̂ · Ŝ/ħ) ψ(r)
However, unlike orbital angular momentum, in which the z-projection quantum number can only take positive or negative integer values (including zero), the z-projection spin quantum number can take all positive and negative half-integer values. There are rotation matrices for each spin quantum number.
Evaluating the exponential for a given z-projection spin quantum number s gives a (2s + 1)-dimensional spin matrix. This can be used to define a spinor as a column vector of 2s + 1 components which transforms to a rotated coordinate system according to the spin matrix at a fixed point in space.
For the simplest non-trivial case of s = 1/2, the spin operator is given by
Ŝ = (ħ/2)σ
where the Pauli matrices in the standard representation are:

$$\sigma_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad \sigma_2 = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \quad \sigma_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
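As a numerical illustration, the corresponding spin-1/2 rotation operator exp(−i(θ/2) n̂·σ) can be built with SciPy's matrix exponential; this sketch (an assumed example) also shows the sign flip of a spinor under a 360° rotation, a point revisited under exchange symmetry below:

```python
import numpy as np
from scipy.linalg import expm

sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def spin_half_rotation(theta, n):
    """U = exp(-i * theta/2 * n . sigma) for a unit axis n."""
    n_dot_sigma = sum(ni * si for ni, si in zip(n, sigma))
    return expm(-1j * theta / 2 * n_dot_sigma)

U = spin_half_rotation(2 * np.pi, (0, 0, 1))
print(np.allclose(U, -np.eye(2)))              # True: a 360-degree rotation flips the spinor's sign
print(np.allclose(U.conj().T @ U, np.eye(2)))  # True: the operator is unitary
```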
Total angular momentum
The total angular momentum operator is the sum of the orbital and spin angular momenta, Ĵ = L̂ + Ŝ,
and is an important quantity for multi-particle systems, especially in nuclear physics and the quantum chemistry of multi-electron atoms and molecules.
We have a similar rotation matrix:
Conserved quantities in the quantum harmonic oscillator
The dynamical symmetry group of the n-dimensional quantum harmonic oscillator is the special unitary group SU(n). As an example, the numbers of infinitesimal generators of the corresponding Lie algebras of SU(2) and SU(3) are three and eight respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems.
The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum.
Lorentz group in relativistic quantum mechanics
Following is an overview of the Lorentz group, a treatment of boosts and rotations in spacetime. Throughout this section, see (for example) T. Ohlsson (2011) and E. Abers (2004).
Lorentz transformations can be parametrized by rapidity φ for a boost in the direction of a three-dimensional unit vector, and a rotation angle θ about a three-dimensional unit vector defining an axis, so φ and θ together comprise six parameters of the Lorentz group (three for rotations and three for boosts). The Lorentz group is 6-dimensional.
Pure rotations in spacetime
The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. Three of the Lorentz group elements and generators for pure rotations are:
The rotation matrices act on any four vector and rotate the space-like components according to
leaving the time-like coordinate unchanged. In matrix expressions, is treated as a column vector.
Pure boosts in spacetime
Boosts with velocity in the x, y, or z directions, given by the standard Cartesian basis vectors, are described by the boost transformation matrices. These matrices and the corresponding generators are the remaining three group elements and generators of the Lorentz group:
The boost matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
The term "boost" refers to the relative velocity between two frames, and is not to be conflated with momentum as the generator of translations, as explained below.
Combining boosts and rotations
Products of rotations give another rotation (illustrating that the rotations form a subgroup), while products of two boosts, or of a rotation and a boost, cannot be expressed as pure boosts or pure rotations. In general, any Lorentz transformation can be expressed as a product of a pure rotation and a pure boost. For more background see (for example) B.R. Durney (2011) and H.L. Berk et al. and references therein.
The boost and rotation generators have representations denoted D(K) and D(J) respectively; the capital D in this context indicates a group representation.
For the Lorentz group, the representations D(K) and D(J) of the generators K and J fulfill the following commutation rules.
In all commutators, the boost entities mix with those for rotations, although rotations alone simply give another rotation. Exponentiating the generators gives the boost and rotation operators which combine into the general Lorentz transformation, under which the spacetime coordinates transform from one rest frame to another boosted and/or rotating frame. Likewise, exponentiating the representations of the generators gives the representations of the boost and rotation operators, under which a particle's spinor field transforms.
In the literature, the boost generators K and rotation generators J are sometimes combined into one generator for Lorentz transformations, M, an antisymmetric four-dimensional matrix with entries:
and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix, ω, with entries:
The general Lorentz transformation is then:
with summation over repeated matrix indices α and β. The Λ matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to:
Transformations of spinor wavefunctions in relativistic quantum mechanics
In relativistic quantum mechanics, wavefunctions are no longer single-component scalar fields, but now 2(2s + 1) component spinor fields, where s is the spin of the particle. The transformations of these functions in spacetime are given below.
Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group:
where D(Λ) is a finite-dimensional representation, in other words a square matrix, and ψ is thought of as a column vector containing components with the allowed values of the spin projection σ:
Real irreducible representations and spin
The irreducible representations of D(K) and D(J), in short "irreps", can be used to build spin representations of the Lorentz group. Defining new operators:
A = (J + iK)/2, B = (J − iK)/2,
so that A and B are simply complex conjugates of each other, it follows that they satisfy the symmetrically formed commutators:
[Ai, Aj] = iεijk Ak, [Bi, Bj] = iεijk Bk, [Ai, Bj] = 0,
and these are essentially the commutators the orbital and spin angular momentum operators satisfy. Therefore, A and B form operator algebras analogous to angular momentum; the same ladder operators, z-projections, etc., independently of each other as each of their components mutually commute. By the analogy to the spin quantum number, we can introduce positive integers or half-integers, a and b, with corresponding sets of values m ∈ {a, a − 1, ..., −a} and n ∈ {b, b − 1, ..., −b}. The matrices satisfying the above commutation relations are the same as for spins a and b, and have components given by multiplying Kronecker delta values with angular momentum matrix elements:
where in each case the row number m′n′ and column number mn are separated by a comma, and in turn:
and similarly for J(n). The three J(m) matrices are each (2m + 1)×(2m + 1) square matrices, and the three J(n) are each (2n + 1)×(2n + 1) square matrices. The integers or half-integers m and n enumerate all the irreducible representations, written in equivalent notations used by authors as D(m, n) or (m, n), which are each [(2m + 1)(2n + 1)]-dimensional square matrices.
Applying this to particles with spin s:
left-handed (2s + 1)-component spinors transform under the real irreps D(s, 0),
right-handed (2s + 1)-component spinors transform under the real irreps D(0, s),
taking direct sums symbolized by ⊕ (see direct sum of matrices for the simpler matrix concept), one obtains the representations under which 2(2s + 1)-component spinors transform: D(s, 0) ⊕ D(0, s), where s = 1/2, 1, 3/2, .... These are also real irreps, but as shown above, they split into complex conjugates.
In these cases, the D refers to any of D(J), D(K), or a full Lorentz transformation D(Λ).
Relativistic wave equations
In the context of the Dirac equation and Weyl equation, the Weyl spinors satisfying the Weyl equation transform under the simplest irreducible spin representations of the Lorentz group, since the spin quantum number in this case is the smallest non-zero number allowed: 1/2. The 2-component left-handed Weyl spinor transforms under and the 2-component right-handed Weyl spinor transforms under . Dirac spinors satisfying the Dirac equation transform under the representation , the direct sum of the irreps for the Weyl spinors.
The Poincaré group in relativistic quantum mechanics and field theory
Space translations, time translations, rotations, and boosts, all taken together, constitute the Poincaré group. The group elements are the three rotation matrices and three boost matrices (as in the Lorentz group), and one for time translations and three for space translations in spacetime. There is a generator for each. Therefore, the Poincaré group is 10-dimensional.
In special relativity, space and time can be collected into a four-position vector , and in parallel so can energy and momentum which combine into a four-momentum vector . With relativistic quantum mechanics in mind, the time duration and spatial displacement parameters (four in total, one for time and three for space) combine into a spacetime displacement , and the energy and momentum operators are inserted in the four-momentum to obtain a four-momentum operator,
which are the generators of spacetime translations (four in total, one time and three space):
There are commutation relations between the components of the four-momentum P (generators of spacetime translations) and the angular momentum M (generators of Lorentz transformations) that define the Poincaré algebra:
where η is the Minkowski metric tensor. (It is common to drop any hats for the four-momentum operators in the commutation relations). These equations are an expression of the fundamental properties of space and time as far as they are known today. They have a classical counterpart where the commutators are replaced by Poisson brackets.
To describe spin in relativistic quantum mechanics, the Pauli–Lubanski pseudovector
a Casimir operator, is the constant spin contribution to the total angular momentum, and there are commutation relations between P and W and between M and W:
Invariants constructed from W, instances of Casimir invariants, can be used to classify the irreducible representations of the Lorentz group.
Symmetries in quantum field theory and particle physics
Unitary groups in quantum field theory
Group theory is an abstract way of mathematically analyzing symmetries. Unitary operators are paramount to quantum theory, so unitary groups are important in particle physics. The group of N-dimensional unitary square matrices is denoted U(N). Unitary operators preserve inner products, which means probabilities are also preserved, so the quantum mechanics of the system is invariant under unitary transformations. Let Û be a unitary operator, so the inverse is the Hermitian adjoint Û−1 = Û†. If the operator commutes with the Hamiltonian:
[Û, Ĥ] = 0
then the observable corresponding to the operator Û is conserved, and the Hamiltonian is invariant under the transformation Û.
Since the predictions of quantum mechanics should be invariant under the action of a group, physicists look for unitary transformations to represent the group.
Important subgroups of each U(N) are those unitary matrices which have unit determinant (or are "unimodular"): these are called the special unitary groups and are denoted SU(N).
U(1)
The simplest unitary group is U(1), which is just the complex numbers of modulus 1. This one-dimensional matrix entry is of the form:
U = e^{iθ}
in which θ is the parameter of the group, and the group is Abelian since one-dimensional matrices always commute under matrix multiplication. Lagrangians in quantum field theory for complex scalar fields are often invariant under U(1) transformations. If there is a quantum number a associated with the U(1) symmetry, for example baryon number and the three lepton numbers in electromagnetic interactions, we have:
U = e^{iaθ}
U(2) and SU(2)
The general form of an element of U(2) is parametrized by two complex numbers a and b:
and for SU(2), the determinant is restricted to 1:
In group theoretic language, the Pauli matrices are the generators of the special unitary group in two dimensions, denoted SU(2). Their commutation relation is the same as for orbital angular momentum, aside from a factor of 2:
A group element of SU(2) can be written:
where σj is a Pauli matrix, and the group parameters are the angles turned through about an axis.
The two-dimensional isotropic quantum harmonic oscillator has symmetry group SU(2), while the symmetry algebra of the rational anisotropic oscillator is a nonlinear extension of u(2).
U(3) and SU(3)
The eight Gell-Mann matrices (see the article for them and the structure constants) are important for quantum chromodynamics. They originally arose in the SU(3) theory of flavor, which is still of practical importance in nuclear physics. They are the generators of the SU(3) group, so an element of SU(3) can be written analogously to an element of SU(2):
where θa are eight independent parameters. The λa matrices satisfy the commutator:
[λa, λb] = 2i fabc λc
where the indices a, b, and c take the values 1, 2, 3, ..., 8. The structure constants fabc are totally antisymmetric in all indices, analogous to those of SU(2). In the standard colour charge basis (r for red, g for green, b for blue):
the colour states are eigenstates of the λ3 and λ8 matrices, while the other matrices mix colour states together.
The eight gluon states (8-dimensional column vectors) are simultaneous eigenstates of the adjoint representation of SU(3), the 8-dimensional representation acting on its own Lie algebra su(3), for the λ3 and λ8 matrices. By forming tensor products of representations (the standard representation and its dual) and taking appropriate quotients, protons and neutrons and other hadrons are eigenstates of various representations of SU(3) of color. The representations of SU(3) can be described by a "theorem of the highest weight".
Matter and antimatter
In relativistic quantum mechanics, relativistic wave equations predict a remarkable symmetry of nature: that every particle has a corresponding antiparticle. This is mathematically contained in the spinor fields which are the solutions of the relativistic wave equations.
Charge conjugation switches particles and antiparticles. Physical laws and interactions unchanged by this operation have C symmetry.
Discrete spacetime symmetries
Parity mirrors the orientation of the spatial coordinates from left-handed to right-handed. Informally, space is "reflected" into its mirror image. Physical laws and interactions unchanged by this operation have P symmetry.
Time reversal flips the time coordinate, which amounts to time running from future to past. A curious property of time, which space does not have, is that it is unidirectional: particles traveling forwards in time are equivalent to antiparticles traveling back in time. Physical laws and interactions unchanged by this operation have T symmetry.
C, P, T symmetries
CPT theorem
CP violation
PT symmetry
Lorentz violation
Gauge theory
In quantum electrodynamics, the local symmetry group is U(1) and is abelian. In quantum chromodynamics, the local symmetry group is SU(3) and is non-abelian.
The electromagnetic interaction is mediated by photons, which have no electric charge. The electromagnetic tensor has an electromagnetic four-potential field possessing gauge symmetry.
The strong (color) interaction is mediated by gluons, which can have eight color charges. There are eight gluon field strength tensors with corresponding gluon four potentials field, each possessing gauge symmetry.
The strong (color) interaction
Color charge
Analogous to the spin operator, there are color charge operators in terms of the Gell-Mann matrices :
and since color charge is a conserved charge, all color charge operators must commute with the Hamiltonian:
Isospin
Isospin is conserved in strong interactions.
The weak and electromagnetic interactions
Duality transformation
Magnetic monopoles can be theoretically realized, although current observations and theory are consistent with them existing or not existing. Electric and magnetic charges can effectively be "rotated into one another" by a duality transformation.
Electroweak symmetry
Electroweak symmetry
Electroweak symmetry breaking
Supersymmetry
A Lie superalgebra is an algebra in which (suitable) basis elements either have a commutation relation or have an anticommutation relation. Symmetries have been proposed to the effect that all fermionic particles have bosonic analogues, and vice versa. These symmetries have theoretical appeal in that no extra assumptions (such as the existence of strings) barring symmetries are made. In addition, by assuming supersymmetry, a number of puzzling issues can be resolved. These symmetries, which are represented by Lie superalgebras, have not been confirmed experimentally. It is now believed that, if they exist, they are broken symmetries. It has been speculated that dark matter consists of gravitinos, massive spin-3/2 particles whose supersymmetric partner is the graviton.
Exchange symmetry
The concept of exchange symmetry is derived from a fundamental postulate of quantum statistics, which states that no observable physical quantity should change after exchanging two identical particles. It states that because all observables are proportional to |ψ|2 for a system of identical particles, the wave function ψ must either remain the same or change sign upon such an exchange. More generally, for a system of n identical particles, the wave function must transform as an irreducible representation of the finite symmetric group Sn. It turns out that, according to the spin–statistics theorem, fermion states transform as the antisymmetric irreducible representation of Sn and boson states as the symmetric irreducible representation.
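This bookkeeping can be made concrete with tensor products of state vectors. In the following toy sketch (the states and dimensions are assumptions for illustration), the antisymmetric combination changes sign under exchange and vanishes when the two states coincide:

```python
import numpy as np

# Two distinct single-particle states in a toy 3-dimensional Hilbert space.
# Two-particle states live in the tensor product space, built with np.kron.
phi = np.array([1.0, 0.0, 0.0])
chi = np.array([0.0, 1.0, 0.0])

symmetric     = np.kron(phi, chi) + np.kron(chi, phi)  # bosonic combination
antisymmetric = np.kron(phi, chi) - np.kron(chi, phi)  # fermionic combination

# Exchanging the particles maps kron(a, b) to kron(b, a):
print(np.allclose(np.kron(chi, phi) + np.kron(phi, chi), symmetric))       # True
print(np.allclose(np.kron(chi, phi) - np.kron(phi, chi), -antisymmetric))  # True

# Pauli exclusion: the antisymmetric combination of two identical states vanishes.
print(np.allclose(np.kron(phi, phi) - np.kron(phi, phi), 0))  # True
```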
Because the exchange of two identical particles is mathematically equivalent to the rotation of each particle by 180 degrees (and so to the rotation of one particle's frame by 360 degrees), the symmetric nature of the wave function depends on the particle's spin after the rotation operator is applied to it. Integer-spin particles do not change the sign of their wave function upon a 360-degree rotation—therefore the sign of the wave function of the entire system does not change. Half-integer-spin particles change the sign of their wave function upon a 360-degree rotation (see more in spin–statistics theorem).
Particles for which the wave function does not change sign upon exchange are called bosons, or particles with a symmetric wave function. The particles for which the wave function of the system changes sign are called fermions, or particles with an antisymmetric wave function.
Fermions therefore obey different statistics (called Fermi–Dirac statistics) than bosons (which obey Bose–Einstein statistics). One of the consequences of Fermi–Dirac statistics is the exclusion principle for fermions—no two identical fermions can share the same quantum state (in other words, the wave function of two identical fermions in the same state is zero). This in turn results in degeneracy pressure for fermions—the strong resistance of fermions to compression into smaller volume. This resistance gives rise to the “stiffness” or “rigidity” of ordinary atomic matter (as atoms contain electrons which are fermions).
| Physical sciences | Quantum mechanics | Physics |
50007204 | https://en.wikipedia.org/wiki/Unbiunium | Unbiunium | Unbiunium, also known as eka-actinium or element 121, is a hypothetical chemical element; it has symbol Ubu and atomic number 121. Unbiunium and Ubu are the temporary systematic IUPAC name and symbol respectively, which are used until the element is discovered, confirmed, and a permanent name is decided upon. In the periodic table of the elements, it is expected to be the first of the superactinides, and the third element in the eighth period. It has attracted attention because of some predictions that it may be in the island of stability. It is also likely to be the first of a new g-block of elements.
Unbiunium has not yet been synthesized. It is expected to be one of the last few reachable elements with current technology; the limit could be anywhere between element 120 and 124. It will also likely be far more difficult to synthesize than the elements known so far up to 118, and still more difficult than elements 119 and 120. The teams at RIKEN in Japan and at the JINR in Dubna, Russia have indicated plans to attempt the synthesis of element 121 in the future after they attempt elements 119 and 120.
The position of unbiunium in the periodic table suggests that it would have similar properties to lanthanum and actinium; however, relativistic effects may cause some of its properties to differ from those expected from a straight application of periodic trends. For example, unbiunium is expected to have an s2p valence electron configuration, instead of the s2d of lanthanum and actinium or the s2g expected from the Madelung rule, but this is not predicted to affect its chemistry much. It would, on the other hand, significantly lower its first ionization energy beyond what would be expected from periodic trends.
Introduction
History
Fusion reactions producing superheavy elements can be divided into "hot" and "cold" fusion, depending on the excitation energy of the compound nucleus produced. In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energies (~40–50 MeV) that may fission or evaporate several (3 to 5) neutrons. In cold fusion reactions (which use heavier projectiles, typically from the fourth period, and lighter targets, usually lead and bismuth), the fused nuclei produced have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission reactions. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons. However, hot fusion reactions tend to produce more neutron-rich products because the actinides have the highest neutron-to-proton ratios of any element that can presently be made in macroscopic quantities; it is currently the only method to produce the superheavy elements from flerovium (element 114) onward.
Attempts to synthesize elements 119 and 120 push the limits of current technology, due to the decreasing cross sections of the production reactions and their probably short half-lives, expected to be on the order of microseconds. Heavier elements, beginning with element 121, would likely be too short-lived to be detected with current technology, decaying within a microsecond before reaching the detectors. Where this one-microsecond border of half-lives lies is not known, and this may allow the synthesis of some isotopes of elements 121 through 124, with the exact limit depending on the model chosen for predicting nuclide masses. It is also possible that element 120 is the last element reachable with current experimental techniques, and that elements from 121 onward will require new methods.
Because of the current impossibility of synthesizing elements beyond californium (Z = 98) in sufficient quantities to create a target, with einsteinium (Z = 99) targets being currently considered, the practical synthesis of elements beyond oganesson requires heavier projectiles, such as titanium-50, chromium-54, iron-58, or nickel-64. This, however, has the drawback of resulting in more symmetrical fusion reactions that are colder and less likely to succeed. For example, the reaction between 243Am and 58Fe is expected to have a cross section on the order of 0.5 fb, several orders of magnitude lower than measured cross sections in successful reactions; such an obstacle would make this and similar reactions infeasible for producing unbiunium.
Past synthesis attempt
The synthesis of unbiunium was first attempted in 1977 by bombarding a target of uranium-238 with copper-65 ions at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany:
238U + 65Cu → 303Ubu* → no atoms
No atoms were identified.
Prospects for future synthesis
Currently, the beam intensities at superheavy element facilities result in about 1012 projectiles hitting the target per second; this cannot be increased without burning the target and the detector, and producing larger amounts of the increasingly unstable actinides needed for the target is impractical. The team at the Joint Institute for Nuclear Research (JINR) in Dubna has built a new superheavy element factory (SHE-factory) with improved detectors and the ability to work on a smaller scale, but even so, continuing beyond element 120 and perhaps 121 would be a great challenge. It is possible that the age of fusion–evaporation reactions to produce new superheavy elements is coming to an end due to the increasingly short half-lives to spontaneous fission and the looming proton drip line, so that new techniques such as nuclear transfer reactions (for example, firing uranium nuclei at each other and letting them exchange protons, potentially producing products with around 120 protons) would be required to reach the superactinides.
Because the cross sections of these fusion-evaporation reactions increase with the asymmetry of the reaction, titanium would be a better projectile than chromium for the synthesis of element 121, though this necessitates an einsteinium target. This poses severe challenges due to the significant heating and damage of the target due to the high radioactivity of einsteinium-254, but it would nonetheless probably be the most promising approach. It would require working on a smaller scale due to the lower amount of 254Es that can be produced. This small-scale work could in the near future only be carried out in Dubna's SHE-factory.
The isotopes 299Ubu, 300Ubu, and 301Ubu, which could be produced in the reaction between 254Es and 50Ti via the 3n and 4n channels, are expected to be the only reachable unbiunium isotopes with half-lives long enough for detection. The cross sections would nevertheless push the limits of what can currently be detected. For example, in a 2016 publication, the cross section of the aforementioned reaction between 254Es and 50Ti was predicted to be around 7 fb in the 4n channel, four times lower than the lowest measured cross section for a successful reaction. A 2021 calculation gives similarly low theoretical cross sections of 10 fb for the 3n channel and 0.6 fb for the 4n channel of this reaction, along with cross sections on the order of 1–10 fb for the reactions 249Bk+54Cr, 252Es+50Ti, and 258Md+48Ca. However, 252Es and 258Md cannot currently be synthesized in sufficient quantities to form target material.
Should the synthesis of unbiunium isotopes in such a reaction be successful, the resulting nuclei would decay through isotopes of ununennium that could be produced by cross-bombardments in the 248Cm+51V or 249Bk+50Ti reactions, down through known isotopes of tennessine and moscovium synthesized in the 249Bk+48Ca and 243Am+48Ca reactions. The multiplicity of excited states populated by the alpha decay of odd nuclei may however preclude clear cross-bombardment cases, as was seen in the controversial link between 293Ts and 289Mc. Heavier isotopes are expected to be more stable; 320Ubu is predicted to be the most stable unbiunium isotope, but there is no way to synthesize it with current technology as no combination of usable target and projectile could provide enough neutrons.
The teams at RIKEN and at JINR have listed the synthesis of element 121 among their future plans. These two laboratories are best suited to these experiments as they are the only ones in the world where long beam times are accessible for reactions with such low predicted cross-sections.
Naming
Using Mendeleev's nomenclature for unnamed and undiscovered elements, unbiunium should be known as eka-actinium. Using the 1979 IUPAC recommendations, the element should be temporarily called unbiunium (symbol Ubu) until it is discovered, the discovery is confirmed, and a permanent name chosen. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations are mostly ignored among scientists who work theoretically or experimentally on superheavy elements, who call it "element 121", with the symbol E121, (121), or 121.
Nuclear stability and isotopes
The stability of nuclei decreases greatly with the increase in atomic number after curium, element 96, whose half-life is four orders of magnitude longer than that of any currently known higher-numbered element. All isotopes with an atomic number above 101 undergo radioactive decay with half-lives of less than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. Nevertheless, for reasons not yet well understood, there is a slight increase of nuclear stability around atomic numbers 110–114, which leads to the appearance of what is known in nuclear physics as the "island of stability". This concept, proposed by University of California professor Glenn Seaborg and stemming from the stabilizing effects of the closed nuclear shells around Z = 114 (or possibly 120, 122, 124, or 126) and N = 184 (and possibly also N = 228), explains why superheavy elements last longer than predicted. In fact, the very existence of elements heavier than rutherfordium can be attributed to shell effects and the island of stability, as spontaneous fission would rapidly cause such nuclei to disintegrate in a model neglecting such factors.
A 2016 calculation of the half-lives of the isotopes of unbiunium from 290Ubu to 339Ubu suggested that those from 290Ubu to 303Ubu would not be bound and would decay through proton emission, those from 304Ubu through 314Ubu would undergo alpha decay, and those from 315Ubu to 339Ubu would undergo spontaneous fission. Only the isotopes from 309Ubu to 314Ubu would have long enough alpha-decay lifetimes to be detected in laboratories, starting decay chains terminating in spontaneous fission at moscovium, tennessine, or ununennium. This would present a grave problem for experiments aiming at synthesizing isotopes of unbiunium if true, because the isotopes whose alpha decay could be observed could not be reached by any presently usable combination of target and projectile. Calculations in 2016 and 2017 by the same authors on elements 123 and 125 suggest a less bleak outcome, with alpha decay chains from the more reachable nuclides 300–307Ubt passing through unbiunium and leading down to bohrium or nihonium. It has also been suggested that cluster decay might be a significant decay mode in competition with alpha decay and spontaneous fission in the region past Z = 120, which would pose yet another hurdle for experimental identification of these nuclides.
Predicted chemistry
Unbiunium is predicted to be the first element of an unprecedentedly long transition series, called the superactinides in analogy to the earlier actinides. While its behavior is not likely to be very distinct from lanthanum and actinium, it is likely to pose a limit to the applicability of the periodic law; from element 121, the 5g, 6f, 7d, and 8p1/2 orbitals are expected to fill up together due to their very close energies, and around the elements in the late 150s and 160s, the 9s, 9p1/2, and 8p3/2 subshells join in, so that the chemistry of the elements just beyond 121 and 122 (the last for which complete calculations have been conducted) is expected to be so similar that their position in the periodic table would be purely a formal matter.
Based on the Aufbau principle, one would expect the 5g subshell to begin filling at the unbiunium atom. However, while lanthanum does have significant 4f involvement in its chemistry, it does not yet have a 4f electron in its ground-state gas-phase configuration; a greater delay occurs for 5f, where neither actinium nor thorium atoms have a 5f electron although 5f contributes to their chemistry. It is predicted that a similar situation of delayed "radial" collapse might happen for unbiunium, so that the 5g orbitals do not start filling until around element 125, even though some 5g chemical involvement may begin earlier. Because of the lack of radial nodes in the 5g orbitals, analogous to the 4f but not the 5f orbitals, the position of unbiunium in the periodic table is expected to be more akin to that of lanthanum than that of actinium among its congeners, and Pekka Pyykkö has proposed renaming the superactinides "superlanthanides" for that reason. The lack of radial nodes in the 4f orbitals contributes to their core-like behavior in the lanthanide series, unlike the more valence-like 5f orbitals in the actinides; however, the relativistic expansion and destabilization of the 5g orbitals should partially compensate for their lack of radial nodes and hence smaller extent.
Unbiunium is expected to fill the 8p1/2 orbital due to its relativistic stabilization, with a configuration of [Og] 8s2 8p1. Nevertheless, the [Og] 7d1 8s2 configuration, which would be analogous to lanthanum and actinium, is expected to be a low-lying excited state at only 0.412 eV, and the [Og] 5g1 8s2 configuration expected from the Madelung rule should be at 2.48 eV. The electron configurations of the ions of unbiunium are expected to be Ubu+, [Og]8s2; Ubu2+, [Og]8s1; and Ubu3+, [Og]. The 8p electron of unbiunium is expected to be very loosely bound, so that its predicted ionization energy of 4.45 eV is lower than that of ununennium (4.53 eV) and all known elements except for the alkali metals from potassium to francium. A similar large reduction in ionization energy is also seen in lawrencium, another element having an anomalous s2p configuration due to relativistic effects.
Despite the change in electron configuration and possibility of using the 5g shell, unbiunium is not expected to behave chemically very differently from lanthanum and actinium. A 2016 calculation on unbiunium monofluoride (UbuF) showed similarities between the valence orbitals of unbiunium in this molecule and those of actinium in actinium monofluoride (AcF); in both molecules, the highest occupied molecular orbital is expected to be non-bonding, unlike in the superficially more similar nihonium monofluoride (NhF) where it is bonding. Nihonium has the electron configuration [Rn] 5f14 6d10 7s2 7p1, with an s2p valence configuration. Unbiunium may hence be somewhat like lawrencium in having an anomalous s2p configuration that does not affect its chemistry: the bond dissociation energies, bond lengths, and polarizabilities of the UbuF molecule are expected to continue the trend through scandium, yttrium, lanthanum, and actinium, all of which have three valence electrons above a noble gas core. The Ubu–F bond is expected to be strong and polarized, just like for the lanthanum and actinium monofluorides.
The non-bonding electrons on unbiunium in UbuF are expected to be able to bond to extra atoms or groups, resulting in the formation of the unbiunium trihalides UbuX3, analogous to LaX3 and AcX3. Hence, the main oxidation state of unbiunium in its compounds should be +3, although the closeness of the valence subshells' energy levels may permit higher oxidation states, just as in elements 119 and 120. Relativistic effects appear to be small for the unbiunium trihalides, with the trifluoride and trichloride having very similar bonding, though the former should be more ionic. The standard electrode potential for the Ubu3+/Ubu couple is predicted as −2.1 V.
| Physical sciences | Periods | Chemistry |
32544339 | https://en.wikipedia.org/wiki/Fracking | Fracking | Hydraulic fracturing is a well stimulation technique involving the fracturing of formations in bedrock by a pressurized liquid. The process involves the high-pressure injection of "fracking fluid" (primarily water, containing sand or other proppants suspended with the aid of thickening agents) into a wellbore to create cracks in the deep rock formations through which natural gas, petroleum, and brine will flow more freely. When the hydraulic pressure is removed from the well, small grains of hydraulic fracturing proppants (either sand or aluminium oxide) hold the fractures open.
Hydraulic fracturing began as an experiment in 1947, and the first commercially successful application followed in 1949. As of 2012, 2.5 million "frac jobs" had been performed worldwide on oil and gas wells, over one million of those within the U.S. Such treatment is generally necessary to achieve adequate flow rates in shale gas, tight gas, tight oil, and coal seam gas wells. Some hydraulic fractures can form naturally in certain veins or dikes. Drilling and hydraulic fracturing have made the United States a major crude oil exporter as of 2019, but leakage of methane, a potent greenhouse gas, has dramatically increased. Increased oil and gas production from the decade-long fracking boom has led to lower prices for consumers, with near-record lows of the share of household income going to energy expenditures.
Hydraulic fracturing is highly controversial. Its proponents highlight the economic benefits of more extensively accessible hydrocarbons (such as petroleum and natural gas), the benefits of replacing coal with natural gas, which burns more cleanly and emits less carbon dioxide (CO2), and the benefits of energy independence. Opponents of fracking argue that these are outweighed by the environmental impacts, which include groundwater and surface water contamination, noise and air pollution, the triggering of earthquakes, and the resulting hazards to public health and the environment. Research has found adverse health effects in populations living near hydraulic fracturing sites, including confirmation of chemical, physical, and psychosocial hazards such as pregnancy and birth outcomes, migraine headaches, chronic rhinosinusitis, severe fatigue, asthma exacerbations and psychological stress. Adherence to regulation and safety procedures are required to avoid further negative impacts.
The scale of methane leakage associated with hydraulic fracturing is uncertain, and there is some evidence that leakage may cancel out any greenhouse gas emissions benefit of natural gas relative to other fossil fuels.
Increases in seismic activity following hydraulic fracturing along dormant or previously unknown faults are sometimes caused by the deep-injection disposal of hydraulic fracturing flowback (a byproduct of hydraulically fractured wells), and produced formation brine (a byproduct of both fractured and non-fractured oil and gas wells). For these reasons, hydraulic fracturing is under international scrutiny, restricted in some countries, and banned altogether in others. The European Union is drafting regulations that would permit the controlled application of hydraulic fracturing.
Geology
Mechanics
Fracturing of rocks at great depth is frequently suppressed by pressure, due to the weight of the overlying rock strata and the cementation of the formation. This suppression is particularly significant in "tensile" (Mode 1) fractures, which require the walls of the fracture to move apart against this pressure. Fracturing occurs when effective stress is overcome by the pressure of fluids within the rock: the minimum principal stress becomes tensile and exceeds the tensile strength of the material. Fractures formed in this way are generally oriented in a plane perpendicular to the minimum principal stress, and for this reason, hydraulic fractures in wellbores can be used to determine the orientation of stresses. In natural examples, such as dikes or vein-filled fractures, the orientations can be used to infer past states of stress.
Veins
Most mineral vein systems are a result of repeated natural fracturing during periods of relatively high pore fluid pressure. The effect of high pore fluid pressure on the formation process of mineral vein systems is particularly evident in "crack-seal" veins, where the vein material is part of a series of discrete fracturing events, and extra vein material is deposited on each occasion. One example of long-term repeated natural fracturing is in the effects of seismic activity. Stress levels rise and fall episodically, and earthquakes can cause large volumes of connate water to be expelled from fluid-filled fractures. This process is referred to as "seismic pumping".
Dikes
Minor intrusions in the upper part of the crust, such as dikes, propagate in the form of fluid-filled cracks. In such cases, the fluid is magma. In sedimentary rocks with a significant water content, the fluid at the fracture tip will be steam.
History
Precursors
Fracturing as a method to stimulate shallow, hard-rock oil wells dates back to the 1860s. Dynamite or nitroglycerin detonations were used to increase oil and natural gas production from petroleum-bearing formations. On 24 April 1865, US Civil War veteran Col. Edward A. L. Roberts received a patent for an "exploding torpedo". It was employed in Pennsylvania, New York, Kentucky, Oklahoma, Texas, and West Virginia using liquid and also, later, solidified nitroglycerin. Companies like the Lighting Torpedo Company used this process in Oklahoma and Texas. Later still, the same method was applied to water and gas wells. Stimulation of wells with acid, instead of explosive fluids, was introduced in the 1930s. Due to acid etching, fractures would not close completely, resulting in a further productivity increase.
20th century applications
Harold Hamm, Aubrey McClendon, Tom Ward and George P. Mitchell are each considered to have pioneered hydraulic fracturing innovations toward practical applications.
Oil and gas wells
The relationship between well performance and treatment pressures was studied by Floyd Farris of Stanolind Oil and Gas Corporation. This study was the basis of the first hydraulic fracturing experiment, conducted in 1947 at the Hugoton gas field in Grant County of southwestern Kansas by Stanolind. For the well treatment, gelled gasoline (essentially napalm) and sand from the Arkansas River were injected into the gas-producing limestone formation. The experiment was not very successful, as the deliverability of the well did not change appreciably. The process was further described by J.B. Clark of Stanolind in his paper published in 1948. A patent on this process was issued in 1949, and an exclusive license was granted to the Halliburton Oil Well Cementing Company. On 17 March 1949, Halliburton performed the first two commercial hydraulic fracturing treatments in Stephens County, Oklahoma, and Archer County, Texas. Since then, hydraulic fracturing has been used to stimulate approximately one million oil and gas wells in various geologic regimes with good success.
In contrast with large-scale hydraulic fracturing used in low-permeability formations, small hydraulic fracturing treatments are commonly used in high-permeability formations to remedy "skin damage", a low-permeability zone that sometimes forms at the rock-borehole interface. In such cases the fracturing may extend only a few feet from the borehole.
In the Soviet Union, the first hydraulic proppant fracturing was carried out in 1952. Other countries in Europe and Northern Africa subsequently employed hydraulic fracturing techniques including Norway, Poland, Czechoslovakia (before 1989), Yugoslavia (before 1991), Hungary, Austria, France, Italy, Bulgaria, Romania, Turkey, Tunisia, and Algeria.
Massive fracturing
Massive hydraulic fracturing (also known as high-volume hydraulic fracturing) is a technique first applied by Pan American Petroleum in Stephens County, Oklahoma, US in 1968. The definition of massive hydraulic fracturing varies, but generally refers to treatments injecting over 150 short tons, or approximately 300,000 pounds (136 metric tonnes), of proppant.
American geologists gradually became aware that there were huge volumes of gas-saturated sandstones with permeability too low (generally less than 0.1 millidarcy) to recover the gas economically. Starting in 1973, massive hydraulic fracturing was used in thousands of gas wells in the San Juan Basin, Denver Basin, the Piceance Basin, and the Green River Basin, and in other hard rock formations of the western US. Other tight sandstone wells in the US made economically viable by massive hydraulic fracturing were in the Clinton-Medina Sandstone (Ohio, Pennsylvania, and New York), and Cotton Valley Sandstone (Texas and Louisiana).
Massive hydraulic fracturing quickly spread in the late 1970s to western Canada, Rotliegend and Carboniferous gas-bearing sandstones in Germany, Netherlands (onshore and offshore gas fields), and the United Kingdom in the North Sea.
Horizontal oil or gas wells were unusual until the late 1980s. Then, operators in Texas began completing thousands of oil wells by drilling horizontally in the Austin Chalk, and giving massive slickwater hydraulic fracturing treatments to the wellbores. Horizontal wells proved much more effective than vertical wells in producing oil from tight chalk; sedimentary beds are usually nearly horizontal, so horizontal wells have much larger contact areas with the target formation.
Hydraulic fracturing operations have grown exponentially since the mid-1990s, when technological advances and increases in the price of natural gas made this technique economically viable.
Shales
Hydraulic fracturing of shales goes back at least to 1965, when some operators in the Big Sandy gas field of eastern Kentucky and southern West Virginia started hydraulically fracturing the Ohio Shale and Cleveland Shale, using relatively small fracs. The frac jobs generally increased production, especially from lower-yielding wells.
In 1976, the United States government started the Eastern Gas Shales Project, which included numerous public-private hydraulic fracturing demonstration projects. During the same period, the Gas Research Institute, a gas industry research consortium, received approval for research and funding from the Federal Energy Regulatory Commission.
In 1997, Nick Steinsberger, an engineer at Mitchell Energy (now part of Devon Energy), applied the slickwater fracturing technique, which had been used in East Texas, to the Barnett Shale of north Texas, using more water and higher pump pressure than previous fracturing techniques. In 1998, the new technique proved successful when the first 90 days of gas production from the well called S.H. Griffin No. 3 exceeded that of any of the company's previous wells. This new completion technique made gas extraction widely economical in the Barnett Shale, and was later applied to other shales, including the Eagle Ford and the Bakken Shale. George P. Mitchell has been called the "father of fracking" because of his role in applying it in shales. The first horizontal well in the Barnett Shale was drilled in 1991, but horizontal drilling was not widely adopted in the Barnett until it was demonstrated that gas could be economically extracted from vertical wells there.
As of 2013, massive hydraulic fracturing is being applied on a commercial scale to shales in the United States, Canada, and China. Several additional countries are planning to use hydraulic fracturing.
Process
According to the United States Environmental Protection Agency (EPA), hydraulic fracturing is a process to stimulate a natural gas, oil, or geothermal well to maximize extraction. The EPA defines the broader process to include acquisition of source water, well construction, well stimulation, and waste disposal.
Method
A hydraulic fracture is formed by pumping fracturing fluid into a wellbore at a rate sufficient to increase the pressure at the target depth (determined by the location of the well casing perforations) above the fracture gradient (pressure gradient) of the rock. The fracture gradient is defined as the pressure increase per unit of depth required to fracture the formation, and is usually measured in pounds per square inch per foot (psi/ft). The rock cracks, and the fracturing fluid permeates the rock, extending the crack further. Fractures are localized because pressure drops off with frictional losses, which increase with distance from the well. Operators typically try to maintain "fracture width", or to slow its decline following treatment, by introducing a proppant into the injected fluid: a material, such as grains of sand, ceramic, or other particulate, that prevents the fractures from closing when injection is stopped and the pressure is removed. Consideration of proppant strength and prevention of proppant failure become more important at greater depths, where pressure and stresses on fractures are higher. The propped fracture is permeable enough to allow the flow of gas, oil, salt water and hydraulic fracturing fluids to the well.
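As a rough illustration of the fracture-gradient arithmetic described above, consider the following sketch; the gradient, depth, and hydrostatic values are illustrative assumptions, not figures from this article:

```python
# Sketch: pressure needed to initiate a fracture at depth, from the
# fracture gradient (psi/ft). All numbers are illustrative assumptions.

frac_gradient_psi_per_ft = 0.75   # assumed fracture gradient
depth_ft = 8000.0                 # assumed perforation depth

# Bottomhole pressure that must be exceeded to crack the rock
breakdown_pressure_psi = frac_gradient_psi_per_ft * depth_ft

# The fluid column helps: subtract hydrostatic pressure to estimate the
# surface pumping pressure (friction losses are ignored in this sketch)
water_gradient_psi_per_ft = 0.433  # fresh-water hydrostatic gradient
surface_pressure_psi = breakdown_pressure_psi - water_gradient_psi_per_ft * depth_ft

print(f"breakdown pressure: {breakdown_pressure_psi:.0f} psi")  # 6000 psi
print(f"surface pressure:   {surface_pressure_psi:.0f} psi")    # ~2536 psi
```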
During the process, fracturing fluid leakoff (loss of fracturing fluid from the fracture channel into the surrounding permeable rock) occurs. If not controlled, it can exceed 70% of the injected volume. This may result in formation matrix damage, adverse formation fluid interaction, and altered fracture geometry, thereby decreasing efficiency.
The location of one or more fractures along the length of the borehole is strictly controlled by various methods that create or seal holes in the side of the wellbore. Hydraulic fracturing is performed in cased wellbores, and the zones to be fractured are accessed by perforating the casing at those locations.
Hydraulic-fracturing equipment used in oil and natural gas fields usually consists of a slurry blender, one or more high-pressure, high-volume fracturing pumps (typically powerful triplex or quintuplex pumps) and a monitoring unit. Associated equipment includes fracturing tanks, one or more units for storage and handling of proppant, high-pressure treating iron, a chemical additive unit (used to accurately monitor chemical addition), fracking hose (low-pressure flexible hoses), and many gauges and meters for flow rate, fluid density, and treating pressure. Chemical additives are typically 0.5% of the total fluid volume. Fracturing equipment operates over a wide range of pressures and injection rates.
Well types
A distinction can be made between conventional, low-volume hydraulic fracturing, used to stimulate high-permeability reservoirs for a single well, and unconventional, high-volume hydraulic fracturing, used in the completion of tight gas and shale gas wells. High-volume hydraulic fracturing usually requires higher pressures than low-volume fracturing; the higher pressures are needed to push out larger volumes of fluid and proppant that extend farther from the borehole.
Horizontal drilling involves wellbores with a terminal drillhole completed as a "lateral" that extends parallel with the rock layer containing the substance to be extracted. Laterals may extend for thousands of feet, for example in the Barnett Shale basin in Texas and in the Bakken formation in North Dakota. In contrast, a vertical well only accesses the thickness of the rock layer. Horizontal drilling reduces surface disruptions, as fewer wells are required to access the same volume of rock.
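The contact-area argument can be made concrete with a back-of-the-envelope sketch; the lateral length and formation thickness below are assumed values:

```python
# Sketch: why a horizontal lateral contacts more reservoir than a
# vertical well. The lengths are illustrative assumptions.

formation_thickness_ft = 300.0   # assumed pay-zone thickness
lateral_length_ft = 5000.0       # assumed horizontal lateral length

# A vertical well's producing interval is limited to the formation
# thickness; a lateral runs parallel to the layer for its whole length.
contact_ratio = lateral_length_ft / formation_thickness_ft
print(f"lateral contacts ~{contact_ratio:.0f}x the interval of a vertical well")
```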
Drilling often plugs up the pore spaces at the wellbore wall, reducing permeability at and near the wellbore. This reduces flow into the borehole from the surrounding rock formation, and partially seals off the borehole from the surrounding rock. Low-volume hydraulic fracturing can be used to restore permeability.
Fracturing fluids
The main purposes of fracturing fluid are to extend fractures, add lubrication, change gel strength, and carry proppant into the formation. There are two methods of transporting proppant in the fluid: high-rate and high-viscosity. High-viscosity fracturing tends to cause large dominant fractures, while high-rate (slickwater) fracturing causes small, spread-out micro-fractures.
Water-soluble gelling agents (such as guar gum) increase viscosity and efficiently deliver proppant into the formation.
Fluid is typically a slurry of water, proppant, and chemical additives. Additionally, gels, foams, and compressed gases, including nitrogen, carbon dioxide and air, can be injected. Typically, 90% of the fluid is water and 9.5% is sand, with chemical additives accounting for about 0.5%. However, fracturing fluids have also been developed using liquefied petroleum gas (LPG) and propane. This process is called waterless fracturing.
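The stated composition translates directly into component volumes; in the sketch below, the total job volume is an assumed illustrative figure, not one taken from this article:

```python
# Sketch: component volumes for the typical composition stated above
# (90% water, 9.5% proppant, 0.5% chemical additives).

total_fluid_gal = 4_000_000  # assumed total volume for one treatment

composition = {"water": 0.90, "proppant": 0.095, "chemical additives": 0.005}
for component, fraction in composition.items():
    print(f"{component}: {fraction * total_fluid_gal:,.0f} gal")
```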
When propane is used, it is turned into vapor by the high pressure and high temperature. The propane vapor and natural gas both return to the surface and can be collected, making them easier to reuse or resell. Of the materials injected, only the propane returns to the surface; none of the chemicals used return.
The proppant is a granular material that prevents the created fractures from closing after the fracturing treatment. Types of proppant include silica sand, resin-coated sand, bauxite, and man-made ceramics. The choice of proppant depends on the type of permeability or grain strength needed. In some formations, where the pressure is great enough to crush grains of natural silica sand, higher-strength proppants such as bauxite or ceramics may be used. The most commonly used proppant is silica sand, though proppants of uniform size and shape, such as a ceramic proppant, are believed to be more effective.
The fracturing fluid varies depending on the type of fracturing desired, the conditions of the specific well being fractured, and the characteristics of the water. The fluid can be gel-, foam-, or slickwater-based. Fluid choices involve tradeoffs: more viscous fluids, such as gels, are better at keeping proppant in suspension, while less viscous and lower-friction fluids, such as slickwater, allow fluid to be pumped at higher rates, creating fractures farther from the wellbore. Important material properties of the fluid include viscosity, pH, and various rheological factors.
Water is mixed with sand and chemicals to create hydraulic fracturing fluid; approximately 40,000 gallons of chemicals are used per fracturing job.
A typical fracture treatment uses between 3 and 12 additive chemicals. Although there may be unconventional fracturing fluids, typical chemical additives can include one or more of the following:
Acids—hydrochloric acid or acetic acid is used in the pre-fracturing stage to clean the perforations and initiate fissures in the near-wellbore rock.
Sodium chloride (salt)—delays breakdown of gel polymer chains.
Polyacrylamide and other friction reducers—decrease turbulence in fluid flow and pipe friction, allowing the pumps to operate at a higher rate without greater pressure at the surface.
Ethylene glycol—prevents formation of scale deposits in the pipe.
Borate salts—used for maintaining fluid viscosity during the temperature increase.
Sodium and potassium carbonates—used for maintaining effectiveness of crosslinkers.
Glutaraldehyde—a biocide that prevents pipe corrosion from microbial activity.
Guar gum and other water-soluble gelling agents—increase the viscosity of the fracturing fluid to deliver proppant into the formation more efficiently.
Citric acid—used for corrosion prevention.
Isopropanol—used to winterize the chemicals so that they do not freeze.
The most common chemical used for hydraulic fracturing in the United States in 2005–2009 was methanol, while some other most widely used chemicals were isopropyl alcohol, 2-butoxyethanol, and ethylene glycol.
Typical fluid types are:
Conventional linear gels. These gels are cellulose derivatives (carboxymethyl cellulose, hydroxyethyl cellulose, carboxymethyl hydroxyethyl cellulose, hydroxypropyl cellulose, hydroxyethyl methyl cellulose), guar or its derivatives (hydroxypropyl guar, carboxymethyl hydroxypropyl guar), mixed with other chemicals.
Borate-crosslinked fluids. These are guar-based fluids cross-linked with boron ions (from an aqueous borax/boric acid solution). These gels have higher viscosity at pH 9 and above and are used to carry proppant. After the fracturing job, the pH is reduced to 3–4 so that the cross-links break and the less viscous gel can be pumped out.
Organometallic-crosslinked fluids – zirconium, chromium, antimony, titanium salts – are known to crosslink guar-based gels. The crosslinking mechanism is not reversible, so once the proppant is pumped down along with cross-linked gel, the fracturing part is done. The gels are broken down with appropriate breakers.
Aluminium phosphate-ester oil gels. Aluminium phosphate and ester oils are slurried to form a cross-linked gel. This was one of the first known gelling systems.
For slickwater fluids the use of sweeps is common. Sweeps are temporary reductions in the proppant concentration, which help ensure that the well is not overwhelmed with proppant. As the fracturing process proceeds, viscosity-reducing agents such as oxidizers and enzyme breakers are sometimes added to the fracturing fluid to deactivate the gelling agents and encourage flowback. Such oxidizers react with and break down the gel, reducing the fluid's viscosity and ensuring that no proppant is pulled from the formation. An enzyme acts as a catalyst for breaking down the gel. Sometimes pH modifiers are used to break down the crosslink at the end of a hydraulic fracturing job, since many crosslinked gels require a pH buffer system to stay viscous. At the end of the job, the well is commonly flushed with water under pressure (sometimes blended with a friction-reducing chemical). Some (but not all) injected fluid is recovered. This fluid is managed by several methods, including underground injection control, treatment, discharge, recycling, and temporary storage in pits or containers. New technology is continually being developed to better handle waste water and improve re-usability.
Fracture monitoring
Measurements of pressure and rate during the growth of a hydraulic fracture, together with knowledge of the fluid properties and proppant being injected into the well, provide the most common and simplest method of monitoring a hydraulic fracture treatment. These data, along with knowledge of the underground geology, can be used to model information such as the length, width, and conductivity of a propped fracture.
Radionuclide monitoring
Injection of radioactive tracers along with the fracturing fluid is sometimes used to determine the injection profile and the location of created fractures. Radiotracers are selected to have readily detectable radiation, appropriate chemical properties, and a half-life and toxicity level that will minimize initial and residual contamination. Radioactive isotopes chemically bonded to glass (sand) and/or resin beads may also be injected to track fractures. For example, plastic pellets coated with 10 GBq of Ag-110m may be added to the proppant, or sand may be labelled with Ir-192, so that the proppant's progress can be monitored. Radiotracers such as Tc-99m and I-131 are also used to measure flow rates. The Nuclear Regulatory Commission publishes guidelines which list a wide range of radioactive materials in solid, liquid and gaseous forms that may be used as tracers, and limit the amount of each radionuclide that may be used per injection and per well.
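The role of half-life in limiting residual contamination can be illustrated with simple decay arithmetic; the half-life below is roughly that of Ir-192 and the scenario is hypothetical:

```python
# Sketch: radioactive decay of an injected tracer.
# Activity decays as A = A0 * 2**(-t / half_life); values illustrative.

initial_activity_gbq = 10.0   # e.g. the Ag-110m pellet figure above
half_life_days = 74.0         # approximately Ir-192's half-life (assumed)

for t_days in (0, 74, 148, 365):
    activity = initial_activity_gbq * 2 ** (-t_days / half_life_days)
    print(f"after {t_days:3d} days: {activity:.2f} GBq")
```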
A new technique in well-monitoring involves fiber-optic cables outside the casing. Using the fiber optics, temperatures can be measured every foot along the well – even while the wells are being fracked and pumped. By monitoring the temperature of the well, engineers can determine how much hydraulic fracturing fluid different parts of the well use as well as how much natural gas or oil they collect, during hydraulic fracturing operation and when the well is producing.
Microseismic monitoring
For more advanced applications, microseismic monitoring is sometimes used to estimate the size and orientation of induced fractures. Microseismic activity is measured by placing an array of geophones in a nearby wellbore. By mapping the location of any small seismic events associated with the growing fracture, the approximate geometry of the fracture is inferred. Tiltmeter arrays deployed on the surface or down a well provide another technology for monitoring strain.
Microseismic mapping is geophysically very similar to seismology. In earthquake seismology, seismometers scattered on or near the surface of the earth record S-waves and P-waves that are released during an earthquake event, allowing motion along the fault plane to be estimated and its location in the Earth's subsurface to be mapped. Hydraulic fracturing induces an increase in formation stress proportional to the net fracturing pressure, as well as an increase in pore pressure due to leakoff. Tensile stresses are generated ahead of the fracture's tip, generating large amounts of shear stress. The increases in pore water pressure and in formation stress combine and affect weaknesses near the hydraulic fracture, such as natural fractures, joints, and bedding planes.
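The seismological analogy can be illustrated with the classic P–S arrival-time distance estimate, on which event-location methods build; the velocities and time gap below are assumed values:

```python
# Sketch: distance to a seismic event from the P-S arrival-time gap.
# A sensor records the faster P-wave first, then the slower S-wave;
# the gap between the two arrivals gives the distance to the source.

vp = 4000.0         # assumed P-wave velocity, m/s
vs = 2300.0         # assumed S-wave velocity, m/s
ts_minus_tp = 0.05  # assumed S-P arrival-time difference, s

# d/vs - d/vp = ts_minus_tp  =>  d = ts_minus_tp / (1/vs - 1/vp)
distance_m = ts_minus_tp / (1.0 / vs - 1.0 / vp)
print(f"event distance: {distance_m:.0f} m")  # ~271 m
```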
Different methods have different location errors and advantages. Accuracy of microseismic event mapping is dependent on the signal-to-noise ratio and the distribution of sensors. Accuracy of events located by seismic inversion is improved by sensors placed in multiple azimuths from the monitored borehole. In a downhole array location, accuracy of events is improved by being close to the monitored borehole (high signal-to-noise ratio).
Monitoring of microseismic events induced by reservoir stimulation has become a key aspect in the evaluation and optimization of hydraulic fractures. The main goal of hydraulic fracture monitoring is to completely characterize the induced fracture structure and the distribution of conductivity within a formation. Geomechanical analysis, such as understanding a formation's material properties, in-situ conditions, and geometries, helps monitoring by providing a better definition of the environment in which the fracture network propagates. The next task is to determine the location of proppant within the fracture and the distribution of fracture conductivity. This can be monitored using multiple types of techniques to finally develop a reservoir model that accurately predicts well performance.
Horizontal completions
Since the early 2000s, advances in drilling and completion technology have made horizontal wellbores much more economical. Horizontal wellbores allow far greater exposure to a formation than conventional vertical wellbores. This is particularly useful in shale formations which do not have sufficient permeability to produce economically with a vertical well. Such wells, when drilled onshore, are now usually hydraulically fractured in a number of stages, especially in North America. The type of wellbore completion is used to determine how many times a formation is fractured, and at what locations along the horizontal section.
In North America, shale reservoirs such as the Bakken, Barnett, Montney, Haynesville, Marcellus, and most recently the Eagle Ford, Niobrara and Utica shales are drilled horizontally through the producing intervals, completed and fractured. The method by which the fractures are placed along the wellbore is most commonly achieved by one of two methods, known as "plug and perf" and "sliding sleeve".
The wellbore for a plug-and-perf job is generally composed of standard steel casing, cemented or uncemented, set in the drilled hole. Once the drilling rig has been removed, a wireline truck is used to perforate near the bottom of the well, and then fracturing fluid is pumped. Then the wireline truck sets a plug in the well to temporarily seal off that section so the next section of the wellbore can be treated. Another stage is pumped, and the process is repeated along the horizontal length of the wellbore.
The wellbore for the sliding sleeve technique is different in that the sliding sleeves are included at set spacings in the steel casing at the time it is set in place. The sliding sleeves are usually all closed at this time. When the well is due to be fractured, the bottom sliding sleeve is opened using one of several activation techniques and the first stage gets pumped. Once finished, the next sleeve is opened, concurrently isolating the previous stage, and the process repeats. For the sliding sleeve method, wireline is usually not required.
These completion techniques may allow for more than 30 stages to be pumped into the horizontal section of a single well if required, which is far more than would typically be pumped into a vertical well that had far fewer feet of producing zone exposed.
Uses
Hydraulic fracturing is used to increase the rate at which substances such as petroleum or natural gas can be recovered from subterranean natural reservoirs. Reservoirs are typically porous sandstones, limestones or dolomite rocks, but also include "unconventional reservoirs" such as shale rock or coal beds. Hydraulic fracturing enables the extraction of natural gas and oil from rock formations deep below the earth's surface, generally far below typical groundwater reservoir levels. At such depths, there may be insufficient permeability or reservoir pressure to allow natural gas and oil to flow from the rock into the wellbore at high economic return. Thus, creating conductive fractures in the rock is instrumental in extraction from naturally impermeable shale reservoirs. Permeability is measured in the microdarcy to nanodarcy range. Fractures provide a conductive path connecting a larger volume of reservoir to the well. So-called "super fracking" creates cracks deeper in the rock formation to release more oil and gas, and increases efficiency. The yield for typical shale bores generally falls off after the first year or two, but the peak producing life of a well can be extended to several decades.
Non-oil/gas uses
While the main industrial use of hydraulic fracturing is in stimulating production from oil and gas wells, hydraulic fracturing is also applied:
To stimulate groundwater wells
To precondition or induce rock cave-ins in mining
As a means of enhancing waste remediation, usually hydrocarbon waste or spills
To dispose of waste by injection deep into rock
To measure stress in the Earth
For electricity generation in enhanced geothermal systems
To increase injection rates for geologic sequestration of CO2
To store electrical energy (pumped storage hydroelectricity)
Since the late 1970s, hydraulic fracturing has been used, in some cases, to increase the yield of drinking water from wells in a number of countries, including the United States, Australia, and South Africa.
Economic effects
Hydraulic fracturing has been seen as one of the key methods of extracting unconventional oil and unconventional gas resources. According to the International Energy Agency, substantial volumes of shale gas, tight gas, and coalbed methane remain technically recoverable. As a rule, formations of these resources have lower permeability than conventional gas formations. Therefore, depending on the geological characteristics of the formation, specific technologies such as hydraulic fracturing are required. Although there are also other methods to extract these resources, such as conventional drilling or horizontal drilling, hydraulic fracturing is one of the key methods making their extraction economically viable. The multi-stage fracturing technique has facilitated the development of shale gas and light tight oil production in the United States and is believed to do so in other countries with unconventional hydrocarbon resources.
A large majority of studies indicate that hydraulic fracturing in the United States has had a strong positive economic benefit so far. The Brookings Institution estimates that the benefits of shale gas alone have led to a net economic benefit of $48 billion per year. Most of this benefit falls within the consumer and industrial sectors, due to the significantly reduced prices for natural gas. Other studies have suggested that the economic benefits are outweighed by the externalities, and that the levelized cost of electricity (LCOE) from less carbon- and water-intensive sources is lower.
The primary benefit of hydraulic fracturing is to offset imports of natural gas and oil, where the cost paid to producers otherwise exits the domestic economy. However, shale oil and gas are highly subsidised in the US and have not yet covered production costs, meaning that the cost of hydraulic fracturing is paid for in income taxes and in many cases is up to double the cost paid at the pump.
Research suggests that hydraulic fracturing wells have an adverse effect on agricultural productivity in the vicinity of the wells. One paper found "that productivity of an irrigated crop decreases by 5.7% when a well is drilled during the agriculturally active months within 11–20 km radius of a producing township. This effect becomes smaller and weaker as the distance between township and wells increases." The findings imply that the introduction of hydraulic fracturing wells to Alberta cost the province $14.8 million in 2014 due to the decline in crop productivity.
The Energy Information Administration of the US Department of Energy estimates that 45% of US gas supply will come from shale gas by 2035 (with the vast majority of this replacing conventional gas, which has a lower greenhouse-gas footprint).
Public debate
Politics and public policy
Popular movement and civil society organizations
An anti-fracking movement has emerged both internationally, with the involvement of international environmental organizations and nations such as France, and locally in affected areas such as Balcombe in Sussex, where the Balcombe drilling protest was in progress during mid-2013. The considerable opposition to hydraulic fracturing activities in local townships in the United States has led companies to adopt a variety of public relations measures to reassure the public, including the employment of former military personnel with training in psychological warfare operations. According to Matt Pitzarella, the communications director at Range Resources, employees trained in the Middle East have been valuable to Range Resources in Pennsylvania when dealing with emotionally charged township meetings and advising townships on zoning and local ordinances dealing with hydraulic fracturing.
There have been many protests directed at hydraulic fracturing. For example, ten people were arrested in 2013 during an anti-fracking protest near New Matamoras, Ohio, after they illegally entered a development zone and latched themselves to drilling equipment. In northwest Pennsylvania, there was a drive-by shooting at a well site, in which someone shot two rounds of a small-caliber rifle in the direction of a drilling rig. In Washington County, Pennsylvania, a contractor working on a gas pipeline found a pipe bomb that had been placed where a pipeline was to be constructed, which local authorities said would have caused a "catastrophe" had they not discovered and detonated it.
U.S. government and corporate lobbying
The United States Department of State established the Global Shale Gas Initiative to persuade governments around the world to give concessions to the major oil and gas companies to set up fracking operations. A document from the United States diplomatic cables leak shows that, as part of this project, U.S. officials convened conferences for foreign government officials that featured presentations by major oil and gas company representatives and by public relations professionals with expertise on how to assuage populations of target countries whose citizens were often quite hostile to fracking on their lands. The US government project succeeded, as many countries on several continents acceded to the idea of granting concessions for fracking; Poland, for example, agreed to permit fracking by the major oil and gas corporations on nearly a third of its territory. The US Export-Import Bank, an agency of the US government, provided $4.7 billion in financing for fracking operations set up since 2010 in Queensland, Australia.
Alleged Russian state advocacy
In 2014 a number of European officials suggested that several major European protests against hydraulic fracturing (with mixed success in Lithuania and Ukraine) may be partially sponsored by Gazprom, Russia's state-controlled gas company. The New York Times suggested that Russia saw its natural gas exports to Europe as a key element of its geopolitical influence, and that this market would diminish if hydraulic fracturing is adopted in Eastern Europe, as it opens up significant shale gas reserves in the region. Russian officials have on numerous occasions made public statements to the effect that hydraulic fracturing "poses a huge environmental problem".
Current fracking operations
Hydraulic fracturing is currently taking place in the United States in Arkansas, California, Colorado, Louisiana, North Dakota, Oklahoma, Pennsylvania, Texas, Virginia, West Virginia, and Wyoming. Other states, such as Alabama, Indiana, Michigan, Mississippi, New Jersey, New York, and Ohio, are either considering or preparing for drilling using this method. Maryland and Vermont have permanently banned hydraulic fracturing, and New York and North Carolina have instituted temporary bans. New Jersey currently has a bill before its legislature to extend a 2012 moratorium on hydraulic fracturing that recently expired. Although a hydraulic fracturing moratorium was recently lifted in the United Kingdom, the government is proceeding cautiously because of concerns about earthquakes and the environmental effect of drilling. Hydraulic fracturing is currently banned in France and Bulgaria.
Documentary films
Josh Fox's 2010 Academy Award-nominated film Gasland became a center of opposition to hydraulic fracturing of shale. The movie presented problems with groundwater contamination near well sites in Pennsylvania, Wyoming and Colorado. Energy in Depth, an oil and gas industry lobbying group, called the film's facts into question; in response, a rebuttal of Energy in Depth's claims of inaccuracy was posted on Gasland's website. The Director of the Colorado Oil and Gas Conservation Commission (COGCC) offered to be interviewed for the film on the condition that he could review what was included from the interview in the final cut, but Fox declined the offer. ExxonMobil, Chevron Corporation and ConocoPhillips aired advertisements during 2011 and 2012 that claimed to describe the economic and environmental benefits of natural gas and argued that hydraulic fracturing was safe.
The 2012 film Promised Land, starring Matt Damon, takes on hydraulic fracturing. The gas industry countered the film's criticisms of hydraulic fracturing with flyers, and Twitter and Facebook posts.
In January 2013, Northern Irish journalist and filmmaker Phelim McAleer released a crowdfunded documentary called FrackNation as a response to the statements made by Fox in Gasland, claiming it "tells the truth about fracking for natural gas". FrackNation premiered on Mark Cuban's AXS TV. The premiere corresponded with the release of Promised Land.
In April 2013, Josh Fox released Gasland 2, his "international odyssey uncovering a trail of secrets, lies and contamination related to hydraulic fracking". The film argues that the gas industry's portrayal of natural gas as a clean and safe alternative to oil is a myth, and that hydraulically fractured wells inevitably leak over time, contaminating water and air, hurting families, and endangering the Earth's climate with the potent greenhouse gas methane.
In 2014, Scott Cannon of Video Innovations released the documentary The Ethics of Fracking. The film covers the political, spiritual, scientific, medical, and professional points of view on hydraulic fracturing. It also examines the way the gas industry portrays hydraulic fracturing in its advertising.
In 2015, the Canadian documentary film Fractured Land had its world premiere at the Hot Docs Canadian International Documentary Festival.
Research issues
The funding source of a research study is typically a focal point of controversy. Concerns have been raised about research funded by foundations and corporations, or by environmental groups, which can at times lead to at least the appearance of unreliable studies. Several organizations, researchers, and media outlets have reported difficulty in conducting and reporting the results of studies on hydraulic fracturing due to industry and governmental pressure, and have expressed concern over possible censoring of environmental reports. Some have argued there is a need for more research into the environmental and health effects of the technique.
Health risks
There is concern over the possible adverse public health implications of hydraulic fracturing activity. A 2013 review on shale gas production in the United States stated, "with increasing numbers of drilling sites, more people are at risk from accidents and exposure to harmful substances used at fractured wells." A 2011 hazard assessment recommended full disclosure of chemicals used for hydraulic fracturing and drilling as many have immediate health effects, and many may have long-term health effects.
In June 2014 Public Health England published a review of the potential public health impacts of exposures to chemical and radioactive pollutants as a result of shale gas extraction in the UK, based on the examination of literature and data from countries where hydraulic fracturing already occurs. The executive summary of the report stated: "An assessment of the currently available evidence indicates that the potential risks to public health from exposure to the emissions associated with shale gas extraction will be low if the operations are properly run and regulated. Most evidence suggests that contamination of groundwater, if it occurs, is most likely to be caused by leakage through the vertical borehole. Contamination of groundwater from the underground hydraulic fracturing process itself (i.e. the fracturing of the shale) is unlikely. However, surface spills of hydraulic fracturing fluids or wastewater may affect groundwater, and emissions to air also have the potential to impact on health. Where potential risks have been identified in the literature, the reported problems are typically a result of operational failure and a poor regulatory environment."
A 2012 report prepared for the European Union Directorate-General for the Environment identified potential risks to humans from air pollution and ground water contamination posed by hydraulic fracturing. This led to a series of recommendations in 2014 to mitigate these concerns. A 2012 guidance for pediatric nurses in the US said that hydraulic fracturing had a potential negative impact on public health and that pediatric nurses should be prepared to gather information on such topics so as to advocate for improved community health.
A 2017 study in The American Economic Review found that "additional well pads drilled within 1 kilometer of a community water system intake increases shale gas-related contaminants in drinking water."
A 2022 study conducted by the Harvard T.H. Chan School of Public Health and published in Nature Energy found that elderly people living near or downwind of unconventional oil and gas development (UOGD), which involves extraction methods including fracking, are at greater risk of early death compared with elderly people who do not live near such operations.
Statistics collected by the U.S. Department of Labor and analyzed by the U.S. Centers for Disease Control and Prevention show a correlation between drilling activity and the number of occupational injuries related to drilling and motor vehicle accidents, explosions, falls, and fires. Extraction workers are also at risk for developing pulmonary diseases, including lung cancer and silicosis (the latter because of exposure to silica dust generated from rock drilling and the handling of sand). The U.S. National Institute for Occupational Safety and Health (NIOSH) identified exposure to airborne silica as a health hazard to workers conducting some hydraulic fracturing operations. NIOSH and OSHA issued a joint hazard alert on this topic in June 2012.
Additionally, the extraction workforce is at increased risk for radiation exposure. Fracking activities often require drilling into rock that contains naturally occurring radioactive material (NORM), such as radon, thorium, and uranium.
A report in the Canadian Medical Journal identified 55 factors associated with fracking that may cause cancer, including 20 that have been shown to increase the risk of leukemia and lymphoma. A Yale Public Health analysis warns that millions of people living within a mile of fracking wells may have been exposed to these chemicals.
Environmental effects
The potential environmental effects of hydraulic fracturing include air emissions and climate change, high water consumption, groundwater contamination, land use, risk of earthquakes, noise pollution, and various health effects on humans. Air emissions are primarily methane that escapes from wells, along with industrial emissions from equipment used in the extraction process. Modern UK and EU regulation requires zero emissions of methane, a potent greenhouse gas. Escape of methane is a bigger problem in older wells than in ones built under more recent EU legislation.
In December 2016 the United States Environmental Protection Agency (EPA) issued the "Hydraulic Fracturing for Oil and Gas: Impacts from the Hydraulic Fracturing Water Cycle on Drinking Water Resources in the United States (Final Report)." The EPA found scientific evidence that hydraulic fracturing activities can impact drinking water resources. A few of the main reasons why drinking water can be contaminated according to the EPA are:
Water removal to be used for fracking in times or areas of low water availability
Spills while handling fracking fluids and chemicals that result in large volumes or high concentrations of chemicals reaching groundwater resources
Injection of fracking fluids into wells with inadequate mechanical integrity, allowing gases or liquids to move to groundwater resources
Injection of fracking fluids directly into groundwater resources
Discharge of inadequately treated hydraulic fracturing wastewater to surface water
Disposal or storage of fracking wastewater in unlined pits resulting in contamination of groundwater resources.
The lifecycle greenhouse gas emissions of shale oil are 21%-47% higher than those of conventional oil, while emissions from unconventional gas are from 6% lower to 43% higher than the emissions of conventional gas.
Hydraulic fracturing uses large volumes of water per well, and additional water is used when wells are refractured. According to the Oxford Institute for Energy Studies, greater volumes of fracturing fluid are required in Europe, where shale depths average 1.5 times those in the U.S. Surface water may be contaminated through spillage and improperly built and maintained waste pits, and ground water can be contaminated if the fluid is able to escape the formation being fractured (through, for example, abandoned wells, fractures, and faults) or by produced water (the returning fluids, which also contain dissolved constituents such as minerals and brine waters). The possibility of groundwater contamination from brine and fracturing fluid leakage through old abandoned wells is low. Produced water is managed by underground injection, municipal and commercial wastewater treatment and discharge, self-contained systems at well sites or fields, and recycling to fracture future wells. Typically less than half of the produced water used to fracture the formation is recovered.
In the United States, over 12 million acres are being used for fossil fuel development, an area equivalent to six Yellowstone National Parks. Land is needed at each drill pad for surface installations. Well pad and supporting structure construction significantly fragments landscapes, which likely has negative effects on wildlife, and these sites need to be remediated after wells are exhausted. Research indicates that the cost of effects on ecosystem services (i.e., those processes that the natural world provides to humanity) has reached over $250 million per year in the U.S. Each well pad (on average 10 wells per pad) requires about 800 to 2,500 days of noisy activity during the preparatory and hydraulic fracturing process, which affects both residents and local wildlife. In addition, noise is created by the continuous truck traffic (sand, etc.) needed in hydraulic fracturing. Research is underway to determine whether human health has been affected by air and water pollution, and rigorous adherence to safety procedures and regulation is required to avoid harm and to manage the risk of accidents that could cause harm.
In July 2013, the US Federal Railroad Administration listed oil contamination by hydraulic fracturing chemicals as "a possible cause" of corrosion in oil tank cars.
Hydraulic fracturing has sometimes been linked to induced seismicity or earthquakes. The magnitude of these events is usually too small to be detected at the surface, although tremors attributed to fluid injection into disposal wells have been large enough to have often been felt by people, and to have caused property damage and possibly injuries. A U.S. Geological Survey report found that up to 7.9 million people in several states face an earthquake risk similar to that of California, with hydraulic fracturing and similar practices being a prime contributing factor.
Microseismic events are often used to map the horizontal and vertical extent of the fracturing. A better understanding of the geology of the area being fracked and used for injection wells can be helpful in mitigating the potential for significant seismic events.
People obtain drinking water from either surface water, which includes rivers and reservoirs, or groundwater aquifers, accessed by public or private wells. There are already a host of documented instances in which nearby groundwater has been contaminated by fracking activities, requiring residents with private wells to obtain outside sources of water for drinking and everyday use.
Per- and polyfluoroalkyl substances, also known as "PFAS" or "forever chemicals", have been linked to cancer and birth defects. Some chemicals used in fracking persist in the environment and can eventually break down into PFAS. These chemicals can escape from drilling sites into the groundwater, and PFAS are able to leak into underground wells that store millions of gallons of wastewater.
Despite these health concerns and efforts to institute a moratorium on fracking until its environmental and health effects are better understood, the United States continues to rely heavily on fossil fuel energy. In 2017, 37% of annual U.S. energy consumption was derived from petroleum, 29% from natural gas, 14% from coal, and 9% from nuclear sources, with only 11% supplied by renewable energy such as wind and solar power.
In 2022 the USA experienced a fracking boom, as the war in Ukraine led to a massive increase in approvals of new drilling. The planned drilling would release 140 billion tons of carbon, about four times annual global emissions.
Regulations
Countries using or considering use of hydraulic fracturing have implemented different regulations, including developing federal and regional legislation and local zoning limitations. In 2011, after public pressure, France became the first nation to ban hydraulic fracturing, based on the precautionary principle as well as the principle of preventive and corrective action of environmental hazards. The ban was upheld by an October 2013 ruling of the Constitutional Council. Some other countries, such as Scotland, have placed a temporary moratorium on the practice due to public health concerns and strong public opposition. Countries like South Africa have lifted their bans, choosing to focus on regulation instead of outright prohibition. Germany has announced draft regulations that would allow using hydraulic fracturing for the exploitation of shale gas deposits, with the exception of wetland areas. In China, regulation of shale gas still faces hurdles, as it has complex interrelations with other regulatory regimes, especially trade. Many states in Australia have either permanently or temporarily banned fracturing for hydrocarbons. In 2019, hydraulic fracturing was banned in the UK.
The European Union has adopted a recommendation for minimum principles for using high-volume hydraulic fracturing. Its regulatory regime requires full disclosure of all additives. In the United States, the Ground Water Protection Council launched FracFocus.org, an online voluntary disclosure database for hydraulic fracturing fluids funded by oil and gas trade groups and the U.S. Department of Energy. Hydraulic fracturing is excluded from the Safe Drinking Water Act's underground injection control regulation, except when diesel fuel is used; the EPA oversees the issuance of drilling permits when diesel fuel is employed.
In 2012, Vermont became the first state in the United States to ban hydraulic fracturing. On 17 December 2014, New York became the second state to issue a complete ban on any hydraulic fracturing due to potential risks to human health and the environment.
| Technology | Fuel | null |
42354952 | https://en.wikipedia.org/wiki/Tamisiocarididae | Tamisiocarididae | Tamisiocarididae is a family of radiodonts, extinct marine animals related to arthropods, that bore finely-spined appendages that were presumably used in filter-feeding. When first discovered, the clade was named Cetiocaridae after a speculative evolution artwork, Bearded Ceticaris by John Meszaros, that depicted a hypothetical filter-feeding radiodont at a time before any were known to exist. However, the family name was not valid according to the International Code of Zoological Nomenclature, as no real genus named "Cetiocaris" exists, and in 2019 it was formally replaced by the name Tamisiocarididae, after the only valid genus of the clade at the time. The family is only known from Series 2 of the Cambrian, unlike other radiodont families, which persisted longer into the Cambrian. All known species would have lived in tropical or subtropical waters, suggesting a preference for warmer waters.
Description
Like most radiodonts, tamisiocaridids have spiny frontal appendages. In this family, however, the auxiliary spines are fine and densely arranged, modified for filter feeding in a manner similar to modern basking sharks and mysticete whales. For example, Tamisiocaris is estimated to have fed on prey roughly a millimeter in size.
Classification
Tamisiocarididae was originally named Cetiocaridae. In the 2013 speculative paleoart book All Your Yesterdays, paleoartist John Meszaros depicted a hypothetical filter-feeding anomalocaridid he named "Ceticaris". This artwork inspired the name of Cetiocaridae. However, as no genus "Cetiocaris" actually exists, the name Cetiocaridae does not comply with article 29 of the International Code of Zoological Nomenclature and is invalid. The family Tamisiocarididae was subsequently devised as a replacement name for the clade.
Cetiocaridae was originally defined phylogenetically as all species more closely related to Tamisiocaris borealis than to Anomalocaris canadensis, Amplectobelua symbrachiata, or Hurdia victoria.
Distribution
Tamisiocaridid fossils have been found in the Emu Bay Shale of Australia, the Sirius Passet lagerstätte of Greenland, and the Kinzers Formation of the United States. Their fossils date to Stage 3 and Stage 4 of the Cambrian.
| Biology and health sciences | Fossil arthropods | Animals |
42366612 | https://en.wikipedia.org/wiki/Rainbow%20matching | Rainbow matching | In the mathematical discipline of graph theory, a rainbow matching in an edge-colored graph is a matching in which all the edges have distinct colors.
Definition
Given an edge-colored graph G = (V, E), a rainbow matching M in G is a set of pairwise non-adjacent edges, that is, no two edges share a common vertex, such that all the edges in the set have distinct colors.
A maximum rainbow matching is a rainbow matching that contains the largest possible number of edges.
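For small graphs, a maximum rainbow matching can be found by exhaustive search. The following is a minimal brute-force sketch; the edge representation and function names are my own, and the running time is exponential, so it only suits tiny instances:

```python
# Brute-force sketch: edges are (u, v, color) triples.
from itertools import combinations

def is_rainbow_matching(edges):
    vertices = [x for (u, v, _) in edges for x in (u, v)]
    colors = [c for (_, _, c) in edges]
    # pairwise non-adjacent (all endpoints distinct) and all colors distinct
    return len(set(vertices)) == 2 * len(edges) and len(set(colors)) == len(edges)

def max_rainbow_matching(edges):
    for k in range(len(edges), 0, -1):      # try larger sizes first
        for subset in combinations(edges, k):
            if is_rainbow_matching(subset):
                return list(subset)
    return []

# K_{2,2} with each perfect matching monochromatic (the coloring
# discussed below under "Existence when each edge has a single color"):
edges = [("x1", "y1", "green"), ("x2", "y2", "green"),
         ("x1", "y2", "blue"),  ("x2", "y1", "blue")]
print(max_rainbow_matching(edges))  # size 1 only, e.g. [("x1", "y1", "green")]
```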
History
Rainbow matchings are of particular interest given their connection to transversals of Latin squares.
Denote by K_{n,n} the complete bipartite graph on n + n vertices. Every proper n-edge-coloring of K_{n,n} corresponds to a Latin square of order n. A rainbow matching then corresponds to a transversal of the Latin square, meaning a selection of n positions, one in each row and each column, containing distinct entries.
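For concreteness, here is a small worked example (constructed for illustration, not taken from a cited source): an order-3 Latin square read as a proper 3-edge-coloring of K_{3,3}, with a transversal marked:

```latex
% Entry (i,j) is the color of edge (x_i, y_j). The starred cells form
% a transversal: one cell per row and per column, all entries distinct,
% i.e. a rainbow perfect matching of K_{3,3}.
\begin{pmatrix}
1^{*} & 2     & 3     \\
2     & 3^{*} & 1     \\
3     & 1     & 2^{*}
\end{pmatrix}
```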
This connection between transversals of Latin squares and rainbow matchings in K_{n,n} has inspired additional interest in the study of rainbow matchings in triangle-free graphs.
Existence when each edge has a single color
An edge-coloring is called proper if each edge has a single color, and each two edges of the same color have no vertex in common.
A proper edge-coloring does not guarantee the existence of a perfect rainbow matching. For example, consider the graph K_{2,2}, the complete bipartite graph on 2 + 2 vertices with parts {x1, x2} and {y1, y2}. Suppose the edges (x1, y1) and (x2, y2) are colored green, and the edges (x1, y2) and (x2, y1) are colored blue. This is a proper coloring, but there are only two perfect matchings, and each of them is colored by a single color. This raises the question: when is a large rainbow matching guaranteed to exist?
Bounds depending only on the number of vertices
Much of the research on this question was published using the terminology of Latin transversals in Latin squares. Translated into the rainbow matching terminology:
In 1967, H. J. Ryser conjectured that, when n is odd, every proper edge-coloring of K_{n,n} has a rainbow matching of size n.
In 1975, S. K. Stein and Brualdi conjectured that, when n is even, every proper edge-coloring of K_{n,n} has a rainbow matching of size n − 1 (it is known that a rainbow matching of size n need not exist in this case).
A more general conjecture of Stein is that a rainbow matching of size n − 1 exists not only for proper edge-colorings, but for any coloring in which each color appears on exactly n edges.
Some weaker versions of these conjectures have been proved:
Every proper edge-coloring of K_{n,n} has a rainbow matching of size 2n/3.
Every proper edge-coloring of K_{n,n} has a rainbow matching of size n − √n.
Every proper edge-coloring of K_{n,n} has a rainbow matching of size n − O(log² n).
Every proper edge-coloring of K_{n,n} has a rainbow matching of size n − O(log n / log log n).
Every proper edge-coloring of K_{n,n} has a rainbow matching of size n − 1. (Preprint)
Bounds depending on the minimum degree
Wang asked whether there is a function f(d) such that every properly edge-colored graph G with minimum degree d and at least f(d) vertices must have a rainbow matching of size d. Obviously at least 2d vertices are necessary, but how many are sufficient?
Diemunsch et al. answered this question in the affirmative and showed that, given a properly edge-colored graph G with minimum degree d and order at least a constant multiple of d, there exists a rainbow matching of size d in G.
This bound was later improved by András Gyárfás and Gábor N. Sárközy, who reduced the number of vertices required. They also gave a lower bound on the size of the rainbow matching guaranteed in any properly edge-colored graph as a function of its number of vertices. These are the best known estimates to date.
Existence when the same edge may have different colors
Suppose that each edge may have several different colors, while each two edges of the same color must still have no vertex in common. In other words, each color is a matching. How many colors are needed in order to guarantee the existence of a rainbow matching?
In complete bipartite graphs
Drisko studied this question using the terminology of Latin rectangles. He proved that, for any n ≥ 1, in the complete bipartite graph K_{n,n}, any family of 2n − 1 matchings (= colors) of size n has a perfect rainbow matching (of size n). He applied this theorem to questions about group actions and difference sets.
Drisko also showed that 2n − 1 matchings may be necessary: consider a family of 2n − 2 matchings, of which n − 1 are copies of one perfect matching M1 and the other n − 1 are copies of a second perfect matching M2 whose union with M1 forms a Hamiltonian cycle (see the sketch below). Then the largest rainbow matching is of size n − 1 (e.g. take one edge from each of the n − 1 copies of M1).
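The extremal family can be written explicitly; the index notation below is a standard reconstruction rather than Drisko's original typography:

```latex
% Drisko's extremal family: n-1 copies of M_1 and n-1 copies of M_2
% in K_{n,n}, with indices taken modulo n.
M_1 = \{\, (x_i, y_i) : 1 \le i \le n \,\}, \qquad
M_2 = \{\, (x_i, y_{i+1}) : 1 \le i \le n \,\}
% M_1 \cup M_2 is a single 2n-cycle whose only perfect matchings are
% M_1 and M_2 themselves. A rainbow matching of size n would need n
% distinct colors on one of them, but each appears as only n-1 colors.
```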
Alon showed that Drisko's theorem implies an older result in additive number theory.
In general bipartite graphs
Aharoni and Berger generalized Drisko's theorem to any bipartite graph: any family of 2n − 1 matchings of size n in a bipartite graph has a rainbow matching of size n.
Aharoni, Kotlar and Ziv showed that Drisko's extremal example is unique in any bipartite graph.
In general graphs
In general graphs, 2n − 1 matchings are no longer sufficient. When n is even, one can add a suitably chosen matching to Drisko's example and get a family of 2n − 1 matchings without any rainbow matching of size n.
Aharoni, Berger, Chudnovsky, Howard and Seymour proved that, in a general graph, 3n − 2 matchings (= colors) are always sufficient. It is not known whether this is tight: the currently known lower bounds, for both even and odd n, are smaller.
Rainbow fractional matchings
A fractional matching is a set of edges with a non-negative weight assigned to each edge, such that the sum of the weights adjacent to each vertex is at most 1 (see the linear-programming sketch after this list). The size of a fractional matching is the sum of the weights of all edges. It is a generalization of a matching, and can be used to generalize both the colors and the rainbow matching:
Instead of requiring that each color be a matching of size n, the requirement is weakened: each "color" can be an arbitrary set of edges, but it should admit a fractional matching of size at least n.
Instead of looking for a rainbow matching, we look for a rainbow fractional matching - a fractional matching in which each edge with a positive weight has a different color.
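Written as a linear program, the definition above looks as follows; this is a standard formulation, and the notation (G, w_e) is mine rather than the article's:

```latex
% Fractional matching LP on G = (V, E): a matching is the special case
% in which every weight w_e is 0 or 1.
\max \sum_{e \in E} w_e
\quad \text{subject to} \quad
\sum_{e \ni v} w_e \le 1 \ \ \forall v \in V,
\qquad w_e \ge 0 \ \ \forall e \in E.
```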
It is known that, in a bipartite graph, the maximum fractional matching size equals the maximum matching size. Therefore, the theorem of Aharoni and Berger is equivalent to the following. Let n be any positive integer. Given any family of 2n − 1 fractional matchings (= colors) of size n in a bipartite graph, there exists a rainbow fractional matching of size n.
Aharoni, Holzman and Jiang extended this theorem to arbitrary graphs as follows. Let n be any positive integer or half-integer. Any family of 2n fractional matchings (= colors) of size at least n in an arbitrary graph has a rainbow fractional matching of size n. The 2n is the smallest possible for fractional matchings in arbitrary graphs: the extremal case is constructed using an odd-length cycle.
Partial proof
For the case of perfect fractional matchings, both of the above theorems can be derived from the colorful Carathéodory theorem.
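For reference, the conical version of the colorful Carathéodory theorem used below can be stated as follows (a standard formulation; the symbols are mine):

```latex
% Colorful Caratheodory (conical form): if b lies in the conical hull
% of each of d sets S_1, ..., S_d in R^d, then one can pick a single
% point s_i from each S_i such that b lies in cone(s_1, ..., s_d).
b \in \mathrm{cone}(S_i)\ \ \text{for } i = 1, \dots, d
\;\Longrightarrow\;
\exists\, s_i \in S_i:\ b \in \mathrm{cone}(s_1, \dots, s_d).
```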
For every edge e in E, let 1_e be a vector of size |V|, where for each vertex v in V, element v of 1_e equals 1 if e is adjacent to v, and 0 otherwise (so each vector 1_e has 2 ones and |V| − 2 zeros). Every fractional matching corresponds to a conical combination of such vectors in which each element is at most 1. A conical combination in which each element is exactly 1 corresponds to a perfect fractional matching. In other words, a collection F of edges admits a perfect fractional matching if and only if 1_V (the vector of all ones) is contained in the conical hull of the vectors 1_e for e in F.
Consider a graph with 2n vertices, and suppose there are 2n subsets of edges, each of which admits a perfect fractional matching (of size n). This means that the vector 1_V is in the conical hull of each of these subsets. By the colorful Carathéodory theorem, there exists a selection of 2n edges, one from each subset, whose conical hull contains 1_V. This corresponds to a rainbow perfect fractional matching. The expression 2n is the dimension of the vectors 1_e: each vector has 2n elements.
Now, suppose that the graph is bipartite. In a bipartite graph, there is a constraint on the vectors $1_e$: the sum of the elements corresponding to each part of the graph must be 1. Therefore, the vectors live in a $(2n-1)$-dimensional space, and the same argument as above holds when there are only $2n-1$ subsets of edges.
Rainbow matching in hypergraphs
An r-uniform hypergraph is a set of hyperedges each of which contains exactly $r$ vertices (so a 2-uniform hypergraph is just a graph without self-loops). Aharoni, Holzman and Jiang extend their theorem to such hypergraphs as follows. Let $n$ be any positive rational number. Any family of $\lceil rn\rceil$ fractional-matchings (=colors) of size at least $n$ in an $r$-uniform hypergraph has a rainbow-fractional-matching of size $n$. The $\lceil rn\rceil$ is the smallest possible when $n$ is an integer.
An r-partite hypergraph is an $r$-uniform hypergraph in which the vertices are partitioned into $r$ disjoint sets and each hyperedge contains exactly one vertex of each set (so a 2-partite hypergraph is just a bipartite graph). Let $n$ be any positive integer. Any family of $rn-r+1$ fractional-matchings (=colors) of size at least $n$ in an $r$-partite hypergraph has a rainbow-fractional-matching of size $n$. The $rn-r+1$ is the smallest possible: the extremal case is when $n = r-1$ is a prime power, and all colors are edges of the truncated projective plane of order $n$. So each color has $n^2 = rn-r+1$ edges and a fractional matching of size $n$, but any fractional matching of that size requires all $n^2$ edges.
Partial proof
For the case of perfect fractional matchings, both of the above theorems can be derived from the colorful Carathéodory theorem of the previous section. For a general $r$-uniform hypergraph (admitting a perfect fractional matching of size $n$), the vectors $1_e$ live in an $(rn)$-dimensional space. For an $r$-uniform $r$-partite hypergraph, the $r$-partiteness constraints imply that the vectors live in an $(rn-r+1)$-dimensional space.
| Mathematics | Graph theory | null |
38155804 | https://en.wikipedia.org/wiki/Arenicola | Arenicola | Arenicola, also known as sandworms, is a genus of capitellid annelid worms comprising the lugworms and black lugs.
Species
The following species are recognised in the genus Arenicola:
Arenicola brasiliensis Nonato, 1958
Arenicola cristata Stimpson, 1856
Arenicola defodiens Cadman & Nelson-Smith, 1993
Arenicola glasselli Berkeley & Berkeley, 1939
Arenicola loveni Kinberg, 1866
Arenicola marina (Linnaeus, 1758)
Life in a burrow
A lugworm lives in a U-shaped burrow in sand. The U is made of an L-shaped gallery lined with mucus, from the toe of which a vertical unlined shaft, the head shaft, runs up to the surface. At the surface the head shaft is marked by a small saucer-shaped depression. The opening of the tail shaft is marked by a highly coiled cast of sand. The lugworm lies in this burrow with its head at the base of the head shaft, swallowing sand from time to time. This makes the column of sand drop slightly, so there is a periodic sinking of the sand in the saucer-shaped depression. When it first digs its burrow the lugworm softens the sand in its head shaft by pushing its head up into it with a piston action. After that the sand is kept loose by a current of water driven through the burrow from the hind end by the waves of contraction passing along the worm's body.
Reproduction
Once it burrows into the sand a lugworm seldom leaves it. It can stay there for weeks on end, sometimes changing its position slightly in the sand, though it may leave the burrow completely and re-enter the sand to make a fresh burrow. For about two days in early October there is a mass breeding event, when all the lugworms liberate their ova and sperm into the water above, and there the ova are fertilized. The ova are enclosed in tongue-shaped masses of jelly about 8 inches long, 3 inches wide and 1 inch thick. Each mass is anchored at one end. The larvae hatching from the eggs feed on the jelly and eventually break out when they have grown to a dozen segments and are beginning to resemble their parents. They burrow into the sand, usually higher up the beach than the adults, and gradually move down the beach as they get older.
In popular culture
A singing lugworm figures in The Man Who Dreamed of Faeryland by William Butler Yeats.
Cartoonist Piers Baker created a syndicated comic strip called Ollie and Quentin, with a buddy storyline about Ollie, a seagull and Quentin, a lugworm. The strip originated in the UK in 2002, with King Features Syndicate introducing it to international syndication in early 2008. Baker considers the strip "an homage to all the poor lugworms that he used as bait while sea fishing in his youth."
| Biology and health sciences | Lophotrochozoa | Animals |
36740699 | https://en.wikipedia.org/wiki/Position%20and%20momentum%20spaces | Position and momentum spaces | In physics and geometry, there are two closely related vector spaces, usually three-dimensional but in general of any finite dimension.
Position space (also real space or coordinate space) is the set of all position vectors r in Euclidean space, and has dimensions of length; a position vector defines a point in space. (If the position vector of a point particle varies with time, it will trace out a path, the trajectory of a particle.) Momentum space is the set of all momentum vectors p a physical system can have; the momentum vector of a particle corresponds to its motion, with units of [mass][length][time]−1.
Mathematically, the duality between position and momentum is an example of Pontryagin duality. In particular, if a function is given in position space, f(r), then its Fourier transform obtains the function in momentum space, φ(p). Conversely, the inverse Fourier transform of a momentum space function is a position space function.
These quantities and ideas transcend all of classical and quantum physics: a physical system can be described using either the positions of the constituent particles or their momenta, and the two formulations equivalently provide the same information about the system in consideration. Another quantity is useful to define in the context of waves. The wave vector k (or simply "k-vector") has dimensions of reciprocal length, making it an analogue of angular frequency ω, which has dimensions of reciprocal time. The set of all wave vectors is k-space. Usually r is more intuitive and simpler than k, though the converse can also be true, such as in solid-state physics.
Quantum mechanics provides two fundamental examples of the duality between position and momentum, the Heisenberg uncertainty principle ΔxΔp ≥ ħ/2 stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħk which states the momentum and wavevector of a free particle are proportional to each other. In this context, when it is unambiguous, the terms "momentum" and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal.
Classical mechanics
Lagrangian mechanics
Most often in Lagrangian mechanics, the Lagrangian L(q, dq/dt, t) is in configuration space, where q = (q1, q2,..., qn) is an n-tuple of the generalized coordinates. The Euler–Lagrange equations of motion are
$$\frac{\partial L}{\partial \mathbf{q}} = \frac{d}{dt}\frac{\partial L}{\partial \dot{\mathbf{q}}}\,.$$
(One overdot indicates one time derivative.) Introducing the definition of canonical momentum for each generalized coordinate,
$$p_i = \frac{\partial L}{\partial \dot{q}_i}\,,$$
the Euler–Lagrange equations take the form
$$\dot{\mathbf{p}} = \frac{\partial L}{\partial \mathbf{q}}\,.$$
The Lagrangian can be expressed in momentum space also, L′(p, dp/dt, t), where p = (p1, p2, ..., pn) is an n-tuple of the generalized momenta. A Legendre transformation is performed to change the variables in the total differential of the generalized coordinate space Lagrangian,
$$dL = \dot{\mathbf{p}} \cdot d\mathbf{q} + \mathbf{p} \cdot d\dot{\mathbf{q}} + \frac{\partial L}{\partial t}\, dt\,,$$
where the definition of generalized momentum and the Euler–Lagrange equations have replaced the partial derivatives of L. The product rule for differentials allows the exchange of differentials in the generalized coordinates and velocities for differentials in generalized momenta and their time derivatives,
$$\mathbf{p} \cdot d\dot{\mathbf{q}} = d(\mathbf{p} \cdot \dot{\mathbf{q}}) - \dot{\mathbf{q}} \cdot d\mathbf{p}\,, \qquad \dot{\mathbf{p}} \cdot d\mathbf{q} = d(\dot{\mathbf{p}} \cdot \mathbf{q}) - \mathbf{q} \cdot d\dot{\mathbf{p}}\,,$$
which after substitution simplifies and rearranges to
$$d(L - \mathbf{p} \cdot \dot{\mathbf{q}} - \dot{\mathbf{p}} \cdot \mathbf{q}) = -\dot{\mathbf{q}} \cdot d\mathbf{p} - \mathbf{q} \cdot d\dot{\mathbf{p}} + \frac{\partial L}{\partial t}\, dt\,.$$
Now, the total differential of the momentum space Lagrangian L′ is
$$dL' = \frac{\partial L'}{\partial \mathbf{p}} \cdot d\mathbf{p} + \frac{\partial L'}{\partial \dot{\mathbf{p}}} \cdot d\dot{\mathbf{p}} + \frac{\partial L'}{\partial t}\, dt\,,$$
so by comparison of differentials of the Lagrangians, the momenta, and their time derivatives, the momentum space Lagrangian L′ and the generalized coordinates derived from L′ are respectively
$$L' = L - \mathbf{p} \cdot \dot{\mathbf{q}} - \dot{\mathbf{p}} \cdot \mathbf{q}\,, \qquad \dot{\mathbf{q}} = -\frac{\partial L'}{\partial \mathbf{p}}\,, \qquad \mathbf{q} = -\frac{\partial L'}{\partial \dot{\mathbf{p}}}\,.$$
Combining the last two equations gives the momentum space Euler–Lagrange equations
$$\frac{\partial L'}{\partial \mathbf{p}} = \frac{d}{dt}\frac{\partial L'}{\partial \dot{\mathbf{p}}}\,.$$
The advantage of the Legendre transformation is that the relation between the new and old functions and their variables are obtained in the process. Both the coordinate and momentum forms of the equation are equivalent and contain the same information about the dynamics of the system. This form may be more useful when momentum or angular momentum enters the Lagrangian.
Hamiltonian mechanics
In Hamiltonian mechanics, unlike Lagrangian mechanics which uses either all the coordinates or all the momenta, the Hamiltonian equations of motion place coordinates and momenta on equal footing. For a system with Hamiltonian H(q, p, t), the equations are
$$\dot{\mathbf{q}} = \frac{\partial H}{\partial \mathbf{p}}\,, \qquad \dot{\mathbf{p}} = -\frac{\partial H}{\partial \mathbf{q}}\,.$$
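As a concrete illustration, Hamilton's equations for a unit-mass harmonic oscillator, H = (p² + q²)/2, give dq/dt = p and dp/dt = −q, which can be integrated numerically. A minimal sketch using a semi-implicit (symplectic) Euler step; the step size and initial state are arbitrary choices for the example:

```python
import math

# Harmonic oscillator with unit mass and unit frequency: H = (p**2 + q**2)/2,
# so Hamilton's equations read dq/dt = p and dp/dt = -q.
q, p, dt = 1.0, 0.0, 1e-3
for _ in range(int(2 * math.pi / dt)):  # integrate over one period
    p -= q * dt   # dp/dt = -dH/dq = -q  (semi-implicit Euler step)
    q += p * dt   # dq/dt =  dH/dp =  p
print(q, p)       # close to the initial state (1.0, 0.0), as expected
```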
Quantum mechanics
In quantum mechanics, a particle is described by a quantum state. This quantum state can be represented as a superposition of basis states. In principle one is free to choose the set of basis states, as long as they span the state space. If one chooses the (generalized) eigenfunctions of the position operator as a set of basis functions, one speaks of a state as a wave function in position space. The familiar Schrödinger equation in terms of the position r is an example of quantum mechanics in the position representation.
By choosing the eigenfunctions of a different operator as a set of basis functions, one can arrive at a number of different representations of the same state. If one picks the eigenfunctions of the momentum operator as a set of basis functions, the resulting wave function is said to be the wave function in momentum space.
A feature of quantum mechanics is that phase spaces can come in different types: discrete-variable, rotor, and continuous-variable. The table below summarizes some relations involved in the three types of phase spaces.
Reciprocal relation
The momentum representation of a wave function and the de Broglie relation are closely related to the Fourier inversion theorem and the concept of frequency domain. Since a free particle has a spatial frequency $k = p/\hbar$ proportional to the momentum $p$, describing the particle as a sum of frequency components is equivalent to describing it as the Fourier transform of a "sufficiently nice" wave function in momentum space.
Position space
Suppose we have a three-dimensional wave function in position space $\psi(\mathbf{r})$; then we can write this function as a weighted sum of orthogonal basis functions $\psi_j(\mathbf{r})$:
$$\psi(\mathbf{r}) = \sum_j \phi_j\, \psi_j(\mathbf{r})$$
or, in the continuous case, as an integral
$$\psi(\mathbf{r}) = \int_{\mathbf{k}\text{-space}} \phi(\mathbf{k})\, \psi_{\mathbf{k}}(\mathbf{r})\, d^3\mathbf{k}\,.$$
It is clear that if we specify the set of functions $\psi_{\mathbf{k}}(\mathbf{r})$, say as the set of eigenfunctions of the momentum operator, the function $\phi(\mathbf{k})$ holds all the information necessary to reconstruct $\psi(\mathbf{r})$ and is therefore an alternative description for the state $\psi$.
In coordinate representation the momentum operator is given by
$$\hat{\mathbf{p}} = -i\hbar \frac{\partial}{\partial \mathbf{r}}$$
(see matrix calculus for the denominator notation) with appropriate domain. The eigenfunctions are
$$\psi_{\mathbf{k}}(\mathbf{r}) = \frac{1}{(\sqrt{2\pi})^3}\, e^{i\mathbf{k}\cdot\mathbf{r}}$$
and eigenvalues ħk. So
$$\psi(\mathbf{r}) = \frac{1}{(\sqrt{2\pi})^3} \int \phi(\mathbf{k})\, e^{i\mathbf{k}\cdot\mathbf{r}}\, d^3\mathbf{k}$$
and we see that the momentum representation is related to the position representation by a Fourier transform.
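This relationship is easy to check numerically in one dimension: a Gaussian wave packet of width σ in position space transforms to a Gaussian of width proportional to 1/σ in k-space. A minimal NumPy sketch (the grid size, box length and σ are arbitrary example choices):

```python
import numpy as np

# Normalized Gaussian wave packet of width sigma, centred at x = 0.
N, L, sigma = 2048, 40.0, 1.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (2 * sigma**2)) / (np.pi * sigma**2) ** 0.25

# Discrete approximation of the continuum transform to k-space,
# phi(k) = (1/sqrt(2*pi)) * integral psi(x) exp(-i k x) dx.
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(N, d=dx))
phi = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(psi))) * dx / np.sqrt(2 * np.pi)

# |phi(k)|^2 is a Gaussian whose standard deviation is 1/(sigma*sqrt(2)).
prob_k = np.abs(phi) ** 2
print(np.sqrt(np.sum(k**2 * prob_k) * (k[1] - k[0])))  # ~0.707 = 1/(sigma*sqrt(2))
```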
Momentum space
Conversely, a three-dimensional wave function in momentum space $\phi(\mathbf{k})$ can be expressed as a weighted sum of orthogonal basis functions $\phi_j(\mathbf{k})$,
$$\phi(\mathbf{k}) = \sum_j \psi_j\, \phi_j(\mathbf{k})\,,$$
or as an integral,
$$\phi(\mathbf{k}) = \int_{\mathbf{r}\text{-space}} \psi(\mathbf{r})\, \phi_{\mathbf{r}}(\mathbf{k})\, d^3\mathbf{r}\,.$$
In momentum representation the position operator is given by
$$\hat{\mathbf{r}} = i\hbar \frac{\partial}{\partial \mathbf{p}} = i \frac{\partial}{\partial \mathbf{k}}$$
with eigenfunctions
$$\phi_{\mathbf{r}}(\mathbf{k}) = \frac{1}{(\sqrt{2\pi})^3}\, e^{-i\mathbf{k}\cdot\mathbf{r}}$$
and eigenvalues r. So a similar decomposition of $\phi(\mathbf{k})$ can be made in terms of the eigenfunctions of this operator, which turns out to be the inverse Fourier transform,
$$\phi(\mathbf{k}) = \frac{1}{(\sqrt{2\pi})^3} \int \psi(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, d^3\mathbf{r}\,.$$
Unitary equivalence
The position and momentum operators are unitarily equivalent, with the unitary operator being given explicitly by the Fourier transform, namely a quarter-cycle rotation in phase space, generated by the oscillator Hamiltonian. Thus, they have the same spectrum. In physical language, p acting on momentum space wave functions is the same as r acting on position space wave functions (under the image of the Fourier transform).
Reciprocal space and crystals
For an electron (or other particle) in a crystal, its value of k almost always relates to its crystal momentum, not its normal momentum. Therefore, k and p are not simply proportional but play different roles. See k·p perturbation theory for an example. Crystal momentum is like a wave envelope that describes how the wave varies from one unit cell to the next, but it does not give any information about how the wave varies within each unit cell.
When k relates to crystal momentum instead of true momentum, the concept of k-space is still meaningful and extremely useful, but it differs in several ways from the non-crystal k-space discussed above. For example, in a crystal's k-space, there is an infinite set of points called the reciprocal lattice which are "equivalent" to k = 0 (this is analogous to aliasing). Likewise, the "first Brillouin zone" is a finite volume of k-space, such that every possible k is "equivalent" to exactly one point in this region.
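In one dimension the "equivalence" is easy to make concrete: for lattice constant a, wavevectors differing by a multiple of the reciprocal lattice vector 2π/a are equivalent, and each k folds to a unique representative in the first Brillouin zone [−π/a, π/a). A minimal sketch (lattice constant and sample values are arbitrary):

```python
import math

def fold_to_first_bz(k, a):
    """Map a 1D wavevector k to its equivalent point in the first
    Brillouin zone [-pi/a, pi/a); equivalent points differ by a
    reciprocal lattice vector, i.e. a multiple of 2*pi/a."""
    g = 2 * math.pi / a  # primitive reciprocal lattice vector
    return (k + g / 2) % g - g / 2

a = 1.0
for k in (0.5, 3.5, 0.5 + 2 * math.pi):
    print(k, "->", fold_to_first_bz(k, a))
# 0.5 and 0.5 + 2*pi fold to the same point, 0.5: they are "equivalent".
```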
| Physical sciences | Quantum mechanics | Physics |
43801098 | https://en.wikipedia.org/wiki/Euharamiyida | Euharamiyida | Euharamiyida, also known as Eleutherodontida, is a clade of early mammals or mammal-like cynodonts from the Middle Jurassic to Early Cretaceous of Eurasia and possibly North America. The group is sometimes considered a sister group to Multituberculata, or part of an earlier divergence within the synapsid line. It is disputed whether or not they are related to the haramiyids from the Late Triassic, such as Haramiyavia. The morphology of their teeth indicates that they were herbivorous or omnivorous. Some members of the group are known to have been arboreal, including gliding forms similar to modern flying squirrels or colugos.
Evolution
The position of euharamyidans is contested. They are either considered crown group mammals as members of Allotheria, related to multituberculates, or stem-group mammals within Mammaliaformes. The position is often dependent on the relationships of euharamiyids to the Late Triassic haramiyids such as Haramiyavia and Thomasia. In some studies, the two groups are recovered as unrelated.
Phylogeny
Taxa
The following taxonomy follows Mao et al. (2022) unless otherwise cited.
Cryoharamiya
Maiopatagium
Millosodon
Sharypovoia
Sineleutherus
Woodeatonia
?Allostaffia
?Hahnodontidae Sigogneau-Russell, 1991
?Megaconus
?Gondwanatheria
Arboroharamiyidae Zheng et al., 2013
Arboroharamiya
Vilevolodon
Xianshou
Kermackodontidae Butler and Hooker, 2005 (="Eleutherodontidae" Kermack et al., 1998) (considered by other studies to be multituberculates)
Kermackodon
Butlerodon
Shenshouidae Mao and Meng, 2019
Qishou
Shenshou
| Biology and health sciences | Stem-mammals | Animals |
50054690 | https://en.wikipedia.org/wiki/Vacuum%20drying | Vacuum drying | Vacuum drying is the mass transfer operation in which the moisture present in a substance, usually a wet solid, is removed by means of creating a vacuum.
In chemical processing industries such as food processing, pharmacology, agriculture, and textiles, drying is an essential unit operation for removing moisture. Vacuum drying is generally used for substances that are hygroscopic and heat-sensitive, and is based on the principle of creating a vacuum to decrease the chamber pressure below the vapor pressure of water, causing the water to boil. With the help of vacuum pumps, the pressure around the substance to be dried is reduced. This lowers the boiling point of the water inside the product and thereby significantly increases the rate of evaporation, and hence the drying rate. The vacuum drying process is a batch operation performed at reduced pressure and lower relative humidity than ambient, enabling faster drying.
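The dependence of the boiling point on chamber pressure can be estimated from the Antoine equation for water's vapor pressure. A minimal sketch; the constants below are the values commonly tabulated for roughly 1–100 °C, and the 50 mbar chamber pressure is an arbitrary example:

```python
import math

# Antoine equation for water, log10(P/mmHg) = A - B / (C + T/degC),
# with constants commonly tabulated for about 1-100 degC.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_mbar):
    """Temperature (degC) at which water's vapor pressure equals the
    given chamber pressure, by inverting the Antoine equation."""
    p_mmhg = pressure_mbar * 0.750062  # 1 mbar = 0.750062 mmHg
    return B / (A - math.log10(p_mmhg)) - C

print(boiling_point_c(1013.25))  # ~100 degC at atmospheric pressure
print(boiling_point_c(50))       # ~33 degC in a 50 mbar vacuum chamber
```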
Vacuum dryer
A vacuum dryer is the equipment in which vacuum drying is carried out. Vacuum dryers are sometimes made of cast iron, but most are made of stainless steel, so that they can withstand the pressure difference imposed by the vacuum without deformation. The oven is divided into hollow trays, which increases the surface area for heat conduction. The oven door is locked airtight and is connected to a vacuum pump to reduce the pressure.
The materials to be dried are kept on the trays inside the vacuum dryer, and the pressure is reduced by means of a vacuum pump. The dryer door is tightly shut, and steam is passed through the space between the trays and the jacket so that heat transfer occurs by conduction. Water vapor from the feed is sent to a condenser; after drying, the vacuum pump is disconnected and the dried product is collected from the trays.
Microwave vacuum drying
Because conventional drying approaches (e.g., convective drying) may cause large nutritional and textural changes (such as a darker color due to the Maillard reaction), microwave vacuum drying is an alternative for drying pharmaceuticals and food, a method known since 1989. The microwaves speed up the drying process and lower the temperature in the vacuum system, reducing the overall drying cycle time and the temperature-induced effects on the food product. Microwave vacuum drying may be used for the production of dried pharmaceuticals and food. Industrial equipment, however, may require samples to be pre-treated before processing in the vacuum system; pre-drying by conventional methods is used to reduce the bulk water content.
Applications
A vacuum dryer can be used to dry heat-sensitive, hygroscopic, and toxic materials. If the feed for drying is a solution, it can be dried in a vacuum dryer, as the solvent can be recovered by condensation. To improve the quality of products, such as in fruit preservation, hybrid drying combining osmotic dehydration followed by heat pump drying and microwave-vacuum drying has proved effective.
| Physical sciences | Phase separations | Chemistry |
32582674 | https://en.wikipedia.org/wiki/Heterocongrinae | Heterocongrinae | The garden eels are the subfamily Heterocongrinae in the conger eel family Congridae. The majority of the 36 known species of garden eels live in the Indo-Pacific, but can be found in warm ocean water worldwide. These small eels live in burrows on the sea floor and get their name from the behavior of poking their heads from their burrows while most of their bodies remain hidden. Since they tend to live in groups, the many eel heads "growing" from the sea floor resemble the plants in a garden. They vary in color and size depending on the species. The largest species reaches about 120 cm (47 in) in length, but most species do not surpass 60 cm (24 in). Garden eel colonies can grow as large as one acre in surface area and number up to several thousand individuals.
Genera
Heterocongrinae contains the following two genera:
Gorgasia Meek & Hildebrand, 1923
Heteroconger Bleeker, 1868
| Biology and health sciences | Anguilliformes | Animals |
42392462 | https://en.wikipedia.org/wiki/Chasles%27%20theorem%20%28kinematics%29 | Chasles' theorem (kinematics) | In kinematics, Chasles' theorem, or Mozzi–Chasles' theorem, says that the most general rigid body displacement can be produced by a screw displacement. A direct Euclidean isometry in three dimensions involves a translation and a rotation. The screw displacement representation of the isometry decomposes the translation into two components, one parallel to the axis of the rotation associated with the isometry and the other component perpendicular to that axis. The Chasles theorem states that the axis of rotation can be selected to provide the second component of the original translation as a result of the rotation. This theorem in three dimensions extends a similar representation of planar isometries as rotation. Once the screw axis is selected, the screw displacement rotates about it and a translation parallel to the axis is included in the screw displacement.
Planar isometries with complex numbers
Euclidean geometry is expressed in the complex plane by points $z = x + iy$, where $i^2 = -1$. Rotations result from multiplications by $e^{i\theta} = \cos\theta + i\sin\theta$.
Note that a rotation about a complex point $p$ is obtained by complex arithmetic with
$$z \mapsto p + e^{i\theta}(z - p) = e^{i\theta} z + p\,(1 - e^{i\theta})\,,$$
where the last expression shows the mapping equivalent to a rotation at 0 and a translation.
Therefore, given a direct isometry $z \mapsto \omega z + t$ with $\omega = e^{i\theta}$, one can solve $p = \omega p + t$ to obtain $p = t/(1 - \omega)$ as the center for an equivalent rotation, provided that $\omega \neq 1$, that is, provided the direct isometry is not a pure translation. As stated by Cederberg, "A direct isometry is either a rotation or a translation."
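The fixed-point formula is easy to verify numerically with complex arithmetic. A minimal sketch (the angle and translation are arbitrary example values):

```python
import cmath

# A direct planar isometry z -> w*z + t with w = e^(i*theta) != 1.
theta, t = 1.2, 3 + 4j
w = cmath.exp(1j * theta)

# Center of the equivalent rotation: the fixed point of p = w*p + t.
p = t / (1 - w)

# Check: the isometry agrees with rotation by theta about p.
z = 5 - 2j
print(w * z + t)        # apply the isometry directly
print(p + w * (z - p))  # rotate z about the center p
# The two printed values coincide (up to floating-point error).
```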
History
The proof that a spatial displacement can be decomposed into a rotation and a slide about and along a line is attributed to the astronomer and mathematician Giulio Mozzi (1763); in fact, the screw axis is traditionally called asse di Mozzi in Italy. However, most textbooks refer to a subsequent similar work by Michel Chasles dating from 1830. Several other contemporaries of Chasles obtained the same or similar results around that time, including G. Giorgini, Cauchy, Poinsot, Poisson and Rodrigues. An account of the 1763 proof by Giulio Mozzi and some of its history can be found in the literature.
Proof
Mozzi considers a rigid body undergoing first a rotation about an axis passing through the center of mass and then a translation of displacement D in an arbitrary direction. Any rigid motion can be accomplished in this way due to a theorem by Euler on the existence of an axis of rotation.
The displacement D of the center of mass can be decomposed into components parallel and perpendicular to the axis. The perpendicular (and parallel) component acts on all points of the rigid body but Mozzi shows that for some points the previous rotation acted exactly with an opposite displacement, so those points are translated parallel to the axis of rotation. These points lie on the Mozzi axis through which the rigid motion can be accomplished through a screw motion.
Another elementary proof of Mozzi–Chasles' theorem was given by E. T. Whittaker in 1904. Suppose A is to be transformed into B. Whittaker suggests that line AK be selected parallel to the axis of the given rotation, with K the foot of a perpendicular from B. The appropriate screw displacement is about an axis parallel to AK such that K is moved to B. In Whittaker's terms, "A rotation about any axis is equivalent to a rotation through the same angle about any axis parallel to it, together with a simple translation in a direction perpendicular to the axis."
Calculation
The calculation of the commuting translation and rotation from a screw motion can be performed using 3DPGA ($\mathbb{R}_{3,0,1}$), the geometric algebra of 3D Euclidean space. It has three Euclidean basis vectors $\mathbf{e}_i$ satisfying $\mathbf{e}_i^2 = 1$, representing orthogonal planes through the origin, and one Grassmannian basis vector $\mathbf{e}_0$ satisfying $\mathbf{e}_0^2 = 0$, representing the plane at infinity. Any plane a distance $\delta$ from the origin can then be formed as a linear combination $a = \mathbf{n} + \delta \mathbf{e}_0$ with unit normal $\mathbf{n}$, which is normalized such that $a^2 = 1$. Because reflections can be represented by the plane in which the reflection occurs, the product of two planes $a$ and $b$ is the bireflection $ab$. The result is a rotation around their intersection line $a \wedge b$, which could also lie on the plane at infinity when the two reflections are parallel, in which case the bireflection $ab$ is a translation.
A screw motion is the product of four non-collinear reflections, and thus $M = abcd$. But according to the Mozzi–Chasles theorem a screw motion can be decomposed into a commuting translation $T = e^{\alpha t / 2}$, where $t$ is the axis of translation satisfying $t^2 = 0$, and a rotation $R = e^{\beta r / 2}$, where $r$ is the axis of rotation satisfying $r^2 = -1$, so that $M = TR = RT$. The two bivector lines $t$ and $r$ are orthogonal and commuting. To find $T$ and $R$ from $M$, one simply writes out $M = TR$ and considers the result grade by grade: the quadrivector part of $M$ arises only from the cross term between the translation and rotation parts, so it determines $T$ directly, and thus $R = T^{-1} M$. For a given screw motion the commuting translation and rotation can be found in this way, after which the lines $t$ and $r$ are found to be proportional to the bivector parts of $T$ and $R$ respectively.
Other dimensions and fields
Chasles' theorem is a special case of the invariant decomposition.
| Physical sciences | Basics_4 | Physics |