| text | source |
|---|---|
Astronomy is a natural science that studies celestial objects and the phenomena that occur in the cosmos. It uses mathematics , physics , and chemistry in order to explain their origin and their overall evolution . Objects of interest include planets , moons , stars , nebulae , galaxies , meteoroids , asteroids , and comets . Relevant phenomena include supernova explosions, gamma ray bursts , quasars , blazars , pulsars , and cosmic microwave background radiation . More generally, astronomy studies everything that originates beyond Earth's atmosphere . Cosmology is a branch of astronomy that studies the universe as a whole.
Astronomy is one of the oldest natural sciences. The early civilizations in recorded history made methodical observations of the night sky . These include the Egyptians , Babylonians , Greeks , Indians , Chinese , Maya , and many ancient indigenous peoples of the Americas . In the past, astronomy included disciplines as diverse as astrometry , celestial navigation , observational astronomy , and the making of calendars .
Professional astronomy is split into observational and theoretical branches. Observational astronomy is focused on acquiring data from observations of astronomical objects. This data is then analyzed using basic principles of physics. Theoretical astronomy is oriented toward the development of computer or analytical models to describe astronomical objects and phenomena. These two fields complement each other. Theoretical astronomy seeks to explain observational results and observations are used to confirm theoretical results.
Astronomy is one of the few sciences in which amateurs play an active role . This is especially true for the discovery and observation of transient events . Amateur astronomers have helped with many important discoveries, such as finding new comets. ( Full article... )
The Hubble Space Telescope ( HST or Hubble ) is a space telescope that was launched into low Earth orbit in 1990 and remains in operation. It was not the first space telescope , but it is one of the largest and most versatile, renowned as a vital research tool and as a public relations boon for astronomy . The Hubble Space Telescope is named after astronomer Edwin Hubble and is one of NASA 's Great Observatories . The Space Telescope Science Institute (STScI) selects Hubble's targets and processes the resulting data, while the Goddard Space Flight Center (GSFC) controls the spacecraft.
Hubble features a 2.4 m (7 ft 10 in) mirror, and its five main instruments observe in the ultraviolet , visible , and near-infrared regions of the electromagnetic spectrum . Hubble's orbit outside the distortion of Earth's atmosphere allows it to capture extremely high-resolution images with substantially lower background light than ground-based telescopes. It has recorded some of the most detailed visible light images, allowing a deep view into space. Many Hubble observations have led to breakthroughs in astrophysics , such as determining the rate of expansion of the universe . ( Full article... )
Abell 2199 is a galaxy cluster in the Abell catalogue whose brightest cluster galaxy is NGC 6166 , a cD galaxy . Located in the constellation Hercules , Abell 2199 is the defining example of a Bautz–Morgan type I cluster because of NGC 6166.
| https://en.wikipedia.org/wiki/Portal:Astronomy |
Ecology (from Ancient Greek οἶκος ( oîkos ) ' house ' and -λογία ( -logía ) ' study of ' ) is the natural science of the relationships among living organisms and their environment . Ecology considers organisms at the individual, population , community , ecosystem , and biosphere levels. Ecology overlaps with the closely related sciences of biogeography , evolutionary biology , genetics , ethology , and natural history .
Ecology is a branch of biology , and is the study of abundance , biomass , and distribution of organisms in the context of the environment. It encompasses life processes, interactions, and adaptations ; movement of materials and energy through living communities; successional development of ecosystems; cooperation, competition, and predation within and between species ; and patterns of biodiversity and its effect on ecosystem processes.
Ecology has practical applications in fields such as conservation biology , wetland management, natural resource management , and human ecology .
The word ecology ( German : Ökologie ) was coined in 1866 by the German scientist Ernst Haeckel . The science of ecology as we know it today began with a group of American botanists in the 1890s. Evolutionary concepts relating to adaptation and natural selection are cornerstones of modern ecological theory .
Ecosystems are dynamically interacting systems of organisms, the communities they make up, and the non-living ( abiotic ) components of their environment. Ecosystem processes, such as primary production , nutrient cycling , and niche construction , regulate the flux of energy and matter through an environment. Ecosystems have biophysical feedback mechanisms that moderate processes acting on living ( biotic ) and abiotic components of the planet. Ecosystems sustain life-supporting functions and provide ecosystem services like biomass production (food, fuel, fiber, and medicine), the regulation of climate , global biogeochemical cycles , water filtration , soil formation , erosion control, flood protection, and many other natural features of scientific, historical, economic, or intrinsic value. ( Full article... )
Blue carbon is a concept within climate change mitigation that refers to "biologically driven carbon fluxes and storage in marine systems that are amenable to management". Most commonly, it refers to the role that tidal marshes , mangroves and seagrass meadows can play in carbon sequestration . These ecosystems can play an important role for climate change mitigation and ecosystem-based adaptation . However, when blue carbon ecosystems are degraded or lost, they release carbon back to the atmosphere, thereby adding to greenhouse gas emissions .
The methods for blue carbon management fall into the category of "ocean-based biological carbon dioxide removal (CDR) methods". They are a type of biological carbon fixation . ( Full article... )
Aggressive mimicry is a form of mimicry in which predators , parasites , or parasitoids share similar signals , using a harmless model, allowing them to avoid being correctly identified by their prey or host . Zoologists have repeatedly compared this strategy to a wolf in sheep's clothing . In its broadest sense, aggressive mimicry could include various types of exploitation, as when an orchid exploits a male insect by mimicking a sexually receptive female (see pseudocopulation ), but will here be restricted to forms of exploitation involving feeding. For example, indigenous Australians who dress up as and imitate kangaroos when hunting would not be considered aggressive mimics, nor would a human angler , though they are undoubtedly practising self-decoration camouflage . Treated separately is molecular mimicry , which shares some similarity; for instance a virus may mimic the molecular properties of its host, allowing it access to its cells. An alternative term, Peckhamian mimicry , has been suggested (after George and Elizabeth Peckham ), but it is seldom used.
Aggressive mimicry is opposite in principle to defensive mimicry , where the mimic generally benefits from being treated as harmful. The mimic may resemble its own prey, or some other organism which is beneficial or at least not harmful to the prey. The model, i.e. the organism being 'imitated', may experience increased or reduced fitness , or may not be affected at all by the relationship. On the other hand, the signal receiver inevitably suffers from being tricked, as is the case in most mimicry complexes. ( Full article... )
William Skinner Cooper (25 August 1884 – 8 October 1978) was an American ecologist. Cooper received his B.S. in 1906 from Alma College in Michigan. In 1909, he entered graduate school at the University of Chicago , where he studied with Henry Chandler Cowles , and completed his Ph.D. in 1911. His first major publication, "The Climax Forest of Isle Royale, Lake Superior, and Its Development" appeared in 1913. ( Full article... )
Oikos is an international scientific journal published monthly by the Nordic Society Oikos in the field of ecology . It was previously known as Acta Oecologica Scandinavica . Oikos is published in collaboration with Ecography , Lindbergia , the Journal of Avian Biology , and with the monograph series Ecological Bulletins . ( Full article... )
| https://en.wikipedia.org/wiki/Portal:Ecology |
The natural environment or natural world encompasses all biotic and abiotic things occurring naturally , meaning in this case not artificial . The term is most often applied to Earth or some parts of Earth. This environment encompasses the interaction of all living species , climate , weather and natural resources that affect human survival and economic activity.
The concept of the natural environment can be distinguished as components:
In contrast to the natural environment is the built environment . In built environments, such as urban settings and agricultural land conversion , humans have fundamentally transformed landscapes, and the natural environment is greatly changed into a simplified human environment. Even acts which seem less extreme, such as building a mud hut or a photovoltaic system in the desert , turn the modified environment into an artificial one. Though many animals build things to provide a better environment for themselves, they are not human, hence beaver dams and the works of mound-building termites are thought of as natural.
Absolutely natural environments are difficult to find on Earth; naturalness usually varies on a continuum, from 100% natural at one extreme to 0% natural at the other. The massive environmental changes caused by humanity in the Anthropocene have fundamentally affected all natural environments, including through climate change , biodiversity loss and pollution from plastic and other chemicals in the air and water . More precisely, we can consider the different aspects or components of an environment, and see that their degree of naturalness is not uniform. If, for instance, in an agricultural field the mineralogic composition of the soil is similar to that of an undisturbed forest soil, its structure may nonetheless be quite different. ( Full article... )
As farmers worldwide respond to higher crop prices in order to maintain the global food supply-and-demand balance, pristine lands are cleared to replace the food crops that were diverted elsewhere to biofuels' production. Because natural lands, such as rainforests and grasslands , store carbon in their soil and biomass as plants grow each year, clearance of wilderness for new farms translates to a net increase in greenhouse gas emissions . Due to this off-site change in the carbon stock of the soil and the biomass, indirect land use change has consequences in the greenhouse gas (GHG) balance of a biofuel. ( Full article... )
Overpopulation is one of the reasons given for environmental impact, as expressed by Paul R. Ehrlich 's formula I = P x A x T, where I is the impact, P is population, A is affluence and T is technology. (See also: List of countries by population density . )
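As a worked illustration of the formula, the sketch below evaluates I = P x A x T for purely hypothetical numbers; the units (people, GDP per person, impact per unit of GDP) are illustrative assumptions, not figures from any source.

```python
# Hypothetical worked example of Ehrlich's IPAT identity: I = P * A * T.
# All numbers below are made up for illustration only.

def environmental_impact(population: float, affluence: float, technology: float) -> float:
    """Impact = Population * Affluence (e.g. GDP per person) * Technology (impact per unit GDP)."""
    return population * affluence * technology

P = 10_000_000   # people
A = 20_000.0     # GDP per person, in some currency unit (assumed)
T = 0.0004       # tonnes of CO2-equivalent per unit of GDP (assumed)

print(environmental_impact(P, A, T))  # 80,000,000 tonnes CO2e under these assumptions
```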
George Joshua Richard Monbiot ( /ˈmɒnbioʊ/ MON-bee-oh ; born 27 January 1963) is an English journalist, author, and environmental and political activist. He writes a regular column for The Guardian and has written several books.
Monbiot grew up in Oxfordshire and studied zoology at the University of Oxford . He then began a career in investigative journalism , publishing his first book Poisoned Arrows in 1989 about human rights issues in West Papua . In later years, he has been involved in activism and advocacy related to various issues, such as climate change , British politics and loneliness . In Feral (2013), he discussed and endorsed expansion of rewilding . He is the founder of The Land is Ours , a campaign for the right of access to the countryside and its resources in the United Kingdom. Monbiot was awarded the Global 500 in 1995 and the Orwell Prize in 2022. ( Full article... )
The Environmental Protection Agency ( EPA ) is an independent agency of the United States government tasked with environmental protection matters. President Richard Nixon proposed the establishment of EPA on July 9, 1970; it began operation on December 2, 1970, after Nixon signed an executive order . The order establishing the EPA was ratified by committee hearings in the House and Senate.
The agency is led by its administrator , who is appointed by the president and approved by the Senate . The current administrator is Lee Zeldin . The EPA is not a Cabinet department, but the administrator is normally given cabinet rank . The EPA has its headquarters in Washington, D.C. There are regional offices for each of the agency's ten regions, as well as 27 laboratories around the country. ( Full article... )
Agriculture • Climate change • Disaster management • Ecology • Energy • Energy development • Environment • Forestry • International development • Protected areas • Superfunds • Systems • Urban studies and planning • Water • Sanitation
| https://en.wikipedia.org/wiki/Portal:Environment |
Evolutionary biology is the subfield of biology that studies the evolutionary processes such as natural selection , common descent , and speciation that produced the diversity of life on Earth. In the 1930s, the discipline of evolutionary biology emerged through what Julian Huxley called the modern synthesis of understanding, from previously unrelated fields of biological research, such as genetics and ecology , systematics , and paleontology .
The investigational range of current research has widened to encompass the genetic architecture of adaptation , molecular evolution , and the different forces that contribute to evolution, such as sexual selection , genetic drift , and biogeography . The newer field of evolutionary developmental biology ("evo-devo") investigates how embryogenesis is controlled, thus yielding a wider synthesis that integrates developmental biology with the fields of study covered by the earlier evolutionary synthesis. ( Full article... )
Alfred Russel Wallace (8 January 1823 – 7 November 1913) was an English naturalist , explorer, geographer , anthropologist , biologist and illustrator. He independently conceived the theory of evolution through natural selection ; his 1858 paper on the subject was published that year alongside extracts from Charles Darwin 's earlier writings on the topic. It spurred Darwin to set aside the "big species book" he was drafting and to quickly write an abstract of it, which was published in 1859 as On the Origin of Species .
Wallace did extensive fieldwork, starting in the Amazon River basin . He then did fieldwork in the Malay Archipelago , where he identified the faunal divide now termed the Wallace Line , which separates the Indonesian archipelago into two distinct parts: a western portion in which the animals are largely of Asian origin, and an eastern portion where the fauna reflect Australasia . He was considered the 19th century's leading expert on the geographical distribution of animal species, and is sometimes called the "father of biogeography ", or more specifically of zoogeography . ( Full article... )
The hominoids are descendants of a common ancestor .
| https://en.wikipedia.org/wiki/Portal:Evolutionary_biology |
A fungus is any member of a large group of eukaryotic organisms that includes microorganisms such as yeasts and molds, as well as the more familiar mushrooms . The Fungi are classified as a kingdom that is separate from plants and animals . The discipline of biology devoted to the study of fungi is known as mycology or fungal biology, which is historically regarded as a branch of botany , even though genetic studies have shown that fungi are more closely related to animals than to plants. Fungi reproduce via spores and grow as hyphae , mycelia , and further specialized structures. Fungal spores are often produced on specialized structures or in fruiting bodies , such as the head of a mushroom. Abundant worldwide, most fungi are mostly invisible to the naked eye because of the small size of their structures, and their cryptic lifestyles in soil, on dead matter, and as symbionts of plants, animals, or other fungi. Fungi perform an essential role in the decomposition of organic matter and have fundamental roles in nutrient cycling and exchange. They have long been used as a direct source of food, such as mushrooms and truffles , as a leavening agent for bread, and in fermentation of various food products, such as wine , beer , and soy sauce .
Since the 1940s, fungi have been used for the production of antibiotics , and, more recently, various enzymes produced by fungi are used industrially and in detergents . Fungi are also used as biological agents to control weeds and pests. Many species produce bioactive compounds called mycotoxins , such as alkaloids and polyketides , that are toxic to animals including humans. The fruiting structures of a few species are consumed recreationally or in traditional ceremonies as a source of psychotropic compounds. Fungi can break down manufactured materials and buildings, and become significant pathogens of humans and other animals. Losses of crops due to fungal diseases or food spoilage can have a large impact on human food supplies and local economies. Despite their importance to human affairs, little is known of the true biodiversity of Kingdom Fungi, which has been estimated at around 1.5 million species, with about 5% of these having been formally classified.
Amanita ocreata resembles several edible species commonly consumed by humans, increasing the risk of accidental poisoning. Mature fruiting bodies can be confused with the edible A. velosa , A. lanei or Volvariella speciosa , while immature specimens may be difficult to distinguish from edible Agaricus mushrooms or puffballs. Similar in toxicity to the death cap ( A. phalloides ) and destroying angels of Europe ( A. virosa ) and eastern North America ( A. bisporigera ), it is a potentially deadly fungus responsible for a number of poisonings in California. Its principal toxic constituent, α-amanitin , damages the liver and kidneys, often fatally, and has no known antidote. The initial symptoms are gastrointestinal and include colicky abdominal pain, diarrhea and vomiting . These subside temporarily after 2–3 days, though ongoing damage to internal organs during this time is common; symptoms of jaundice , diarrhea, delirium , seizures , and coma may follow with death from liver failure 6–16 days post ingestion.
Molecular analysis has shown the species to be related to other typical Mediterranean Suillus species such as S. bellinii , S. luteus , and S. mediterraneensis . S. collinitus is a mycorrhizal species, and forms associations with several species of pine , most notably the Aleppo pine . This tree species is commonly used in reforestation schemes and soil conservation against erosion in the Mediterranean region , and S. collinitus is often used as a beneficial inoculant to help the young trees better survive in typically harsh soil conditions.
| https://en.wikipedia.org/wiki/Portal:Fungi |
Medicine has been practiced since prehistoric times , and for most of this time it was an art (an area of creativity and skill), frequently having connections to the religious and philosophical beliefs of local culture. For example, a medicine man would apply herbs and say prayers for healing, or an ancient philosopher and physician would apply bloodletting according to the theories of humorism . In recent centuries, since the advent of modern science , most medicine has become a combination of art and science (both basic and applied , under the umbrella of medical science ). For example, while stitching technique for sutures is an art learned through practice, knowledge of what happens at the cellular and molecular level in the tissues being stitched arises through science.
Prescientific forms of medicine, now known as traditional medicine or folk medicine , remain commonly used in the absence of scientific medicine and are thus called alternative medicine . Alternative treatments outside of scientific medicine with ethical, safety and efficacy concerns are termed quackery . ( Full article... )
| https://en.wikipedia.org/wiki/Portal:Medicine |
Outer space , or simply space , is the expanse that exists beyond Earth's atmosphere and between celestial bodies . It contains ultra-low levels of particle densities , constituting a near-perfect vacuum of predominantly hydrogen and helium plasma , permeated by electromagnetic radiation , cosmic rays , neutrinos , magnetic fields and dust . The baseline temperature of outer space, as set by the background radiation from the Big Bang , is 2.7 kelvins (−270 °C; −455 °F).
The plasma between galaxies is thought to account for about half of the baryonic (ordinary) matter in the universe, having a number density of less than one hydrogen atom per cubic metre and a kinetic temperature of millions of kelvins . Local concentrations of matter have condensed into stars and galaxies . Intergalactic space takes up most of the volume of the universe , but even galaxies and star systems consist almost entirely of empty space. Most of the remaining mass-energy in the observable universe is made up of an unknown form, dubbed dark matter and dark energy .
Outer space does not begin at a definite altitude above Earth's surface. The Kármán line , an altitude of 100 km (62 mi) above sea level , is conventionally used as the start of outer space in space treaties and for aerospace records keeping. Certain portions of the upper stratosphere and the mesosphere are sometimes referred to as " near space ". The framework for international space law was established by the Outer Space Treaty , which entered into force on 10 October 1967. This treaty precludes any claims of national sovereignty and permits all states to freely explore outer space . Despite the drafting of UN resolutions for the peaceful uses of outer space, anti-satellite weapons have been tested in Earth orbit .
The concept that the space between the Earth and the Moon must be a vacuum was first proposed in the 17th century after scientists discovered that air pressure decreased with altitude. The immense scale of outer space was grasped in the 20th century when the distance to the Andromeda Galaxy was first measured. Humans began the physical exploration of space later in the same century with the advent of high-altitude balloon flights . This was followed by crewed rocket flights and, then, crewed Earth orbit, first achieved by Yuri Gagarin of the Soviet Union in 1961. The economic cost of putting objects, including humans, into space is very high, limiting human spaceflight to low Earth orbit and the Moon . On the other hand, uncrewed spacecraft have reached all of the known planets in the Solar System . Outer space represents a challenging environment for human exploration because of the hazards of vacuum and radiation . Microgravity has a negative effect on human physiology that causes both muscle atrophy and bone loss . ( Full article... )
Saturn is the sixth planet from the Sun and the second largest planet in the Solar System , after Jupiter , with an average radius about nine times that of Earth . Saturn is named after the Roman god Saturn , equated to the Greek Cronus (the Titan father of Zeus ), the Babylonian Ninurta and the Hindu Shani . Saturn's astronomical symbol (♄) represents the Roman god's sickle . Along with Jupiter, Uranus and Neptune , Saturn is a gas giant . Together, these four planets are sometimes referred to as the Jovian planets, meaning "Jupiter-like". Saturn has a ring system that is divided into nine continuous main rings and three discontinuous arcs, consisting mostly of ice particles with a smaller amount of rocky debris and dust . Sixty-two known moons orbit the planet; fifty-three are officially named. This does not include the hundreds of " moonlets " within the rings. Titan , Saturn's largest and the Solar System's second largest moon (after Jupiter's Ganymede ), is larger than the planet Mercury and is the only moon in the Solar System to retain a significant atmosphere.
| https://en.wikipedia.org/wiki/Portal:Outer_space |
Plants are the eukaryotes that form the kingdom Plantae ; they are predominantly photosynthetic . This means that they obtain their energy from sunlight , using chloroplasts derived from endosymbiosis with cyanobacteria to produce sugars from carbon dioxide and water, using the green pigment chlorophyll . Exceptions are parasitic plants that have lost the genes for chlorophyll and photosynthesis, and obtain their energy from other plants or fungi. Most plants are multicellular , except for some green algae.
There are about 380,000 known species of plants, of which the majority, some 260,000, produce seeds . They range in size from single cells to the tallest trees . Green plants provide a substantial proportion of the world's molecular oxygen; the sugars they create supply the energy for most of Earth's ecosystems , and other organisms , including animals, either eat plants directly or rely on organisms which do so. ( Full article... )
Biosphere • Botany • Evolutionary history of plants • Flower • Forest • Fruit • Garden • Gardening • Greenhouse • Houseplant • List of poisonous plants • Paleobotany • Photosynthesis • Plant cell • Tree • Vegetable • Vegetation •
| https://en.wikipedia.org/wiki/Portal:Plants |
The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc . The Sun is a typical star that maintains a balanced equilibrium by the fusion of hydrogen into helium at its core , releasing this energy from its outer photosphere . Astronomers classify it as a G-type main-sequence star .
The largest objects that orbit the Sun are the eight planets . In order from the Sun, they are four terrestrial planets ( Mercury , Venus , Earth and Mars ); two gas giants ( Jupiter and Saturn ); and two ice giants ( Uranus and Neptune ). All terrestrial planets have solid surfaces; conversely, the giant planets lack a definite surface, as they are mainly composed of gases and liquids. Over 99.86% of the Solar System's mass is in the Sun and nearly 90% of the remaining mass is in Jupiter and Saturn.
There is a strong consensus among astronomers that the Solar System has at least nine dwarf planets : Ceres , Orcus , Pluto , Haumea , Quaoar , Makemake , Gonggong , Eris , and Sedna . There are a vast number of small Solar System bodies , such as asteroids , comets , centaurs , meteoroids , and interplanetary dust clouds . Some of these bodies are in the asteroid belt (between Mars's and Jupiter's orbit) and the Kuiper belt (just outside Neptune's orbit). Six planets, seven dwarf planets, and other bodies have orbiting natural satellites , which are commonly called 'moons'.
The Solar System is constantly flooded by the Sun's charged particles , the solar wind , forming the heliosphere . Around 75–90 astronomical units from the Sun, the solar wind is halted, resulting in the heliopause . This is the boundary of the Solar System to interstellar space . The outermost region of the Solar System is the theorized Oort cloud , the source for long-period comets , extending to a radius of 2,000–200,000 AU . The closest star to the Solar System, Proxima Centauri , is 4.25 light-years (269,000 AU) away. Both stars belong to the Milky Way galaxy. ( Full article... )
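A quick arithmetic restatement of the distances quoted above, assuming the standard conversion of roughly 63,241 astronomical units per light-year; nothing here comes from the article beyond the figures already given.

```python
# Re-express the distances mentioned above in both AU and light-years.
AU_PER_LIGHT_YEAR = 63_241  # approximate conversion factor (assumed standard value)

def au_to_ly(au: float) -> float:
    return au / AU_PER_LIGHT_YEAR

print(au_to_ly(90))              # heliopause at ~90 AU: ~0.0014 ly from the Sun
print(au_to_ly(200_000))         # outer edge of the Oort cloud: ~3.2 ly
print(4.25 * AU_PER_LIGHT_YEAR)  # Proxima Centauri: ~268,800 AU, consistent with the quoted 269,000 AU
```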
Solar System : Planets ( Definition · Planetary habitability · Terrestrial planets · Gas giants · Rings ) · Dwarf planets ( Plutoid ) · Colonization · Discovery timeline ˑ Exploration · Moons · Planetariums
Bold articles are featured . Italicized articles are on dwarf planets or major moons.
| https://en.wikipedia.org/wiki/Portal:Solar_System |
Transport (in British English ) or transportation (in American English ) is the intentional movement of humans, animals, and goods from one location to another. Modes of transport include air , land ( rail and road ), water , cable , pipelines , and space . The field can be divided into infrastructure , vehicles , and operations. Transport enables human trade , which is essential for the development of civilizations .
Transport infrastructure consists of both fixed installations, including roads , railways , airways , waterways , canals , and pipelines , and terminals such as airports , railway stations , bus stations , warehouses , trucking terminals, refueling depots (including fuel docks and fuel stations ), and seaports . Terminals may be used both for the interchange of passengers and cargo and for maintenance.
Means of transport are any of the different kinds of transport facilities used to carry people or cargo. They may include vehicles, riding animals , and pack animals . Vehicles may include wagons , automobiles , bicycles , buses , trains , trucks , helicopters , watercraft , spacecraft , and aircraft . ( Full article... )
High-speed rail (HSR) has developed in Europe as an increasingly popular and efficient means of transport. The first high-speed rail lines on the continent, built in the 1970s, 1980s, and 1990s, improved travel times on intra-national corridors.
Since then, several countries have built extensive high-speed networks, and there are now several cross-border high-speed rail links. Railway operators frequently run international services, and tracks are continuously being built and upgraded to international standards on the emerging European high-speed rail network. ( Full article... )
Articles : American Airlines Flight 11 · American Airlines Flight 77 · BC Rail · Baltimore Steam Packet Company · Ben Gurion International Airport · Sophie Blanchard · Biman Bangladesh Airlines · Boeing 747 · Brihanmumbai Electric Supply and Transport · Isambard Kingdom Brunel · Căile Ferate Române · Canadian Pacific Railway · Charing Cross, Euston and Hampstead Railway · Chickasaw Turnpike · Cincinnati, Lebanon and Northern Railway · City and South London Railway · Cogan House Covered Bridge · Eastern Suburbs & Illawarra railway line, Sydney · El Al · Forksville Covered Bridge · General aviation in the United Kingdom · Great Northern, Piccadilly and Brompton Railway · Hellingly Hospital Railway · Holden · Holden VE Commodore · Hours of service · Indian Railways · Interstate 15 in Arizona · Interstate 355 · Interstate 70 in Utah · John Bull (locomotive) · Kansas Turnpike · London congestion charge · LSWR N15 class · M-35 (Michigan highway) · M62 motorway · Manila Light Rail Transit System · Manila Metro Rail Transit System · Maserati MC12 · Mass Rapid Transit (Singapore) · Mini · Mini Moke · MTR · New Carissa · New York State Route 28 · New York State Route 32 · New York State Route 174 · New York State Route 175 · New York State Route 308 · O-Bahn Busway · Panama Canal · Pan American World Airways · Pioneer Zephyr · Pulaski Skyway · Rail transport in India · Ridge Route · Rogers Locomotive and Machine Works · Royal Blue (B&O train) · San Francisco – Oakland Bay Bridge · SkyTrain (Vancouver) · SR Merchant Navy Class · SR West Country and Battle of Britain Classes · SS Andrea Doria · SS Christopher Columbus · Talbot Tagora · Talyllyn Railway · Transport Legislation Review · Tunnel Railway · United Airlines Flight 93 · Warren County Canal · Winter service vehicle
Lists : Numbered highways in Maryland · Highways in Warren County, New York · Interstate Highways in Texas · Locks on the Kennet and Avon Canal · Longest suspension bridge spans · London Underground stations · Railway stations in the West Midlands · Timeline of the London Underground
Topics : New York State Route 20N
Portals : Aviation Portal · Trains Portal
The following portal pages provide extensive galleries of pictures, maps and other media.
By country • History • Topics
Animal : Camel • Cart • Chariot • Carriage • Dog • Donkey • Elephant • Pigeon • Horse • Horse-drawn boat • Mule • Llama • Ox • Pack animal • Reindeer • Sled • Stagecoach • Yak
Aviation : Aircraft ( list ) • Airline ( lists ) • Airport ( list ) • Airship • Air traffic control • Helicopter • Heliport • History • Military • Safety • Supersonic
Human : Aircraft • Bicycle • Ice skate • Pedestrian • Pulled rickshaw • Cycle rickshaw • Watercraft rowing • Roller skates • Skateboard • Skis • Walking • Wheelbarrow • Wheelchair
Public : Aerial tramway • Class • Elevator • Escalator • Fare • Intermodal • Moving walkway • Passenger • Private • Share taxi
Rail : By country • Cable car • Car • Freight train • Funicular • High-speed • History • Locomotive ( list • diesel • electric • steam ) • Light rail ( list ) • Maglev • Monorail • Multiple unit • Passenger train • People mover • Personal rapid transit • Track • Rapid transit ( lists ) • Station • Terminology • Train ( list ) • Tram
Road : All-terrain vehicle • Automobile ( lists ) • Bus • Continuous track • Engineering • Freeway • Highway • History • Junction • Moped • Motorcycle • Off-road • Parking • Auto rickshaw • Road ( list ) • Road pricing • Safety • Sidewalk • Snowmobile • Tractor • Trolleybus • Truck • Van
Shipping : Bulk • Cargo • Containerization • Conveyor belt • Intermodal • Mail • Logistics • Transshipment
Space : Interplanetary • Rocket • Spaceport • Spacecraft
Technology : Bridge • Cable • Conveyor • Engine • Engineering • Pipeline transport • Vehicle propulsion • Tunnel • Wheel
Theory : Behavior • Congestion • Economics • Finance • Forecasting • Law • Navigation • Planning • Psychology • Queueing • Spoke–hub • Traffic engineering
Water : Amphibious • Barge • Boat ( types ) • Bulk carrier • Canal • Coastal trading vessel • Container ship • Cruise ship • Ferry ( list ) • Harbor • Hovercraft • Hydrofoil • Lighthouse • Naval ship • Port ( list ) • Reefer ship • Roll-on/roll-off • River • Sailing ship • Sea mark • Ship ( lists ) • Submarine • Tanker • Tugboat • Vessel
Timelines : 2020s in transportation technology • 2025 in aviation • 2025 in rail transport • 2025 in spaceflight
Aviation ( Airlines • Airports • Aircraft • Gliding )
Bridges and Tunnels
Highways ( Australian Roads • Automobiles • Buses • Canada Roads • Cycling • Hong Kong Roads • Indian roads • Motorcycling • Trucks • UK Roads • U.S. Roads )
Ships ( Australian Maritime History • Irish Maritime • Lighthouses • Maritime Trades • Piracy • Ports • Shipwrecks )
Spaceflight ( Rocketry )
Trains ( Stations • Streetcars • Rapid transit • NZ Railways • Pakistan Railways • UK Railways • UK Trams • London Transport • New York City Public Transportation • Washington Metro )
| https://en.wikipedia.org/wiki/Portal:Transport |
Viruses are small infectious agents that can replicate only inside the living cells of an organism. Viruses infect all forms of life, including animals , plants , fungi , bacteria and archaea . They are found in almost every ecosystem on Earth and are the most abundant type of biological entity, with millions of different types, although only about 6,000 viruses have been described in detail. Some viruses cause disease in humans, and others are responsible for economically important diseases of livestock and crops.
Virus particles (known as virions) consist of genetic material , which can be either DNA or RNA , wrapped in a protein coat called the capsid ; some viruses also have an outer lipid envelope . The capsid can take simple helical or icosahedral forms, or more complex structures. The average virus is about 1/100 the size of the average bacterium, and most are too small to be seen directly with an optical microscope .
The origins of viruses are unclear: some may have evolved from plasmids , others from bacteria. Viruses are sometimes considered to be a life form, because they carry genetic material, reproduce and evolve through natural selection. However they lack key characteristics (such as cell structure) that are generally considered necessary to count as life. Because they possess some but not all such qualities, viruses have been described as "organisms at the edge of life".
Variant Creutzfeldt–Jakob disease , or vCJD, is a rare type of central nervous system disease within the transmissible spongiform encephalopathy family, caused by a prion . First identified in 1996, vCJD is now distinguished from classic CJD . The incubation period is believed to be years, possibly over 50 years. Prion protein can be detected in appendix and lymphoid tissue (pictured) up to two years before the onset of neurological symptoms, which include psychiatric problems , behavioural changes and painful sensations. Abnormal prion proteins build up as amyloid deposits in the brain, which acquires a characteristic spongiform appearance, with many round vacuoles in the cerebellum and cerebrum . The average life expectancy after symptoms start is 13 months.
About 170 cases have been recorded in the UK, and 50 cases in the rest of the world. The estimated prevalence in the UK is about 1 in 2000, higher than the reported cases. Transmission is believed to be mainly from consuming beef contaminated with the bovine spongiform encephalopathy prion, but may potentially also occur via blood products or contaminated surgical equipment. Infection is also believed to require a specific genetic susceptibility in the PRNP -encoding gene. Human PRNP protein can have either methionine or valine at position 129; nearly all of those affected had two copies of the methionine-containing form, found in 40% of Caucasians.
HeLa cells, the first immortal human cell line , were derived from a cervical cancer biopsy and carry human papillomavirus 18 DNA. The cells have been growing in culture since 1951.
26 February: In the ongoing pandemic of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), more than 110 million confirmed cases, including 2.5 million deaths, have been documented globally since the outbreak began in December 2019. WHO
18 February: Seven asymptomatic cases of avian influenza A subtype H5N8 , the first documented H5N8 cases in humans, are reported in Astrakhan Oblast , Russia, after more than 100,000 hens died on a poultry farm in December. WHO
14 February: Seven cases of Ebola virus disease are reported in Gouécké , south-east Guinea . WHO
7 February: A case of Ebola virus disease is detected in North Kivu Province of the Democratic Republic of the Congo . WHO
4 February: An outbreak of Rift Valley fever is ongoing in Kenya , with 32 human cases, including 11 deaths, since the outbreak started in November. WHO
21 November: The US Food and Drug Administration (FDA) gives emergency-use authorisation to casirivimab/imdevimab , a combination monoclonal antibody (mAb) therapy for non-hospitalised people twelve years and over with mild-to-moderate COVID-19 , after granting emergency-use authorisation to the single mAb bamlanivimab earlier in the month. FDA 1 , 2
18 November: The outbreak of Ebola virus disease in Équateur Province , Democratic Republic of the Congo , which started in June, has been declared over; a total of 130 cases were recorded, with 55 deaths. UN
Infectious diseases are symptomatic diseases of an individual host resulting from the infection and replication of pathogens , including viruses , prions , bacteria , fungi , protozoa and multicellular parasites . Infectious diseases were responsible for 17% of human deaths globally in 2013, with HIV , measles and influenza being among the most significant viral causes of death.
Infectious pathogens must enter, survive and multiply within the host, and spread to fresh hosts. Relatively few microorganisms cause disease in healthy individuals, and infectious disease results from the interplay between these rare pathogens and the host's defences. Infection does not usually result in the host's death. The pathogen is generally cleared from the body by the host's immune system , although persistent infection occurs with several viruses. Transmission can occur by physical contact, contaminated food, water or objects , body fluids, airborne inhalation or via vectors , such as the mosquito (pictured) . Diagnosis sometimes involves identifying the pathogen; techniques include culture , microscopy , immunoassays and PCR -based diagnostics.
The 1918–20 influenza pandemic , the first of the two involving H1N1 influenza virus , was unusually deadly. It infected 500 million people across the entire globe, with a death toll of 50–100 million (3–5% of the world's population), making it one of the deadliest natural disasters of human history. It has also been implicated in the outbreak of encephalitis lethargica in the 1920s. Despite the nickname "Spanish flu", the pandemic 's geographic origin is unknown.
Most influenza outbreaks disproportionately kill young, elderly or already weakened patients; in contrast, this pandemic predominantly killed healthy young adults. Contemporary medical reports suggest that malnourishment, overcrowded medical facilities and poor hygiene promoted fatal bacterial pneumonia . Some research suggests that the virus might have killed through a cytokine storm , an overreaction of the body's immune system . This would mean the strong immune reactions of young adults resulted in a more severe disease than the weaker immune systems of children and older adults.
Viruses & Subviral agents: bat virome • elephant endotheliotropic herpesvirus • HIV • introduction to viruses • Playa de Oro virus • poliovirus • prion • rotavirus • virus
Diseases: colony collapse disorder • common cold • croup • dengue fever • gastroenteritis • Guillain–Barré syndrome • hepatitis B • hepatitis C • hepatitis E • herpes simplex • HIV/AIDS • influenza • meningitis • myxomatosis • polio • pneumonia • shingles • smallpox
Epidemiology & Interventions: 2007 Bernard Matthews H5N1 outbreak • Coalition for Epidemic Preparedness Innovations • Disease X • 2009 flu pandemic • HIV/AIDS in Malawi • polio vaccine • Spanish flu • West African Ebola virus epidemic
Virus–Host interactions: antibody • host • immune system • parasitism • RNA interference
Methodology: metagenomics
Social & Media: And the Band Played On • Contagion • "Flu Season" • Frank's Cock • Race Against Time: Searching for Hope in AIDS-Ravaged Africa • social history of viruses • " Steve Burdick " • "The Time Is Now" • " What Lies Below "
People: Brownie Mary • Macfarlane Burnet • Bobbi Campbell • Aniru Conteh • people with hepatitis C • HIV-positive people • Bette Korber • Henrietta Lacks • Linda Laubenstein • Barbara McClintock • poliomyelitis survivors • Joseph Sonnabend • Eli Todd • Ryan White
Coronaviruses are a subfamily of RNA viruses in the Nidovirales order which infect mammals and birds . They are spherical enveloped viruses, generally around 80–120 nm in diameter, containing a helical nucleocapsid . Their positive-sense single-stranded RNA genome ranges from approximately 26 to 32 kb in size, one of the largest among RNA viruses. Around 74 characteristic club-shaped spikes project from the envelope, which in electron micrographs resemble the solar corona , from which their name derives. Infectious bronchitis virus was isolated in 1933 from chickens; two mouse coronaviruses causing hepatitis and encephalomyelitis were discovered in the 1940s.
Coronaviruses predominantly infect epithelial cells , with the viral spike protein determining tissue tropism and host range . Animal coronaviruses often infect the gastrointestinal tract , causing diarrhoea in cows and pigs, and are transmitted by the faecal–oral route . Human and bird coronaviruses infect the respiratory tract , are transmitted via aerosols and droplets ; they cause respiratory tract infections that can range from mild to lethal. Mild illnesses in humans include around 15% of common cold cases, while more lethal coronaviruses cause SARS , MERS and COVID-19 . Many human coronaviruses have evolved from viruses of bats .
Sir Frank Macfarlane Burnet (3 September 1899 – 31 August 1985) was an Australian virologist , microbiologist and immunologist . His early virological studies were on bacteriophages , including the pioneering observation that bacteriophages could exist as a stable non-infectious form that multiplies with the bacterial host, later termed the lysogenic cycle .
With the outbreak of World War II, Burnet's focus moved to influenza . Although his efforts to develop a live vaccine proved unsuccessful, he developed assays for the isolation, culture and detection of influenza virus , including haemagglutination assays. Modern methods for producing influenza vaccines are still based on his work improving virus-growing processes in hen's eggs. He also researched influenza virus genetics, examining the genetic control of virulence and demonstrating, several years before influenza virus was shown to have a segmented genome, that the virus recombined at high frequency.
May 1955: First issue of Virology ; first English-language journal dedicated to virology
4 May 1984: HTLV-III, later HIV , identified as the cause of AIDS by Robert Gallo and coworkers
5 May 1939: First electron micrographs of tobacco mosaic virus taken by Helmut Ruska and coworkers
5 May 1983: Structure of influenza neuraminidase solved by Jose Varghese, Graeme Laver and Peter Colman
8 May 1980: WHO announced formally the global eradication of smallpox
11 May 1978: SV40 sequenced by Walter Fiers and coworkers
12 May 1972: Gene for bacteriophage MS2 coat protein is sequenced by Walter Fiers and coworkers, the first gene to be completely sequenced
13 May 2011: Boceprevir approved for the treatment of chronic hepatitis C virus (HCV) infection, the first direct-acting antiviral for HCV
14 May 1796: Edward Jenner inoculated James Phipps (pictured) with cowpox
15/16 May 1969: Death of Robert Rayford , the earliest confirmed case of AIDS outside Africa
18 May 1998: First World AIDS Vaccine Day
20 May 1983: Isolation of the retrovirus LAV, later HIV, by Luc Montagnier , Françoise Barré-Sinoussi and coworkers
23 May 2011: Telaprevir approved for the treatment of chronic HCV infection
25 May 2011: WHO declared rinderpest eradicated
31 May 1937: First results in humans from the 17D vaccine for yellow fever published by Max Theiler and Hugh H. Smith
Aciclovir (also acyclovir and sold as Zovirax ) is a nucleoside analogue that mimics the nucleoside guanosine . It is active against most viruses in the herpesvirus family, and is mainly used to treat herpes simplex virus infections, chickenpox and shingles . After phosphorylation by viral thymidine kinase and cellular enzymes, the drug inhibits the viral DNA polymerase . Extremely selective and low in cytotoxicity , it was seen as the start of a new era in antiviral therapy . Aciclovir was discovered by Howard Schaeffer and colleagues, and developed by Schaeffer and Gertrude Elion , who was awarded the 1988 Nobel Prize in Medicine in part for its development. Nucleosides isolated from a Caribbean sponge , Cryptotethya crypta , formed the basis for its synthesis. Aciclovir differs from earlier nucleoside analogues in containing only a partial nucleoside structure: the sugar ring is replaced with an open chain. Resistance to the drug is rare in people with a normal immune system.
Medicine • Microbiology • Molecular & Cellular Biology • Veterinary Medicine
| https://en.wikipedia.org/wiki/Portal:Viruses |
Portal frame is a construction technique where vertical supports are connected to horizontal beams or trusses via fixed joints with designed-in moment -resisting capacity. [ 1 ] The result is wide spans and open floors.
Portal frame structures can be constructed using a variety of materials and methods. These include steel , reinforced concrete and laminated timber such as glulam . First developed in the 1960s, they have become the most common form of enclosure for spans of 20 to 60 meters. [ 2 ]
Because of these very strong and rigid joints, some of the bending moment in the rafters is transferred to the columns . This means that the size of the rafters can be reduced or the span can be increased for the same size rafters. This makes portal frames a very efficient construction technique to use for wide span buildings.
Portal frame construction is therefore typically seen in warehouses , barns and other places where large, open spaces are required at low cost and a pitched roof is acceptable.
Generally portal frames are used for single-story buildings but they can be used for low-rise buildings with several floors where they can be economic if the floors do not span right across the building (in these circumstances a skeleton frame, with internal columns, would be a more economic choice). A typical configuration might be where there is office space built against one wall of a warehouse.
Portal frames can be clad with various materials. For reasons of economy and speed, the most popular solution is some form of lightweight insulated metal cladding with cavity masonry work to the bottom 2 m of the wall to provide security and impact resistance. The lightweight cladding would be carried on sheeting rails spanning between the columns of the portal frames.
Portal frames can be defined as two-dimensional rigid frames that have the basic characteristics of a rigid joint between column and beam.
The main objective of this form of design is to reduce bending moment in the beam, which allows the frame to act as one structural unit.
The transfer of stresses from the beam to the column results in rotational movement at the foundation, which can be overcome by the introduction of a pin/hinge joint.
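As a rough numerical illustration of why the rigid beam-to-column joint helps, here is a minimal sketch using standard single-span beam formulas under a uniformly distributed load. It is not an actual portal-frame analysis (which would also involve column stiffness and sway), and the load and span values are hypothetical: fixing the beam ends reduces the peak bending moment and moves part of it to the supports, which in a portal frame are the moment-resisting column connections.

```python
# Illustrative comparison only: standard prismatic-beam formulas under a uniformly
# distributed load w (kN/m) over span L (m). Not a portal-frame design check.

def simply_supported_peak_moment(w: float, L: float) -> float:
    """Peak sagging moment at midspan for pinned ends: w*L^2/8."""
    return w * L**2 / 8

def fixed_ended_moments(w: float, L: float) -> tuple[float, float]:
    """(hogging moment at each fixed end, sagging moment at midspan): w*L^2/12 and w*L^2/24."""
    return w * L**2 / 12, w * L**2 / 24

w, L = 10.0, 25.0  # hypothetical roof load and span
print(simply_supported_peak_moment(w, L))  # 781.25 kN*m, all resisted within the beam span
print(fixed_ended_moments(w, L))           # (~520.8, ~260.4) kN*m: the peak moment is smaller
                                           # and part of it is carried at the (column) ends
```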
For warehouses and industrial buildings, a sloping roof made of purlins and asbestos cement (AC) sheet roofing is provided between the portals. For assembly halls, portals with a reinforced concrete (RC) slab roof cast monolithically are used.
Portal frames are designed for the following loads:
Previously, it has been shown that the limit state design/load and resistance factor design (LRFD) and permissible stress design/allowable strength design (ASD) can produce significantly different designs of steel gable frames. [ 3 ]
There are few situations where ASD produces significantly lighter weight steel gable frame designs. Additionally, it has been shown that in high snow regions, the difference between the methods is more dramatic. [ 4 ]
While designing, care should be taken for proper
If the joints are not rigid, they will "open up" and the frame will be unstable when subjected to loads. This is the pack of cards effect. | https://en.wikipedia.org/wiki/Portal_frame |
Portastudio refers to a series of multitrack recorders produced by TASCAM beginning in 1979 with the introduction of the TEAC 144, the first four-track compact cassette -based recorder. A TASCAM trademark, "portastudio" is commonly used to refer to any self-contained multitrack recorder dedicated to music production. [ 1 ] [ 2 ] [ 3 ]
The Portastudio is credited with launching the home recording revolution by making it possible for musicians to easily and affordably record and produce multitrack music at home [ 4 ] [ 5 ] [ 6 ] and is cited as one of the most significant innovations in music production technology. [ 7 ]
The first Portastudio, the TEAC 144, was introduced on September 22, 1979, at the AES Convention in New York City. [ 5 ] The 144 combined a 4-channel mixer with pan , treble , and bass on each input with a cassette recorder capable of recording four tracks in one direction at 3¾ inches per second (double the normal cassette playback speed) in a self-contained unit weighing less than 20 pounds at a list price of US$ 899. [ 8 ] The 144 was the first product that made it possible for musicians to affordably record several instrumental and vocal parts on different tracks of the built-in 4-track cassette recorder individually and later blend all the parts together, while transferring them to another standard, two-channel stereo tape deck ( remix and mixdown ) to form a stereo recording. [ 9 ] In 1981, Fostex introduced the first of their "Multitracker" line of multitrack cassette recorders with the 250. [ 10 ]
In 1982, TASCAM replaced the 144 with the 244 Portastudio, which improved upon the previous design with overall better sound quality and more features, including: parametric EQ , dbx Type II noise reduction, and the ability to record up to four tracks simultaneously. [ 11 ] [ 12 ] [ 10 ]
TASCAM continued to develop and release cassette-based portastudio models with different features until 2001, [ 13 ] including the "Ministudio" line of portastudios that offered a limited feature set and the ability to run on batteries at even more affordable price points, and the "MIDIStudio" line which added MIDI functionality. [ 14 ] [ 15 ] [ 16 ] Other manufacturers, including Fostex, Yamaha , Akai , and others introduced their own lines of multitrack cassette recorders. [ 3 ] [ 17 ] [ 18 ] Most were four-track recorders, but there were also six-track and even eight-track units. [ 14 ]
In 1997, TASCAM introduced the first digital Portastudio: the TASCAM 564 which recorded to MiniDisc . [ 19 ] Later Digital Portastudio models, some with the ability to record 24 or even 32 tracks, utilize CD-R , internal hard drives , or SD cards , and commonly include built-in DSP effects . [ 14 ] [ 20 ]
The Portastudio, and particularly its first iteration, the TEAC 144, is credited with launching the home recording revolution by making it possible for musicians to easily and affordably record and produce multitrack music themselves wherever they wanted [ 4 ] [ 5 ] [ 6 ] and is cited as one of the most significant innovations in music production technology. [ 7 ] In general, these machines were typically used by amateur and professional musicians to record demos , although some Portastudio projects, most notably Bruce Springsteen 's 1982 album Nebraska , have become notable major-label releases. Beginning in the 1990s, cassette-based Portastudios experienced new popularity for lo-fi recording.
In 2006, the TEAC Portastudio was inducted into the TECnology Hall of Fame , an honor given to "products and innovations that have had an enduring impact on the development of audio technology." [ 21 ] In 2021, in conjunction with TASCAM's 50th anniversary, a software plug-in emulation of the Porta One ministudio was released by IK Multimedia. [ 22 ] | https://en.wikipedia.org/wiki/Portastudio |
In mathematics, Porter's constant C arises in the study of the efficiency of the Euclidean algorithm . [ 1 ] [ 2 ] It is named after J. W. Porter of University College, Cardiff .
Euclid's algorithm finds the greatest common divisor of two positive integers m and n . Hans Heilbronn proved that the average number of iterations of Euclid's algorithm, for fixed n and averaged over all choices of relatively prime integers m < n ,
is (12 ln 2 / π²) ln n ≈ 0.843 ln n.
Porter showed that the error term in this estimate is a constant, plus a polynomially-small correction, and Donald Knuth evaluated this constant to high accuracy. It is: C = (6 ln 2 / π²)(3 ln 2 + 4γ − 24 ζ′(2)/π² − 2) − 1/2 ≈ 1.4670780794…
where γ is the Euler–Mascheroni constant and ζ′ is the derivative of the Riemann zeta function.
(sequence A086237 in the OEIS )
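As a rough empirical check of the asymptotic behaviour, the snippet below averages the number of Euclidean division steps over all m coprime to a fixed n and compares it with (12 ln 2 / π²) ln n plus a constant near 1.467. The constant's value and the step-counting convention (which can differ by one between authors) are assumptions of mine, not taken from the article.

```python
# Rough empirical check: average Euclid division steps for coprime m < n
# versus the asymptotic (12 ln 2 / pi^2) ln n + C with C taken as ~1.467.
from math import gcd, log, pi

def euclid_steps(a, b):
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

n = 10_000
coprime = [m for m in range(1, n) if gcd(m, n) == 1]
avg = sum(euclid_steps(n, m) for m in coprime) / len(coprime)
approx = (12 * log(2) / pi**2) * log(n) + 1.467
print(f"empirical average: {avg:.4f}, asymptotic estimate: {approx:.4f}")
```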
This number theory -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Porter's_constant |
The Portevin–Le Chatelier (PLC) effect describes a serrated stress–strain curve or jerky flow, which some materials exhibit as they undergo plastic deformation , specifically inhomogeneous deformation . [ 1 ] This effect has been long associated with dynamic strain aging or the competition between diffusing solutes pinning dislocations and dislocations breaking free of this stoppage. [ 2 ]
The onset of the PLC effect occurs when the strain rate sensitivity becomes negative and inhomogeneous deformation starts. [ 1 ] This effect can also appear on the specimen's surface and in bands of plastic deformation. This process starts at a so-called critical strain , which is the minimum strain needed for the onset of the serrations in the stress–strain curve. The critical strain is both temperature and strain rate dependent. [ 2 ] The existence of a critical strain is attributed to improved solute diffusivity due to vacancies created by deformation, and to an increased mobile dislocation density. Both of these contribute to the instability in substitutional alloys, while interstitial alloys are affected only by the increase in mobile dislocation density. [ 3 ]
While the effect is named after Albert Portevin and François Le Chatelier, [ 4 ] they were not the first to discover it. Félix Savart made the discovery when he observed non-homogeneous deformation during a tensile test of copper strips. He documented the physical serrations in his samples that are currently known as Portevin–Le Chatelier bands. A student of Savart, Antoine Masson, repeated the experiment while controlling for loading rate. Masson observed that under a constant loading rate, the samples would experience sudden large changes in elongation (as large as a few millimeters). [ 5 ]
Much of the underlying physics of the Portevin-Le Chatelier effect lies in a specific case of solute drag creep. Adding solute atoms to a pure crystal introduces a size misfit into the system. This size misfit leads to restriction of dislocation motion. At low temperature, these solute atoms are immobile within the lattice, but at high temperatures, the solute atoms become mobile and interact in a more complex manner with the dislocations. When solute atoms are mobile and the dislocation velocity is not too high, the solute atoms and dislocation can move together where the solute atom decreases the motion of the dislocation. [ 6 ]
The Portevin-Le Chatelier effect occurs in the specific case where solute drag creep is occurring and there is an applied stress, with a material dependent range, on the sample. The applied stress causes the velocity of the dislocations to increase, allowing the dislocation to move away from the solute. This process is commonly referred to as “breakaway”. Once the dislocation has moved away from the solute, the stress on it decreases which causes its velocity to decrease. This allows the solute atoms to “catch up” with the dislocation. As soon as the solute atom catches up, the stress on the dislocation significantly increases, causing the process to repeat. [ 6 ]
The cyclic changes described above produce serrations in the plastic region of the stress–strain diagram of a tensile test undergoing the Portevin-Le Chatelier effect. The variation in stress also causes non-homogeneous deformation to occur throughout the sample, which can be visible to the naked eye as a rough surface finish. [ 5 ]
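A toy phenomenological sketch can make the serration picture concrete: a smooth underlying flow stress with a small sawtooth superimposed, standing in for the repeated pinning and breakaway cycles. The numbers used (flow stress, serration period, drop amplitude) are invented for illustration; this is not a physical model from the PLC literature.

```python
# Toy sketch (assumed values): generate a serrated stress-strain curve by
# superimposing a sawtooth (stress build-up while dislocations are pinned,
# sudden drop at "breakaway") on a smooth hardening curve.
import numpy as np

strain = np.linspace(0.0, 0.10, 2000)
hardening = 300 + 800 * strain          # smooth underlying flow stress (MPa), assumed
serration_period = 0.004                # strain interval between breakaway events, assumed
amplitude = 15.0                        # stress drop per event (MPa), assumed

sawtooth = amplitude * ((strain % serration_period) / serration_period - 0.5)
stress = hardening + sawtooth

for e, s in zip(strain[::200], stress[::200]):
    print(f"strain {e:.3f}  stress {s:6.1f} MPa")
```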
Temperature affects both the speed of band propagation through the material and the critical strain. The speed of band propagation is proportional to the temperature (lower temperatures give lower speeds, higher temperatures give higher speeds). Often the critical strain will initially decrease with increasing temperature. [ 2 ] The temperature effect on the PLC regime is caused by the increased ability of the solutes to diffuse to the dislocations with increasing temperature. Although the mechanism of diffusion is not entirely understood, it is believed that solute atoms diffuse either by volume diffusion (high temperature), by diffusion in stacking fault ribbons between partial dislocations (intermediate temperature), or by pipe diffusion (low temperature). [ 3 ]
While temperature is related to the rate of diffusion, the strain rate determines the time the dislocations take to overcome these obstacles, and it has a dramatic effect on the conditions of the PLC effect. Generally, the critical strain will decrease with increasing imposed strain rate. [ 3 ] Also, the higher the strain rate, the lower the band speed. [ 2 ]
Precipitates , often found in Al alloys (especially Al–Mg alloys), complicate the PLC effect.
Often these precipitates will cause so-called inverse behavior, which changes the effect of both strain rate and temperature on the solid. [ 7 ] The presence of precipitates has also been shown to influence the appearance and disappearance of serrations in the stress–strain curve. [ 8 ]
The structure of the material also has an effect on the appearance and parameters that describe the PLC effect. For example, the magnitude of the stress drops is larger with a smaller grain size. The critical strain often increases with larger grains, which is linked to the dependence of the dislocation density on grain size. [ 8 ] Serration amplitude is greater in Al-Mg alloys with a finer grain size. There is a correlation between increasing grain size and an increase in the critical strain for the onset of serration. [ 9 ] Some findings, however, indicate that the grain size has practically no effect on the band velocity or the band width. [ 3 ]
Polishing the material affects the onset of the PLC effect and the band velocities. A rougher surface apparently provides more nucleation points of high stress, which help initiate deformation bands . These bands also propagate twice as fast in polished specimens. [ 2 ]
The number of vacancies does not directly affect the onset of the PLC effect. It was found that if a material is pre-strained to half the value required to initiate jerky flow, and is then rested at the test temperature or annealed to remove vacancies (at a temperature low enough that the dislocation structure is not affected), the total critical strain is only slightly decreased and the types of serrations that occur are only slightly changed. [ 10 ]
While properties like strain rate sensitivity and critical strain mark the beginning of the PLC effect, a classification system has been developed to describe the serrations themselves. The serration types are often dependent on strain rate, temperature, and grain size. [ 8 ] The bands are usually labeled A, B, and C, although some sources have added D and E type bands. [ 11 ] Because the A, B, and C type bands are the most commonly found in the literature, they are the only ones covered here.
Type A bands are often seen at high strain rates and low temperatures. [ 11 ] They are a random development of bands that form over the entire specimen. [ 12 ] They are usually described as continuously propagating with small stress drops. [ 3 ]
Type B bands are sometimes described as "hopping" bands, and they appear at medium to high strain rates. [ 12 ] They are often seen as each band forming ahead of the previous one in a spatially correlated way. The serrations are more irregular, with smaller amplitudes than type C. [ 3 ]
Type C bands are often seen at low applied strain rates or high temperatures. [ 11 ] They are identified with randomly nucleated static bands with large characteristic stress drops in the serrations. [ 3 ]
The different types of bands are believed to represent different states of dislocation in the bands, and the band type can change along a material's stress–strain curve. Currently there are no models that can capture the change in band types. [ 3 ]
The Portevin-Le Chatelier (PLC) effect is evidence of non-uniform deformation of CuNi25 commercial alloys at intermediate temperatures. In the CuNi25 alloy it manifests itself as irregularities, in the form of serrations, on the stress–strain curve. It indicates instability of the force during tension, heterogeneity of the microstructure, and the presence of many heterogeneous factors affecting the alloy's mechanical properties. [ 13 ]
Because the PLC effect is related to a strengthening mechanism, the strength of steel may increase; however, the plasticity and ductility of a material afflicted by the PLC effect decrease drastically. The PLC effect is known to induce blue brittleness in steel; additionally, the loss of ductility may cause rough surfaces to develop during deformation (Al-Mg alloys are especially susceptible to this), rendering them useless for autobody or casting applications. [ 2 ] | https://en.wikipedia.org/wiki/Portevin–Le_Chatelier_effect |
Porting Authorization Code ( PAC ) is a unique identifier (normally 9 characters long and in the format "ABC123456") used by some mobile network operators to facilitate mobile number portability (MNP). This allows users to retain their mobile telephone number when switching operators.
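Assuming the commonly quoted layout of three letters followed by six digits, a PAC can be screened with a simple pattern check. The exact character rules enforced by individual operators are an assumption here, not something stated in the sources above.

```python
# Illustrative sketch: validate the commonly quoted PAC layout of three letters
# followed by six digits (e.g. "ABC123456"). The allowed-character rules are an
# assumption for illustration, not an operator specification.
import re

PAC_PATTERN = re.compile(r"^[A-Z]{3}\d{6}$")

def looks_like_pac(code: str) -> bool:
    return bool(PAC_PATTERN.match(code.strip().upper()))

print(looks_like_pac("ABC123456"))  # True
print(looks_like_pac("AB1234567"))  # False
```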
Telecommunications service is regulated in the UK by Ofcom . [ 1 ] On 25 July 2003, Ofcom introduced the General Conditions of Entitlement which apply to all communications networks and service providers in the UK. Several amendments to this original document have been issued since this time.
Condition 18 requires all providers to provide number portability but only to subscribers of publicly available telephone services who request it. Number portability must be provided as soon as practicable and on reasonable terms to subscribers, and bilateral porting arrangements between providers must accord with agreed processes.
Some mobile phone companies can charge a fee to move the customer's number. This is usually no more than £25. The provider must issue a PAC within two hours of the port-out request, if such request was made over the phone for fewer than 25 numbers on a single account. Customer debt is not a valid reason for a service provider to refuse issuing of a PAC. Service providers may not treat PAC requests as requests to terminate service. Pay-as-you-go customers will lose any unused credit when switching service providers. [ 2 ]
Since 1 July 2019, customers can request a PAC by text message, rather than having to call their existing network. [ 3 ]
In India, the code is known as a 'Unique Porting Code' (UPC). The rules for number portability are prescribed by the Telecom Regulatory Authority of India (TRAI). | https://en.wikipedia.org/wiki/Porting_Authorisation_Code
The Portland Pattern Repository ( PPR ) is an online repository for computer programming software design patterns . It was accompanied by the website WikiWikiWeb , the world's first wiki . The repository has an emphasis on extreme programming , and is hosted by Cunningham & Cunningham (C2) of Portland, Oregon . [ 1 ] The PPR's motto is "People, Projects & Patterns".
On 17 September 1987, programmer Ward Cunningham of Tektronix and Apple Computer 's Kent Beck co-published the paper "Using Pattern Languages for Object-Oriented Programs". [ 2 ] This paper, about software design patterns, was inspired by Christopher Alexander 's architectural concept of "patterns". [ 2 ] It was written for the 1987 OOPSLA programming conference organized by the Association for Computing Machinery . Cunningham and Beck's idea became popular among programmers because it helped them exchange programming ideas in an easy-to-understand format.
Cunningham & Cunningham, the programming consultancy that would eventually host the PPR on its Internet domain, was incorporated in Salem, Oregon , on 1 November 1991, and is named after Ward and his wife, Karen R. Cunningham, a mathematician, school teacher, and school director. Cunningham & Cunningham registered their Internet domain, c2.com , on 23 October 1994. Ward created the Portland Pattern Repository on c2.com as a means to help object-oriented programmers publish their computer programming patterns by submitting them to him. Some of those programmers attended the OOPSLA and PLoP conferences about object-oriented programming, and posted their ideas on the PPR. The PPR is accompanied, on c2.com , by the first ever wiki , a collection of reader-modifiable Web pages, which is named WikiWikiWeb . [ 3 ] | https://en.wikipedia.org/wiki/Portland_Pattern_Repository |
The Portland Press Excellence in Science Award was an annual award instituted in 1964 to recognize notable research in any branch of biochemistry undertaken in the UK or Republic of Ireland . It was initially called the CIBA Medal and Prize , then the Novartis Medal and Prize . The prize consists of a medal and a £3000 cash award. The winner is invited to present a lecture at a Society conference and submit an article to one of the Society's publications. [ 1 ] Notable recipients include the Nobel laureates John E. Walker , Paul Nurse , Sydney Brenner , César Milstein , Peter D. Mitchell , Rodney Porter , and John Cornforth .
The Novartis Medal and Prize was last presented in 2019 and will be replaced from 2021 by the Portland Press Excellence in Science Award. Portland Press is the publishing arm of the Biochemical Society. [ 2 ]
| https://en.wikipedia.org/wiki/Portland_Press_Excellence_in_Science_Award
In mathematics , more specifically measure theory , there are various notions of the convergence of measures . For an intuitive general sense of what is meant by convergence of measures , consider a sequence of measures μ n on a space, sharing a common collection of measurable sets. Such a sequence might represent an attempt to construct 'better and better' approximations to a desired measure μ that is difficult to obtain directly. The meaning of 'better and better' is subject to all the usual caveats for taking limits ; for any error tolerance ε > 0 we require there be N sufficiently large for n ≥ N to ensure the 'difference' between μ n and μ is smaller than ε . Various notions of convergence specify precisely what the word 'difference' should mean in that description; these notions are not equivalent to one another, and vary in strength.
Three of the most common notions of convergence are described below.
This section attempts to provide a rough intuitive description of three notions of convergence, using terminology developed in calculus courses; this section is necessarily imprecise as well as inexact, and the reader should refer to the formal clarifications in subsequent sections. In particular, the descriptions here do not address the possibility that the measure of some sets could be infinite, or that the underlying space could exhibit pathological behavior, and additional technical assumptions are needed for some of the statements. The statements in this section are however all correct if μ n is a sequence of probability measures on a Polish space .
The various notions of convergence formalize the assertion that the 'average value' of each 'sufficiently nice' function should converge: ∫ f d μ n → ∫ f d μ {\displaystyle \int f\,d\mu _{n}\to \int f\,d\mu }
To formalize this requires a careful specification of the set of functions under consideration and how uniform the convergence should be.
The notion of weak convergence requires this convergence to take place for every continuous bounded function f .
This notion treats convergence for different functions f independently of one another, i.e., different functions f may require different values of N to be approximated equally well (thus, convergence is non-uniform in f ).
The notion of setwise convergence formalizes the assertion that the measure of each measurable set should converge: μ n ( A ) → μ ( A ) {\displaystyle \mu _{n}(A)\to \mu (A)}
Again, no uniformity over the set A is required.
Intuitively, considering integrals of 'nice' functions, this notion provides more uniformity than weak convergence. As a matter of fact, when considering sequences of measures with uniformly bounded variation on a Polish space , setwise convergence implies the convergence ∫ f d μ n → ∫ f d μ {\textstyle \int f\,d\mu _{n}\to \int f\,d\mu } for any bounded measurable function f [ citation needed ] .
As before, this convergence is non-uniform in f .
The notion of total variation convergence formalizes the assertion that the measure of all measurable sets should converge uniformly , i.e. for every ε > 0 there exists N such that | μ n ( A ) − μ ( A ) | < ε {\displaystyle |\mu _{n}(A)-\mu (A)|<\varepsilon } for every n > N and for every measurable set A . As before, this implies convergence of integrals against bounded measurable functions, but this time convergence is uniform over all functions bounded by any fixed constant.
This is the strongest notion of convergence shown on this page and is defined as follows. Let ( X , F ) {\displaystyle (X,{\mathcal {F}})} be a measurable space . The total variation distance between two (positive) measures μ and ν is then given by ‖ μ − ν ‖ TV = sup f { ∫ X f d μ − ∫ X f d ν } .
Here the supremum is taken over f ranging over the set of all measurable functions from X to [−1, 1] . This is in contrast, for example, to the Wasserstein metric , where the definition is of the same form, but the supremum is taken over f ranging over the set of those measurable functions from X to [−1, 1] which have Lipschitz constant at most 1; and also in contrast to the Radon metric , where the supremum is taken over f ranging over the set of continuous functions from X to [−1, 1] . In the case where X is a Polish space , the total variation metric coincides with the Radon metric.
If μ and ν are both probability measures , then the total variation distance is also given by ‖ μ − ν ‖ TV = 2 · sup A ∈ F | μ ( A ) − ν ( A ) | .
The equivalence between these two definitions can be seen as a particular case of the Monge–Kantorovich duality . From the two definitions above, it is clear that the total variation distance between probability measures is always between 0 and 2.
To illustrate the meaning of the total variation distance, consider the following thought experiment. Assume that we are given two probability measures μ and ν , as well as a random variable X . We know that X has law either μ or ν but we do not know which one of the two. Assume that these two measures have prior probabilities 0.5 each of being the true law of X . Assume now that we are given one single sample distributed according to the law of X and that we are then asked to guess which one of the two distributions describes that law. The quantity 1/2 + ‖ μ − ν ‖ TV / 4
then provides a sharp upper bound on the prior probability that our guess will be correct.
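The bound can be checked numerically. The sketch below uses two small discrete distributions of my own choosing, estimates the success probability of a maximum-likelihood guess by simulation, and compares it with 1/2 + ‖μ − ν‖_TV / 4 under the convention above in which the total variation distance ranges up to 2.

```python
# Numerical illustration (my own example): estimate the best achievable
# probability of guessing which of two discrete distributions generated a
# single sample, and compare with 1/2 + ||mu - nu||_TV / 4.
import random

mu = {0: 0.5, 1: 0.3, 2: 0.2}
nu = {0: 0.2, 1: 0.3, 2: 0.5}

tv = 2 * sum(max(mu[k] - nu[k], 0) for k in mu)        # ||mu - nu||_TV in [0, 2]
bound = 0.5 + tv / 4

def sample(dist):
    r, acc = random.random(), 0.0
    for k, p in dist.items():
        acc += p
        if r < acc:
            return k
    return k

correct, trials = 0, 200_000
for _ in range(trials):
    truth = random.choice([mu, nu])                     # prior 0.5 / 0.5
    x = sample(truth)
    guess = mu if mu[x] >= nu[x] else nu                # maximum-likelihood guess
    correct += (guess is truth)

print(f"empirical success: {correct / trials:.4f}, bound: {bound:.4f}")
```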
Given the above definition of total variation distance, a sequence μ n of measures defined on the same measure space is said to converge to a measure μ in total variation distance if for every ε > 0 , there exists an N such that for all n > N , one has that [ 1 ] ‖ μ n − μ ‖ TV < ε .
For ( X , F ) {\displaystyle (X,{\mathcal {F}})} a measurable space , a sequence μ n is said to converge setwise to a limit μ if lim n → ∞ μ n ( A ) = μ ( A )
for every set A ∈ F {\displaystyle A\in {\mathcal {F}}} .
Typical arrow notations are μ n → s w μ {\displaystyle \mu _{n}\xrightarrow {sw} \mu } and μ n → s μ {\displaystyle \mu _{n}\xrightarrow {s} \mu } .
For example, as a consequence of the Riemann–Lebesgue lemma , the sequence μ n of measures on the interval [−1, 1] given by μ n ( dx ) = (1 + sin( nx )) dx converges setwise to Lebesgue measure, but it does not converge in total variation.
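A quick numerical check of this example (my own, not from the article): the measure of a fixed interval such as [0, 1] under μ n approaches its Lebesgue measure, while the total variation distance to Lebesgue measure, which here equals the integral of |sin( nx )| over [−1, 1] because μ n − λ has density sin( nx ), stays bounded away from zero.

```python
# Numerical illustration: mu_n(dx) = (1 + sin(n x)) dx on [-1, 1] converges
# setwise to Lebesgue measure (mu_n([0, 1]) -> 1) but not in total variation
# (the TV distance tends to 4/pi, not 0).
import numpy as np

x = np.linspace(-1, 1, 200_001)
for n in (1, 10, 100, 1000):
    density = 1 + np.sin(n * x)
    mu_n_A = np.trapz(np.where(x >= 0, density, 0.0), x)   # mu_n([0, 1])
    tv = np.trapz(np.abs(np.sin(n * x)), x)                # ||mu_n - lambda||_TV
    print(n, round(mu_n_A, 4), round(tv, 4))
```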
In a measure theoretical or probabilistic context setwise convergence is often referred to as strong convergence (as opposed to weak convergence). This can lead to some ambiguity because in functional analysis , strong convergence usually refers to convergence with respect to a norm.
In mathematics and statistics , weak convergence is one of many types of convergence relating to the convergence of measures . It depends on a topology on the underlying space and thus is not a purely measure-theoretic notion.
There are several equivalent definitions of weak convergence of a sequence of measures, some of which are (apparently) more general than others. The equivalence of these conditions is sometimes known as the Portmanteau theorem . [ 2 ]
Definition. Let S {\displaystyle S} be a metric space with its Borel σ {\displaystyle \sigma } -algebra Σ {\displaystyle \Sigma } . A bounded sequence of positive probability measures P n ( n = 1 , 2 , … ) {\displaystyle P_{n}\,(n=1,2,\dots )} on ( S , Σ ) {\displaystyle (S,\Sigma )} is said to converge weakly to a probability measure P {\displaystyle P} (denoted P n ⇒ P {\displaystyle P_{n}\Rightarrow P} ) if any of the following equivalent conditions is true (here E n {\displaystyle \operatorname {E} _{n}} denotes expectation or the integral with respect to P n {\displaystyle P_{n}} , while E {\displaystyle \operatorname {E} } denotes expectation or the integral with respect to P {\displaystyle P} ):
E n [ f ] → E [ f ] for all bounded , continuous functions f ;
E n [ f ] → E [ f ] for all bounded and Lipschitz functions f ;
lim sup E n [ f ] ≤ E [ f ] for every upper semi-continuous function f bounded from above;
lim inf E n [ f ] ≥ E [ f ] for every lower semi-continuous function f bounded from below;
lim sup P n ( C ) ≤ P ( C ) for all closed sets C ;
lim inf P n ( U ) ≥ P ( U ) for all open sets U ;
lim P n ( A ) = P ( A ) for all continuity sets A of P , that is, sets A with P ( ∂ A ) = 0 .
In the case S {\displaystyle S} and R {\displaystyle \mathbf {R} } (with its usual topology) are homeomorphic , if F n {\displaystyle F_{n}} and F {\displaystyle F} denote the cumulative distribution functions of the measures P n {\displaystyle P_{n}} and P {\displaystyle P} , respectively, then P n {\displaystyle P_{n}} converges weakly to P {\displaystyle P} if and only if lim n → ∞ F n ( x ) = F ( x ) {\displaystyle \lim _{n\to \infty }F_{n}(x)=F(x)} for all points x ∈ R {\displaystyle x\in \mathbf {R} } at which F {\displaystyle F} is continuous.
For example, the sequence where P n {\displaystyle P_{n}} is the Dirac measure located at 1 / n {\displaystyle 1/n} converges weakly to the Dirac measure located at 0 (if we view these as measures on R {\displaystyle \mathbf {R} } with the usual topology), but it does not converge setwise. This is intuitively clear: we only know that 1 / n {\displaystyle 1/n} is "close" to 0 {\displaystyle 0} because of the topology of R {\displaystyle \mathbf {R} } .
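The same example can be spelled out numerically: integrals of a bounded continuous test function against δ 1/n converge to its value at 0, while the measure of the singleton {0} does not converge to δ 0 ({0}) = 1. The test function below is an arbitrary choice.

```python
# Numerical illustration: delta_{1/n} -> delta_0 weakly (f(1/n) -> f(0) for every
# bounded continuous f) but not setwise, since delta_{1/n}({0}) = 0 for all n
# while delta_0({0}) = 1.
import math

f = math.cos                          # an arbitrary bounded continuous test function
for n in (1, 10, 100, 1000):
    integral_against_mu_n = f(1.0 / n)   # integral of f with respect to delta_{1/n}
    mass_of_singleton = 0.0              # delta_{1/n}({0})
    print(n, round(integral_against_mu_n, 6), mass_of_singleton)
print("limits:", f(0.0), "(weak)", 1.0, "(delta_0({0}), not matched setwise)")
```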
This definition of weak convergence can be extended for S {\displaystyle S} any metrizable topological space . It also defines a weak topology on P ( S ) {\displaystyle {\mathcal {P}}(S)} , the set of all probability measures defined on ( S , Σ ) {\displaystyle (S,\Sigma )} . The weak topology is generated by the following basis of open sets: { U φ , x , δ : φ : S → R is bounded and continuous, x ∈ R and δ > 0 } ,
where U φ , x , δ := { μ ∈ P ( S ) : | ∫ S φ d μ − x | < δ } .
If S {\displaystyle S} is also separable , then P ( S ) {\displaystyle {\mathcal {P}}(S)} is metrizable and separable, for example by the Lévy–Prokhorov metric . If S {\displaystyle S} is also compact or Polish , so is P ( S ) {\displaystyle {\mathcal {P}}(S)} .
If S {\displaystyle S} is separable, it naturally embeds into P ( S ) {\displaystyle {\mathcal {P}}(S)} as the (closed) set of Dirac measures , and its convex hull is dense .
There are many "arrow notations" for this kind of convergence: the most frequently used are P n ⇒ P {\displaystyle P_{n}\Rightarrow P} , P n ⇀ P {\displaystyle P_{n}\rightharpoonup P} , P n → w P {\displaystyle P_{n}\xrightarrow {w} P} and P n → D P {\displaystyle P_{n}\xrightarrow {\mathcal {D}} P} .
Let ( Ω , F , P ) {\displaystyle (\Omega ,{\mathcal {F}},\mathbb {P} )} be a probability space and X be a metric space. If X n : Ω → X is a sequence of random variables then X n is said to converge weakly (or in distribution or in law ) to the random variable X : Ω → X as n → ∞ if the sequence of pushforward measures ( X n ) ∗ ( P ) converges weakly to X ∗ ( P ) in the sense of weak convergence of measures on X , as defined above.
Let X {\displaystyle X} be a metric space (for example R {\displaystyle \mathbb {R} } or [ 0 , 1 ] {\displaystyle [0,1]} ). The following spaces of test functions are commonly used in the convergence of probability measures. [ 3 ] These are C c ( X ) , the continuous functions with compact support; C 0 ( X ) , the continuous functions vanishing at infinity; C B ( X ) , the bounded continuous functions; and C ( X ) , all continuous functions.
We have C c ⊂ C 0 ⊂ C B ⊂ C {\displaystyle C_{c}\subset C_{0}\subset C_{B}\subset C} . Moreover, C 0 {\displaystyle C_{0}} is the closure of C c {\displaystyle C_{c}} with respect to uniform convergence. [ 3 ]
A sequence of measures ( μ n ) n ∈ N {\displaystyle \left(\mu _{n}\right)_{n\in \mathbb {N} }} converges vaguely to a measure μ {\displaystyle \mu } if for all f ∈ C c ( X ) {\displaystyle f\in C_{c}(X)} , ∫ X f d μ n → ∫ X f d μ {\displaystyle \int _{X}f\,d\mu _{n}\rightarrow \int _{X}f\,d\mu } .
A sequence of measures ( μ n ) n ∈ N {\displaystyle \left(\mu _{n}\right)_{n\in \mathbb {N} }} converges weakly to a measure μ {\displaystyle \mu } if for all f ∈ C B ( X ) {\displaystyle f\in C_{B}(X)} , ∫ X f d μ n → ∫ X f d μ {\displaystyle \int _{X}f\,d\mu _{n}\rightarrow \int _{X}f\,d\mu } .
In general, these two convergence notions are not equivalent.
In a probability setting, vague convergence and weak convergence of probability measures are equivalent assuming tightness . That is, a tight sequence of probability measures ( μ n ) n ∈ N {\displaystyle (\mu _{n})_{n\in \mathbb {N} }} converges vaguely to a probability measure μ {\displaystyle \mu } if and only if ( μ n ) n ∈ N {\displaystyle (\mu _{n})_{n\in \mathbb {N} }} converges weakly to μ {\displaystyle \mu } .
The weak limit of a sequence of probability measures, provided it exists, is a probability measure. In general, if tightness is not assumed, a sequence of probability (or sub-probability) measures may not necessarily converge vaguely to a true probability measure, but rather to a sub-probability measure (a measure such that μ ( X ) ≤ 1 {\displaystyle \mu (X)\leq 1} ). [ 3 ] Thus, a sequence of probability measures ( μ n ) n ∈ N {\displaystyle (\mu _{n})_{n\in \mathbb {N} }} such that μ n → v μ {\displaystyle \mu _{n}{\overset {v}{\to }}\mu } where μ {\displaystyle \mu } is not specified to be a probability measure is not guaranteed to imply weak convergence.
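A standard example of this escape of mass, written out as a small sketch of my own: μ n = δ n on the real line converges vaguely to the zero measure but not weakly, since the constant function 1 is bounded and continuous yet integrates to 1 for every n, and the sequence is not tight.

```python
# Numerical illustration: mu_n = delta_n. A compactly supported f gives
# integral f(n) -> 0 (vague convergence to the zero measure), but the bounded
# continuous function f == 1 integrates to 1 for every n, so there is no weak
# convergence; all mass escapes to infinity.
def integral_against_delta_n(f, n):
    return f(n)                                   # integral of f w.r.t. delta_n

f_compact = lambda x: max(0.0, 1.0 - abs(x))      # continuous, supported on [-1, 1]
f_one = lambda x: 1.0                             # bounded continuous, no compact support

for n in (1, 5, 50, 500):
    print(n, integral_against_delta_n(f_compact, n), integral_against_delta_n(f_one, n))
```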
Despite having the same name as weak convergence in the context of functional analysis, weak convergence of measures is actually an example of weak-* convergence. The definitions of weak and weak-* convergences used in functional analysis are as follows:
Let V {\displaystyle V} be a topological vector space or Banach space. A sequence x n in V {\displaystyle V} converges weakly to x ∈ V if φ ( x n ) → φ ( x ) for every continuous linear functional φ in the continuous dual space V ∗ {\displaystyle V^{*}} . A sequence of functionals φ n in V ∗ {\displaystyle V^{*}} converges in the weak-* sense to φ ∈ V ∗ if φ n ( x ) → φ ( x ) for every x ∈ V .
To illustrate how weak convergence of measures is an example of weak-* convergence, we give an example in terms of vague convergence (see above). Let X {\displaystyle X} be a locally compact Hausdorff space. By the Riesz-Representation theorem , the space M ( X ) {\displaystyle M(X)} of Radon measures is isomorphic to a subspace of the space of continuous linear functionals on C 0 ( X ) {\displaystyle C_{0}(X)} . Therefore, for each Radon measure μ n ∈ M ( X ) {\displaystyle \mu _{n}\in M(X)} , there is a linear functional φ n ∈ C 0 ( X ) ∗ {\displaystyle \varphi _{n}\in C_{0}(X)^{*}} such that φ n ( f ) = ∫ X f d μ n {\displaystyle \varphi _{n}(f)=\int _{X}f\,d\mu _{n}} for all f ∈ C 0 ( X ) {\displaystyle f\in C_{0}(X)} . Applying the definition of weak-* convergence in terms of linear functionals, the characterization of vague convergence of measures is obtained. For compact X {\displaystyle X} , C 0 ( X ) = C B ( X ) {\displaystyle C_{0}(X)=C_{B}(X)} , so in this case weak convergence of measures is a special case of weak-* convergence. | https://en.wikipedia.org/wiki/Portmanteau_theorem |
Portola Pharmaceuticals is an American clinical stage biotechnology company that researches, develops, and commercializes drugs . The company focuses primarily on drugs used in the treatment of thrombosis and hematological malignancies . [ 2 ] Founded in 2003 and headquartered in South San Francisco, California , Portola Pharmaceuticals is a member of the NASDAQ Biotechnology Index .
In May 2020, Alexion Pharmaceuticals and Portola announced that they had entered into a definitive merger agreement for Alexion to acquire Portola. [ 3 ]
The company was founded on September 2, 2003, [ 4 ] and named after Gaspar de Portolà , who was the first European to see San Francisco Bay . It completed an IPO on NASDAQ in May 2013. [ 5 ]
The company developed P2Y 12 inhibitor Elinogrel , transferring rights to Novartis in 2009. [ 6 ] The rights were returned in 2012 to Portola, which decided not to continue development.
Portola Pharmaceuticals has collaboration agreements with SRX Cardio, Dermavant, Millennium Pharmaceuticals , Daiichi Sankyo , Bayer , Janssen , BMS , and Pfizer . [ 7 ]
In the class action lawsuit of Hayden v. Portola Pharmaceuticals in U.S. District Court, in the Northern District of California, before U.S. District Judge Vince Chhabria , plaintiffs sued under Section 11 of the Securities Act of 1933 alleging that the company and its underwriters misrepresented its financial position ahead of a 2019 securities offering. [ 8 ] [ 9 ] In November 2022, Judge Chhabria entered an order granting preliminary approval of the proposed settlement of the case in the proposed settlement amount of $17.5 million. [ 10 ] [ 11 ] [ 12 ] | https://en.wikipedia.org/wiki/Portola_Pharmaceuticals |
Films have portrayed professional women in science, technology, engineering, and mathematics (STEM) fields in various ways throughout film history . [ 1 ]
The study of female characters in film began with movements from the 1960s and 1970s in the form of second-wave feminism , the rise of independent films , and the beginning of academic film studies . [ 2 ] Some films promote certain socially defined female stereotypical archetypes that often combine job stereotypes [ 3 ] with gender stereotypes. [ 3 ] The use of these stereotypes in film has been suggested to contribute to a questionable portrayal of women, [ 4 ] especially revolving around themes of violence, sexuality , objectification , and subordination . [ 5 ]
The presentation of women as scientists on film goes back to the early days of cinema.
The first known presentation may be 1929's Woman in the Moon . Written by Thea von Harbou and directed by Fritz Lang , the film follows a group of Germans as they travel to the Moon. The group includes Friede Velten, an assistant on the trip, who chooses between two potential husbands and ultimately decides to stay on the Moon and live a new life there.
It would take almost ten years before another woman scientist would appear onscreen. When she did, Alice Swallow was a side character, a hard worker who was too busy to marry Cary Grant , who turned to fun-loving socialite Katharine Hepburn instead. Nevertheless, 1938's Bringing up Baby showed millions of people around the world a level-headed, independent woman who did not need to rely on a man to move her life forward.
The earliest portrayal of a real-life woman scientist may be the 1943 film Madame Curie starring Greer Garson as Polish-French physicist Marie Curie in 1890s Paris.
These portrayals in A-list movies were rare and it would take decades before they became more commonplace. However, the 1950s saw a proliferation of low-budget American B-movies which showcased female scientists and post-graduates. They were normally associated with a male boss and involved in a romantic storyline, as well as a scientific one.
One of the earliest B-movie portrayals of a fictional qualified scientist may be 1951's Flight to Mars , which tells the story of a male engineer and his assistant Carol Stafford, who earned her degree in "spaceship engineering".
1951 also saw Unknown World where a group of scientists, including Dr Joan Lindsey, drill into the earth to create an underground environment where humanity could escape and survive a future nuclear holocaust.
After the Swinging Sixties in the west, women began to feature front and centre of large-budget movies, often in a more serious tone.
An early example was the 1970 film The Andromeda Strain which showed Dr Ruth Leavitt as one of several scientists investigating a deadly organism of extraterrestrial origin.
Other examples include:
Gorillas in the Mist is a film based on the book with the same title by Dian Fossey and follows her as she leaves the United States to study gorillas in Rwanda and Uganda . As she bonds with the gorillas, she worries about poachers and devotes her time to protecting the animals. In the film, Fossey is said to be depicted as an independent woman, breaking the common trope of women being the homemaker. [ 6 ] This may be seen as an unusual portrayal of women scientists and it concentrates on the scientific work, and does not have a romantic story attached to it.
The 1993 film Jurassic Park , based on the novel with the same title by Michael Crichton , depicts a fictional paleobotanist , Dr. Ellie Sattler. She is shown to have extensive knowledge about dinosaurs and plant life throughout the movie. [ 7 ] She is said to be portrayed with great physical ability, allowing her to survive multiple attacks from dinosaurs.
Contact was released in 1997 and told the fictional story of Dr Eleanor "Ellie" Arroway, a SETI scientist who finds evidence of extraterrestrial life and is chosen by the government to make first contact.
Nutty Professor II: The Klumps is the sequel to the 1996 slapstick, science-fiction dark comedy, The Nutty Professor . Janet Jackson portrays molecular biologist Denise Gaines. [ 8 ] Denise is the love interest of the male hero, Sherman Klump. Gaines faces a dilemma where she must balance her professional career with her romantic relationships. In this film, she is hesitant to take a full professorship at the University of Maine, but she will be able to stay and pursue further research with Klump. [ 9 ]
The 2013 film Gravity , directed by Alfonso Cuarón and starring Sandra Bullock and George Clooney , is often cited as a feminist film due to Bullock's starring role as an astronaut .
Critics have written that Gravity "proves that a woman can anchor an action-packed blockbuster that does not have to include violence, superheroes, weapons and/or huge death tolls." [ 10 ] While the film's lead is a woman, she gets help from her male counterpart, played by Clooney. Some critics describe Bullock's character as "the very model of the damsel in distress," as she can never get out of a situation on her own and must lean on Clooney's character to do the heavy lifting. [ 11 ] The role of Bullock's character is thought to be an act of defiant feminism, as she is the lead in a science fiction film, but some viewers find that the film actually subscribes to traditional gender stereotypes and does not portray Bullock's character as a true independent woman. In contrast, Vanessa Reich-Shackelford from Westcoast Women in Engineering, Science, and Technology, while considering the character of Dr. Ryan Stone, wrote: "I came to realize that writers Alfonso (also director) and Jonás Cuarón had created one of the most positive representations of a woman in STEM on screen so far." [ 12 ]
The 2016 film Arrival revolves around the character of linguist Dr. Louise Banks, played by Amy Adams , who facilitates the very first instance of human communicative contact with an alien population. The film makes an obvious effort at employing a feminist theme, primarily through the way the backstory shapes Dr. Banks and frames her as an accomplished professional and mother. [ 13 ]
Hidden Figures was released in 2016 and told the true story of three female African-American mathematicians, Katherine Johnson , Dorothy Vaughan and Mary Jackson , who worked at NASA during the Space Race .
The critically acclaimed film Black Panther features Princess Shuri, a young, Black, female character who excels in the STEM field as an intelligent and creative technology whiz and inventor. Portrayed by Letitia Wright , Princess Shuri is the younger sister of T'Challa ( Chadwick Boseman ), the eventual king of Wakanda , and the mastermind behind harnessing the power of vibranium , specifically in the creation of the Black Panther suit. [ 14 ]
The Marvel Cinematic Universe has portrayed over 60 fictional scientists, about a quarter of them being women, including computer expert Dr Helen Cho, biologist Dr Maya Hensen and biochemist Agent Jemma Simmons .
Eva Flicker, writing in 2002, noted that in science fiction films, men are overwhelmingly portrayed as scientists, making up 82% of all film scientists. [ 15 ] The majority of films that include female scientists and engineers as primary characters are placed into the action, adventure and comedy genre. [ 16 ]
Women, in the majority of films including the science fiction genre, often fall into six categories:
This type of woman scientist is "only interested in her work" [ 17 ] and is often depicted having a nondescript appearance and style, with strong competency. Typically, as the film progresses, a man saves her. This male salvation consequently brings out the feminine side of "the old maid", [ 18 ] after which she is intended to become more conventionally attractive . However, she loses credibility as an academic and suddenly makes more mistakes than when she did not focus as much on her appearance. Based on this type of story, Flicker concludes that "femininity and intelligence are mutually exclusive characteristics in a woman's film role." [ 15 ] An example of this woman in a film can be seen in Spellbound , in the character Dr. Constance Peterson. [ 15 ]
This type of woman works with men in an all male environment. Because of this, she has a "harsh voice" and occasionally "succumbs to an unhealthy lifestyle," such as partaking in smoking and drinking , to fit in with her male counterparts. Flicker claims that this type of woman is "lost somewhere in the middle" of masculinity and femininity , meaning she is not as sexual a character as other women are, but she is also not on the same level as the men she works with. This decreases her credibility both as a woman and as one trying to fit into "a man's world." In the end, her heightened female emotions allow her to contribute to a solution, which is her redeeming quality as a character. The "male woman" character appears in the 1970 science fiction film Andromeda Strain . [ 15 ]
Seen in The Lost World: Jurassic Park , this woman scientist is described as a character that does minimal work. [ 19 ] For the sake of the dramatization of the film, she is crucial, but typically she does not advance the story and does not contribute much to the solution. Instead, her femininity causes more trouble for the team of scientists. She is portrayed as young and attractive, and is subject to feminine emotions [ 18 ] which add an extra layer to the existing predicament, forcing the man to solve the problem to get the team out of trouble. She is "naïve in her actions," messing up every task that is given to her despite her extensive education and knowledge, while her male counterpart stands in stark contrast and ends up saving the day. [ 15 ]
The "evil plotter" woman is young and very beautiful, and she uses her feminine charm to trick the men into doing what she wants. She has an ulterior motive, which is on the opposite end of the spectrum from what the rest of the team is trying to accomplish. She is the character that the audience and the other characters despise by the end of the movie because she is devilishly smart and knows how to use her scientific knowledge and sexual prowess for evil. This character type was portrayed by Alison Doody as Dr. Elsa Schneider in Indiana Jones and the Last Crusade . [ 15 ]
This role of female scientist encompasses many feminine stereotypes portrayed in movies. In this role the woman is subordinate to her male counterpart, [ 20 ] who is either her father or her lover. She is smart and capable, but her secondary role does not allow her to demonstrate her abilities. Flicker writes that when this woman plays the role of lover to the male scientist, "her work place is limited to the bed." She is only good for sexual satisfaction, not for the degree she earned. The assistant role is seen in the female Dr. Medford in the film Them! , as she is portrayed alongside an older gentleman of the same name. [ 15 ]
This type of woman scientist is intelligent, attractive, and somewhat independent. Flicker says that she "has appropriated some male traits," such as losing herself in her work. She is both sexual and smart, and she manages to exhibit both qualities in the film. Despite this, she is still subordinate to the men on her team, and depends on them and their work to gain respect. [ 21 ] She is the most progressive of the woman scientist types, but she lacks her own form of independence and still must rely on a sexual relationship with a man to be seen as someone. [ 15 ] The "lonely heroine" type is best seen in Jodie Foster 's portrayal of Eleanor Arroway in the film Contact .
Flicker argues that women are often pigeonholed into these six limiting roles when written in films. Each of these roles places the female scientist character on the sidelines, and does not allow her to be on the same level as her male counterpart(s). Although the women in these roles are educated, and often just as educated as the men on their team, they are used primarily as assistants and sexual characters. Producers strategically write women's roles for the male gaze, often making the female characters use their "weapons of a woman," such as sex appeal, to be attractive to male characters and viewers alike. [ 3 ]
Feminist film theorist Laura Mulvey writes that in film, women are passive objects of the male gaze. [ 22 ] Mulvey writes that movies fulfill "a primordial wish for pleasurable looking," and that male audiences are largely catered to in the film industry. [ 22 ] In her analysis of film, she states that the lead woman in a film often falls in love with her male counterpart, and when she does, she only exists as a character to please him. Through the male character's ownership of the woman, the men in the audience find themselves owning her as well. [ 22 ]
The male gaze is a significant aspect of traditional feminist film analysis [ 23 ] and thus is an important factor to consider in relation to female scientists and how they are portrayed in films. Typically women are viewed as sexual objects for the pleasure of males who view these films. This has a direct effect on how people interpret women scientists and their role in movies. Instead of being portrayed as superheroes , they begin to obtain a reputation based on sexual appeal.
Considering superhero films, Amy Shackelford mentions how the male gaze is applied to sexualize the female character, further misinterpreting women in the media through visual depiction. Looking at these particular films, screenwriters have a difficult time accomplishing the task of writing female characters. Shackelford also states that it seems as if the only way these screenwriters know how to portray female characters' power in superhero films is to sexualize them. [ 24 ]
Judith Mayne supports Laura Mulvey's view. She writes that "most feminist film theory and criticism of the last decade" has been written in response to Mulvey's 1975 assessment, "Visual Pleasure and Narrative Cinema." [ 2 ] She argues that understanding the often sexist portrayal of women in film requires "an understanding of patriarchy as oppressive and as vulnerable." [ 2 ] Mayne goes deeper in her argument, claiming that feminist film theory inspired feminist documentaries that are "aimed at rejecting stereotyped images of women." [ 2 ] This criticism also opens the question about "the notion of woman as 'image.'" [ 2 ]
Law professor Sarah Eschholz and her colleagues Jana Bufkin and Jenny Long write that in film, women are often young, and female characters are rarely played by middle aged or older women. [ 25 ] Often the only role available to these women is that of the mother, who is not meant to be a leading character. They write that "females' primary societal value is based on physical appearance and youthful beauty." [ 25 ] According to their assessment, men are valued at all ages, and arguably more so as they age and become wiser. Most women in film are 35 years old or younger, while their male costars are often older. [ 25 ] Despite women in film having impressive credentials and extensive educations, they are often reduced to objects for looking, due to a reluctance to hire an older, less attractive woman for a major role. [ 15 ] [ 25 ]
In the traditional husband and wife family, women are often portrayed as the second in command. Their husbands take on the role of family head and get to maintain a bachelor level of freedom , which allows them to work and spend time out with the guys. Eschholz, Bufkin, and Long report on studies that show female characters are more likely to be married and have a family than male characters. [ 25 ] Men have the freedom to work and be protagonists through their actions, while their wives or girlfriends are forced to take a back seat in the story to care for the family. [ 25 ] [ 15 ]
Noël Carroll references Mulvey's pivotal paper on psychoanalysis and visual pleasure in his writing, and plays devil's advocate to her claim that women are the only subjects of gaze. Carroll acknowledges and agrees with Mulvey's assessment that women in film are strategically placed for the male gaze despite the role of their actual character. Carroll states, "Women in Hollywood film are staged and blocked for male erotic contemplation and pleasure." [ 26 ] However, Carroll adds that men in films are also strategically placed for the purpose of pleasure. He cites such examples as Sylvester Stallone and Arnold Schwarzenegger , big bodybuilding actors "whose scenes are blocked and staged precisely to afford spectacles of bulging pectorals and other parts." [ 26 ] Similarly to actresses, male actors are also heralded for their facial attractiveness and are sometimes lauded exclusively for being attractive. As an example, Carroll offers Leslie Howard , a male actor who appeared in Of Human Bondage and Gone With the Wind , who was highly successful in the industry despite being "staggeringly ineffectual." [ 26 ] According to Carroll, being subject to the erotic gaze of the audience is not an exclusively female burden; rather, both sexes fall prey to Hollywood camera angles that best show off their bodies.
Kristin Thompson , an American film theorist, analyzes the film Laura . In her analysis, she claims that the main character, Laura, was written to embody the role of "passive visual object," which Mulvey and Flicker claim is an extremely prevalent role of women in film. In the film, the main protagonist spends much of the film admiring an idealised painting of Laura, rather than communicating with an independent human being. [ 15 ] [ 22 ] [ 27 ] However, Thompson, like Carroll, does not believe that this passive role is limited to women in the industry. Thompson claims that Mulvey's assessment stating that women are used as objects and men cannot handle being the subject of gaze is "common but not universal" in the film industry. [ 27 ] She claims that men are also presented in flashy ways in film, and gives the example of Howard Keel in the film Show Boat . [ 27 ] Her analysis aligns with Carroll's conclusion that both sexes must be the subject of the audience's gaze, and that objectification is all-encompassing.
Studies have shown that female scientists are either underrepresented or misrepresented as film characters. [ 3 ] As Eva Flicker writes, film has a way of taking social realities and expressing certain images of women through media formats . [ 15 ] Such media is then able to influence the audience by creating a mirror of metaphors, myths, opinions, and a social memory contributing to stereotypes. [ 15 ]
The existence of gender-STEM stereotyping is not a new phenomenon. Such stereotyping has been shown to be prevalent among people in a broad range of ages and life stages, from early childhood to college. [ 28 ] However, it has been demonstrated that increasing exposure to representations that break the stereotypes of men as the default in STEM can successfully begin to undermine these mental correlations and help to prevent perpetuation of these narratives. [ 28 ] This is an example that can be explained by the social role theory of social psychology [ 29 ] and the distinct role of culture. [ 28 ]
While most films are a one-off presentation of a story, TV shows which lead to a film can portray characters and ideas in a more long-term and rounded manner.
One example of this can be seen in Dana Scully of The X-Files . The Scully Effect is widely documented as having encouraged a large number of women to go into science. [ 30 ] [ 31 ] [ 32 ] Running from 1993 to 2002, the weekly portrayal of a medical doctor carrying out investigations with the FBI, followed by her story on the big screen ( The X-Files and The X-Files: I Want to Believe ), has had a huge impact on science.
A much earlier example is that of Communications Officer Lt Nyota Uhura on Star Trek . As a woman of colour portraying a military linguist, cryptographer and mathematician, Uhura has had an unprecedented influence on women, and men, around the world. [ 33 ] [ 34 ]
A 2007 meta-analysis by Jocelyn Steinke of Western Michigan University and colleagues looked at gender stereotyping by children who have been exposed to images of scientists through films, television shows, and books. [ 35 ] One study examined elementary school students taking the Draw-a-Scientist-Test , or DAST . The results showed that out of more than 4,000 children who participated in the DAST, only 28 girls drew female scientists. [ 35 ] Another study of 1,137 Korean students between the ages of eleven and fifteen found that 74% of them drew male scientists, while only 16% had depicted female scientists. [ 35 ] Through the influence of mass media outlets, statistics from the National Science Foundation 2000 indicate that women make up only 19.4% of the STEM industry including science and mathematics. [ 36 ] As a result, most children in the major developmental years are subjected to accepting traditional stereotypes of women being passive, emotional, physically weak and dependent, as depicted in films. [ 36 ] The study posits that gender stereotypes can be the product of an individual's surrounding environment , which can influence how they view themselves and others around them. [ 35 ]
As an increasing number of adolescents use social media platforms , the reinforcement of traditional cultural norms is at an all-time high. [ 16 ] Even before young women reach adolescence, they may be exposed to social media perpetuating the stereotype of women as dependent, emotional and less capable beings. [ 16 ]
Increasing interest in science in real life and its portrayal on screen are linked. [ 37 ] [ 38 ] As Mae Jemison (a life-long Star Trek fan) became the first black woman to travel in space and the first real-life astronaut to appear on Star Trek, [ 39 ] [ 40 ] this link can be celebrated for the changes it brings across society. | https://en.wikipedia.org/wiki/Portrayal_of_women_scientists_in_film
In the fields of computing and computer vision , pose (or spatial pose ) represents the position and the orientation of an object, each usually in three dimensions . [ 1 ] Poses are often stored internally as transformation matrices . [ 2 ] [ 3 ] The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale , whereas pose does not. [ 4 ] [ 5 ]
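A minimal sketch of this convention, using NumPy: a pose stored as a 4×4 homogeneous transformation matrix built from a rotation (orientation) and a translation (position), with no scale component, applied to a point in homogeneous coordinates. The specific rotation and translation values are arbitrary.

```python
# Minimal sketch: a pose as a 4x4 homogeneous transform (rotation + translation).
import numpy as np

def make_pose(rotation_z_deg, translation):
    theta = np.deg2rad(rotation_z_deg)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    T = np.eye(4)
    T[:3, :3] = R          # orientation
    T[:3, 3] = translation # position
    return T

pose = make_pose(90.0, [1.0, 2.0, 0.0])
point_object = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous coordinates
print(pose @ point_object)                      # the point expressed in the world frame
```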
In computer vision, the pose of an object is often estimated from camera input by the process of pose estimation . This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object based on its perceived position and orientation in the environment. Other applications include skeletal action recognition.
The specific task of determining the pose of an object in an image (or stereo images, image sequence) is referred to as pose estimation . Pose estimation problems can be solved in different ways depending on the image sensor configuration, and choice of methodology. Three classes of methodologies can be distinguished:
Camera resectioning is the process of estimating the parameters of a pinhole camera model approximating the camera that produced a given photograph or video; it determines which incoming light ray is associated with each pixel on the resulting image. Basically, the process determines the pose of the pinhole camera.
Usually, the camera parameters are represented in a 3 × 4 projection matrix called the camera matrix .
The extrinsic parameters define the camera pose (position and orientation) while the intrinsic parameters specify the camera image format (focal length, pixel size, and image origin).
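A minimal numeric sketch of this split, with made-up intrinsic and extrinsic values: assemble the 3×4 camera matrix P = K [R | t] from the intrinsics and the camera pose, then project a 3D point into pixel coordinates.

```python
# Minimal sketch (hypothetical numbers): build P = K [R | t] and project a point.
import numpy as np

K = np.array([[800.0,   0.0, 320.0],     # fx, skew, cx   (intrinsics, assumed values)
              [  0.0, 800.0, 240.0],     #      fy, cy
              [  0.0,   0.0,   1.0]])

R = np.eye(3)                            # extrinsics: camera at the origin, looking down +Z
t = np.array([[0.0], [0.0], [0.0]])

P = K @ np.hstack([R, t])                # 3x4 projection (camera) matrix

X = np.array([0.1, -0.05, 2.0, 1.0])     # a 3D point in homogeneous coordinates
x = P @ X
u, v = x[0] / x[2], x[1] / x[2]          # perspective division to pixel coordinates
print(round(u, 1), round(v, 1))          # -> 360.0 220.0
```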
This process is often called geometric camera calibration or simply camera calibration, although that term may also refer to photometric camera calibration or be restricted for the estimation of the intrinsic parameters only. Exterior orientation and interior orientation refer to the determination of only the extrinsic and intrinsic parameters, respectively.
The classic camera calibration requires special objects in the scene, which is not required in camera auto-calibration . | https://en.wikipedia.org/wiki/Pose_(computer_vision) |
Pose to pose is a term used in animation , for creating key poses for characters and then inbetweening them in intermediate frames to make the character appear to move from one pose to the next. Pose-to-pose is used in traditional animation as well as computer-based 3D animation. [ 1 ] The opposite concept is straight ahead animation , where the poses of a scene are not planned, which results in more loose and free animation, though with less control over the animation's timing . [ 1 ]
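A minimal sketch of the inbetweening idea, assuming for simplicity that a "pose" is just a set of 2D joint positions: intermediate frames are generated by linearly interpolating between two key poses. Real pipelines typically interpolate rotations and apply easing curves rather than plain linear interpolation.

```python
# Minimal sketch: generate inbetween frames between two key poses by linear
# interpolation of 2D joint positions (a simplifying assumption).
def inbetween(pose_a, pose_b, num_frames):
    frames = []
    for i in range(num_frames + 1):
        t = i / num_frames
        frame = {}
        for joint in pose_a:
            (ax, ay), (bx, by) = pose_a[joint], pose_b[joint]
            frame[joint] = (ax + t * (bx - ax), ay + t * (by - ay))
        frames.append(frame)
    return frames

key_a = {"hand": (0.0, 0.0), "elbow": (1.0, 0.5)}
key_b = {"hand": (2.0, 1.0), "elbow": (1.5, 1.5)}
for frame in inbetween(key_a, key_b, 4):
    print(frame)
```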
This animation -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Pose_to_pose_animation |
Several species share the specific name poseidon , the species descriptor in a binomial name .
These include: | https://en.wikipedia.org/wiki/Poseidon_(species) |
In mathematics , the poset topology associated to a poset ( S , ≤) is the Alexandrov topology (open sets are upper sets ) on the poset of finite chains of ( S , ≤), ordered by inclusion.
Let V be a set of vertices. An abstract simplicial complex Δ is a set of finite sets of vertices, known as faces σ ⊆ V {\displaystyle \sigma \subseteq V} , such that every subset of a face is also a face: if σ ∈ Δ and ρ ⊆ σ , then ρ ∈ Δ .
Given a simplicial complex Δ as above, we define a (point set) topology on Δ by declaring a subset Γ ⊆ Δ {\displaystyle \Gamma \subseteq \Delta } to be closed if and only if Γ is itself a simplicial complex, i.e. closed under taking subsets of its faces.
This is the Alexandrov topology on the poset of faces of Δ.
The order complex associated to a poset ( S , ≤) has the set S as vertices, and the finite chains of ( S , ≤) as faces. The poset topology associated to a poset ( S , ≤) is then the Alexandrov topology on the order complex associated to ( S , ≤).
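The construction can be made concrete with a small sketch: enumerate the faces of the order complex of a toy poset (its finite chains) and test whether a given set of faces is closed in the poset topology, i.e. whether it is itself a simplicial complex. The example poset is my own choice.

```python
# Minimal sketch: faces of the order complex of a small "diamond" poset, plus a
# check of the closed-set condition (closure under taking subsets of faces).
from itertools import combinations

leq = {("a", "b"), ("a", "c"), ("a", "d"), ("b", "d"), ("c", "d")}
leq |= {(x, x) for x in "abcd"}
elements = ["a", "b", "c", "d"]

def is_chain(subset):
    return all((x, y) in leq or (y, x) in leq for x, y in combinations(subset, 2))

# faces of the order complex: all nonempty finite chains of the poset
faces = [frozenset(s) for r in range(1, len(elements) + 1)
         for s in combinations(elements, r) if is_chain(s)]

def is_closed(gamma):
    # closed sets of the poset topology are themselves simplicial complexes
    return all(frozenset(sub) in gamma
               for face in gamma
               for r in range(1, len(face))
               for sub in combinations(sorted(face), r))

print(sorted(map(sorted, faces)))
print(is_closed({frozenset({"a"}), frozenset({"b"}), frozenset({"a", "b"})}))  # True
print(is_closed({frozenset({"a", "b"})}))                                      # False: missing subsets
```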
This topology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Poset_topology |
Position-effect variegation ( PEV ) is a variegation caused by the silencing of a gene in some cells through its abnormal juxtaposition with heterochromatin via rearrangement or transposition . [ 1 ] It is also associated with changes in chromatin conformation . [ 2 ]
The classical example is the Drosophila w m4 (read: white-mottled-4) translocation . In this mutation , an inversion on the X chromosome placed the white gene next to pericentric heterochromatin, or a sequence of repeats that becomes heterochromatic. [ 3 ] Normally, the white gene is expressed in every cell of the adult Drosophila eye, resulting in a red-eye phenotype . In the w[m4] mutant, the eye color was variegated (a red–white mosaic), with the white gene expressed in some cells of the eye and not in others. The mutation was first described by Hermann Muller in 1930. [ 4 ] PEV is a heterochromatin-induced gene inactivation . [ 5 ] Gene silencing phenomena similar to this have also been observed in S. cerevisiae and S. pombe . [ 5 ]
Typically, the barrier DNA sequences prevent the heterochromatic region from spreading into the euchromatin but they are no longer present in the flies that inherit certain chromosomal rearrangements. [ 6 ]
PEV is a position effect because the change in position of a gene from its original position to somewhere near a heterochromatic region has an effect on its expression . [ 7 ] The effect is the variegation in a particular phenotype i.e., the appearance of irregular patches of different colour(s), due to the expression of the original wild-type gene in some cells of the tissue but not in others, [ 8 ] as seen in the eye of mutated Drosophila melanogaster .
However, it is possible that the effect of the silenced gene is not phenotypically visible in some cases. PEV was observed first in Drosophila because it was one of the first organisms on which X-ray irradiation was used as a mutation inducer. [ 1 ] X-rays can cause chromosomal rearrangements that can result in PEV. [ 1 ]
Among a number of models, two epigenetic models are popular. One is the cis -spreading of the heterochromatin past the rearrangement breakpoint. The trans -interactions come in when the cis- spreading model is unable to explain certain phenomena. [ 5 ]
According to this model, the heterochromatin forces an altered chromatin conformation on the euchromatic region. Due to this, the transcriptional machinery cannot access the gene, which leads to the inhibition of transcription. [ 5 ] In other words, the heterochromatin spreads and causes gene silencing by packaging the normally euchromatic region. [ 2 ] But this model fails to explain some aspects of PEV. For example, variegation can be induced in a gene located several megabases from the heterochromatin-euchromatin breakpoint due to rearrangements in that breakpoint. Also, the severity of the variegated phenotype can be altered by the distance of the heterochromatic region from the breakpoint. [ 5 ]
This suggests that trans -interactions are crucial for PEV.
These are interactions between the different heterochromatic regions and the global chromosomal organisation in the interphase nucleus. [ 5 ] The rearrangements underlying PEV place the reporter gene in a new compartment of the nucleus where the required transcriptional machinery is not available, thus silencing the gene and modifying the chromatin structure. [ 2 ]
These two mechanisms affect each other as well. Which mechanism dominates to influence the phenotype depends upon the type of heterochromatin and the intricacy of the rearrangement. [ 5 ]
Mutations in mus genes are candidate PEV modifiers, as these genes are involved in chromosome maintenance and repair. Chromosome structure in the vicinity of the breakpoint appears to be an important determinant of the gene inactivation process. Six second-chromosome mus mutations were isolated with w m4 . A copy of the wild-type white gene was placed adjacent to heterochromatin. The mus mutants examined were: mus 201 D1 , mus 205 B1 , mus 208 B1 , mus 209 B1 , mus 210 B1 , mus 211 B1 . A stock was constructed in which the standard X chromosome was replaced with w m4 . It was observed that suppression of PEV is not a characteristic of mus mutations in general; only in homozygous mus 209 B1 was the variegation significantly suppressed. Also, the Pcna mutations 2735 and D-1368, when homozygous, and all heteroallelic combinations of these mutations strongly suppress PEV. [ 9 ]
In mouse, variegating coat colour has been observed. When an autosomal region carrying a fur color gene is inserted onto the X chromosome, variable silencing of the allele is seen. Variegation is, however, observed only in the female having this insertion along with a homozygous mutation in the original coat color gene. [ 1 ] The wild-type allele gets inactivated due to heterochromatinization. [ 1 ]
In plants, PEV has been observed in Oenothera blandina . The silencing of euchromatic genes occurs when the genes get placed into a new heterochromatic neighborhood. [ 1 ] | https://en.wikipedia.org/wiki/Position-effect_variegation |
Position-specific isotope analysis , also called site-specific isotope analysis , is a branch of isotope analysis aimed at determining the isotopic composition of a particular atom position in a molecule. Isotopes are elemental variants with different numbers of neutrons in their nuclei, thereby having different atomic masses. Isotopes are found in varying natural abundances depending on the element; their abundances in specific compounds can vary from random distributions (i.e., stochastic distribution) due to environmental conditions that act on the mass variations differently. These differences in abundances are called "fractionations," which are characterized via stable isotope analysis.
Isotope abundances can vary across an entire substrate (i.e., “bulk” isotope variation), across specific compounds within a substrate (i.e., compound-specific isotope variation), or across positions within specific molecules (i.e., position-specific isotope variation). Isotope abundances can be measured in a variety of ways (e.g., isotope ratio mass spectrometry , laser spectrometry, NMR , ESI-MS ). Early analyses varied in technique, but were commonly limited to measuring average isotope compositions over molecules or samples. While this allows isotope analysis of the bulk substrate, it eliminates the ability to distinguish variation between different sites of the same element within the molecule. The field of position-specific isotope biogeochemistry studies these intramolecular variations, known as “position-specific isotope” and “site-specific isotope” enrichments. It focuses on position-specific isotope fractionations in many contexts, the development of technologies to measure these fractionations, and the application of position-specific isotope enrichments to questions surrounding biogeochemistry , microbiology , enzymology , medicinal chemistry , and Earth history .
Position-specific isotope enrichments can retain critical information about synthesis and source of the atoms in the molecule. Indeed, bulk isotope analysis averages site-specific isotope effects across the molecule, and so while all those values have an influence on the bulk value, signatures of specific processes may be diluted or indistinguishable. While the theory of position-specific isotope analysis has existed for decades, [ 1 ] new technologies exist now to allow these methods to be much more common. [ 2 ] The potential applications of this approach are widespread, such as understanding metabolism in biomolecules, environmental pollutants in air, inorganic reaction mechanisms, etc. Clumped isotope analysis, a subset of position-specific isotope analysis, has already proven useful in characterizing sources of methane , paleoenvironment, paleoaltimetry, among many other applications. More specific case studies of position-specific isotope fractionation are detailed below.
Stable isotopes do not decay, and the heavy and light isotope masses affect how they partition within the environment. Any deviation from a random distribution of the light and heavy isotopes within the environment is called fractionation, and consistent fractionations as a result of a particular process or reaction are called "isotope effects."
Isotope effects are recurring patterns in the partitioning of heavy and light isotopes across different chemical species or compounds, or between atomic sites within a molecule. These isotope effects can come about from a near infinite number of processes, but most of them can be narrowed down into two main categories, based on the nature of the chemical reaction creating or destroying the compound of interest:
(1) Kinetic isotope effects manifest in irreversible reactions , when one isotopologue passes through the transition state more readily because it has the lower energy barrier. The preferred isotopologue depends on whether the transition state of the molecule during a chemical reaction is more like the reactant or the product. Normal isotope effects are defined as those which partition the lighter isotope into the products of the reaction. Inverse isotope effects are less common, as they preferentially partition the heavier isotope into the products.
(2) Equilibrium isotope effects manifest in reversible reactions , when molecules can exchange freely to reach the lowest possible energy state.
These variations can occur on a compound-specific level, but also on a position-specific level within a molecule. For instance, the carboxyl site of amino acids is exchangeable and therefore its carbon isotope signature can change over time and may not represent the original carbon source of the molecule.
Chemical reactions in biological processes are controlled by enzymes that catalyze the conversion of substrate to product. Since enzymes can alter the transition state structure for reactions, they also change kinetic and equilibrium isotope effects. Placed in the context of a metabolism , the expression of isotope effects on biomolecules is further controlled by branch points. Different pathways of biosynthesis will use different enzymes, yielding a range of position specific isotope enrichments. This variability allows position-specific isotope measurements to discern multiple biosynthetic pathways from the same metabolic product. [ 3 ] Biogeochemists use position specific isotope enrichments from amino acids , lipids , and sugars in nature to interpret the relative importance of different metabolisms.
The position-specific isotope effect of an enzymatic reaction is expressed as the ratio of rate constants for a monoisotopic substrate and a substrate substituted with one rare isotope. For example, enzyme formate dehydrogenase catalyzes the reaction of formate and NAD + to carbon dioxide and NADH. The hydrogen of formate is directly transferred to NAD+. This step has an isotope effect, because the rate of protium transfer from formate to NAD+ is nearly three times faster than the rate of the same reaction with a deuterium transfer. This is also an example of a primary isotope effect. [ 4 ] A primary isotope effect is one in which the rare isotope is substituted where a bond is broken or formed. Secondary isotope effects occur on other positions in the molecule and are controlled by the molecular geometry of the transition state. These are generally considered to be negligible but do arise in certain cases, especially for hydrogen isotopes . [ 4 ]
Unlike abiotic reactions, enzymatic reactions occur through a series of steps, including substrate-enzyme binding, conversion of substrate to product, and dissociation of enzyme-product complex. The observed isotope effect of an enzyme will be controlled by the rate limiting step in this mechanism. If the step that converts substrate to product is rate limiting, the enzyme will express its intrinsic isotope effect, that of the bond forming or breaking reaction. [ 5 ]
Like biotic molecules, position specific isotope enrichments in abiotic molecules can reflect the source of chemical precursors and synthesis pathways. The energy for abiotic reactions can come from many different sources, which will affect fractionation. For instance, metal catalysts can speed up abiotic reactions. Reactions can be slowed down or sped up by different temperature and pressure conditions, which will affect the equilibrium constant or activation energy of reversible and irreversible reactions, respectively.
For example, carbon in the interstellar medium and solar nebula partition into distinct states based on thermodynamic favorability. Measuring site-specific isotope enrichments of carbon from organic molecules extracted from carbonaceous chondrites can elucidate where each carbon atom comes from, and how organic molecules can be synthesized abiotically. [ 6 ] More broadly, these isotope enrichments can provide information about physical processes in the region where the molecular precursors were formed, and where the molecule formed in the solar system (i.e., nucleosynthetic heterogeneity, mass independent fractionation , self-shielding, etc.).
Another example of distinct site-specific fractionations in abiotic molecules is Fischer-Tropsch -type synthesis, which is thought to produce abiogenic hydrocarbon chains. [ 6 ] Through this reaction mechanism, site-specific carbon enrichments would decrease as carbon chain length increases, and would be distinct from the site-specific enrichments of hydrocarbons of biological origin.
Substrates need to be prepared and analyzed in a specific way to elucidate site specific isotope enrichments. This requires clean separation of the compound of interest from the original sample, which can require a variety of different preparatory chemistries. Once isolated, position-specific isotope enrichments can be analyzed with a variety of instruments, which all have different advantages and provide varying degrees of precision .
To measure the kinetic isotope effects of enzymatic reactions, biochemists perform in vitro experiments with enzymes and substrates. The goal of these experiments is to measure the difference in the enzymatic reaction rates for the monoisotopic substrate and the substrate with one rare isotope. [ 5 ] There are two popularly used techniques in these experiments: Internal competition studies and direct comparison experiments. Both measure position-specific isotope effects.
Direct comparison experiments are primarily used for measuring hydrogen / deuterium isotope effects in enzymatic reactions. The monoisotopic substrate and a deuterated form of the substrate are separately exposed to the enzyme of interest over a range of concentrations. The Michaelis-Menten kinetic parameters for both substrates are determined and the position-specific isotope effect at the site of deuteration is expressed as the ratio of the monoisotopic rate constant over the rare isotope rate constant. [ 7 ]
For isotopes of elements like carbon and sulfur , the difference in kinetic parameters is too small, and the measurement precision too low, to measure an isotope effect by directly comparing the rates of the monoisotopic and rare isotope substrates. Instead, the two are mixed together using the natural abundance of stable isotopes in molecules. The enzyme is exposed to both isotopes simultaneously and its preference for the light isotope is analyzed by collecting the product of the reaction and measuring its isotope composition. For example, if an enzyme removes a carbon from a molecule by turning it into carbon dioxide , that carbon dioxide product can be collected and measured on an Isotope Ratio Mass Spectrometer for its carbon isotope composition . If the carbon dioxide has less 13 C than the substrate mixture, the enzyme has preferentially reacted with the substrate that has a 12 C at the site that is decarboxylated . In this way, internal competition experiments are also position-specific. If only the CO 2 is measured, then only the isotope effect on the site of decarboxylation is recorded. [ 5 ]
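A minimal Python sketch of how such an internal-competition measurement might be interpreted is shown below; the delta values are invented, and the calculation assumes the collected CO 2 represents the instantaneous product at low conversion, so it illustrates only the arithmetic, not any particular published experiment.

R_VPDB = 0.0112372  # 13C/12C ratio of the VPDB reference standard

def delta_to_ratio(delta_permil):
    """Convert a delta-13C value (per mille vs. VPDB) to a 13C/12C ratio."""
    return R_VPDB * (1.0 + delta_permil / 1000.0)

# Hypothetical measured values: the reacting site in the substrate mixture and the
# CO2 released by decarboxylation.
delta_substrate_site = -25.0   # assumed delta-13C of the decarboxylated site
delta_co2_product = -45.0      # assumed delta-13C of the collected CO2

# Fractionation factor alpha at the decarboxylated position (product relative to substrate).
alpha = delta_to_ratio(delta_co2_product) / delta_to_ratio(delta_substrate_site)
epsilon_permil = (alpha - 1.0) * 1000.0
print(round(epsilon_permil, 1))   # about -20.5 per mille: the enzyme prefers 12C at this site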
Before the advent of technologies that analyze whole molecules for their intramolecular isotopic structure, molecules were sequentially degraded and converted to CO 2 and measured on an Isotope Ratio Mass Spectrometer , revealing position-specific 13 C enrichments.
In 1961, Abelson and Hoering developed a technique for removing the carboxylic acid of amino acids using the ninhydrin reaction. This reaction converts the carboxylic acid to a molecule of CO 2 which is measured via an Isotope Ratio Mass Spectrometer. [ 1 ]
Lipids are of particular interest to stable isotope geochemists because they are preserved in rocks for millions of years. Monson & Hayes used ozonolysis to characterize the position-specific isotope abundances of unsaturated fatty acids , turning different carbon positions into carbon dioxide. Using this technique, they directly measured an isotopic pattern in fatty acids that had been predicted for years. [ 8 ]
In some cases, additional functional groups will need to be added to molecules to facilitate the other separation and analysis methods. Derivatization can change the properties of an analyte; for instance, it would make a polar and non-volatile compound non-polar and more volatile, which would be necessary for analysis in certain types of chromatography . It is important to note, however, that derivatization is not ideal for site-specific analyses as it adds additional elements that must be accounted for in analyses.
Chromatography facilitates separation of distinct molecules within a mixture based on their respective chemical properties, and how those properties interact with the substrate coating the chromatographic column. This separation can happen “on-line,” during the measurement itself, or prior to measurements to isolate a pure compound. Gas and liquid chromatography have distinct advantages, based on the molecules of interest. For example, aqueously soluble molecules are more easily separated with liquid chromatography, while volatile, nonpolar molecules like propane or ethane are separated with gas chromatography.
A variety of different instruments can be used to perform position-specific isotope analysis, and each has distinct advantages and drawbacks. Many of them require comparison of the sample of interest to a standard of known isotopic composition; fractionation within the instrument and variation of instrumental conditions over time can affect the accuracy of individual measurements if not standardized.
Initial position-specific isotope enrichments were measured using isotope ratio mass spectrometry, in which sites on a molecule were first degraded to CO 2 , the CO 2 was captured and purified, and then the CO 2 was measured for its isotope composition on an Isotope Ratio Mass Spectrometer (IRMS). Py-GC-MS was also used in these experiments to degrade molecules even further and characterize their intramolecular isotopic distributions. [ 1 ] Both GC-MS and LC-MS are capable of characterizing position-specific isotope enrichments in isotopically labelled molecules . In these molecules, 13 C is so abundant that it can be seen on a mass spectrometer with low sensitivity. The resolution of these instruments can distinguish two molecules with a 1 Dalton difference in their molecular masses; however, this difference could arise from the addition of many different rare isotopes ( 17 O, 13 C, 2 H, etc.). For this reason, mass spectrometers using quadrupoles or time-of-flight detection techniques cannot be used for measuring position-specific enrichments at natural abundances .
Laser spectroscopy can be used to measure isotope enrichments of gases in the environment. Laser spectroscopy takes advantage of the different vibrational frequencies of isotopologues, which cause them to absorb different wavelengths of light. Transmission of light through the gaseous sample at a controlled temperature can be quantitatively converted into a statement about isotopic composition. For N 2 O, these measurements can determine the position-specific isotope enrichments of 15 N. [ 9 ] These measurements are fast and can reach relatively good precision (1-10 per mille ). The method is used to characterize environmental gas fluxes and the effects on these fluxes. [ 10 ] It is limited to the measurement and characterization of gases.
Nuclear magnetic resonance observes small differences in molecular responses to oscillating magnetic fields . It is able to characterize atoms with active nuclides that have a non-zero nuclear spin (e.g., 13 C, 1 H, 17 O, 35 Cl, 15 N, 37 Cl), which makes it particularly useful for identifying certain isotopes. In typical proton or 13 C NMR, the chemical shifts of protium ( 1 H) and carbon-13 atoms within a molecule are measured, respectively, as they are excited by a magnetic field and then relax with a diagnostic resonance frequency. With site-specific natural isotope fractionation ( SNIF ) NMR, the relaxation resonances of the deuterium and 13 C atoms are measured instead. [ 11 ] NMR does not have the sensitivity to detect isotopologues with multiple rare isotopes. The only peaks that appear in a SNIF-NMR spectrum are those of the isotopologues with a single rare isotope. Since the instrument is only measuring the resonances of the rare isotopes, each isotopologue will have one peak. For example, a molecule with six chemically unique carbon atoms will have six peaks in a 13 C SNIF NMR spectrum. The site of 13 C substitution can be determined by the chemical shift of each of the peaks. As a result, NMR is able to identify site-specific isotope enrichments within molecules. [ 11 ] [ 12 ]
The Orbitrap is a high-resolution Fourier transform mass spectrometer that has recently been adapted to allow for site-specific analyses. [ 2 ] Molecules introduced into the Orbitrap are fragmented , accelerated, and analyzed. Because the Orbitrap characterizes molecular masses by measuring oscillations at radio frequencies , it is able to reach very high levels of precision, depending on measurement method (i.e., down to 0.1 per mille for long integration times). It is significantly faster than site-specific isotope measurements that can be performed using NMR , and can measure molecules with different rare isotopes but the same nominal mass at natural abundances (unlike GC and LCMS). It is also widely generalizable to molecules that can be introduced via gas or liquid solvent. [ 2 ] Resolution of the Orbitrap is such that nominal isobars (e.g., 2 H versus 15 N versus 13 C enrichments) can be distinguished from one another, and so molecules do not need to be converted into a homogeneous substrate to facilitate isotope analysis. Like other isotope measurements, measurements of site-specific enrichments on the Orbitrap should be compared to a standard of known composition. [ 2 ]
To illustrate the utility of position-specific isotope enrichments, several case studies are described below in which scientists used position-specific isotope analyses to answer important questions about biochemistry, pollution, and climate.
Phosphoenolpyruvate carboxylase (PEPC) is an enzyme that combines bicarbonate and phosphoenolpyruvate (PEP) to form the four-carbon acid, oxaloacetate . It is an important enzyme in C4 photosynthesis and anaplerotic pathways. [ 13 ] It is also responsible for the position-specific enrichment of oxaloacetate, due to the equilibrium isotope effect of converting the linear molecule CO 2 into the trigonal planar molecule HCO 3 -, which partitions 13 C into bicarbonate. [ 14 ] Inside the PEPC enzyme, H 12 CO 3 - reacts 1.0022 times faster than H 13 CO 3 - so that PEPC has a 0.22% kinetic isotope effect. [ 15 ] This is not enough to compensate for the 13C enrichment in bicarbonate. Thus, oxaloacetate is left with a 13 C-enriched carbon at the C4 position. However, the C1 site experiences a small inverse secondary isotope effect due to its bonding environment in the transition state , leaving the C1 site of oxaloacetate enriched in 13 C. [ 16 ] In this way, PEPC simultaneously partitions 12 C into the C4 site and 13 C into the C1 site of oxaloacetate, an example of multiple position-specific isotope effects.
The first paper on site-specific enrichment used the ninhydrin reaction to cleave the carboxyl site off alpha-amino acids in photosynthetic organisms. [ 1 ] The authors demonstrated an enriched carboxyl site relative to the bulk δ 13 C of the molecules, which they attribute to uptake of heavier CO 2 through the Calvin cycle . [ 1 ] A recent study applied similar theory to understand enrichments in methionine, which they suggested would be powerful in origin and synthesis studies. [ 17 ]
In 2012, a team of scientists used NMR spectroscopy to measure all of the position-specific carbon isotope abundances of glucose and other sugars. It was shown that the isotope abundances are heterogeneous . Different portions of the sugar molecules are used for biosynthesis based on the metabolic pathway an organism uses. [ 12 ] Therefore, any interpretations of position-specific isotopes of molecules downstream of glucose have to consider this intramolecular heterogeneity.
Glucose is the monomer of cellulose, the polymer that makes plants and trees rigid. After the advent of position-specific analyses of glucose, biogeochemists from Sweden looked at the concentric tree rings of a Pinus nigra that recorded yearly growth between 1961 and 1995. They digested the cellulose down to its glucose units and used NMR spectroscopy to analyze its intramolecular isotopic patterns. They found correlations with position-specific isotope enrichments that were not apparent with whole-molecule carbon isotope analysis of glucose. By measuring position-specific enrichments in the 6-carbon glucose molecule, they gathered six times more information from the same sample. [ 18 ]
The biosynthesis of fatty acids begins with acetyl-CoA precursors that are brought together to make long straight-chain lipids . Acetyl-CoA is produced in aerobic organisms by pyruvate dehydrogenase , an enzyme that has been shown to express a large, 2.3% isotope effect on the C2 site of pyruvate and a small fractionation on the C3 site. [ 19 ] These become the odd and even carbon positions of fatty acids respectively and in theory would result in a pattern of 13 C depletions and enrichments at odd and even positions, respectively. In 1982, Monson and Hayes developed technology for measuring the position-specific carbon isotope abundances of fatty acids. Their experiments on Escherichia coli revealed the predicted relative 13 C depletions at odd-numbered carbon sites. [ 20 ] However, this pattern was not found in Saccharomyces cerevisiae that were fed glucose. Instead, its fatty acids were 13 C enriched at the odd positions. [ 8 ] This has been interpreted as either a product of isotope effects during fatty acid degradation or the intramolecular isotopic heterogeneity of glucose that ultimately is reflected in the position-specific patterns of fatty acids. [ 21 ]
Site-specific isotope enrichments of N 2 O are measured in the environment to help disentangle microbial sources and sinks. Different isotopologues of N 2 O absorb light at different wavelengths. Laser spectroscopy resolves these differences as it scans across wavelengths to measure the abundance of 14 N- 15 N- 16 O vs. 15 N- 14 N- 16 O, a distinction that is impossible on other instruments. These measurements have achieved very high precision, down to 0.2 per mille. [ 10 ]
Position-specific isotopes can be used to trace environmental pollutants through local and global environment. [ 22 ] This is specifically useful as heavy isotopes are often used to synthesize chemicals and then will get incorporated into the natural environment through biodegradation . Thus, tracing position-specific isotopes in the environment can help trace the movement of these pollutants and chemical products.
These case studies represent some potential applications for position specific isotope analysis, but certainly not all. The opportunities for samples to measure and processes to characterize are virtually unlimited, and new methodological developments will help make these measurements possible going forward. | https://en.wikipedia.org/wiki/Position-specific_isotope_analysis |
In physics and geometry , there are two closely related vector spaces , usually three-dimensional but in general of any finite dimension. Position space (also real space or coordinate space ) is the set of all position vectors r in Euclidean space , and has dimensions of length ; a position vector defines a point in space. (If the position vector of a point particle varies with time, it will trace out a path, the trajectory of a particle.) Momentum space is the set of all momentum vectors p a physical system can have; the momentum vector of a particle corresponds to its motion, with dimension of mass ⋅ length ⋅ time −1 .
Mathematically, the duality between position and momentum is an example of Pontryagin duality . In particular, if a function is given in position space, f ( r ), then its Fourier transform obtains the function in momentum space, φ ( p ). Conversely, the inverse Fourier transform of a momentum space function is a position space function.
These quantities and ideas transcend all of classical and quantum physics, and a physical system can be described using either the positions of the constituent particles or their momenta; both formulations equivalently provide the same information about the system under consideration. Another quantity that is useful to define in the context of waves is the wave vector k (or simply " k -vector"), which has dimensions of reciprocal length , making it an analogue of angular frequency ω which has dimensions of reciprocal time . The set of all wave vectors is k-space . Usually, the position vector r is more intuitive and simpler than the wave vector k , though the converse can also be true, such as in solid-state physics .
Quantum mechanics provides two fundamental examples of the duality between position and momentum, the Heisenberg uncertainty principle Δ x Δ p ≥ ħ /2 stating that position and momentum cannot be simultaneously known to arbitrary precision, and the de Broglie relation p = ħ k which states the momentum and wavevector of a free particle are proportional to each other. [ 1 ] [ 2 ] In this context, when it is unambiguous, the terms " momentum " and "wavevector" are used interchangeably. However, the de Broglie relation is not true in a crystal. [ 3 ]
Most often in Lagrangian mechanics , the Lagrangian L ( q , d q / dt , t ) is in configuration space , where q = ( q 1 , q 2 ,..., q n ) is an n - tuple of the generalized coordinates . The Euler–Lagrange equations of motion are d d t ∂ L ∂ q ˙ i = ∂ L ∂ q i , q ˙ i ≡ d q i d t . {\displaystyle {\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}_{i}}}={\frac {\partial L}{\partial q_{i}}}\,,\quad {\dot {q}}_{i}\equiv {\frac {dq_{i}}{dt}}\,.}
(One overdot indicates one time derivative ). Introducing the definition of canonical momentum for each generalized coordinate p i = ∂ L ∂ q ˙ i , {\displaystyle p_{i}={\frac {\partial L}{\partial {\dot {q}}_{i}}}\,,} the Euler–Lagrange equations take the form p ˙ i = ∂ L ∂ q i . {\displaystyle {\dot {p}}_{i}={\frac {\partial L}{\partial q_{i}}}\,.}
The Lagrangian can be expressed in momentum space also, [ 4 ] L ′( p , d p / dt , t ), where p = ( p 1 , p 2 , ..., p n ) is an n -tuple of the generalized momenta. A Legendre transformation is performed to change the variables in the total differential of the generalized coordinate space Lagrangian; d L = ∑ i = 1 n ( ∂ L ∂ q i d q i + ∂ L ∂ q ˙ i d q ˙ i ) + ∂ L ∂ t d t = ∑ i = 1 n ( p ˙ i d q i + p i d q ˙ i ) + ∂ L ∂ t d t , {\displaystyle dL=\sum _{i=1}^{n}\left({\frac {\partial L}{\partial q_{i}}}dq_{i}+{\frac {\partial L}{\partial {\dot {q}}_{i}}}d{\dot {q}}_{i}\right)+{\frac {\partial L}{\partial t}}dt=\sum _{i=1}^{n}({\dot {p}}_{i}dq_{i}+p_{i}d{\dot {q}}_{i})+{\frac {\partial L}{\partial t}}dt\,,} where the definition of generalized momentum and Euler–Lagrange equations have replaced the partial derivatives of L . The product rule for differentials [ nb 1 ] allows the exchange of differentials in the generalized coordinates and velocities for the differentials in generalized momenta and their time derivatives, p ˙ i d q i = d ( q i p ˙ i ) − q i d p ˙ i {\displaystyle {\dot {p}}_{i}dq_{i}=d(q_{i}{\dot {p}}_{i})-q_{i}d{\dot {p}}_{i}} p i d q ˙ i = d ( q ˙ i p i ) − q ˙ i d p i {\displaystyle p_{i}d{\dot {q}}_{i}=d({\dot {q}}_{i}p_{i})-{\dot {q}}_{i}dp_{i}} which after substitution simplifies and rearranges to d [ L − ∑ i = 1 n ( q i p ˙ i + q ˙ i p i ) ] = − ∑ i = 1 n ( q ˙ i d p i + q i d p ˙ i ) + ∂ L ∂ t d t . {\displaystyle d\left[L-\sum _{i=1}^{n}(q_{i}{\dot {p}}_{i}+{\dot {q}}_{i}p_{i})\right]=-\sum _{i=1}^{n}({\dot {q}}_{i}dp_{i}+q_{i}d{\dot {p}}_{i})+{\frac {\partial L}{\partial t}}dt\,.}
Now, the total differential of the momentum space Lagrangian L ′ is d L ′ = ∑ i = 1 n ( ∂ L ′ ∂ p i d p i + ∂ L ′ ∂ p ˙ i d p ˙ i ) + ∂ L ′ ∂ t d t {\displaystyle dL'=\sum _{i=1}^{n}\left({\frac {\partial L'}{\partial p_{i}}}dp_{i}+{\frac {\partial L'}{\partial {\dot {p}}_{i}}}d{\dot {p}}_{i}\right)+{\frac {\partial L'}{\partial t}}dt} so by comparison of differentials of the Lagrangians, the momenta, and their time derivatives, the momentum space Lagrangian L ′ and the generalized coordinates derived from L ′ are respectively L ′ = L − ∑ i = 1 n ( q i p ˙ i + q ˙ i p i ) , − q ˙ i = ∂ L ′ ∂ p i , − q i = ∂ L ′ ∂ p ˙ i . {\displaystyle L'=L-\sum _{i=1}^{n}(q_{i}{\dot {p}}_{i}+{\dot {q}}_{i}p_{i})\,,\quad -{\dot {q}}_{i}={\frac {\partial L'}{\partial p_{i}}}\,,\quad -q_{i}={\frac {\partial L'}{\partial {\dot {p}}_{i}}}\,.}
Combining the last two equations gives the momentum space Euler–Lagrange equations d d t ∂ L ′ ∂ p ˙ i = ∂ L ′ ∂ p i . {\displaystyle {\frac {d}{dt}}{\frac {\partial L'}{\partial {\dot {p}}_{i}}}={\frac {\partial L'}{\partial p_{i}}}\,.}
The advantage of the Legendre transformation is that the relation between the new and old functions and their variables are obtained in the process. Both the coordinate and momentum forms of the equation are equivalent and contain the same information about the dynamics of the system. This form may be more useful when momentum or angular momentum enters the Lagrangian.
In Hamiltonian mechanics , unlike Lagrangian mechanics which uses either all the coordinates or the momenta, the Hamiltonian equations of motion place coordinates and momenta on equal footing. For a system with Hamiltonian H ( q , p , t ), the equations are q ˙ i = ∂ H ∂ p i , p ˙ i = − ∂ H ∂ q i . {\displaystyle {\dot {q}}_{i}={\frac {\partial H}{\partial p_{i}}}\,,\quad {\dot {p}}_{i}=-{\frac {\partial H}{\partial q_{i}}}\,.}
In quantum mechanics , a particle is described by a quantum state . This quantum state can be represented as a superposition of basis states . In principle one is free to choose the set of basis states, as long as they span the state space . If one chooses the (generalized) eigenfunctions of the position operator as a set of basis functions, one speaks of a state as a wave function ψ ( r ) in position space . The familiar Schrödinger equation in terms of the position r is an example of quantum mechanics in the position representation. [ 5 ]
By choosing the eigenfunctions of a different operator as a set of basis functions, one can arrive at a number of different representations of the same state. If one picks the eigenfunctions of the momentum operator as a set of basis functions, the resulting wave function ϕ ( k ) {\displaystyle \phi (\mathbf {k} )} is said to be the wave function in momentum space . [ 5 ]
A feature of quantum mechanics is that phase spaces can come in different types: discrete-variable, rotor, and continuous-variable. The table below summarizes some relations involved in the three types of phase spaces. [ 6 ]
The momentum representation of a wave function and the de Broglie relation are closely related to the Fourier inversion theorem and the concept of frequency domain . Since a free particle has a spatial frequency k = | k | = 2 π / λ {\displaystyle k=|\mathbf {k} |=2\pi /\lambda } proportional to the momentum p = | p | = ℏ k {\displaystyle p=|\mathbf {p} |=\hbar k} , describing the particle as a sum of frequency components is equivalent to describing it as the Fourier transform of a " sufficiently nice " wave function in momentum space. [ 2 ]
Suppose we have a three-dimensional wave function in position space ψ ( r ) , then we can write this function as a weighted sum of orthogonal basis functions ψ j ( r ) : ψ ( r ) = ∑ j ϕ j ψ j ( r ) {\displaystyle \psi (\mathbf {r} )=\sum _{j}\phi _{j}\psi _{j}(\mathbf {r} )} or, in the continuous case, as an integral ψ ( r ) = ∫ k -space ϕ ( k ) ψ k ( r ) d 3 k {\displaystyle \psi (\mathbf {r} )=\int _{\mathbf {k} {\text{-space}}}\phi (\mathbf {k} )\psi _{\mathbf {k} }(\mathbf {r} )\mathrm {d} ^{3}\mathbf {k} } It is clear that if we specify the set of functions ψ k ( r ) {\displaystyle \psi _{\mathbf {k} }(\mathbf {r} )} , say as the set of eigenfunctions of the momentum operator, the function ϕ ( k ) {\displaystyle \phi (\mathbf {k} )} holds all the information necessary to reconstruct ψ ( r ) and is therefore an alternative description for the state ψ {\displaystyle \psi } .
In coordinate representation the momentum operator is given by [ 7 ] p ^ = − i ℏ ∂ ∂ r {\displaystyle \mathbf {\hat {p}} =-i\hbar {\frac {\partial }{\partial \mathbf {r} }}} (see matrix calculus for the denominator notation) with appropriate domain . The eigenfunctions are ψ k ( r ) = 1 ( 2 π ) 3 e i k ⋅ r {\displaystyle \psi _{\mathbf {k} }(\mathbf {r} )={\frac {1}{({\sqrt {2\pi }})^{3}}}e^{i\mathbf {k} \cdot \mathbf {r} }} and eigenvalues ħ k . So ψ ( r ) = 1 ( 2 π ) 3 ∫ k -space ϕ ( k ) e i k ⋅ r d 3 k {\displaystyle \psi (\mathbf {r} )={\frac {1}{({\sqrt {2\pi }})^{3}}}\int _{\mathbf {k} {\text{-space}}}\phi (\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }\mathrm {d} ^{3}\mathbf {k} } and we see that the momentum representation is related to the position representation by a Fourier transform. [ 8 ]
Conversely, a three-dimensional wave function in momentum space ϕ ( k ) {\displaystyle \phi (\mathbf {k} )} can be expressed as a weighted sum of orthogonal basis functions ϕ j ( k ) {\displaystyle \phi _{j}(\mathbf {k} )} , ϕ ( k ) = ∑ j ψ j ϕ j ( k ) , {\displaystyle \phi (\mathbf {k} )=\sum _{j}\psi _{j}\phi _{j}(\mathbf {k} ),} or as an integral, ϕ ( k ) = ∫ r -space ψ ( r ) ϕ r ( k ) d 3 r . {\displaystyle \phi (\mathbf {k} )=\int _{\mathbf {r} {\text{-space}}}\psi (\mathbf {r} )\phi _{\mathbf {r} }(\mathbf {k} )\mathrm {d} ^{3}\mathbf {r} .}
In momentum representation the position operator is given by [ 9 ] r ^ = i ℏ ∂ ∂ p = i ∂ ∂ k {\displaystyle \mathbf {\hat {r}} =i\hbar {\frac {\partial }{\partial \mathbf {p} }}=i{\frac {\partial }{\partial \mathbf {k} }}} with eigenfunctions ϕ r ( k ) = 1 ( 2 π ) 3 e − i k ⋅ r {\displaystyle \phi _{\mathbf {r} }(\mathbf {k} )={\frac {1}{\left({\sqrt {2\pi }}\right)^{3}}}e^{-i\mathbf {k} \cdot \mathbf {r} }} and eigenvalues r . So a similar decomposition of ϕ ( k ) {\displaystyle \phi (\mathbf {k} )} can be made in terms of the eigenfunctions of this operator, which turns out to be the inverse Fourier transform, [ 8 ] ϕ ( k ) = 1 ( 2 π ) 3 ∫ r -space ψ ( r ) e − i k ⋅ r d 3 r . {\displaystyle \phi (\mathbf {k} )={\frac {1}{({\sqrt {2\pi }})^{3}}}\int _{\mathbf {r} {\text{-space}}}\psi (\mathbf {r} )e^{-i\mathbf {k} \cdot \mathbf {r} }\mathrm {d} ^{3}\mathbf {r} .}
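As a numerical illustration of this Fourier-transform relationship, the following Python sketch (numpy, one dimension, ħ = 1, with an arbitrary Gaussian wave packet) computes the momentum-space amplitude with a discrete Fourier transform; the grid size and packet parameters are arbitrary choices, and the FFT approximates the continuous transform only up to discretization and phase factors.

import numpy as np

# One-dimensional grid in position space (arbitrary units, hbar = 1).
N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wave packet centered at x = 0 with width sigma, carrying mean wavenumber k0.
sigma, k0 = 1.0, 5.0
psi_x = (1.0 / (np.pi * sigma**2) ** 0.25) * np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * k0 * x)

# Momentum-space wave function via a discrete Fourier transform (up to phase factors).
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
phi_k = np.fft.fft(psi_x) * dx / np.sqrt(2 * np.pi)

# The probability density |phi(k)|^2 should peak near k0 = 5 and have width of order 1/sigma.
print(k[np.argmax(np.abs(phi_k) ** 2)])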
The position and momentum operators are unitarily equivalent , with the unitary operator being given explicitly by the Fourier transform, namely a quarter-cycle rotation in phase space, generated by the oscillator Hamiltonian. Thus, they have the same spectrum . In physical language, p acting on momentum space wave functions is the same as r acting on position space wave functions (under the image of the Fourier transform).
For an electron (or other particle ) in a crystal, its value of k relates almost always to its crystal momentum , not its normal momentum. Therefore, k and p are not simply proportional but play different roles. See k·p perturbation theory for an example. Crystal momentum is like a wave envelope that describes how the wave varies from one unit cell to the next, but does not give any information about how the wave varies within each unit cell.
When k relates to crystal momentum instead of true momentum, the concept of k -space is still meaningful and extremely useful, but it differs in several ways from the non-crystal k -space discussed above. For example, in a crystal's k -space, there is an infinite set of points called the reciprocal lattice which are "equivalent" to k = 0 (this is analogous to aliasing ). Likewise, the " first Brillouin zone " is a finite volume of k -space, such that every possible k is "equivalent" to exactly one point in this region. | https://en.wikipedia.org/wiki/Position_and_momentum_spaces |
In astronomy , position angle (usually abbreviated PA ) is the convention for measuring angles on the sky. The International Astronomical Union defines it as the angle measured relative to the north celestial pole (NCP), turning positive in the direction of increasing right ascension . In standard (non-flipped) images, this is a counterclockwise measure relative to the axis pointing toward positive declination .
In the case of observed visual binary stars , it is defined as the angular offset of the secondary star from the primary relative to the north celestial pole .
As the example illustrates, if one were observing a hypothetical binary star with a PA of 30°, that means an imaginary line in the eyepiece drawn from the north celestial pole to the primary (P) would be offset from the secondary (S) such that the NCP-P-S angle would be 30°.
When graphing visual binaries, the NCP is, as in the illustration, normally drawn from the center point (origin), which is the primary, downward (that is, with north at the bottom), and PA is measured counterclockwise. The direction of the proper motion can also, for example, be given by its position angle.
The definition of position angle is also applied to extended objects like galaxies, where it refers to the angle made by the major axis of the object with the NCP line.
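As a small numerical illustration, the following Python sketch computes the position angle of a companion from the coordinates of the two objects using the common small-separation (flat-sky) approximation; the coordinates are invented.

import math

def position_angle(ra1_deg, dec1_deg, ra2_deg, dec2_deg):
    """Position angle of object 2 (e.g. the secondary) as seen from object 1 (the primary),
    in degrees measured from north toward east (toward increasing right ascension).
    Small-separation (flat-sky) approximation."""
    d_ra = math.radians(ra2_deg - ra1_deg)
    d_dec = math.radians(dec2_deg - dec1_deg)
    dec1 = math.radians(dec1_deg)
    pa = math.degrees(math.atan2(d_ra * math.cos(dec1), d_dec))
    return pa % 360.0

# Hypothetical pair: the secondary lies to the north-east of the primary,
# so the position angle comes out between 0 and 90 degrees (here about 62 degrees).
print(round(position_angle(150.000, 20.000, 150.010, 20.005), 1))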
The concept of the position angle is inherited from nautical navigation on the oceans, where the optimum compass course is the course from a known position s to a target position t with minimum effort. Setting aside the influence of winds and ocean currents, the optimum course is the course of smallest distance between the two positions on the ocean surface. Computing the compass course is known as the inverse geodetic problem .
This article considers only the abstraction of minimizing the distance between s and t traveling on the surface of a sphere with some radius R : In which direction angle p relative to North should the ship steer to reach the target position? | https://en.wikipedia.org/wiki/Position_angle |
Position error is one of the errors affecting the systems in an aircraft for measuring airspeed and altitude . [ 1 ] [ 2 ] It is not practical or necessary for an aircraft to have an airspeed indicating system and an altitude indicating system that are exactly accurate. A small amount of error is tolerable. It is caused by the location of the static vent that supplies air pressure to the airspeed indicator and altimeter; there is no position on an aircraft where, at all angles of attack, the static pressure is always equal to atmospheric pressure.
All aircraft are equipped with a small hole in the surface of the aircraft called the static port. The air pressure in the vicinity of the static port is conveyed by a conduit to the altimeter and the airspeed indicator . This static port and the conduit constitute the aircraft's static system. The objective of the static system is to sense the pressure of the air at the altitude at which the aircraft is flying. In an ideal static system the air pressure fed to the altimeter and airspeed indicator is equal to the pressure of the air at the altitude at which the aircraft is flying.
As the air flows past an aircraft in flight, the streamlines are affected by the presence of the aircraft, and the speed of the air relative to the aircraft is different at different positions on the aircraft's outer surface. In consequence of Bernoulli's principle , the different speeds of the air result in different pressures at different positions on the aircraft's surface. [ 3 ] The ideal position for a static port is a position where the local air pressure in flight is always equal to the pressure remote from the aircraft, however there is no position on an aircraft where this ideal situation exists for all angles of attack . When deciding on a position for a static port, aircraft designers attempt to find a position where the error between static pressure and free-stream pressure is a minimum across the operating range of angle of attack of the aircraft. The residual error at any given angle of attack is called the position error . [ 4 ]
Position error affects the indicated airspeed and the indicated altitude . Aircraft manufacturers use the aircraft flight manual to publish details of the error in indicated airspeed and indicated altitude across the operating range of speeds. In many aircraft, the effect of position error on airspeed is shown as the difference between indicated airspeed and calibrated airspeed . In some low-speed aircraft, the position error is shown as the difference between indicated airspeed and equivalent airspeed .
Bernoulli's principle states that total pressure (or stagnation pressure ) is constant along a streamline. [ 5 ] There is no variation in stagnation pressure, regardless of the position on the streamline where it is measured. There is no position error associated with stagnation pressure.
The pitot tube supplies pressure to the airspeed indicator . Pitot pressure is equal to stagnation pressure providing the pitot tube is aligned with the local airflow, it is located outside the boundary layer, and outside the wash from the propeller. Pitot pressure can suffer alignment error but it is not vulnerable to position error.
Aircraft design standards specify a maximum amount of Pitot-static system error. The error in indicated altitude must not be excessive because it is important for pilots to know their altitude with reasonable accuracy for the purpose of traffic separation. US Federal Aviation Regulations , Part 23, [ 6 ] §23.1325(e) includes the following requirement for the static pressure system:
The error in indicated airspeed must also not be excessive. Part 23, §23.1323(b) includes the following requirement for the airspeed indicating system:
For the purpose of complying with an aircraft design standard that specifies a maximum permissible error in the airspeed indicating system it is necessary to measure the position error in a representative aircraft. There are many different methods for measuring position error. Some of the more common methods are: | https://en.wikipedia.org/wiki/Position_error |
Position resection and intersection are methods for determining an unknown geographic position ( position finding ) by measuring angles with respect to known positions.
In resection , the one point with unknown coordinates is occupied and sightings are taken to the known points;
in intersection , the two points with known coordinates are occupied and sightings are taken to the unknown point.
Measurements can be made with a compass and topographic map (or nautical chart ), [ 1 ] [ 2 ] theodolite or with a total station using known points of a geodetic network or landmarks of a map.
Resection and its related method, intersection , are used in surveying as well as in general land navigation (including inshore marine navigation using shore-based landmarks). Both methods involve taking azimuths or bearings to two or more objects, then drawing lines of position along those recorded bearings or azimuths.
When lines of position are used to fix the position of an unmapped feature or point relative to two (or more) mapped or known points, the method is known as intersection . [ 3 ] At each known point (hill, lighthouse, etc.), the navigator measures the bearing to the same unmapped target, drawing a line on the map from each known position to the target. The target is located where the lines intersect on the map. In earlier times, the intersection method was used by forest agencies and others using specialized alidades to plot the (unknown) location of an observed forest fire from two or more mapped (known) locations, such as forest fire observer towers. [ 4 ]
The reverse of the intersection technique is appropriately termed resection . Resection simply reverses the intersection process by using crossed back bearings , where the navigator's position is the unknown. [ 5 ] Two or more bearings to mapped, known points are taken; their resultant lines of position drawn from those points to where they intersect will reveal the navigator's location. [ 6 ]
When resecting or fixing a position, the geometric strength (angular disparity) of the mapped points affects the precision and accuracy of the outcome. Accuracy increases as the angle between the two position lines approaches 90 degrees. [ 7 ] Magnetic bearings are observed on the ground from the point under location to two or more features shown on a map of the area. [ 8 ] [ 9 ] Lines of reverse bearings, or lines of position , are then drawn on the map from the known features; two and more lines provide the resection point (the navigator's location). [ 10 ] When three or more lines of position are utilized, the method is often popularly (though erroneously) referred to as triangulation (in precise terms, using three or more lines of position is still correctly called resection , as angular law of tangents ( cot ) calculations are not performed). [ 11 ] When using a map and compass to perform resection, it is important to allow for the difference between the magnetic bearings observed and grid north (or true north) bearings ( magnetic declination ) of the map or chart. [ 12 ]
Resection continues to be employed in land and inshore navigation today, as it is a simple and quick method requiring only an inexpensive magnetic compass and map/chart. [ 13 ] [ 14 ] [ 15 ]
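A minimal Python sketch of the underlying geometry is shown below: on a flat-map approximation, the two lines of position drawn along the back bearings from two known points are intersected to recover the observer's position. The coordinates and bearings are invented, and real work would also correct for magnetic declination and prefer more than two well-separated points.

import math

def resect(p1, obs_bearing1, p2, obs_bearing2):
    """Flat-map resection: from each known point, draw a line of position along the
    back bearing (observed bearing + 180 degrees) and return the intersection of the
    two lines, which is the observer's position. Bearings are degrees clockwise from
    grid north; points are (easting, northing). Parallel lines are not handled."""
    def unit(bearing_deg):
        r = math.radians(bearing_deg)
        return (math.sin(r), math.cos(r))          # (east, north) components

    (x1, y1), (x2, y2) = p1, p2
    dx1, dy1 = unit(obs_bearing1 + 180.0)          # back bearing from point 1
    dx2, dy2 = unit(obs_bearing2 + 180.0)          # back bearing from point 2

    # Solve p1 + s*d1 = p2 + t*d2 for s (Cramer's rule on the 2x2 linear system).
    det = -dx1 * dy2 + dx2 * dy1
    s = ((x2 - x1) * (-dy2) + dx2 * (y2 - y1)) / det
    return (x1 + s * dx1, y1 + s * dy1)

# Hypothetical example: two mapped hilltops and the bearings observed toward them from an
# unknown position; the fix comes out near easting 4000, northing 2000.
hill_a, hill_b = (2000.0, 5000.0), (6000.0, 4000.0)
print(resect(hill_a, 326.3, hill_b, 45.0))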
In surveying work, [ 16 ] the most common methods of computing the coordinates of a point by angular resection are the Collin's "Q" point method (after John Collins ) as well as the Cassini's Method (after Giovanni Domenico Cassini ) and the Tienstra formula , though the first known solution was given by Willebrord Snellius (see Snellius–Pothenot problem ).
For the type of precision work involved in surveying, the unmapped point is located by measuring the angles subtended by lines of sight from it to a minimum of three mapped (coordinated) points. In geodetic operations the observations are adjusted for spherical excess and projection variations . Precise angular measurements between lines from the point under location using theodolites provides more accurate results, with trig beacons erected on high points and hills to enable quick and unambiguous sights to known points.
When planning to perform a resection, the surveyor must first plot the locations of the known points along with the approximate unknown point of observation. If all points, including the unknown point, lie close to a circle that can be placed on all four points, then there is no solution or the high risk of an erroneous solution. This is known as observing on the "danger circle". The poor solution stems from the property of a chord subtending equal angles to any other point on the circle. | https://en.wikipedia.org/wiki/Position_resection_and_intersection |
A position sensor is a sensor that detects an object's position. A position sensor may indicate the absolute position of the object (its location) or its relative position (displacement) in terms of linear travel, rotational angle or three-dimensional space. Common types of position sensors include the following:
| https://en.wikipedia.org/wiki/Position_sensor
A position weight matrix (PWM) , also known as a position-specific weight matrix (PSWM) or position-specific scoring matrix (PSSM) , is a commonly used representation of motifs (patterns) in biological sequences.
PWMs are often derived from a set of aligned sequences that are thought to be functionally related and have become an important part of many software tools for computational motif discovery.
A PWM has one row for each symbol of the alphabet (4 rows for nucleotides in DNA sequences or 20 rows for amino acids in protein sequences) and one column for each position in the pattern. In the first step in constructing a PWM, a basic position frequency matrix (PFM) is created by counting the occurrences of each nucleotide at each position. From the PFM, a position probability matrix (PPM) can now be created by dividing that former nucleotide count at each position by the number of sequences, thereby normalising the values. Formally, given a set X of N aligned sequences of length l , the elements of the PPM M are calculated: M k , j = 1 N ∑ i = 1 N I ( X i , j = k ) {\displaystyle M_{k,j}={\frac {1}{N}}\sum _{i=1}^{N}I(X_{i,j}=k)}
where i ∈ {\displaystyle \in } (1,..., N ), j ∈ {\displaystyle \in } (1,..., l ), k is the set of symbols in the alphabet and I(a=k) is an indicator function where I(a=k) is 1 if a=k and 0 otherwise.
For example, given the following DNA sequences:
GAGGTAAAC TCCGTAAGT CAGGTTGGA ACAGTCAGT TAGGTCATT TAGGTACTG ATGGTAACT CAGGTATAC TGTGTGAGT AAGGTAAGT
The corresponding PFM is (rows A, C, G, T; columns are positions 1-9):
A: 3 6 1 0 0 6 7 2 1
C: 2 2 1 0 0 2 1 1 2
G: 1 1 7 10 0 1 1 5 1
T: 4 1 1 0 10 1 1 2 6
Therefore, the resulting PPM is: [ 1 ]
A: 0.3 0.6 0.1 0.0 0.0 0.6 0.7 0.2 0.1
C: 0.2 0.2 0.1 0.0 0.0 0.2 0.1 0.1 0.2
G: 0.1 0.1 0.7 1.0 0.0 0.1 0.1 0.5 0.1
T: 0.4 0.1 0.1 0.0 1.0 0.1 0.1 0.2 0.6
Both PPMs and PWMs assume statistical independence between positions in the pattern, as the probabilities for each position are calculated independently of other positions. From the definition above, it follows that the sum of values for a particular position (that is, summing over all symbols) is 1. Each column can therefore be regarded as an independent multinomial distribution . This makes it easy to calculate the probability of a sequence given a PPM, by multiplying the relevant probabilities at each position. For example, the probability of the sequence S = GAGGTAAAC given the above PPM M can be calculated: P ( S | M ) = 0.1 × 0.6 × 0.7 × 1.0 × 1.0 × 0.6 × 0.7 × 0.2 × 0.2 = 0.0007056
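A short Python sketch (an illustrative re-implementation, not any standard motif-discovery tool) that reproduces the counts and the probability calculation above:

sequences = ["GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
             "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT"]
alphabet = "ACGT"
n, length = len(sequences), len(sequences[0])

# Position frequency matrix: count of each nucleotide at each position.
pfm = {k: [sum(1 for s in sequences if s[j] == k) for j in range(length)] for k in alphabet}

# Position probability matrix: normalise the counts by the number of sequences.
ppm = {k: [c / n for c in pfm[k]] for k in alphabet}

def probability(seq, ppm):
    """Probability of a sequence under the PPM, assuming independent positions."""
    p = 1.0
    for j, k in enumerate(seq):
        p *= ppm[k][j]
    return p

print(pfm["G"])                       # [1, 1, 7, 10, 0, 1, 1, 5, 1]
print(probability("GAGGTAAAC", ppm))  # about 0.0007056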
Pseudocounts (or Laplace estimators ) are often applied when calculating PPMs if based on a small dataset, in order to avoid matrix entries having a value of 0. [ 2 ] This is equivalent to multiplying each column of the PPM by a Dirichlet distribution and allows the probability to be calculated for new sequences (that is, sequences which were not part of the original dataset). In the example above, without pseudocounts, any sequence which did not have a G in the 4th position or a T in the 5th position would have a probability of 0, regardless of the other positions.
Most often the elements in PWMs are calculated as log odds. That is, the elements of a PPM are transformed using a background model b {\displaystyle b} so that: M k , j = log 2 ⁡ ( M k , j / b k ) {\displaystyle M_{k,j}=\log _{2}(M_{k,j}/b_{k})} , which describes how each element of the PWM is calculated from the corresponding PPM element and the background frequency of symbol k .
The simplest background model assumes that each letter appears equally frequently in the dataset. That is, the value of b k = 1 / | k | {\displaystyle b_{k}=1/\vert k\vert } for all symbols in the alphabet (0.25 for nucleotides and 0.05 for amino acids). Applying this transformation to the PPM M from above (with no pseudocounts added) gives (values in bits, rounded to two decimals):
A: 0.26 1.26 −1.32 −∞ −∞ 1.26 1.49 −0.32 −1.32
C: −0.32 −0.32 −1.32 −∞ −∞ −0.32 −1.32 −1.32 −0.32
G: −1.32 −1.32 1.49 2.00 −∞ −1.32 −1.32 1.00 −1.32
T: 0.68 −1.32 −1.32 −∞ 2.00 −1.32 −1.32 −0.32 1.26
The − ∞ {\displaystyle -\infty } entries in the matrix make clear the advantage of adding pseudocounts, especially when using small datasets to construct M . The background model need not have equal values for each symbol: for example, when studying organisms with a high GC-content , the values for C and G may be increased with a corresponding decrease for the A and T values.
When the PWM elements are calculated using log likelihoods, the score of a sequence can be calculated by adding (rather than multiplying) the relevant values at each position in the PWM. The sequence score gives an indication of how different the sequence is from a random sequence. The score is 0 if the sequence has the same probability of being a functional site and of being a random site. The score is greater than 0 if it is more likely to be a functional site than a random site, and less than 0 if it is more likely to be a random site than a functional site. [ 1 ] The sequence score can also be interpreted in a physical framework as the binding energy for that sequence.
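The following Python sketch (again illustrative only) builds a log-odds PWM from the same sequences, here with a small pseudocount spread equally over the four nucleotides (one of several possible pseudocount schemes) and a uniform background, and scores sequences by summing the log-odds values:

import math

sequences = ["GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
             "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT"]
alphabet = "ACGT"
n, length = len(sequences), len(sequences[0])
pseudocount = 0.8   # assumed value; spread over the four nucleotides (0.2 each)

# PPM with pseudocounts, then log-odds PWM against a uniform background of 0.25.
ppm = {k: [(sum(1 for s in sequences if s[j] == k) + pseudocount / 4) / (n + pseudocount)
           for j in range(length)] for k in alphabet}
pwm = {k: [math.log2(p / 0.25) for p in ppm[k]] for k in alphabet}

def score(seq, pwm):
    """Score a sequence by adding the log-odds values at each position."""
    return sum(pwm[k][j] for j, k in enumerate(seq))

print(round(score("GAGGTAAAC", pwm), 2))   # clearly positive: looks like a motif site
print(round(score("CACGCGCAG", pwm), 2))   # negative: looks more like background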
The information content (IC) of a PWM is sometimes of interest, as it says something about how different a given PWM is from a uniform distribution .
The self-information of observing letter j at position i of the motif, where p i , j {\displaystyle p_{i,j}} denotes the corresponding PPM entry, is: − log ⁡ p i , j {\displaystyle -\log p_{i,j}}
The expected (average) self-information of a particular element in the PWM is then: − p i , j log ⁡ p i , j {\displaystyle -p_{i,j}\log p_{i,j}}
Finally, the IC of the PWM is then the sum of the expected self-information of every element: I C = − ∑ i ∑ j p i , j log ⁡ p i , j {\displaystyle IC=-\sum _{i}\sum _{j}p_{i,j}\log p_{i,j}}
Often, it is more useful to calculate the information content with the background letter frequencies of the sequences you are studying rather than assuming equal probabilities of each letter (e.g., the GC-content of DNA of thermophilic bacteria ranges from 65.3 to 70.8, [ 3 ] thus a motif of ATAT would contain much more information than a motif of CCGG). The equation for information content thus becomes I C = ∑ i ∑ j p i , j log ⁡ p i , j p j {\displaystyle IC=\sum _{i}\sum _{j}p_{i,j}\log {\frac {p_{i,j}}{p_{j}}}}
where p j {\displaystyle p_{j}} is the background frequency for letter j {\displaystyle j} . This corresponds to the Kullback–Leibler divergence or relative entropy. However, it has been shown that when using PSSM to search genomic sequences (see below) this uniform correction can lead to overestimation of the importance of the different bases in a motif, due to the uneven distribution of n-mers in real genomes, leading to a significantly larger number of false positives. [ 4 ]
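A short Python sketch computing the information content of the example PPM above under a uniform background (the relative-entropy form, in bits; terms with zero probability are skipped, which corresponds to the convention 0 log 0 = 0):

import math

# PPM of the example motif above (rows A, C, G, T; columns are motif positions).
ppm = {
    "A": [0.3, 0.6, 0.1, 0.0, 0.0, 0.6, 0.7, 0.2, 0.1],
    "C": [0.2, 0.2, 0.1, 0.0, 0.0, 0.2, 0.1, 0.1, 0.2],
    "G": [0.1, 0.1, 0.7, 1.0, 0.0, 0.1, 0.1, 0.5, 0.1],
    "T": [0.4, 0.1, 0.1, 0.0, 1.0, 0.1, 0.1, 0.2, 0.6],
}
background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}   # uniform; genomic frequencies could be used instead

# Relative entropy (Kullback-Leibler) form of the information content, in bits.
ic = sum(p * math.log2(p / background[k])
         for k, row in ppm.items() for p in row if p > 0.0)
print(round(ic, 2))   # positions 4 and 5 (all G / all T) each contribute 2 bits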
There are various algorithms to scan for hits of PWMs in sequences. One example is the MATCH algorithm [ 5 ] which has been implemented in the ModuleMaster. [ 6 ] More sophisticated algorithms for fast database searching with nucleotide as well as amino acid PWMs/PSSMs are implemented in the possumsearch software. [ 7 ]
The basic PWM/PSSM is unable to deal with insertions and deletions. A PSSM with additional probabilities for insertion and deletion at each position can be interpreted as a hidden Markov model . This is the approach used by Pfam . [ 8 ] [ 9 ] | https://en.wikipedia.org/wiki/Position_weight_matrix |
Positional sequencing is a method of sequencing DNA that simultaneously generates information about both identity and location of nucleotide sequences . [ 1 ] The method involves detecting the location of sequence specific recognition events (e.g., such as hybridization of probes of known sequence) on single DNA molecules in each read, and generating maps of the location of such events. Multiple reads can be assembled into a consensus map that identifies the multiple locations of a specific sub-sequence. The assembly process is greatly facilitated by knowledge of the location of each sub-sequence, as well as the fact that individual reads produce non-contiguous sequence data over length scales that can be orders of magnitude greater than what can be achieved with Sanger sequencing or nextgen sequencing by synthesis .
A collection of maps may be used to reconstruct single-base resolved sequence in a process analogous to sequence reconstruction in sequencing by hybridization . Ambiguities in the reconstruction of sequences are resolved through the knowledge of the relative position of overlapping sequence specific recognition events. By varying the parameters (e.g., length of read, density of recognition events, resolution of the detector) governing a specific implementation of the method, it is possible to query all size scales of DNA variation, from single nucleotide sequence all the way to large structural variants and chromosomal aneuploidies . [ 1 ] [ 2 ]
| https://en.wikipedia.org/wiki/Positional_sequencing |
In mathematics , a positive-definite function is, depending on the context, either of two types of function .
Let R {\displaystyle \mathbb {R} } be the set of real numbers and C {\displaystyle \mathbb {C} } be the set of complex numbers .
A function f : R → C {\displaystyle f:\mathbb {R} \to \mathbb {C} } is called positive semi-definite if for all real numbers x 1 , …, x n the n × n matrix A = ( a i j ) i , j = 1 n , a i j = f ( x i − x j ) {\displaystyle A=\left(a_{ij}\right)_{i,j=1}^{n},\quad a_{ij}=f(x_{i}-x_{j})} is a positive semi-definite matrix . [ citation needed ]
By definition, a positive semi-definite matrix, such as A {\displaystyle A} , is Hermitian ; therefore f (− x ) is the complex conjugate of f ( x ).
In particular, it is necessary (but not sufficient) that f ( 0 ) ≥ 0 and | f ( x ) | ≤ f ( 0 ) {\displaystyle f(0)\geq 0\quad {\text{and}}\quad |f(x)|\leq f(0)} (these inequalities follow from the condition for n = 1, 2.)
A function is negative semi-definite if the inequality is reversed. A function is definite if the weak inequality is replaced with a strict inequality (<, > 0).
If ( X , ⟨ ⋅ , ⋅ ⟩ ) {\displaystyle (X,\langle \cdot ,\cdot \rangle )} is a real inner product space , then g y : X → C {\displaystyle g_{y}\colon X\to \mathbb {C} } , x ↦ exp ( i ⟨ y , x ⟩ ) {\displaystyle x\mapsto \exp(i\langle y,x\rangle )} is positive definite for every y ∈ X {\displaystyle y\in X} : for all u ∈ C n {\displaystyle u\in \mathbb {C} ^{n}} and all x 1 , … , x n {\displaystyle x_{1},\ldots ,x_{n}} we have
As nonnegative linear combinations of positive definite functions are again positive definite, the cosine function is positive definite as a nonnegative linear combination of the above functions (taking X = R {\displaystyle X=\mathbb {R} } and y = ± 1 {\displaystyle y=\pm 1} ): cos ⁡ x = 1 2 e i x + 1 2 e − i x {\displaystyle \cos x={\tfrac {1}{2}}e^{ix}+{\tfrac {1}{2}}e^{-ix}}
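As a quick numerical illustration (a sanity check rather than a proof), one can sample points, form the matrix with entries f(x_i − x_j) for f = cos, and confirm that its eigenvalues are nonnegative:

```python
import numpy as np

# Numerical sanity check (not a proof): for f = cos, the matrix with entries
# A[i, j] = f(x_i - x_j) should be positive semi-definite for any points x_i.
rng = np.random.default_rng(0)
x = rng.uniform(-10.0, 10.0, size=50)
A = np.cos(x[:, None] - x[None, :])      # A is real and symmetric
print(np.linalg.eigvalsh(A).min())       # expected >= 0, up to rounding error
```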
One can create a positive definite function f : X → C {\displaystyle f\colon X\to \mathbb {C} } easily from a positive definite function f : R → C {\displaystyle f\colon \mathbb {R} \to \mathbb {C} } for any vector space X {\displaystyle X} : choose a linear function ϕ : X → R {\displaystyle \phi \colon X\to \mathbb {R} } and define f ∗ := f ∘ ϕ {\displaystyle f^{*}:=f\circ \phi } .
Then
where A ~ ( f ) = ( f ( ϕ ( x i ) − ϕ ( x j ) ) = f ( x ~ i − x ~ j ) ) i , j {\displaystyle {\tilde {A}}^{(f)}={\big (}f(\phi (x_{i})-\phi (x_{j}))=f({\tilde {x}}_{i}-{\tilde {x}}_{j}){\big )}_{i,j}} where x ~ k := ϕ ( x k ) {\displaystyle {\tilde {x}}_{k}:=\phi (x_{k})} are distinct as ϕ {\displaystyle \phi } is linear . [ 1 ]
Positive-definiteness arises naturally in the theory of the Fourier transform ; it can be seen directly that to be positive-definite it is sufficient for f to be the Fourier transform of a function g on the real line with g ( y ) ≥ 0.
The converse result is Bochner's theorem , stating that any continuous positive-definite function on the real line is the Fourier transform of a (positive) measure . [ 2 ]
In statistics , and especially Bayesian statistics , the theorem is usually applied to real functions. Typically, n scalar measurements of some scalar value at points in R d {\displaystyle R^{d}} are taken and points that are mutually close are required to have measurements that are highly correlated. In practice, one must be careful to ensure that the resulting covariance matrix (an n × n matrix) is always positive-definite. One strategy is to define a correlation matrix A which is then multiplied by a scalar to give a covariance matrix : this must be positive-definite. Bochner's theorem states that if the correlation between two points is dependent only upon the distance between them (via function f ), then function f must be positive-definite to ensure the covariance matrix A is positive-definite. See Kriging .
In this context, Fourier terminology is not normally used and instead it is stated that f ( x ) is the characteristic function of a symmetric probability density function (PDF) .
One can define positive-definite functions on any locally compact abelian topological group ; Bochner's theorem extends to this context. Positive-definite functions on groups occur naturally in the representation theory of groups on Hilbert spaces (i.e. the theory of unitary representations ).
Alternatively, a function f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } is called positive-definite on a neighborhood D of the origin if f ( 0 ) = 0 {\displaystyle f(0)=0} and f ( x ) > 0 {\displaystyle f(x)>0} for every non-zero x ∈ D {\displaystyle x\in D} . [ 3 ] [ 4 ]
Note that this definition conflicts with definition 1, given above.
In physics, the requirement that f ( 0 ) = 0 {\displaystyle f(0)=0} is sometimes dropped (see, e.g., Corney and Olsen [ 5 ] ). | https://en.wikipedia.org/wiki/Positive-definite_function |
Positive-real functions , often abbreviated to PR function or PRF , are a kind of mathematical function that first arose in electrical network synthesis . They are complex functions , Z ( s ), of a complex variable, s . A rational function is defined to have the PR property if it is analytic in the right half of the complex plane, has a positive real part there, and takes on real values on the real axis.
In symbols the definition is,
Z ( s ) ∈ R for s ∈ R , and ℜ [ Z ( s ) ] ≥ 0 for ℜ [ s ] ≥ 0 {\displaystyle Z(s)\in \mathbb {R} {\text{ for }}s\in \mathbb {R} ,\quad {\text{and}}\quad \Re [Z(s)]\geq 0{\text{ for }}\Re [s]\geq 0.}
In electrical network analysis, Z ( s ) represents an impedance expression and s is the complex frequency variable, often expressed as its real and imaginary parts, s = σ + i ω {\displaystyle s=\sigma +i\omega } ,
in which terms the PR condition can be stated,
Z ( σ ) ∈ R and ℜ [ Z ( σ + i ω ) ] ≥ 0 for σ ≥ 0 {\displaystyle Z(\sigma )\in \mathbb {R} \quad {\text{and}}\quad \Re [Z(\sigma +i\omega )]\geq 0{\text{ for }}\sigma \geq 0.}
The importance to network analysis of the PR condition lies in the realisability condition. Z ( s ) is realisable as a one-port rational impedance if and only if it meets the PR condition. Realisable in this sense means that the impedance can be constructed from a finite (hence rational) number of discrete ideal passive linear elements ( resistors , inductors and capacitors in electrical terminology). [ 1 ]
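As an illustration of the PR condition itself (not of the realisability proof), the following sketch samples the right half-plane for an assumed rational impedance Z(s) = (s + 1)/(s² + s + 1); the function and the sampling grid are illustrative choices, and the check is numerical rather than a proof.

```python
import numpy as np

# Sketch: brute-force numerical check of the PR conditions for an assumed
# rational impedance Z(s) = (s + 1) / (s^2 + s + 1).
def Z(s):
    return (s + 1) / (s**2 + s + 1)

# Z should be (approximately) real on the positive real axis ...
sigma = np.linspace(0.01, 100, 1000)
print(np.max(np.abs(Z(sigma + 0j).imag)))          # expected ~ 0

# ... and have a nonnegative real part throughout the sampled right half-plane.
omega = np.linspace(-100, 100, 401)
s_grid = sigma[:, None] + 1j * omega[None, :]
print(np.min(Z(s_grid).real))                       # expected >= 0
```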
The term positive-real function was originally defined by [ 1 ] Otto Brune to describe any function Z ( s ) which [ 2 ] is rational (the quotient of two finite-degree polynomials), is real when s is real, and has a positive real part when s has a positive real part.
Many authors strictly adhere to this definition by explicitly requiring rationality, [ 3 ] or by restricting attention to rational functions, at least in the first instance. [ 4 ] However, a similar more general condition, not restricted to rational functions had earlier been considered by Cauer, [ 1 ] and some authors ascribe the term positive-real to this type of condition, while others consider it to be a generalization of the basic definition. [ 4 ]
The condition was first proposed by Wilhelm Cauer (1926) [ 5 ] who determined that it was a necessary condition. Otto Brune (1931) [ 2 ] [ 6 ] coined the term positive-real for the condition and proved that it was both necessary and sufficient for realisability.
A couple of generalizations are sometimes made, with intention of characterizing the immittance functions of a wider class of passive linear electrical networks.
The impedance Z ( s ) of a network consisting of an infinite number of components (such as a semi-infinite ladder ), need not be a rational function of s , and in particular may have branch points in the left half s -plane. To accommodate such functions in the definition of PR, it is therefore necessary to relax the condition that the function be real for all real s , and only require this when s is positive. Thus, a possibly irrational function Z ( s ) is PR if and only if
Some authors start from this more general definition, and then particularize it to the rational case.
Linear electrical networks with more than one port may be described by impedance or admittance matrices . So by extending the definition of PR to matrix-valued functions, linear multi-port networks which are passive may be distinguished from those that are not. A possibly irrational matrix-valued function Z ( s ) is PR if and only if | https://en.wikipedia.org/wiki/Positive-real_function |
Positive computing is a technological design perspective that embraces psychological well-being and ethical practice, aiming at building a digital environment to support happier and healthier users . Positive computing develops approaches that integrate insights from psychology , education , neuroscience , and HCI with technological development . [ 1 ] [ 2 ] The purpose of positive computing is to bridge the technology and mental health worlds. [ 3 ] Indeed, there are computer and mental health workshops that are aimed to bring people from both communities together. [ 4 ]
Everyone who uses technology is affected by the way the tool is designed, and even though most technologies may have only small effects, those effects apply to huge populations. [ 5 ] [ 3 ]
Technology researchers typically focus primarily on technical aspects, paying less attention to the ethical impact and ethical considerations of their products. [ 6 ] However, researchers from other fields such as psychology and philosophy studied these matters extensively and provided a wealth of methodologies to assess users' well-being, with thousands of quality-of-life assessment methods and validating studies. [ 7 ] [ 8 ]
Positive computing draws many ideas from positive psychology , a domain of psychology that focuses on societal well-being and improving quality of life.
The recognition of the impact of technology and inventions on people's lives [ 5 ] has moved technology professionals to rethink the technology tools we use and seek a realignment of companies' goals to the social good. Exemplary of this disposition is the famous Google 's motto, " don't be evil ." [ 9 ]
Technologies can be loosely classified into four groups according to their influence on the psychological aspects: [ 3 ]
In Calvo's and Peter's seminal book on positive computing, [ 10 ] they list the following as positive aspects to which we should aim when designing technologies: positive emotions, motivation, engagement, flow, self-awareness, self-compassion, mindfulness, empathy, compassion, and altruism.
An encompassing term for general human welfare and happiness is eudaimonia , which is extensively studied in positive psychology [ 11 ] and which is examined along different dimensions such as self-discovery, the sense of purpose and meaning in life, involvement in activities, investment in the pursuit of excellence, and the self-perception of one's own potential. [ 12 ]
There are three basic psychological needs according to Self-determination theory (SDT) : autonomy , competence , and relatedness , which can be briefly described as the feeling of psychological liberty and self-motivation, the feeling of having control and mastery, and the feeling of connection to others.
The three previously mentioned basic psychological needs are measurable and well-defined characteristics that make them excellent as design targets. [ 13 ]
To support autonomy, the design process needs to provide control over multiple options, provide meaningful rationales behind choices, enable the customization of the experience, and avoid controlling language . [ 14 ] [ 13 ]
Competence is also well-studied for game design , and the three main design factors supporting it are the appropriateness of the level of presented challenges, the presence of positive feedback, and the opportunities to learn and master the tasks at hand. [ 14 ] [ 15 ] [ 13 ]
Relatedness-supportive environments need to be designed to provide meaningful and responsive interactions with others, respect human emotions, avoid disrupting social relationships, and provide opportunities for social connections. [ 16 ] [ 13 ]
Responsible design, not to be confused with responsive design , comes from the integration of ethical analysis with well-being–supportive design into engineering practice. [ 17 ] In particular, it features the double diamond design process model adding a post-launch evaluation phase.
The responsible design process then consists of five stages: [ 18 ]
Over the past half-century, artificial intelligence has grown rapidly in terms of computational power, application, and mainstream usage. As written by Zhongzhi Shi, and observed by many others, "Artificial Intelligence attempts simulation, extension and expansion of human intelligence using artificial methodology and technology." [ 19 ]
A possible outcome of future computer science and computer engineering research is an Intelligence explosion . I. J. Good described the first superintelligent machine as "the last invention that man need ever make," because of the vast influence it would have on our species. [ 20 ] Indeed, Nick Bostrom , in his book Superintelligence: Paths, Dangers, Strategies , proposes the common good principle according to which superintelligence should be developed only for the benefit of all and based on widely shared ethical ideals. [ 21 ]
Malo Bourgon, COO of MIRI , stated that the AI community should consider best practices from the computer security community when testing their systems for safety and security before they are released for wide adoption. [ 22 ] Government legislation, business practices, and stronger education of AI and its consequences to society are also proposed. [ 23 ] These solutions implement the principles of positive computing into AI, making sure that it serves humanity in a positive way.
| https://en.wikipedia.org/wiki/Positive_computing |
A positive displacement meter is a type of flow meter that requires fluid to mechanically displace components in the meter in order for flow measurement. Positive displacement (PD) flow meters measure the volumetric flow rate of a moving fluid or gas by dividing the media into fixed, metered volumes (finite increments or volumes of the fluid). A basic analogy would be holding a bucket below a tap, filling it to a set level, then quickly replacing it with another bucket and timing the rate at which the buckets are filled (or the total number of buckets for the “totalized” flow). With appropriate pressure and temperature compensation, the mass flow rate can be accurately determined.
These devices consist of one or more chambers that obstruct the media flow and a rotating or reciprocating mechanism that allows the passage of fixed-volume amounts. The number of parcels that pass through the chamber determines the media volume. The rate of revolution or reciprocation determines the flow rate. There are two basic types of positive displacement flow meters. Sensor-only systems or transducers are switch-like devices that provide electronic outputs for processors, controllers, or data acquisition systems.
Complete sensor systems provide additional capabilities such as an integral display and/or user interface. For both types of positive displacement flow meters, performance specifications include the minimum and maximum measurable flow rate, operating pressure, temperature range , maximum allowable material viscosity, connection size, and percent accuracy (typically as a percentage of actual reading, not full scale) . Suppliers indicate whether devices are designed to measure fluid or gas.
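As a simple illustration of how a fixed metered volume and a pulse (or revolution) count yield a flow rate, the following sketch uses an assumed meter constant; the numbers are illustrative and are not specifications of any particular device.

```python
# Sketch: volumetric flow rate from a positive displacement meter's pulse output.
volume_per_pulse_l = 0.05     # assumption: litres displaced per rotor revolution/pulse
pulses_counted = 1200         # assumption: pulses seen in the counting window
window_s = 60.0               # counting window in seconds

totalized_volume_l = pulses_counted * volume_per_pulse_l
flow_rate_l_per_min = totalized_volume_l / (window_s / 60.0)
print(totalized_volume_l, flow_rate_l_per_min)   # 60.0 L totalized, 60.0 L/min
```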
A screw flowmeter is composed of a set of screws (also called spindles) which, together with the internal structure of the flowmeter's casing, form a measurement chamber. [ 1 ] The screws are set into rotation by the medium passing through the device, and the medium is then transferred by the screws from one end of the measuring device to the other. For this to happen, a pressure drop is essential and is seen as a "necessary evil". [ 1 ] This rotation can then be recorded by a sensor which, combined with the processing unit (software and hardware), delivers a measurement according to the flow rate, viscosity and size of the measurement chamber. [ 1 ]
Screw flowmeters are well-acknowledged for their excellent linearity (±0.001%), [ 2 ] [ 3 ] excellent repeatability (up to 0.006%) [ 2 ] and accuracy (±0.1%). [ 2 ] [ 3 ] Owing to these features and their reliability, they are often used as international metrological references and/or standards by metrology institutes. Screw meters allow public and independent institutes of metrology worldwide to compare their respective work and facilities, to calibrate other flowmeters (e.g., master metering), and to compare the performance of flowmeters based on different measurement principles. [ 4 ] [ 5 ] [ 6 ] [ 7 ]
List of public and independent institutes of metrology using screw flow meters as international reference and/or standard: [ 4 ] [ 5 ] [ 7 ] [ 6 ]
Each piston is mechanically or magnetically operated to fill a cylinder with the fluid and then discharge the fluid. Each stroke represents a finite measurement of the fluid (can be a single or multi-piston device).
Gear flow meters rely on internal gears rotating as fluid passes through them. There are various types of gear meters, named mostly for the shape of the internal components.
With oval gear flow meters, two oval gears or rotors are mounted inside a cylinder. As the fluid flows through the cylinder, the pressure of the fluid causes the rotors to rotate. As flow rate increases, so does the rotational speed of the rotors.
A disk mounted on a sphere is “wobbled” about an axis by the fluid flow and each rotation represents a finite amount of fluid transferred.
A nutating disc flow meter has a round disc mounted on a spindle in a cylindrical chamber. By tracking the movements of the spindle, the flow meter determines the number of times the chamber traps and empties fluid. This information is used to determine the flow rate.
A rotating impeller containing two or more vanes divides the spaces between the vanes into discrete volumes and each rotation (or vane passing) is counted.
Fluid is drawn into the inlet side of an oscillating diaphragm and then dispelled to the outlet. The diaphragm oscillating cycles are counted to determine the flow rate.
Positive displacement flowmeters are very accurate and have high turndown . They can be used with very viscous , dirty and corrosive fluids and essentially require no straight runs of pipe for fluid flow stream conditioning, though pressure drop can be an issue. They are widely used in the custody transfer of oils and liquid fuels (gasoline) and are used for residential natural gas and water metering. A diaphragm meter, with which most homes are equipped, is an example of a positive displacement meter. This type of meter is appealing in certain custody transfer applications where it is critical that the metering be functional in order for any flow to take place.
Positive displacement flowmeters, with internal wiping seals, produce the highest differential pressure (and subsequently greatest pressure drop head loss ) of all the flowmeter types. Meters that rely on a liquid seal create a relatively low pressure drop.
Positive-displacement (PD) meters can measure both liquids and gases. Like turbine meters, PD flow meters work best with clean, non-corrosive, and non-erosive liquids and gases, although some models will tolerate some impurities. Because of their high accuracy, PD meters are widely used at residences to measure the amount of gas or water used. Other applications include: chemical injection, fuel measurement, precision test stands, high pressure, hydraulic testing, and similar precision applications. [ application 1 ]
Some designs require that only lubricating fluid be measured, because the rotors are exposed to the fluid. PD meters differ from turbine meters in that they handle medium and high-viscosity liquids well. For this reason, they are often used to measure the flow of hydraulic fluids . Compared with orifice-type meters , PD meters require very little straight upstream piping since they are not sensitive to uneven flow distribution across the area of the pipe. [ 8 ] Positive displacement flow meters can provide better relative accuracy at low flows than orifice-type flow meters. However, a positive displacement meter can be considerably heavier and more costly than non-positive-displacement types such as orifice plates, magnetic or vortex flow meters . | https://en.wikipedia.org/wiki/Positive_displacement_meter |
In mathematics , an element of a *-algebra is called positive if it is the sum of elements of the form a ∗ a {\displaystyle a^{*}a} . [ 1 ]
Let A {\displaystyle {\mathcal {A}}} be a *-algebra. An element a ∈ A {\displaystyle a\in {\mathcal {A}}} is called positive if there are finitely many elements a k ∈ A ( k = 1 , 2 , … , n ) {\displaystyle a_{k}\in {\mathcal {A}}\;(k=1,2,\ldots ,n)} , so that a = ∑ k = 1 n a k ∗ a k {\textstyle a=\sum _{k=1}^{n}a_{k}^{*}a_{k}} holds. [ 1 ] This is also denoted by a ≥ 0 {\displaystyle a\geq 0} . [ 2 ]
The set of positive elements is denoted by A + {\displaystyle {\mathcal {A}}_{+}} .
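A standard concrete case is the *-algebra of n × n complex matrices with * taken as the conjugate transpose, where an element of the form Σ_k a_k* a_k is a positive semi-definite matrix. The following numerical sketch illustrates this; the random matrices are arbitrary and only serve as an example.

```python
import numpy as np

# Sketch: in the *-algebra of n-by-n complex matrices (with * = conjugate
# transpose), an element of the form sum_k a_k^* a_k is positive semi-definite.
rng = np.random.default_rng(1)
n = 4
a_list = [rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3)]
positive_element = sum(a.conj().T @ a for a in a_list)

# It is self-adjoint and its spectrum is nonnegative (up to rounding error).
print(np.allclose(positive_element, positive_element.conj().T))
print(np.linalg.eigvalsh(positive_element).min())
```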
A special case of particular importance is the case where A {\displaystyle {\mathcal {A}}} is a complete normed *-algebra that satisfies the C*-identity ( ‖ a ∗ a ‖ = ‖ a ‖ 2 ∀ a ∈ A {\displaystyle \left\|a^{*}a\right\|=\left\|a\right\|^{2}\ \forall a\in {\mathcal {A}}} ), which is called a C*-algebra .
In case A {\displaystyle {\mathcal {A}}} is a C*-algebra, the following holds:
Let A {\displaystyle {\mathcal {A}}} be a C*-algebra and a ∈ A {\displaystyle a\in {\mathcal {A}}} . Then the following are equivalent: [ 4 ]
If A {\displaystyle {\mathcal {A}}} is a unital *-algebra with unit element e {\displaystyle e} , then in addition the following statements are equivalent: [ 5 ]
Let A {\displaystyle {\mathcal {A}}} be a *-algebra. Then:
Let A {\displaystyle {\mathcal {A}}} be a C*-algebra. Then:
Let A {\displaystyle {\mathcal {A}}} be a *-algebra. The property of being a positive element defines a translation invariant partial order on the set of self-adjoint elements A s a {\displaystyle {\mathcal {A}}_{sa}} . If b − a ∈ A + {\displaystyle b-a\in {\mathcal {A}}_{+}} holds for a , b ∈ A {\displaystyle a,b\in {\mathcal {A}}} , one writes a ≤ b {\displaystyle a\leq b} or b ≥ a {\displaystyle b\geq a} . [ 13 ]
This partial order fulfills the properties t a ≤ t b {\displaystyle ta\leq tb} and a + c ≤ b + c {\displaystyle a+c\leq b+c} for all a , b , c ∈ A s a {\displaystyle a,b,c\in {\mathcal {A}}_{sa}} with a ≤ b {\displaystyle a\leq b} and t ∈ [ 0 , ∞ ) {\displaystyle t\in [0,\infty )} . [ 8 ]
If A {\displaystyle {\mathcal {A}}} is a C*-algebra, the partial order also has the following properties for a , b ∈ A {\displaystyle a,b\in {\mathcal {A}}} : | https://en.wikipedia.org/wiki/Positive_element |
In mathematics , more specifically in functional analysis , a positive linear operator from a preordered vector space ( X , ≤ ) {\displaystyle (X,\leq )} into a preordered vector space ( Y , ≤ ) {\displaystyle (Y,\leq )} is a linear operator f {\displaystyle f} on X {\displaystyle X} into Y {\displaystyle Y} such that for all positive elements x {\displaystyle x} of X , {\displaystyle X,} that is x ≥ 0 , {\displaystyle x\geq 0,} it holds that f ( x ) ≥ 0. {\displaystyle f(x)\geq 0.} In other words, a positive linear operator maps the positive cone of the domain into the positive cone of the codomain .
Every positive linear functional is a type of positive linear operator.
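A simple finite-dimensional illustration (with the componentwise order on R^n assumed for this example, not taken from the text above): a matrix with nonnegative entries defines a positive linear operator, since it maps the positive cone into the positive cone.

```python
import numpy as np

# Sketch: with the componentwise order on R^4 and R^3, a matrix with
# nonnegative entries is a positive linear operator: it maps the positive
# cone {x : x >= 0} into the positive cone.
rng = np.random.default_rng(2)
T = rng.uniform(0.0, 1.0, size=(3, 4))   # nonnegative entries
x = rng.uniform(0.0, 1.0, size=4)        # a positive element of R^4
print((T @ x >= 0).all())                # True: T(x) lies in the positive cone of R^3
```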
The significance of positive linear operators lies in results such as Riesz–Markov–Kakutani representation theorem .
A linear function f {\displaystyle f} on a preordered vector space is called positive if it satisfies either of the following equivalent conditions:
The set of all positive linear forms on a vector space with positive cone C , {\displaystyle C,} called the dual cone and denoted by C ∗ , {\displaystyle C^{*},} is a cone equal to the polar of − C . {\displaystyle -C.} The preorder induced by the dual cone on the space of linear functionals on X {\displaystyle X} is called the dual preorder . [ 1 ]
The order dual of an ordered vector space X {\displaystyle X} is the set, denoted by X + , {\displaystyle X^{+},} defined by X + := C ∗ − C ∗ . {\displaystyle X^{+}:=C^{*}-C^{*}.}
Let ( X , ≤ ) {\displaystyle (X,\leq )} and ( Y , ≤ ) {\displaystyle (Y,\leq )} be preordered vector spaces and let L ( X ; Y ) {\displaystyle {\mathcal {L}}(X;Y)} be the space of all linear maps from X {\displaystyle X} into Y . {\displaystyle Y.} The set H {\displaystyle H} of all positive linear operators in L ( X ; Y ) {\displaystyle {\mathcal {L}}(X;Y)} is a cone in L ( X ; Y ) {\displaystyle {\mathcal {L}}(X;Y)} that defines a preorder on L ( X ; Y ) {\displaystyle {\mathcal {L}}(X;Y)} .
If M {\displaystyle M} is a vector subspace of L ( X ; Y ) {\displaystyle {\mathcal {L}}(X;Y)} and if H ∩ M {\displaystyle H\cap M} is a proper cone then this proper cone defines a canonical partial order on M {\displaystyle M} making M {\displaystyle M} into a partially ordered vector space. [ 2 ]
If ( X , ≤ ) {\displaystyle (X,\leq )} and ( Y , ≤ ) {\displaystyle (Y,\leq )} are ordered topological vector spaces and if G {\displaystyle {\mathcal {G}}} is a family of bounded subsets of X {\displaystyle X} whose union covers X {\displaystyle X} then the positive cone H {\displaystyle {\mathcal {H}}} in L ( X ; Y ) {\displaystyle L(X;Y)} , which is the space of all continuous linear maps from X {\displaystyle X} into Y , {\displaystyle Y,} is closed in L ( X ; Y ) {\displaystyle L(X;Y)} when L ( X ; Y ) {\displaystyle L(X;Y)} is endowed with the G {\displaystyle {\mathcal {G}}} -topology . [ 2 ] For H {\displaystyle {\mathcal {H}}} to be a proper cone in L ( X ; Y ) {\displaystyle L(X;Y)} it is sufficient that the positive cone of X {\displaystyle X} be total in X {\displaystyle X} (that is, the span of the positive cone of X {\displaystyle X} be dense in X {\displaystyle X} ).
If Y {\displaystyle Y} is a locally convex space of dimension greater than 0 then this condition is also necessary. [ 2 ] Thus, if the positive cone of X {\displaystyle X} is total in X {\displaystyle X} and if Y {\displaystyle Y} is a locally convex space, then the canonical ordering of L ( X ; Y ) {\displaystyle L(X;Y)} defined by H {\displaystyle {\mathcal {H}}} is a regular order. [ 2 ]
Proposition : Suppose that X {\displaystyle X} and Y {\displaystyle Y} are ordered locally convex topological vector spaces with X {\displaystyle X} being a Mackey space on which every positive linear functional is continuous. If the positive cone of Y {\displaystyle Y} is a weakly normal cone in Y {\displaystyle Y} then every positive linear operator from X {\displaystyle X} into Y {\displaystyle Y} is continuous. [ 2 ]
Proposition : Suppose X {\displaystyle X} is a barreled ordered topological vector space (TVS) with positive cone C {\displaystyle C} that satisfies X = C − C {\displaystyle X=C-C} and Y {\displaystyle Y} is a semi-reflexive ordered TVS with a positive cone D {\displaystyle D} that is a normal cone . Give L ( X ; Y ) {\displaystyle L(X;Y)} its canonical order and let U {\displaystyle {\mathcal {U}}} be a subset of L ( X ; Y ) {\displaystyle L(X;Y)} that is directed upward and either majorized (that is, bounded above by some element of L ( X ; Y ) {\displaystyle L(X;Y)} ) or simply bounded. Then u = sup U {\displaystyle u=\sup {\mathcal {U}}} exists and the section filter F ( U ) {\displaystyle {\mathcal {F}}({\mathcal {U}})} converges to u {\displaystyle u} uniformly on every precompact subset of X . {\displaystyle X.} [ 2 ] | https://en.wikipedia.org/wiki/Positive_linear_operator |
Positive material identification (PMI) is the analysis of a material to establish its composition by reading the percentage quantities of its constituent elements . It can be applied to any material, but is generally used for metallic alloys . Typical methods for PMI include X-ray fluorescence (XRF) and optical emission spectrometry (OES). [ 1 ]
PMI is a portable method of analysis and can be used in the field on components. [ 2 ]
X-ray fluorescence (XRF) PMI cannot detect light elements such as carbon. This means that when undertaking analysis of stainless steels such as grades 304 and 316, the low-carbon 'L' variant cannot be determined. This can, however, be analysed with optical emission spectrometry (OES). [ 3 ]
| https://en.wikipedia.org/wiki/Positive_material_identification |
Positive pressure is a pressure within a system that is greater than the environment that surrounds that system. Consequently, if there is any leak from the positively pressured system, it will egress into the surrounding environment. This is in contrast to a negative pressure room , where air is sucked in. [ 1 ] [ 2 ]
Use is also made of positive pressure to ensure there is no ingress of the environment into a supposed closed system. A typical example of the use of positive pressure is the location of a habitat in an area where there may exist flammable gases such as those found on an oil platform or laboratory cleanroom . This kind of positive pressure is also used in operating theaters and in vitro fertilisation (IVF) labs. [ citation needed ]
Hospitals may have positive pressure rooms for patients with compromised immune systems . Air will flow out of the room instead of in, so that any airborne microorganisms (e.g., bacteria) that may infect the patient are kept away. [ 2 ]
This process is important in human and chick development. Positive pressure, created by the closure of anterior and posterior neuropores of the neural tube during neurulation , is a requirement of brain development.
Amphibians use this process to respire , whereby they use positive pressure to inflate their lungs .
Industrial positive pressure systems are commonly used to ventilate confined spaces containing dust, fumes, pollutants and/or high temperatures. [ 3 ]
Many hospitals are equipped with negative and positive pressure rooms just for the purposes described. Negative pressure rooms are used to help keep airborne pathogens (e.g., aerosolized COVID-19 and active TB ) from escaping into surrounding areas, thereby preventing their spread outside the room. Positive pressure rooms are used for immunocompromised persons (e.g., Neutropenic patients), whereby controlled-quality air is sent into the room to prevent random (and potentially polluted) air from entering the room. [ 4 ] The CDC recommends a positive pressure differential of at least 2.5 Pa between the positively pressured room and the adjoining hallway. [ 5 ] | https://en.wikipedia.org/wiki/Positive_pressure |
In mathematical analysis , a positively (or positive ) invariant set is a set with the following properties:
Suppose x ˙ = f ( x ) {\displaystyle {\dot {x}}=f(x)} is a dynamical system , x ( t , x 0 ) {\displaystyle x(t,x_{0})} is a trajectory, and x 0 {\displaystyle x_{0}} is the initial point. Let O := { x ∈ R n ∣ φ ( x ) = 0 } {\displaystyle {\mathcal {O}}:=\left\lbrace x\in \mathbb {R} ^{n}\mid \varphi (x)=0\right\rbrace } where φ {\displaystyle \varphi } is a real-valued function . The set O {\displaystyle {\mathcal {O}}} is said to be positively invariant if x 0 ∈ O {\displaystyle x_{0}\in {\mathcal {O}}} implies that x ( t , x 0 ) ∈ O ∀ t ≥ 0 {\displaystyle x(t,x_{0})\in {\mathcal {O}}\ \forall \ t\geq 0}
In other words, once a trajectory of the system enters O {\displaystyle {\mathcal {O}}} , it will never leave it again.
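A minimal numerical sketch follows, for an assumed example system rather than anything from the text above: for ẋ = −x the ball {x : ‖x‖² ≤ c} is positively invariant (since d‖x‖²/dt = −2‖x‖² ≤ 0), and a crude integration check confirms that a trajectory started inside never leaves.

```python
import numpy as np

# Sketch (illustrative system): for dx/dt = -x, the ball ||x||^2 <= c is
# positively invariant.  Crude Euler integration check:
def max_norm_sq_along_trajectory(x0, dt=1e-3, steps=20000):
    x = np.array(x0, dtype=float)
    max_norm_sq = np.dot(x, x)
    for _ in range(steps):
        x += dt * (-x)                      # dx/dt = -x
        max_norm_sq = max(max_norm_sq, np.dot(x, x))
    return max_norm_sq

c = 1.0
x0 = [0.6, -0.5]                            # starts inside the ball ||x||^2 <= c
print(max_norm_sq_along_trajectory(x0) <= c)  # True: the trajectory never leaves
```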
| https://en.wikipedia.org/wiki/Positively_invariant_set |
The positron or antielectron is the particle with an electric charge of +1 e , a spin of 1/2 (the same as the electron), and the same mass as an electron . It is the antiparticle ( antimatter counterpart) of the electron . When a positron collides with an electron, annihilation occurs. If this collision occurs at low energies, it results in the production of two or more photons .
Positrons can be created by positron emission radioactive decay (through weak interactions ), or by pair production from a sufficiently energetic photon which is interacting with an atom in a material.
In 1928, Paul Dirac published a paper proposing that electrons can have both a positive and negative charge. [ 5 ] This paper introduced the Dirac equation , a unification of quantum mechanics, special relativity , and the then-new concept of electron spin to explain the Zeeman effect . The paper did not explicitly predict a new particle but did allow for electrons having either positive or negative energy as solutions . Hermann Weyl then published a paper discussing the mathematical implications of the negative energy solution. [ 6 ] The positive-energy solution explained experimental results, but Dirac was puzzled by the equally valid negative-energy solution that the mathematical model allowed. Quantum mechanics did not allow the negative energy solution to simply be ignored, as classical mechanics often did in such equations; the dual solution implied the possibility of an electron spontaneously jumping between positive and negative energy states. However, no such transition had yet been observed experimentally. [ 5 ]
Dirac wrote a follow-up paper in December 1929 [ 7 ] that attempted to explain the unavoidable negative-energy solution for the relativistic electron. He argued that "... an electron with negative energy moves in an external [electromagnetic] field as though it carries a positive charge." He further asserted that all of space could be regarded as a "sea" of negative energy states that were filled, so as to prevent electrons jumping between positive energy states (negative electric charge) and negative energy states (positive charge). The paper also explored the possibility of the proton being an island in this sea, and that it might actually be a negative-energy electron. Dirac acknowledged that the proton having a much greater mass than the electron was a problem, but expressed "hope" that a future theory would resolve the issue. [ 7 ]
Robert Oppenheimer argued strongly against the proton being the negative-energy electron solution to Dirac's equation. He asserted that if it were, the hydrogen atom would rapidly self-destruct. [ 8 ] Weyl in 1931 showed that the negative-energy electron must have the same mass as that of the positive-energy electron. [ 9 ] Persuaded by Oppenheimer's and Weyl's argument, Dirac published a paper in 1931 that predicted the existence of an as-yet-unobserved particle that he called an "anti-electron" that would have the same mass and the opposite charge as an electron and that would mutually annihilate upon contact with an electron. [ 10 ]
Ernst Stueckelberg , and later Richard Feynman , proposed an interpretation of the positron as an electron moving backward in time, [ 11 ] reinterpreting the negative-energy solutions of the Dirac equation. Electrons moving backward in time would have a positive electric charge . John Archibald Wheeler invoked this concept to explain the identical properties shared by all electrons, suggesting that "they are all the same electron" with a complex, self-intersecting worldline . [ 12 ] Yoichiro Nambu later applied it to all production and annihilation of particle-antiparticle pairs, stating that "the eventual creation and annihilation of pairs that may occur now and then is no creation or annihilation, but only a change of direction of moving particles, from the past to the future, or from the future to the past." [ 13 ] The backwards in time point of view is nowadays accepted as completely equivalent to other pictures, but it does not have anything to do with the macroscopic terms "cause" and "effect", which do not appear in a microscopic physical description. [ citation needed ]
Several sources have claimed that Dmitri Skobeltsyn first observed the positron long before 1930, [ 14 ] or even as early as 1923. [ 15 ] They state that while using a Wilson cloud chamber [ 16 ] in order to study the Compton effect , Skobeltsyn detected particles that acted like electrons but curved in the opposite direction in an applied magnetic field, and that he presented photographs with this phenomenon in a conference in the University of Cambridge , on 23–27 July 1928. In his book [ 17 ] on the history of the positron discovery from 1963, Norwood Russell Hanson has given a detailed account of the reasons for this assertion, and this may have been the origin of the myth. But he also presented Skobeltsyn's objection to it in an appendix. [ 18 ] Later, Skobeltsyn rejected this claim even more strongly, calling it "nothing but sheer nonsense". [ 19 ]
Skobeltsyn did pave the way for the eventual discovery of the positron by two important contributions: adding a magnetic field to his cloud chamber (in 1925 [ 20 ] ), and by discovering charged particle cosmic rays , [ 21 ] for which he is credited in Carl David Anderson 's Nobel lecture . [ 22 ] Skobeltsyn did observe likely positron tracks on images taken in 1931, [ 23 ] but did not identify them as such at the time.
Likewise, in 1929 Chung-Yao Chao , a Chinese graduate student at Caltech , noticed some anomalous results that indicated particles behaving like electrons, but with a positive charge, though the results were inconclusive and the phenomenon was not pursued. [ 24 ] Fifty years later, Anderson acknowledged that his discovery was inspired by the work of his Caltech classmate Chung-Yao Chao , whose research formed the foundation from which much of Anderson's work developed but was not credited at the time. [ 25 ]
Anderson discovered the positron on 2 August 1932, [ 26 ] for which he won the Nobel Prize for Physics in 1936. [ 27 ] Anderson did not coin the term positron , but allowed it at the suggestion of the Physical Review journal editor to whom he submitted his discovery paper in late 1932. The positron was the first evidence of antimatter and was discovered when Anderson allowed cosmic rays to pass through a cloud chamber and a lead plate. A magnet surrounded this apparatus, causing particles to bend in different directions based on their electric charge. The ion trail left by each positron appeared on the photographic plate with a curvature matching the mass-to-charge ratio of an electron, but in a direction that showed its charge was positive. [ 28 ]
Anderson wrote in retrospect that the positron could have been discovered earlier based on Chung-Yao Chao's work, if only it had been followed up on. [ 24 ] Frédéric and Irène Joliot-Curie in Paris had evidence of positrons in old photographs when Anderson's results came out, but they had dismissed them as protons. [ 28 ]
The positron had also been contemporaneously discovered by Patrick Blackett and Giuseppe Occhialini at the Cavendish Laboratory in 1932. Blackett and Occhialini had delayed publication to obtain more solid evidence, so Anderson was able to publish the discovery first. [ 29 ]
Positrons are produced, together with neutrinos naturally in β + decays of naturally occurring radioactive isotopes (for example, potassium-40 ) and in interactions of gamma quanta (emitted by radioactive nuclei) with matter. Antineutrinos are another kind of antiparticle produced by natural radioactivity (β − decay). Many different kinds of antiparticles are also produced by (and contained in) cosmic rays . In research published in 2011 by the American Astronomical Society , positrons were discovered originating above thunderstorm clouds; positrons are produced in gamma-ray flashes created by electrons accelerated by strong electric fields in the clouds. [ 30 ] Antiprotons have also been found to exist in the Van Allen Belts around the Earth by the PAMELA module . [ 31 ] [ 32 ]
Antiparticles, of which the most common are antineutrinos and positrons due to their low mass, are also produced in any environment with a sufficiently high temperature (mean particle energy greater than the pair production threshold). During the period of baryogenesis , when the universe was extremely hot and dense, matter and antimatter were continually produced and annihilated. The presence of remaining matter, and absence of detectable remaining antimatter, [ 33 ] also called baryon asymmetry , is attributed to CP-violation : a violation of the CP-symmetry relating matter to antimatter. The exact mechanism of this violation during baryogenesis remains a mystery. [ 34 ]
Positron production from radioactive β + decay can be considered both artificial and natural production, as the generation of the radioisotope can be natural or artificial. Perhaps the best known naturally-occurring radioisotope which produces positrons is potassium-40, a long-lived isotope of potassium which occurs as a primordial isotope of potassium. Even though it is a small percentage of potassium (0.0117%), it is the single most abundant radioisotope in the human body. In a human body of 70 kg (150 lb) mass, about 4,400 nuclei of 40 K decay per second. [ 35 ] The activity of natural potassium is 31 Bq /g. [ 36 ] About 0.001% of these 40 K decays produce about 4000 natural positrons per day in the human body. [ 37 ] These positrons soon find an electron, undergo annihilation, and produce pairs of 511 keV photons, in a process similar (but much lower intensity) to that which happens during a PET scan nuclear medicine procedure. [ citation needed ]
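The figures quoted above can be cross-checked with a short back-of-the-envelope calculation; the inputs are taken directly from the text and the result is approximate.

```python
# Rough cross-check of the numbers quoted above for potassium-40 in a 70 kg body.
decays_per_second = 4400          # 40K decays per second (from the text)
positron_branching = 0.001 / 100  # "about 0.001%" of decays produce a positron
seconds_per_day = 86400

positrons_per_day = decays_per_second * positron_branching * seconds_per_day
print(round(positrons_per_day))   # ~3800, consistent with "about 4000 per day"
```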
Recent observations indicate black holes and neutron stars produce vast amounts of positron-electron plasma in astrophysical jets . Large clouds of positron-electron plasma have also been associated with neutron stars. [ 38 ] [ 39 ] [ 40 ]
Satellite experiments have found evidence of positrons (as well as a few antiprotons) in primary cosmic rays, amounting to less than 1% of the particles in primary cosmic rays. [ 41 ] However, the fraction of positrons in cosmic rays has been measured more recently with improved accuracy, especially at much higher energy levels, and the fraction of positrons has been seen to be greater in these higher energy cosmic rays. [ 42 ]
These do not appear to be the products of large amounts of antimatter from the Big Bang, or indeed complex antimatter in the universe (evidence for which is lacking, see below). Rather, the antimatter in cosmic rays appears to consist of only these two elementary particles. Recent theories suggest the source of such positrons may come from annihilation of dark matter particles, acceleration of positrons to high energies in astrophysical objects, and production of high energy positrons in the interactions of cosmic ray nuclei with interstellar gas. [ 43 ]
Preliminary results from the presently operating Alpha Magnetic Spectrometer ( AMS-02 ) on board the International Space Station show that positrons in the cosmic rays arrive with no directionality, and with energies that range from 0.5 GeV to 500 GeV. [ 44 ] [ 45 ] Positron fraction peaks at a maximum of about 16% of total electron+positron events, around an energy of 275 ± 32 GeV. At higher energies, up to 500 GeV, the ratio of positrons to electrons begins to fall again. The absolute flux of positrons also begins to fall before 500 GeV, but peaks at energies far higher than electron energies, which peak about 10 GeV. [ 46 ] [ 47 ] These results have been suggested, on one interpretation, to be due to positron production in annihilation events of massive dark matter particles. [ 48 ]
Positrons, like anti-protons, do not appear to originate from any hypothetical "antimatter" regions of the universe. On the contrary, there is no evidence of complex antimatter atomic nuclei, such as antihelium nuclei (i.e., anti-alpha particles), in cosmic rays. These are actively being searched for. A prototype of the AMS-02 designated AMS-01 , was flown into space aboard the Space Shuttle Discovery on STS-91 in June 1998. By not detecting any antihelium at all, the AMS-01 established an upper limit of 1.1×10 −6 for the antihelium to helium flux ratio. [ 49 ]
Physicists at the Lawrence Livermore National Laboratory in California have used a short, ultra-intense laser to irradiate a millimeter-thick gold target and produce more than 100 billion positrons. [ 50 ] Presently significant lab production of 5 MeV positron-electron beams allows investigation of multiple characteristics such as how different elements react to 5 MeV positron interactions or impacts, how energy is transferred to particles, and the shock effect of gamma-ray bursts . [ 51 ]
In 2023, a collaboration between CERN and University of Oxford performed an experiment at the HiRadMat facility [ 52 ] in which nano-second duration beams of electron-positron pairs were produced containing more than 10 trillion electron-positron pairs, so creating the first 'pair plasma' in the laboratory with sufficient density to support collective plasma behavior. [ 53 ] Future experiments offer the possibility to study physics relevant to extreme astrophysical environments where copious electron-positron pairs are generated, such as gamma-ray bursts , fast radio bursts and blazar jets.
Certain kinds of particle accelerator experiments involve colliding positrons and electrons at relativistic speeds. The high impact energy and the mutual annihilation of these matter/antimatter opposites create a fountain of diverse subatomic particles. Physicists study the results of these collisions to test theoretical predictions and to search for new kinds of particles. [ citation needed ]
The ALPHA experiment combines positrons with antiprotons to study properties of antihydrogen . [ 54 ]
Gamma rays, emitted indirectly by a positron-emitting radionuclide (tracer), are detected in positron emission tomography (PET) scanners used in hospitals. PET scanners create detailed three-dimensional images of metabolic activity within the human body. [ 55 ]
An experimental tool called positron annihilation spectroscopy (PAS) is used in materials research to detect variations in density, defects, displacements, or even voids, within a solid material. [ 56 ] | https://en.wikipedia.org/wiki/Positron |
Positron annihilation spectroscopy (PAS) [ 1 ] or sometimes specifically referred to as positron annihilation lifetime spectroscopy (PALS) is a non-destructive spectroscopy technique to study voids and defects in solids. [ 2 ] [ 3 ]
The technique operates on the principle that a positron or positronium will annihilate through interaction with electrons. This annihilation releases gamma rays that can be detected; the time between emission of positrons from a radioactive source and detection of gamma rays due to annihilation corresponds to the lifetime of positron or positronium.
When positrons are injected into a solid body, they interact in some manner with the electrons in that species. For solids containing free electrons (such as metals or semiconductors ), the implanted positrons annihilate rapidly unless voids such as vacancy defects are present. If voids are available, positrons will reside in them and annihilate less rapidly than in the bulk of the material, on time scales up to ~1 ns. For insulators such as polymers or zeolites , implanted positrons interact with electrons in the material to form positronium.
Positronium is a metastable hydrogen-like bound state of an electron and a positron which can exist in two spin states. Para -positronium, p -Ps, is a singlet state (the positron and electron spins are anti-parallel) with a characteristic self-annihilation lifetime of 125 ps in vacuum. [ 4 ] Ortho -positronium, o -Ps, is a triplet state (the positron and electron spins are parallel) with a characteristic self-annihilation lifetime of 142 ns in vacuum. [ 4 ] In molecular materials, the lifetime of o -Ps is environment dependent and it delivers information pertaining to the size of the void in which it resides. Ps can pick up a molecular electron with an opposite spin to that of the positron, leading to a reduction of the o -Ps lifetime from 142 ns to 1-4 ns (depending on the size of the free volume in which it resides). [ 4 ] The size of the molecular free volume can be derived from the o -Ps lifetime via the semi-empirical Tao-Eldrup model. [ 5 ]
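The semi-empirical Tao–Eldrup relation mentioned above can be sketched numerically, under the usual assumptions of a spherical pore and the commonly quoted empirical electron-layer thickness ΔR ≈ 1.66 Å; exact parameter values vary between references, so this is illustrative only and applies to small pores (extended models handle larger or non-spherical voids).

```python
import math

# Sketch of the Tao-Eldrup relation between o-Ps lifetime and pore radius,
# assuming a spherical pore and an empirical electron-layer thickness of
# about 1.66 angstrom.  Illustrative values only.
DELTA_R = 1.66  # angstrom (empirical fit parameter)

def o_ps_lifetime_ns(radius_angstrom):
    r0 = radius_angstrom + DELTA_R
    x = radius_angstrom / r0
    rate_per_ns = 2.0 * (1.0 - x + math.sin(2.0 * math.pi * x) / (2.0 * math.pi))
    return 1.0 / rate_per_ns

for r in (2.0, 3.0, 5.0):  # pore radii in angstrom
    print(r, round(o_ps_lifetime_ns(r), 2))  # lifetimes of a few nanoseconds
```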
While PALS is successful in examining local free volumes, it still needs to employ data from combined methods in order to yield free volume fractions. Even approaches that claim to obtain fractional free volume from PALS data independently of other experiments, such as PVT measurements, still employ theoretical considerations, such as the iso-free-volume amount from Simha–Boyer theory. A convenient emerging method for obtaining free volume amounts in an independent manner is computer simulation; such simulations can be combined with PALS measurements and help to interpret them. [ 6 ]
Pore structure in insulators can be determined using the quantum mechanical Tao-Eldrup model [ 7 ] [ 8 ] and extensions thereof. By changing the temperature at which a sample is analyzed, the pore structure can be fit to a model where positronium is confined in one, two, or three dimensions. However, interconnected pores result in averaged lifetimes that cannot distinguish between smooth channels or channels having smaller, open, peripheral pores due to energetically favored positronium diffusion from small to larger pores.
The behavior of positrons in molecules or condensed matter is nontrivial due to the strong correlation between electrons and positrons. Even the simplest case, that of a single positron immersed in a homogeneous gas of electrons, has proved to be a significant challenge for theory. The positron attracts electrons to it, increasing the contact density and hence enhancing the annihilation rate. Furthermore, the momentum density of annihilating electron-positron pairs is enhanced near the Fermi surface . [ 9 ] Theoretical approaches used to study this problem have included the Tamm-Dancoff approximation, [ 10 ] Fermi [ 11 ] and perturbed [ 12 ] hypernetted chain approximations, density functional theory methods [ 13 ] [ 14 ] and quantum Monte Carlo . [ 15 ] [ 16 ]
The experiment itself involves having a radioactive positron source (often 22 Na) situated near the analyte . Positrons are emitted near-simultaneously with gamma rays. These gamma rays are detected by a nearby scintillator . [ citation needed ] | https://en.wikipedia.org/wiki/Positron_annihilation_spectroscopy |
Positron emission , beta plus decay , or β + decay is a subtype of radioactive decay called beta decay , in which a proton inside a radionuclide nucleus is converted into a neutron while releasing a positron and an electron neutrino ( ν e ). [ 1 ] Positron emission is mediated by the weak force . The positron is a type of beta particle (β + ), the other beta particle being the electron (β − ) emitted from the β − decay of a nucleus.
An example of positron emission (β + decay) is shown with magnesium-23 decaying into sodium-23 :
Because positron emission decreases proton number relative to neutron number, positron decay happens typically in large "proton-rich" radionuclides. Positron decay results in nuclear transmutation , changing an atom of one chemical element into an atom of an element with an atomic number that is less by one unit.
Positron emission occurs extremely rarely in nature on Earth. Known instances include cosmic ray interactions and the decay of certain isotopes , such as potassium-40 . This rare form of potassium makes up only 0.012% of the element on Earth and has a 1 in 100,000 chance of decaying via positron emission.
Positron emission should not be confused with electron emission or beta minus decay (β − decay), which occurs when a neutron turns into a proton and the nucleus emits an electron and an antineutrino.
Positron emission is different from proton decay , the hypothetical decay of protons, not necessarily those bound with neutrons, not necessarily through the emission of a positron, and not as part of nuclear physics, but rather of particle physics .
In 1934 Frédéric and Irène Joliot-Curie bombarded aluminium with alpha particles (emitted by polonium ) to effect the nuclear reaction 4 2 He + 27 13 Al → 30 15 P + 1 0 n , and observed that the product isotope 30 15 P emits a positron identical to those found in cosmic rays by Carl David Anderson in 1932. [ 2 ] This was the first example of β + decay (positron emission). The Curies termed the phenomenon "artificial radioactivity", because 30 15 P is a short-lived nuclide which does not exist in nature. The discovery of artificial radioactivity would be cited when the husband-and-wife team won the Nobel Prize.
Isotopes which undergo this decay and thereby emit positrons include, but are not limited to: carbon-11 , nitrogen-13 , oxygen-15 , fluorine-18 , copper-64 , gallium-68, bromine-78, rubidium-82 , yttrium-86, zirconium-89, [ 3 ] sodium-22 , aluminium-26 , potassium-40 , strontium-83 , and iodine-124 . [ 3 ] [ 4 ] As an example, the following equation describes the beta plus decay of carbon-11 to boron -11, emitting a positron and a neutrino :
Inside protons and neutrons, there are fundamental particles called quarks . The two most common types of quarks are up quarks , which have a charge of + 2 ⁄ 3 , and down quarks , with a − 1 ⁄ 3 charge. Quarks arrange themselves in sets of three such that they make protons and neutrons . In a proton, whose charge is +1, there are two up quarks and one down quark ( 2 ⁄ 3 + 2 ⁄ 3 − 1 ⁄ 3 = 1). Neutrons, with no charge, have one up quark and two down quarks ( 2 ⁄ 3 − 1 ⁄ 3 − 1 ⁄ 3 = 0). Via the weak interaction , quarks can change flavor from down to up , resulting in electron emission. Positron emission happens when an up quark changes into a down quark, effectively converting a proton to a neutron. [ 5 ]
Nuclei which decay by positron emission may also decay by electron capture . For low-energy decays, electron capture is energetically favored by 2 m e c 2 = 1.022 MeV , since the final state has an electron removed rather than a positron added. As the energy of the decay goes up, so does the branching fraction of positron emission. However, if the energy difference is less than 2 m e c 2 , the positron emission cannot occur and electron capture is the sole decay mode. Certain otherwise electron-capturing isotopes (for instance, 7 Be ) are stable in galactic cosmic rays , because the electrons are stripped away and the decay energy is too small for positron emission.
A positron is ejected from the parent nucleus, but the daughter (Z−1) atom still has Z atomic electrons from the parent, i.e. the daughter is a negative ion (at least immediately after the positron emission). Since tables of masses are for atomic masses, Z A X → Z − 1 A Y + + 1 0 e + + − 1 0 e − {\displaystyle _{Z}^{A}{\textrm {X}}\rightarrow _{Z-1}^{A}{\textrm {Y}}+_{+1}^{0}{\textrm {e}}^{+}+_{-1}^{0}{\textrm {e}}^{-}} , and, since the mass of the positron is identical to that of the electron, the overall result is that the mass-energy of two electrons is required, and the β + decay is energetically possible if and only if the mass of the parent atom exceeds the mass of the daughter atom by at least two electron masses (2 m e c 2 = 1.022 MeV). [ 6 ]
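This atomic-mass bookkeeping can be illustrated for the carbon-11 decay discussed earlier; the atomic masses below are approximate tabulated values, and the comparison against 2 m_e c² = 1.022 MeV follows the rule just stated.

```python
# Sketch: is beta-plus decay of carbon-11 to boron-11 energetically allowed?
# Atomic masses (in unified atomic mass units) are approximate tabulated values.
U_TO_MEV = 931.494          # energy equivalent of 1 u
ELECTRON_MASS_MEV = 0.511

m_parent = 11.011434        # atomic mass of 11C (u), approximate
m_daughter = 11.009305      # atomic mass of 11B (u), approximate

mass_difference_mev = (m_parent - m_daughter) * U_TO_MEV
threshold_mev = 2 * ELECTRON_MASS_MEV                 # 1.022 MeV, as discussed above
print(mass_difference_mev > threshold_mev)            # True: positron emission is allowed
print(round(mass_difference_mev - threshold_mev, 2))  # ~0.96 MeV shared by e+ and neutrino
```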
Isotopes which increase in mass under the conversion of a proton to a neutron, or which decrease in mass by less than 2 m e , cannot spontaneously decay by positron emission. [ 6 ]
These isotopes are used in positron emission tomography , a technique used for medical imaging. The energy emitted depends on the isotope that is decaying; the figure of 0.96 MeV applies only to the decay of carbon-11 .
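As a rough numerical check of the energetics described above, one can compare the carbon-11/boron-11 atomic-mass difference with 2 m e c 2; the mass values below are standard tabulated atomic masses, quoted here purely for illustration:

# Energetics of beta-plus decay: M(parent) - M(daughter) must exceed 2 * m_e * c^2.
U_TO_MEV = 931.494          # 1 atomic mass unit in MeV/c^2
TWO_ME_C2 = 1.022           # 2 * electron rest energy, MeV

m_c11 = 11.0114336          # atomic mass of carbon-11, u (tabulated value)
m_b11 = 11.0093054          # atomic mass of boron-11, u (tabulated value)

delta = (m_c11 - m_b11) * U_TO_MEV
print(f"mass difference = {delta:.3f} MeV")                        # ~1.98 MeV > 1.022 MeV, so beta+ is allowed
print(f"energy released to the decay products ~ {delta - TWO_ME_C2:.2f} MeV")  # ~0.96 MeV, as quoted above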
The short-lived positron emitting isotopes 11 C (T 1 ⁄ 2 = 20.4 min ), 13 N (T 1 ⁄ 2 = 9.9 min ), 15 O (T 1 ⁄ 2 = 2.0 min ), and 18 F (T 1 ⁄ 2 = 109.8 min ) used for positron emission tomography are typically produced by proton or deuteron irradiation of natural or enriched targets. [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Positron_emission |
Positron emission tomography ( PET ) [ 1 ] is a functional imaging technique that uses radioactive substances known as radiotracers to visualize and measure changes in metabolic processes , and in other physiological activities including blood flow , regional chemical composition, and absorption.
Different tracers are used for various imaging purposes, depending on the target process within the body, such as:
PET is a common imaging technique , a medical scintillography technique used in nuclear medicine . A radiopharmaceutical —a radioisotope attached to a drug—is injected into the body as a tracer . When the radiopharmaceutical undergoes beta plus decay , a positron is emitted, and when the positron interacts with an ordinary electron, the two particles annihilate and two gamma rays are emitted in opposite directions. [ 2 ] These gamma rays are detected by two gamma cameras to form a three-dimensional image.
PET scanners can incorporate a computed tomography scanner (CT) and are known as PET–CT scanners . PET scan images can be reconstructed with the aid of a CT scan performed on the same scanner during the same session.
One of the disadvantages of a PET scanner is its high initial cost and ongoing operating costs. [ 3 ]
PET is both a medical and research tool used in pre-clinical and clinical settings. It is used heavily in the imaging of tumors and the search for metastases within the field of clinical oncology , and for the clinical diagnosis of certain diffuse brain diseases such as those causing various types of dementias . PET is valued as a research tool for studying the normal human brain and heart function, and for supporting drug development. PET is also used in pre-clinical studies using animals. It allows repeated investigations of the same subjects over time, so that subjects can act as their own control, and it substantially reduces the number of animals required for a given study. This approach allows research studies to reduce the sample size needed while increasing the statistical quality of their results. [ citation needed ]
Physiological processes lead to anatomical changes in the body. Since PET is capable of detecting biochemical processes as well as the expression of some proteins, it can provide molecular-level information well before any anatomic changes are visible. PET scanning does this by using radiolabelled molecular probes that have different rates of uptake depending on the type and function of the tissue involved. Regional tracer uptake in various anatomic structures can be visualized and relatively quantified in terms of injected positron emitter within a PET scan. [ citation needed ]
PET imaging is best performed using a dedicated PET scanner. [ citation needed ] It is also possible to acquire PET images using a conventional dual-head gamma camera fitted with a coincidence detector. The quality of gamma-camera PET imaging is lower, and the scans take longer to acquire. However, this method offers a low-cost on-site solution for institutions with low PET scanning demand. The alternatives would be to refer these patients to another center or to rely on a visit by a mobile scanner.
Alternative methods of medical imaging include single-photon emission computed tomography (SPECT), computed tomography (CT), magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI), and ultrasound . SPECT is an imaging technique similar to PET that uses radioligands to detect molecules in the body. SPECT is less expensive than PET but provides inferior image quality.
PET scanning with the radiotracer [ 18 F]fluorodeoxyglucose (FDG) is widely used in clinical oncology. FDG is a glucose analog that is taken up by glucose-using cells and phosphorylated by hexokinase (whose mitochondrial form is significantly elevated in rapidly growing malignant tumors). [ 4 ] Metabolic trapping of the radioactive glucose molecule allows it to be imaged with a PET scan. The concentration of imaged FDG tracer indicates tissue metabolic activity, as it corresponds to regional glucose uptake. FDG is used to explore the possibility of cancer spreading to other body sites ( cancer metastasis ). These FDG PET scans for detecting cancer metastasis are the most common in standard medical care (representing 90% of current scans). The same tracer may also be used for the diagnosis of types of dementia . Less often, other radioactive tracers , usually but not always labelled with fluorine-18 ( 18 F), are used to image the tissue concentration of different kinds of molecules of interest inside the body. [ citation needed ]
A typical dose of FDG used in an oncological scan has an effective radiation dose of 7.6 mSv . [ 5 ] Because the hydroxy group that is replaced by fluorine-18 to generate FDG is required for the next step in glucose metabolism in all cells, no further reactions occur in FDG. Furthermore, most tissues (with the notable exception of liver and kidneys) cannot remove the phosphate added by hexokinase. This means that FDG is trapped in any cell that takes it up until it decays, since phosphorylated sugars, due to their ionic charge, cannot exit from the cell. This results in intense radiolabeling of tissues with high glucose uptake, such as the normal brain, liver, kidneys, and most cancers, which have a higher glucose uptake than most normal tissue due to the Warburg effect . As a result, FDG-PET can be used for diagnosis, staging, and monitoring treatment of cancers, particularly in Hodgkin lymphoma , [ 6 ] non-Hodgkin lymphoma , [ 7 ] and lung cancer . [ 8 ] [ 9 ] [ 10 ]
A 2020 review of research on the use of PET for Hodgkin lymphoma found evidence that negative findings in interim PET scans are linked to higher overall survival and progression-free survival ; however, the certainty of the available evidence was moderate for survival, and very low for progression-free survival. [ 11 ]
A few other isotopes and radiotracers are slowly being introduced into oncology for specific purposes. For example, 11 C -labelled metomidate (11C-metomidate) has been used to detect tumors of adrenocortical origin. [ 12 ] [ 13 ] Also, fluorodopa (FDOPA) PET/CT (also called F-18-DOPA PET/CT) has proven to be a more sensitive alternative to the iobenguane (MIBG) scan for finding and localizing pheochromocytoma. [ 14 ] [ 15 ] [ 16 ]
PET imaging with oxygen-15 indirectly measures blood flow to the brain. In this method, increased radioactivity signal indicates increased blood flow which is assumed to correlate with increased brain activity. Because of its two-minute half-life , oxygen-15 must be piped directly from a medical cyclotron for such uses, which is difficult. [ 17 ]
PET imaging with FDG takes advantage of the fact that the brain is normally a rapid user of glucose. Standard FDG PET of the brain measures regional glucose use and can be used in neuropathological diagnosis.
Brain pathologies such as Alzheimer's disease (AD) greatly decrease brain metabolism of both glucose and oxygen in tandem. Therefore FDG PET of the brain may also be used to successfully differentiate Alzheimer's disease from other dementing processes, and also to make early diagnoses of Alzheimer's disease. The advantage of FDG PET for these uses is its much wider availability. In addition, some other fluorine-18 based radioactive tracers can be used to detect amyloid-beta plaques, a potential biomarker for Alzheimer's in the brain. These include florbetapir , flutemetamol , Pittsburgh compound B (PiB) and florbetaben . [ 18 ]
PET imaging with FDG can also be used for localization of "seizure focus". A seizure focus will appear as hypometabolic during an interictal scan. [ 19 ] Several radiotracers (i.e. radioligands) have been developed for PET that are ligands for specific neuroreceptor subtypes such as [ 11 C] raclopride , [ 18 F] fallypride and [ 18 F] desmethoxyfallypride for dopamine D 2 / D 3 receptors; [ 11 C] McN5652 and [ 11 C] DASB for serotonin transporters ; [ 18 F] mefway for serotonin 5HT 1A receptors ; and [ 18 F] nifene for nicotinic acetylcholine receptors or enzyme substrates (e.g. 6- FDOPA for the AADC enzyme ). These agents permit the visualization of neuroreceptor pools in the context of a plurality of neuropsychiatric and neurologic illnesses.
PET may also be used for the diagnosis of hippocampal sclerosis , which causes epilepsy. FDG, and the less common tracers flumazenil and MPPF have been explored for this purpose. [ 20 ] [ 21 ] If the sclerosis is unilateral (right hippocampus or left hippocampus), FDG uptake can be compared with the healthy side. Even if the diagnosis is difficult with MRI, it may be diagnosed with PET. [ 22 ] [ 23 ]
The development of a number of novel probes for non-invasive , in-vivo PET imaging of neuroaggregates in the human brain has brought amyloid imaging close to clinical use. The earliest amyloid imaging probes included [ 18 F]FDDNP, [ 24 ] developed at the University of California, Los Angeles , and Pittsburgh compound B (PiB), [ 25 ] developed at the University of Pittsburgh . These probes permit the visualization of amyloid plaques in the brains of Alzheimer's patients and could assist clinicians in making a positive clinical diagnosis of AD pre-mortem and aid in the development of novel anti-amyloid therapies. [ 11 C] polymethylpentene (PMP) is a novel radiopharmaceutical used in PET imaging to determine the activity of the acetylcholinergic neurotransmitter system by acting as a substrate for acetylcholinesterase . Post-mortem examination of AD patients has shown decreased levels of acetylcholinesterase. [ 11 C]PMP is used to map the acetylcholinesterase activity in the brain, which could allow for premortem diagnoses of AD and help to monitor AD treatments. [ 26 ] Avid Radiopharmaceuticals has developed and commercialized a compound called florbetapir that uses the longer-lasting radionuclide fluorine-18 to detect amyloid plaques using PET scans. [ 27 ]
To examine links between specific psychological processes or disorders and brain activity.
Numerous compounds that bind selectively to neuroreceptors of interest in biological psychiatry have been radiolabeled with C-11 or F-18. Radioligands that bind to dopamine receptors ( D 1 , [ 28 ] D 2 , [ 29 ] [ 30 ] reuptake transporter), serotonin receptors ( 5HT 1A , 5HT 2A , reuptake transporter), opioid receptors ( mu and kappa ), cholinergic receptors (nicotinic and muscarinic ) and other sites have been used successfully in studies with human subjects. Studies have been performed examining the state of these receptors in patients compared to healthy controls in schizophrenia , substance abuse , mood disorders and other psychiatric conditions. [ citation needed ]
PET can also be used in image guided surgery for the treatment of intracranial tumors, arteriovenous malformations and other surgically treatable conditions. [ 31 ]
Cardiology , atherosclerosis and vascular disease study: FDG PET can help in identifying hibernating myocardium . However, the cost-effectiveness of PET for this role versus SPECT is unclear. FDG PET imaging of atherosclerosis to detect patients at risk of stroke is also feasible. Also, it can help test the efficacy of novel anti-atherosclerosis therapies. [ 32 ]
Imaging infections with molecular imaging technologies can improve diagnosis and treatment follow-up. Clinically, PET has been widely used to image bacterial infections using FDG to identify the infection-associated inflammatory response. Three different PET contrast agents that have been developed to image bacterial infections in vivo are [ 18 F] maltose , [ 33 ] [ 18 F]maltohexaose, and [ 18 F]2-fluorodeoxy sorbitol (FDS). [ 34 ] FDS has the added benefit of being able to target only Enterobacteriaceae .
In pre-clinical trials, a new drug can be radiolabeled and injected into animals. Such scans are referred to as biodistribution studies. The information regarding drug uptake, retention and elimination over time can be obtained quickly and cost-effectively compared to the older technique of killing and dissecting the animals. Commonly, drug occupancy at a purported site of action can be inferred indirectly by competition studies between the unlabeled drug and radiolabeled compounds known to bind with specificity to the site. A single radioligand can be used this way to test many potential drug candidates for the same target. A related technique involves scanning with radioligands that compete with an endogenous (naturally occurring) substance at a given receptor to demonstrate that a drug causes the release of the natural substance. [ 35 ]
A miniature animal PET has been constructed that is small enough for a fully conscious rat to be scanned. [ 36 ] This RatCAP (rat conscious animal PET) allows animals to be scanned without the confounding effects of anesthesia . PET scanners designed specifically for imaging rodents , often referred to as microPET, as well as scanners for small primates , are marketed for academic and pharmaceutical research. The scanners are based on microminiature scintillators and amplified avalanche photodiodes (APDs) through a system that uses single-chip silicon photomultipliers . [ 1 ]
In 2018 the UC Davis School of Veterinary Medicine became the first veterinary center to employ a small clinical PET scanner for clinical (rather than research) animal diagnosis. Because of cost as well as the marginal utility of detecting cancer metastases in companion animals (the primary use of this modality), veterinary PET scanning is expected to be rarely available in the immediate future. [ citation needed ]
PET imaging has been used for imaging muscles and bones. FDG is the most commonly used tracer for imaging muscles, and NaF-F18 is the most widely used tracer for imaging bones.
PET is a feasible technique for studying skeletal muscles during exercise. [ 37 ] Also, PET can provide muscle activation data about deep-lying muscles (such as the vastus intermedius and the gluteus minimus ) compared to techniques like electromyography , which can be used only on superficial muscles directly under the skin. However, a disadvantage is that PET provides no timing information about muscle activation because it has to be measured after the exercise is completed. This is due to the time it takes for FDG to accumulate in the activated muscles. [ 38 ]
PET bone imaging with [ 18 F]sodium fluoride has been in use for 60 years for measuring regional bone metabolism and blood flow using static and dynamic scans. Researchers have recently started using [ 18 F]sodium fluoride to study bone metastasis as well. [ 39 ]
PET scanning is non-invasive, but it does involve exposure to ionizing radiation . [ 3 ] FDG, which is now the standard radiotracer used for PET neuroimaging and cancer patient management, [ 40 ] has an effective radiation dose of 14 mSv . [ 5 ]
The amount of radiation in FDG is similar to the effective dose of spending one year in the American city of Denver, Colorado (12.4 mSv/year). [ 41 ] For comparison, radiation dosage for other medical procedures range from 0.02 mSv for a chest X-ray and 6.5–8 mSv for a CT scan of the chest. [ 42 ] [ 43 ] Average civil aircrews are exposed to 3 mSv/year, [ 44 ] and the whole body occupational dose limit for nuclear energy workers in the US is 50 mSv/year. [ 45 ] For scale, see Orders of magnitude (radiation) .
For PET–CT scanning, the radiation exposure may be substantial—around 23–26 mSv (for a 70 kg person—dose is likely to be higher for higher body weights). [ 46 ] [ 47 ]
Radionuclides are incorporated either into compounds normally used by the body such as glucose (or glucose analogues), water , or ammonia , or into molecules that bind to receptors or other sites of drug action. Such labelled compounds are known as radiotracers . PET technology can be used to trace the biologic pathway of any compound in living humans (and many other species as well), provided it can be radiolabeled with a PET isotope. Thus, the specific processes that can be probed with PET are virtually limitless, and radiotracers for new target molecules and processes are continuing to be synthesized. As of this writing there are already dozens in clinical use and hundreds applied in research. As of 2020, by far the most commonly used radiotracer in clinical PET scanning is the carbohydrate derivative FDG. This radiotracer is used in essentially all scans for oncology and most scans in neurology, and thus makes up the large majority (>95%) of the radiotracer used in PET and PET–CT scanning.
Due to the short half-lives of most positron-emitting radioisotopes, the radiotracers have traditionally been produced using a cyclotron in close proximity to the PET imaging facility. The half-life of fluorine-18 is long enough that radiotracers labeled with fluorine-18 can be manufactured commercially at offsite locations and shipped to imaging centers. Recently rubidium-82 generators have become commercially available. [ 49 ] These contain strontium-82, which decays by electron capture to produce positron-emitting rubidium-82.
The use of positron-emitting isotopes of metals in PET scans has been reviewed, including elements not listed above, such as lanthanides. [ 50 ]
The isotope 89 Zr has been applied to the tracking and quantification of molecular antibodies with PET cameras (a method called "immuno-PET"). [ 51 ] [ 52 ] [ 53 ]
The biological half-life of antibodies is typically on the order of days, see daclizumab and erenumab by way of example. To visualize and quantify the distribution of such antibodies in the body, the PET isotope 89 Zr is well suited because its physical half-life (about 3.3 days) matches the typical biological half-life of antibodies.
To conduct the scan, a short-lived radioactive tracer isotope is injected into the living subject (usually into blood circulation). Each tracer atom has been chemically incorporated into a biologically active molecule. There is a waiting period while the active molecule becomes concentrated in tissues of interest. Then the subject is placed in the imaging scanner. The molecule most commonly used for this purpose is FDG, a sugar, for which the waiting period is typically an hour. During the scan, a record of tissue concentration is made as the tracer decays.
As the radioisotope undergoes positron emission decay (also known as positive beta decay ), it emits a positron, an antiparticle of the electron with opposite charge. The emitted positron travels in tissue for a short distance (typically less than 1 mm, but dependent on the isotope [ 54 ] ), during which time it loses kinetic energy, until it decelerates to a point where it can interact with an electron. [ 55 ] The encounter annihilates both electron and positron, producing a pair of annihilation ( gamma ) photons moving in approximately opposite directions. These are detected when they reach a scintillator in the scanning device, creating a burst of light which is detected by photomultiplier tubes or silicon avalanche photodiodes (Si APD). The technique depends on simultaneous or coincident detection of the pair of photons moving in approximately opposite directions (they would be exactly opposite in their center of mass frame , but the scanner has no way to know this, and so has a built-in slight direction-error tolerance). Photons that do not arrive in temporal "pairs" (i.e. within a timing-window of a few nanoseconds) are ignored.
The most significant fraction of electron–positron annihilations results in two 511 keV gamma photons being emitted at almost 180 degrees to each other. Hence, it is possible to localize their source along a straight line of coincidence (also called the line of response , or LOR ). In practice, the LOR has a non-zero width as the emitted photons are not exactly 180 degrees apart. If the resolving time of the detectors is less than 500 picoseconds rather than about 10 nanoseconds , it is possible to localize the event to a segment of a chord , whose length is determined by the detector timing resolution. As the timing resolution improves, the signal-to-noise ratio (SNR) of the image will improve, requiring fewer events to achieve the same image quality. This technology is not yet common, but it is available on some new systems. [ 56 ]
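To see how timing resolution translates into localization along the line of response, the half-length of the localized segment is roughly c·Δt/2. The small sketch below (plain Python, with illustrative timing values) makes the arithmetic explicit:

# Localization along the LOR from coincidence timing: dx ~ c * dt / 2.
C = 2.998e8  # speed of light, m/s

for dt_ps in (10_000, 500, 200):           # timing resolution in picoseconds
    dx_cm = C * (dt_ps * 1e-12) / 2 * 100  # half-length of the localized segment, cm
    print(f"{dt_ps:>6} ps  ->  ~{dx_cm:.1f} cm along the LOR")
# ~150 cm at 10 ns (no useful localization), ~7.5 cm at 500 ps, ~3 cm at 200 ps.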
The raw data collected by a PET scanner are a list of 'coincidence events' representing near-simultaneous detection (typically, within a window of 6 to 12 nanoseconds of each other) of annihilation photons by a pair of detectors. Each coincidence event represents a line in space connecting the two detectors along which the positron emission occurred (i.e., the line of response (LOR)).
Analytical techniques, much like the reconstruction of computed tomography (CT) and single-photon emission computed tomography (SPECT) data, are commonly used, although the data set collected in PET is much poorer than CT, so reconstruction techniques are more difficult. Coincidence events can be grouped into projection images, called sinograms . The sinograms are sorted by the angle of each view and tilt (for 3D images). The sinogram images are analogous to the projections captured by CT scanners, and can be reconstructed in a similar way. The statistics of data thereby obtained are much worse than those obtained through transmission tomography. A normal PET data set has millions of counts for the whole acquisition, while the CT can reach a few billion counts. This contributes to PET images appearing "noisier" than CT. Two major sources of noise in PET are scatter (a detected pair of photons, at least one of which was deflected from its original path by interaction with matter in the field of view, leading to the pair being assigned to an incorrect LOR) and random events (photons originating from two different annihilation events but incorrectly recorded as a coincidence pair because their arrival at their respective detectors occurred within a coincidence timing window).
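The following is a minimal sketch of how a list of coincidence events might be histogrammed into a sinogram, assuming each event has already been reduced to an LOR described by its angle and signed radial offset from the scanner centre; the function name, bin counts and field of view are illustrative, not taken from any particular scanner format:

import numpy as np

def bin_sinogram(angles_rad, offsets_mm, n_angle_bins=180, n_radial_bins=128, fov_mm=300.0):
    """Histogram coincidence events (one LOR per event) into a sinogram.

    angles_rad  -- LOR angle in [0, pi) for each event
    offsets_mm  -- signed perpendicular distance of the LOR from the centre
    """
    sino, _, _ = np.histogram2d(
        angles_rad, offsets_mm,
        bins=[n_angle_bins, n_radial_bins],
        range=[[0.0, np.pi], [-fov_mm / 2, fov_mm / 2]],
    )
    return sino  # shape (n_angle_bins, n_radial_bins), counts per (angle, offset) bin

# toy usage with random events
rng = np.random.default_rng(0)
sino = bin_sinogram(rng.uniform(0, np.pi, 10_000), rng.normal(0, 40, 10_000))
print(sino.shape, sino.sum())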
In practice, considerable pre-processing of the data is required – correction for random coincidences, estimation and subtraction of scattered photons, detector dead-time correction (after the detection of a photon, the detector must "cool down" again) and detector-sensitivity correction (for both inherent detector sensitivity and changes in sensitivity due to angle of incidence).
Filtered back projection (FBP) has been frequently used to reconstruct images from the projections. This algorithm has the advantage of being simple while having a low requirement for computing resources. Disadvantages are that shot noise in the raw data is prominent in the reconstructed images, and areas of high tracer uptake tend to form streaks across the image. Also, FBP treats the data deterministically – it does not account for the inherent randomness associated with PET data, thus requiring all the pre-reconstruction corrections described above.
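For illustration only, a filtered back projection of a sinogram can be run with scikit-image's radon/iradon routines; this is a generic CT-style pipeline standing in for a real PET reconstruction chain, using a synthetic phantom rather than measured data:

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# Build a toy "sinogram" by forward-projecting a phantom, then reconstruct with FBP.
image = shepp_logan_phantom()                    # 400x400 test image
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta)             # projections, one column per angle

recon_fbp = iradon(sinogram, theta=theta, filter_name="ramp")
error = np.sqrt(np.mean((recon_fbp - image) ** 2))
print(f"FBP RMS error vs. phantom: {error:.4f}")
# Streaks and amplified noise appear if the sinogram is made noisy,
# which is the behaviour described above.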
Statistical, likelihood-based approaches : Iterative expectation-maximization algorithms based on statistical likelihood models, [ 57 ] [ 58 ] such as the Shepp–Vardi algorithm, [ 59 ] are now the preferred method of reconstruction. These algorithms compute an estimate of the likely distribution of annihilation events that led to the measured data, based on statistical principles. The advantage is a better noise profile and resistance to the streak artifacts common with FBP, but the disadvantage is greater computer resource requirements. A further advantage of statistical image reconstruction techniques is that the physical effects that would need to be pre-corrected for when using an analytical reconstruction algorithm, such as scattered photons, random coincidences, attenuation and detector dead-time, can be incorporated into the likelihood model being used in the reconstruction, allowing for additional noise reduction. Iterative reconstruction has also been shown to result in improvements in the resolution of the reconstructed images, since more sophisticated models of the scanner physics can be incorporated into the likelihood model than those used by analytical reconstruction methods, allowing for improved quantification of the radioactivity distribution. [ 60 ]
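A bare-bones sketch of the expectation-maximization (MLEM) update described above, with the scanner modelled abstractly as a system matrix A mapping image voxels to LOR bins; the matrix, data and iteration count are illustrative, and this is not the Shepp–Vardi implementation itself:

import numpy as np

def mlem(A, y, n_iter=20, eps=1e-12):
    """Maximum-likelihood EM reconstruction for Poisson count data.

    A : (n_lors, n_voxels) system matrix, A[i, j] ~ probability that a decay
        in voxel j is detected in LOR bin i
    y : (n_lors,) measured coincidence counts
    """
    x = np.ones(A.shape[1])                    # start from a uniform image
    sens = A.sum(axis=0) + eps                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        expected = A @ x + eps                 # forward projection of current estimate
        x = x / sens * (A.T @ (y / expected))  # multiplicative EM update
    return x

# toy usage: 2 voxels, 3 LOR bins
A = np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]])
true_x = np.array([4.0, 1.0])
y = A @ true_x                                 # noiseless "measurements"
print(mlem(A, y, n_iter=200))                  # converges toward [4, 1]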
Research has shown that Bayesian methods that involve a Poisson likelihood function and an appropriate prior probability (e.g., a smoothing prior leading to total variation regularization or a Laplacian distribution leading to $\ell_{1}$-based regularization in a wavelet or other domain), such as via Ulf Grenander 's Sieve estimator [ 61 ] [ 62 ] or via Bayes penalty methods [ 63 ] [ 64 ] or via I.J. Good 's roughness method [ 65 ] [ 66 ] may yield superior performance to expectation-maximization-based methods which involve a Poisson likelihood function but do not involve such a prior. [ 67 ] [ 68 ] [ 69 ]
Attenuation correction : Quantitative PET imaging requires attenuation correction. [ 70 ] In earlier PET-only systems, attenuation correction is based on a transmission scan using a 68 Ge rotating rod source. [ 71 ]
Transmission scans directly measure attenuation values at 511 keV. [ 72 ] Attenuation occurs when photons emitted by the radiotracer inside the body are absorbed by intervening tissue between the detector and the emission of the photon. As different LORs must traverse different thicknesses of tissue, the photons are attenuated differentially. The result is that structures deep in the body are reconstructed as having falsely low tracer uptake. Contemporary scanners can estimate attenuation using integrated x-ray CT equipment, in place of earlier equipment that offered a crude form of CT using a gamma ray (positron-emitting) source and the PET detectors.
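The correction itself amounts to scaling each LOR by the exponential of the line integral of the linear attenuation coefficient μ along that LOR. A schematic version, assuming a μ-map already sampled at points along the line, might look like:

import numpy as np

def attenuation_correction_factor(mu_values_per_cm, step_cm):
    """Correction factor for one LOR.

    mu_values_per_cm -- linear attenuation coefficients (at 511 keV) sampled
                        at equally spaced points along the LOR, in 1/cm
    step_cm          -- spacing between samples, in cm
    """
    line_integral = np.sum(mu_values_per_cm) * step_cm   # approximate integral of mu dl
    survival = np.exp(-line_integral)                     # fraction of photon pairs not attenuated
    return 1.0 / survival                                 # multiply measured counts by this

# toy example: 20 cm of soft tissue, mu ~ 0.096 /cm at 511 keV
mu_map = np.full(200, 0.096)
print(attenuation_correction_factor(mu_map, step_cm=0.1))   # roughly a 6.8x correction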
While attenuation-corrected images are generally more faithful representations, the correction process is itself susceptible to significant artifacts. As a result, both corrected and uncorrected images are always reconstructed and read together.
2D/3D reconstruction : Early PET scanners had only a single ring of detectors, hence the acquisition of data and subsequent reconstruction was restricted to a single transverse plane. More modern scanners now include multiple rings, essentially forming a cylinder of detectors.
There are two approaches to reconstructing data from such a scanner:
3D techniques have better sensitivity (because more coincidences are detected and used) hence less noise, but are more sensitive to the effects of scatter and random coincidences, as well as requiring greater computer resources. The advent of sub-nanosecond timing resolution detectors affords better random coincidence rejection, thus favoring 3D image reconstruction.
Time-of-flight (TOF) PET : For modern systems with a higher time resolution (roughly 3 nanoseconds) a technique called "time-of-flight" is used to improve the overall performance. Time-of-flight PET makes use of very fast gamma-ray detectors and data processing systems that can more precisely determine the difference in time between the detection of the two photons. Although it is still impossible to localize the point of origin of the annihilation event exactly (currently to within about 10 cm), so that image reconstruction is still needed, the TOF technique gives a remarkable improvement in image quality, especially signal-to-noise ratio.
PET scans are increasingly read alongside CT or MRI scans, with the combination ( co-registration ) giving both anatomic and metabolic information (i.e., what the structure is, and what it is doing biochemically). Because PET imaging is most useful in combination with anatomical imaging, such as CT, modern PET scanners are now available with integrated high-end multi-detector-row CT scanners (PET–CT). Because the two scans can be performed in immediate sequence during the same session, with the patient not changing position between the two types of scans, the two sets of images are more precisely registered , so that areas of abnormality on the PET imaging can be more perfectly correlated with anatomy on the CT images. This is very useful in showing detailed views of moving organs or structures with higher anatomical variation, which is more common outside the brain.
At the Jülich Institute of Neurosciences and Biophysics, the world's largest PET–MRI device began operation in April 2009: a 9.4- tesla magnetic resonance tomograph (MRT) combined with a PET scanner. Presently, only the head and brain can be imaged at these high magnetic field strengths. [ 73 ]
For brain imaging, registration of CT, MRI and PET scans may be accomplished without the need for an integrated PET–CT or PET–MRI scanner by using a device known as the N-localizer . [ 31 ] [ 74 ] [ 75 ] [ 76 ]
The minimization of radiation dose to the subject is an attractive feature of the use of short-lived radionuclides. Besides its established role as a diagnostic technique, PET has an expanding role as a method to assess the response to therapy, in particular, cancer therapy, [ 77 ] where the risk to the patient from lack of knowledge about disease progress is much greater than the risk from the test radiation. Since the tracers are radioactive, the elderly [ dubious – discuss ] and pregnant women are unable to undergo the scan due to risks posed by radiation.
Limitations to the widespread use of PET arise from the high costs of cyclotrons needed to produce the short-lived radionuclides for PET scanning and the need for specially adapted on-site chemical synthesis apparatus to produce the radiopharmaceuticals after radioisotope preparation. Organic radiotracer molecules that will contain a positron-emitting radioisotope cannot be synthesized first and then the radioisotope prepared within them, because bombardment with a cyclotron to prepare the radioisotope destroys any organic carrier for it. Instead, the isotope must be prepared first, then the chemistry to prepare any organic radiotracer (such as FDG) accomplished very quickly, in the short time before the isotope decays. Few hospitals and universities are capable of maintaining such systems, and most clinical PET is supported by third-party suppliers of radiotracers that can supply many sites simultaneously. This limitation restricts clinical PET primarily to the use of tracers labelled with fluorine-18, which has a half-life of 110 minutes and can be transported a reasonable distance before use, or to rubidium-82 (used as rubidium-82 chloride ) with a half-life of 1.27 minutes, which is created in a portable generator and is used for myocardial perfusion studies. In recent years a few on-site cyclotrons with integrated shielding and "hot labs" (automated chemistry labs that are able to work with radioisotopes) have begun to accompany PET units to remote hospitals. The presence of the small on-site cyclotron promises to expand in the future as the cyclotrons shrink in response to the high cost of isotope transportation to remote PET machines. [ 78 ] In recent years [ when? ] the shortage of PET scans has been alleviated in the US, as rollout of radiopharmacies to supply radioisotopes has grown 30 percent per year. [ 79 ]
Because the half-life of fluorine-18 is about two hours, the prepared dose of a radiopharmaceutical bearing this radionuclide will undergo multiple half-lives of decay during the working day. This necessitates frequent recalibration of the remaining dose (determination of activity per unit volume) and careful planning with respect to patient scheduling.
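A quick decay calculation of the kind used for such recalibration, assuming only the roughly 110-minute half-life of fluorine-18 and an arbitrary starting activity, looks like:

# Remaining activity of an F-18 dose over a working day: A(t) = A0 * 0.5 ** (t / T_half)
T_HALF_MIN = 109.8          # fluorine-18 half-life in minutes
A0_MBQ = 400.0              # activity at calibration time (illustrative value)

for elapsed_min in (0, 60, 110, 220, 440):
    a = A0_MBQ * 0.5 ** (elapsed_min / T_HALF_MIN)
    print(f"t = {elapsed_min:>3} min : {a:6.1f} MBq")
# After two half-lives (~220 min) only about a quarter of the calibrated dose remains.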
The concept of emission and transmission tomography was introduced by David E. Kuhl , Luke Chapman and Roy Edwards in the late 1950s. Their work would lead to the design and construction of several tomographic instruments at Washington University School of Medicine and later at the University of Pennsylvania . [ 80 ] In the 1960s and 70s tomographic imaging instruments and techniques were further developed by Michel Ter-Pogossian , Michael E. Phelps , Edward J. Hoffman and others at Washington University School of Medicine . [ 81 ] [ 82 ]
Work by Gordon Brownell, Charles Burnham and their associates at the Massachusetts General Hospital beginning in the 1950s contributed significantly to the development of PET technology and included the first demonstration of annihilation radiation for medical imaging. [ 83 ] Their innovations, including the use of light pipes and volumetric analysis, have been important in the deployment of PET imaging. In 1961, James Robertson and his associates at Brookhaven National Laboratory built the first single-plane PET scanner, nicknamed the "head-shrinker". [ 84 ]
One of the factors most responsible for the acceptance of positron imaging was the development of radiopharmaceuticals. In particular, the development of labeled 2-fluorodeoxy-D-glucose (FDG—first synthesized and described by two Czech scientists from Charles University in Prague in 1968) [ 85 ] by the Brookhaven group under the direction of Al Wolf and Joanna Fowler was a major factor in expanding the scope of PET imaging. [ 86 ] The compound was first administered to two normal human volunteers by Abass Alavi in August 1976 at the University of Pennsylvania. Brain images obtained with an ordinary (non-PET) nuclear scanner demonstrated the concentration of FDG in that organ. Later, the substance was used in dedicated positron tomographic scanners, to yield the modern procedure.
The logical extension of positron instrumentation was a design using two two-dimensional arrays. PC-I was the first instrument using this concept and was designed in 1968, completed in 1969 and reported in 1972. The first applications of PC-I in tomographic mode as distinguished from the computed tomographic mode were reported in 1970. [ 87 ] It soon became clear to many of those involved in PET development that a circular or cylindrical array of detectors was the logical next step in PET instrumentation. Although many investigators took this approach, James Robertson [ 88 ] and Zang-Hee Cho [ 89 ] were the first to propose a ring system that has become the prototype of the current shape of PET. The first multislice cylindrical array PET scanner was completed in 1974 at the Mallinckrodt Institute of Radiology by the group led by Ter-Pogossian. [ 90 ]
The PET–CT scanner, attributed to David Townsend and Ronald Nutt, was named by Time as the medical invention of the year in 2000. [ 91 ]
As of August 2008, Cancer Care Ontario reports that the current average incremental cost to perform a PET scan in the province is CA$1,000–1,200 per scan. This includes the cost of the radiopharmaceutical and a stipend for the physician reading the scan. [ 92 ]
In the United States, a PET scan is estimated to be US$1,500–5,000. [ citation needed ]
In England, the National Health Service reference cost (2015–2016) for an adult outpatient PET scan is £798. [ 93 ]
In Australia, as of July 2018, the Medicare Benefits Schedule Fee for whole body FDG PET ranges from A$953 to A$999, depending on the indication for the scan. [ 94 ]
The overall performance of PET systems can be evaluated by quality control tools such as the Jaszczak phantom . [ 95 ] | https://en.wikipedia.org/wiki/Positron_emission_tomography |
Positronium ( Ps ) is a system consisting of an electron and its anti-particle , a positron , bound together into an exotic atom , specifically an onium . Unlike hydrogen, the system has no protons . The system is unstable: the two particles annihilate each other to predominantly produce two or three gamma-rays , depending on the relative spin states. The energy levels of the two particles are similar to that of the hydrogen atom (which is a bound state of a proton and an electron). However, because of the reduced mass, the frequencies of the spectral lines are less than half of those for the corresponding hydrogen lines.
The mass of positronium is 1.022 MeV, which is twice the electron mass minus the binding energy of a few eV. The lowest energy orbital state of positronium is 1S, and like with hydrogen, it has a hyperfine structure arising from the relative orientations of the spins of the electron and the positron.
The singlet state , 1 S 0 , with antiparallel spins ( S = 0, M s = 0) is known as para -positronium ( p -Ps). It has a mean lifetime of 0.12 ns and decays preferentially into two gamma rays with energy of 511 keV each (in the center-of-mass frame ). Para -positronium can decay into any even number of photons (2, 4, 6, ...), but the probability quickly decreases with the number: the branching ratio for decay into 4 photons is 1.439(2) × 10 −6 . [ 1 ]
The para -positronium lifetime in vacuum is approximately [ 1 ]

$t_{0} = \frac{2\hbar}{m_{\mathrm{e}} c^{2} \alpha^{5}} = 0.1244~\mathrm{ns}.$
The triplet states , 3 S 1 , with parallel spins ( S = 1, M s = −1, 0, 1) are known as ortho -positronium ( o -Ps), and have an energy that is approximately 0.001 eV higher than that of the singlet. [ 1 ] These states have a mean lifetime of 142.05 ± 0.02 ns , [ 2 ] and the leading decay is into three gammas. Other modes of decay are negligible; for instance, the five-photon mode has a branching ratio of ≈ 10 −6 . [ 3 ]
The ortho -positronium lifetime in vacuum can be calculated approximately as: [ 1 ]

$t_{1} = \frac{\tfrac{1}{2}\,9h}{2 m_{\mathrm{e}} c^{2} \alpha^{6} (\pi^{2} - 9)} = 138.6~\mathrm{ns}.$
However more accurate calculations with corrections to O (α 2 ) yield a value of 7.040 μs −1 for the decay rate, corresponding to a lifetime of 142 ns . [ 4 ] [ 5 ]
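Both leading-order lifetime formulas quoted above can be evaluated directly from standard physical constants; the following sketch with scipy.constants simply reproduces the 0.1244 ns and 138.6 ns figures:

from scipy.constants import hbar, h, m_e, c, alpha, pi

mec2 = m_e * c**2                                              # electron rest energy, joules

t_para = 2 * hbar / (mec2 * alpha**5)                          # para-positronium, leading order
t_ortho = (0.5 * 9 * h) / (2 * mec2 * alpha**6 * (pi**2 - 9))  # ortho-positronium, leading order

print(f"para-Ps  lifetime ~ {t_para * 1e9:.4f} ns")   # ~0.1244 ns
print(f"ortho-Ps lifetime ~ {t_ortho * 1e9:.1f} ns")  # ~138.6 ns (142 ns with higher-order terms)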
Positronium in the 2S state is metastable , having a lifetime of 1100 ns against annihilation . [ 6 ] Positronium created in such an excited state will quickly cascade down to the ground state, where annihilation will occur more quickly.
Measurements of these lifetimes and energy levels have been used in precision tests of quantum electrodynamics , confirming quantum electrodynamics (QED) predictions to high precision. [ 1 ] [ 7 ] [ 8 ]
Annihilation can proceed via a number of channels, each producing gamma rays with a total energy of 1022 keV (the sum of the electron and positron mass-energy). Usually two or three photons are produced, with up to five gamma ray photons recorded from a single annihilation.
The annihilation into a neutrino –antineutrino pair is also possible, but the probability is predicted to be negligible. The branching ratio for o -Ps decay for this channel is 6.2 × 10 −18 ( electron neutrino –antineutrino pair) and 9.5 × 10 −21 (for other flavour) [ 3 ] in predictions based on the Standard Model, but it can be increased by non-standard neutrino properties, like relatively high magnetic moment . The experimental upper limits on branching ratio for this decay (as well as for a decay into any "invisible" particles) are < 4.3 × 10 −7 for p -Ps and < 4.2 × 10 −7 for o -Ps. [ 2 ]
While precise calculation of positronium energy levels uses the Bethe–Salpeter equation or the Breit equation , the similarity between positronium and hydrogen allows a rough estimate. In this approximation, the energy levels are different because of a different effective mass, μ , in the energy equation (see electron energy levels for a derivation):

$E_{n} = -\frac{\mu q_{\mathrm{e}}^{4}}{8 h^{2} \varepsilon_{0}^{2}}\,\frac{1}{n^{2}},$

where:
Thus, for positronium, the reduced mass differs from that of the electron only by a factor of 2. This causes the energy levels also to be roughly half of what they are for the hydrogen atom.
So finally, the energy levels of positronium are given by

$E_{n} = -\frac{1}{2}\,\frac{m_{\mathrm{e}} q_{\mathrm{e}}^{4}}{8 h^{2} \varepsilon_{0}^{2}}\,\frac{1}{n^{2}} = \frac{-6.8~\mathrm{eV}}{n^{2}}.$
The lowest energy level of positronium ( n = 1 ) is −6.8 eV . The next level is −1.7 eV . The negative sign is a convention that implies a bound state . Positronium can also be described by a particular form of the two-body Dirac equation : two particles with a Coulomb interaction can be exactly separated in the (relativistic) center-of-momentum frame , and the resulting ground-state energy has been obtained very accurately using the finite element methods of Janine Shertzer. [ 9 ] Their results led to the discovery of anomalous states. [ 10 ] [ 11 ] The Dirac equation whose Hamiltonian comprises two Dirac particles and a static Coulomb potential is not relativistically invariant. But if one adds the $1/c^{2n}$ (or $\alpha^{2n}$ , where α is the fine-structure constant ) terms, where n = 1, 2, ..., then the result is relativistically invariant. Only the leading term is included. The α 2 contribution is the Breit term; workers rarely go to α 4 because at α 3 one has the Lamb shift, which requires quantum electrodynamics. [ 9 ]
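The hydrogen-like scaling can be checked numerically: with the reduced mass μ = m e /2, the Rydberg-type formula above gives half the corresponding hydrogen values. A small sketch using scipy.constants:

from scipy.constants import m_e, e, h, epsilon_0

mu = m_e / 2                        # reduced mass of the electron-positron pair

def E_n(n):
    """Bohr-model energy level of positronium, in eV."""
    E_joule = -mu * e**4 / (8 * h**2 * epsilon_0**2 * n**2)
    return E_joule / e

for n in (1, 2, 3):
    print(f"n = {n}:  {E_n(n):.2f} eV")
# n = 1: -6.80 eV, n = 2: -1.70 eV, i.e. half of the corresponding hydrogen levels.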
After a radioactive atom in a material undergoes a β + decay (positron emission), the resulting high-energy positron slows down by colliding with atoms, and eventually annihilates with one of the many electrons in the material. It may however first form positronium before the annihilation event. The understanding of this process is of some importance in positron emission tomography . Approximately: [ 12 ] [ 13 ]
The Croatian physicist Stjepan Mohorovičić predicted the existence of positronium in a 1934 article published in Astronomische Nachrichten , in which he called it the "electrum". [ 15 ] Other sources incorrectly credit Carl Anderson as having predicted its existence in 1932 while at Caltech . [ 16 ] It was experimentally discovered by Martin Deutsch at MIT in 1951 and became known as positronium. [ 16 ] Many subsequent experiments have precisely measured its properties and verified predictions of quantum electrodynamics.
A discrepancy known as the ortho-positronium lifetime puzzle persisted for some time, but was resolved with further calculations and measurements. [ 17 ] The earlier measurements were in error because they measured the lifetime of unthermalised positronium, which was produced at only a small rate; this had yielded lifetimes that were too long. Also, calculations using relativistic quantum electrodynamics are difficult, so they had been done only to first order. Corrections involving higher orders were then calculated in a non-relativistic formulation of quantum electrodynamics. [ 4 ]
In 2024, the AEgIS collaboration at CERN was the first to cool positronium by laser light, leaving it available for experimental use. The substance was brought to −100 °C (−148 °F) using laser cooling . [ 18 ] [ 19 ]
Molecular bonding was predicted for positronium. [ 20 ] Molecules of positronium hydride (PsH) can be made. [ 21 ] Positronium can also form a cyanide and can form bonds with halogens or lithium. [ 22 ]
The first observation of di-positronium ( Ps 2 ) molecules —molecules consisting of two positronium atoms—was reported on 12 September 2007 by David Cassidy and Allen Mills from University of California, Riverside . [ 23 ] [ 24 ] [ 25 ]
Unlike muonium , positronium does not have a nucleus analogue, because the electron and the positron have equal masses. [ 26 ] Consequently, while muonium tends to behave like a light isotope of hydrogen, [ 27 ] positronium shows large differences in size, polarisability, and binding energy from hydrogen. [ 26 ]
The events in the early universe leading to baryon asymmetry predate the formation of atoms (including exotic varieties such as positronium) by around a third of a million years, so no positronium atoms occurred then.
Likewise, the naturally occurring positrons in the present day result from high-energy interactions such as in cosmic ray –atmosphere interactions, and so are too hot (thermally energetic) to form electrical bonds before annihilation . | https://en.wikipedia.org/wiki/Positronium |
Positronium hydride , or hydrogen positride, [ 3 ] is an exotic molecule consisting of a hydrogen atom bound to an exotic atom of positronium (that is, a combination of an electron and a positron). Its formula is PsH . It was predicted to exist in 1951 by A. Ore, [ 4 ] and subsequently studied theoretically, but was not observed until 1990. R. Pareja and R. Gonzalez, from Madrid, trapped positronium in hydrogen-laden magnesia crystals. The trap was prepared by Yok Chen from the Oak Ridge National Laboratory . [ 5 ] In this experiment the positrons were thermalized so that they were not traveling at high speed, and they then reacted with H − ions in the crystal. [ 6 ] In 1992 it was created in an experiment done by David M. Schrader and F.M. Jacobsen and others at Aarhus University in Denmark . The researchers made the positronium hydride molecules by firing intense bursts of positrons into methane , which has the highest density of hydrogen atoms. Upon slowing down, the positrons were captured by ordinary electrons to form positronium atoms which then reacted with hydrogen atoms from the methane. [ 7 ]
PsH is constructed from one proton, two electrons, and one positron. The binding energy is 1.1 ± 0.2 eV . The lifetime of the molecule is 0.65 nanoseconds . The lifetime of positronium deuteride is indistinguishable from the normal hydride. [ 6 ]
The decay of positronium is easily observed by detecting the two 511 keV gamma ray photons emitted in the decay. The energy of the photons from positronium should differ slightly by the binding energy of the molecule. However, this has not yet been detected. [ 3 ]
The structure of PsH is that of a diatomic molecule, with a chemical bond between the two positively charged centres. The electrons are more concentrated around the proton. [ 2 ] Predicting the properties of PsH is a four-body Coulomb problem. Calculated using the stochastic variational method, the size of the molecule is larger than that of dihydrogen , which has a bond length of 0.7413 Å . [ 8 ] In PsH the positron and proton are separated on average by 3.66 a 0 (1.94 Å). The positronium in the molecule is swollen compared to the positronium atom, increasing to 3.48 a 0 compared to 3 a 0 . The average distance of the electrons from the proton is larger than in the dihydrogen molecule, at 2.31 a 0 with the maximum density at 2.8 au. [ 3 ]
Due to its short lifetime, establishing the chemistry of positronium hydride poses difficulties, but theoretical calculations can predict outcomes. One method of formation is through alkali metal hydrides reacting with positrons. Molecules with dipole moments greater than 1.625 debye are predicted to attract and hold positrons in a bound state. Crawford's model predicts this positron capture. In the case of lithium hydride , sodium hydride and potassium hydride molecules, this adduct decomposes, forming positronium hydride and the positive alkali ion. [ 9 ]
PsH is a simple exotic compound . Other compounds of positronium are possible by the reactions e + + AB → PsA + B + . [ 10 ] Other substances that contain positronium are di-positronium and the ion Ps − with two electrons. Molecules of Ps with normal matter include halides and cyanide. [ 2 ]
Positronium antihydride ( $\mathrm{Ps}\bar{\mathrm{H}}$ ) contains antihydrogen instead of hydrogen. It can be made as the anti-hydride ion ( $\bar{\mathrm{H}}^{+}$ ) reacts with positronium (Ps)
The GBAR experiment uses the similar reaction $\bar{\mathrm{H}} + \mathrm{Ps} \rightarrow \bar{\mathrm{H}}^{+} + e^{-}$ , which cannot produce positronium antihydride, as there is too much energy left over for positronium antihydride to be stable. [ 11 ] | https://en.wikipedia.org/wiki/Positronium_hydride |
In algebra, Posner's theorem states that given a prime polynomial identity algebra A with center Z , the ring $A \otimes_{Z} Z_{(0)}$ is a central simple algebra over $Z_{(0)}$ , the field of fractions of Z . [ 1 ] It is named after Ed Posner .
| https://en.wikipedia.org/wiki/Posner's_theorem |
A possum belly , on a drilling rig , is a metal container at the head of the shale shaker that receives the flow of drilling fluid and is directly connected to and at the end of the flow line . A possum belly may also be referred to as a distribution box or flowline trap. [ 1 ]
The purpose of the possum belly is to slow the flow of the drilling fluid (after it has gained momentum from coming down through the flow line ) so that it does not shoot off of the shale shakers .
Possum bellies are generally used when bentonite or another form of "mud" is being used. During the use of freshwater or brine water , the flow line generally either goes straight to the reserve pit, or into the steel pits .
The possum belly derives its name from the similarity of its appearance to the low hanging abdomen of the possum . | https://en.wikipedia.org/wiki/Possum_belly |
In computability theory Post's theorem , named after Emil Post , describes the connection between the arithmetical hierarchy and the Turing degrees .
The statement of Post's theorem uses several concepts relating to definability and recursion theory . This section gives a brief overview of these concepts, which are covered in depth in their respective articles.
The arithmetical hierarchy classifies certain sets of natural numbers that are definable in the language of Peano arithmetic. A formula is said to be $\Sigma_{m}^{0}$ if it is an existential statement in prenex normal form (all quantifiers at the front) with $m$ alternations between existential and universal quantifiers applied to a formula with bounded quantifiers only. Formally, a formula $\phi(s)$ in the language of Peano arithmetic is a $\Sigma_{m}^{0}$ formula if it is of the form

$\exists n_{1} \forall n_{2} \exists n_{3} \cdots Q n_{m}\, \rho(n_{1}, \ldots, n_{m}, s),$

where $\rho$ contains only bounded quantifiers and $Q$ is $\forall$ if $m$ is even and $\exists$ if $m$ is odd.
A set of natural numbers $A$ is said to be $\Sigma_{m}^{0}$ if it is definable by a $\Sigma_{m}^{0}$ formula, that is, if there is a $\Sigma_{m}^{0}$ formula $\phi(s)$ such that each number $n$ is in $A$ if and only if $\phi(n)$ holds. It is known that if a set is $\Sigma_{m}^{0}$ then it is $\Sigma_{n}^{0}$ for any $n > m$ , but for each $m$ there is a $\Sigma_{m+1}^{0}$ set that is not $\Sigma_{m}^{0}$ . Thus the number of quantifier alternations required to define a set gives a measure of the complexity of the set.
Post's theorem uses the relativized arithmetical hierarchy as well as the unrelativized hierarchy just defined. A set $A$ of natural numbers is said to be $\Sigma_{m}^{0}$ relative to a set $B$ , written $\Sigma_{m}^{0,B}$ , if $A$ is definable by a $\Sigma_{m}^{0}$ formula in an extended language that includes a predicate for membership in $B$ .
While the arithmetical hierarchy measures definability of sets of natural numbers, Turing degrees measure the level of uncomputability of sets of natural numbers. A set $A$ is said to be Turing reducible to a set $B$ , written $A \leq_{T} B$ , if there is an oracle Turing machine that, given an oracle for $B$ , computes the characteristic function of $A$ .
The Turing jump of a set $A$ is a form of the Halting problem relative to $A$ . Given a set $A$ , the Turing jump $A'$ is the set of indices of oracle Turing machines that halt on input $0$ when run with oracle $A$ . It is known that every set $A$ is Turing reducible to its Turing jump, but the Turing jump of a set is never Turing reducible to the original set.
Post's theorem uses finitely iterated Turing jumps. For any set $A$ of natural numbers, the notation $A^{(n)}$ indicates the $n$-fold iterated Turing jump of $A$ . Thus $A^{(0)}$ is just $A$ , and $A^{(n+1)}$ is the Turing jump of $A^{(n)}$ .
Post's theorem establishes a close connection between the arithmetical hierarchy and the Turing degrees of the form $\emptyset^{(n)}$ , that is, finitely iterated Turing jumps of the empty set. (The empty set could be replaced with any other computable set without changing the truth of the theorem.)
Post's theorem states:
Post's theorem has many corollaries that expose additional relationships between the arithmetical hierarchy and the Turing degrees. These include:
The operation of a Turing machine $T$ on input $n$ can be formalized logically in first-order arithmetic . For example, we may use symbols $A_{k}$ , $B_{k}$ , and $C_{k}$ for the tape configuration, machine state and location along the tape after $k$ steps, respectively. $T$'s transition system determines the relation between $(A_{k}, B_{k}, C_{k})$ and $(A_{k+1}, B_{k+1}, C_{k+1})$ ; their initial values (for $k = 0$ ) are the input, the initial state and zero, respectively. The machine halts if and only if there is a number $k$ such that $B_{k}$ is the halting state.
The exact relation depends on the specific implementation of the notion of Turing machine (e.g. their alphabet, allowed mode of motion along the tape, etc.)
In case $T$ halts at time $n_{1}$ , the relation between $(A_{k}, B_{k}, C_{k})$ and $(A_{k+1}, B_{k+1}, C_{k+1})$ must be satisfied only for $k$ bounded from above by $n_{1}$ .
Thus there is a formula $\varphi(n, n_{1})$ in first-order arithmetic with no unbounded quantifiers , such that $T$ halts on input $n$ at time $n_{1}$ at most if and only if $\varphi(n, n_{1})$ is satisfied.
For example, for a prefix-free Turing machine with binary alphabet and no blank symbol, we may use the following notations:
For a prefix-free Turing machine we may use, for input $n$ , the initial tape configuration $t(n) = \mathrm{cat}(2^{\lceil \log_{2} n \rceil} - 1, 0, n)$ , where cat stands for concatenation; thus $t(n)$ is a $\log(n)$-length string of 1s followed by $0$ and then by $n$ .
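A small helper showing this prefix-free encoding as a bit string (purely illustrative; the exact convention depends on the machine model being used):

import math

def t(n: int) -> str:
    """Initial tape configuration t(n) = cat(2**ceil(log2 n) - 1, 0, n), as a bit string."""
    ones = "1" * math.ceil(math.log2(n))   # binary form of 2**ceil(log2 n) - 1 is a run of ones
    return ones + "0" + format(n, "b")     # separator 0, then n written in binary

for n in (2, 5, 6, 13):
    print(n, "->", t(n))
# e.g. 5 -> 1110101 : the leading ones announce (roughly) how many bits of input follow the 0.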
The operation of the Turing machine at the first $n_{1}$ steps can thus be written as the conjunction of the initial conditions and the following formulas, quantified over $k$ for all $k < n_{1}$ :
T halts on input $n$ at time $n_{1}$ at most if and only if $\varphi(n, n_{1})$ is satisfied, where:
This is a first-order arithmetic formula with no unbounded quantifiers, i.e. it is in $\Sigma_{0}^{0}$ .
Let S {\displaystyle S} be a set that can be recursively enumerated by a Turing machine . Then there is a Turing machine T {\displaystyle T} that for every n {\displaystyle n} in S {\displaystyle S} , T {\displaystyle T} halts when given n {\displaystyle n} as an input.
This can be formalized by the first-order arithmetical formula presented above. The members of S {\displaystyle S} are the numbers n {\displaystyle n} satisfying the following formula:
∃ n 1 : φ ( n , n 1 ) {\displaystyle \exists n_{1}:\varphi (n,n_{1})}
This formula is in Σ 1 0 {\displaystyle \Sigma _{1}^{0}} . Therefore, S {\displaystyle S} is in Σ 1 0 {\displaystyle \Sigma _{1}^{0}} .
Thus every recursively enumerable set is in Σ 1 0 {\displaystyle \Sigma _{1}^{0}} .
The converse is true as well: for every formula φ ( n ) {\displaystyle \varphi (n)} in Σ 1 0 {\displaystyle \Sigma _{1}^{0}} with k {\displaystyle k} existential quantifiers, we may enumerate the k {\displaystyle k} –tuples of natural numbers and run a Turing machine that goes through all of them until it finds a tuple satisfying the formula. This Turing machine halts on precisely the set of natural numbers satisfying φ ( n ) {\displaystyle \varphi (n)} , and thus enumerates its corresponding set.
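A sketch of this search in Python, under the assumption that the bounded-quantifier matrix of the formula is given as a total, computable predicate (the enumeration scheme and names are illustrative):

```python
from itertools import count, product

def semi_decide_sigma1(matrix, n, k):
    """Semi-decide 'there exist x1..xk with matrix(n, x1..xk)', where matrix
    is a total, computable predicate (the bounded-quantifier part of a
    Sigma-0-1 formula). Halts iff a witness tuple exists; otherwise loops."""
    for bound in count():                              # enumerate k-tuples by their maximum entry
        for tup in product(range(bound + 1), repeat=k):
            if max(tup) == bound and matrix(n, *tup):
                return tup                             # witness found: the machine halts

# Toy example: n belongs to the set iff some m satisfies n == m * m
# semi_decide_sigma1(lambda n, m: n == m * m, 49, 1)   # returns (7,)
```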
Similarly, the operation of an oracle machine T {\displaystyle T} with an oracle O that halts after at most n 1 {\displaystyle n_{1}} steps on input n {\displaystyle n} can be described by a first-order formula φ O ( n , n 1 ) {\displaystyle \varphi _{O}(n,n_{1})} , except that the formula φ 1 ( n , n 1 ) {\displaystyle \varphi _{1}(n,n_{1})} now includes:
If the oracle is for a decision problem, O m {\displaystyle O_{m}} is always "Yes" or "No", which we may formalize as 0 or 1. Suppose the decision problem itself can be formalized by a first-order arithmetic formula ψ O ( m ) {\displaystyle \psi ^{O}(m)} .
Then T {\displaystyle T} halts on n {\displaystyle n} after at most n 1 {\displaystyle n_{1}} steps if and only if the following formula is satisfied: φ O ( n , n 1 ) = ∀ m < 2 n 1 : ( ( ψ O ( m ) → ( O m = 1 ) ) ∧ ( ¬ ψ O ( m ) → ( O m = 0 ) ) ) ∧ φ O 1 ( n , n 1 ) {\displaystyle \varphi _{O}(n,n_{1})=\forall m<2^{n_{1}}:((\psi ^{O}(m)\rightarrow (O_{m}=1))\land (\lnot \psi ^{O}(m)\rightarrow (O_{m}=0)))\land {\varphi _{O}}_{1}(n,n_{1})}
where φ O 1 ( n , n 1 ) {\displaystyle {\varphi _{O}}_{1}(n,n_{1})} is a first-order formula with no unbounded quantifiers.
If O is an oracle to the halting problem of a machine T ′ {\displaystyle T'} , then ψ O ( m ) {\displaystyle \psi ^{O}(m)} is the same as "there exists m 1 {\displaystyle m_{1}} such that T ′ {\displaystyle T'} starting with input m is at the halting state after m 1 {\displaystyle m_{1}} steps".
Thus: ψ O ( m ) = ∃ m 1 : ψ H ( m , m 1 ) {\displaystyle \psi ^{O}(m)=\exists m_{1}:\psi _{H}(m,m_{1})} where ψ H ( m , m 1 ) {\displaystyle \psi _{H}(m,m_{1})} is a first-order formula that formalizes T ′ {\displaystyle T'} . If T ′ {\displaystyle T'} is a Turing machine (with no oracle), ψ H ( m , m 1 ) {\displaystyle \psi _{H}(m,m_{1})} is in Σ 0 0 = Π 0 0 {\displaystyle \Sigma _{0}^{0}=\Pi _{0}^{0}} (i.e. it has no unbounded quantifiers).
Since there are only finitely many numbers m satisfying m < 2 n 1 {\displaystyle m<2^{n_{1}}} , we may choose the same number of steps for all of them: there is a number m 1 {\displaystyle m_{1}} such that T ′ {\displaystyle T'} halts within m 1 {\displaystyle m_{1}} steps precisely on those inputs m < 2 n 1 {\displaystyle m<2^{n_{1}}} for which it halts at all.
Moving to prenex normal form , we get that the oracle machine halts on input n {\displaystyle n} if and only if the following formula is satisfied: φ ( n ) = ∃ n 1 ∃ m 1 ∀ m 2 : ∀ m < 2 n 1 : ( ( ψ H ( m , m 2 ) → ( O m = 1 ) ) ∧ ( ¬ ψ H ( m , m 1 ) → ( O m = 0 ) ) ) ∧ φ O 1 ( n , n 1 ) {\displaystyle \varphi (n)=\exists n_{1}\exists m_{1}\forall m_{2}:\forall m<2^{n_{1}}:((\psi _{H}(m,m_{2})\rightarrow (O_{m}=1))\land (\lnot \psi _{H}(m,m_{1})\rightarrow (O_{m}=0)))\land {\varphi _{O}}_{1}(n,n_{1})}
(informally, there is a "maximal number of steps" m 1 {\displaystyle m_{1}} such that every queried computation that does not halt within the first m 1 {\displaystyle m_{1}} steps does not halt at all; while, for every m 2 {\displaystyle m_{2}} , any queried computation that halts within m 2 {\displaystyle m_{2}} steps is indeed recorded as halting).
Note that we may replace both n 1 {\displaystyle n_{1}} and m 1 {\displaystyle m_{1}} by a single number - their maximum - without changing the truth value of φ ( n ) {\displaystyle \varphi (n)} . Thus we may write: φ ( n ) = ∃ n 1 ∀ m 2 : ∀ m < 2 n 1 : ( ( ψ H ( m , m 2 ) → ( O m = 1 ) ) ∧ ( ¬ ψ H ( m , n 1 ) → ( O m = 0 ) ) ) ∧ φ O 1 ( n , n 1 ) {\displaystyle \varphi (n)=\exists n_{1}\forall m_{2}:\forall m<2^{n_{1}}:((\psi _{H}(m,m_{2})\rightarrow (O_{m}=1))\land (\lnot \psi _{H}(m,n_{1})\rightarrow (O_{m}=0)))\land {\varphi _{O}}_{1}(n,n_{1})}
For the oracle to the halting problem over Turing machines, ψ H ( m , m 1 ) {\displaystyle \psi _{H}(m,m_{1})} is in Π 0 0 {\displaystyle \Pi _{0}^{0}} and φ ( n ) {\displaystyle \varphi (n)} is in Σ 2 0 {\displaystyle \Sigma _{2}^{0}} . Thus every set that is recursively enumerable by an oracle machine with an oracle for ∅ ( 1 ) {\displaystyle \emptyset ^{(1)}} , is in Σ 2 0 {\displaystyle \Sigma _{2}^{0}} .
The converse is true as well: Suppose φ ( n ) {\displaystyle \varphi (n)} is a formula in Σ 2 0 {\displaystyle \Sigma _{2}^{0}} with k 1 {\displaystyle k_{1}} existential quantifiers followed by k 2 {\displaystyle k_{2}} universal quantifiers. Equivalently, φ ( n ) {\displaystyle \varphi (n)} has k 1 {\displaystyle k_{1}} existential quantifiers followed by a negation of a formula in Σ 1 0 {\displaystyle \Sigma _{1}^{0}} ; the latter formula can be enumerated by a Turing machine and can thus be checked immediately by an oracle for ∅ ( 1 ) {\displaystyle \emptyset ^{(1)}} .
We may thus enumerate the k 1 {\displaystyle k_{1}} –tuples of natural numbers and run an oracle machine with an oracle for ∅ ( 1 ) {\displaystyle \emptyset ^{(1)}} that goes through all of them until it finds a tuple satisfying the formula. This oracle machine halts on precisely the set of natural numbers satisfying φ ( n ) {\displaystyle \varphi (n)} , and thus enumerates its corresponding set.
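The relativized search can be sketched the same way; the oracle for ∅ ( 1 ) is modelled as an assumed callable that answers halting queries (it is of course not itself computable), and the Π 1 0 part of the matrix is checked with a single oracle query about an unbounded counterexample search:

```python
from itertools import count, product

def semi_decide_sigma2(pi1_matrix, halts, n, k1):
    """Semi-decide 'exist x1..xk1 such that for all y: pi1_matrix(n, xs, y)',
    given an oracle halts(program) for the halting problem (the Turing jump).
    Here program is a zero-argument callable describing a counterexample
    search; the oracle is assumed as given, not computed."""
    def counterexample_search(xs):
        def run():                        # halts iff some y refutes the Pi-0-1 condition
            for y in count():
                if not pi1_matrix(n, *xs, y):
                    return y
        return run

    for bound in count():                 # dovetail over all k1-tuples of naturals
        for xs in product(range(bound + 1), repeat=k1):
            if max(xs) == bound and not halts(counterexample_search(xs)):
                return xs                 # no counterexample exists, so xs is a witness
```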
More generally, suppose every set that is recursively enumerable by an oracle machine with an oracle for ∅ ( p ) {\displaystyle \emptyset ^{(p)}} is in Σ p + 1 0 {\displaystyle \Sigma _{p+1}^{0}} . Then for an oracle machine with an oracle for ∅ ( p + 1 ) {\displaystyle \emptyset ^{(p+1)}} , ψ O ( m ) = ∃ m 1 : ψ H ( m , m 1 ) {\displaystyle \psi ^{O}(m)=\exists m_{1}:\psi _{H}(m,m_{1})} is in Σ p + 1 0 {\displaystyle \Sigma _{p+1}^{0}} .
Since ψ O ( m ) {\displaystyle \psi ^{O}(m)} is the same as φ ( n ) {\displaystyle \varphi (n)} for the previous Turing jump, it can be constructed (as we have just done with φ ( n ) {\displaystyle \varphi (n)} above) so that ψ H ( m , m 1 ) {\displaystyle \psi _{H}(m,m_{1})} is in Π p 0 {\displaystyle \Pi _{p}^{0}} . After moving to prenex normal form the new φ ( n ) {\displaystyle \varphi (n)} is in Σ p + 2 0 {\displaystyle \Sigma _{p+2}^{0}} .
By induction, every set that is recursively enumerable by an oracle machine with an oracle for ∅ ( p ) {\displaystyle \emptyset ^{(p)}} , is in Σ p + 1 0 {\displaystyle \Sigma _{p+1}^{0}} .
The other direction can be proven by induction as well: Suppose every formula in Σ p + 1 0 {\displaystyle \Sigma _{p+1}^{0}} can be enumerated by an oracle machine with an oracle for ∅ ( p ) {\displaystyle \emptyset ^{(p)}} .
Now suppose φ ( n ) {\displaystyle \varphi (n)} is a formula in Σ p + 2 0 {\displaystyle \Sigma _{p+2}^{0}} with k 1 {\displaystyle k_{1}} existential quantifiers followed by k 2 {\displaystyle k_{2}} universal quantifiers, etc. Equivalently, φ ( n ) {\displaystyle \varphi (n)} has k 1 {\displaystyle k_{1}} existential quantifiers followed by a negation of a formula in Σ p + 1 0 {\displaystyle \Sigma _{p+1}^{0}} ; the latter formula can be enumerated by an oracle machine with an oracle for ∅ ( p ) {\displaystyle \emptyset ^{(p)}} and can thus be checked immediately by an oracle for ∅ ( p + 1 ) {\displaystyle \emptyset ^{(p+1)}} .
We may thus enumerate the k 1 {\displaystyle k_{1}} –tuples of natural numbers and run an oracle machine with an oracle for ∅ ( p + 1 ) {\displaystyle \emptyset ^{(p+1)}} that goes through all of them until it finds a tuple satisfying the formula. This oracle machine halts on precisely the set of natural numbers satisfying φ ( n ) {\displaystyle \varphi (n)} , and thus enumerates its corresponding set.
Rogers, H. The Theory of Recursive Functions and Effective Computability , MIT Press. ISBN 0-262-68052-1 ; ISBN 0-07-053522-1
Soare, R. Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1987. ISBN 3-540-15299-7 | https://en.wikipedia.org/wiki/Post's_theorem |
Post-acute infection syndromes ( PAISs ) or post-infectious syndromes are medical conditions characterized by symptoms attributed to a prior infection . While it is commonly assumed that people either recover or die from infections, long-term symptoms—or sequelae —are a possible outcome as well. [ 1 ] Examples include long COVID (post-acute sequelae of SARS-CoV-2 infection, PASC), Myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS), and post-Ebola virus syndrome . [ 1 ] Common symptoms include post-exertional malaise (PEM), severe fatigue , neurocognitive symptoms, flu-like symptoms , and pain. The pathology of most of these conditions is not understood and management is generally symptomatic .
PAIS symptoms are often non-specific and similar despite diverse prior infections. Symptoms commonly included in definitions of PAIS include post-exertional malaise, severe fatigue, neurocognitive and sensory symptoms, flu-like symptoms, unrefreshing sleep, muscle pain, and joint pain. Symptoms can vary among affected people. [ 1 ] Some PAIS symptoms are more specific. For example, eye problems are common in post-Ebola virus syndrome , and profound weakness is seen in post-polio syndrome and post- West Nile fevers . [ 1 ]
Symptoms can be severe and debilitating, resulting in lowered quality of life or inability to work. [ 1 ] The onset of symptoms may be delayed, often by decades in the case of post-polio syndrome, and their severity may fluctuate over time. [ 2 ]
Pathogens associated with PAISs include SARS-CoV-2 (causing COVID-19), Ebolavirus , Dengue virus , poliovirus , SARS-CoV-1 (causing SARS), Chikungunya virus , Epstein–Barr virus (EBV), West Nile virus (WNV), Ross River virus (RRV), Coxsackie B , influenza A virus subtype H1N1 , varicella zoster virus (VZV), Coxiella burnetii , Borrelia , and Giardia . However, the strength of evidence associating these pathogens with chronic illness varies. [ 1 ]
The pathophysiology of most PAISs is poorly understood, but the overlap in symptoms despite disparate infectious triggers implies a possible shared pathology. For most conditions, no chronic infection has been detected. [ 1 ]
The pathology of post-acute infections syndromes is not understood. The commonality in symptoms between illnesses may point to a common pathology. [ 1 ] Major hypotheses include persistence of the original pathogen or its remnants, autoimmunity , chronic inflammation , reactivation of other latent infections, microbiome dysbiosis , or damage to organs, which may include the lungs , brain , kidneys , heart , or blood vessels . [ 1 ] [ 3 ]
In the absence of tests for most PAISes, diagnosis is usually based on the patient's history, symptoms, and eliminating other potential causes . Available tests often fail to explain patients' symptoms, but this does not suggest they are not real. The complexity of diagnosing PAISes may lead to long delays in diagnosis. [ 2 ]
Diagnostic criteria vary among illnesses, and have at times been the subject of intense debate. [ 1 ] For example, several definitions of ME/CFS have been in use. [ 1 ]
PAIS is a broad term describing conditions attributed to various infections, including long COVID , ME/CFS, post-Ebola virus syndrome, post- dengue fatigue syndrome, post-polio syndrome , post- SARS syndrome, post- chikungunya disease, Q fever fatigue syndrome, post-treatment Lyme disease syndrome (PTLDS), and symptoms observed after other infections lacking a specific name. [ 1 ] [ 3 ] Other known sequelae of infections include multisystem inflammatory syndrome in children (MIS-C), and subacute sclerosing panencephalitis (a deadly consequence of measles that can be delayed by years). [ 1 ]
Treatment options for most PAISes are limited. In general, the focus is on managing symptoms, [ 3 ] and management strategies for ME/CFS may benefit patients with similar symptoms. [ 2 ]
Some people with PAISs recover over a period ranging from weeks to years while others remain ill. [ 2 ] [ 1 ] Many studies have found that symptoms can continue for at least several years, up to the end of follow-up. Studies of PTLDS ran longer and found increased rates of symptoms for up to 27 years. [ 1 ] In the case of ME/CFS, prognosis is poor and the illness is lifelong for most. [ 4 ] [ 5 ]
Data on epidemiology are limited by the lack of large, rigorous studies, and rates vary by infection. Mononucleosis is among the best studied: available studies found that 7-9% of people had persistent symptoms 12 months after infection, and 4% had serious symptoms after 2 years. Data from the British Office for National Statistics on long COVID indicate that about 10% of people who had COVID-19 self-reported long COVID 6 months after infection, and about 7% reported long COVID with activity limitations. An Australian study of EBV, C. burnetii , and Ross River virus found that 11% of participants met the criteria for ME/CFS at 6 months. Around 10-20% of people with SARS also experienced long-term effects. [ 1 ]
While PAISs were described prior to the COVID-19 pandemic, the emergence of long COVID brought them increased attention. [ 1 ]
PAISs cause a significant disease burden , but have received relatively little attention from scientists, potentially delaying the discovery of causes, diagnostic tests, and treatments. [ 1 ] [ 6 ] Infectious disease surveillance programs track acute illness but rarely track the health effects of PAISes. [ 7 ] Many doctors are also unfamiliar with them, and may fail to take symptoms seriously. [ 8 ] [ 3 ]
PAISs may have a common cause, and different hypotheses are being studied. [ 3 ] Long COVID has increased the overall pace of research. [ 3 ] Yale School of Medicine operates a research center, founded in 2023, that focuses on PAISs called the Center for Infection & Immunity. [ 9 ] | https://en.wikipedia.org/wiki/Post-acute_infection_syndrome |
The method of building wooden buildings with a traditional timber frame with horizontal plank or log infill has many names, the most common of which are piece sur piece (French; also used to describe log building ), corner post construction , post-and-plank , Ständerbohlenbau (German) and skiftesverk (Swedish). This traditional building method is believed to be the predecessor of half-timber construction , widely known by its German name fachwerkbau , which has wall infill of wattle and daub , brick, or stone. This carpentry was used from parts of Scandinavia to Switzerland to western Russia. Though relatively rare now, two types are found in a number of regions in North America: the more common has walls of planks or timbers that slide in a groove in the posts, and the less common has horizontal logs tenoned into individual mortises in the posts. This method is not the same as the plank-frame buildings in North America with vertical plank walls.
"The support of horizontal timbers by corner posts is an old form of construction in Europe. It was apparently carried across much of the continent from Silesia by the Lausitz urnfield culture in the late Bronze Age." [ 7 ] The Lausitz culture is also known as the Lusatian culture and within their territory is an archaeological site and archaeological open-air museum at Biskupin , Poland, where remnants of such structures were found and reconstructed. The structures found dated from 747 to 722 B.C and are similar in concept to piece sur piece construction. [ 8 ] This historic carpentry is known in southern Sweden (skiftesverk), particularly Gotland where it is also known as bulhus , Germany, Poland, including Silesia, Bohemia - Czech Republic, Hungary, Lithuania, Switzerland, Austria.
In 2018, an oak well structure assembled in a post-and-plank method was unearthed in the Czech Republic, near Ostrov, Pardubice Region, during motorway construction. The wood was well preserved, as it was submerged in water. Its age was established using the dendrochronological method, by tree ring dating: the oak trees used to build the well were felled in 5256/55 BC and had started growing in 5481 BC, during the Early Neolithic period, more than 7000 years ago. "The shape of the individual structural elements and tool marks preserved on their surface confirm sophisticated carpentry skills," researchers note. [ 9 ] [1]
Some researchers believe this building method was introduced to the United States by Alpine-Alemannic Germans or Swiss, and to Canada by French fur trappers working for the Hudson's Bay Company . [ 10 ] Others, who have studied the development of house building in New France , believe that the method was developed endemically in Canada as a local adaptation of the half-timbered house, spreading from Québec to the Pacific through the Hudson's Bay Company. [ 11 ] The Hudson's Bay Company adopted this style for most of its outposts all the way to the Pacific coast. [ 12 ]
Some examples of surviving houses of this structural type are the circa 1809 Cray House in Stevensville, Maryland , 1832 Jacob Highbarger House in Maryland, and the George Diehl Homestead .
Red River Frame was a popular name for the post-and-plank construction technique used in the Red River Colony in the 19th century. The building style was characterized by a dressed timber structure with a horizontal log infill. The spaces between the logs were filled or 'chinked' with clay and straw. The exterior would either be whitewashed with a limestone / water plaster mixture, or in later years, the exterior would be covered by board siding. This style was popular because it could use smaller trees for logs—the longest trees needed were for the vertical logs. The Farm Manager's House at Lower Fort Garry , the William Brown House at the Historical Museum of St James—Assiniboia, the historical Fur Warehouse at Fort St. James National Historic Site of Canada and Riel House in Winnipeg, Manitoba are excellent examples of Red River Frame construction.
In southeastern Pennsylvania, numerous log houses feature corner post construction. In many cases, these houses feature diagonal bracing that resembles half-timbered architecture of Europe. In Lancaster County, Pennsylvania, it is estimated that about a quarter of log houses are corner post construction. | https://en.wikipedia.org/wiki/Post-and-plank |
A post-column oxidation-reduction reactor is a chemical reactor that performs derivatization to improve the quantitative measurement of organic analytes . It is used in gas chromatography (GC), after the column and before a flame ionization detector (FID), to make the response factor of the detector uniform for all carbon-based species.
The reactor contains catalysts that convert all of the carbon atoms of organic molecules in GC column effluents into methane before reaching the FID. As a result, all carbon atoms are detected equally, and therefore calibration standards for each compound are not needed. It can improve the response of the FID to many compounds with poor or low response, including carbon monoxide (CO), carbon dioxide (CO 2 ), hydrogen cyanide (HCN), formamide (CH 3 NO), formaldehyde (CH 2 O), and formic acid (CH 2 O 2 ).
The concept of using a post-column catalytic reactor to enhance the response of the FID was first developed for the reduction of carbon dioxide and carbon monoxide to methane using a nickel catalyst. [ 1 ] [ 2 ] The reaction device, often referred to as a methanizer , is limited to the conversion of carbon dioxide and carbon monoxide to methane, and the catalysts are poisoned by sulfur and ethylene among others.
Using a combustion reactor prior to the reduction reactor allows other carbon-containing chemicals to benefit from enhancement in FID detection. [ 3 ] [ 4 ] [ 5 ] In the combustion step, all carbon is converted to carbon dioxide, allowing it to be converted to methane for FID detection regardless of its original chemical form.
The reactor operates by converting organic analytes after GC separation into methane prior to detection by FID. The oxidation and reduction reactions occur sequentially, wherein the organic compound is first combusted to produce carbon dioxide, which is subsequently reduced to methane. The following reactions illustrate the oxidation/reduction process for formic acid .
HCO 2 H + 1 2 O 2 ↽ − − ⇀ CO 2 + H 2 O {\displaystyle {\ce {HCO2H + 1/2O2 <=> CO2 + H2O}}}
CO 2 + 4 H 2 ↽ − − ⇀ CH 4 + 2 H 2 O {\displaystyle {\ce {CO2 + 4H2 <=> CH4 + 2H2O}}}
The reactions are fast compared to the time scales typical of gas chromatography , resulting in manageable peak broadening and tailing. [ citation needed ] Elements other than carbon, which reaches the detector as CH 4 , are not ionized in the flame and thus do not contribute to the FID signal.
Only the CHO + ions formed from the ionization of carbon compounds are detected. [ 6 ] Thus, the non-methane byproducts of the reactions are not detected by the FID.
Since every compound passes through the catalyst bed in the reactor, certain substances that might be harmful, or that could negatively affect the efficiency and durability of the FID, are converted into safer forms. For instance, cyanide is catalytically changed into methane, water, and nitrogen.
Comprehensive Conversion: The reactor converts all organic compounds to methane, whereas traditional methanizers typically only convert CO and CO 2 . This comprehensive conversion results in a more uniform response and more sensitive detection for a wider range of organic species.
The PolyArc reactor needs hydrogen and air, which are both gases used in any existing FID setup. Software for capturing and analyzing FID signals remains applicable, and no extra software is necessary for the device. Gas flows to the device are controlled using an external control box that must be calibrated manually for the desired flows of air and hydrogen. The detector's overall response can be analyzed either by an external or an internal standard method.
In the external standard method, the FID signal is correlated to the concentration of carbon separately from the analysis. In practice, this entails the injection of any carbon species at varying amounts to create a plot of signal (i.e. peak area) versus injected carbon amount (e.g. moles of carbon). The user should take care to account for any sample splitting, adsorption, inlet discrimination, and leaks. The data should form a line with a slope, m, and an intercept, b. The inverse of this line can be used to determine the amount of carbon in any subsequent injection from any compound.
mol C = ( peak area − b ) / m {\displaystyle \mathrm {mol\,C} ={\frac {{\text{peak area}}-b}{m}}}
This is different from a typical FID calibration where this procedure would need to be completed for each compound to account for the relative response differences. The calibration should be examined periodically to account for catalyst deactivation and other sources of detector drift.
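A small numerical sketch of the external-standard workflow (the calibration points and peak areas below are invented for illustration): fit the signal-versus-carbon line once, then invert it for later injections of any compound.

```python
import numpy as np

# Hypothetical calibration: injected carbon (mol C) versus FID peak area
mol_c_injected = np.array([1e-9, 2e-9, 5e-9, 1e-8])
peak_area = np.array([1.02e4, 2.05e4, 5.08e4, 1.01e5])

m, b = np.polyfit(mol_c_injected, peak_area, 1)   # area = m * molC + b

def carbon_from_area(area):
    """Invert the calibration line: mol C = (area - b) / m."""
    return (area - b) / m

# Any later peak, from any compound, can reuse the same line
print(carbon_from_area(3.3e4))
```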
In the internal standard method, the sample is doped with a known amount of some organic molecule and the amount of all other species can be derived from their relative response to the internal standard (IS). The IS can be any organic molecule and should be chosen for ease of use and compatibility with the compounds in the mixture. For example, one could add 0.01 g of methanol as the IS to 0.9 g of gasoline. The 1 wt% mixture of methanol/gasoline is then injected and the concentration of all other species can be determined from their relative response to methanol on a carbon basis,
mol C / g = ( area / area ( IS ) ) × mol C ( IS ) / g {\displaystyle \mathrm {mol\,C/g} ={\frac {\mathrm {area} }{\mathrm {area(IS)} }}\times \mathrm {mol\,C(IS)/g} }
The effects of injection-to-injection variability resulting from different injection volumes, varying split ratios, and leaks are eliminated with the internal standard method. However, inlet discrimination caused by adsorption, reaction, or preferential vaporization in the inlet can lead to accuracy issues when the internal standard is influenced differently than the analyte.
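The internal-standard relation can be applied directly once the carbon contributed by the IS is known; the numbers below are illustrative, and the molar mass of methanol is used as an approximate constant.

```python
METHANOL_MOLAR_MASS = 32.04   # g/mol, approximate; one carbon atom per molecule

def mol_c_per_gram(area_analyte, area_is, mol_c_is_per_gram_sample):
    """mol C/g of analyte = (area / area_IS) * mol C(IS)/g, per the relation above."""
    return (area_analyte / area_is) * mol_c_is_per_gram_sample

# Example: 0.01 g methanol added as IS to 0.9 g gasoline (0.91 g total sample)
mol_c_from_is = (0.01 / METHANOL_MOLAR_MASS) * 1
mol_c_is_per_gram_sample = mol_c_from_is / 0.91

print(mol_c_per_gram(area_analyte=8.2e4, area_is=1.1e4,
                     mol_c_is_per_gram_sample=mol_c_is_per_gram_sample))
```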
Any non-carbon species that would not be detected in a traditional FID setup (e.g. water, nitrogen, ammonia) will not be detected with PolyArc/FID. This detector can be paired with other detectors that give complementary information such as the mass spectrometer or thermal conductivity detector . | https://en.wikipedia.org/wiki/Post-column_oxidation–reduction_reactor |
Post-combustion capture refers to the removal of carbon dioxide (CO 2 ) from a power station flue gas prior to its compression, transportation and storage in suitable geological formations, as part of carbon capture and storage . A number of different techniques are applicable, almost all of which are adaptations of acid gas removal processes used in the chemical and petrochemical industries. Many of these techniques existed before World War II and, consequently, post-combustion capture is the most developed of the various carbon-capture methodologies.
A post-combustion capture plant should aim to maximise the capture of CO 2 emissions from the combustion plant and deliver it for secure sequestration in geological strata. [ 1 ] Typically, a plant will aim to achieve a CO 2 capture rate of >95%. To meet the required specification, the following should be monitored:
CO 2 can be transported either as gas phase at about 35 barg or as dense phase at 100 barg. The CO 2 stream should meet or exceed gas quality standards.
CO 2 absorbents include primary amines, which require more heat for regeneration than secondary amines. However, secondary amines may form nitrosamines with nitrogen oxides (NOx) in the flue gases. All non-solvent constituents must be removed from the solvent. Pilot or full-scale tests using actual flue gases and solvents may be performed. [ 1 ]
Calcium looping is a promising second generation post-combustion capture technology in which calcium oxide, often referred to as the "sorbent", is used to separate CO 2 from the flue gas. The ANICA project focuses on developing a novel indirectly heated carbonate looping process for lowering the energy penalty and CO 2 avoidance costs for CO 2 capture from lime and cement plants. [ 2 ]
| https://en.wikipedia.org/wiki/Post-combustion_capture
A post-metallocene catalyst is a kind of catalyst for the polymerization of olefins , i.e., the industrial production of some of the most common plastics. "Post-metallocene" refers to a class of homogeneous catalysts that are not metallocenes . This area has attracted much attention because the market for polyethylene, polypropylene, and related copolymers is large. There is a corresponding intense market for new processes as indicated by the fact that, in the US alone, 50,000 patents were issued between 1991-2007 on polyethylene and polypropylene. [ 1 ]
Many methods exist to polymerize alkenes, including the traditional routes using the Phillips catalyst and traditional heterogeneous Ziegler-Natta catalysts , which are still used to produce the bulk of polyethylene.
Homogeneous metallocene catalysts, e.g., derived from or related to zirconocene dichloride introduced a level of microstructural control that was unavailable with heterogeneous systems. [ 2 ] Metallocene catalysts are homogeneous single-site systems, implying that a uniform catalyst is present in the solution. In contrast, commercially important Ziegler-Natta heterogeneous catalysts contain a distribution of catalytic sites. The catalytic properties of single-site catalysts can be controlled by modification of the ligand. Initially ligand modifications focused on various cyclopentadienyl derivatives, but great diversity was uncovered through high throughput screening. These post-metallocene catalysts employ a range of chelating ligands, often including pyridine and amido (R 2 N − ). These ligands are available in great diversity with respect to their steric and electronic properties. Such postmetallocene catalysts enabled the introduction of Chain shuttling polymerization . [ 1 ]
The copolymerization of ethylene with polar monomers has been heavily studied. The high oxophilicity of the early metals precluded their use in this application. [ 3 ]
Efforts to copolymerize polar comonomers led to catalysts based upon nickel and palladium , inspired by the success of the Shell Higher Olefin Process . Typical post-metallocene catalysts feature bulky, neutral, alpha- diimine ligands. [ 3 ] DuPont commercialized the Versipol olefin polymerization system. [ 5 ] Eastman commercialized the related Gavilan technology. [ 6 ] These complexes catalyze the homopolymerization of ethylene to a variety of structures that range from high density polyethylene through hydrocarbon plastomers and elastomers by a mechanism referred to as “ chain-walking ”. By modifying the bulk of the alpha-diimine , the product distribution of these systems can be 'tuned' to consist of hydrocarbon oils ( alpha-olefins ), similar to those produced by more traditional nickel(II) oligo/polymerization catalysts. As opposed to metallocenes , they can also randomly copolymerize ethylene with polar comonomers such as methyl acrylate .
A second class of catalysts features mono-anionic bidentate ligands related to salen ligands , [ 7 ] including systems from DuPont. [ 8 ] [ 9 ]
The concept of bulky bis-imine ligands was extended to iron complexes. [ 3 ] Representative catalysts feature diiminopyridine ligands. These catalysts are highly active but do not promote chain walking . They give very linear high-density polyethylene when the ligands are bulky; when the steric bulk is removed, they are very active for ethylene oligomerization to linear alpha-olefins. [ 3 ]
A salicylimine catalyst system based on zirconium exhibits high activity for ethylene polymerization. [ 10 ] The catalysts can also produce some novel polypropylene structures. [ 11 ] Despite intensive efforts, few catalysts have been successfully commercialized for the copolymerization of polar monomers. | https://en.wikipedia.org/wiki/Post-metallocene_catalyst |
Post-mortem chemistry , also called necrochemistry or death chemistry , is a subdiscipline of chemistry in which the chemical structures , reactions , processes and parameters of a dead organism are investigated. Post-mortem chemistry plays a significant role in forensic pathology . Biochemical analyses of vitreous humor, cerebrospinal fluid, blood and urine are important in determining the cause of death or in elucidating forensic cases. [ 1 ]
The post-mortem interval is the time that has elapsed since death. There are several different methods that can be used to estimate the post-mortem interval.
The vitreous humor is four to five milliliters of colorless gel in the vitreous body of the eye. Because of its location and its inert nature, it is resistant to some of the post-mortem changes that occur in the rest of the body. This makes it useful in determining the time since death, along with the fact that it is not affected by age, sex, or cause of death. [ 2 ] Sampling the vitreous humor is also common because a sample that is not in contact with blood can be tested clinically at much lower cost. The viscosity of the vitreous humor increases after death as water escapes, so samples may require preparation steps, such as dilution, centrifugation, heating, or the addition of certain analytes, before they can be pipetted accurately and analyzed. [ 3 ] It is also useful as a source of DNA or for diagnosing diseases. The vitreous humor contains various electrolytes, including sodium, potassium, chlorine, calcium, and magnesium. The concentrations of these electrolytes can be measured with analyzers and related to the time since death using various regression equations. [ 2 ] Different studies have produced different equations, because so many factors and experimental differences are involved that no single equation can be shown to be better than the rest. One of these factors is temperature: at higher temperatures the concentrations are less stable and the sample degrades faster. [ 4 ] The temperature can be controlled once a sample is in the lab, but until then the body is at the temperature of its surroundings, so an equation derived from samples kept cold will not give accurate results for a sample that was not kept cold. Even though different equations have been found, the general trends agree. As the time since death increases, the potassium concentration in the vitreous humor rises, and the sodium and calcium concentrations fall. The ratio of potassium to sodium decreases linearly with time. Potassium levels rise after death because the cell membranes become leaky, allowing the concentration to equilibrate with the potassium level in the blood plasma. This method is not exact, but it gives a good estimate of the time since death. [ 2 ]
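As a worked illustration of the potassium approach (the coefficients below follow one widely quoted regression, attributed to Sturner; other studies report different slopes and intercepts, particularly for bodies kept at higher ambient temperatures):

```python
def pmi_hours_from_potassium(k_mmol_per_l, slope=7.14, intercept=-39.1):
    """Estimate the post-mortem interval (hours) from vitreous potassium.
    Default coefficients are one published regression and are not
    universally valid; they serve only to illustrate the calculation."""
    return slope * k_mmol_per_l + intercept

print(pmi_hours_from_potassium(8.0))   # about 18 hours with these coefficients
```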
Cerebrospinal fluid is found in the brain and spinal cord. It is a clear fluid that provides a barrier to absorb shock and prevent injury to the brain. It is useful for diagnosing neuro-degenerative diseases such as Alzheimer's disease. There are various substances in the cerebrospinal fluid that can be measured, including urea, glucose, potassium, chloride, sodium, protein, creatinine, calcium, alkaline phosphatase, and cortisol. [ 5 ] Different things can be learned about the person or how they died by looking at the concentrations of some of these substances. For example, high levels of urea can indicate kidney damage. High levels of cortisol, the hormone released under stress, could indicate a violent death. Creatinine is stable post-mortem, so the concentration at death is preserved; this is also helpful for determining the kidney function of an individual. Sodium and potassium can also be measured in the cerebrospinal fluid to predict the time since death, [ 5 ] but this is less accurate than using the vitreous humor, since the correlation is lower. [ 4 ]
Toxicology refers to the science of the chemical and physical properties of toxic substances. Samples from a body are analyzed for drugs or other toxic substances. The concentrations are measured and the substance's contribution to a death can be determined. This is done by comparing concentrations to lethal limits. The most common samples analyzed are blood, urine, kidney, liver, and brain. The samples are usually put through various tests, but the most common instrument used to quantify and determine a substance is gas chromatography-mass spectrometry (GC-MS). These instruments produce chromatograms of the sample, which are then compared to a database of known substances. [ 6 ] In blood samples, the substance can usually be found, but in the liver, kidneys, and urine the metabolite may be the only substance that can be found. A metabolite is the broken down version of the original substance after it has gone through digestion and/or other biological processes. Substances can take anywhere from hours to weeks to metabolize and leave the body and have different retention times in different parts of the body. For example, cocaine can be detected in the blood for two to ten days, while it can be detected in urine for two to five days.
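Library matching of the kind described above is commonly scored with a spectral similarity measure; the toy sketch below uses cosine similarity over a handful of shared m/z bins (the spectra and compound names are invented, and real GC-MS library searches are considerably more elaborate).

```python
import numpy as np

def cosine_match(measured, reference):
    """Cosine similarity between two intensity vectors aligned on the same m/z bins."""
    a = np.asarray(measured, dtype=float)
    b = np.asarray(reference, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

measured = [0, 35, 100, 12, 0, 48]        # toy intensities, not real spectra
library = {
    "compound A": [0, 30, 100, 10, 0, 50],
    "compound B": [80, 5, 10, 0, 60, 0],
}
best = max(library, key=lambda name: cosine_match(measured, library[name]))
print(best, round(cosine_match(measured, library[best]), 3))
```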
The results of post-mortem toxicology testing are interpreted alongside the victim's history, a thorough investigation of the scene, and autopsy and ancillary study findings to determine the manner of death. [ 7 ]
When blood is used for toxicology testing, drugs of abuse are the usual targets of analysis. Other substances that may be looked for are medications that are known to be prescribed to the individual or poisons if it is suspected. [ 8 ]
Tissues can be analyzed to help determine a cause of death. The tissue samples that are most commonly analyzed are the liver, kidney, brain, and lungs. [ 6 ]
Hair samples can also be analyzed post-mortem to determine if there was a history of drug use or poisoning due to the fact that many substances stay in the hair for a long time. The hair can be separated into sections and a month by month analysis can be performed. Fingernails and hair follicles can also be analyzed for DNA evidence. [ 6 ]
The stomach contents can also be analyzed. This can help with the post-mortem interval identification by looking at the stage of digestion. The contents can also be analyzed for drugs or poisons to help determine a cause of death if it is unknown.
Post-mortem diagnosis is the use of post-mortem chemistry analysis tests to diagnose a disease after someone has died. Some diseases are unknown until death, or were not correctly diagnosed earlier. One way that diseases can be diagnosed is by examining the concentrations of certain substances in the blood or other sample types. For example, diabetic ketoacidosis can be diagnosed by looking at glucose concentrations in the vitreous humor, ketone bodies, glycated hemoglobin, or glucose in the urine. Dehydration can be diagnosed by looking for increased urea nitrogen, sodium, and chloride levels, with normal creatinine levels in the vitreous humor. Endocrine disorders can be diagnosed by looking at hormone concentrations and epinephrine and insulin levels. Liver diseases can be diagnosed by looking at the ratio of albumin to globulin in the sample. [ 9 ]
Blood pH and concentrations of several chemicals are tested in a corpse to help determine the time of death of the victim, also known as the post-mortem interval. These chemicals include lactic acid, hypoxanthine, uric acid, ammonia, NADH and formic acid. [ 10 ]
The decrease in the concentration of oxygen because of the lack of circulation causes a dramatic switch from aerobic to anaerobic metabolism. [ 10 ]
This type of analysis can be used to help diagnose various types of death, such as drowning, anaphylactic shock, hypothermia, or deaths related to alcohol or diabetes. However, these diagnoses can be very difficult because the body changes and biochemical measurements vary after death. [ 3 ]
Post-normal science ( PNS ) was developed in the 1990s by Silvio Funtowicz and Jerome R. Ravetz . [ 1 ] [ 2 ] [ 3 ] It is a problem-solving strategy appropriate when "facts [are] uncertain, values in dispute, stakes high and decisions urgent", conditions often present in policy-relevant research. In those situations, PNS recommends suspending temporarily the traditional scientific ideal of truth, concentrating on quality as assessed by internal and extended peer communities. [ 1 ] [ 4 ]
PNS can be considered as complementing the styles of analysis based on risk and cost-benefit analysis prevailing at that time and integrating concepts of a new critical science developed in previous works by the same authors. [ 5 ] [ 6 ]
PNS is not a new scientific method following Aristotle and Bacon , a new paradigm in the Kuhnian sense, or an attempt to reach a new ‘normal’. It is instead, a set of insights to guide actionable and robust knowledge production for policy decision making and action in challenges like pandemics, ecosystems collapse, biodiversity loss and, in general, sustainability transitions. [ 7 ] [ 8 ]
According to its proponents [ 3 ] Silvio Funtowicz and Jerome R. Ravetz , the name "post-normal science" echoes the seminal work on modern science by Thomas Kuhn . [ 9 ] For Carrozza [ 10 ] PNS can be "framed in terms of a call for the ‘democratization of expertise’", and as a "reaction against long-term trends of ‘scientization’ of politics—the tendency towards assigning to experts a critical role in policymaking while marginalizing laypeople". For Mike Hulme (2007), writing in The Guardian , climate change seems to fall into the category of issues which are best dealt with in the context of PNS; he notes that “Disputes in post-normal science focus as often on the process of science - who gets funded, who evaluates quality, who has the ear of policy - as on the facts of science”. [ 11 ] Climate science as PNS was already proposed by the late Stephen Schneider , [ 12 ] and a similar linkage was proposed for the workings of the Intergovernmental Panel on Climate Change . [ 13 ]
From the ecological perspective post-normal science can be situated in the context of 'crisis disciplines' – a term coined by the conservation biologist Michael E. Soulé to indicate approaches addressing fears, emerging in the seventies, that the world was on the verge of ecological collapse . In this respect Michael Egan [ 14 ] defines PNS as a 'survival science'. More recently PNS has been defined as a movement of ‘informed critical resistance, reform and the making of futures’. [ 15 ]
Moving from PNS Ziauddin Sardar developed the concept of Postnormal Times (PNT). Sardar was the editor of FUTURES when it published the article ‘Science for the post-normal age’ [ 3 ] presently the most cited paper of the journal. A recent review of academic literature conducted on the Web of Science and encompassing the topics of Futures studies, Foresight, Forecasting and Anticipation Practice [ 16 ] identifies the same paper as "the all-time publication that received the highest number of citations".
"At birth Post-normal science was conceived as an inclusive set of robust insights more than as an exclusive fully structured theory or field of practice". [ 17 ] Some of the ideas underpinning PNS can already be found in a work published in 1983 and entitled "Three types of risk assessment: a methodological analysis" [ 18 ] This and subsequent works [ 1 ] [ 2 ] [ 3 ] [ 5 ] show that PNS concentrates on few aspects of the complex relation between science and policy: the communication of uncertainty, the assessment of quality, and the justification and practice of the extended peer communities.
Coming to the PNS diagram, the horizontal axis represents ‘Systems Uncertainties’ and the vertical one ‘Decision Stakes’. The three quadrants identify Applied Science, Professional Consultancy, and Post-Normal Science. Different standards of quality and styles of analysis are appropriate to different regions in the diagram, i.e. post-normal science does not claim relevance and cogency over all of science's applications but only over those defined by PNS's mantra of a fourfold challenge: ‘facts uncertain, values in dispute, stakes high and decisions urgent’. For applied research, science's own peer quality control system will suffice (or so was assumed at the moment PNS was formulated in the early nineties), while professional consultancy was considered appropriate for those settings which cannot be ‘peer-reviewed’, and where the skills and the tacit knowledge of a practitioner are needed at the forefront, e.g. in a surgery room, or in a house on fire. Here a surgeon or a firefighter takes a difficult technical decision based on her or his training and appreciation of the situation (the Greek concept of ‘ Metis ’ as discussed by J. C. Scott [ 19 ] ).
There are important linkages between PNS and complexity science, [ 20 ] e.g. system ecology ( C. S. Holling ) and hierarchy theory ( Arthur Koestler ); see e.g. the work of Joseph Tainter , Timothy F. H. Allen and Thomas W. Hoekstra on the transition from fossil to renewable fuels. [ 21 ] In PNS, complexity is respected through its recognition of a multiplicity of legitimate perspectives on any issue; this is close to the meaning espoused by Robert Rosen (theoretical biologist) . [ 22 ] Reflexivity is realised through the extension of accepted ‘facts’ beyond the supposedly objective productions of traditional research. Also, the new participants in the process are not treated as passive learners at the feet of the experts, being coercively convinced through scientific demonstration. Rather, they will form an ‘extended peer community’, sharing the work of quality assurance of the scientific inputs to the process, and arriving at a resolution of issues through debate and dialogue. [ 23 ] The necessity to embrace complexity in a post-normal perspective to understand and face zoonoses is argued by David Waltner-Toews. [ 24 ]
In PNS extended peer communities are spaces where perspectives, values, styles of knowing and power differentials are expressed in a context of inequalities and conflict. Resolutions, compromises and knowledge co-production are contingent and not necessarily achievable. [ 1 ] [ 4 ] [ 25 ] [ 26 ]
Besides its dominating influence in the literature on 'futures', [ 16 ] PNS is considered to have influenced the ecological ‘conservation versus preservation debate’, especially via its reading by American pragmatist Bryan G. Norton . According to Jozef Keulartz [ 27 ] the PNS concept of "extended peer community" influenced how Norton developed his 'convergence hypothesis'. The hypothesis posits that ecologists of different orientations will converge once they start thinking 'as a mountain', or as a planet. For Norton this will be achieved via deliberative democracy, which will pragmatically overcome the black and white divide between conservationists and preservationists. More recently it has been argued that conservation science, embedded as it is in multi-layered governance structures of policy-makers, practitioners, and stakeholders, is itself an 'extended peer community', and as a result conservation has always been ‘post-normal’. [ 28 ]
Other authors attribute to PNS the role of having stimulated the take up of transdisciplinary methodological frameworks, reliant on the social constructivist perspective embedded in PNS. [ 29 ] [ 8 ]
Post-normal science is intended as applicable to most instances where the use of evidence is contested due to different norms and values. Typical instances are in the use of evidence based policy [ 30 ] and in evaluation . [ 31 ]
As summarized in a recent work "the ideas and concepts of post normal science bring about the emergence of new problem solving strategies in which the role of science is appreciated in its full context of the complexity and the uncertainty of natural systems and the relevance of human commitments and values." [ 32 ]
For Peter Gluckman (2014), chief science advisor to the Prime Minister of New Zealand, post-normal science approaches are today appropriate for a host of problems including "eradication of exogenous pests […], offshore oil prospecting, legalization of recreational psychotropic drugs, water quality, family violence, obesity, teenage morbidity and suicide, the ageing population, the prioritization of early-childhood education, reduction of agricultural greenhouse gases, and balancing economic growth and environmental sustainability". [ 33 ]
Conservation science is also a field where PNS has been suggested as a way to fill the space between research, policy, and implementation, [ 34 ] [ 35 ] as well as to ensure pluralism in analysis. [ 36 ] [ 37 ] Ecosystem services are a topical subject for PNS. [ 38 ]
Reviews of the history and evolution of PNS, its definitions, conceptualizations,
and uses can be found in Turnpenny et al., 2011, [ 39 ] and in The Routledge Handbook of Ecological Economics (Nature and Society). [ 8 ] Articles on PNS are published in Nature [ 33 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] and related journals. [ 45 ] [ 46 ]
A criticism of post-normal science is offered by Weingart (1997) [ 47 ] for whom post-normal science does not introduce a new epistemology but retraces earlier debates linked to the so-called "finalization thesis". For Jörg Friedrichs [ 48 ] – comparing the issues of climate change and peak energy – an extension of the peer community has taken place in the climate science community, transforming climate scientists into ‘stealth advocates’, [ 49 ] while scientists working on energy security – without PNS – would still maintain their credentials of neutrality and objectivity. Another criticism is that the use of extended peer communities undermines the empiricism of the scientific method , and that its goal would be better addressed by providing greater science education.
It has been argued [ 50 ] that post-normal science scholars have been prescient in anticipating the present crisis in science's quality control and reproducibility. A group of scholars of post-normal science orientation has published in 2016 a volume on the topic, [ 51 ] discussing inter alia what this community perceive as the root causes of the present science's crisis . [ 52 ] [ 50 ] [ 53 ] [ 54 ]
Among the quantitative styles of analysis which make reference to post-normal science one can mention NUSAP for numerical information, sensitivity auditing for indicators and mathematical modelling, Quantitative storytelling for exploring multiple frames in a quantitative analysis, and MUSIASEM in the field of social metabolism. These approaches have been suggested for use in sustainability analysis. [ 55 ]
In relation to mathematical modelling post-normal science suggests a participatory approach, whereby ‘models to predict and control the future’ are replaced by ‘models to map our ignorance about the future’, in the process exploring and revealing the metaphors embedded in the model. [ 56 ] PNS is also known for its definition of garbage in, garbage out (GIGO): in modelling GIGO occurs when the uncertainties in the inputs must be suppressed, lest the outputs become completely indeterminate. [ 57 ]
On 25 March 2020, in the midst of the COVID-19 pandemic , a group of scholars of post-normal orientation published a piece on the blog section of the STEPS Centre (for Social, Technological and Environmental Pathways to Sustainability) at the University of Sussex . The piece [ 58 ] argues that the COVID-19 emergency has all the elements of a post-normal science context, and notes that "this pandemic offers society an occasion to open a fresh discussion on whether we now need to learn how to do science in a different way".
The journal FUTURES devoted several specials issues to post-normal science.
Another special issue on post-normal science was published in 2011 in the journal Science, Technology, & Human Values . [ 39 ]
Post Occupancy Evaluation (POE) has its origins in Scotland and the United States and has been used in one form or another since the 1960s. Preiser and colleagues define POE as "the process of evaluating buildings in a systematic and rigorous manner after they have been built and occupied for some time".
The unique aspect of Post Occupancy Evaluation is that it generates recommendations based on all stakeholder groups' experiences of subject buildings' effects on productivity and wellbeing. [ 1 ]
Post Occupancy Evaluations are used to improve the ways that buildings are used to support productivity and wellbeing.
Specifically, they are used to:
The British Council for Offices (BCO) [ 2 ] summarises that a POE provides feedback of how successful the workplace is in supporting the occupying organization and the requirements of individual end-users. The BCO also suggests that POE can be used to assess if a project brief has been met. Furthermore, the BCO recommends that POE is used as part of the Evidence-based design process, where the project usually refers to a building design fit-out or refurbishment, or to inform the project brief where the project is the introduction of a new initiative, system or process. POE usually involves feedback from the building occupants, through questionnaires, interviews and workshops, but may also involve more objective measures such as environmental monitoring, space measurement and cost analysis.
Post Occupancy Evaluations involve all stakeholder groups with interests in the subject buildings. Stakeholders are typically:
The POE process provides value-neutral prompts to stimulate stakeholders to make testable observations about their experiences of buildings' effect on productivity and wellbeing. These observations are clarified and documented by the evaluator. Stakeholders' testable observations will be specific to building design, use and operating conditions and these may involve "negotiation" of all three dimensions of building evaluation to realize the optimum ways of achieving productivity and wellbeing.
Recommendations are based on the complete set of stakeholders' observations. Most recommendations are intended to inform the planning and design of future new buildings and their operational practices. POEs also generate some recommendations for modifications to the subject buildings and for changes in the ways that they are used. POE evaluators may recommend monitoring, research, investigation or project management studies.
Some POEs include other building studies. POEs may incorporate quantitative and qualitative techniques. Most POEs will involve seeking feedback from the occupants of the place being evaluated; this may be achieved through various survey methodologies including questionnaires , interviews or focus groups . The occupant feedback may be supplemented by environmental monitoring , such as temperature, noise levels, lighting levels and indoor air quality. More recently, POEs tend to include sustainable measures such as energy consumption, waste levels, and water usage. Other commonly used quantitative measures include space metrics, for example occupational density, space utilization and tenant efficiency ratio. Cost, either expressed as the cost of the project per square meter or the total cost of occupancy, is considered a key metric in building evaluation and may be compared with the occupant feedback to provide a better understanding of value.
Both LEED and WELL consider occupant surveys as part of their certification scheme.
An IEQ survey, as a part of POE, is included in the LEED v4 certification scheme. For all LEED O+M projects using LEED v4, one point (EQ Credit: Occupant Comfort Survey) can be awarded to projects that intend to assess building occupants' comfort. [ 3 ]
WELL involves the occupant survey in both precondition [ 4 ] and optimization [ 5 ] features in the community concept. It also has a particular feature related to occupant surveys in the thermal comfort concept. [ 6 ]
Occupant Survey (Feature C03) is one of the precondition features of the community concept, which means it is mandatory for certification. This feature intends to establish minimum standards for the evaluation of experience and self-reported health and well-being of building occupants. The requirements of this feature are the following two parts: [ 4 ]
Enhanced Occupant Survey (Feature C04) is an optimization feature of the community concept, which helps to achieve WELL certification. The purpose of this feature is to evaluate comfort, satisfaction, behavior change, self-reported health and other robust factors related to the well-being of occupants in buildings. A project can earn up to 3 points if it meets the requirements below: [ 5 ]
Enhanced Thermal Performance (Feature T02) is one of the optimization features of the thermal comfort concept. This feature aims to enhance thermal comfort and promote human productivity by ensuring that a substantial majority of building users (above 80%) perceive their environment as thermally acceptable. A maximum of three points can be achieved through the following parts: [ 6 ]
The term "post occupancy" can be confusing and simply refers to an occupied building rather than a vacant one. Furthermore, POEs may be conducted at regular intervals to monitor how the building facilities and its operation are currently supporting the occupants. A pre-project POE may be used to:
A POE is often carried out six to twelve months after construction work is complete and buildings are occupied. Post Occupancy Evaluations are also undertaken at any time during buildings' "lives", particularly to understand stakeholders' experience of them and to inform briefs for alterations and changes.
POEs have been conducted for facilities including schools, universities, technical institutes, kindergartens, museums, offices, courts, corrections, military, hospitals, [ 7 ] landscape/civil works, learning environments, [ 8 ] libraries, [ 9 ] [ 10 ] jails, [ 11 ] police stations, [ 12 ] housing, [ 13 ] health centres [ 14 ] and zoos. [ 15 ] It would be equally possible to apply POE techniques to ships and "virtual" environments. POE has been applied to selected parts of buildings, aspects of buildings, and groups of buildings on the same campus and across several sites.
POE is usually carried out by architects or building professionals with a social science or workplace consulting background. POE by independent consultants can offer an impartial evaluation.
POE can be informed by reference to other building studies and POEs sometimes recommend other studies such as:
POE has been developed and discussed within these organisations: | https://en.wikipedia.org/wiki/Post-occupancy_evaluation |
Post-tech (or post-technology, post-digital-technology) is a type of technology that is more concerned with being human than with technology itself. It advocates design that is not focused merely on efficiency or on exploiting users by increasing the time they spend with digital devices, but instead supports the user's focus and intent, well-being, and independence from technology. [ 1 ] In this way, interstitial spaces could also be created, similar to what Michel Foucault describes as Heterotopia (space) . [ 2 ]
| https://en.wikipedia.org/wiki/Post-tech
Transcriptional modification or co-transcriptional modification is a set of biological processes common to most eukaryotic cells by which an RNA primary transcript is chemically altered following transcription from a gene to produce a mature, functional RNA molecule that can then leave the nucleus and perform any of a variety of different functions in the cell. [ 1 ] There are many types of post-transcriptional modifications achieved through a diverse class of molecular mechanisms.
One example is the conversion of precursor messenger RNA transcripts into mature messenger RNA that is subsequently capable of being translated into protein . This process includes three major steps that significantly modify the chemical structure of the RNA molecule: the addition of a 5' cap , the addition of a 3' polyadenylated tail, and RNA splicing . Such processing is vital for the correct translation of eukaryotic genomes because the initial precursor mRNA produced by transcription often contains both exons (coding sequences) and introns (non-coding sequences); splicing removes the introns and links the exons directly, while the cap and tail facilitate the transport of the mRNA to a ribosome and protect it from molecular degradation. [ 2 ]
Post-transcriptional modifications may also occur during the processing of other transcripts which ultimately become transfer RNA , ribosomal RNA , or any of the other types of RNA used by the cell.
Capping of the pre-mRNA involves the addition of 7-methylguanosine (m7G) to the 5' end. To achieve this, the terminal 5' phosphate must first be removed, which is done with the aid of the enzyme RNA triphosphatase . The enzyme guanosyl transferase then catalyses the reaction, which produces the diphosphate 5' end. The diphosphate 5' end then attacks the alpha phosphorus atom of a GTP molecule in order to add the guanine residue in a 5'-to-5' triphosphate linkage. The enzyme (guanine-N7-)-methyltransferase ("cap MTase") transfers a methyl group from S-adenosyl methionine to the guanine ring. [ 4 ] This type of cap, with just the m7G in place, is called a cap 0 structure. The ribose of the adjacent nucleotide may also be methylated to give a cap 1. Methylation of nucleotides further downstream of the RNA molecule produces cap 2, cap 3 structures and so on. In these cases the methyl groups are added to the 2' OH groups of the ribose sugar.
The cap protects the 5' end of the primary RNA transcript from attack by ribonucleases that have specificity for 3'→5' phosphodiester bonds . [ 5 ]
The pre-mRNA processing at the 3' end of the RNA molecule involves cleavage of its 3' end and then the addition of about 250 adenine residues to form a poly(A) tail . The cleavage and adenylation reactions occur primarily if a polyadenylation signal sequence (5'-AAUAAA-3') is located near the 3' end of the pre-mRNA molecule, followed by another sequence, usually (5'-CA-3'), which is the site of cleavage. A GU-rich sequence is also usually present further downstream on the pre-mRNA molecule. More recently, it has been demonstrated that alternative signal sequences such as UGUA upstream of the cleavage site can also direct cleavage and polyadenylation in the absence of the AAUAAA signal. These two signals are not mutually independent and often coexist. After the synthesis of the sequence elements, several multi-subunit proteins are transferred to the RNA molecule. These sequence-specific binding proteins, cleavage and polyadenylation specificity factor (CPSF), cleavage factor I (CF I) and cleavage stimulation factor (CStF), are transferred from RNA polymerase II . The three factors bind to the sequence elements. The AAUAAA signal is directly bound by CPSF. For UGUA-dependent processing sites, binding of the multi-protein complex is done by cleavage factor I (CF I). The resultant protein complex contains additional cleavage factors and the enzyme polyadenylate polymerase (PAP). This complex cleaves the RNA between the polyadenylation sequence and the GU-rich sequence at the cleavage site marked by the (5'-CA-3') sequences. Poly(A) polymerase then adds about 200 adenine units to the new 3' end of the RNA molecule using ATP as a precursor. As the poly(A) tail is synthesized, it binds multiple copies of poly(A)-binding protein , which protects the 3' end from ribonuclease digestion by enzymes including the CCR4-Not complex. [ 5 ]
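As a toy illustration of the 3' processing steps just described, the sketch below scans a hypothetical pre-mRNA string for the AAUAAA signal, cleaves just after a downstream CA dinucleotide, and appends a poly(A) tail. The input sequence, the 30-nucleotide search window and the tail length are assumptions chosen for illustration; real cleavage-site choice depends on the protein factors described above.

```python
# Minimal sketch of 3' end processing: find the AAUAAA polyadenylation signal,
# cleave just after a downstream CA dinucleotide, and append a poly(A) tail.

def polyadenylate(pre_mrna, tail_length=200):
    signal_pos = pre_mrna.find("AAUAAA")
    if signal_pos == -1:
        raise ValueError("no canonical AAUAAA signal found")
    # The cleavage site is typically a CA dinucleotide a short distance downstream.
    window = pre_mrna[signal_pos + 6 : signal_pos + 6 + 30]
    ca_offset = window.find("CA")
    if ca_offset == -1:
        raise ValueError("no CA cleavage site downstream of the signal")
    cleavage_site = signal_pos + 6 + ca_offset + 2   # cut just after the CA
    return pre_mrna[:cleavage_site] + "A" * tail_length

example = "AUGGCCUUAGGAAUAAAGCGUCAUCGGGUUGUGUGU"   # hypothetical pre-mRNA
print(polyadenylate(example, tail_length=20))
```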
RNA splicing is the process by which introns , regions of RNA that do not code for proteins, are removed from the pre-mRNA and the remaining exons connected to re-form a single continuous molecule. Exons are sections of mRNA which become "expressed" or translated into a protein. They are the coding portions of a mRNA molecule. [ 6 ] Although most RNA splicing occurs after the complete synthesis and end-capping of the pre-mRNA, transcripts with many exons can be spliced co-transcriptionally. [ 7 ] The splicing reaction is catalyzed by a large protein complex called the spliceosome assembled from proteins and small nuclear RNA molecules that recognize splice sites in the pre-mRNA sequence. Many pre-mRNAs, including those encoding antibodies , can be spliced in multiple ways to produce different mature mRNAs that encode different protein sequences . This process is known as alternative splicing , and allows production of a large variety of proteins from a limited amount of DNA.
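A minimal sketch of the outcome of splicing, assuming the exon boundaries are already known: the introns are dropped and the exons are joined into one continuous sequence. The pre-mRNA and coordinates below are invented for illustration; real splice-site recognition by the spliceosome is far more involved.

```python
def splice(pre_mrna, exon_coords):
    """Join exons given as (start, end) half-open intervals on the pre-mRNA."""
    return "".join(pre_mrna[start:end] for start, end in exon_coords)

#      exon 1     intron 1        exon 2     intron 2      exon 3
pre = "AUGGCU" + "GUAAGUUUUAG" + "GGCAAA" + "GUAUUCCAG" + "UAA"
exons = [(0, 6), (17, 23), (32, 35)]
print(splice(pre, exons))   # AUGGCUGGCAAAUAA
```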
Histones H2A, H2B, H3 and H4 form the core of a nucleosome and thus are called core histones . Processing of core histones is done differently because typical histone mRNA lacks several features of other eukaryotic mRNAs, such as a poly(A) tail and introns. Thus, such mRNAs do not undergo splicing and their 3' processing is done independently of most cleavage and polyadenylation factors. Core histone mRNAs have a special stem-loop structure at the 3' end that is recognized by a stem–loop binding protein, and a downstream sequence, called the histone downstream element (HDE), that recruits U7 snRNA . Cleavage and polyadenylation specificity factor 73 cuts the mRNA between the stem-loop and the HDE. [ 8 ]
Histone variants, such as H2A.Z or H3.3, however, have introns and are processed as normal mRNAs including splicing and polyadenylation. [ 8 ] | https://en.wikipedia.org/wiki/Post-transcriptional_modification |
Post-transcriptional regulation is the control of gene expression at the RNA level. It occurs once the RNA polymerase has been attached to the gene's promoter and is synthesizing the nucleotide sequence. Therefore, as the name indicates, it occurs between the transcription phase and the translation phase of gene expression. These controls are critical for the regulation of many genes across human tissues. [ 1 ] [ 2 ] It also plays a big role in cell physiology, being implicated in pathologies such as cancer and neurodegenerative diseases. [ 3 ]
After being produced, the stability and distribution of the different transcripts is regulated (post-transcriptional regulation) by means of RNA-binding proteins (RBPs) that control the various steps and rates of events such as alternative splicing , nuclear degradation ( exosome ), processing, nuclear export (three alternative pathways), sequestration in P-bodies for storage or degradation and ultimately translation . These proteins act through an RNA recognition motif (RRM) that binds a specific sequence or secondary structure of the transcripts, typically in the 5' and 3' UTRs of the transcript. In addition, double-stranded RNA sequences, which are processed into siRNA within the organism, can base-pair with complementary RNA to inhibit gene expression in the cell.
Modulating the capping, splicing, addition of a Poly(A) tail , the sequence-specific nuclear export rates and in several contexts sequestration of the RNA transcript occurs in eukaryotes but not in prokaryotes . This modulation is a result of a protein or transcript which in turn is regulated and may have an affinity for certain sequences.
Transcription attenuation is a type of prokaryotic regulation that happens only under certain conditions. This process occurs at the beginning of RNA transcription and causes the RNA chain to terminate before gene expression. [ 5 ] Transcription attenuation is caused by the incorrect formation of a nascent RNA chain. This nascent RNA chain adopts an alternative secondary structure that does not interact appropriately with the RNA polymerase . [ 1 ] In order for gene expression to proceed, regulatory proteins must bind to the RNA chain and remove the attenuation, which is costly for the cell. [ 1 ] [ 6 ]
In prokaryotes there are two mechanisms of transcription attenuation. These two mechanisms are intrinsic termination and factor-dependent termination.
- In the intrinsic termination mechanism , also known as Rho-independent termination , the RNA chain forms a stable hairpin structure at the 3' end of the gene that causes the RNA polymerase to stop transcribing. [ 6 ] The stem-loop is followed by a run of U's (poly-U tail) which stalls the polymerase, so the RNA hairpin has enough time to form. The polymerase then dissociates because of the weak base-pairing between the poly-U tail of the transcript RNA and the poly-A sequence of the DNA template, causing the mRNA to be prematurely released. This process inhibits transcription. [ 7 ] This mechanism is called Rho-independent because it does not require any additional protein factor, as factor-dependent termination does, making it a simpler way for the cell to regulate gene transcription. [ 7 ] Some examples of bacteria where this type of regulation predominates are Neisseria, Psychrobacter and Pasteurellaceae , as well as the majority of bacteria in the Firmicutes phylum. [ 7 ] [ 6 ]
- In factor-dependent termination , a protein factor complex containing the Rho factor binds to a segment of the RNA transcript. The Rho complex then searches in the 3' direction for a paused RNA polymerase. If the polymerase is found, the process immediately stops, which results in the abortion of RNA transcription. [ 5 ] [ 6 ] Even though this system is not as common as the one described above, some bacteria use this type of termination, for example at the tna operon in E. coli . [ 7 ]
This type of regulation is not efficient in eukaryotes because transcription occurs in the nucleus while translation occurs in the cytoplasm. The mechanism therefore cannot be coupled and does not operate as it would if both processes occurred in the cytoplasm. [ 8 ]
MicroRNAs (miRNAs) appear to regulate the expression of more than 60% of protein coding genes of the human genome. [ 9 ] If an miRNA is abundant it can behave as a "switch", turning some genes on or off. [ 10 ] However, altered expression of many miRNAs only leads to a modest 1.5- to 4-fold change in protein expression of their target genes. [ 10 ] Individual miRNAs often repress several hundred target genes. [ 9 ] [ 11 ] Repression usually occurs either through translational silencing of the mRNA or through degradation of the mRNA, via complementary binding, mostly to specific sequences in the 3' untranslated region of the target gene's mRNA. [ 12 ] The mechanism of translational silencing or degradation of mRNA is implemented through the RNA-induced silencing complex (RISC).
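A minimal sketch of seed-based target scanning, which underlies the complementary binding described above: the reverse complement of the miRNA "seed" (roughly nucleotides 2-8) is searched for in a 3' UTR. The miRNA and UTR strings below are illustrative only, and real target prediction tools weigh many additional features such as site context and conservation.

```python
# Naive miRNA seed-match scan over a 3' UTR sequence.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna, utr, seed_span=(1, 8)):
    seed = mirna[seed_span[0]:seed_span[1]]                 # nucleotides 2-8 of the miRNA
    site = "".join(COMPLEMENT[b] for b in reversed(seed))   # reverse complement of the seed
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna_example = "UGAGGUAGUAGGUUGUAUAGUU"      # let-7-like sequence, for illustration
utr_example = "AAAACUACCUCAGGGAAACUACCUCAUUU"  # hypothetical 3' UTR
print(seed_match_sites(mirna_example, utr_example))   # start positions of candidate sites
```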
RNA-binding proteins (RBPs) assemble dynamically with mRNAs to form messenger ribonucleoprotein complexes (mRNPs). [ 13 ] These complexes are essential for the regulation of gene expression, ensuring that all the steps are performed correctly throughout the whole process. Therefore, they are important control factors for protein levels and cell phenotypes. Moreover, they affect mRNA stability by regulating its conformation in response to the environment, stress or extracellular signals. [ 13 ] Their ability to bind and control such a wide variety of RNA targets allows them to form complex regulatory networks (PTRNs). These networks make it challenging to study each RNA-binding protein individually. [ 3 ] Owing to new methodological advances, the identification of RBPs is steadily expanding, demonstrating that they belong to broad families of proteins. RBPs can significantly impact multiple biological processes and have to be expressed at accurate levels. [ 7 ] Overexpression can change the mRNA target rate, causing binding to low-affinity RNA sites and deleterious effects on cellular fitness, while insufficient synthesis is also problematic because it can lead to cell death. Therefore, RBPs are regulated via auto-regulation , so they are in control of their own actions. Furthermore, they use both negative feedback , to maintain homeostasis, and positive feedback , to create binary genetic changes in the cell. [ 14 ]
In metazoans and bacteria, many genes involved in post-transcriptional regulation are themselves regulated post-transcriptionally. [ 15 ] [ 16 ] [ 17 ] For Drosophila RBPs associated with splicing or nonsense-mediated decay, analyses of protein-protein and protein-RNA interaction profiles have revealed ubiquitous interactions with the RNA and protein products of the same gene. [ 17 ] It remains unclear whether these observations are driven by ribosome-proximal or ribosome-mediated contacts, or whether some protein complexes, particularly RNPs, undergo co-translational assembly.
This area of study has recently gained more importance due to the increasing evidence that post-transcriptional regulation plays a larger role than previously expected. Even though proteins with DNA-binding domains are more abundant than proteins with RNA-binding domains, a study by Cheadle et al. (2005) showed that during T-cell activation 55% of significant changes at the steady-state level had no corresponding changes at the transcriptional level, meaning they were a result of stability regulation alone. [ 19 ]
Furthermore, RNA found in the nucleus is more complex than that found in the cytoplasm: more than 95% (bases) of the RNA synthesized by RNA polymerase II never reaches the cytoplasm . The main reason for this is due to the removal of introns which account for 80% of the total bases. [ 20 ] Some studies have shown that even after processing the levels of mRNA between the cytoplasm and the nucleus differ greatly. [ 21 ]
Developmental biology is a good source of models of regulation, but due to technical difficulties it was easier to determine transcription factor cascades than regulation at the RNA level. In fact several key genes such as nanos are known to bind RNA, but often their targets are unknown. [ 22 ] Although RNA-binding proteins may post-transcriptionally regulate a large fraction of the transcriptome, the targeting of a single gene is of interest to the scientific community for medical reasons; RNA interference and microRNAs are both examples of post-transcriptional regulation that direct the destruction of RNA and can change chromatin structure. To study post-transcriptional regulation several techniques are used, such as RIP-Chip (RNA immunoprecipitation on chip). [ 23 ]
Deficiency of expression of a DNA repair gene occurs in many cancers (see DNA repair defect and cancer risk and microRNA and DNA repair ). Altered microRNA (miRNA) expression that either decreases accurate DNA repair or increases inaccurate microhomology-mediated end joining (MMEJ) DNA repair is often observed in cancers. Deficiency of accurate DNA repair may be a major source of the high frequency of mutations in cancer (see mutation frequencies in cancers ). Repression of DNA repair genes in cancers by changes in the levels of microRNAs may be a more frequent cause of repression than mutation or epigenetic methylation of DNA repair genes.
For instance, BRCA1 is employed in the accurate homologous recombinational repair (HR) pathway. Deficiency of BRCA1 can cause breast cancer. [ 24 ] Down-regulation of BRCA1 due to mutation occurs in about 3% of breast cancers. [ 25 ] Down-regulation of BRCA1 due to methylation of its promoter occurs in about 14% of breast cancers. [ 26 ] However, increased expression of miR-182 down-regulates BRCA1 mRNA and protein expression, [ 27 ] and increased miR-182 is found in 80% of breast cancers. [ 28 ]
In another example, a mutated constitutively (persistently) expressed version of the oncogene c-Myc is found in many cancers. Among many functions, c-Myc negatively regulates microRNAs miR-150 and miR-22. These microRNAs normally repress expression of two genes essential for MMEJ, Lig3 and Parp1 , thereby inhibiting this inaccurate, mutagenic DNA repair pathway. Muvarak et al. [ 29 ] showed, in leukemias, that constitutive expression of c-Myc, leading to down-regulation of miR-150 and miR-22, allowed increased expression of Lig3 and Parp1 . This generates genomic instability through increased inaccurate MMEJ DNA repair, and likely contributes to progression to leukemia.
To show the frequent ability of microRNAs to alter DNA repair expression, Hatano et al. [ 30 ] performed a large screening study, in which 810 microRNAs were transfected into cells that were then subjected to ionizing radiation (IR). For 324 of these microRNAs, DNA repair was reduced (cells were killed more efficiently by IR) after transfection. For a further 75 microRNAs, DNA repair was increased, with less cell death after IR. This indicates that alterations in microRNAs may often down-regulate DNA repair, a likely important early step in progression to cancer. | https://en.wikipedia.org/wiki/Post-transcriptional_regulation |
In molecular biology , post-translational modification ( PTM ) is the covalent modification of proteins following protein biosynthesis . PTMs may involve enzymes or occur spontaneously. Proteins are created by ribosomes , which translate mRNA into polypeptide chains , which may then undergo PTM to form the mature protein product. PTMs are important components in cell signalling , as for example when prohormones are converted to hormones .
Post-translational modifications can occur on the amino acid side chains or at the protein's C- or N- termini. [ 1 ] They can expand the chemical repertoire of the 22 amino acids by changing an existing functional group or adding a new one such as phosphate. Phosphorylation is highly effective for controlling enzyme activity and is the most common post-translational change. [ 2 ] Many eukaryotic and prokaryotic proteins also have carbohydrate molecules attached to them in a process called glycosylation , which can promote protein folding and improve stability as well as serving regulatory functions. Attachment of lipid molecules, known as lipidation , often targets a protein or part of a protein to the cell membrane .
Other forms of post-translational modification consist of cleaving peptide bonds , as in processing a propeptide to a mature form or removing the initiator methionine residue. The formation of disulfide bonds from cysteine residues may also be referred to as a post-translational modification. [ 3 ] For instance, the peptide hormone insulin is cut twice after disulfide bonds are formed, and a propeptide is removed from the middle of the chain; the resulting protein consists of two polypeptide chains connected by disulfide bonds.
Some types of post-translational modification are consequences of oxidative stress . Carbonylation is one example that targets the modified protein for degradation and can result in the formation of protein aggregates. [ 4 ] [ 5 ] Specific amino acid modifications can be used as biomarkers indicating oxidative damage. [ 6 ] PTMs and metal ions play a crucial and reciprocal role in regulating protein function, influencing cellular processes such as signal transduction and gene expression, with dysregulated interactions implicated in diseases like cancer and neurodegenerative disorders. [ 7 ]
Sites that often undergo post-translational modification are those that have a functional group that can serve as a nucleophile in the reaction: the hydroxyl groups of serine , threonine , and tyrosine ; the amine forms of lysine , arginine , and histidine ; the thiolate anion of cysteine ; the carboxylates of aspartate and glutamate ; and the N- and C-termini. In addition, although the amide of asparagine is a weak nucleophile, it can serve as an attachment point for glycans . Rarer modifications can occur at oxidized methionines and at some methylene groups in side chains. [ 8 ]
Post-translational modification of proteins can be experimentally detected by a variety of techniques, including mass spectrometry , Eastern blotting , and Western blotting .
Examples of non-enzymatic PTMs are glycation, glycoxidation, nitrosylation, oxidation, succination, and lipoxidation. [ 16 ]
In 2011, statistics of each post-translational modification experimentally and putatively detected have been compiled using proteome-wide information from the Swiss-Prot database. [ 25 ] The 10 most common experimentally found modifications were as follows: [ 26 ]
Some common post-translational modifications to specific amino-acid residues are shown below. Modifications occur on the side-chain unless indicated otherwise.
Protein sequences contain sequence motifs that are recognized by modifying enzymes, and which can be documented or predicted in PTM databases. With the large number of different modifications being discovered, there is a need to document this sort of information in databases. PTM information can be collected through experimental means or predicted from high-quality, manually curated data. Numerous databases have been created, often with a focus on certain taxonomic groups (e.g. human proteins) or other features.
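As a simple example of the kind of sequence motif such databases record, the sketch below scans a protein sequence for the classic N-glycosylation sequon N-X-S/T (where X is any residue except proline). The protein sequence is hypothetical, and matching the sequon does not guarantee that a site is actually modified.

```python
# Scan a protein sequence for candidate N-glycosylation sequons (N-X-S/T, X != P).

import re

SEQUON = re.compile(r"N[^P][ST]")

def glycosylation_sites(protein):
    # returns 1-based positions of the asparagine in each candidate sequon
    return [m.start() + 1 for m in SEQUON.finditer(protein)]

protein_example = "MKTLNVSAAGNPTQWLNASHHNETK"   # hypothetical sequence
print(glycosylation_sites(protein_example))     # [5, 17, 22]
```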
List of software for visualization of proteins and their PTMs | https://en.wikipedia.org/wiki/Post-translational_modification |
Post-translational regulation refers to the control of the levels of active protein.
There are several forms. [ 1 ]
It is performed either by means of reversible events ( posttranslational modifications , such as phosphorylation or sequestration) or by means of irreversible events ( proteolysis ). | https://en.wikipedia.org/wiki/Post-translational_regulation |
In computational complexity theory , PostBQP is a complexity class consisting of all of the computational problems solvable in polynomial time on a quantum Turing machine with postselection and bounded error (in the sense that the algorithm is correct at least 2/3 of the time on all inputs).
Postselection is not considered to be a feature that a realistic computer (even a quantum one) would possess, but nevertheless postselecting machines are interesting from a theoretical perspective.
Removing either one of the two main features (quantumness, postselection) from PostBQP gives the following two complexity classes, both of which are subsets of PostBQP :
The addition of postselection seems to make quantum Turing machines much more powerful: Scott Aaronson proved [ 2 ] [ 3 ] PostBQP is equal to PP , a class which is believed to be relatively powerful, whereas BQP is not known even to contain the seemingly smaller class NP . Using similar techniques, Aaronson also proved that small changes to the laws of quantum computing would have significant effects. As specific examples, under either of the two following changes, the "new" version of BQP would equal PP :
In order to describe some of the properties of PostBQP we fix a formal way of describing quantum postselection. Define a quantum algorithm to be a family of quantum circuits (specifically, a uniform circuit family ). We designate one qubit as the postselection qubit P and another as the output qubit Q . Then PostBQP is defined by postselecting upon the event that the postselection qubit is | 1 ⟩ {\displaystyle |1\rangle } . Explicitly, a language L is in PostBQP if there is a quantum algorithm A so that after running A on input x and measuring the two qubits P and Q ,
One can show that allowing a single postselection step at the end of the algorithm (as described above) or allowing intermediate postselection steps during the algorithm are equivalent. [ 2 ] [ 4 ]
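The postselection step in this definition can be illustrated numerically: given a final state vector, discard the outcomes in which the postselection qubit P is |0⟩ and compute the conditional probability that the output qubit Q is |1⟩. The sketch below does this for an arbitrary 3-qubit state; the state and the qubit labelling are assumptions made only for illustration.

```python
import numpy as np

def postselect_output_prob(state, n_qubits, p, q):
    """Pr[Q = 1 | P = 1] for a state vector; qubit 0 is the most significant bit."""
    probs = np.abs(state) ** 2
    bit = lambda index, k: (index >> (n_qubits - 1 - k)) & 1
    pr_p1 = sum(probs[i] for i in range(len(state)) if bit(i, p) == 1)
    if pr_p1 == 0:
        raise ValueError("postselection event has probability zero")
    pr_p1_q1 = sum(probs[i] for i in range(len(state))
                   if bit(i, p) == 1 and bit(i, q) == 1)
    return pr_p1_q1 / pr_p1

# Arbitrary normalised 3-qubit state; we label qubit 0 as P and qubit 2 as Q.
psi = np.array([0.1, 0.2, 0.3, 0.1, 0.5, 0.4, 0.2, 0.6], dtype=complex)
psi = psi / np.linalg.norm(psi)
print(postselect_output_prob(psi, n_qubits=3, p=0, q=2))
```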
Here are three basic properties of PostBQP (which also hold for BQP via similar proofs):
More generally, combinations of these ideas show that PostBQP is closed under union and BQP truth-table reductions.
Scott Aaronson showed [ 5 ] that the complexity classes P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} (postselected bounded error quantum polynomial time) and PP (probabilistic polynomial time) are equal. The result was significant because this quantum computation reformulation of P P {\displaystyle {\mathsf {PP}}} gave new insight and simpler proofs of properties of P P {\displaystyle {\mathsf {PP}}} .
The usual definition of a PostBQP circuit family is one with two designated qubits P (postselection) and Q (output), with a single measurement of P and Q at the end, such that the probability of measuring P = 1 is nonzero, the conditional probability Pr[ Q = 1| P = 1] ≥ 2/3 if the input x is in the language, and Pr[ Q = 0| P = 1] ≥ 2/3 if the input x is not in the language. For technical reasons we tweak the definition of PostBQP as follows: we require that Pr[ P = 1] ≥ 2^(−n^c) for some constant c depending on the circuit family. Note this choice does not affect the basic properties of PostBQP, and also it can be shown that any computation consisting of typical gates (e.g. Hadamard, Toffoli) has this property whenever Pr[ P = 1] > 0 .
Suppose we are given a P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} family of circuits to decide a language L . We assume without loss of generality (e.g. see the inessential properties of quantum computers ) that all gates have transition matrices that are represented with real numbers, at the expense of adding one more qubit.
Let Ψ denote the final quantum state of the circuit before the postselecting measurement is made. The overall goal of the proof is to construct a P P {\displaystyle {\mathsf {PP}}} algorithm to decide L . More specifically it suffices to have L correctly compare the squared amplitude of Ψ in the states with Q = 1, P = 1 to the squared amplitude of Ψ in the states with Q = 0, P = 1 to determine which is bigger. The key insight is that the comparison of these amplitudes can be transformed into comparing the acceptance probability of a P P {\displaystyle {\mathsf {PP}}} machine with 1/2.
Let n denote the input size, B = B ( n ) denote the total number of qubits in the circuit (inputs, ancillary, output and postselection qubits), and G = G ( n ) denote the total number of gates.
Represent the i th gate by its transition matrix A i (a real unitary 2 B × 2 B {\displaystyle 2^{B}\times 2^{B}} matrix) and let the initial state be | x ⟩ {\displaystyle |x\rangle } (padded with zeroes). Then Ψ = A G A G − 1 ⋯ A 2 A 1 | x ⟩ {\displaystyle \Psi =A^{G}A^{G-1}\dotsb A^{2}A^{1}|x\rangle } . Define S 1 (resp. S 0 ) to be the set of basis states corresponding to P = 1, Q = 1 (resp. P = 1, Q = 0 ) and define the probabilities
The definition of P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} ensures that either π 1 ≥ 2 π 0 {\displaystyle \pi _{1}\geq 2\pi _{0}} or π 0 ≥ 2 π 1 {\displaystyle \pi _{0}\geq 2\pi _{1}} according to whether x is in L or not.
Our P P {\displaystyle {\mathsf {PP}}} machine will compare π 1 {\displaystyle \pi _{1}} and π 0 {\displaystyle \pi _{0}} . In order to do this, we expand the definition of matrix multiplication:
where the sum is taken over all lists of G basis vectors α i {\displaystyle \alpha _{i}} . Now π 1 {\displaystyle \pi _{1}} and π 0 {\displaystyle \pi _{0}} can be expressed as a sum of pairwise products of these terms. Intuitively, we want to design a machine whose acceptance probability is something like 1 2 ( 1 + π 1 − π 0 ) {\displaystyle {\tfrac {1}{2}}(1+\pi _{1}-\pi _{0})} , since then x ∈ L {\displaystyle x\in L} would imply that the acceptance probability is 1 2 ( 1 + π 1 − π 0 ) > 1 2 {\displaystyle {\tfrac {1}{2}}(1+\pi _{1}-\pi _{0})>{\tfrac {1}{2}}} , while x ∉ L {\displaystyle x\not \in L} would imply that the acceptance probability is 1 2 ( 1 + π 1 − π 0 ) < 1 2 {\displaystyle {\tfrac {1}{2}}(1+\pi _{1}-\pi _{0})<{\tfrac {1}{2}}} .
The definition of P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} tells us that π 1 ≥ 2 3 ( π 0 + π 1 ) {\displaystyle \pi _{1}\geq {\tfrac {2}{3}}(\pi _{0}+\pi _{1})} if x is in L , and that otherwise π 0 ≥ 2 3 ( π 0 + π 1 ) {\displaystyle \pi _{0}\geq {\tfrac {2}{3}}(\pi _{0}+\pi _{1})} . Let us replace all entries of A by the nearest fraction with denominator 2 f ( n ) {\displaystyle 2^{f(n)}} for a large polynomial f ( n ) {\displaystyle f(n)} that we presently describe. What will be used later is that the new π values satisfy π 1 > 1 2 ( π 0 + π 1 ) {\displaystyle \pi _{1}>{\tfrac {1}{2}}(\pi _{0}+\pi _{1})} if x is in L , and π 0 > 1 2 ( π 0 + π 1 ) {\displaystyle \pi _{0}>{\tfrac {1}{2}}(\pi _{0}+\pi _{1})} if x is not in L . Using the earlier technical assumption and by analyzing how the 1-norm of the computational state changes, this is seen to be satisfied if ( 1 + 2 − f ( n ) 2 B ) G − 1 < 1 6 2 − n c , {\displaystyle (1+2^{-f(n)}2^{B})^{G}-1<{\tfrac {1}{6}}2^{-n^{c}},} thus clearly there is a large enough f that is polynomial in n .
Now we provide the detailed implementation of our P P {\displaystyle {\mathsf {PP}}} machine. Let α denote the sequence { α i } i = 1 G {\displaystyle \{\alpha _{i}\}_{i=1}^{G}} and define the shorthand notation
then
We define our P P {\displaystyle {\mathsf {PP}}} machine to
Then it is straightforward to compute that this machine accepts with probability 1 2 + π 1 − π 0 2 1 + B ( n ) + 2 B ( n ) G ( n ) , {\displaystyle {\frac {1}{2}}+{\frac {\pi _{1}-\pi _{0}}{2^{1+B(n)+2B(n)G(n)}}},} so this is a P P {\displaystyle {\mathsf {PP}}} machine for the language L , as needed.
Suppose we have a P P {\displaystyle {\mathsf {PP}}} machine with time complexity T := T ( n ) {\displaystyle T:=T(n)} on input x of length n := | x | {\displaystyle n:=|x|} . Thus the machine flips a coin at most T times during the computation. We can thus view the machine as a deterministic function f (implemented, e.g. by a classical circuit) which takes two inputs ( x, r ) where r , a binary string of length T , represents the results of the random coin flips that are performed by the computation, and the output of f is 1 (accept) or 0 (reject). The definition of P P {\displaystyle {\mathsf {PP}}} tells us that
Thus, we want a P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} algorithm that can determine whether the above statement is true.
Define s to be the number of random strings which lead to acceptance,
and so 2 T − s {\displaystyle 2^{T}-s} is the number of rejected strings.
It is straightforward to argue that without loss of generality, s ∉ { 0 , 2 T / 2 , 2 T } {\displaystyle s\not \in \{0,2^{T}/2,2^{T}\}} ; for details, see a similar without loss of generality assumption in the proof that P P {\displaystyle {\mathsf {PP}}} is closed under complementation .
Aaronson's algorithm for solving this problem is as follows. For simplicity, we will write all quantum states as unnormalized. First, we prepare the state | x ⟩ ⊗ ∑ r ∈ { 0 , 1 } T | r ⟩ | f ( x , r ) ⟩ {\displaystyle |x\rangle \otimes \sum _{r\in \{0,1\}^{T}}|r\rangle |f(x,r)\rangle } . Second, we apply Hadamard gates to the second register (each of the first T qubits), measure the second register and postselect on it being the all-zero string. It is easy to verify that this leaves the last register (the last qubit) in the residual state
Where H denotes the Hadamard gate, we compute the state
Where α , β {\displaystyle \alpha ,\beta } are positive real numbers to be chosen later with α 2 + β 2 = 1 {\displaystyle \alpha ^{2}+\beta ^{2}=1} , we compute the state α | 0 ⟩ | ψ ⟩ + β | 1 ⟩ | H ψ ⟩ {\displaystyle \alpha |0\rangle |\psi \rangle +\beta |1\rangle |H\psi \rangle } and measure the second qubit, postselecting on its value being equal to 1, which leaves the first qubit in a residual state depending on β / α {\displaystyle \beta /\alpha } which we denote
Visualizing the possible states of a qubit as a circle, we see that if s > 2 T − 1 {\displaystyle s>2^{T-1}} , (i.e. if x ∈ L {\displaystyle x\in L} ) then ϕ β / α {\displaystyle \phi _{\beta /\alpha }} lies in the open quadrant Q a c c := ( − | 1 ⟩ , | 0 ⟩ ) {\displaystyle Q_{acc}:=(-|1\rangle ,|0\rangle )} while if s < 2 T − 1 {\displaystyle s<2^{T-1}} , (i.e. if x ∉ L {\displaystyle x\not \in L} ) then ϕ β / α {\displaystyle \phi _{\beta /\alpha }} lies in the open quadrant Q r e j := ( | 0 ⟩ , | 1 ⟩ ) {\displaystyle Q_{rej}:=(|0\rangle ,|1\rangle )} . In fact for any fixed x (and its corresponding s ), as we vary the ratio β / α {\displaystyle \beta /\alpha } in ( 0 , ∞ ) {\displaystyle (0,\infty )} , note that the image of | ϕ β / α ⟩ {\displaystyle |\phi _{\beta /\alpha }\rangle } is precisely the corresponding open quadrant. In the rest of the proof, we make precise the idea that we can distinguish between these two quadrants.
Let | + ⟩ = ( | 1 ⟩ + | 0 ⟩ ) / 2 {\displaystyle |+\rangle =(|1\rangle +|0\rangle )/{\sqrt {2}}} , which is the center of Q r e j {\displaystyle Q_{rej}} , and let | − ⟩ {\displaystyle |-\rangle } be orthogonal to | + ⟩ {\displaystyle |+\rangle } . Any qubit in Q a c c {\displaystyle Q_{acc}} , when measured in the basis { | + ⟩ , | − ⟩ } {\displaystyle \{|+\rangle ,|-\rangle \}} , gives the value | + ⟩ {\displaystyle |+\rangle } less than 1/2 of the time. On the other hand, if x ∉ L {\displaystyle x\not \in L} and we had picked β / α = r ∗ := 2 s / ( 2 T − 2 s ) {\displaystyle \beta /\alpha =r^{*}:={\sqrt {2}}s/(2^{T}-2s)} then measuring | ϕ β / α ⟩ {\displaystyle |\phi _{\beta /\alpha }\rangle } in the basis { | + ⟩ , | − ⟩ } {\displaystyle \{|+\rangle ,|-\rangle \}} would give the value | + ⟩ {\displaystyle |+\rangle } all of the time. Since we don't know s we also don't know the precise value of r* , but we can try several (polynomially many) different values for β / α {\displaystyle \beta /\alpha } in hopes of getting one that is "near" r* .
Specifically, note 2 − T < r ∗ < 2 T {\displaystyle 2^{-T}<r*<2^{T}} and let us successively set β / α {\displaystyle \beta /\alpha } to every value of the form 2 i {\displaystyle 2^{i}} for − T ≤ i ≤ T {\displaystyle -T\leq i\leq T} . Then elementary calculations show that for one of these values of i , the probability that the measurement of | ϕ 2 i ⟩ {\displaystyle |\phi _{2^{i}}\rangle } in the basis { | + ⟩ , | − ⟩ } {\displaystyle \{|+\rangle ,|-\rangle \}} yields | + ⟩ {\displaystyle |+\rangle } is at least ( 3 + 2 2 ) / 6 ≈ 0.971. {\displaystyle (3+2{\sqrt {2}})/6\approx 0.971.}
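This bound can be checked numerically. The sketch below assumes the residual state produced by the construction has amplitudes proportional to α·s on |0⟩ and β·(2^T − 2s)/√2 on |1⟩ (a form consistent with the text above, since it reduces to |+⟩ exactly when β/α = r*). It scans β/α over the powers of two from 2^(−T) to 2^T for the case x ∉ L and checks that the best probability of measuring |+⟩ is at least (3 + 2√2)/6. The values of T and s are arbitrary examples.

```python
from math import sqrt

def best_plus_probability(T, s):
    """Best probability of measuring |+> over beta/alpha = 2^i, i in [-T, T]."""
    assert 0 < s < 2 ** (T - 1)                        # the case x not in L
    best = 0.0
    for i in range(-T, T + 1):
        a = float(s)                                   # amplitude on |0> (alpha = 1)
        b = (2.0 ** i) * (2 ** T - 2 * s) / sqrt(2.0)  # amplitude on |1> (beta = 2^i)
        prob_plus = ((a + b) / sqrt(2.0)) ** 2 / (a * a + b * b)
        best = max(best, prob_plus)
    return best

bound = (3 + 2 * sqrt(2.0)) / 6                        # about 0.9714
for T, s in [(10, 3), (10, 511), (16, 20000)]:
    p = best_plus_probability(T, s)
    print(T, s, round(p, 4), p >= bound)               # prints True in each case
```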
Overall, the P o s t B Q P {\displaystyle {\mathsf {PostBQP}}} algorithm is as follows. Let k be any constant strictly between 1/2 and ( 3 + 2 2 ) / 6 {\displaystyle (3+2{\sqrt {2}})/6} .
We do the following experiment for each − T ≤ i ≤ T {\displaystyle -T\leq i\leq T} : construct and measure | ϕ 2 i ⟩ {\displaystyle |\phi _{2^{i}}\rangle } in the basis { | + ⟩ , | − ⟩ } {\displaystyle \{|+\rangle ,|-\rangle \}} a total of C log T {\displaystyle C\log T} times where C is a constant. If the proportion of | + ⟩ {\displaystyle |+\rangle } measurements is greater than k , then reject. If we don't reject for any i , accept. Chernoff bounds then show that for a sufficiently large universal constant C , we correctly classify x with probability at least 2/3.
Note that this algorithm satisfies the technical assumption that the overall postselection probability is not too small: each individual measurement of | ϕ 2 i ⟩ {\displaystyle |\phi _{2^{i}}\rangle } has postselection probability 1 / 2 O ( T ) {\displaystyle 1/2^{O(T)}} and so the overall probability is 1 / 2 O ( T 2 log T ) {\displaystyle 1/2^{O(T^{2}\log T)}} . | https://en.wikipedia.org/wiki/PostBQP |
A post is a main vertical or leaning support in a structure, similar to a column or pillar ; the term post generally refers to a timber member but may also be of metal or stone. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] A stud in wooden or metal building construction is similar but lighter duty than a post, and a strut may be similar to a stud or act as a brace. In the U.K. a strut may be very similar to a post but does not carry a beam. [ 8 ] In wood construction posts normally land on a sill , but in rare types of buildings the post may continue past the sill to the foundation (called an interrupted sill) or into the ground (called earthfast, post in ground , or posthole construction). A post is also a fundamental element in a fence . The terms "jack" and "cripple" are used with shortened studs and rafters but not posts, except in the specialized vocabulary of shoring .
Timber framing is a general term for building with wooden posts and beams. The term post is the namesake of other general names for timber framing such as post-and-beam and post-and-girt construction, of more specific types of timber framing such as post and lintel , post-frame , post in ground , and ridge-post construction, and of roof construction types such as king post , queen post , and crown post framing. A round post is often called a pole or mast depending on its diameter; hence pole building framing and the mast church .
Post Irradiation Examination (PIE) is the study of used nuclear materials such as nuclear fuel . It has several purposes. By examining used fuel, the failure modes which occur during normal use (and the manner in which the fuel will behave during an accident) can be studied. In addition, information is gained which enables the users of fuel to assure themselves of its quality, and it also assists in the development of new fuels. After major accidents the core (or what is left of it) is normally subject to PIE in order to find out what happened. One site where PIE is done is the ITU, which is the EU centre for the study of highly radioactive materials.
Materials in a high radiation environment (such as a reactor) can undergo unique behaviors such as swelling [ 1 ] and non-thermal creep. If there are nuclear reactions within the material (such as what happens in the fuel), the stoichiometry will also change slowly over time. These behaviors can lead to new material properties, cracking, and fission gas release:
As the fuel is degraded or heated the more volatile fission products which are trapped within the uranium dioxide may become free. [ 2 ]
As the fuel expands on heating, the core of the pellet expands more than the rim which may lead to cracking. Because of the thermal stress thus formed the fuel cracks, the cracks tend to go from the centre to the edge in a star shaped pattern.
In order to better understand and control these changes in materials, these behaviors are studied. Due to the intensely radioactive nature of the used fuel this is done in a hot cell . A combination of nondestructive and destructive methods of PIE is common.
In addition to the effects of radiation and the fission products on materials, scientists also need to consider the temperature of materials in a reactor, and in particular of the fuel. Excessively high fuel temperatures can compromise the fuel, and therefore it is important to control the temperature in order to control the fission chain reaction.
The temperature of the fuel varies as a function of the distance from the centre to the rim. At a distance x from the centre, the temperature T_x is described by the equation T_x = T_rim + ρ (r_pellet² − x²) / (4 K_f), where ρ is the power density (W m−3) and K_f is the thermal conductivity .
To illustrate this, a series of fuel pellets with a rim temperature of 200 °C (typical for a BWR ), different diameters and a power density of 250 W m−3 has been modeled using the above equation. Note that these fuel pellets are rather large; it is normal to use oxide pellets which are about 10 mm in diameter.
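A minimal sketch of this centre-to-rim profile, using the steady-state conduction relation T(x) = T_rim + ρ (r_pellet² − x²) / (4 K_f). The power density and thermal conductivity below are illustrative assumptions, not values taken from the article; the rim temperature is the 200 °C figure mentioned above.

```python
def pellet_temperature(x_m, r_pellet_m, rho_w_per_m3, k_w_per_m_k, t_rim_c=200.0):
    """Steady-state temperature (degrees C) at radius x inside a cylindrical pellet."""
    return t_rim_c + rho_w_per_m3 * (r_pellet_m ** 2 - x_m ** 2) / (4.0 * k_w_per_m_k)

r_pellet = 0.005      # m, a 10 mm diameter pellet as mentioned in the text
rho = 2.5e8           # W/m^3, assumed power density (illustrative only)
k_fuel = 3.0          # W/(m.K), assumed thermal conductivity of oxide fuel

for fraction in (0.0, 0.25, 0.5, 0.75, 1.0):
    x = fraction * r_pellet
    t = pellet_temperature(x, r_pellet, rho, k_fuel)
    print(f"x = {x * 1000:.2f} mm  T = {t:.0f} degC")
```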
Radiochemistry and Nuclear Chemistry, G. Choppin, J-O Liljenzin and J. Rydberg, 3rd Ed, 2002, Butterworth-Heinemann, ISBN 0-7506-7463-6 | https://en.wikipedia.org/wiki/Post_Irradiation_Examination |
The Post-War Building Studies are a set of technical reports published by the British Ministry of Works starting in 1944. The Directorate of Post-War Building was established in 1941 under Sir James West . The Directorate was charged with coordinating solutions for construction of housing to replace homes that had been destroyed as well as homes that had been deferred due to war. The Directorate reported to the Minister of Works, initially Lord Reith then later Lord Portal . The publications were produced by various committees (such as the Burt Committee ) composed of architects, engineers, and representatives from the building industry. [ 1 ] The studies standardized non-traditional methods of building construction including the use of pre-fabricated elements and poured concrete . A new standard system for wiring homes for electricity was described in report no. 11. The reports had a significant impact on the design and construction of buildings in the UK after the war and continue to be cited as references, though their recommendations on fire safety were later found to be insufficient as apartment buildings became taller. [ 2 ] However, the BS 1363 power socket (another product of the studies) has proved long-lasting and is still in use in British homes today. [ 3 ] [ 4 ] While not made part of mandatory building codes and regulations, the reports provided technical guidance and information on application of non-traditional building techniques and materials while overcoming material and labour shortages.
Based on the experience following World War I, it was expected that housing construction demand would be very high after WWII ended, both due to pent-up demand that had not been fulfilled and also due to replacement or repair of housing that had been bombed during the war. Labour and material were expected to be in short supply. Interest in industrial methods, pre-fabrication and new materials was high during the period between the wars, and such publications as the Tudor Walters Report of 1918 gave details on new methods of construction and new materials, including recommendations to improve construction efficiency by better site organisation, increased accuracy of cost accounting, and keeping building trade workers regularly employed. The Directorate of Post-War Building and the Directorate of Building Materials were established by the Ministry of Works. These groups took on research into new methods and published the Post War Building Studies in 33 volumes between 1944 and 1946. Experimental work was carried out at the Building Research Station and reported in the series. [ 5 ] | https://en.wikipedia.org/wiki/Post_War_Building_Studies |
Post and lintel (also called prop and lintel , a trabeated system , or a trilithic system ) is a building system where strong horizontal elements are held up by strong vertical elements with large spaces between them. This is usually used to hold up a roof, creating a largely open space beneath, for whatever use the building is designed. The horizontal elements are called by a variety of names including lintel , header, architrave or beam , and the supporting vertical elements may be called posts , columns , or pillars . The use of wider elements at the top of the post, called capitals , to help spread the load, is common to many architectural traditions.
In architecture, a post-and-lintel or trabeated system refers to the use of horizontal stone beams or lintels which are borne by columns or posts. The name is from the Latin trabs , beam ; influenced by trabeatus , clothed in the trabea , a ritual garment.
Post-and-lintel construction is one of four ancient structural methods of building, the others being the corbel , arch-and-vault , and truss . [ 1 ]
A noteworthy example of a trabeated system is in Volubilis , from the Roman era, where one side of the Decumanus Maximus is lined with trabeated elements, while the opposite side of the roadway is designed in arched style. [ 2 ]
The trabeated system is a fundamental principle of Neolithic architecture , ancient Indian architecture , ancient Greek architecture and ancient Egyptian architecture . Other trabeated styles are the Persian , Lycian, Japanese , traditional Chinese , and ancient Chinese architecture, especially in northern China, [ 3 ] and nearly all the Indian styles. [ 4 ] The traditions are represented in North and Central America by Mayan architecture , and in South America by Inca architecture . In all or most of these traditions, certainly in Greece and India, the earliest versions developed using wood, which were later translated into stone for larger and grander buildings. [ 5 ] Timber framing , also using trusses , remains common for smaller buildings such as houses to the modern day.
There are two main force effects acting upon the post and lintel system: weight-carrying compression at the joints between lintel and posts, and bending of the lintel induced by its self-weight and the load above acting over the span between the posts. The two posts are under compression from the weight of the lintel (or beam) above. The lintel sags in the middle under load, so that its underside is put under tension and its upper side under compression.
The biggest disadvantage to lintel construction is the limited weight that can be held up, and the resulting small distances required between the posts. Ancient Roman architecture 's development of the arch allowed for much larger structures to be constructed. The arcuated system spreads larger loads more effectively, and replaced the post-and-lintel system in most larger buildings and structures, until the introduction of steel girder beams and steel-reinforced concrete in the industrial era.
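The span limitation can be made concrete with elementary beam theory: for a simply supported lintel under a uniformly distributed load, the maximum bending moment is wL²/8 and the extreme-fibre stress is σ = Mc/I. The sketch below uses rough, assumed values for a rectangular stone lintel (section size, density, wall load) purely for illustration; stone typically tolerates only modest tensile stresses in flexure (on the order of a few to perhaps ten megapascals, depending on the stone), so the tensile stress quickly becomes limiting as the span grows.

```python
def max_bending_stress(span_m, width_m, depth_m, udl_n_per_m):
    """Extreme-fibre stress (Pa) in a simply supported rectangular beam under a UDL."""
    moment = udl_n_per_m * span_m ** 2 / 8.0           # N*m, maximum at midspan
    second_moment = width_m * depth_m ** 3 / 12.0      # m^4, rectangular section
    return moment * (depth_m / 2.0) / second_moment    # sigma = M*c/I

width, depth = 0.3, 0.3                              # m, assumed 300 mm x 300 mm section
stone_density = 2300.0                               # kg/m^3, assumed
self_weight = stone_density * 9.81 * width * depth   # N per metre of lintel
wall_load = 15000.0                                  # N/m, assumed masonry carried above

for span in (1.0, 2.0, 3.0, 4.0):
    sigma = max_bending_stress(span, width, depth, self_weight + wall_load)
    print(f"span {span:.0f} m -> max tensile stress {sigma / 1e6:.2f} MPa")
```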
As with the Roman temple portico front and its descendants in later classical architecture , trabeated features were often retained in parts of buildings as an aesthetic choice. The classical orders of Greek origin were in particular retained in buildings designed to impress, even though they usually had little or no structural role. [ 6 ]
The flexural strength of a stone lintel can be dramatically increased with the use of Post-tensioned stone . | https://en.wikipedia.org/wiki/Post_and_lintel |
A post in ground construction, also called earthfast [ 1 ] or hole-set posts , is a type of construction in which vertical, roof-bearing timbers, called posts , are in direct contact with the ground. They may be placed into excavated postholes , [ 2 ] driven into the ground, or on sills which are set on the ground without a foundation. Earthfast construction is common from the Neolithic period to the present and is used worldwide. Post-in-the-ground construction is sometimes called an "impermanent" form, used for houses which are expected to last a decade or two before a better quality structure can be built. [ 3 ]
Post in ground construction can also include sill on grade, wood-lined cellars , and pit houses . Most pre-historic and medieval wooden dwellings worldwide were built post in ground.
This type of construction is often believed [ by whom? ] to be an intermediate form between a palisade construction and a stave construction. Because the postholes are easily detected in archaeological surveys, they can be distinguished from the other two.
Post in ground was one of the timber construction methods used for French colonial structures in New France ; it was called poteaux-en-terre.
The Japanese also used a type of earthfast construction until the eighteenth century, which they call Hottate-bashira (literally "embedded pillars"). [ 4 ]
The Dogon people in Africa use post in ground construction for their toguna , community gathering places typically located in the center of villages for official and informal meetings.
In the historical region of New France in North America, poteaux-en-terre was a historic style of earthfast timber framing . This method is similar to poteaux-sur-sol , but the boulin (hewn posts) are planted in the ground rather than landing on a sill plate . The spaces between the boulin are filled with bousillage (reinforced mud) or pierrotage (stones and mud). Surviving examples of both types of structures can be found at Ste. Genevieve, Missouri . | https://en.wikipedia.org/wiki/Post_in_ground |
The postage stamp problem (also called the Frobenius Coin Problem and the Chicken McNugget Theorem [ 1 ] ) is a mathematical riddle that asks what is the smallest postage value which cannot be placed on an envelope, if the latter can hold only a limited number of stamps, and these may only have certain specified face values. [ 2 ]
For example, suppose the envelope can hold only three stamps, and the available stamp values are 1 cent, 2 cents, 5 cents, and 20 cents. Then the solution is 13 cents; since any smaller value can be obtained with at most three stamps (e.g. 4 = 2 + 2, 8 = 5 + 2 + 1, etc.), but to get 13 cents one must use at least four stamps.
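One way to find this value is an exhaustive search over every multiset of at most m stamps, the brute-force approach mentioned later in this article. The sketch below does exactly that and reproduces the answer of 13 cents for the example just given.

```python
from itertools import combinations_with_replacement

def smallest_unreachable(values, max_stamps):
    """Smallest amount not obtainable with at most max_stamps stamps from values."""
    reachable = {sum(combo)
                 for k in range(max_stamps + 1)
                 for combo in combinations_with_replacement(values, k)}
    amount = 1
    while amount in reachable:
        amount += 1
    return amount

print(smallest_unreachable([1, 2, 5, 20], 3))   # prints 13, matching the example above
```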
Mathematically, the problem can be formulated as follows:
This problem can be solved by brute force search or backtracking with maximum time proportional to |V|^m , where |V| is the number of distinct stamp values allowed. Therefore, if the capacity of the envelope m is fixed, it is a polynomial time problem. If the capacity m is arbitrary, the problem is known to be NP-hard . [ 2 ] | https://en.wikipedia.org/wiki/Postage_stamp_problem
Postal Telegraph Company (Postal Telegraph & Cable Corporation) was a major operator of telegraph networks in the United States prior to its consolidation with Western Union in 1943. [ 1 ] Postal partnered with Commercial Cable Company for overseas cable messaging.
Postal was founded in the 1880s by John William Mackay , an entrepreneur who had made a fortune in silver mining in the Comstock Lode . Mackay's original purpose was to provide a domestic wire network to directly link with the Atlantic Cable . Mackay built the Postal network by the purchase of existing insolvent firms. The company was initially called The Pacific Postal Telegraph Cable Co . [ 2 ] Under president Albert Brown Chandler , the Postal network was able to achieve sufficient economy of scale to compete with Western Union, occasionally controlling as much as 20% of the business. [ 1 ]
By 1893, the company's rate of growth had allowed it to become the only viable competitor to Western Union. It had grown so large that management had to move out of the company's New York City headquarters at 187 Broadway to accommodate more operations staff. [ 3 ] Chandler oversaw the design and construction of the Postal Telegraph Company Building , a new headquarters at 253 Broadway and Murray Street. [ 4 ]
In the film of William Saroyan 's The Human Comedy the story is mainly told through the eyes of a teenager working as a delivery boy for the Postal Telegraph Company, and some of the action takes place in the telegraph office. [ 5 ]
| https://en.wikipedia.org/wiki/Postal_Telegraph_Company
Postcibalome is the comprehensive array of biochemical and physiological responses that occur in the body, most notably in the blood , following the consumption of food . [ 1 ] This term encompasses the complex interplay of hormonal , nutritional , and metabolic changes that take place as the body processes food and returns to its fasting state. [ citation needed ] It includes fluctuations in hormones , nutrients , metabolites , and proteins , as well as stress responses associated with excessive food intake. The term "postcibalome" is derived from "postcibal," which combines the prefix "post-" (after) with the Latin word "cibus," meaning food. [ citation needed ]
The changes in blood composition after eating are intricate and can serve as important indicators of metabolic health. [ 2 ] Studies have shown that alterations in glucose and insulin levels are significant markers of metabolic dysfunction , with insulin resistance often signifying a risk for diabetes . [ 3 ] [ 4 ] Additionally, the gene expression and proteome of white blood cells, as well as the metabolome and proteome of the blood, exhibit dynamic changes in response to food intake. [ citation needed ] These collective fluctuations highlight the body's adaptive mechanisms in managing nutrient intake and maintaining metabolic balance. [ citation needed ]
| https://en.wikipedia.org/wiki/Postcibalome
Postcoital bleeding (PCB) is non- menstrual vaginal bleeding that occurs during or after sexual intercourse . [ 1 ] [ 2 ] Though some causes are associated with pain, it is typically painless and frequently associated with intermenstrual bleeding . [ 1 ]
The bleeding can be from the uterus, cervix, vagina and other tissue or organs located near the vagina. [ 3 ] Postcoital bleeding can be one of the first indications of cervical cancer. [ 4 ] [ 5 ] There are other reasons why vaginal bleeding may occur after intercourse. Some women will bleed after intercourse for the first time but others will not. The hymen may bleed if it is stretched since it is thin tissue. Other activities may have an effect on the vagina such as sports and tampon use. [ 6 ] Postcoital bleeding may stop without treatment. [ 7 ] In some instances, postcoital bleeding may resemble menstrual irregularities. [ 8 ] Postcoital bleeding may occur throughout pregnancy. The presence of cervical polyps may result in postcoital bleeding during pregnancy because the tissue of the polyps is more easily damaged. [ 9 ] Postcoital bleeding can be due to trauma after consensual and non-consensual sexual intercourse. [ 3 ] [ 10 ]
A diagnosis to determine the cause will include obtaining a medical history and assessing the symptoms. Treatment is not always necessary. [ 11 ]
Vaginal bleeding after sex is a symptom that can indicate:
Bleeding from hemorrhoids and vulvar lesions can be mistaken for postcoital bleeding. [ 3 ] Postcoital bleeding can occur with discharge, itching, or irritation. This may be due to Trichomonas or Candida . [ 12 ] A lack of estrogen can make vaginal tissue thinner and more susceptible to bleeding. Some have proposed that birth control pills may cause postcoital bleeding. [ 5 ]
Risk factors for developing postcoital bleeding are: low estrogen levels, rape and 'rough sex'. [ 3 ]
Tests and detailed examination are used to determine the cause of the bleeding:
A referral may be made to a specialist. [ 11 ] [ 15 ] Imaging may not be necessary. Cryotherapy has been used but is not recommended. [ 3 ]
Postcoital bleeding rarely is associated with gynecological cancer in young women and its incidence is projected to drop due to the widespread immunizations against HPV . Postcoital bleeding has been most studied in women in the US. In a large Taiwanese study, the overall incidence of postcoital bleeding was found to be 39-59 per 100,000 women. Those with postcoital bleeding had a higher risk of cervical dysplasia and cervical cancer . Benign causes of postcoital bleeding were associated with cervical erosion , ectropion, vaginitis and vulvovaginitis. Other associations were noted such as the presence of leukoplakia of the cervix, an intrauterine contraceptive device , cervical polyps , cervicitis , menopause , dyspareunia , and vulvodynia . [ 16 ] In Scotland approximately 1 in 600 women aged 20–24 experience unexplained bleeding. [ 5 ] A study of African women found that trauma from consensual sexual intercourse was a cause of postcoital bleeding in young women. [ 17 ]
Hymenorrhaphy is a controversial procedure to surgically repair a damaged hymen, thus restoring the appearance of virginity:
"From a Western-ethics perspective, the life-saving potential of the procedure is weighed against the role of the surgeon in directly assisting in a deception and in indirectly promoting cultural practices of sexual inequality. From an Islamic bioethical vantage point, jurists offer two opinions. The first is that the surgery is always impermissible. The second is that although the surgery is generally impermissible, it can become licit when the risks of not having postcoital bleeding harm are sufficiently great." [ 18 ] | https://en.wikipedia.org/wiki/Postcoital_bleeding |
The posterior compartment of the thigh is one of the fascial compartments that contains the knee flexors and hip extensors known as the hamstring muscles , as well as vascular and nervous elements, particularly the sciatic nerve .
The posterior compartment is a fascial compartment bounded by fascia . It is separated from the anterior compartment by two folds of deep fascia , known as the medial intermuscular septum and the lateral intermuscular septum . [ 1 ]
The muscles of the posterior compartment of the thigh are the: [ 2 ] [ 3 ]
These muscles (or their tendons), apart from the short head of the biceps femoris, are commonly known as the hamstrings . The depression at the back of the knee, or kneepit, is the popliteal fossa , colloquially called the ham . The tendons of the above muscles can be felt as prominent cords on both sides of the fossa—the biceps femoris tendon on the lateral side and the semimembranosus and semitendinosus tendons on the medial side. The hamstrings flex the knee, and aided by the gluteus maximus, they extend the hip during walking and running. The semitendinosus is named for its unusually long tendon. The semimembranosus is named for the flat shape of its superior attachment. [ 4 ]
The hamstrings are innervated by the sciatic nerve, specifically by a main branch of it: the tibial nerve . (The short head of the biceps femoris is innervated by the common fibular nerve ). The sciatic nerve runs along the longitudinal axis of the compartment, giving off these terminal branches close to the superior angle of the popliteal fossa.
The arteries that supply the posterior compartment of the thigh arise from the inferior gluteal and the perforating branches of the profunda femoris artery , [ 5 ] a major collateral branch of the femoral artery and part of the anterior compartment of the thigh . The femoral artery itself crosses the adductor hiatus to enter the posterior compartment at the level of the popliteal fossa, giving branches that supply the knee. This crossing marks the point at which the vessel changes its name to the popliteal artery .
As with any other fascial compartment, the posterior compartment of thigh can develop compartment syndrome when pressure builds up inside it, reducing the ability of arteries to transport blood to muscles and nerves. In acute cases, this is most frequently a consequence of trauma. [ 6 ] | https://en.wikipedia.org/wiki/Posterior_compartment_of_thigh |
The posterior median line is a sagittal line on the posterior torso at the midline.
A similar term is "vertebral line", which is defined by the spinous processes . [ 1 ] However, this term is not in Terminologia Anatomica .
| https://en.wikipedia.org/wiki/Posterior_median_line
Postglacial vegetation refers to plants that colonize the newly exposed substrate after a glacial retreat . [ 1 ] The term "postglacial" typically refers to processes and events that occur after the departure of glacial ice or glacial climates . [ 2 ]
Climate change is the main force behind changes in species distribution and abundance . Repeated changes in climate throughout the Quaternary Period are thought to have had a significant impact on the vegetation species diversity present today. [ 3 ] Functional and phylogenetic diversity are considered to be closely related to changing climatic conditions, which indicates that trait differences are extremely important in long-term responses to climate change. During the transition from the last glaciation of the Pleistocene to the Holocene, climate warming led to the expansion of taller and larger-seeded plants, which in turn lowered the proportion of regenerating vegetation. [ 4 ] Hence, low temperatures can be strong environmental filters that prevent tall and large-seeded plants from establishing in postglacial environments. [ 5 ] Throughout Europe, vegetation dynamics within the first half of the Holocene appear to have been influenced mainly by climate and the reorganization of atmospheric circulation associated with the disappearance of the North American ice sheet . This is evident in the rapid increase of forestation and changing biomes during the postglacial period between roughly 11,500 and 8,000 years before present. [ 6 ] [ 7 ] The vegetation development period of post-glacial landforms on Ellesmere Island , Northern Canada, is assumed to have been at least ca. 20,000 years in duration. This slow progression is mostly due to climatic restrictions such as an estimated annual rainfall of only 64 mm and a mean annual temperature of −19.7 °C. The length of vegetation development observed on Ellesmere Island is evidence that postglacial vegetation development is much more restricted in the Arctic and colder climates than in milder climatic regions such as the boreal, temperate and tropical zones. [ 8 ]
As land became exposed following the glaciation of the last ice age, a variety of geographic settings ranging from the tropics to the Arctic and Antarctic became available for the establishment of vegetation. Species that now exist on formerly glaciated terrain must have undergone a change in distribution of hundreds to thousands of kilometers, or have evolved from other taxa that once did so. [ 9 ] In a newly developing environment, plant growth is often strongly influenced by the introduction of new organisms into that environment, where competitive or mutualistic relationships may develop. Often, competitive balances are eventually reached and species abundances remain somewhat constant over a period of generations.
Studies done on the Norwegian island of Svalbard have been very useful in understanding the behavior of postglacial vegetation. Studies show that many vascular plants that are considered pioneers of vegetation development eventually become less frequent. For example, the abundance of species such as Braya purpurascens has fallen nearly 30% due to the introduction of new species in the area. [ 10 ]
Arctic vegetation has distinct postglacial development characteristics compared to the more temperate zones of lower latitudes. A study of postglacial moraines conducted in the Canadian Arctic on Ellesmere Island has found that dwarf shrubs of Dryas integrifolia and Cassiope tetragona are often good indicators of vegetation development and progression. Dwarf shrubs have been found to increase with the age of the moraine, with Dryas integrifolia becoming the most predominant. The cover of vegetation, including lichens and bryophytes, also showed a consistent increase with moraine age, suggesting directional vegetation development. [ 11 ] It is also suggested that part of the high proportion of polyploids occurring in arctic floras is the result of speciation as continental ice-sheets withdrew. [ 12 ] Pollen diagrams from northern Quebec, Canada , record the advance of post-glacial vegetation development throughout the Holocene. The initial phase of open vegetation began about 6000 years before the present. Following deglaciation, shrub and herbaceous tundra plants dominated for a brief period of time. Plants such as Larix laricina , Populus and Juniperus were also important in the initial vegetation development. Some species that followed later include Alnus crispa and Betula . Later vegetation development was dominated mainly by Picea , which reached its present-day limit shortly after deglaciation. Today black spruce is dominant throughout much of northern Quebec . [ 13 ] The continental U.S. is considered to have strongly contributed to the re-establishment of postglacial vegetation in Canada following the last ice age. Roughly 300 taxa of vascular plants and mosses that were found to have existed below the extent of the last glacial period within the United States were also found to have migrated to Canada. These patterns are recorded within either pollen or macro fossils. [ 14 ]
Studies by Reitalu (2015) have found that human impact throughout much of Europe has negatively influenced plant diversity by suppressing the establishment of tall-growing, large-seeded taxa. Although human influence has facilitated many ruderal species , this is believed to have led to an overall decrease in phylogenetic diversity . [ 15 ]
Many pollen diagrams around the world indicate that major climate changes caused the last continental ice sheets to retreat, leading to dramatic effects on the distribution and abundance of plants. [ 16 ] By converting pollen data into plant functional type (PFT) assemblages and interpolating the data, researchers have been able to reconstruct postglacial vegetation patterns around the world. [ 17 ] Core sampling and analysis of lake sediments that contain pollen and other plant remains are often used to obtain good records of past pollination cycles. Such paleorecords preserved in lake sediments can be used to reconstruct the history of postglacial vegetation. [ 18 ] Lake sediments have an advantage over other core sampling sites, such as fen and bog peats, as they provide no overwhelming local pollen components. As well, lake sediments contain stratigraphic changes in soil character, which are useful for understanding changes in vegetation development over a period of time. [ 19 ] Macrofossils that are obtained from sedimentary deposits are also useful for reconstructing the history of changing postglacial vegetation. [ 20 ] | https://en.wikipedia.org/wiki/Postglacial_vegetation
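The pollen-to-PFT conversion mentioned above can be sketched in a few lines of code. This is purely illustrative and not from the article: the taxon-to-PFT assignments and the sample counts below are hypothetical placeholders, whereas real reconstructions use published biomization schemes and calibrated pollen percentages.

```python
# Illustrative sketch: aggregating pollen counts into plant functional type
# (PFT) scores. Taxon assignments and counts are hypothetical placeholders.
from collections import defaultdict

# Hypothetical assignment of pollen taxa to PFTs.
TAXON_TO_PFT = {
    "Picea": "boreal evergreen conifer",
    "Larix": "boreal deciduous conifer",
    "Betula": "cold deciduous broadleaf",
    "Alnus": "cold deciduous broadleaf",
    "Cyperaceae": "tundra herb/shrub",
}

def pft_assemblage(pollen_counts):
    """Convert raw pollen counts per taxon into percentage scores per PFT."""
    total = sum(pollen_counts.values())
    scores = defaultdict(float)
    for taxon, count in pollen_counts.items():
        pft = TAXON_TO_PFT.get(taxon, "unassigned")
        scores[pft] += 100.0 * count / total
    return dict(scores)

# Hypothetical sample from one sediment-core depth (one time slice).
sample = {"Picea": 120, "Betula": 45, "Alnus": 20, "Cyperaceae": 15}
print(pft_assemblage(sample))
```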
In computers and technology , a postmaster is the administrator of a mail server . Nearly every domain should have the e-mail address postmaster (e.g., postmaster@example.com) to which errors in e-mail processing are directed. Error e-mails automatically generated by mail servers' MTAs usually appear to have been sent to the postmaster address.
Every domain that supports the SMTP protocol for electronic mail is required by RFC 5321 [ 1 ] and, as early as 1982, by RFC 822, [ 2 ] to have the postmaster address.
Quoting from the RFC:
Any system that includes an SMTP server supporting mail relaying or delivery MUST support the reserved mailbox "postmaster" as a case-insensitive local name. This postmaster address is not strictly necessary if the server always returns 554 on connection opening (as described in section 3.1). [ 3 ] The requirement to accept mail for postmaster implies that RCPT commands which specify a mailbox for postmaster at any of the domains for which the SMTP server provides mail service, as well as the special case of "RCPT TO:<Postmaster>" (with no domain specification), MUST be supported.
SMTP systems are expected to make every reasonable effort to accept mail directed to Postmaster from any other system on the Internet. In extreme cases (such as to contain a denial of service attack or other breach of security) an SMTP server may block mail directed to Postmaster. However, such arrangements SHOULD be narrowly tailored so as to avoid blocking messages which are not part of such attacks.
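As a rough illustration of the requirement quoted above (this sketch is not part of the RFC or the article), the following Python snippet probes whether a mail exchanger accepts RCPT TO for the postmaster mailbox of a domain. The host and domain names are placeholders, and such a probe should only be run against servers you operate.

```python
# Minimal sketch: check whether an SMTP server accepts mail addressed to
# postmaster@domain, as RFC 5321 requires. Host/domain below are placeholders.
import smtplib

def accepts_postmaster(mx_host: str, domain: str) -> bool:
    """Return True if mx_host accepts RCPT TO:<postmaster@domain>."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.ehlo()
        smtp.mail("probe@example.com")            # envelope sender (placeholder)
        code, _ = smtp.rcpt(f"postmaster@{domain}")
        smtp.rset()                               # abort; nothing is actually sent
        return 200 <= code < 300                  # e.g. 250/251 indicate acceptance

# Hypothetical usage:
# print(accepts_postmaster("mail.example.com", "example.com"))
```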
Since most domains have a postmaster address, it is commonly targeted by spamming operations. Even if not directly spammed, a postmaster address may be sent bounced spam from other servers that mistakenly trust fake return-paths commonly used in spam.
| https://en.wikipedia.org/wiki/Postmaster_(computing)
Postmenopausal confusion , also commonly referred to as postmenopausal brain fog , is a group of symptoms of menopause in which women report problems with cognition at a higher frequency during postmenopause than before. [ 1 ] [ 2 ]
Multiple studies on cognitive performance following menopause have reported noticeable declines in greater than 60% of the patients. [ 3 ] [ 4 ] The common issues presented included impairments in reaction time and attention, difficulty recalling numbers or words, and forgetting reasons for involvement in certain behaviors. Associations between subjective cognitive complaints and objective measures of performance show a significant impact on health-related quality of life for postmenopausal women. [ 5 ]
Treatment primarily involves symptom management through non-pharmacological treatment strategies. This includes involvement in physical activity and following medically supervised diets, especially those that contain phytoestrogens or resveratrol . [ 6 ] [ 7 ] [ 8 ] [ 9 ] Pharmacological interventions in treating postmenopausal confusion are currently being researched. Hormone replacement therapy (HRT) is currently not indicated for the treatment of postmenopausal confusion due to inefficacy. [ 10 ] [ 11 ] The use of HRT for approved indications has identified no significant negative effect on postmenopausal cognition. [ 10 ]
Although much of the literature references women, all people who undergo menopause, including those who do not self-identify as women, may experience symptoms of postmenopausal confusion.
Research on menopause as a whole declined with the end of the Women's Health Initiative (WHI) studies, but research on the treatment of symptoms associated with menopause—especially the treatment of cognitive decline—continues. The Study of Women's Health Across the Nation (SWAN), first started in 1996, continues to publish progress reports which include cognitive symptoms associated with menopausal transition, including those in postmenopause. [ 4 ] As of 2019, SWAN indicated, "Approximately 60% of midlife women report problems with memory during the [menopause transition], yet studies of measured cognitive performance during the transition are rare." [ 4 ]
Although there are many relationships between hormone levels in postmenopause and cognitive function, the previously favored hormone replacement therapies ( estrogen therapies) have been shown to be ineffective in specifically treating postmenopausal confusion. [ 12 ] [ 10 ] [ 13 ] The use of hormone replacement therapies, once considered detrimental to cognition in postmenopausal women, has now been shown to have no negative effect when used properly for approved indications. [ 10 ] [ 13 ] [ 14 ] There are no conclusive studies to support any pharmacological agents, but several potential drug candidates are still being explored.
Menopause is a natural decline in the ovarian function of women who reach the age between 45 and 54 years. "About 25 million women pass through menopause worldwide each year, and it has been estimated that, by the year 2030, the world population of menopausal and postmenopausal women will be 1.2 billion, with 47 million new entrants each year." [ 15 ]
Postmenopause begins immediately following menopause (one year after the final menstrual cycle ). [ 16 ] Postmenopausal confusion is often manifested through the following cognitive symptoms: memory problems, forgetfulness , and poor concentration. [ 17 ] Confusion which is otherwise unexplained and coincides with the onset of postmenopause may be postmenopausal confusion. [ 18 ]
A 2019 literature review identified hypertension and history of pre-eclampsia as significant risk factors for the accelerated decline of cognitive function in women during midlife. Although the mechanism remains unclear, neuroimaging studies included in the review found that those with hypertension have evident structural changes in their brains; specifically, gray matter brain volume decreased and white matter hyperintensity volume increased. [ 18 ]
Atherosclerosis and comorbidities such as hyperlipidemia and diabetes mellitus have long been considered risk factors for cognitive decline because they have the propensity to cause the formation of amyloid plaques (aggregates of misfolded , deleterious proteins) in the brain. [ 19 ]
Many postmenopausal women report insomnia . Studies have shown "associations between poor sleep quality and cognitive decline" in postmenopausal women as those with insufficient sleep, or with difficulty falling or staying asleep, reported decreased cognitive performance including "verbal memory, attention , and general cognition." [ 3 ]
There is evidence linking depression and cognitive decline in postmenopausal women. Research suggests that increased cortisol levels from depressive episodes may affect the hippocampus , the area of the brain responsible for episodic memory . Studies have also shown a correlation between depression and decreased cognitive performance including " processing speed , verbal memory , and working memory " in postmenopausal women. [ 3 ]
There are studies indicating a correlation between frequency of hot flashes in postmenopausal women and a deficit in verbal memory performance. It is suggested that faster blood flow in the brain or higher cortisol levels from hot flashes may cause changes in the brain and affect information processing and memory. [ 3 ]
A 2019 systematic review and meta-analysis identified surgical menopause , especially when performed at or before the age of 45, as a substantial risk factor for cognitive decline and dementia . [ 20 ]
Cardiac procedures such as invasive cerebral and coronary angiography , coronary artery bypass graft surgery (CABG), surgical aortic valve replacement , and transcatheter aortic valve replacement (TAVR) have been found to increase the risk of cognitive decline in females, as they have been found to increase the incidence of brain lesions . [ 19 ]
The mechanism of postmenopausal confusion is poorly understood due to simultaneous aging-related physiological changes, as well as differential diagnoses presenting with similar symptoms. [ 2 ] Research remains ongoing.
There are pharmacological and non-pharmacological considerations in improving the symptoms of postmenopausal confusion. Currently, no pharmacological agents are indicated to treat postmenopausal confusion, but research remains ongoing. [ citation needed ] Non-pharmacological strategies to manage postmenopausal confusion symptoms are utilized, with focus on diet and exercise.
Hormone therapy , also known as estrogen therapy, was previously a common treatment for postmenopausal confusion. However, more recent research indicates that hormone therapy is not an effective treatment for postmenopausal cognitive symptoms. [ 10 ] [ 11 ] A 2008 Cochrane review of 16 trials concluded that there is a body of evidence suggesting that hormone replacement therapy is unable to prevent cognitive decline or maintain cognitive function in healthy postmenopausal women when given over a short or long period of time. [ 11 ] Conversely, studies have also suggested that the use of hormone replacement therapy is unlikely to have negative cognitive effects when used for its approved indications. [ 10 ]
Previous research suggested that hormone therapy increased blood flow to the hippocampus and temporal lobe , improving postmenopausal confusion symptoms. [ 21 ] More recent research no longer supports this and is inconclusive as to the true effects of estrogen on hippocampal volume: some studies report improved cognition and maintained hippocampal volume when hormone therapy is administered during menopause, while others show no obvious benefit. [ 22 ]
Research focusing on adiponectin (ADPN) has yielded positive results in the development of possible treatments for postmenopausal confusion. A study has shown an association between higher levels of ADPN and increased cognitive performance in postmenopausal women. However, an ADPN receptor agonist has yet to be discovered. [ 23 ]
There is ongoing research regarding the efficacy of psychostimulant drugs such as lisdexamphetamine (Vyvanse) and atomoxetine (Strattera) in treating postmenopausal and menopausal confusion. [ 24 ] [ 25 ]
Individuals play an important role in maintaining their cognitive health. One way to achieve this is by the promotion of healthy nutrition. In particular, the Mediterranean diet , defined as being low in saturated fat and high in vegetable oils, showed improvement in aspects of cognitive function. This diet consists of low intake of sweets and eggs, moderate intakes of meat and fish, dairy products and red wine, and high intake of leafy green vegetables , pulses/legumes and nuts, fruits, cereal, and cold pressed extra virgin olive oil. [ 7 ] Further analysis concluded that the Mediterranean diet supplemented by olive oil resulted in better cognition and memory as compared to the Mediterranean diet plus mixed nuts combination. [ 26 ]
Soy isoflavones (SIF), a type of phytoestrogen found in soybeans, fruits and nuts, have been shown to improve cognitive outcomes in women who are less than 10 years postmenopausal. This suggests that the initiation of SIF may have a critical window of opportunity when used at a younger age in postmenopausal women. In addition to improved cognitive functions and visual memory, no evidence of harm from SIF supplementation was revealed within the dose ranges tested in multiple trials. [ 8 ]
Analyses of multiple randomized controlled trials have brought attention to black cohosh and red clover (which contain phytoestrogen ) and their potential as efficacious treatments of menopausal symptoms. Black cohosh did not reveal any evidence of risk of harm, but the lack of good evidence means its safety cannot be firmly concluded. [ 27 ] Overall, the results suggested that neither botanical treatment provided any cognitive benefits. [ 28 ]
Resveratrol , another bioactive compound derived from plants, has also been shown to improve cognitive performance in postmenopausal women. There are ongoing trials studying the cognitive benefits of resveratrol in early versus late postmenopausal women. [ 9 ]
Chronic ginkgo biloba supplementation has been shown to improve "mental flexibility" in "older and more cognitively impaired" postmenopausal women. However, a combined ginkgo biloba and ginger supplementation had no effect on memory or cognitive performance in postmenopausal women. [ 29 ]
Dehydroepiandrosterone ( DHEA ) supplementation may improve cognition in women with postmenopausal confusion but does not benefit those without cognitive impairment. [ 30 ] More long-term studies are required to study the efficacy of DHEA and its role in cognition and postmenopausal women. [ 31 ]
Regular physical exercise may prevent symptoms of postmenopausal confusion. Studies have shown an association between exercise and "lower rates of cognitive decline" in postmenopausal women. On the other hand, an inactive lifestyle has been strongly associated with "higher rates of cognitive decline" in postmenopausal women. [ 6 ]
Studies have shown benefits of mind-body therapies in women with postmenopausal symptoms including cognitive impairment. Mindfulness , hypnosis , and yoga may help decrease symptoms of insomnia, depression, or hot flashes in postmenopausal women which leads to better cognitive performance. [ 3 ] | https://en.wikipedia.org/wiki/Postmenopausal_confusion |
Postmortem studies are a type of neurobiological research which provides information to researchers and to individuals who will have to make medical decisions in the future. [ 1 ] Postmortem researchers conduct a longitudinal study of an individual who has some sort of phenomenological condition (e.g. inability to speak, trouble moving the left side of the body, Alzheimer's ), and whose brain is examined after death. Researchers look at certain lesions in the brain that could have an influence on cognitive or motor functions. [ 2 ] These irregularities, damage, or other cerebral anomalies observed in the brain are attributed to an individual's pathophysiology and their environmental surroundings. [ 3 ] Postmortem studies provide a unique opportunity for researchers to study brain attributes that cannot be studied in a living person. [ 4 ]
Postmortem studies allow researchers to determine the causes of and cures for certain diseases and to localize certain functions. [ 4 ] It is critical for researchers to develop hypotheses in order to discover the characteristics that are meaningful to a particular disorder. [ 3 ] The results the researcher discovers from the study help trace specific behaviors to locations in the brain. [ 2 ]
When tissue from a postmortem study is obtained, it is imperative that the researcher ensures the quality is adequate for study. This is especially important when an individual is researching gene expression (i.e. DNA , RNA , and proteins). Some key ways researchers monitor the quality are by determining the pain level/time of death of the individual, the pH of the tissue, the refrigeration time and temperature of storage, the time until the brain tissue is frozen, and the thawing conditions. Researchers also record specific information about the individual's life, such as age, sex, legal or illegal substance use, and an analysis of the individual's treatment. [ 4 ] [ 5 ]
Postmortem studies have been used to further the understanding of the brain for centuries. Before the time of the MRI , CAT scan , or X-ray , they were among the few ways to study the relation between behavior and the brain.
Paul Broca used postmortem studies to link a specific area of the brain with speech production.
His research began when he noticed that a patient with an aphasic stroke had lesions in the left hemisphere of his brain. His research and theory continued over time.
The most notable of his research subjects was Tan (named for the only syllable he could utter). Tan had lesions in his brain caused by syphilis. These lesions were determined to cover the area of his brain that was important for speech production.
The area of the brain that Broca identified is now known as Broca's area ; damage to this section of the brain can lead to Expressive aphasia .
Karl Wernicke also used postmortem studies to link specific areas of the brain with speech production. However, his research focused more on patients who could speak but whose speech made little sense, or who had trouble understanding spoken words or sentences.
His research on language comprehension and the brain found that comprehension, too, is localized in the left hemisphere, but in a different section. This area is known as Wernicke's area ; damage to this section can lead to Receptive aphasia .
Postmortem studies allow researchers to provide information that is relevant to individuals by explaining the causes of particular diseases and behaviors. This is in the hope that others can avoid some of these experiences in the future. Postmortem studies also improve medical knowledge and help to determine whether changes happen in the brain itself or in the actual disorder. By doing this, researchers are then able to help prioritize experimental studies and integrate the studies into animal and cell research. Another benefit of postmortem studies is that researchers have the ability to make a wide range of discoveries, because of the many different techniques used to obtain tissue samples. Postmortem studies are extremely important and unique despite their limitations. [ 6 ]
Postmortem brain samples are limited resources, because it is extremely difficult for a researcher to obtain an individual's brain. Researchers ask their participants or the families to consent to the study of their loved one's brain, but consent rates have been falling in recent years. [ 1 ] Consequently, researchers have to use indirect methods to study the locations and processes of the brain. [ 5 ] Another limitation of postmortem studies is the need for continuous funding and the time it takes to conduct a longitudinal study. Postmortem longitudinal studies usually run from the time of assessment until the time of death, about 20–30 years. [ 4 ] [ 6 ] | https://en.wikipedia.org/wiki/Postmortem_studies
Postnormal times ( PNT ) is a concept developed by Ziauddin Sardar as a development of post-normal science . Sardar describes the present as "postnormal times", "in an in-between period where old orthodoxies are dying, new ones have yet to be born, and very few things seem to make sense." [ 1 ]
In support of engaging communities of various scope and scale on how to best navigate PNT and imagine preferred pathways toward the future(s), Sardar and Sweeney published an article in the journal Futures outlining The Three Tomorrows method, which fills a gap in the field as "many methods of futures and foresight seldom incorporate pluralism and diversity intrinsically in their frameworks, and few, if any, emphasize the dynamic and merging nature of futures possibilities, or highlight the ignorance and uncertainties we constantly confront". [ 2 ]
Rakesh Kapoor criticized PNT in 2011 as a Western concept that does not apply to India and other emerging markets . [ 3 ] Sam Cole criticised the three Cs of PNT (chaos, complexity and contradictions) as "Alliterative Logic, theorizing through alliterative word-triads that is not based on empirical evidence". [ 4 ] Jay Gary has suggested that PNT is embryonic, needs a more robust framework, and should be extended to include C S Holling's adaptive cycle. [ 5 ] Scientists working on complex evolving systems have pointed out that PNT recalls the "Long Waves" of Kondratiev and Joseph Schumpeter 's view of waves of " creative destruction ". [ 6 ]
PNT is one of the core areas of research for the Center for Postnormal Policy and Futures Studies at East-West University in Chicago , Illinois, US. A number of articles and editorials on PNT have been published in the journal East-West Affairs .
| https://en.wikipedia.org/wiki/Postnormal_times
Postprandial somnolence (colloquially known as food coma , after-meal dip , or "the itis" ) is a normal state of drowsiness or lassitude following a meal. Postprandial somnolence has two components: a general state of low energy related to activation of the parasympathetic nervous system in response to mass in the gastrointestinal tract , and a specific state of sleepiness. [ 1 ] [ medical citation needed ] While there are numerous theories surrounding this behavior, such as decreased blood flow to the brain, neurohormonal modulation of sleep through digestive coupled signaling, or vagal stimulation, very few have been explicitly tested. To date, human studies have loosely examined the behavioral characteristics of postprandial sleep, demonstrating potential shifts in EEG spectra and self-reported sleepiness. [ 2 ] To date, the only clear animal models for examining the genetic and neuronal basis for this behavior are the fruit fly, the mouse, and the nematode Caenorhabditis elegans . [ 3 ] [ 4 ] [ 5 ]
The exact cause of postprandial somnolence is unknown, but there are some scientific hypotheses:
Increases in glucose concentration excite and induce vasodilation in ventrolateral preoptic nucleus neurons of the hypothalamus via astrocytic release of adenosine that is blocked by A2A receptor antagonists like caffeine . [ 4 ] Evidence also suggests that the small rise in blood glucose that occurs after a meal is sensed by glucose-inhibited neurons in the lateral hypothalamus . [ 6 ] These orexin -expressing neurons appear to be hyperpolarised (inhibited) by a glucose-activated potassium channel . This inhibition is hypothesized to then reduce output from orexigenic neurons to aminergic , cholinergic , and glutamatergic arousal pathways of the brain, thus decreasing the activity of those pathways. [ 7 ]
In response to the arrival of food in the stomach and small intestine, the activity of the parasympathetic nervous system increases and the activity of the sympathetic nervous system decreases. [ 8 ] [ 9 ] This shift in the balance of autonomic tone towards the parasympathetic system results in a subjective state of low energy and a desire to be at rest, the opposite of the fight-or-flight state induced by high sympathetic tone. The larger the meal, the greater the shift in autonomic tone towards the parasympathetic system, regardless of the composition of the meal. [ citation needed ]
When foods with a high glycemic index are consumed, the carbohydrates in the food are more easily digested than those in low glycemic index foods. Hence, more glucose is available for absorption. This does not mean that glucose is absorbed more rapidly; once formed, glucose is absorbed at the same rate, but it is available in higher amounts because high glycemic index foods are more easily digested. In individuals with normal carbohydrate metabolism , insulin levels rise concordantly to drive glucose into the body's tissues and maintain blood glucose levels in the normal range. [ 10 ] Insulin stimulates the uptake of valine , leucine , and isoleucine into skeletal muscle , but not uptake of tryptophan . This lowers the ratio of these branched-chain amino acids in the bloodstream relative to tryptophan [ 11 ] [ 12 ] (an aromatic amino acid ), making tryptophan preferentially available to the large neutral amino acid transporter at the blood–brain barrier. [ 13 ] [ 12 ] Uptake of tryptophan by the brain thus increases. In the brain, tryptophan is converted to serotonin , [ 14 ] which is then converted to melatonin . Increased brain serotonin and melatonin levels result in sleepiness. [ 15 ] [ 16 ]
Insulin can also cause postprandial somnolence via another mechanism. Insulin increases the activity of Na/K ATPase, causing increased movement of potassium into cells from the extracellular fluid. [ 17 ] The large movement of potassium from the extracellular fluid can lead to a mild hypokalemic state. The effects of hypokalemia can include fatigue, muscle weakness, or paralysis. [ 18 ] The severity of the hypokalemic state can be evaluated using Fuller's Criteria. [ 19 ] Stage 1 is characterized by no symptoms but mild hypokalemia. Stage 2 is characterized with symptoms and mild hypokalemia. Stage 3 is characterized by only moderate to severe hypokalemia.
Cytokines are somnogenic and are likely key mediators of sleep responses to infection [ 20 ] and food. [ 21 ] Some pro-inflammatory cytokines correlate with daytime sleepiness. [ 22 ]
Although the passage of food into the gastrointestinal tract results in increased blood flow to the stomach and intestines, this is achieved by diversion of blood primarily from skeletal muscle tissue and by increasing the volume of blood pumped forward by the heart each minute. [ citation needed ] The flow of oxygen and blood to the brain is extremely tightly regulated by the circulatory system [ 23 ] and does not drop after a meal.
A common myth holds that turkey is especially high in tryptophan , [ 24 ] [ 25 ] [ 26 ] resulting in sleepiness after it is consumed, as may occur at the traditional meal of the North American holiday of Thanksgiving . However, the tryptophan content of turkey is comparable to chicken , beef , and other meats , [ 27 ] and does not result in higher blood tryptophan levels than other common foods. Certain foods, such as soybeans , sesame and sunflower seeds , and certain cheeses, are also high in tryptophan. Whether these may induce sleepiness if consumed in sufficient quantities has yet to be studied. [ medical citation needed ]
A 2015 study, reported in the journal Ergonomics , showed that, for twenty healthy subjects, exposure to blue-enriched light during the post-lunch dip period significantly reduced the EEG alpha activity , and increased task performance. [ 28 ] | https://en.wikipedia.org/wiki/Postprandial_somnolence |
In computational chemistry , post–Hartree–Fock [ 1 ] [ 2 ] ( post-HF ) methods are the set of methods developed to improve on the Hartree–Fock (HF), or self-consistent field (SCF), method. They add electron correlation , which is a more accurate way of including the repulsions between electrons than in the Hartree–Fock method, where repulsions are only averaged.
In general, the SCF procedure makes several assumptions about the nature of the multi-body Schrödinger equation and its set of solutions:
For the great majority of systems under study, in particular for excited states and processes such as molecular dissociation reactions, the fourth item is by far the most important. As a result, the term post–Hartree–Fock method is typically used for methods of approximating the electron correlation of a system.
Usually, post–Hartree–Fock methods [ 3 ] [ 4 ] give more accurate results than Hartree–Fock calculations, although the added accuracy comes with the price of added computational cost.
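As an illustration (a standard textbook example assumed here, not one named in the article above), the simplest widely used post–Hartree–Fock correction is second-order Møller–Plesset perturbation theory (MP2), which adds to the Hartree–Fock energy the correlation term

\[
E^{(2)} \;=\; \sum_{i<j}^{\mathrm{occ}} \, \sum_{a<b}^{\mathrm{virt}}
\frac{\left|\langle ij \,\|\, ab\rangle\right|^{2}}{\varepsilon_i + \varepsilon_j - \varepsilon_a - \varepsilon_b},
\]

where i, j run over occupied and a, b over virtual Hartree–Fock spin orbitals, ⟨ij‖ab⟩ are antisymmetrized two-electron integrals, and the ε are the corresponding orbital energies; the denominator is negative for a ground state, so the correction lowers the energy.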
Methods that use more than one determinant are not strictly post–Hartree–Fock methods, as they use a single determinant as reference, but they often use similar perturbation, or configuration interaction methods to improve the description of electron correlation. These methods include: | https://en.wikipedia.org/wiki/Post–Hartree–Fock |
A pot still is a type of distillation apparatus or still used to distill liquors such as whisky or brandy . In modern (post-1850s) practice, they are not used to produce rectified spirit , because they do not separate congeners from ethanol as effectively as other distillation methods. Pot stills operate on a batch distillation basis (in contrast to column stills , which operate on a continuous basis). Traditionally constructed from copper , pot stills are made in a range of shapes and sizes depending on the quantity and style of spirit desired.
Spirits distilled in pot stills top out between 60 and 80 percent alcohol by volume (ABV) after multiple distillations. [ citation needed ] Because of this relatively low level of ABV concentration, spirits produced by a pot still retain more of the flavour from the wash than distillation practices that reach higher ethanol concentrations.
Under European law and various trade agreements, cognac (a protected term for a variety of brandy produced in the region around Cognac, France ) and any Irish or Scotch whisky labelled as "pot still whisky" or " malt whisky " must be distilled using a pot still. [ 1 ] [ 2 ] [ 3 ]
During first distillation, the pot still (or "wash still") is filled about two-thirds full of a fermented liquid (or wash ) with an alcohol content of about 7–12%. [ 4 ] [ 5 ] [ 6 ] In the case of whisky distillation, the liquid used is a beer, while in the case of brandy production, it is a base wine . The pot still is then heated so that the liquid boils.
The liquid being distilled is a mixture of mainly water and alcohol, along with smaller amounts of other by-products of fermentation (called congeners ), such as aldehydes and esters. [ 5 ] At sea level , alcohol ( ethanol ) has a normal boiling point of 78.4 °C (173.1 °F) while pure water boils at 100 °C (212 °F). [ 7 ] As alcohol has a lower boiling point, it is more volatile and evaporates at a higher rate than water. Hence the concentration of alcohol in the vapour phase above the liquid is higher than in the liquid itself.
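To make the volatility argument concrete, here is a minimal Python sketch (not from the article) that estimates the ethanol mole fraction in the vapour above a boiling wash using Raoult's law with Antoine vapour-pressure correlations. The ideal-solution assumption is a rough one for ethanol and water, which behave non-ideally and form an azeotrope, and the Antoine constants are typical published values assumed here for illustration.

```python
# Illustrative sketch only: ideal-solution (Raoult's law) estimate of how much
# richer in ethanol the vapour above a boiling wash is than the wash itself.
# Real ethanol-water mixtures are non-ideal (and form an azeotrope), so this
# understates the enrichment; the Antoine constants are assumed typical values.

def p_sat_mmHg(A: float, B: float, C: float, t_celsius: float) -> float:
    """Antoine equation: saturation vapour pressure in mmHg at t_celsius."""
    return 10 ** (A - B / (C + t_celsius))

# Antoine constants (pressure in mmHg, temperature in degrees Celsius).
ETHANOL = (8.20417, 1642.89, 230.300)
WATER = (8.07131, 1730.63, 233.426)

def vapour_ethanol_fraction(x_ethanol: float, t_celsius: float) -> float:
    """Mole fraction of ethanol in the vapour above a liquid with ethanol mole
    fraction x_ethanol at t_celsius, assuming Raoult's law (ideal solution)."""
    p_eth = x_ethanol * p_sat_mmHg(*ETHANOL, t_celsius)
    p_wat = (1.0 - x_ethanol) * p_sat_mmHg(*WATER, t_celsius)
    return p_eth / (p_eth + p_wat)

# A wash of roughly 10% ABV is only a few mole percent ethanol; even under this
# idealization the vapour is noticeably enriched (real behaviour enriches more).
print(vapour_ethanol_fraction(0.035, 93.0))
```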
During distillation, this vapour travels up the swan neck at the top of the pot still and down the lyne arm , after which it travels through the condenser (also known as the worm ), where it is cooled to yield a distillate with a higher concentration of alcohol than the original liquid. [ 5 ] After one such stage of distillation, the resulting liquid, called "low wines", has a concentration of about 25–35% alcohol by volume.
These low wines can be distilled again in a pot still to yield a distillate with a higher concentration of alcohol. [ 6 ] In the case of many Irish whiskeys , the spirit is distilled for a third time. However, cognac and most single malt Scotch whiskies are distilled only twice.
A still used for the redistillation of already-distilled products (especially in the United States) is known as a doubler – named after its approximate effect on the level of the distillation purity. [ 8 ] [ 9 ] Distillers from the early 1800s with sufficient resources to operate both a primary still and a separate doubler would typically use a smaller still for the doubler (typically about half the capacity) than for the first distillation. [ 9 ]
An alternative way to reach an increased distillation purity without a full second stage of distillation is to put another pot (often a passive pot – i.e., without an external heat source) between the pot still and the cooling worm. Such a pot is known as a thumper – named after the sound made by the vapour as it bubbles through a pool of liquid in the thumper. [ 10 ] The distinction between a thumper and a doubler is that a thumper receives its input as a vapour prior to cooling, while the intake of a doubler is an already-condensed liquid. [ 8 ] [ 10 ]
During distillation, the initial and final portions of spirit which condense (with the first portions termed the foreshots and heads and the final parts called the tails and feints ) may be captured separately from that in the centre or "heart" of the distillation and may be discarded. This is because these portions of the distillate may contain high concentrations of congeners (which it may be desirable to keep out of the final distillate for reasons of style, taste and toxicity). For example, the presence of pectin in the wash (e.g., due to using a mash made from fruit) may result in the production of methanol (a.k.a. wood alcohol), which has a lower boiling point than ethanol and thus would be more concentrated in the foreshots. Methanol is toxic and at sufficient concentrations, it can cause blindness and fatal kidney failure. It is especially important to discard the initial foreshots, while a small amount of the near-centre heads and tails are often included in the final product for their effect on the flavour. [ 11 ] [ 12 ]
The modern pot still is a descendant of the alembic , an earlier distillation device.
The largest pot still ever used was in the Old Midleton Distillery , County Cork , Ireland. [ 13 ] [ 14 ] Constructed in 1825, it had a capacity of 143,740 litres (31,618 imp gal) and is no longer in use. As of 2014 the largest pot stills in use are in the neighbouring New Midleton Distillery , County Cork, Ireland, and have a capacity of 75,000 L (16,000 imp gal). [ 15 ]
Components of a traditional pot still: [ 7 ] | https://en.wikipedia.org/wiki/Pot_still |
Potamal is a technical term used in limnology and hydrology for the lower stretches of a stream or river . It describes the overall habitat, stability and ecology of the biomass.
| https://en.wikipedia.org/wiki/Potamal
Potamology (from Ancient Greek : ποταμός - river, Ancient Greek : λόγος - science) is the study of rivers, a branch of hydrology . Its subjects of study are the hydrological processes of rivers , the morphometry of river basins , the structure of river networks, channel processes, the regime of river mouth areas, evaporation and infiltration of water in a river basin, the water, thermal and ice regimes of rivers, the sediment regime, the sources and types of river feeding, and various chemical and physical processes in rivers.
| https://en.wikipedia.org/wiki/Potamology
Potash ( / ˈ p ɒ t æ ʃ / POT -ash ) includes various mined and manufactured salts that contain potassium in water- soluble form. [ 1 ] The name derives from pot ash , plant ashes or wood ash soaked in water in a pot, the primary means of manufacturing potash before the Industrial Era . The word potassium is derived from potash . [ 2 ]
Potash is produced worldwide in amounts exceeding 71.9 million tonnes (~45.4 million tonnes K 2 O equivalent [ 5 ] ) per year as of 2021, with Canada being the largest producer, mostly for use in fertilizer . [ 6 ] Various kinds of fertilizer-potash constitute the single greatest industrial use of the element potassium in the world. Potassium was first derived in 1807 by electrolysis of caustic potash ( potassium hydroxide ). [ 7 ]
Potash refers to potassium compounds and potassium-bearing materials, most commonly potassium carbonate. The word "potash" originates from the Middle Dutch potaschen , denoting "pot ashes" in 1477. [ 8 ] The old method of making potassium carbonate ( K 2 CO 3 ) was by collecting or producing wood ash (the occupation of ash burners ), leaching the ashes, and then evaporating the resulting solution in large iron pots, which left a white residue denominated "pot ash". [ 9 ] Approximately 10% by weight of common wood ash can be recovered as potash. [ 10 ] [ 11 ] Later, "potash" became widely applied to naturally occurring minerals that contained potassium salts and the commercial product derived from them. [ 12 ]
The following table lists a number of potassium compounds that have "potash" in their traditional names:
Since the late 19th century, "potash" in the fertilizer sense has referred to one or more of potassium chloride (KCl), potassium sulfate (K 2 SO 4 ) or potassium nitrate (KNO 3 ). [ 13 ] [ 14 ] Fertilizer potash does not contain potassium oxide (K 2 O), which plants do not take up; [ 15 ] however, the amount of potassium is often reported as K 2 O equivalent (that is, how much it would be if in K 2 O form) to allow apples-to-apples comparison between fertilizers using different types of potash.
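As a concrete illustration of the K 2 O-equivalent convention (not taken from the article; the conversion factors are standard molar-mass ratios, and the KCl factor of about 0.63 is roughly consistent with the production figures quoted earlier, 71.9 Mt of potash versus ~45.4 Mt K 2 O equivalent), a minimal Python sketch:

```python
# Illustrative sketch: converting a tonnage of a potash salt to its K2O
# equivalent. Factor = (mass fraction K in the salt) / (mass fraction K in
# K2O, ~0.830); e.g. KCl is ~52.4% K by mass, giving a factor near 0.63.
K2O_FACTOR = {
    "KCl": 0.632,    # potassium chloride (muriate of potash)
    "K2SO4": 0.540,  # potassium sulfate
    "KNO3": 0.465,   # potassium nitrate
}

def k2o_equivalent(tonnes: float, salt: str) -> float:
    """Tonnes of K2O equivalent contained in `tonnes` of the given salt."""
    return tonnes * K2O_FACTOR[salt]

# Example: 1,000,000 t of KCl is roughly 632,000 t K2O equivalent.
print(k2o_equivalent(1_000_000, "KCl"))
```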
Most of the world reserves of potassium (K) were deposited as sea water in ancient inland oceans . After the water evaporated, the potassium salts crystallized into beds of potash ore. These are the locations where potash is being mined today. The deposits are a naturally occurring mixture of potassium chloride (KCl) and sodium chloride (NaCl), more commonly known as table salt . Over time, as the surface of the earth changed, these deposits were covered by thousands of feet of earth. [ 16 ]
Potash (especially potassium carbonate) has been used since the Bronze Age in bleaching textiles and in making glass , ceramics , and soap . [ 17 ] Potash was principally obtained by leaching the ashes of wood burned for heating and cooking.
Beginning in the 14th century potash was mined in Ethiopia . One of the world's largest deposits, 140 to 150 million tons, is located in the Dallol area of the Afar Region . [ 18 ]
Potash was one of the most important industrial chemicals. It was refined from the ashes of broadleaved trees and produced primarily in the forested areas of Europe, Russia , and North America . Although methods for producing artificial alkalis were invented in the late 18th century, these did not become economical until the late 19th century and so the dependence on organic sources of potash remained.
Potash became an important international trade commodity in Europe from at least the early 14th century. It is estimated that European imports of potash required 6 million or more cubic metres of wood each year from the early 17th century. [ 19 ] Between 1420 and 1620, the primary exporting cities for wood-derived potash were Gdańsk , Königsberg and Riga . In the late 15th century, London was the lead importer due to its position as the centre of soft soap making, while the Dutch dominated as suppliers and consumers in the 16th century. [ 19 ] From the 1640s, geopolitical disruptions (e.g. the Russo-Polish War (1654–1667) ) meant that the centres of export moved from the Baltic to Archangelsk , Russia. In 1700, Russian ash was dominant, though Gdańsk remained notable for the quality of its potash.
On the Orkney islands, kelp ash provided potash and soda ash , production starting "possibly as early as 1719" and lasting for a century. The products were "eagerly sought after by the glass and soap industries of the time." [ 20 ]
By the 18th century, higher quality American potash was increasingly exported to Britain. In the late 18th and early 19th centuries, potash production provided settlers in North America badly needed cash and credit as they cleared wooded land for crops. To make full use of their land, settlers needed to dispose of excess wood. The easiest way to accomplish this was to burn any wood not needed for fuel or construction. Ashes from hardwood trees could then be used to make lye , which could either be used to make soap or boiled down to produce valuable potash. Hardwood could generate ashes at the rate of 60 to 100 bushels per acre (500 to 900 m 3 /km 2 ). In 1790, the sale of ashes could generate $3.25 to $6.25 per acre ($800 to $1,500/km 2 ) in rural New York State – nearly the same rate as hiring a laborer to clear the same area. Potash making became a major industry in British North America. Great Britain was always the most important market. The American potash industry followed the woodsman's ax across the country.
The first US patent of any kind was issued in 1790 to Samuel Hopkins for an improvement "in the making of Pot ash and Pearl ash by a new Apparatus and Process". [ 21 ] Pearl ash was a purer quality made by calcination of potash in a reverberatory furnace or kiln. Potash pits were once used in England to produce potash that was used in making soap for the preparation of wool for yarn production.
After about 1820, New York replaced New England as the most important source; by 1840 the center was in Ohio. Potash production was always a by-product industry, following from the need to clear land for agriculture . [ 16 ]
From 1767, potash from wood ashes was exported from Canada. By 1811, 70% of the total 19.6 million lbs of potash imports to Britain came from Canada. [ 19 ] Exports of potash and pearl ash reached 43,958 barrels in 1865. There were 519 asheries in operation in 1871.
The wood-ash industry declined in the late 19th century when large-scale production of potash from mineral salts was established in Germany . In the early 20th century, the potash industry was dominated by a cartel in which Germany had the dominant role. [ 22 ] : 147 WWI saw a brief resurgence of American asheries, with their product typically consisting of 66% hydroxide, 17% carbonate, 16% sulfate and other impurities. [ 23 ] Later in the century, the cartel ended as new potash producers emerged in the USSR and Canada. [ 22 ] : 147
In 1943, potash was discovered in Saskatchewan , Canada, during oil drilling. Active exploration began in 1951. In 1958, the Potash Company of America became the first potash producer in Canada with the commissioning of an underground potash mine at Patience Lake . [ 11 ] As numerous potash producers in Canada developed, the Saskatchewan government became increasingly involved in the industry, leading to the creation of Canpotex in the 1970s. [ 22 ] : 147
In 1964 the Canadian company Kalium Chemicals established the first potash mine using the solution process. The discovery was made during oil reserve exploration. The mine, developed near Regina, Saskatchewan, reached depths greater than 1,500 metres. It is now the Mosaic Corporation's Belle Plaine unit.
The USSR's potash production had largely been for domestic use and use in the Council for Mutual Economic Assistance countries. [ 22 ] : 147 After the dissolution of the USSR , Russian and Belarusian potash producers entered into direct competition with producers elsewhere in the world for the first time. [ 22 ] : 147
In the beginning of the 20th century, potash deposits were found in the Dallol Depression in the Musely and Crescent localities near the Ethiopian–Eritrean border. The estimated reserves in Musely and Crescent are 173 and 12 million tonnes respectively. The latter is particularly suitable for surface mining. It was explored in the 1960s, but the works stopped due to flooding in 1967. Attempts to continue mining in the 1990s were halted by the Eritrean–Ethiopian War and have not resumed as of 2009. [ 24 ]
All commercial potash deposits come originally from evaporite deposits and are often buried deep below the earth's surface. Potash ores are typically rich in potassium chloride (KCl), sodium chloride (NaCl) and other salts and clays, and are typically obtained by conventional shaft mining with the extracted ore ground into a powder. [ 25 ] Most potash mines today are deep shaft mines as much as 4,400 feet (1,400 m) underground. Others are mined as strip mines, having been laid down in horizontal layers as sedimentary rock . In above-ground processing plants, the KCl is separated from the mixture to produce a high-analysis potassium fertilizer. Other potassium salts can be separated by various procedures, resulting in potassium sulfate and potassium-magnesium sulfate.
Other methods include dissolution mining and evaporation methods from brines. In the evaporation method, hot water is injected into the potash, which is dissolved and then pumped to the surface where it is concentrated by solar induced evaporation. Amine reagents are then added to either the mined or evaporated solutions. The amine coats the KCl but not NaCl. Air bubbles cling to the amine + KCl and float it to the surface while the NaCl and clay sink to the bottom. The surface is skimmed for the amine + KCl, which is then dried and packaged for use as a K rich fertilizer—KCl dissolves readily in water and is available quickly for plant nutrition . [ 26 ]
Recovery of potassium fertilizer salts from sea water has been studied in India . [ 27 ] During extraction of salt from seawater by evaporation, potassium salts get concentrated in bittern , an effluent from the salt industry.
Potash deposits are distributed unevenly throughout the world. [ 22 ] : 147 As of 2015, deposits are being mined in Canada, Russia, China, Belarus, Israel, Germany, Chile, the United States, Jordan, Spain, the United Kingdom, Uzbekistan and Brazil, [ 28 ] with the most significant deposits present under the great depths of the Prairie Evaporite Formation in Saskatchewan , Canada. [ 11 ] Canada and Russia are the countries where the bulk of potash is produced; Belarus is also a major producer. [ 22 ] : 12
The Permian Basin deposit includes the major mines outside of Carlsbad , New Mexico, as well as the world's purest potash deposit in Lea County, New Mexico (near the Carlsbad deposits), which is believed to be roughly 80% pure. ( Osceola County, Michigan , has deposits 90+% pure; the only mine there was converted to salt production, however.) Canada is the largest producer, followed by Russia and Belarus. The most significant reserve of Canada's potash is located in the province of Saskatchewan and is mined by The Mosaic Company , Nutrien and K+S . [ 1 ]
In China , most potash deposits are concentrated in the deserts and salt flats of the endorheic basins of its western provinces, particularly Qinghai . Geological expeditions discovered the reserves in the 1950s [ 29 ] but commercial exploitation lagged until Deng Xiaoping 's Reform and Opening Up Policy in the 1980s. The 1989 opening of the Qinghai Potash Fertilizer Factory in the remote Qarhan Playa increased China's production of potassium chloride sixfold, from less than 40,000 t (39,000 long tons; 44,000 short tons) a year at Haixi and Tanggu to just under 240,000 t (240,000 long tons; 260,000 short tons) a year. [ 30 ]
In 2013, almost 70% of potash production was controlled by Canpotex , an exporting and marketing firm, and the Belarusian Potash Company . The latter was a joint venture between Belaruskali and Uralkali , but on July 30, 2013, Uralkali announced that it had ended the venture. [ 31 ]
Potash is water soluble and transporting it requires special transportation infrastructure. [ 22 ] : 152
Excessive respiratory disease due to environmental hazards, such as radon and asbestos , has been a concern for potash miners throughout history. Potash miners are liable to develop silicosis . Based on a study conducted between 1977 and 1987 of cardiovascular disease among potash workers, the overall mortality rates were low, but a noticeable difference in above-ground workers was documented. [ 32 ]
Potassium is the third major plant and crop nutrient after nitrogen and phosphorus . It has been used since antiquity as a soil fertilizer (about 90% of current use). [ 10 ] Fertilizer use is the main driver behind potash consumption, especially for its use in fertilizing crops that contribute to high-protein diets. [ 22 ] : 23 As of at least 2010, more than 95% of potash is mined for use in agricultural purposes. [ 22 ] : 24
Elemental potassium does not occur in nature because it reacts violently with water. [ 34 ] As part of various compounds, potassium makes up about 2.6% of the Earth's crust by mass and is the seventh most abundant element, similar in abundance to sodium at approximately 1.8% of the crust. [ 35 ] Potash is important for agriculture because it improves water retention, yield, nutrient value, taste, color, texture [ 22 ] : 24 and disease resistance of food crops. It has wide application to fruit and vegetables, rice, wheat and other grains, sugar, corn, soybeans, palm oil and cotton, all of which benefit from the nutrient's quality-enhancing properties. [ 36 ]
Demand for food and animal feed has been on the rise since 2000. The United States Department of Agriculture 's Economic Research Service (ERS) attributes the trend to average annual population increases of 75 million people around the world. Geographically, economic growth in Asia and Latin America greatly contributed to the increased use of potash-based fertilizer. Rising incomes in developing countries also were a factor in the growing potash and fertilizer use. With more money in the household budget, consumers added more meat and dairy products to their diets. This shift in eating patterns required more acres to be planted, more fertilizer to be applied and more animals to be fed—all requiring more potash.
After years of trending upward, fertilizer use slowed in 2008. The worldwide economic downturn is the primary reason for the declining fertilizer use, dropping prices, and mounting inventories. [ 37 ] [ 38 ]
The world's largest consumers of potash are China, the United States, Brazil, and India. [ 39 ] Brazil imports 90% of the potash it needs. [ 39 ] Potash consumption for fertilizers is expected to increase to about 37.8 million tonnes by 2022. [ 40 ]
Potash imports and exports are often reported in K 2 O equivalent , although fertilizer never contains potassium oxide, per se, because potassium oxide is caustic and hygroscopic .
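As a rough rule of thumb (worked out here from the molar masses rather than quoted from the source), 1 tonne of KCl corresponds to about 0.63 tonnes of K 2 O equivalent, since the molar mass of K 2 O (about 94 g/mol) is roughly 63% of that of two formula units of KCl (about 149 g/mol).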
At the beginning of 2008, potash prices started a meteoric climb from less than US$200 a tonne to a high of US$875 in February 2009. [ 41 ] Prices subsequently dropped dramatically to an April 2010 low of about US$310, before recovering in 2011–12 and relapsing again in 2013. For reference, prices in November 2011 were about US$470 per tonne, but as of May 2013 were stable at US$393. [ 42 ] After the surprise breakup of the world's largest potash cartel at the end of July 2013, potash prices were poised to drop some 20 percent. [ 43 ] At the end of December 2015, potash traded for US$295 a tonne. In April 2016 its price was US$269. [ 44 ] In May 2017, prices had stabilised at around US$216 a tonne, down 18% from the previous year. By January 2018, prices had recovered to around US$225 a tonne. [ 45 ] World potash demand tends to be price inelastic in the short run and even in the long run. [ 40 ]
In addition to its use as a fertilizer, potassium chloride is important in many industrialized economies, where it is used in aluminium recycling , by the chloralkali industry to produce potassium hydroxide, in metal electroplating , in oil-well drilling fluid , in snow and ice melting, in steel heat-treating, in medicine as a treatment for hypokalemia , and in water softening . Potassium hydroxide is used for industrial water treatment and is the precursor of potassium carbonate, several forms of potassium phosphate and many other potassic chemicals, and is used in soap manufacturing. Potassium carbonate is used to produce animal feed supplements, cement , fire extinguishers , food products, photographic chemicals , and textiles. It is also used in brewing beer , in pharmaceutical preparations, and as a catalyst for synthetic rubber manufacturing. Potassium carbonate is also combined with silica sand to produce potassium silicate , sometimes known as waterglass , for use in paints and arc welding electrodes. These non-fertilizer uses have accounted for about 15% of annual potash consumption in the United States. [ 1 ]
No substitutes exist for potassium as an essential plant nutrient and as an essential nutritional requirement for animals and humans. [ 22 ] : 143 Manure and glauconite (greensand) are low-potassium-content sources that can be profitably transported only short distances to crop fields. [ 33 ] | https://en.wikipedia.org/wiki/Potash |
Potassium alum , potash alum , or potassium aluminium sulfate is a chemical compound defined as the double sulfate of potassium and aluminium , with chemical formula KAl(SO 4 ) 2 . It is commonly encountered as the dodecahydrate , KAl(SO 4 ) 2 ·12H 2 O. It crystallizes in an octahedral structure in neutral solution and cubic structure in an alkali solution with space group Pa 3 and lattice parameter of 12.18 Å. [ 5 ] The compound is the most important member of the generic class of compounds called alums , and is often called simply alum . [ 6 ]
Potassium alum is commonly used in water purification , leather tanning, dyeing , [ 7 ] fireproof textiles , and baking powder as E number E522 . It also has cosmetic uses as a deodorant, as an aftershave treatment and as a styptic for minor bleeding from shaving. [ 8 ] [ 9 ]
Historically, potassium alum was used extensively in the wool industry [ 10 ] from Classical antiquity , during the Middle Ages , and well into the 19th century as a mordant or dye fixative in the process of turning wool into dyed bolts of cloth . [ citation needed ]
Potassium alum was also known to the Ancient Egyptians , who obtained it from evaporites in the Western desert and reportedly used it as early as 1500 BCE to reduce the visible cloudiness ( turbidity ) in the water. [ citation needed ]
According to Martin Levey, an expert on the history of chemistry in the Middle East, potassium alum is one of the few compounds known to the ancients that can be found relatively pure in nature, as well as one of only a few chemicals used in Mesopotamian chemical technology that can be identified with certainty. [ 11 ] Both native and imported potassium alum were used. [ 11 ] Together with other agents, potassium alum was used in glass-making , tanning , and in the dyeing of cloth , wood, and possibly hair. [ 11 ] A tanning process using potassium alum is described in tablets from the first millennium BCE. [ 11 ] When Levey wrote his article in 1958, no description of the dyeing process had been found, so it is not known how potassium alum was used in it. In Mesopotamian medicine potassium alum was used extensively, for example against itch, jaundice , some eye conditions, and unidentified ailments. [ 11 ]
According to Levey, potassium alum was used in "classical times" as a flux when soldering copper , in the fireproofing of wood, and in the separation of silver and gold, but that there is no evidence that these uses existed in Mesopotamia. [ 11 ]
The production of potassium alum from alunite is archaeologically attested on the island Lesbos . [ 12 ] This site was abandoned in the 7th century but dates back at least to the 2nd century CE.
Potassium alum was described under the name alumen or salsugoterrae by Pliny , [ 13 ] and it is clearly the same as the stypteria (στυπτηρία) described by Dioscorides . [ 14 ] However, the name alum and other names applied to this substance — like misy , sory , chalcanthum , and atramentum sutorium — were often applied to other products with vaguely similar properties or uses, such as iron sulfate or "green vitriol". [ 15 ] [ full citation needed ]
Potassium alum is mentioned in Ayurvedic texts, namely the Charaka Samhita, Sushruta Samhita, and Ashtanga Hridaya, under names such as sphaṭika kṣāra , phitkari or saurashtri . It is used in traditional Chinese medicine under the name mingfan .
In the 13th and 14th centuries, alum (from alunite) was a major import from Phocaea ( Gulf of Smyrna in Byzantium) by Genoans and Venetians (and was a cause of war between Genoa and Venice ) and later by Florence . After the fall of Constantinople , alunite (the source of alum) was discovered at Tolfa in the Papal States (1461). The textile dyeing industry in Bruges , and many locations in Italy, and later in England, required alum to stabilize the dyes onto the fabric (make the dyes "fast") and also to brighten the colors. [ 16 ] [ 17 ]
Potassium alum was imported into England mainly from the Middle East , and, from the late 15th century onwards, the Papal States for hundreds of years. Its use there was as a dye -fixer ( mordant ) for wool (which was one of England's primary industries, the value of which increased significantly if dyed). [ citation needed ] These sources were unreliable, however, and there was a push to develop a source in England especially as imports from the Papal States ceased following the excommunication of Henry VIII . [ 18 ]
With state financing, attempts were made throughout the 16th century, but without success until the early 17th century. An industry was founded in Yorkshire to process the shale , which contained the key ingredient, aluminium sulfate , and made an important contribution to the Industrial Revolution . One of the oldest historic sites for the production of alum from shale and human urine is the Peak alum works in Ravenscar , North Yorkshire. By the 18th century, the landscape of northeast Yorkshire had been devastated by this process, which involved constructing 100-foot (30 m) stacks of burning shale and fuelling them with firewood continuously for months. The rest of the production process consisted of quarrying, extraction, steeping of shale ash with seaweed in urine, boiling, evaporating, crystallisation, milling and loading into sacks for export. Quarrying ate into the cliffs of the area, the forests were felled for charcoal and the land was polluted by sulfuric acid and ash. [ 19 ]
In the early 1700s, Georg Ernst Stahl claimed that reacting sulfuric acid with limestone produced a sort of alum. [ 20 ] [ 21 ] The error was soon corrected by Johann Pott and Andreas Marggraf , who showed that the precipitate obtained when an alkali is poured into a solution of alum, namely alumina , is quite different from lime and chalk , and is one of the ingredients in common clay . [ 22 ] [ 23 ]
Marggraf also showed that perfect crystals with properties of alum can be obtained by dissolving alumina in sulfuric acid and adding potash or ammonia to the concentrated solution. [ 24 ] [ 25 ] In 1767, Torbern Bergman observed the need for potassium or ammonium sulfates to convert aluminium sulfate into alum, while sodium or calcium would not work. [ 24 ] [ 26 ]
At the time, potassium ("potash") was believed to be found exclusively in plants. However, in 1797, Martin Klaproth discovered the presence of potassium in the minerals leucite and lepidolite . [ 27 ] [ 28 ]
Louis Vauquelin then conjectured that potassium was likewise an ingredient in many other minerals . Given Marggraf and Bergman's experiments, he suspected that this alkali constituted an essential ingredient of natural alum. In 1797 he published a dissertation demonstrating that alum is a double salt , composed of sulfuric acid, alumina, and potash. [ 29 ] In the same journal volume, Jean-Antoine Chaptal published the analysis of four different kinds of alum, namely, Roman alum, Levant alum, British alum and alum manufactured by himself, [ 30 ] confirming Vauquelin's results. [ 24 ]
Potassium alum crystallizes in regular octahedra with flattened corners and is very soluble in water. The solution is slightly acidic and is astringent to the taste. Neutralizing a solution of alum with potassium hydroxide will begin to cause the separation of alumina Al(OH) 3 . [ citation needed ]
When heated to nearly a red heat, it gives a porous, friable mass, which is known as "burnt alum". It fuses at 92 °C (198 °F) in its own water of crystallization . [ citation needed ]
Potassium alum dodecahydrate occurs in nature as a sulfate mineral called alum-(K) , typically as encrustations on rocks in areas of weathering and oxidation of sulfide minerals and potassium-bearing minerals. [ citation needed ]
In the past, potassium alum was obtained from alunite ( KAl(SO 4 ) 2 ·2Al(OH) 3 ), mined from sulfur-containing volcanic sediments. [ 31 ] Alunite is an associated mineral and a likely source of the potassium and aluminium. [ 1 ] [ 32 ] It has been reported at Vesuvius , Italy ; east of Springsure , Queensland ; in Alum Cave, Tennessee ; in Alum Gulch, Santa Cruz County, Arizona ; and on the Philippine island of Cebu .
In order to obtain alum from alunite , it is calcined and then exposed to the action of air for a considerable time. During this exposure it is kept continually moistened with water, so that it ultimately falls to a very fine powder. This powder is then lixiviated with hot water, the liquor decanted, and the alum allowed to crystallize. [ citation needed ]
The undecahydrate also occurs as the fibrous mineral kalinite ( KAl(SO 4 ) 2 ·11H 2 O ). [ 33 ]
Potassium alum historically was mainly extracted from alunite .
Potassium alum is now produced industrially by adding potassium sulfate to a concentrated solution of aluminium sulfate . [ 34 ] The aluminium sulfate is usually obtained by treating minerals like alum schist , bauxite and cryolite with sulfuric acid. [ 35 ] If much iron should be present in the sulfate then it is preferable to use potassium chloride in place of potassium sulfate. [ 35 ]
Potassium alum is used in medicine mainly as an astringent (or styptic ) and antiseptic .
Styptic pencils are rods composed of potassium alum or aluminum sulfate, used topically to reduce bleeding in minor cuts (especially from shaving ) and abrasions, nosebleeds , and hemorrhoids , and to relieve pain from stings and bites. [ citation needed ] Potassium alum blocks are rubbed over the wet skin after shaving. [ 9 ]
Potassium alum is also used topically to remove pimples and acne , and to cauterize aphthous ulcers in the mouth and canker sores , as it has a significant drying effect to the area and reduces the irritation felt at the site. [ 37 ] [ 38 ] It has been used to stop bleeding in cases of hemorrhagic cystitis [ 39 ] and is used in some countries as a cure for hyperhidrosis . [ citation needed ]
It is used in dentistry (especially in gingival retraction cords) because of its astringent and hemostatic properties. [ citation needed ]
Potassium and ammonium alum are the active ingredients in some antiperspirants and deodorants , acting by inhibiting the growth of the bacteria responsible for body odor . Alum's antiperspirant and antibacterial properties [ 40 ] [ 41 ] contribute to its traditional use as an underarm deodorant . [ 13 ] It has been used for this purpose in Europe, Mexico, Thailand (where it is called sarn-som ), throughout Asia and in the Philippines (where it is called tawas ). Today, potassium or ammonium alum is sold commercially for this purpose as a "deodorant crystal". [ 42 ] [ 43 ] [ 8 ] Beginning in 2005 the US Food and Drug Administration no longer recognized it as a wetness reducer, but it is still available and used in several other countries, primarily in Asia. [ citation needed ]
Potassium alum was the major immunologic adjuvant used to increase the efficacy of vaccines , and has been used since the 1920s. [ 44 ] But it has been almost completely replaced by aluminium hydroxide and aluminium phosphate in commercial vaccines. [ 45 ]
Alum may be used in depilatory waxes used for the removal of body hair or applied to freshly waxed skin as a soothing agent.
In the 1950s, men sporting crewcut or flattop hairstyles sometimes applied alum to their hair, as an alternative to pomade , to keep the hair standing up. [ citation needed ]
Potassium alum may be an acidic ingredient of baking powder to provide a second leavening phase at high temperatures (although sodium alum is more commonly used for that purpose). [ citation needed ] For example, potassium alum is frequently used in leavening of youtiao , a traditional Chinese fried bread, throughout China. [ 46 ]
Alum was used by bakers in England during the 1800s to make bread whiter. This was theorized by some, including John Snow , to cause rickets . [ 47 ] [ 48 ] The Sale of Food and Drugs Act 1875 ( 38 & 39 Vict. c. 63) prevented this and other adulterations. [ 49 ]
Potassium alum, under the name "alum powder", is found in the spice section of many grocery stores in the US . Its chief culinary use is in pickling recipes, to preserve and add crispness to fruit and vegetables. [ 50 ]
Potassium alum is used as a fire retardant to render cloth, wood, and paper materials less flammable. [ 34 ]
Potassium alum is used in leather tanning , [ 51 ] in order to remove moisture from the hide and prevent rotting. [ citation needed ] Unlike tannic acid , alum doesn't bind to the hide and can be washed out of it. [ citation needed ]
Alum has been used since antiquity as mordant to form a permanent bond between dye and natural textile fibers like wool . [ 52 ] It is also used for this purpose in paper marbling . [ 53 ]
Potassium alum has been used since remote antiquity for purification of turbid liquids. [ 54 ] It is still widely used in the purification of water for drinking and industrial processes water, treatment of effluents and post-storm treatment of lakes to precipitate contaminants. [ 55 ]
Between 30 and 40 ppm of alum [ 54 ] [ 56 ] for household wastewater, often more for industrial wastewater, [ 57 ] is added to the water so that the negatively charged colloidal particles clump together into " flocs ", which then float to the top of the liquid, settle to the bottom of the liquid, or can be more easily filtered from the liquid, prior to further filtration and disinfection of the water. [ 34 ] Like other similar salts, it works by neutralizing the electrical double layer surrounding very fine suspended particles, allowing them to join into flocs.
The same principle is exploited when using alum to increase the viscosity of a ceramic glaze suspension ; this makes the glaze more readily adherent and slows its rate of sedimentation . [ citation needed ]
Aluminum hydroxide from potassium alum serves as a base for the majority of lake pigments . [ 58 ]
Alum solution has the property of dissolving steels while not affecting aluminium or base metals . Alum solution can be used to dissolve steel tool bits that have become lodged in machined castings. [ 59 ] [ 60 ]
In traditional Japanese art , alum and animal glue were dissolved in water, forming a liquid known as dousa ( ja:礬水 ), and used as an undercoat for paper sizing . [ citation needed ]
Alum is an ingredient in some recipes for homemade modeling compounds, often called "play clay" or "play dough", intended for use by children. [ citation needed ]
Potassium alum was formerly used as a hardener for photographic emulsions (films and papers), usually as part of the fixer . It has now been replaced in that use by other chemicals.
Potassium alum may be a weak irritant to the skin. [ 61 ] | https://en.wikipedia.org/wiki/Potassium_alum |
Potassium amyl xanthate (/pəˈtæsiəm ˌæmɪl ˈzænθeɪt/) is an organosulfur compound with the chemical formula CH 3 (CH 2 ) 4 OCS 2 K. It is a pale yellow powder with a pungent odor that is soluble in water. It is widely used in the mining industry for the separation of ores using the flotation process .
As typical for xanthates , potassium amyl xanthate is prepared by reacting n - amyl alcohol with carbon disulfide and potassium hydroxide . [ 1 ]
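A balanced form of this preparation, as commonly written, is:
CH 3 (CH 2 ) 4 OH + CS 2 + KOH → CH 3 (CH 2 ) 4 OCS 2 K + H 2 O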
Potassium amyl xanthate is a pale yellow powder. Its solutions are relatively stable between pH 8 and 13 with a maximum of stability at pH 10. [ 2 ]
The LD 50 is 90-148 mg/kg (oral, rat). [ 4 ]
It is a biodegradable compound. | https://en.wikipedia.org/wiki/Potassium_amyl_xanthate |
Potassium azodicarboxylate is a chemical compound with the formula C 2 K 2 N 2 O 4 . This chemical is used as a precursor to diimide . It can be synthesized by the reaction of potassium hydroxide with azodicarbonamide and it reacts with carboxylic acids to form diimide. [ 1 ] | https://en.wikipedia.org/wiki/Potassium_azodicarboxylate |
Potassium bifluoride is the inorganic compound with the formula K[HF 2 ] . This colourless salt consists of the potassium cation ( K + ) and the bifluoride anion ( [HF 2 ] − ). The salt is used as an etchant for glass. Sodium bifluoride is related and is also of commercial use as an etchant as well as in cleaning products. [ 3 ]
The salt was prepared by Edmond Frémy by treating potassium carbonate or potassium hydroxide with hydrofluoric acid:
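Commonly written balanced forms, for the carbonate and the hydroxide routes respectively, are:
K 2 CO 3 + 4 HF → 2 K[HF 2 ] + CO 2 + H 2 O
KOH + 2 HF → K[HF 2 ] + H 2 O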
With one more equivalent of HF, K[H 2 F 3 ] ( CAS RN 12178-06-2, m.p. 71.7 °C [ 4 ] ) is produced:
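In balanced form:
K[HF 2 ] + HF → K[H 2 F 3 ]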
Thermal decomposition of K[HF 2 ] gives hydrogen fluoride :
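In balanced form:
K[HF 2 ] → KF + HF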
The industrial production of fluorine entails the electrolysis of molten K[HF 2 ] and K[H 2 F 3 ] . [ 3 ] The electrolysis of K[HF 2 ] was first used by Henri Moissan in 1886. | https://en.wikipedia.org/wiki/Potassium_bifluoride |
Potassium bromate ( KBrO 3 ) is a bromate of potassium and takes the form of white crystals or powder. It is a strong oxidizing agent.
Potassium bromate is produced when bromine is passed through a hot solution of potassium hydroxide . This first forms unstable potassium hypobromite , which quickly disproportionates into bromide and bromate: [ 3 ]
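These steps are commonly balanced as:
Br 2 + 2 KOH → KBr + KBrO + H 2 O
3 KBrO → 2 KBr + KBrO 3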
Electrolysis of potassium bromide solutions will also give bromate. Both processes are analogous to those used in the production of chlorates . [ citation needed ]
Potassium bromate is readily separated from the potassium bromide present in both methods owing to its much lower solubility; when a solution containing potassium bromate and bromide is cooled to 0°C, nearly all bromate will precipitate, while nearly all of the bromide will stay in solution. [ 3 ]
As established by X-ray crystallography , the O-Br-O angles are 104.5°, consistent with the pyramidal shape of the anion. The Br-O distances are 1.66 Å. [ 1 ]
Potassium bromate is typically used in the United States as a flour improver ( E number E924). It acts to strengthen the dough and to allow higher rising. It is an oxidizing agent , and under the right conditions, is reduced to bromide in the baking process. [ 4 ] [ 5 ] However, if too much is added, or if the bread is not baked long enough or at a high enough temperature, then a residual amount remains, which may be harmful if consumed. [ 5 ]
Potassium bromate may be used in the production of malt barley, but under safety conditions prescribed by the U.S. Food and Drug Administration (FDA), including labeling standards for the finished product. [ 6 ] It is a powerful oxidizer (standard electrode potential E° = 1.5 V, similar to potassium permanganate ). [ citation needed ]
Potassium bromate is classified as a category 2B carcinogen by the IARC . [ 7 ] The FDA allowed the use of bromate before the Delaney clause of the Food, Drug, and Cosmetic Act – which bans potentially carcinogenic substances – went into effect in 1958. Since 1991, the FDA has urged bakers to not use it, but has not mandated a ban.
Japanese baked goods manufacturers stopped using potassium bromate voluntarily in 1980; however, Yamazaki Baking resumed its use in 2005, claiming it had new production methods to reduce the amount of the chemical which remained in the final product. [ 8 ]
Potassium bromate is banned from food products in the European Union, Argentina, Brazil, [ 9 ] Canada, Nigeria, South Korea, and Peru. It was banned in Sri Lanka in 2001, [ 10 ] China in 2005, [ 11 ] and India in 2016, [ 12 ] but it is allowed in most of the United States. As of May 2023, the U.S. state of New York is considering banning the use of potassium bromate. [ 13 ]
In California , a warning label is required when bromated flour is used. [ 14 ] In October 2023, California enacted a law that banned the manufacture, sale, and distribution of potassium bromate (along with three other additives: brominated vegetable oil , propylparaben , and Red 3 ). The law takes effect in 2027. It was the first U.S. state to ban it. [ 15 ] [ 16 ] [ 17 ] | https://en.wikipedia.org/wiki/Potassium_bromate |
Potassium carbonate is the inorganic compound with the formula K 2 CO 3 . It is a white salt , which is soluble in water and forms a strongly alkaline solution. It is deliquescent , often appearing as a damp or wet solid . Potassium carbonate is mainly used in the production of soap and glass . [ 3 ] Commonly, it can be found as the result of leakage of alkaline batteries . [ 4 ] Potassium carbonate is a potassium salt of carbonic acid . This salt consists of potassium cations K + and carbonate anions CO 2− 3 , and is therefore an alkali metal carbonate.
Potassium carbonate is the primary component of potash and the more refined pearl ash or salt of tartar. Historically, pearl ash was created by baking potash in a kiln to remove impurities. The fine, white powder remaining was the pearl ash. The first patent issued by the US Patent Office was awarded to Samuel Hopkins in 1790 for an improved method of making potash and pearl ash. [ 5 ]
In late 18th-century North America , before the development of baking powder , pearl ash was used as a leavening agent for quick breads . [ 6 ] [ 7 ]
The modern commercial production of potassium carbonate is by reaction of potassium hydroxide with carbon dioxide : [ 3 ]
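In its simplest balanced form:
2 KOH + CO 2 → K 2 CO 3 + H 2 O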
From the solution crystallizes the sesquihydrate K 2 CO 3 ·1.5H 2 O ("potash hydrate"). Heating this solid above 200 °C (392 °F) gives the anhydrous salt. In an alternative method, potassium chloride is treated with carbon dioxide in the presence of an organic amine to give potassium bicarbonate , which is then calcined : | https://en.wikipedia.org/wiki/Potassium_carbonate |
Potassium chlorochromate is an inorganic compound with the formula KCrO 3 Cl . [ 4 ] It is the potassium salt of chlorochromate, [CrO 3 Cl] − . It is a water-soluble orange compound that is used occasionally for the oxidation of organic compounds. It is sometimes called Péligot's salt , in recognition of its discoverer Eugène-Melchior Péligot .
Potassium chlorochromate was originally prepared by treating potassium dichromate with hydrochloric acid . An improved route involves the reaction of chromyl chloride and potassium chromate : [ 5 ]
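Balanced forms of the original and the improved routes, respectively, can be written as:
K 2 Cr 2 O 7 + 2 HCl → 2 KCrO 3 Cl + H 2 O
CrO 2 Cl 2 + K 2 CrO 4 → 2 KCrO 3 Cl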
The salt consists of the tetrahedral chlorochromate anion. The average Cr=O bond length is 159 pm, and the Cr-Cl distance is 219 pm. [ 6 ]
Although air-stable, its aqueous solutions undergo hydrolysis in the presence of strong acids. With concentrated hydrochloric acid, it converts to chromyl chloride , which in turn reacts with water to form chromic acid and additional hydrochloric acid. When treated with 18-crown-6 , it forms the lipophilic salt [K(18-crown-6)]CrO 3 Cl. [ 7 ]
Peligot's salt can oxidize benzyl alcohol , a reaction which can be catalyzed by acid . [ 8 ] A related salt, pyridinium chlorochromate , is more commonly used for this reaction.
Potassium chlorochromate is toxic upon ingestion , and may cause irritation, chemical burns , and even ulceration on contact with the skin or eyes. [ 9 ] Like other hexavalent chromium compounds, it is also carcinogenic and mutagenic. | https://en.wikipedia.org/wiki/Potassium_chlorochromate
Potassium chromate is the inorganic compound with the formula K 2 CrO 4 . This yellow solid is the potassium salt of the chromate anion. It is a common laboratory chemical, whereas sodium chromate is important industrially.
Two crystalline forms are known, both being very similar to the corresponding potassium sulfate. Orthorhombic β-K 2 CrO 4 is the common form, but it converts to an α-form above 666 °C. [ 1 ] These structures are complex, although the chromate ion adopts the typical tetrahedral geometry. [ 2 ]
It is prepared by treating potassium dichromate with potassium hydroxide :
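In balanced form:
K 2 Cr 2 O 7 + 2 KOH → 2 K 2 CrO 4 + H 2 O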
Or, the fusion of potassium hydroxide and chromium trioxide :
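In balanced form:
2 KOH + CrO 3 → K 2 CrO 4 + H 2 O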
In solution, the behavior of potassium and sodium dichromates is very similar. When treated with lead(II) nitrate, potassium chromate gives an orange-yellow precipitate, lead(II) chromate.
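The precipitation can be written as:
K 2 CrO 4 + Pb(NO 3 ) 2 → PbCrO 4 + 2 KNO 3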
Unlike the less expensive sodium salt, the potassium salt is mainly used for laboratory work in situations where an anhydrous salt is required. [ 1 ] It is used as an oxidizing agent in organic synthesis . It is used in qualitative inorganic analysis , e.g. as a colorimetric test for silver ion. It is also used as an indicator in precipitation titrations with silver nitrate and sodium chloride (each can serve as the standard or the titrant for the other), as potassium chromate turns red in the presence of excess silver ions.
Tarapacaite is the natural mineral form of potassium chromate. It occurs very rarely and is so far known from only a few localities in the Atacama Desert . [ citation needed ]
As with other Cr(VI) compounds, potassium chromate is carcinogenic . [ 3 ] The compound is also corrosive, and exposure may produce severe eye damage or blindness. [ 4 ] Human exposure has also been linked to impaired fertility, heritable genetic damage and harm to unborn children.
Potassium dichromate , K 2 Cr 2 O 7 , is a common inorganic chemical reagent, most commonly used as an oxidizing agent in various laboratory and industrial applications. As with all hexavalent chromium compounds, it is acutely and chronically harmful to health. It is a crystalline ionic solid with a very bright, red-orange color. The salt is popular in laboratories because it is not deliquescent , in contrast to the more industrially relevant salt sodium dichromate . [ 6 ]
Potassium dichromate is usually prepared by the reaction of sodium dichromate and potassium chloride . Alternatively, it can also be obtained from potassium chromate, which is produced by roasting chromite ore with potassium hydroxide . It is soluble in water, and in the dissolution process it ionizes:
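The salt metathesis and the subsequent dissolution can be written as:
Na 2 Cr 2 O 7 + 2 KCl → K 2 Cr 2 O 7 + 2 NaCl
K 2 Cr 2 O 7 → 2 K + + Cr 2 O 7 2−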
Potassium dichromate is an oxidising agent in organic chemistry , and is milder than potassium permanganate . It is used to oxidize alcohols . It converts primary alcohols into aldehydes and, under more forcing conditions, into carboxylic acids. In contrast, potassium permanganate tends to give carboxylic acids as the sole products. Secondary alcohols are converted into ketones . For example, menthone may be prepared by oxidation of menthol with acidified dichromate. [ 7 ] Tertiary alcohols cannot be oxidized.
In an aqueous solution the color change exhibited can be used to test for distinguishing aldehydes from ketones. Aldehydes reduce dichromate from the +6 to the +3 oxidation state , changing color from orange to green. This color change arises because the aldehyde can be oxidized to the corresponding carboxylic acid. A ketone will show no such change because it cannot be oxidized further, and so the solution will remain orange.
When heated strongly, it decomposes with the evolution of oxygen.
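A commonly written balanced form of this decomposition is:
4 K 2 Cr 2 O 7 → 4 K 2 CrO 4 + 2 Cr 2 O 3 + 3 O 2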
When an alkali is added to an orange-red solution containing dichromate ions, a yellow solution is obtained due to the formation of chromate ions ( CrO 2− 4 ). For example, potassium chromate is produced industrially using potash :
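In balanced form:
K 2 Cr 2 O 7 + K 2 CO 3 → 2 K 2 CrO 4 + CO 2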
The reaction is reversible.
Treatment with cold sulfuric acid gives red crystals of chromic anhydride (chromium trioxide, CrO 3 ):
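A balanced form consistent with this description (with potassium hydrogen sulfate as the potassium-containing product) is:
K 2 Cr 2 O 7 + 2 H 2 SO 4 → 2 CrO 3 + 2 KHSO 4 + H 2 O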
On heating with concentrated acid, oxygen is evolved:
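A commonly written balanced form is:
2 K 2 Cr 2 O 7 + 8 H 2 SO 4 → 2 K 2 SO 4 + 2 Cr 2 (SO 4 ) 3 + 8 H 2 O + 3 O 2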
Potassium dichromate has few major applications, as the sodium salt is dominant industrially. The main use is as a precursor to potassium chrome alum , used in leather tanning . [ 6 ] [ 8 ]
Like other chromium(VI) compounds ( chromium trioxide , sodium dichromate ), potassium dichromate has been used to prepare " chromic acid " for cleaning glassware and etching materials. Because of safety concerns associated with hexavalent chromium, this practice has been largely discontinued.
It is used as an ingredient in cement in which it retards the setting of the mixture and improves its density and texture. This usage commonly causes contact dermatitis in construction workers . [ 9 ]
In 1839, Mungo Ponton discovered that paper treated with a solution of potassium dichromate was visibly tanned by exposure to sunlight, the discoloration remaining after the potassium dichromate had been rinsed out. In 1852, Henry Fox Talbot discovered that exposure to ultraviolet light in the presence of potassium dichromate hardened organic colloids such as gelatin and gum arabic , making them less soluble.
These discoveries soon led to the carbon print , gum bichromate , and other photographic printing processes based on differential hardening. Typically, after exposure, the unhardened portion was rinsed away with warm water, leaving a thin relief that either contained a pigment included during manufacture or was subsequently stained with a dye. Some processes depended on the hardening only, in combination with the differential absorption of certain dyes by the hardened or unhardened areas. Because some of these processes allowed the use of highly stable dyes and pigments, such as carbon black , prints with an extremely high degree of archival permanence and resistance to fading from prolonged exposure to light could be produced.
Dichromated colloids were also used as photoresists in various industrial applications, most widely in the creation of metal printing plates for use in photomechanical printing processes.
Chromium intensification or Photochromos uses potassium dichromate together with equal parts of concentrated hydrochloric acid diluted down to approximately 10% v/v to treat weak and thin black-and-white negatives. This solution reconverts the elemental silver particles in the film to silver chloride . After thorough washing and exposure to actinic light, the film can be redeveloped to its end-point, yielding a stronger negative which is able to produce a more satisfactory print.
A potassium dichromate solution in sulfuric acid can be used to produce a reversal negative (that is, a positive transparency from a negative film). This is effected by developing a black and white film but allowing the development to proceed more or less to the end point. The development is then stopped by copious washing and the film then treated in the acid dichromate solution. This converts the silver metal to silver sulfate , a compound that is insensitive to light. After thorough washing and exposure to actinic light, the film is developed again allowing the previously unexposed silver halide to be reduced to silver metal. The results obtained can be unpredictable, but sometimes excellent results are obtained producing images that would otherwise be unobtainable. This process can be coupled with solarisation so that the end product resembles a negative and is suitable for printing in the normal way.
Cr(VI) compounds have the property of tanning animal proteins when exposed to strong light. This quality is used in photographic screen-printing .
In screen-printing a fine screen of bolting silk or similar material is stretched taut onto a frame, similar to the way canvas is prepared before painting. A colloid sensitized with a dichromate is applied evenly to the taut screen. Once the dichromate mixture is dry, a full-size photographic positive is attached securely onto the surface of the screen, and the whole assembly exposed to strong light – times vary from 3 minutes to half an hour in bright sunlight – hardening the exposed colloid. When the positive is removed, the unexposed mixture on the screen can be washed off with warm water, leaving the hardened mixture intact, acting as a precise mask of the desired pattern, which can then be printed with the usual screen-printing process.
Because it is non-hygroscopic, potassium dichromate is a common reagent in classical "wet tests" in analytical chemistry.
The concentration of ethanol in a sample can be determined by back titration with acidified potassium dichromate. Reacting the sample with an excess of potassium dichromate, all ethanol is oxidized to acetic acid :
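The oxidation half-reaction can be written as:
CH 3 CH 2 OH + H 2 O → CH 3 COOH + 4 H + + 4 e −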
Full reaction of converting ethanol to acetic acid:
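A balanced ionic equation for the full reaction is:
3 CH 3 CH 2 OH + 2 Cr 2 O 7 2− + 16 H + → 3 CH 3 COOH + 4 Cr 3+ + 11 H 2 O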
The excess dichromate is determined by titration against sodium thiosulfate . Subtracting the amount of excess dichromate from the initial amount gives the amount of dichromate consumed, and hence the amount of ethanol present. Accuracy can be improved by calibrating the dichromate solution against a blank.
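As a rough sketch of the arithmetic, assuming the back-titration is carried out iodometrically (excess dichromate liberates iodine from iodide, and the iodine is titrated with thiosulfate) and using made-up quantities for illustration:

# Back-titration arithmetic for ethanol determination with dichromate.
# The numbers below are illustrative, not measured values.
n_dichromate_initial = 0.00500   # mol Cr2O7(2-) added to the sample
n_thiosulfate_used = 0.01200     # mol S2O3(2-) used in the back-titration

# 1 mol Cr2O7(2-) accepts 6 electrons; via the iodine couple this corresponds
# to 6 mol S2O3(2-), so the unreacted (excess) dichromate is thiosulfate / 6.
n_dichromate_excess = n_thiosulfate_used / 6

# 2 mol Cr2O7(2-) oxidize 3 mol ethanol to acetic acid, so the ethanol content
# is 3/2 of the dichromate actually consumed.
n_dichromate_reacted = n_dichromate_initial - n_dichromate_excess
n_ethanol = 1.5 * n_dichromate_reacted
print(f"ethanol in sample: {n_ethanol:.4f} mol")   # about 0.0045 mol here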
One major application for this reaction is in old police breathalyzer tests. When alcohol vapor makes contact with the orange dichromate-coated crystals, the color changes from Cr(VI) orange to Cr(III) green. The degree of the color change is directly related to the level of alcohol in the suspect's breath.
When dissolved in an approximately 35% nitric acid solution it is called Schwerter's solution and is used to test for the presence of various metals, notably for determination of silver purity. Pure silver will turn the solution bright red, sterling silver will turn it dark red, low grade coin silver (0.800 fine) will turn brown (largely due to the presence of copper which turns the solution brown) and even green for 0.500 silver.
Brass turns dark brown, copper turns brown, lead and tin both turn yellow while gold and palladium do not change.
Potassium dichromate paper can be used to test for sulfur dioxide , as it turns distinctively from orange to green. This is typical of all redox reactions where hexavalent chromium is reduced to trivalent chromium. Therefore, it is not a conclusive test for sulfur dioxide. The final product formed is Cr 2 (SO 4 ) 3 .
Potassium dichromate is used to stain certain types of wood by darkening the tannins in the wood. It produces deep, rich browns that cannot be achieved with modern color dyes. It is a particularly effective treatment on mahogany . [ 10 ]
Potassium dichromate occurs naturally as the rare mineral lópezite . It has only been reported as vug fillings in the nitrate deposits of the Atacama Desert of Chile and in the Bushveld igneous complex of South Africa . [ 11 ]
In 2005–06, potassium dichromate was the 11th-most-prevalent allergen in patch tests (4.8%). [ 12 ]
Potassium dichromate is one of the most common causes of chromium dermatitis ; [ 13 ] chromium is highly likely to induce sensitization leading to dermatitis, especially of the hands and forearms, which is chronic and difficult to treat. Toxicological studies have further illustrated its highly toxic nature. In rabbits and rodents, doses as low as 14 mg/kg have produced a 50% fatality rate among test groups. [ 14 ] Aquatic organisms are especially vulnerable if exposed, and hence responsible disposal according to local environmental regulations is advised.
As with other Cr(VI) compounds, potassium dichromate is carcinogenic . [ 15 ] The compound is also corrosive, and exposure may produce severe eye damage or blindness. [ 16 ] Human exposure has also been linked to impaired fertility. | https://en.wikipedia.org/wiki/Potassium_dichromate
Potassium ethyl xanthate (KEX) is an organosulfur compound with the chemical formula CH 3 CH 2 OCS 2 K . It is a pale yellow powder that is used in the mining industry for the separation of ores . It is a potassium salt of ethyl xanthic acid . Many xanthates are known.
Xanthate salts are prepared by the action of alkoxides on carbon disulfide . The alkoxide is often generated in situ from potassium hydroxide: [ 2 ]
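For potassium ethyl xanthate itself this is:
CH 3 CH 2 OH + CS 2 + KOH → CH 3 CH 2 OCS 2 K + H 2 O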
The salt KS 2 COC 5 H 11 , prepared from potassium pentanolate and carbon disulfide has been characterized by X-ray crystallography . The COCS 2 portion of the anion is planar. The C-S bond lengths are both 1.65 Å, and the C-O distance is 1.38 Å. [ 3 ]
Potassium ethyl xanthate is a pale yellow powder that is stable at high pH , but rapidly hydrolyses below pH = 9:
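The acid-promoted decomposition can be written as:
CH 3 CH 2 OCS 2 − + H + → CH 3 CH 2 OH + CS 2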
Oxidation of xanthate salts gives diethyl dixanthogen disulfide :
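With iodine shown as an illustrative oxidant (other oxidants behave analogously):
2 CH 3 CH 2 OCS 2 K + I 2 → (CH 3 CH 2 OCS 2 ) 2 + 2 KI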
KEX is a source of ethylxanthate coordination complexes . For example, the octahedral complexes (CH 3 CH 2 OCS 2 ) 3 Cr , (CH 3 CH 2 OCS 2 ) 3 In , and (CH 3 CH 2 OCS 2 ) 3 Co have been prepared from KEX. [ 4 ]
Potassium ethyl xanthate is used in the mining industry as flotation agent for extraction of the ores of copper, nickel, and silver. [ 5 ] The method exploits the affinity of these "soft" metals for the organosulfur ligand.
Potassium ethyl xanthate is a useful reagent for preparing xanthate esters from alkyl and aryl halides. The resulting xanthate esters are useful intermediates in organic synthesis . [ 6 ]
The LD 50 is 103 mg/kg (oral, rats ) for potassium ethyl xanthate. [ 5 ] | https://en.wikipedia.org/wiki/Potassium_ethyl_xanthate |
Potassium ferrate is an inorganic compound with the formula K 2 FeO 4 . It is the potassium salt of ferric acid . Potassium ferrate is a powerful oxidizing agent with applications in green chemistry , organic synthesis, and cathode technology.
Generally, there are three ways to produce hexavalent iron: dry oxidation , wet oxidation, and electrochemical synthesis. [ citation needed ] The methods used to produce potassium ferrate are similar to those used to produce sodium ferrate and barium ferrate .
The dry oxidation method entails heating or melting iron oxides in an alkaline, oxygenated environment. The combination of high temperature (200 °C - 800 °C) and oxygen presents an explosion hazard that has led many researchers to believe this method of production is not suitable from a safety viewpoint, although many attempts have been made to overcome this problem. [ 1 ]
In the wet oxidation method, K 2 FeO 4 is prepared by oxidizing an alkaline solution of an iron(III) salt. Generally, this method employs either ferrous (Fe II ) or ferric (Fe III ) salts as the source of iron ions; calcium or sodium hypochlorite (Ca(ClO) 2 , NaClO), sodium thiosulfate (Na 2 S 2 O 3 ) or chlorine (Cl 2 ) as oxidizing agents; and, finally, sodium hydroxide, sodium carbonate (NaOH, Na 2 CO 3 ) or potassium hydroxide (KOH) to increase the pH of the solution. [ 2 ] [ 3 ] [ 4 ] For example:
3 ClO − + 2 Fe(OH) 3 (H 2 O) 3 + 4 K + + 4 OH − → 3 Cl − + 2 K 2 FeO 4 + 11 H 2 O
Electrochemical methods used to synthesize potassium ferrate usually consist of an iron anode which electrolyzes a KOH solution. [ 1 ]
Potassium ferrate is a dark purple crystalline solid that dissolves in water to form a reddish-purple solution. The salt is paramagnetic and is isostructural with K 2 MnO 4 , K 2 SO 4 , and K 2 CrO 4 . The solid consists of K + and the tetrahedral FeO 2− 4 anion, with Fe-O distances of 1.66 Å. [ 5 ] Potassium ferrate decomposes rapidly in neutral and acidic water, e.g.: [ 6 ]
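A commonly written balanced form of this decomposition is:
4 K 2 FeO 4 + 10 H 2 O → 4 Fe(OH) 3 + 8 KOH + 3 O 2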
In alkaline solution and as a dry solid, K 2 FeO 4 is stable. Under acidic conditions, the oxidation–reduction potential of the ferrate(VI) ion (2.2 V) is greater than that of ozone (2.0 V). [ 7 ]
Like sodium ferrate , K 2 FeO 4 generally does not generate environmentally toxic by-products and can be used in water treatment processes. [ citation needed ] It can act as:
In addition, potassium ferrate can be used as a bleeding stopper for fresh wounds. [ 8 ] [ 9 ] In organic synthesis , K 2 FeO 4 oxidizes primary alcohols . [ 10 ] K 2 FeO 4 has also attracted attention as a potential cathode material in a " super iron battery ." [ 11 ]
Stabilised forms of potassium ferrate have been proposed for the removal of transuranium elements , both dissolved and suspended, from aqueous solutions . [ 12 ] Tonnage quantities were proposed to help remediate the effects of the Chernobyl disaster in Belarus [ citation needed ] . This new technique was successfully applied for the removal of a broad range of heavy metals. Work on the use of potassium ferrate for the precipitation of transuranium elements and heavy metals was carried out in the laboratories of IC Technologies Inc. in partnership with ADC Laboratories from 1987 through 1992. The removal of the transuranium elements was demonstrated on samples from various Dept. of Energy nuclear sites in the USA. [ citation needed ]
Because the side products of its redox reactions are rust-like iron oxides, K 2 FeO 4 has been described as an " environmentally friendly " oxidant . In contrast, related oxidants such as chromates are considered environmentally hazardous. [ 13 ]
In 1702, Georg Ernst Stahl (1660 – 1734) observed that the ignition product of potassium nitrate (saltpetre) and iron powder displayed a red-purple color in an aqueous solution, which was eventually attributed to hexavalent potassium ferrate. Eckenberg and Becquerel in 1834 reported that a red-purple color appeared during heating of a mixture of potassium hydroxide and iron ore. In 1840, Edmond Frémy (1814 – 1894) discovered that fusion of potassium hydroxide and iron(III) oxide in air produced a high-capacity iron compound that was soluble in water: [ 1 ] | https://en.wikipedia.org/wiki/Potassium_ferrate |
Potassium ferricyanide is the chemical compound with the formula K 3 [Fe(CN) 6 ]. This bright red salt contains the octahedrally coordinated [Fe(CN) 6 ] 3− ion. [ 2 ] It is soluble in water and its solution shows some green-yellow fluorescence . It was discovered in 1822 by Leopold Gmelin . [ 3 ] [ 4 ]
Potassium ferricyanide is manufactured by passing chlorine through a solution of potassium ferrocyanide . Potassium ferricyanide separates from the solution:
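In balanced form:
2 K 4 [Fe(CN) 6 ] + Cl 2 → 2 K 3 [Fe(CN) 6 ] + 2 KCl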
Like other metal cyanides, solid potassium ferricyanide has a complicated polymeric structure. The polymer consists of octahedral [Fe(CN) 6 ] 3− centers crosslinked with K + ions that are bound to the CN ligands . [ 5 ] The K + ---NCFe linkages break when the solid is dissolved in water.
The compound is also used to harden iron and steel , in electroplating , dyeing wool , as a laboratory reagent , and as a mild oxidizing agent in organic chemistry .
The compound has widespread use in blueprint drawing and in photography ( Cyanotype process). Several photographic print toning processes involve the use of potassium ferricyanide. It is often used as a mild bleach in a concentration of 10g/L to reduce film or print density.
Potassium ferricyanide was used as an oxidizing agent to remove silver from color negatives and positives during processing, a process called bleaching. Because potassium ferricyanide bleaches are environmentally unfriendly, short-lived, and capable of releasing hydrogen cyanide gas if mixed with high concentrations and volumes of acid, bleaches using ferric EDTA have been used in color processing since the 1972 introduction of the Kodak C-41 process . In color lithography , potassium ferricyanide is used to reduce the size of color dots without reducing their number, as a kind of manual color correction called dot etching.
Ferricyanide is also used in black-and-white photography with sodium thiosulfate (hypo) to reduce the density of a negative or gelatin silver print where the mixture is known as Farmer's reducer; this can help offset problems from overexposure of the negative, or brighten the highlights in the print. [ 6 ]
Potassium ferricyanide is used as an oxidant in organic chemistry. [ 7 ] [ 8 ] It is an oxidant for catalyst regeneration in Sharpless dihydroxylations . [ 9 ] [ 10 ]
Potassium ferricyanide is also one of two compounds present in ferroxyl indicator solution (along with phenolphthalein ) that turns blue ( Prussian blue ) in the presence of Fe 2+ ions, and which can therefore be used to detect metal oxidation that will lead to rust. It is possible to calculate the number of moles of Fe 2+ ions by using a colorimeter , because of the very intense color of Prussian blue .
In physiology experiments potassium ferricyanide provides a means of increasing a solution's redox potential (E°' ~ 436 mV at pH 7). As such, it can oxidize reduced cytochrome c (E°' ~ 247 mV at pH 7) in isolated mitochondria. Sodium dithionite is usually used as a reducing chemical in such experiments (E°' ~ −420 mV at pH 7).
Potassium ferricyanide is used to determine the ferric reducing power potential of a sample (extract, chemical compound, etc.). [ 11 ] Such a measurement is used to determine the antioxidant properties of a sample.
Potassium ferricyanide is a component of amperometric biosensors as an electron transfer agent replacing an enzyme's natural electron transfer agent such as oxygen as with the enzyme glucose oxidase . It is an ingredient in commercially available blood glucose meters for use by diabetics .
Potassium ferricyanide is combined with potassium hydroxide (or sodium hydroxide as a substitute) and water to formulate Murakami's etchant. This etchant is used by metallographers to provide contrast between binder and carbide phases in cemented carbides.
Prussian blue , the deep blue pigment in blue printing, is generated by the reaction of K 3 [Fe(CN) 6 ] with ferrous (Fe 2+ ) ions as well as K 4 [Fe(CN) 6 ] with ferric salts. [ 12 ]
In histology , potassium ferricyanide is used to detect ferrous iron in biological tissue. Potassium ferricyanide reacts with ferrous iron in acidic solution to produce the insoluble blue pigment, commonly referred to as Turnbull's blue or Prussian blue . To detect ferric (Fe 3+ ) iron, potassium ferrocyanide is used instead in the Perls' Prussian blue staining method. [ 13 ] The material formed in the Turnbull's blue reaction and the compound formed in the Prussian blue reaction are the same. [ 14 ] [ 15 ]
Potassium ferricyanide has low toxicity, its main hazard being that it is a mild irritant to the eyes and skin. However, under very strongly acidic conditions, highly toxic hydrogen cyanide gas is evolved, according to the equation:
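In a general ionic form this can be written as:
[Fe(CN) 6 ] 3− + 6 H + → Fe 3+ + 6 HCN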
For example, it will react with diluted sulfuric acid under heating forming potassium sulfate , ferric sulfate and hydrogen cyanide.
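A balanced form consistent with the products named above is:
2 K 3 [Fe(CN) 6 ] + 6 H 2 SO 4 → 3 K 2 SO 4 + Fe 2 (SO 4 ) 3 + 12 HCN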
This will not occur with concentrated sulfuric acid as hydrolysis to formic acid and dehydration to carbon monoxide will take place instead. [ 17 ] | https://en.wikipedia.org/wiki/Potassium_ferricyanide |