id int64 39 79M | url stringlengths 32 168 | text stringlengths 7 145k | source stringlengths 2 105 | categories listlengths 1 6 | token_count int64 3 32.2k | subcategories listlengths 0 27 |
|---|---|---|---|---|---|---|
16,589,498 | https://en.wikipedia.org/wiki/Vector%20spherical%20harmonics | In mathematics, vector spherical harmonics (VSH) are an extension of the scalar spherical harmonics for use with vector fields. The components of the VSH are complex-valued functions expressed in the spherical coordinate basis vectors.
Definition
Several conventions have been used to define the VSH.
We follow that of Barrera et al. Given a scalar spherical harmonic $Y_{\ell m}(\theta,\varphi)$, we define three VSH:

$$\mathbf{Y}_{\ell m} = Y_{\ell m}\,\hat{\mathbf{r}}, \qquad \mathbf{\Psi}_{\ell m} = r\,\nabla Y_{\ell m}, \qquad \mathbf{\Phi}_{\ell m} = \mathbf{r}\times\nabla Y_{\ell m},$$

with $\hat{\mathbf{r}}$ being the unit vector along the radial direction in spherical coordinates and $\mathbf{r} = r\hat{\mathbf{r}}$ the vector along the radial direction with the same norm as the radius. The radial factors are included to guarantee that the dimensions of the VSH are the same as those of the ordinary spherical harmonics and that the VSH do not depend on the radial spherical coordinate.
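For readers who want to evaluate these functions numerically, the following minimal Python sketch (not part of the article; it assumes SciPy's sph_harm and a hypothetical helper name vsh) returns the three VSH at a point as components on the local basis (r̂, θ̂, φ̂), using a central difference for the θ-derivative of the scalar harmonic:

import numpy as np
from scipy.special import sph_harm  # sph_harm(m, l, azimuthal, polar)

def vsh(l, m, theta, phi, h=1e-6):
    """Return (Y_vec, Psi_vec, Phi_vec) at one point (theta = polar angle,
    phi = azimuth) as complex components on (r_hat, theta_hat, phi_hat)."""
    Y = sph_harm(m, l, phi, theta)                       # scalar Y_lm
    dY_dtheta = (sph_harm(m, l, phi, theta + h)
                 - sph_harm(m, l, phi, theta - h)) / (2.0 * h)
    dY_dphi = 1j * m * Y                                 # analytic phi-derivative
    Y_vec = np.array([Y, 0.0, 0.0], dtype=complex)       # Y_lm r_hat
    Psi_vec = np.array([0.0, dY_dtheta,
                        dY_dphi / np.sin(theta)], dtype=complex)   # r grad Y_lm
    Phi_vec = np.array([0.0, -dY_dphi / np.sin(theta),
                        dY_dtheta], dtype=complex)       # r x grad Y_lm
    return Y_vec, Psi_vec, Phi_vec

Evaluating, say, vsh(2, 1, 1.0, 0.3) and taking pairwise dot products of the three returned vectors illustrates the pointwise orthogonality discussed below.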
The interest of these new vector fields is to separate the radial dependence from the angular one when using spherical coordinates, so that a vector field admits a multipole expansion

$$\mathbf{E} = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left( E^{r}_{\ell m}(r)\,\mathbf{Y}_{\ell m} + E^{(1)}_{\ell m}(r)\,\mathbf{\Psi}_{\ell m} + E^{(2)}_{\ell m}(r)\,\mathbf{\Phi}_{\ell m}\right).$$

The labels on the components reflect that $E^{r}_{\ell m}$ is the radial component of the vector field, while $E^{(1)}_{\ell m}$ and $E^{(2)}_{\ell m}$ are transverse components (with respect to the radius vector $\mathbf{r}$).
Main properties
Symmetry
Like the scalar spherical harmonics, the VSH satisfy

$$\mathbf{Y}_{\ell,-m} = (-1)^{m}\,\mathbf{Y}^{*}_{\ell m}, \qquad \mathbf{\Psi}_{\ell,-m} = (-1)^{m}\,\mathbf{\Psi}^{*}_{\ell m}, \qquad \mathbf{\Phi}_{\ell,-m} = (-1)^{m}\,\mathbf{\Phi}^{*}_{\ell m},$$

which cuts the number of independent functions roughly in half. The star indicates complex conjugation.
Orthogonality
The VSH are orthogonal in the usual three-dimensional way at each point $\mathbf{r}$:

$$\mathbf{Y}_{\ell m}(\mathbf{r})\cdot\mathbf{\Psi}_{\ell m}(\mathbf{r}) = 0, \qquad \mathbf{Y}_{\ell m}(\mathbf{r})\cdot\mathbf{\Phi}_{\ell m}(\mathbf{r}) = 0, \qquad \mathbf{\Psi}_{\ell m}(\mathbf{r})\cdot\mathbf{\Phi}_{\ell m}(\mathbf{r}) = 0.$$

They are also orthogonal in Hilbert space:

$$\int \mathbf{Y}_{\ell m}\cdot\mathbf{Y}^{*}_{\ell' m'}\,d\Omega = \delta_{\ell\ell'}\delta_{mm'}, \qquad \int \mathbf{\Psi}_{\ell m}\cdot\mathbf{\Psi}^{*}_{\ell' m'}\,d\Omega = \ell(\ell+1)\,\delta_{\ell\ell'}\delta_{mm'}, \qquad \int \mathbf{\Phi}_{\ell m}\cdot\mathbf{\Phi}^{*}_{\ell' m'}\,d\Omega = \ell(\ell+1)\,\delta_{\ell\ell'}\delta_{mm'},$$

with all cross integrals between different families vanishing.

An additional result at a single point $\mathbf{r}$ (not reported in Barrera et al., 1985) is, for all $\ell, m, \ell', m'$,

$$\mathbf{Y}_{\ell m}(\mathbf{r})\cdot\mathbf{\Phi}_{\ell' m'}(\mathbf{r}) = 0.$$
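As a numerical sanity check of the Hilbert-space normalisation above (illustrative only; it reuses the hypothetical vsh helper sketched earlier), one can integrate |Ψ_lm|² over the sphere with a crude quadrature and compare against ℓ(ℓ+1):

import numpy as np

l, m = 2, 1
thetas = np.linspace(1e-3, np.pi - 1e-3, 200)          # avoid the poles
phis = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
dth, dph = thetas[1] - thetas[0], phis[1] - phis[0]

total = 0.0
for th in thetas:
    for ph in phis:
        _, Psi, _ = vsh(l, m, th, ph)                   # from the sketch above
        total += np.vdot(Psi, Psi).real * np.sin(th) * dth * dph

print(total, l * (l + 1))   # the crude quadrature should come out close to l(l+1)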
Vector multipole moments
The orthogonality relations allow one to compute the spherical multipole moments of a vector field as

$$E^{r}_{\ell m}(r) = \int \mathbf{E}\cdot\mathbf{Y}^{*}_{\ell m}\,d\Omega, \qquad E^{(1)}_{\ell m}(r) = \frac{1}{\ell(\ell+1)}\int \mathbf{E}\cdot\mathbf{\Psi}^{*}_{\ell m}\,d\Omega, \qquad E^{(2)}_{\ell m}(r) = \frac{1}{\ell(\ell+1)}\int \mathbf{E}\cdot\mathbf{\Phi}^{*}_{\ell m}\,d\Omega.$$
The gradient of a scalar field
Given the multipole expansion of a scalar field

$$\phi = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} \phi_{\ell m}(r)\,Y_{\ell m}(\theta,\varphi),$$

we can express its gradient in terms of the VSH as

$$\nabla\phi = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left( \frac{d\phi_{\ell m}}{dr}\,\mathbf{Y}_{\ell m} + \frac{\phi_{\ell m}}{r}\,\mathbf{\Psi}_{\ell m}\right).$$
Divergence
For any multipole field we have

$$\nabla\cdot\left(f(r)\,\mathbf{Y}_{\ell m}\right) = \left(\frac{df}{dr} + \frac{2}{r}f\right) Y_{\ell m}, \qquad \nabla\cdot\left(f(r)\,\mathbf{\Psi}_{\ell m}\right) = -\frac{\ell(\ell+1)}{r}\,f\,Y_{\ell m}, \qquad \nabla\cdot\left(f(r)\,\mathbf{\Phi}_{\ell m}\right) = 0.$$

By superposition we obtain the divergence of any vector field:

$$\nabla\cdot\mathbf{E} = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left( \frac{dE^{r}_{\ell m}}{dr} + \frac{2}{r}E^{r}_{\ell m} - \frac{\ell(\ell+1)}{r}E^{(1)}_{\ell m}\right) Y_{\ell m}.$$

We see that the component on $\mathbf{\Phi}_{\ell m}$ is always solenoidal.
Curl
For any multipole field we have

$$\nabla\times\left(f(r)\,\mathbf{Y}_{\ell m}\right) = -\frac{1}{r}\,f\,\mathbf{\Phi}_{\ell m}, \qquad \nabla\times\left(f(r)\,\mathbf{\Psi}_{\ell m}\right) = \left(\frac{df}{dr} + \frac{1}{r}f\right)\mathbf{\Phi}_{\ell m}, \qquad \nabla\times\left(f(r)\,\mathbf{\Phi}_{\ell m}\right) = -\frac{\ell(\ell+1)}{r}\,f\,\mathbf{Y}_{\ell m} - \left(\frac{df}{dr} + \frac{1}{r}f\right)\mathbf{\Psi}_{\ell m}.$$

By superposition we obtain the curl of any vector field:

$$\nabla\times\mathbf{E} = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}\left( -\frac{\ell(\ell+1)}{r}E^{(2)}_{\ell m}\,\mathbf{Y}_{\ell m} - \left(\frac{dE^{(2)}_{\ell m}}{dr} + \frac{1}{r}E^{(2)}_{\ell m}\right)\mathbf{\Psi}_{\ell m} + \left(\frac{dE^{(1)}_{\ell m}}{dr} + \frac{1}{r}E^{(1)}_{\ell m} - \frac{1}{r}E^{r}_{\ell m}\right)\mathbf{\Phi}_{\ell m}\right).$$
Laplacian
The action of the Laplace operator separates as follows:

$$\nabla^{2}\left(f(r)\,\mathbf{Y}_{\ell m}\right) = \left(\hat{\Delta}_{r} f - \frac{2+\ell(\ell+1)}{r^{2}}f\right)\mathbf{Y}_{\ell m} + \frac{2}{r^{2}}f\,\mathbf{\Psi}_{\ell m},$$
$$\nabla^{2}\left(f(r)\,\mathbf{\Psi}_{\ell m}\right) = \frac{2\,\ell(\ell+1)}{r^{2}}f\,\mathbf{Y}_{\ell m} + \left(\hat{\Delta}_{r} f - \frac{\ell(\ell+1)}{r^{2}}f\right)\mathbf{\Psi}_{\ell m},$$
$$\nabla^{2}\left(f(r)\,\mathbf{\Phi}_{\ell m}\right) = \left(\hat{\Delta}_{r} f - \frac{\ell(\ell+1)}{r^{2}}f\right)\mathbf{\Phi}_{\ell m},$$

where

$$\hat{\Delta}_{r} f = \frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{df}{dr}\right).$$

Also note that this action becomes symmetric, i.e. the off-diagonal coefficients are equal to $\frac{2}{r^{2}}\sqrt{\ell(\ell+1)}$, for properly normalized VSH.
Examples
First vector spherical harmonics
Expressions for negative values of $m$ are obtained by applying the symmetry relations.
Applications
Electrodynamics
The VSH are especially useful in the study of multipole radiation fields. For instance, a magnetic multipole is due to an oscillating current with angular frequency $\omega$ and complex amplitude
and the corresponding electric and magnetic fields, can be written as
Substituting into Maxwell's equations, Gauss's law is automatically satisfied
while Faraday's law decouples as
Gauss' law for the magnetic field implies
and the Ampère–Maxwell equation gives
In this way, the partial differential equations have been transformed into a set of ordinary differential equations.
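As an illustration of the reduction to ordinary differential equations (a sketch based on the curl identities above, not the article's own derivation): for a source-free transverse field of the form $\mathbf{E} = f(r)\,\mathbf{\Phi}_{\ell m}\,e^{-i\omega t}$, combining Faraday's law with the Ampère–Maxwell equation gives $\nabla\times(\nabla\times\mathbf{E}) = k^{2}\mathbf{E}$ with $k = \omega/c$, which reduces to the spherical Bessel equation for the radial amplitude:

$$\frac{1}{r^{2}}\frac{d}{dr}\left(r^{2}\frac{df}{dr}\right) + \left(k^{2} - \frac{\ell(\ell+1)}{r^{2}}\right) f = 0,$$

whose solutions are the spherical Bessel and Hankel functions that appear in the alternative definition below.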
Alternative definition
In many applications, vector spherical harmonics are defined as a fundamental set of solutions of the vector Helmholtz equation in spherical coordinates.
In this case, the vector spherical harmonics are generated by scalar functions, which are solutions of the scalar Helmholtz equation with wavevector $\mathbf{k}$.
Here $P^{m}_{\ell}(\cos\theta)$ are the associated Legendre polynomials, and $z_{\ell}(kr)$ are any of the spherical Bessel functions.
Vector spherical harmonics are defined as:
longitudinal harmonics
magnetic harmonics
electric harmonics
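For orientation, the generating relations commonly used for these three families (following, for example, the Stratton / Bohren–Huffman style convention; the exact normalisation given here is a sketch and may differ from the convention intended above) are

$$\mathbf{L} = \nabla \psi, \qquad \mathbf{M} = \nabla \times \left( \mathbf{r}\, \psi \right), \qquad \mathbf{N} = \frac{1}{k}\, \nabla \times \mathbf{M},$$

where $\psi$ is a solution of the scalar Helmholtz equation $\nabla^{2}\psi + k^{2}\psi = 0$. The magnetic ($\mathbf{M}$) and electric ($\mathbf{N}$) harmonics are then divergence-free solutions of the vector Helmholtz equation, while the longitudinal harmonic $\mathbf{L}$ is curl-free.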
Here we use harmonics with a real-valued angular part; complex functions can be introduced in the same way.
Let us introduce the notation $\rho = kr$. In component form, the vector spherical harmonics are written as:
There is no radial part for the magnetic harmonics. For the electric harmonics, the radial part decreases faster than the angular one, and for large $kr$ it can be neglected. We can also see that for the electric and magnetic harmonics the angular parts are the same up to a permutation of the polar and azimuthal unit vectors, so for large $kr$ the electric and magnetic harmonic vectors are equal in magnitude and perpendicular to each other.
Longitudinal harmonics:
Orthogonality
The solutions of the vector Helmholtz equation obey the following orthogonality relations:
All other integrals over the angles between different functions or functions with different indices are equal to zero.
Rotation and inversion
Under rotation, vector spherical harmonics are transformed into each other in the same way as the corresponding scalar spherical functions, which are the generating functions for the particular type of vector harmonic. For example, if the generating functions are the usual spherical harmonics, then the vector harmonics will also be transformed through the Wigner D-matrices
The behavior under rotations is the same for electric, magnetic and longitudinal harmonics.
Under inversion, electric and longitudinal spherical harmonics behave in the same way as scalar spherical functions, i.e.
and magnetic ones have the opposite parity:
Fluid dynamics
In the calculation of Stokes' law for the drag that a viscous fluid exerts on a small spherical particle, the velocity distribution obeys the Navier–Stokes equations with inertia neglected (the Stokes equations), i.e.,
with the boundary conditions
where U is the relative velocity of the particle with respect to the fluid far from the particle. In spherical coordinates this velocity at infinity can be written as
The last expression suggests an expansion in spherical harmonics for the liquid velocity and the pressure
Substitution in the Navier–Stokes equations produces a set of ordinary differential equations for the coefficients.
Integral relations
Here the following definitions are used:
In the case when the radial functions are spherical Bessel functions, the plane-wave expansion yields the following integral relations:
In the case when the radial functions are spherical Hankel functions, different formulae should be used. For vector spherical harmonics the following relations are obtained:
where the index indicates that spherical Hankel functions are used.
See also
Spherical harmonics
Spinor spherical harmonics
Spin-weighted spherical harmonics
Electromagnetic radiation
Spherical basis
References
External links
Vector Spherical Harmonics at Eric Weisstein's Mathworld
Vector calculus
Special functions
Differential equations
Applied mathematics
Theoretical physics | Vector spherical harmonics | [
"Physics",
"Mathematics"
] | 1,199 | [
"Special functions",
"Applied mathematics",
"Theoretical physics",
"Mathematical objects",
"Differential equations",
"Equations",
"Combinatorics"
] |
13,801,080 | https://en.wikipedia.org/wiki/Ion-attachment%20mass%20spectrometry | Ion-attachment mass spectrometry (IAMS) is a form of mass spectrometry that uses a "soft" form of ionization similar to chemical ionization in which a cation is attached to the analyte molecule in a reactive collision:
M + X⁺ + A → MX⁺ + A
where M is the analyte molecule, X⁺ is the cation, and A is a non-reacting collision partner.
Principle
This technique is applicable to gases or any materials that can be vaporized. It uses a non-fragmenting, non-conventional ionisation mode, in which a lithium (or other alkali) ion is attached to the gas to be analysed, combined with a more traditional mass filter. The instrument is best suited to the analysis of moderately sized molecules such as organic or aromatic compounds.
Applications
Currently, it is used industrially to verify, with a high throughput, the concentrations of brominated flame retardants (BFR) in plastics in compliance with European RoHS (Restriction of Hazardous Substances) regulation in place since 2006. The banned molecules include PBB and PBDE, whose concentration should not exceed 0.1% w/w.
IAMS has also been used to analyze diesel exhaust particles, in ceramic processing and in critical silicon etching during semiconductor manufacturing.
References
Bibliography
External links
Mass spectrometry
Ion source | Ion-attachment mass spectrometry | [
"Physics",
"Chemistry"
] | 284 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Ion source",
"Mass spectrometry",
"Matter"
] |
13,803,076 | https://en.wikipedia.org/wiki/Drug%20liberalization | Drug liberalization is a drug policy process of decriminalizing, legalizing, or repealing laws that prohibit the production, possession, sale, or use of prohibited drugs. Variations of drug liberalization include drug legalization, drug relegalization, and drug decriminalization. Proponents of drug liberalization may favor a regulatory regime for the production, marketing, and distribution of some or all currently illegal drugs in a manner analogous to that for alcohol, caffeine and tobacco.
Proponents of drug liberalization argue that the legalization of drugs would eradicate the illegal drug market and reduce the law enforcement costs and incarceration rates. They frequently argue that prohibition of recreational drugs—such as cannabis, opioids, cocaine, amphetamines and hallucinogens—has been ineffective and counterproductive and that substance use is better responded to by implementing practices for harm reduction and increasing the availability of addiction treatment. Additionally, they argue that relative harm should be taken into account in the regulation of drugs. For instance, they may argue that addictive or dependence-forming substances such as alcohol, tobacco and caffeine have been a traditional part of many cultures for centuries and remain legal in most countries, although other drugs which cause less harm than alcohol, caffeine or tobacco are entirely prohibited, with possession punishable with severe criminal penalties.
Opponents of drug liberalization argue that it would increase the number of drug users, increase crime, destroy families, and increase the prevalence of adverse physical effects among drug users.
Policies
The 1988 United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances made it mandatory for the signatory countries to "adopt such measures as may be necessary to establish as criminal offences under its domestic law" (art. 3, § 1) all the activities related to the production, sale, transport, distribution, etc. of the substances included in the most restricted lists of the 1961 Single Convention on Narcotic Drugs and 1971 Convention on Psychotropic Substances. Criminalization also applies to the "cultivation of opium poppy, coca bush or cannabis plants for the purpose of the production of narcotic drugs". The Convention distinguishes between the intent to traffic and personal consumption, stating that the latter should also be considered a criminal offence, but "subject to the constitutional principles and the basic concepts of [the state's] legal system" (art. 3, § 2).
Drug liberalization proponents hold differing reasons to support liberalization, and have differing policy proposals. The two most common positions are drug legalization (or re-legalization), and drug decriminalization. The European Monitoring Centre for Drugs and Drug Addiction (EMCDDA) defines decriminalization as the removal of a conduct or activity from the sphere of criminal law; depenalisation signifying merely a relaxation of the penal sanction exacted by law. Decriminalization usually applies to offences related to drug consumption and may include either the imposition of sanctions of a different kind (administrative) or the abolition of all sanctions; other (noncriminal) laws then regulate the conduct or activity that has been decriminalized. Depenalisation usually consists of personal consumption as well as small-scale trading and generally signifies the elimination or reduction of custodial penalties, while the conduct or activity still remains a criminal offence. The term legalization refers to the removal of all drug-related offences from criminal law, such as use, possession, cultivation, production, and trading.
Harm reduction refers to a range of public health policies designed to reduce the harmful consequences associated with recreational drug use and other high risk activities. Harm reduction is put forward as a useful perspective alongside the more conventional approaches of demand and supply reduction. Many advocates argue that prohibitionist laws criminalize people for suffering from a disease and cause harm, for example by obliging drug addicts to obtain drugs of unknown purity from unreliable criminal sources at high prices, increasing the risk of overdose and death. Its critics are concerned that tolerating risky or illegal behaviour sends a message to the community that these behaviours are acceptable.
The Controlled Substances Act (United States)
The Controlled Substances Act (CSA) categorizes all substances in need of regulation into one of five schedules under federal law. The categorization of these substances is determined by their potential for abuse and how safe they are to consume. In addition, a big determinant is the way in which the substance can be consumed or used medically. In its earliest stages, the CSA was created to combine the needs of two international treaties. These treaties were known as the Single Convention on Narcotic Drugs of 1961 and the Convention on Psychotropic Substances of 1971. Both treaties allowed public health authorities to work with the medical and scientific communities to create a classification system. Schedule I substances are described as those that have no accepted medical use, meaning no prescriptions may be written for them. Schedule II substances are those that can be easily abused and lead to dependence. These substances can only be accessed through a written or electronic prescription from a physician. Schedule III substances are classified as those which have less potential for abuse than Schedules I and II but can still cause the individual to develop a mild dependence. Schedule IV substances have a lower potential for abuse than Schedule III, and their medical use is common in the United States. Lastly, Schedule V substances are those with little to no likelihood of abuse, along with very minimal risk of dependence.
Drug legalization (United States)
Drug legalization calls for a return to pre–1906 Pure Food and Drug Act attitudes when almost all drugs were legal. This would require ending government-enforced prohibition on the distribution or sale and personal use of specified (or all) currently banned drugs. Proposed ideas range from full legalization which would completely remove all forms of government control, to various forms of regulated legalization, where drugs would be legally available, but under a system of government control which might mean for instance:
Mandated labels with dosage and medical warnings.
Restrictions on advertising.
Age limitations.
Restrictions on amount purchased at one time.
Requirements on the form in which certain drugs would be supplied.
Ban on sale to intoxicated persons.
Special user licenses to purchase particular drugs.
A possible clinical setting for the consumption of some intravenous drugs or supervised consumption.
The regulated legalization system would probably have a range of restrictions for different drugs, depending on their perceived risk, so while some drugs would be sold over the counter in pharmacies or other licensed establishments, drugs with greater risks of harm might only be available for sale on licensed premises where use could be monitored and emergency medical care made available. Examples of drugs with different levels of regulated distribution in most countries include: caffeine (coffee, tea), nicotine (tobacco), and ethyl alcohol (beer, wine, spirits). Since each country has its own regulations and most distinguish between different classes of drugs, there can be difficulties when it comes to regulating which should be more readily accessible, since a particular drug criminalized in one area might be completely acceptable elsewhere. Full legalization is often proposed by groups, such as libertarians, who object to drug laws on moral grounds, while regulated legalization is suggested by groups like Law Enforcement Against Prohibition who object to drug laws on the grounds that they fail to achieve their stated aims and instead, they say, greatly worsen the problems associated with use of prohibited drugs, while acknowledging that there are harms associated with currently prohibited drugs which need to be minimized. Not all proponents of drug re-legalization necessarily share a common ethical framework, and people may adopt this viewpoint for a variety of reasons. In particular, favoring drug legalization does not imply approval of drug use.
Drug decriminalization
Drug decriminalization calls for reduced or eliminated control or penalties compared to existing laws. Some proponents of drug decriminalization support a system whereby those who use and possess drugs for personal use are not penalized, while others support the use of fines or other punishments to replace prison terms, and often propose systems whereby illegal drug users who are caught would be fined but would not receive a permanent criminal record as a result. A central feature of drug decriminalization is the concept of harm reduction. Drug decriminalization is in some ways an intermediate between prohibition and legalization, and has been criticized by Peter Lilley as being "the worst of both worlds", in that drug sales would still be illegal, thus perpetuating the problems associated with leaving production and distribution of drugs to the criminal underworld, while also failing to discourage illegal drug use by removing the criminal penalties that might otherwise cause some people to choose not to use drugs.
In 2001, Portugal began treating use and possession of small quantities of drugs as a public health issue. Rather than incarcerating those in possession, they are referred to a treatment program by a regional panel composed of social workers, medical professionals, and drug experts. This also decreases the amount of money the government spends fighting a war on drugs and money spent keeping drug users incarcerated. HIV infection rates also have dropped from 104.2 new cases per million in 2000 to 4.2 cases per million in 2015. Anyone caught with any type of drug in Portugal, if it is for personal consumption, will not be imprisoned. Portugal is the first country that has decriminalized the possession of small amounts of drugs, to positive results.
As noted by the EMCDDA, across Europe in the last decades there has been a movement toward "an approach that distinguishes between the drug trafficker, who is viewed as a criminal, and the drug user, who is seen more as a sick person who is in need of treatment" (EMCDDA 2008, 22). "A number of Latin American countries have similarly moved to reduce the penalties associated with drug use and personal possession" (Laqueur, 2015, p. 748). Mexico City has decriminalized certain drugs, and Greece has announced that it is going to do so. Spain has also followed the Portuguese model. Italy, after waiting 10 years to see the result of the Portuguese model, which Portugal deemed a success, has recently followed suit. In May 2014, the Criminal Chamber of the Italian Supreme Court upheld a previous 2013 decision by Italy's Constitutional Court to reduce the penalties for convictions for the sale of soft drugs. Some other countries have virtual decriminalization for marijuana only, including three U.S. states (Colorado, Washington, and Oregon), the Australian state of South Australia, and the Netherlands, where there are legal marijuana cafes. In the Netherlands these cafes are called "coffeeshops".
History
The cultivation, use and trade of psychoactive and other drugs has occurred since the dawn of civilization. Motivations claimed by supporters of drug prohibition laws across various societies and eras have included religious observance, allegations of violence by racial minorities, and public health concerns. Those who are proponents of drug legislation characterize these motivations as religious intolerance, racism, and public healthism. The British had gone to war with China in the 19th century in what became known as the First and Second Opium Wars to protect their valuable trade in narcotics. It was only in the 20th century that Britain and the United States outlawed cannabis. The campaign against alcohol prohibition culminated in the Twenty-first Amendment to the United States Constitution repealing prohibition on 5 December 1933, as well as liberalization in Canada, and some but not all of the other countries that enforced prohibition. Despite this, many laws controlling the use of alcohol continue to exist even in these countries. In the mid-20th century, the United States government led a major renewed surge in drug prohibition called the war on drugs.
Initial attempts to change the punitive drug laws which were introduced all over the world from the late 1800s onwards were primarily based around recreational use. Timothy Leary was one of the most prominent campaigners for the legal and recreational use of LSD. In 1967, a "Legalise pot" rally was held in Britain. As the death toll from the drug war rose, other organisations began to form to campaign on a more political and humanitarian basis. The Drug Policy Foundation formed in America, and Release, a charity which gives free legal advice to drug users and currently campaigns for drug decriminalization, was also incorporated in the 1970s. Into the 21st century, the focus of the world's drug policy reform organisations has been on the promotion of harm reduction in the Western world, and on attempting to prevent the catastrophic loss of human life in developing countries where much of the world's supply of heroin, cocaine, and marijuana is produced. Drug policy reform advocates point to failed efforts, such as the Mexican Drug War, as signs that a new approach to drug policy is needed. According to some observers, the Mexican Drug War has claimed as many as 80,000 lives.
In 2014, a European Citizens' Initiative called "Weed Like to Talk" was launched within the European Union, with the aim of starting a debate in Europe about the legalization of the production, sale and use of marijuana in the European Union and finding a common policy for all EU member states. As of June 30, 2014, the initiative has collected 100,000 signatures from citizens in European member states. Should they reach 1 million signatures, from nationals of at least one quarter of the member states, the European Commission will be required to initiate a legislative proposal and a debate on the issue.
Economics
There are numerous economic and social impacts of the criminalization of drugs. According to economist Mark Thornton, prohibition increases the prices of drugs, political corruption, and criminal activity. It also produces more dangerous and addictive drugs. In many developing countries the production of drugs offers a way to escape poverty. Milton Friedman estimated that over 10,000 deaths a year in the US are caused by the criminalization of drugs, and that if drugs were made legal, the deaths of innocent victims, such as those killed in drive-by shootings, would cease or decrease.
The economic inefficiency and ineffectiveness of such government intervention in preventing the drug trade has been fiercely criticised by drug-liberty advocates. The United States' war on drugs, which provoked legislation within several other Western governments, has also garnered criticism for these reasons. The legalization of drugs would affect the supply of and demand for these currently illegal substances. The price of production rises because of the costs that come with the transportation and distribution of these substances. It has been noted that the prohibition of drugs has led to a decrease in consumer surplus; the decrease in consumption is due to the price increase of these drugs. In a clear example of the way in which supply and demand are affected, individuals have responded to price increases from already high levels, rather than to prices which started off low.
Prices and consumption
Much of the debate surrounding the economics of drug legalization centers on the shape of the demand curve for illegal drugs and the sensitivity of consumers to changes in the prices of illegal drugs. Proponents of drug legalization often assume that the quantity of addictive drugs consumed is unresponsive to changes in price; however, studies into addictive but legal substances like alcohol and cigarettes have shown that consumption can be quite responsive to changes in prices. In one such study, economists Michael Grossman and Frank J. Chaloupka estimated that a 10% reduction in the price of cocaine would lead to a 14% increase in the frequency of cocaine use. This increase indicates that consumers are responsive to price changes in the cocaine market. There is also evidence that in the long run, consumers are much more responsive to price changes than in the short run, but other studies have led to a wide range of conclusions.
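Read as an elasticity (an inference from the two percentages just quoted, not a figure reported here from the study itself), this corresponds to a price elasticity of demand of roughly

$$\varepsilon \approx \frac{\%\Delta Q}{\%\Delta P} \approx \frac{+14\%}{-10\%} = -1.4,$$

i.e., demand in this range responds more than proportionally to price, rather than being perfectly inelastic.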
Considering that legalization would likely lead to an increase in the supply of drugs, the standard economic model predicts that the quantity of drugs consumed would rise and that prices would fall. Andrew E. Clark, an economist who has studied the effects of drug legalization, suggests that a specific tax, or sin tax, would counteract the increase in consumption. Additionally, legalization would reduce the cost of mass incarceration of marginalized communities, which are disproportionately affected: of those arrested for drug possession or drug-related crimes, the majority are Black or Hispanic.
Associated costs
Proponents of drug prohibition argue that many negative externalities, or third party costs, are associated with the consumption of illegal drugs. Externalities like violence, environmental effects on neighborhoods, increased health risks, and increased healthcare costs are often associated with the illegal drug market. Opponents of prohibition argue that many of those externalities are created by current drug policies. They believe that much of the violence associated with drug trade is due to the illegal nature of drug trade, where there is no mediating authority to solve disputes peacefully and legally. The illegal nature of the market also affects the health of consumers by making it difficult to acquire syringes, which often leads to needle sharing.
Prominent economist Milton Friedman argues that prohibition of drugs creates many negative externalities, like increased incarceration rates, the undertreatment of chronic pain, corruption, disproportionate imprisonment of African Americans, compounding harm to users, the destruction of inner cities and harm to foreign countries. Proponents of legalization also argue that prohibition decreases the quality of the drugs made, which often leads to more physical harm, like accidental overdoses and poisoning, to drug users. Steven D. Levitt and Ilyana Kuziemko point to the overcrowding of prisons as another negative side effect of the war on drugs. They believe that by sending such a large number of drug offenders to prison, the war on drugs has reduced the prison space available for other offenders. This increased incarceration rate not only costs taxpayers more to maintain, it could possibly increase crime by crowding violent offenders out of prison cells and replacing them with drug offenders.
Direct costs
A Harvard economist, Jeffrey Miron, estimated that ending the war on drugs would inject 76.8 billion dollars into the US economy in 2010 alone. He estimates that the government would save $41.3 billion for law enforcement and the government would gain up to $46.7 billion in tax revenue. Since the war on drugs began under the administration of President Richard Nixon, the federal drug-fighting budget has increased from $100 million in 1970 to $15.1 billion in 2010, with a total cost estimated near 1 trillion dollars over 40 years. In the same time period an estimated 37 million nonviolent drug offenders have been incarcerated. $121 billion was spent to arrest these offenders and $450 billion to incarcerate them.
Size of the illegal drug market
According to 2013 data from the United Nations Office on Drugs and Crime (UNODC) and European crime-fighting agency Europol, the annual global drugs trade is worth around $435 billion a year, with the annual cocaine trade worth $84 billion of that amount.
Policies by country
Asia
Philippines
Senator Bato dela Rosa, despite having the reputation of leading the deadly war on drugs during the presidency of Rodrigo Duterte as chief of the Philippine National Police, filed a bill in the Senate in November 2022 proposing the decriminalization of illegal drug use. This bid was an attempt to deal with prison overcrowding and underutilization of drug rehabilitation centers. Although the proposal does not cover drug trafficking and manufacturing, the bill was met with opposition from law enforcement agencies, which believe it would send a "wrong signal" and encourage drug abuse. The Department of Health has supported the proposal.
Thailand
"A committee tasked with controlling illegal drugs has won a majority vote to have cannabis and hemp reclassified as narcotics, and the listing will take effect on" 1 January 2024, according to media.
Although Thailand has a strict drug policy, in May 2018, the Cabinet approved draft legislation that allows for more research into the effects of marijuana on people. Thus, the Government Pharmaceutical Organization (GPO) will soon begin clinical trials of marijuana as a preliminary step in the production of drugs from this plant. These medical studies are considered exciting, new landmarks in the history of Thailand, because the manufacture, storage, and use of marijuana has been completely outlawed in Thailand since 1979.
On 9 November 2018, the National Assembly of Thailand officially proposed to allow licensed medical use of marijuana, thereby legalizing what was previously considered a dangerous drug. The National Assembly submitted its amendments to the Ministry of Health, which would place marijuana and the plant kratom in the category allowing their licensed possession and distribution under regulated conditions. The ministry reviewed the amendments before sending them to the cabinet, which returned them to the National Assembly for a final vote. This process was completed on 25 December 2018. Thus, Thailand became the first Asian country to legalize medical cannabis. These changes did not allow recreational use of drugs. These actions were taken because of the growing interest in the use of marijuana and its components for the treatment of certain diseases. Cannabis was decriminalized in Thailand on 9 June 2022, making recreational use also legal, although smoking in public can still incur penalties due to being considered a public nuisance. Supporters of legalization argue that the legal market for marijuana in Thailand could increase to $5 billion by 2024.
Europe
Czech Republic
In the Czech Republic, until 31 December 1998 only drug possession "for other person" (i.e. intent to sell) was criminal (apart from production, importation, exportation, offering or mediation, which was and remains criminal) while possession for personal use remained legal. On 1 January 1999, an amendment of the Criminal Code, which was necessitated in order to align the Czech drug rules with the Single Convention on Narcotic Drugs, became effective, criminalizing possession of "amount larger than small" also for personal use (Art. 187a of the Criminal Code) while possession of small amounts for personal use became a misdemeanor. The judicial practice came to the conclusion that the "amount larger than small" must be five to ten times larger (depending on drug) than a usual single dose of an average consumer.
On 14 December 2009, the Government of the Czech Republic adopted Regulation No. 467/2009 Coll., that took effect on 1 January 2010, and specified what "amount larger than small" under the Criminal Code meant, effectively taking over the amounts that were already established by the previous judicial practice. According to the regulation, a person could possess up to 15 grams of marijuana or 1.5 grams of heroin without facing criminal charges. These amounts were higher (often many times) than in any other European country, possibly making the Czech Republic the most liberal country in the European Union when it comes to drug liberalization, apart from Portugal. Under the Regulation No. 467/2009 Coll, possession of the following amounts or less of illicit drugs was to be considered smaller than large for the purposes of the Criminal Code and was to be treated as a misdemeanor subject to a fine equal to a parking ticket:
Marijuana 15 grams (or five plants)
Hashish 5 grams
Magic mushrooms 40 pieces
Peyote 5 plants
LSD 5 tablets
Ecstasy 4 tablets
Amphetamine 2 grams
Methamphetamine 2 grams
Heroin 1.5 grams
Coca 5 plants
Cocaine 1 gram
In 2013, a District Court in Liberec was deciding a case of a person that was accused of criminal possession for having 3.25 grams of methamphetamine (1.9 grams of straight methamphetamine base), well over the Regulation's limit of 2 grams. The court considered that basing a decision on mere Regulation would be unconstitutional and in breach of Article 39 of the Czech Charter of Fundamental Rights and Freedoms which states that "only a law may designate which acts constitute a crime and what penalties, or other detriments to rights or property, may be imposed for committing them" and proposed to the Constitutional Court to abolish the Regulation. In line with the District Courts' argument, the Constitutional Court abolished the Regulation effective from 23 August 2013, noting that the "amount larger than small" within the meaning of the Criminal Code may be designated only by the means of an Act of Parliament, and not a Governmental Regulation. Moreover, the Constitutional Court further noted that the Regulation merely took over already existing judicial practice of interpretation of what constitutes "amount larger than small" and thus its abolishment will not really change the criminality of drug possession in the country. Thus, the above-mentioned amounts from the now-not-effective Regulation remain as the base for consideration of police and prosecutors, while courts are not bound by the precise grammage.
Sale of any amount (not purchase) remains a criminal act. Possession of an "amount larger than small" of marijuana can result in a jail sentence of up to one year. For other illicit drugs, the sentence is up to two years. Trafficking as well as production offenses (apart from growing up to five plants of marijuana) carry stiffer sentences. Medical use of cannabis on prescription has been legal and regulated since 1 April 2013.
France
Following a contentious debate France opened its first supervised injection centre on 11 October 2016. Marisol Touraine, the Minister of Health, declared that the centre, located near the Gare du Nord in Paris, was "a strong political response, for a pragmatic and responsible policy that brings high-risk people back towards the health system rather than stigmatizing them."
Germany
In 1994, the Federal Constitutional Court ruled that drug addiction was not a crime, nor was the possession of small amounts of drugs for personal use. In 2000, the German narcotic law (BtmG) was changed to allow for supervised drug injection rooms. In 2002, a pilot study was started in seven German cities to evaluate the effects of heroin-assisted treatment on addicts, compared to methadone-assisted treatment. The positive results of the study led to the inclusion of heroin-assisted treatment into the services of the mandatory health insurance in 2009. On 4 May 2016, the Cabinet of Germany decided to approve the measure for legal cannabis for seriously ill patients who have consulted with a doctor and "have no therapeutic alternative". German Health Minister, Hermann Gröhe, presented the legal draft on the legalization of medical cannabis to the cabinet which was expected to take effect early 2017.
Ireland
On 2 November 2015, Aodhán Ó Ríordáin, the minister in charge of the National Drugs Strategy, announced that Ireland planned to introduce supervised injection rooms. The minister also referenced that possession of controlled substances will be decriminalized although supply and production will remain criminalized. On 12 July 2017, the Health Committee of the Irish government rejected a bill that would have legalized medical cannabis.
Netherlands
The drug policy of the Netherlands is based on two principles: (1) drug use is a public health issue, not a criminal matter, and (2) a distinction between hard and soft drugs exists. Additionally, a policy of non-enforcement has led to a situation where reliance upon non-enforcement has become common; because of this, the courts have ruled against the government when individual cases were prosecuted. Cannabis remains a controlled substance in the Netherlands and both possession and production for personal use are still misdemeanors, punishable by fine. Cannabis coffee shops are also illegal according to the statutes.
Norway
On 14 June 2010, the Stoltenberg commission recommended implementing heroin-assisted treatment and expanding harm reduction measures. On 18 June 2010, Knut Storberget, Minister of Justice and the Police, announced that the ministry was working on a new drug policy involving decriminalization along the lines of the Portuguese model, which was to be introduced to parliament before the next general election. Storberget later changed his statements, saying the decriminalization debate was "for academics", instead calling for coerced treatment. In early March 2013, Minister of Health and Care Services Jonas Gahr Støre proposed decriminalizing the inhalation of heroin by 2014 as a measure to decrease drug overdoses. In 2011, there were 294 fatal overdoses, compared to only 170 traffic-related deaths.
The country was preparing a massive policy change in terms of how to deal with drug use and drug possession for personal use. The reform titled "From punishment to help" was approved by the Norwegian government in 2017 and was in the final phase of approval by the parliament. Changes were expected to be implemented by early 2021. The new reform policy emphasizes that criminalizing drug use has no significant effect on rates of drug consumption and that drug addiction is better dealt with by health care services, hence the slogan "from punishment to help". Instead of fines or prison time, a person caught with a drug quantity for personal use will now be met with an independent panel consisting of social and health care workers that will discuss administrative sanctions or addiction treatment methods. This will hopefully encourage problematic users to seek help rather than fear of prosecution. There is also hope that this will improve the relationship between drug users and law enforcement officers. Opponents of the reform, including the police force and the Progress Party, fear that drug use will increase once a person is no longer at risk of facing criminal charges.
As of 21 July 2022, drug decriminalisation has not materialised in Norway. As of this date, only those who have substance use disorders may go unpunished if the amount of illegal drugs they have meets the criteria of what is deemed an amount for personal use.
Portugal
In 2001, Portugal became the first European country to abolish all criminal penalties for personal drug possession, under Law 30/2000. In addition, drug users were to be provided with therapy rather than prison sentences. Research commissioned by the Cato Institute and led by Glenn Greenwald found that in the five years after the start of decriminalization, illegal drug use by teenagers had declined, the rate of HIV infections among drug users had dropped, deaths related to heroin and similar drugs had been cut by more than half, and the number of people seeking treatment for drug addiction had doubled. Peter Reuter, a professor of criminology and public policy at the University of Maryland, College Park, suggested that the heroin usage rates and related deaths may have been due to the cyclical nature of drug epidemics. In 2009, he stated that "decriminalization in Portugal has met its central goal. Drug use did not rise." In 2023, drug use had increased by 7.8 percent, compared to 2001 when the policies had been implemented.
Ukraine
The use of marijuana in Ukraine is not prohibited, but the manufacture, storage, transportation and sale of cannabis and its derivatives carry administrative and criminal liability. Discussion of the legalization of soft drugs in Ukraine has been going on for a long time. In June 2016, the Parliament received a bill on the legalization of marijuana for medical purposes. It dealt with changes to the current act "On narcotic drugs, psychotropic substances and precursors" and was registered as number 4533. The document was to be examined by the relevant committee and then submitted to the government. It was expected that this would happen in the fall of 2016, but the bill was not considered. In October 2018, a petition appeared on the website of electronic appeals to the President of Ukraine asking for the legalization of marijuana. In October 2018, the State Service of Ukraine on Drugs and Drug Control issued the first license for the import and re-export of raw materials and products derived from cannabis. The corresponding licenses were obtained by the US company C21. The company is also in the process of applying for additional licenses, including for the cultivation of cannabis.
Latin America
In the late 2000s and early 2010s, advocacy for drug legalization increased in Latin America. Spearheading the movement, the Uruguayan government announced in 2012 plans to legalize state-controlled sales of marijuana in order to fight drug-related crimes. Some countries in this region have already advanced towards depenalization of personal consumption.
Argentina
In August 2009, the Supreme Court of Argentina declared in a landmark ruling that it was unconstitutional to prosecute citizens for having drugs for their personal use – "adults should be free to make lifestyle decisions without the intervention of the state". The decision affected the second paragraph of Article 14 of the country's drug control legislation (Law Number 23,737) that punishes the possession of drugs for personal consumption with prison sentences ranging from one month to two years (although education or treatment measures can be substitute penalties). The unconstitutionality of the article concerns cases of drug possession for personal consumption that does not affect others.
Brazil
In 2002 and 2006, Brazil went through legislative changes, resulting in a partial decriminalization of possession for personal use. Prison sentences no longer applied and were replaced by educational measures and community service; however, the 2006 law does not provide objective means to distinguish between users and traffickers. A disparity exists between the decriminalization of drug use and the increased penalization of selling drugs, punishable with a maximum prison sentence of 5 years for the sale of very minor quantities of drugs. Most of those incarcerated for drug trafficking are offenders caught selling small quantities of drugs, among them drug users who sell drugs to finance their drug habits. Since 2006, there has been a long debate over whether the anti-drug law goes against the Constitution and the principle of personal freedom. In 2009, the Supreme Federal Court re-opened voting on whether the law is constitutional, or whether it goes against the Constitution, specifically against personal freedom of choice. Since each minister of the tribunal can take personal time to evaluate the law, the voting can take years. In fact, the voting was re-opened in 2015; 3 ministers voted in favor, and then the proceedings were again paused by another minister.
Colombia
Guatemalan President Otto Pérez Molina and Colombian President Juan Manuel Santos proposed the legalization of drugs in an effort to counter the failure of the war on drugs, which was said to have yielded poor results at a huge cost. On 25 May 2016, the Colombian congress approved the legalization of marijuana for medical usage.
Costa Rica
Costa Rica has decriminalized drugs for personal consumption. Manufacturing or selling drugs is still a jailable offense.
Ecuador
According to the 2008 Constitution of Ecuador, in its Article 364, the Ecuadorian state does not see drug consumption as a crime but only as a health concern. Since June 2013, the state drugs regulatory office CONSEP has published a table which establishes maximum quantities carried by persons so as to be considered in legal possession and that person as not a seller of drugs. The "CONSEP established, at their latest general meeting, that the following quantities be considered the maximum consumer amounts: 10 grams of marijuana or hash, 4 grams of opiates, 100 milligrams of heroin, 5 grams of cocaine, 0.020 milligrams of LSD, and 80 milligrams of methamphetamine or MDMA".
Honduras
On 22 February 2008, Honduran President Manuel Zelaya called on the United States to legalize drugs in order to prevent the majority of violent murders occurring in Honduras. Honduras is used by cocaine smugglers as a transit point between Colombia and the US. Honduras, with a population of 7 million, suffers an average of 8–10 murders a day, an estimated 70% of which are a result of this international drug trade. According to Zelaya, the same problem is occurring in Guatemala, El Salvador, Costa Rica, and Mexico.
Mexico
In April 2009, the Mexican Congress approved changes in the General Health Law that decriminalized the possession of illegal drugs for immediate consumption and personal use, allowing a person to possess up to 5 g of marijuana or 500 mg of cocaine. The only restriction is that people in possession of drugs should not be within a 300-meter radius of schools, police departments, or correctional facilities. Opium, heroin, LSD, and other synthetic drugs were also decriminalized; possession is not considered a crime as long as the dose does not exceed the limit established in the General Health Law. Many question this classification, as cocaine is as much synthesised as heroin: both are produced as extracts from plants. The law establishes very low amount thresholds and strictly defines personal dosage. For those arrested with more than the threshold allowed by the law, this can result in heavy prison sentences, as they will be assumed to be small traffickers even if there are no other indications that the amount was meant for selling.
Uruguay
Uruguay is one of few countries that never criminalized the possession of drugs for personal use. Since 1974, the law establishes no quantity limits, leaving it to the judge's discretion to determine whether the intent was personal use. Once it is determined by the judge that the amount in possession was meant for personal use, there are no sanctions. In June 2012, the Uruguayan government announced plans to legalize state-controlled sales of marijuana in order to fight drug-related crimes. The government also stated that they will ask global leaders to do the same.
On 31 July 2013, the Uruguayan House of Representatives approved a bill to legalize the production, distribution, sale, and consumption of marijuana by a vote of 50 to 46. The bill then passed the Senate, where the left-leaning majority coalition, the Broad Front, held a comfortable majority. The bill was approved by the Senate by 16 to 13 on 10-December-2013. The bill was presented to the President José Mujica, also of the Broad Front coalition, who has supported legalization since June 2012. Relating this vote to the 2012 legalization of marijuana by the U.S. states Colorado and Washington, John Walsh, drug policy expert of the Washington Office on Latin America, stated that "Uruguay's timing is right. Because of last year's Colorado and Washington State votes to legalize, the U.S. government is in no position to browbeat Uruguay or others who may follow."
In July 2014, government officials announced that part of the implementation of the law (the sale of cannabis through pharmacies) was postponed to 2015, as "there are practical difficulties". Authorities will grow all the cannabis that can be sold legally. The concentration of THC shall be 15% or lower. In August 2014, an opposition presidential candidate, who was not elected in the November 2014 presidential elections, claimed that the new law was never going to be applied, as it was not workable. By the end of 2016 the government announced that sale through pharmacies would be fully implemented during 2017.
North America
Canada
The cultivation of cannabis is currently legal in Canada, except in Manitoba and Quebec. Citizens outside those provinces may grow up to four plants per residence for personal use, and recreational use of cannabis by the general public is legal with restrictions on smoking in public locations that vary by jurisdiction. The sale of marijuana seeds is also legal.
In 2001, The Globe and Mail reported that a poll found 47% of Canadians agreed with the statement "The use of marijuana should be legalized" in 2000, compared to 26% in 1975. A more recent poll found that more than half of Canadians supported legalization. In 2007, Prime Minister Stephen Harper's government tabled Bill C-26 to amend the Controlled Drugs and Substances Act, 1996 to bring forth a more restrictive law with higher minimum penalties for drug crimes. Bill C-26 died in committee after the dissolution of the 39th Canadian Parliament in September 2008, but the bill was subsequently resurrected by the government twice.
In 2015, Prime Minister Justin Trudeau and the Liberal Party of Canada campaigned on a promise to legalize marijuana. The Cannabis Act was passed on 19 June 2018, which made marijuana legal across Canada on 17 October 2018. Since legalization, the country has set up an online framework to allow consumers to purchase a wide variety of merchandise ranging from herbs, extract, oil capsules, and paraphernalia. Most provinces also provide a venue for purchase through physical brick and mortar stores.
In 2021, the city councils of Vancouver and Toronto voted to decriminalize the simple possession of all drugs and submitted proposals requesting special exemption from the federal Health Minister to do so, citing numerous scientific, psychological, medical, and socio-economic benefits. In early 2022, the Province of British Columbia submitted its own request for exemption, closely following the Vancouver model. By April of that year, the Edmonton City Council had also passed a motion to request exemption from federal drug enforcement laws in order to decriminalize "simple personal possession" of illegal drugs, voting in favour 11–2. On 31 May 2022, the federal government of Canada approved British Columbia's proposal to decriminalize all "hard drugs", such as heroin, fentanyl, cocaine, and methamphetamine. As of 1 January 2023, British Columbians aged 18 years or older are allowed to carry up to a cumulative total of 2.5 grams of these substances without the risk of arrest or criminal charges. Police are not to confiscate the drugs, and there is no requirement that people found to be in possession seek treatment; however, the production, trafficking, and exportation of these drugs remain illegal.
United States
As of 2024, prior to November elections, 38 states, Washington, D.C., and certain U.S. territories allow medical use of cannabis. Of those 38 states, 24 also allow recreational use, as does Washington, D.C. Voters in North and South Dakota and Florida will decide on recreational use in November, and Nebraskans will vote on cannabis use for medical reasons. Legalization in states created significant legal and policy tensions between federal and state governments and sometimes between states. State laws in conflict with federal law about cannabis remain valid, and prevent state level prosecution, despite cannabis being illegal under federal law, as determined in Gonzales v. Raich (2005).
Throughout the United States, various people and groups have been pushing for the legalization of marijuana for medical reasons. Organizations such as NORML and the Marijuana Policy Project work to decriminalize and legalize possession, use, cultivation, and sale of marijuana by adults. In 1996, 56% of California voters voted for California Proposition 215, legalizing the growing and use of marijuana for medical purposes and making California both the first state to outlaw marijuana, in 1913, and the first state to legalize medical marijuana.
On 6 November 2012, the states of Washington and Colorado legalized possession of small amounts of marijuana for private recreational use and created a process for writing rules for legal growing and commercial distribution of marijuana within each state, after having legalized medical cannabis in 1998 and 2000, respectively. In 2014, voters in Oregon, Alaska, and Washington, D.C. voted to legalize marijuana for recreational use, as did California in 2016, with the passage of California Proposition 64, and Michigan in 2018. In 2019, Illinois passed the Illinois Cannabis Regulation and Tax Act, making Illinois the first state to legalize recreational use by an act of the state legislature, which took effect 1 January 2020. In 2020, Oregon decriminalized the possession of all drugs in Measure 110, but in 2024, the Oregon State Senate passed a bill to reverse the decriminalization of hard drugs such as heroin after there was public backlash to the impacts of the measure. In 2021, New York legalized adult-use cannabis when it passed the Marijuana Regulation and Taxation Act (MRTA).
Oceania
Australia
In 2016, Australia legalised medicinal cannabis at the federal level. Since 1985, the Federal Government has run a declared war on drugs, and while Australia initially led the world in the 'harm-minimization' approach, it has since lagged. Australia has a number of political parties that focus on cannabis reform: the Help End Marijuana Prohibition (HEMP) Party was founded in 1993 and registered by the Australian Electoral Commission in 2000, and the Legalise Cannabis Queensland Party was established in 2020. A number of Australian and international groups have promoted reform in regard to 21st-century Australian drug policy. Organisations such as the Australian Parliamentary Group on Drug Law Reform, Responsible Choice, the Australian Drug Law Reform Foundation, NORML Australia, Law Enforcement Against Prohibition (LEAP) Australia and Drug Law Reform Australia advocate for drug law reform without the benefit of government funding. The membership of some of these organisations is diverse and consists of the general public, social workers, lawyers and doctors, and the Global Commission on Drug Policy has been a formative influence on a number of these organisations. In 1994, the Australian National Task Force on Cannabis, formed under the Ministerial Council on Drug Strategy, noted that the social harm of cannabis prohibition is greater than the harm from cannabis itself, that total prohibition policies have been unsuccessful in reducing drug use and have caused significant social harm as well as higher law enforcement costs, that the use of cannabis is widespread in Australia, and that its adverse health effects are modest and only affect a minority of users.
In 2012, the think tank Australia 21 released a report on the decriminalization of drugs in Australia. It noted that "by defining the personal use and possession of certain psychoactive drugs as criminal acts, governments have also avoided any responsibility to regulate and control the quality of substances that are in widespread use. Prohibition has fostered the development of a criminal industry that is corrupting civil society and government and killing our children." The report also highlighted the fact that, just as alcohol and tobacco are regulated for quality assurance, distribution, marketing and taxation, so should currently unregulated illicit drugs be. There have been a number of inquiries in Australia relating to cannabis and other illicit drugs. In 2019 the Queensland government instructed the Queensland Productivity Commission to conduct an inquiry into imprisonment and recidivism in Queensland; the final report was sent to the Queensland Government on 1 August 2019 and publicly released on 31 January 2020. The commission found that "all available evidence shows the war on drugs fails to restrict usage or supply" and that "decriminalisation would improve the lives of drug users without increasing the rate of drug use", with the commission ultimately recommending that the Queensland government legalise cannabis. The QPC said the system had also fuelled an illegal market, particularly for methamphetamine. The Palaszczuk-led Queensland Labor state government nevertheless rejected the recommendations of its own commission and said it had no plans to alter any laws around cannabis, a decision that received heavy scrutiny from supporters of decriminalization and legalisation, and from progressive and non-progressive drug policy advocates alike.
In 2019, the Royal Australasian College of Physicians (RACP) and St. Vincent's Health Australia called on the NSW Government to publicly release the findings of the Special Commission of Inquiry into the Drug 'Ice', saying there was "no excuse" for the delay. The report was the culmination of months of evidence from health and judicial experts, as well as families and communities affected by amphetamine-type substances across NSW. The report made 109 recommendations aimed at strengthening the NSW Government's response regarding amphetamine-based drugs such as crystal meth or ice. Major recommendations included more supervised drug use rooms, a prison needle and syringe exchange program, state-wide clinically supervised substance testing including mobile pill testing at festivals, decriminalisation of drugs for personal use, an end to the use of drug detection dogs at music festivals, and limits on the use of strip searches. The report also called for the NSW Government to adopt a comprehensive drug and alcohol policy, the last such policy having expired over a decade ago. The report's commissioner said the state's approach to drug use was profoundly flawed, said reform would require "political leadership and courage", and noted that "criminalising use and possession encourages us to stigmatise people who use drugs as the authors of their own misfortune". Mr Howard said current laws "allow us tacit permission to turn a blind eye to the factors driving most problematic drug use", including childhood abuse, domestic violence and mental illness. The NSW government rejected the report's key recommendations, saying it would consider the remaining recommendations. The director of the Drug Policy Modelling Program (DPMP) at UNSW Sydney's Social Policy Research Centre said the NSW Government had missed an opportunity to reform the state's response to drugs based on evidence. As of November 2020 the NSW Government had yet to officially respond to the inquiry; a statement was released by the government citing its intention to respond by the end of 2020.
In the Australian Capital Territory, after a bill was passed on 25 September 2019, new laws came into effect on 31 January 2020. While personal possession and cultivation of small amounts of cannabis remain prohibited for non-medicinal purposes in every other jurisdiction in Australia, the legislation allowed for possession of up to 50 grams of dry material, 150 grams of wet material, and cultivation of 2 plants per individual up to 4 plants per household, effectively legalising the possession and growing of cannabis in the ACT; however, the sale and supply of cannabis and cannabis seeds is still illegal, so the effects of the laws are limited, and the laws also contradict federal laws. It is also still illegal to smoke or use cannabis in a public place, expose a child or young person to cannabis smoke, store cannabis where children can reach it, grow cannabis using hydroponics or artificial cultivation, grow plants where they can be accessed by the public, share or give cannabis as a gift to another person, or drive with any cannabis in your system, and people aged under 18 may not grow, possess, or use cannabis.
New Zealand
On 18 December 2018, the Labour-led government announced a nationwide, binding referendum on the legality of cannabis for personal use, set to be held as part of the 2020 general election. This was a condition of the Green Party giving confidence and supply to the Government. On 7 May 2019, the government announced that the 2020 New Zealand cannabis referendum would be a yes/no question to enact a yet-to-be created piece of legislation. Despite the earlier commitment, the referendum was non-binding: the proposed Cannabis Legalisation and Control Bill would have needed to be introduced into Parliament and passed like any other piece of legislation, so the government was not in fact bound by the results of the referendum. Official results for the general election and referendums were released on 6 November 2020. 50.7% of voters were opposed to legalisation, 48.4% were in favour, and 0.9% of votes were declared informal.
Groups advocating change
The Senlis Council, a European development and policy think tank, has, since its inception in 2002, advocated that drug addiction should be viewed as a public health issue rather than a purely criminal matter. The group does not support the decriminalisation of illegal drugs. Since 2003, the council has called for the licensing of poppy cultivation in Afghanistan in order to manufacture poppy-based medicines, such as morphine and codeine, and to combat poverty in rural communities, breaking ties with the illicit drugs trade. The Senlis Council outlined proposals for the implementation of a village-based poppy-for-medicine project and called for a pilot project for Afghan morphine at the next planting season.
Organisations involved in lobbying, research and advocacy
Canada
Le Dain Commission of Inquiry into the Non-Medical Use of Drugs
Europe
Beckley Foundation
Cannabis Law Reform
Drug Equality Alliance (DEA)
European Coalition for Just and Effective Drug Policies (ENCOD) (Branches in Austria, Germany and Norway)
Legalize.net (Netherlands)
Schildower Kreis (Goethe University Frankfurt, Germany)
NORML UK
Re:Vision Drug Policy Network (United Kingdom)
Regulación Responsable (Spain)
Release (agency) (United Kingdom)
Students for Sensible Drug Policy UK (United Kingdom)
Transform Drug Policy Foundation
Australia
Australian National Council on Drugs
Drug Policy Australia
Network Against Prohibition
New Zealand
The Helen Clark Foundation
NORML New Zealand
The STAR Trust
United States
American Civil Liberties Union
Americans for Safe Access
Drug Policy Alliance
High Times
High Times Freedom Fighters
Law Enforcement Against Prohibition
Lindesmith Center
Marijuana Policy Project
MASS CANN/NORML
Multidisciplinary Association for Psychedelic Studies (MAPS)
National Organization for the Reform of Marijuana Laws
Students for Sensible Drug Policy
Veterans for Medical Marijuana Access
November Coalition (United States)
Women Grow
Political parties with drug liberalization policies
Many political parties support, to various degrees and for various reasons, liberalising drug control laws, from liberal parties to far-left movements, as well as some right-wing intellectuals. Drug liberalization is fundamental in the platforms of most Libertarian parties. There are also numerous single-issue marijuana parties devoted exclusively to campaigning for the legalisation of cannabis.
Australia
Australian Greens
Drug Law Reform Australia
Fusion Party
Legalise Cannabis Australia
Legalise Cannabis Queensland
Legalise Cannabis Western Australia Party
Reason Party
Canada
Liberal Party of Canada
New Democratic Party of Canada
Libertarian Party of Canada
Marijuana Party
Hungary
MKKP
Netherlands
GroenLinks
D66
New Zealand
Green Party of Aotearoa New Zealand
Portugal
Left Bloc
Liberal Initiative
LIVRE
United Kingdom
Green Party of England and Wales
Liberal Democrats – in March 2016, the Liberal Democrats became the first major political party in the United Kingdom to support the legalisation of cannabis.
International
Pirate Party
See also
Arguments for and against drug prohibition
Cannabis rights
Cannabis Social Club
Chasing the Scream
Civil libertarianism
Cognitive liberty
Gateway drug theory
Global Commission on Drug Policy
Harm reduction
Latin American Initiative on Drugs and Democracy
Left-libertarianism
Legality of cannabis by country
Psilocybin decriminalization in the United States
Recreational drug use
Responsible drug use
School district drug policies
Students for Sensible Drug Policy
Supervised injection site
The War We Never Fought
Trans-European Drug Information
Transform Drug Policy Foundation
U.S. Pure Food and Drug Act of 1906
World Federation Against Drugs
References
Further reading
Anderson, D. Mark, and Daniel I. Rees. 2023. "The Public Health Effects of Legalizing Marijuana." Journal of Economic Literature 61(1): 86–143.
International Coalition on Drug Policy Reform and Environmental Justice. 2023. "Revealing the missing link to Climate Justice: Drug Policy."
External links
Transform Drug Policy Foundation – A UK-based think-tank that works to develop systems for control and regulation that can be applied globally.
Law Enforcement Against Prohibition – Run by retired law enforcement professionals who oppose prohibition.
Voluntary Committee of Lawyers – a New York-based network of judges and lawyers opposed to current federal drug laws.
NORML (US National Organization for the Reform of Marijuana Laws) – a US wide network of activists seeking to liberalize cannabis legislation.
Re:Vision Drug Policy Network – an organisation for young people aged 16–25 campaigning against prohibition.
The Report of the Canadian Government Commission of Inquiry into the Non-Medical Use of Drugs – 1972 – The LeDain Commission Report
Drug Law Reform – a project of the Transnational Institute (TNI)
Draft Plan for Legalization from LIFE – an example of a policy formulation proposed for substance legalization
Count The Costs
Schaffer Library of Drug Policy
Worldwide Psychedelic Laws Tracker
Civil rights and liberties
Drug control law
Drug policy reform
Drug culture
fi:Huumeiden dekriminalisointi | Drug liberalization | [
"Chemistry"
] | 11,202 | [
"Drug control law",
"Regulation of chemicals"
] |
13,805,095 | https://en.wikipedia.org/wiki/Pubmeth | PubMeth is a database that contains information about DNA hypermethylation in cancer. It can be queried either by searching a list of genes, or cancer (sub)types.
It was created at the lab for bioinformatics and computational genomics in the Department of Molecular Biotechnology, Faculty of Bioscience Engineering at Ghent University, Belgium.
It was published in Nucleic Acids Research.
References
External links
Official website
Medical databases | Pubmeth | [
"Chemistry",
"Biology"
] | 89 | [
"Biochemistry stubs",
"Biotechnology stubs",
"Biochemistry"
] |
13,806,732 | https://en.wikipedia.org/wiki/Non-evaporable%20getter | Non evaporable getters (NEG), based on the principle of metallic surface sorption of gas molecules, are mostly porous alloys or powder mixtures of Al, Zr, Ti, V and Fe. They help to establish and maintain vacuums by soaking up or bonding to gas molecules that remain within a partial vacuum. This is done through the use of materials that readily form stable compounds with active gases. They are important tools for improving the performance of many vacuum systems. Sintered onto the inner surface of high vacuum vessels, the NEG coating can be applied even to spaces that are narrow and hard to pump out, which makes it very popular in particle accelerators where this is an issue. The main sorption parameters of the kind of NEGs, like pumping speed and sorption capacity, have low limits.
A different type of NEG, which is not coated, is the Tubegetter. The activation of these getters is accomplished mechanically or at temperatures from 550 K. The operating temperature range is 0 to 800 K under HV/UHV conditions.
The NEG acts as a getter or getter pump that is able to reduce the pressure to less than 10⁻¹² mbar.
See also
Ion pump (physics)
References
External links
Video: Non-Evaporable Getter (NEG) Operation
Folder: TubeGetter
Vacuum | Non-evaporable getter | [
"Physics",
"Chemistry"
] | 282 | [
"Alloy stubs",
"Alloys",
"Vacuum",
"Matter"
] |
13,808,875 | https://en.wikipedia.org/wiki/Afterdepolarization | Afterdepolarizations are abnormal depolarizations of cardiac myocytes that interrupt phase 2, phase 3, or phase 4 of the cardiac action potential in the electrical conduction system of the heart. Afterdepolarizations may lead to cardiac arrhythmias. Afterdepolarization is commonly a consequence of myocardial infarction, cardiac hypertrophy, or heart failure. It may also result from congenital mutations associated with calcium channels and sequestration.
Early afterdepolarizations
Early afterdepolarizations (EADs) occur with abnormal depolarization during phase 2 or phase 3, and are caused by an increase in the frequency of abortive action potentials before normal repolarization is completed. EADs most commonly originate in mid-myocardial cells and Purkinje fibers, but can develop in other cardiac cells that carry an action potential. Phase 2 may be interrupted due to augmented opening of calcium channels, while phase 3 interruptions are due to the opening of sodium channels. Early afterdepolarizations can result in torsades de pointes, tachycardia, and other arrhythmias. EADs can be triggered by hypokalemia and drugs that prolong the QT interval, including class Ia and III antiarrhythmic agents, as well as catecholamines.
Afterhyperpolarizations can also occur in cortical pyramidal neurons. There, they typically follow an action potential and are mediated by voltage gated sodium or chloride channels. This phenomenon requires potassium channels to close quickly to limit repolarization. It is responsible for the difference between regular spiking and intrinsically bursting pyramidal neurons.
Delayed afterdepolarizations
Delayed afterdepolarizations (DADs) begin during phase 4, after repolarization is completed but before another action potential would normally occur via the normal conduction systems of the heart. They are due to elevated cytosolic calcium concentrations, classically seen with digoxin toxicity. The overload of the sarcoplasmic reticulum may cause spontaneous Ca²⁺ release after repolarization, causing the released Ca²⁺ to exit the cell through the 3Na⁺/Ca²⁺ exchanger. This results in a net depolarizing current. The classic feature is bidirectional ventricular tachycardia, which is also seen in catecholaminergic polymorphic ventricular tachycardia (CPVT). Delayed afterdepolarizations are also seen in myocardial infarction. Purkinje fibers that survive myocardial infarction remain partially depolarized due to their high concentration of cations. Partially depolarized tissue fires rapidly, resulting in delayed afterdepolarizations.
References
Membrane biology
Cardiac electrophysiology | Afterdepolarization | [
"Chemistry"
] | 596 | [
"Membrane biology",
"Molecular biology"
] |
9,769,489 | https://en.wikipedia.org/wiki/Nuclear%20emulsion | A nuclear emulsion plate is a type of particle detector first used in nuclear and particle physics experiments in the early decades of the 20th century. It is a modified form of photographic plate that can be used to record and investigate fast charged particles like alpha-particles, nucleons, leptons or mesons. After exposing and developing the emulsion, single particle tracks can be observed and measured using a microscope.
Description
The nuclear emulsion plate is a modified form of photographic plate, coated with a thicker photographic emulsion of gelatine containing a higher concentration of very fine silver halide grains; the exact composition of the emulsion being optimised for particle detection.
It has the primary advantage of extremely high spatial precision and resolution, limited only by the size of the silver halide grains (sub micron); precision and resolution that surpass even the best of modern particle detectors (observe the scale in the image below, of K-meson decay). A stack of emulsion plates, effectively forming a block of emulsion, can record and preserve the interactions of particles so that their trajectories are recorded in 3-dimensional space as a trail of silver-halide grains, which can be viewed from any aspect on a microscopic scale. In addition, the emulsion plate is an integrating device that can be exposed or irradiated until the desired amount of data has been accumulated. It is compact, with no associated read-out cables or electronics, allowing the plates to be installed in very confined spaces and, compared to other detector technologies, is significantly less expensive to manufacture, operate and maintain. These features were decisive in enabling the high-altitude, mountain and balloon based studies of cosmic rays that led to the discovery of the pi-meson and parity violating charged K-meson decays; shedding light on the true nature and extent of the subnuclear "particle zoo", defining a milestone in the development of modern experimental particle physics.
The chief disadvantage of nuclear emulsion is that it is a dense and complex material (silver, bromine, carbon, nitrogen, oxygen) which potentially impedes the flight of particles to other detector components through multiple scattering and ionising energy loss. Finally, the development and scanning of large volumes of emulsion, to obtain useful, 3-dimensional digitised data, has in the past been a slow and labour intensive process. However, recent developments in automation of the process may overcome that drawback.
These disadvantages, coupled with the emergence of new particle detector and particle accelerator technologies, led to a decline in use of nuclear emulsion plates in particle physics towards the end of the 20th century. However there remains a continuing use of the method in the study of rare processes and in other branches of science, such as autoradiography in medicine and biology.
For a comprehensive and technically detailed account of the subject refer to the books by Barkas and by Powell, Fowler and Perkins. For an extensive review of the history and wider scientific context of the nuclear emulsion method, refer to the book by Galison.
History
Following the 1896 discovery of radioactivity by Henri Becquerel using photographic emulsion, Ernest Rutherford, working first at McGill University in Canada, then at the University of Manchester in England, was one of the first physicists to use that method to study in detail the radiation emitted by radioactive materials.
In 1905 he was using commercially available photographic plates to continue his research into the properties of the recently discovered alpha rays produced in the radioactive decay of some atomic nuclei.
This involved analysing the darkening of photographic plates caused by irradiation with the alpha rays. This darkening was enabled by the interaction of the many charged alpha particles, making up the rays, with silver halide grains in the photographic emulsion that were made visible by photographic development. Rutherford encouraged his research colleague at Manchester, Kinoshita Suekiti, to investigate in more detail the photographic action of the alpha-particles.
Kinoshita included in his objectives “to see whether a single 𝛂-particle produced a detectable photographic event”. His method was to expose the emulsion to radiation from a well measured radioactive source, for which the emission rate of 𝛂-particles was known. He used that knowledge and the relative proximity of the plate to the source, to compute the number of 𝛂-particles expected to traverse the plate. He compared that number with the number of developed halide grains he counted in the emulsion, taking careful account of 'background radiation' that produced additional 'non-alpha' grains in the exposure. He completed this research project in 1909, showing that it was possible “by preparing an emulsion film of very fine silver halide grains, and by using a microscope of high magnification, that the photographic method can be applied for counting 𝛂-particles with considerable accuracy”. This was the first time that the observation of individual charged particles by means of a photographic emulsion had been achieved. However, that was the detection of individual particle impacts, not the observation of a particle's extended trajectory. Soon after that, in 1911, Max Reinganum showed that the passage of an 𝛂-particle at glancing incidence through a photographic emulsion produced, when the emulsion was developed, a row of silver halide grains outlining the trajectory of the 𝛂-particle; the first recorded observation of an extended particle track in an emulsion.
The next steps would naturally have been to apply this technique to the detection and research of other particle types, including the Cosmic Rays newly discovered by Victor Hess in 1912. However, progress was halted by the onset of World War I in 1914. The outstanding issue of improving the particle detection performance of standard photographic emulsions, in order to detect other types of particle - protons, for example, produce about one quarter of the ionisation caused by an 𝛂-particle - was taken up again by various physical research laboratories in the 1920s.
In particular Marietta Blau, working at the Institute for Radium Research in Vienna, Austria, began in 1923 to investigate alternative types of photographic emulsion plates for detection of protons, known as “H-rays” at that time. She used a radioactive source of 𝛂-particles to irradiate paraffin wax, which has a high content of hydrogen. An 𝛂-particle may collide with a hydrogen nucleus (proton), knocking that proton out of the wax and into the photographic emulsion, where it produces a visible track of silver halide grains. After many trials, using different plates and careful shielding of the emulsion from unwanted radiation, she succeeded in making the first ever observation of proton tracks in a nuclear emulsion.
By an ingenious example of lateral thinking, she applied a similar method to make the first ever observation of the impact of neutrons in nuclear emulsion. Being electrically neutral the neutron cannot, of course, be directly detected in a photographic emulsion, but if it strikes a proton in the emulsion, that recoiling proton can be detected. She used this method to determine the energy spectrum of neutrons resulting from specific nuclear reaction processes. She developed a method to determine proton energies by measuring the exposed grain density along their tracks (fast minimum ionising particles interact with fewer grains than slow particles). To record the long tracks of fast protons more accurately, she enlisted British film manufacturer Ilford (now Ilford Photo) to thicken the emulsion on its commercial plates, and she experimented with other emulsion parameters — grain size, latent image retention, development conditions — to improve the visibility of alpha-particle and fast-proton tracks.
In 1937, Marietta Blau and her former student Hertha Wambacher discovered nuclear disintegration stars (Zertrümmerungsterne) due to spallation in nuclear emulsions that had been exposed to cosmic radiation at a height of 2300m on the Hafelekarspitze above Innsbruck. This discovery caused a sensation in the world of nuclear and cosmic ray physics, which brought the nuclear emulsion method to the attention of a wider audience. But the onset of political unrest in Austria and Germany, leading to World War II, brought a sudden halt to progress in that field of research for Marietta Blau.
In 1938 the German physicist Walter Heitler, who had escaped Germany as a scientific refugee to live and work in England, was at Bristol University researching a number of theoretical topics, including the formation of cosmic ray showers. He mentioned to Cecil Powell, at that time considering the use of cloud chambers for cosmic ray detection, that in 1937 the two Viennese physicists, Blau and Wambacher, had exposed photographic emulsions in the Austrian Alps and had seen the tracks of low energy protons as well as 'stars' or nuclear disintegrations caused by cosmic rays.
This intrigued Powell, who convinced Heitler to travel to Switzerland with a batch of Ilford half-tone emulsions and expose them on the Jungfraujoch at 3,500 m. In a letter to 'Nature' in August 1939, they were able to confirm the observations of Blau and Wambacher.
Although war brought a decisive halt to cosmic ray research in Europe between 1939 and 1945, in India Debendra Mohan Bose and Bibha Chowdhuri, working at the Bose Institute, Kolkata, undertook a series of high altitude mountain-top experiments using photographic emulsion to detect and analyse cosmic rays. These measurements were notable for the first ever detection of muons by the photographic method: Chowdhuri's painstaking analysis of the observed tracks’ properties, including exposed halide grain densities with range and multiple-scattering correlations, revealed the detected particles to have a mass about 200 times that of the electron - the same ‘mesotron’ (later 'mu-meson', now muon) discovered in 1936 by Anderson and Neddermeyer using a Cloud Chamber. Distance and circumstances denied Bose and Chowdhuri the relatively easy access to manufacturers of photographic plates available to Blau and, later, to Heitler, Powell et al. It meant that Bose and Chowdhuri had to use standard commercial half-tone emulsions, rather than nuclear emulsions specifically designed for particle detection, which makes the quality of their work even more remarkable.
Following on from those developments, after World War II, Powell and his research group at Bristol University collaborated with Ilford (now Ilford Photo), to further optimise emulsions for the detection of cosmic ray particles. Ilford produced a concentrated ‘nuclear-research’ emulsion containing eight times the normal amount of silver bromide per unit volume (see External Link to 'Nuclear emulsions by Ilford'). Powell's group first calibrated the new ‘nuclear-research’ emulsions using the University of Cambridge Cockcroft-Walton generator/accelerator, which provided artificial disintegration particles as probes to measure the required range-energy relations for charged particles in the new emulsion.
They subsequently used these emulsions to make two of the most significant discoveries in physics of the 20th century. First, in 1947 Cecil Powell, César Lattes, Giuseppe Occhialini and Hugh Muirhead (University of Bristol), using plates exposed to cosmic rays at the Pic du Midi Observatory in the Pyrenees and scanned by Irene Roberts and Marietta Kurz, discovered the charged Pi-meson.
Second, two years later in 1949, analysing plates exposed at the Sphinx Observatory on the Jungfraujoch in Switzerland, the first precise observations of the positive K-meson and its ‘strange’ decays were made by Rosemary Brown (now Rosemary Fowler), a research student in Cecil Powell's group at Bristol. The particle was then known as the ‘Tau meson’ in the Tau-theta puzzle, and precise measurement of these K-meson decay modes led to the introduction of the quantum concept of Strangeness and to the discovery of Parity violation in the weak interaction. Rosemary Brown called the striking four-track emulsion image, of one 'Tau' decaying to three charged pions, her "K track", thus effectively naming the newly discovered ‘strange’ K-meson. Cecil Powell was awarded the 1950 Nobel Prize in Physics "for his development of the photographic method of studying nuclear processes and his discoveries regarding mesons made with this method".
The emergence of new particle detector and particle accelerator technologies, coupled with the disadvantages noted in the introduction, led to a decline in use of Nuclear Emulsion plates in Particle Physics towards the end of the 20th century. However there remained a continuing use of the method in the study of rare interactions and decay processes.
More recently, searches for "Physics beyond the Standard Model", in particular the study of neutrinos and dark matter in their exceedingly rare interactions with normal matter, have led to a revival of the technique, including automation of emulsion image processing. Examples are the OPERA experiment, studying neutrino oscillations at the Gran Sasso Laboratory in Italy, and the FASER experiment at the CERN LHC, which will search for new, light and weakly interacting particles including dark photons.
Other applications
There exist a number of scientific and technical fields where the ability of nuclear emulsion to accurately record the position, direction and energy of electrically charged particles, or to integrate their effect, has found application. These applications in most cases involve the tracing of implanted radioactive markers by Autoradiography. Examples are:
Medical research
Biological research
Metallurgy
Reactive surface chemistry
Radiation protection
Muon tomography (Muography)
Archaeology.
References & Footnotes
External links
Nuclear emulsions by Ilford
Particle detectors
Nuclear physics | Nuclear emulsion | [
"Physics",
"Technology",
"Engineering"
] | 2,786 | [
"Particle detectors",
"Measuring instruments",
"Nuclear physics"
] |
9,769,502 | https://en.wikipedia.org/wiki/Localized%20molecular%20orbitals | Localized molecular orbitals are molecular orbitals which are concentrated in a limited spatial region of a molecule, such as a specific bond or lone pair on a specific atom. They can be used to relate molecular orbital calculations to simple bonding theories, and also to speed up post-Hartree–Fock electronic structure calculations by taking advantage of the local nature of electron correlation. Localized orbitals in systems with periodic boundary conditions are known as Wannier functions.
Standard ab initio quantum chemistry methods lead to delocalized orbitals that, in general, extend over an entire molecule and have the symmetry of the molecule. Localized orbitals may then be found as linear combinations of the delocalized orbitals, given by an appropriate unitary transformation.
In the water molecule for example, ab initio calculations show bonding character primarily in two molecular orbitals, each with electron density equally distributed among the two O-H bonds. The localized orbital corresponding to one O-H bond is the sum of these two delocalized orbitals, and the localized orbital for the other O-H bond is their difference, as in valence bond theory.
For multiple bonds and lone pairs, different localization procedures give different orbitals. The Boys and Edmiston-Ruedenberg localization methods mix these orbitals to give equivalent bent bonds in ethylene and rabbit ear lone pairs in water, while the Pipek-Mezey method preserves their respective σ and π symmetry.
Equivalence of localized and delocalized orbital descriptions
For molecules with a closed electron shell, in which each molecular orbital is doubly occupied, the localized and delocalized orbital descriptions are in fact equivalent and represent the same physical state. It might seem, again using the example of water, that placing two electrons in the first bond and two other electrons in the second bond is not the same as having four electrons free to move over both bonds. However, in quantum mechanics all electrons are identical and cannot be distinguished as same or other. The total wavefunction must have a form which satisfies the Pauli exclusion principle, such as a Slater determinant (or linear combination of Slater determinants), which changes sign when two electrons are exchanged; it can be shown that such a function is unchanged by any unitary transformation of the doubly occupied orbitals.
For molecules with an open electron shell, in which some molecular orbitals are singly occupied, the electrons of alpha and beta spin must be localized separately. This applies to radical species such as nitric oxide and dioxygen. Again, in this case the localized and delocalized orbital descriptions are equivalent and represent the same physical state.
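This equivalence can be illustrated numerically. The following Python sketch (an illustration added here, not part of the original discussion; it uses randomly generated orthonormal orbital coefficients rather than an actual water calculation) shows that an arbitrary unitary rotation among doubly occupied orbitals leaves the one-particle density matrix, and therefore the physical state described by the Slater determinant, unchanged.

import numpy as np

rng = np.random.default_rng(0)
n_basis, n_occ = 6, 2                      # e.g. two doubly occupied MOs in a small basis
C = np.linalg.qr(rng.normal(size=(n_basis, n_basis)))[0][:, :n_occ]   # orthonormal occupied MOs

# Density matrix built from the "delocalized" (canonical) orbitals
P_canonical = 2.0 * C @ C.T

# Mix the occupied orbitals by an arbitrary unitary (here a plane rotation)
theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
C_localized = C @ U

# The density matrix from the "localized" orbitals is identical
P_localized = 2.0 * C_localized @ C_localized.T
print(np.allclose(P_canonical, P_localized))   # True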
Computation methods
Localized molecular orbitals (LMO) are obtained by unitary transformation upon a set of canonical molecular orbitals (CMO). The transformation usually involves the optimization (either minimization or maximization) of the expectation value of a specific operator. The generic form of the localization potential is:
$\langle \hat{L} \rangle = \sum_{i=1}^{n} \langle \phi_i \phi_i | \hat{L} | \phi_i \phi_i \rangle$,
where $\hat{L}$ is the localization operator and $\phi_i$ is a molecular spatial orbital. Many methodologies have been developed during the past decades, differing in the form of $\hat{L}$.
The optimization of the objective function is usually performed using pairwise Jacobi rotations. However, this approach is prone to saddle point convergence (if it even converges), and thus other approaches have also been developed, from simple conjugate gradient methods with exact line searches, to Newton-Raphson and trust-region methods.
Foster-Boys
The Foster-Boys (also known as Boys) localization method minimizes the spatial extent of the orbitals by minimizing $\langle \hat{L}_{FB} \rangle = \sum_{i=1}^{n} \langle \phi_i | (\hat{\mathbf{r}} - \bar{\mathbf{r}}_i)^2 | \phi_i \rangle$, where $\bar{\mathbf{r}}_i = \langle \phi_i | \hat{\mathbf{r}} | \phi_i \rangle$. This turns out to be equivalent to the easier task of maximizing $\sum_{i > j} \left[ \bar{\mathbf{r}}_i - \bar{\mathbf{r}}_j \right]^2$. In one dimension, the Foster-Boys (FB) objective function can also be written as
$\langle \hat{L}_{FB} \rangle = \sum_{i=1}^{n} \langle \phi_i(1) \phi_i(2) | (\hat{x}_1 - \hat{x}_2)^2 | \phi_i(1) \phi_i(2) \rangle$.
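As a rough illustration of the pairwise Jacobi-rotation procedure mentioned above, the Python sketch below localizes a set of occupied orbitals by maximizing the sum of squared orbital centroids, which is equivalent to the Boys objective. It is a hedged, simplified example: the dipole-operator matrices 'dip' are random symmetric stand-ins rather than real integrals, and each two-orbital rotation angle is found by a brute-force scan instead of the closed-form angle used in production codes.

import numpy as np

def boys_objective(dip):
    # sum over orbitals of the squared centroid length |<phi_i|r|phi_i>|^2
    return sum(np.sum(np.diag(d) ** 2) for d in dip)

def rotation(n, i, j, a):
    # Givens rotation mixing orbitals i and j by angle a
    R = np.eye(n)
    R[i, i] = R[j, j] = np.cos(a)
    R[i, j], R[j, i] = -np.sin(a), np.sin(a)
    return R

def rotated(dip, i, j, a):
    # transform all three dipole matrices: D -> R^T D R
    R = rotation(dip.shape[1], i, j, a)
    return np.einsum('pi,kpq,qj->kij', R, dip, R)

def jacobi_localize(dip, sweeps=20, n_angles=360):
    n = dip.shape[1]
    U = np.eye(n)                                   # accumulated orbital rotation
    angles = np.linspace(0.0, np.pi / 2, n_angles, endpoint=False)
    for _ in range(sweeps):
        for i in range(n):
            for j in range(i + 1, n):
                best = max(angles, key=lambda a: boys_objective(rotated(dip, i, j, a)))
                dip = rotated(dip, i, j, best)
                U = U @ rotation(n, i, j, best)
    return U, dip

rng = np.random.default_rng(1)
dip = rng.normal(size=(3, 4, 4))
dip = (dip + dip.transpose(0, 2, 1)) / 2            # symmetric stand-in "dipole" matrices
U, dip_loc = jacobi_localize(dip)
print(boys_objective(dip), '->', boys_objective(dip_loc))   # objective increases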
Fourth moment
The fourth moment (FM) procedure is analogous to the Foster-Boys scheme; however, the orbital fourth moment is used instead of the orbital second moment. The objective function to be minimized is
$\langle \hat{L}_{FM} \rangle = \sum_{i=1}^{n} \langle \phi_i | (\hat{\mathbf{r}} - \bar{\mathbf{r}}_i)^4 | \phi_i \rangle$.
The fourth moment method produces more localized virtual orbitals than Foster-Boys method, since it implies a larger penalty on the delocalized tails. For graphene (a delocalized system), the fourth moment method produces more localized occupied orbitals than Foster-Boys and Pipek-Mezey schemes.
Edmiston-Ruedenberg
Edmiston-Ruedenberg localization maximizes the electronic self-repulsion energy by maximizing $\langle \hat{L}_{ER} \rangle = \sum_{i=1}^{n} \langle \phi_i \phi_i | \phi_i \phi_i \rangle$, where $\langle \phi_i \phi_i | \phi_i \phi_i \rangle = \iint |\phi_i(\mathbf{r}_1)|^2 \, \frac{1}{|\mathbf{r}_1 - \mathbf{r}_2|} \, |\phi_i(\mathbf{r}_2)|^2 \, d\mathbf{r}_1 \, d\mathbf{r}_2$.
Pipek-Mezey
Pipek-Mezey localization takes a slightly different approach, maximizing the sum of orbital-dependent partial charges on the nuclei:
$\langle \hat{L}_{PM} \rangle = \sum_{A=1}^{N_{\text{atoms}}} \sum_{i=1}^{n} \left| q_A^i \right|^2$.
Pipek and Mezey originally used Mulliken charges, which are mathematically ill defined. Recently, Pipek-Mezey style schemes based on a variety of mathematically well-defined partial charge estimates have been discussed. Some notable choices are Voronoi charges, Becke charges, Hirshfeld or Stockholder charges, intrinsic atomic orbital charges (see intrinsic bond orbitals), Bader charges, or "fuzzy atom" charges. Rather surprisingly, despite the wide variation in the (total) partial charges reproduced by the different estimates, analysis of the resulting Pipek-Mezey orbitals has shown that the localized orbitals are rather insensitive to the partial charge estimation scheme used in the localization process. However, given the ill-defined mathematical nature of Mulliken charges (and Löwdin charges, which have also been used in some works), and since better alternatives are nowadays available, it is advisable to use those alternatives in place of the original version.
The most important quality of the Pipek-Mezey scheme is that it preserves σ-π separation in planar systems, which sets it apart from the Foster-Boys and Edmiston-Ruedenberg schemes that mix σ and π bonds. This property holds independent of the partial charge estimate used.
While the usual formulation of the Pipek-Mezey method invokes an iterative procedure to localize the orbitals, a non-iterative method has also been recently suggested.
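To make the objective concrete, the sketch below evaluates the Pipek-Mezey sum of squared Mulliken orbital charges from an AO overlap matrix S, occupied MO coefficients C, and an AO-to-atom map. The inputs here are toy values chosen only so the snippet runs; this is an illustration of the functional, not code from any particular quantum chemistry package.

import numpy as np

def pipek_mezey_objective(C, S, ao_to_atom, n_atoms):
    # sum_A sum_i (Q_A^i)^2, with Q_A^i the Mulliken charge of orbital i on atom A
    ao_to_atom = np.asarray(ao_to_atom)
    value = 0.0
    for i in range(C.shape[1]):
        pop_per_ao = C[:, i] * (S @ C[:, i])        # Mulliken population of orbital i per AO
        for A in range(n_atoms):
            value += pop_per_ao[ao_to_atom == A].sum() ** 2
    return value

rng = np.random.default_rng(2)
S = np.eye(4)                                        # pretend-orthonormal AO basis
C = np.linalg.qr(rng.normal(size=(4, 4)))[0][:, :2]  # two occupied orthonormal MOs
print(pipek_mezey_objective(C, S, ao_to_atom=[0, 0, 1, 1], n_atoms=2))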
In organic chemistry
Organic chemistry is often discussed in terms of localized molecular orbitals in a qualitative and informal sense. Historically, much of classical organic chemistry was built on the older valence bond / orbital hybridization models of bonding. To account for phenomena like aromaticity, this simple model of bonding is supplemented by semi-quantitative results from Hückel molecular orbital theory. However, the understanding of stereoelectronic effects requires the analysis of interactions between donor and acceptor orbitals between two molecules or different regions within the same molecule, and molecular orbitals must be considered. Because proper (symmetry-adapted) molecular orbitals are fully delocalized and do not admit a ready correspondence with the "bonds" of the molecule, as visualized by the practicing chemist, the most common approach is to instead consider the interaction between filled and unfilled localized molecular orbitals that correspond to σ bonds, π bonds, lone pairs, and their unoccupied counterparts. These orbitals are typically given the notation σ (sigma bonding), π (pi bonding), n (occupied nonbonding orbital, "lone pair"), p (unoccupied nonbonding orbital, "empty p orbital"; the symbol n* for unoccupied nonbonding orbital is seldom used), π* (pi antibonding), and σ* (sigma antibonding). (Woodward and Hoffmann use ω for nonbonding orbitals in general, occupied or unoccupied.) When comparing localized molecular orbitals derived from the same atomic orbitals, these classes generally follow the order σ < π < n < p (n*) < π* < σ* when ranked by increasing energy.
The localized molecular orbitals that organic chemists often depict can be thought of as qualitative renderings of orbitals generated by the computational methods described above. However, they do not map onto any single approach, nor are they used consistently. For instance, the lone pairs of water are usually treated as two equivalent spx hybrid orbitals, while the corresponding "nonbonding" orbitals of carbenes are generally treated as a filled σ(out) orbital and an unfilled pure p orbital, even though the lone pairs of water could be described analogously by filled σ(out) and p orbitals (for further discussion, see the article on lone pair and the discussion above on sigma-pi and equivalent-orbital models). In other words, the type of localized orbital invoked depends on context and considerations of convenience and utility.
References
Quantum chemistry
Computational chemistry
Molecular physics | Localized molecular orbitals | [
"Physics",
"Chemistry"
] | 1,765 | [
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Theoretical chemistry",
"Computational chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
9,772,947 | https://en.wikipedia.org/wiki/Boiler%20feedwater | Boiler feedwater is the water which is supplied to a boiler. The feed water is put into the steam drum from a feed pump. In the steam drum the feed water is then turned into steam from the heat. After the steam is used, it is then dumped to the main condenser. From the condenser, it is then pumped to the deaerated feed tank. From this tank it then goes back to the steam drum to complete its cycle. The feedwater is never open to the atmosphere. This cycle is known as a closed system or Rankine cycle.
History of feedwater treatment
During the early development of boilers, water treatment was not so much of an issue, as temperatures and pressures were so low that high amounts of scale and rust would not form to such a significant extent, especially if the boiler was “blown down”. It was general practice to install zinc plates and/or alkaline chemicals to reduce corrosion within the boiler. Many tests had been performed to determine the cause (and possible protection) from corrosion in boilers using distilled water, various chemicals, and sacrificial metals. Silver nitrate can be added to feedwater samples to detect contamination by seawater. Use of lime for alkalinity control was mentioned as early as 1900, and was used by the French and British Navies until about 1935. In modern boilers, treatment of feedwater is critical, as problems result from using untreated water in extreme pressure and temperature environments. This includes lower efficiency in terms of heat transfer, overheating, damage, and costly cleaning.
Characteristics of boiler feedwater
Water has a higher heat capacity than most other substances. This quality makes it an ideal raw material for boiler operations. Boilers are part of a closed system as compared to open systems in a gas turbine. The closed system that is used is the Rankine cycle. This means that the water is recirculated throughout the system and is never in contact with the atmosphere. The water is reused and needs to be treated to continue efficient operations. Boiler water must be treated in order to produce steam efficiently. Boiler water is treated to prevent scaling, corrosion, foaming, and priming. Chemicals are put into boiler water through the chemical feed tank to keep the water within chemical range. These chemicals are mostly oxygen scavengers and phosphates. The boiler water also has frequent blowdowns in order to keep the chloride content down. The boiler operations also include bottom blows in order to get rid of solids. Scale consists of impurities that precipitate out of the water and then form on heat transfer surfaces. This is a problem because scale does not transfer heat very well and causes the tubes to fail by getting too hot. Corrosion is caused by oxygen in the water. The oxygen causes the metal to oxidize which lowers the melting point of the metal. Foaming and priming are caused when the boiler water does not have the correct amount of chemicals and there are suspended solids in the water which carry over in the dry pipe. The dry pipe is where the steam and water mixtures are separated.
Boiler feedwater treatment
Boiler water treatment is used to control alkalinity, prevent scaling, correct pH, and to control conductivity. The boiler water needs to be alkaline and not acidic, so that it does not ruin the tubes. There can be too much conductivity in the feed water when there are too many dissolved solids. These correct treatments can be controlled by efficient operator and use of treatment chemicals. The main objectives to treat and condition boiler water is to exchange heat without scaling, protect against scaling, and produce high quality steam. The treatment of boiler water can be put into two parts. These are internal treatment and external treatment. (Sendelbach, p. 131) The internal treatment is for boiler feed water and external treatment is for make-up feed water and the condensate part of the system. Internal treatment protects against feed water hardness by preventing precipitating of scale on the boiler tubes. This treatment also protects against concentrations of dissolved and suspended solids in the feed water without priming or foaming. These treatment chemicals also help with the alkalinity of the feed water making it more of a base to help protect against boiler corrosion. The correct alkalinity is protected by adding phosphates. These phosphates precipitate the solids to the bottom of the boiler drum. At the bottom of the boiler drum there is a bottom blow to remove these solids. These chemicals also include anti-scaling agents, oxygen scavengers, and anti-foaming agents. Sludge can also be treated by two approaches. These are by coagulation and dispersion. When there is a high amount of sludge content it is better to coagulate the sludge to form large particles in order to just use the bottom blow to remove them from the feed water. When there is a low amount of sludge content it is better to use dispersants because it disperses the sludge throughout the feed water so sludge does not form.
Deaeration of feed water
Oxygen and carbon dioxide are removed from the feed water by deaeration. Deaeration can be accomplished by using deaerator heaters, vacuum deaerators, mechanical pumps, and steam-jet ejectors. In deaerating heaters, steam sprays incoming feed water and carries away the dissolved gases. The deaerators also store hot feed water which is ready to be used in the boiler. This means of mechanical deaeration is used with chemical oxygen scavenging agents to increase efficiency. (Sendelbach, p. 129) Deaerating heaters can be classified in two groups: spray types and tray types. With tray type heaters the incoming water is sprayed into steam atmosphere to reach saturation temperature. When the saturation temperature is reached most of the oxygen and non-condensable gases are released. There are seals that prevent the recontamination of the water in the spray section. The water then falls to the storage tank below. The non-condensables and oxygen are then vented to the atmosphere. The components of the tray type deaerating heater are a shell, spray nozzles, a direct contact vent condenser, tray stacks, and protective interchamber walls. The spray type deaerator is similar to the tray type deaerator. The water is sprayed into a steam atmosphere and most of the oxygen and non-condensables are released to the steam. The water then falls to the steam scrubber where the slight pressure loss causes the water to flash a little bit which also aids the removal of oxygen and non-condensables. The water then overflows to the storage tank. The gases are then vented to the atmosphere. With vacuum deaeration a vacuum is applied to the system and water is then brought to its saturation temperature. The water is sprayed into the tank like the spray and tray deaerators. The oxygen and non-condensables are vented to the atmosphere. (Sendelbach, p. 130)
Conditioning
The feedwater must be specially treated to avoid problems in the boiler and downstream systems. Untreated boiler feed water can cause corrosion and fouling.
Boiler corrosion
Corrosive compounds, especially O2 and CO2 must be removed, usually by use of a deaerator. Residual amounts can be removed chemically, by use of oxygen scavengers. Additionally, feed water is typically alkalized to a pH of 9.0 or higher, to reduce oxidation and to support the formation of a stable layer of magnetite on the water-side surface of the boiler, protecting the material underneath from further corrosion. This is usually done by dosing alkaline agents into the feed water, such as sodium hydroxide (caustic soda) or ammonia. Corrosion in boilers is due to the presence of dissolved oxygen, dissolved carbon dioxide, or dissolved salts.
Fouling
Deposits reduce the heat transfer in the boiler, reduce the flow rate and eventually block boiler tubes. Any non-volatile salts and minerals that will remain when the feedwater is evaporated must be removed, because they will become concentrated in the liquid phase and require excessive "blow-down" (draining) to prevent the formation of solid precipitates. Even worse are minerals that form scale. Therefore, the make-up water added to replace any losses of feedwater must be demineralized/deionized water, unless a purge valve is used to remove dissolved minerals.
Caustic embrittlement
Priming and foaming
Locomotive boilers
Steam locomotives usually do not have condensers so the feedwater is not recycled and water consumption is high. The use of deionized water would be prohibitively expensive so other types of water treatment are used. Chemicals employed typically include sodium carbonate, sodium bisulfite, tannin, phosphate and an anti-foaming agent.
Treatment systems have included:
Alfloc, developed by British Railways and Imperial Chemical Industries
Traitement Integral Armand (TIA), developed by Louis Armand
Porta Treatment, developed by Livio Dante Porta
See also
Boiler feedwater pump
Evaporator
Helamin
References
Shun'an, C., Qing, Z., & Zhixin, Z. (2008). A study of the influence of chloride ion concentration on the corrosion behavior of carbon steel in phosphate high-temperature boiler water chemistries. Anti-Corrosion Methods and Materials, 55(1), 15–19.
Sendelbach, M. (1988). Boiler-water treatment: Why, what and how. Chemical Engineering, 95(11), 127.
Characteristics of boiler feed water. (n.d.). Retrieved March 21, 2015, from http://www.lenntech.com/applications/process/boiler/boiler-feedwater-characteristics.htm
External links
Boiler Feedwater System Configuration
Power station technology
Boilers
Chemical process engineering
Steam locomotive technologies | Boiler feedwater | [
"Chemistry",
"Engineering"
] | 2,041 | [
"Chemical process engineering",
"Chemical engineering",
"Boilers",
"Pressure vessels"
] |
9,773,583 | https://en.wikipedia.org/wiki/Hormone%20antagonist | For the use of hormone antagonists in cancer, see hormonal therapy (oncology)
A hormone antagonist is a specific type of receptor antagonist which acts upon hormone receptors. Such pharmaceutical drugs are used in antihormone therapy.
External links
Hormonal agents
Receptor antagonists | Hormone antagonist | [
"Chemistry",
"Biology"
] | 58 | [
"Biotechnology stubs",
"Biochemistry stubs",
"Receptor antagonists",
"Biochemistry",
"Neurochemistry"
] |
9,773,858 | https://en.wikipedia.org/wiki/Bisulfite%20sequencing | Bisulfite sequencing (also known as bisulphite sequencing) is the use of bisulfite treatment of DNA before routine sequencing to determine the pattern of methylation. DNA methylation was the first discovered epigenetic mark, and remains the most studied. In animals it predominantly involves the addition of a methyl group to the carbon-5 position of cytosine residues of the dinucleotide CpG, and is implicated in repression of transcriptional activity.
Treatment of DNA with bisulfite converts cytosine residues to uracil, but leaves 5-methylcytosine residues unaffected. Therefore, DNA that has been treated with bisulfite retains only methylated cytosines. Thus, bisulfite treatment introduces specific changes in the DNA sequence that depend on the methylation status of individual cytosine residues, yielding single-nucleotide resolution information about the methylation status of a segment of DNA. Various analyses can be performed on the altered sequence to retrieve this information. The objective of this analysis is therefore reduced to differentiating between single nucleotide polymorphisms (cytosine and thymine) resulting from bisulfite conversion (Figure 1).
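The effect of the treatment can be mimicked in silico. The short sketch below (an illustrative toy, not a published tool) converts every cytosine in a sequence to thymine, as it would read after PCR amplification of bisulfite-treated DNA, unless that position is listed as methylated.

def bisulfite_convert(seq, methylated_positions):
    # Convert C -> T (via U) except at methylated cytosines (0-based positions)
    out = []
    for pos, base in enumerate(seq.upper()):
        if base == 'C' and pos not in methylated_positions:
            out.append('T')        # unmethylated C reads as T after conversion and PCR
        else:
            out.append(base)       # 5-methylcytosine is protected and stays C
    return ''.join(out)

print(bisulfite_convert("ACGTCGCCG", methylated_positions={1}))   # 'ACGTTGTTG'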
Methods
Bisulfite sequencing applies routine sequencing methods on bisulfite-treated genomic DNA to determine methylation status at CpG dinucleotides. Other non-sequencing strategies are also employed to interrogate the methylation at specific loci or at a genome-wide level. All strategies assume that bisulfite-induced conversion of unmethylated cytosines to uracil is complete, and this serves as the basis of all subsequent techniques. Ideally, the method used would determine the methylation status separately for each allele. Alternative methods to bisulfite sequencing include Combined Bisulphite Restriction Analysis and methylated DNA immunoprecipitation (MeDIP).
Methodologies to analyze bisulfite-treated DNA are continuously being developed. To summarize these rapidly evolving methodologies, numerous review articles have been written.
The methodologies can be generally divided into strategies based on methylation-specific PCR (MSP) (Figure 4), and strategies employing polymerase chain reaction (PCR) performed under non-methylation-specific conditions (Figure 3). Microarray-based methods use PCR based on non-methylation-specific conditions also.
Non-methylation-specific PCR based methods
Direct sequencing
The first reported method of methylation analysis using bisulfite-treated DNA utilized PCR and standard dideoxynucleotide DNA sequencing to directly determine the nucleotides resistant to bisulfite conversion. Primers are designed to be strand-specific as well as bisulfite-specific (i.e., primers containing non-CpG cytosines such that they are not complementary to non-bisulfite-treated DNA), flanking (but not involving) the methylation site of interest. Therefore, it will amplify both methylated and unmethylated sequences, in contrast to methylation-specific PCR. All sites of unmethylated cytosines are displayed as thymines in the resulting amplified sequence of the sense strand, and as adenines in the amplified antisense strand. By incorporating high throughput sequencing adaptors into the PCR primers, PCR products can be sequenced with massively parallel sequencing. Alternatively, and labour-intensively, PCR product can be cloned and sequenced. Nested PCR methods can be used to enhance the product for sequencing.
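A minimal sketch of the resulting methylation call on the sense strand is shown below: given a reference sequence and an already aligned bisulfite-converted read of the same length (both made up for illustration), each reference CpG that still reads C is called methylated and each that reads T is called unmethylated. Real pipelines additionally handle alignment, strand, base quality and incomplete conversion.

def call_cpg_methylation(reference, bisulfite_read):
    # Assumes the read is already aligned to the reference, same coordinates
    assert len(reference) == len(bisulfite_read)
    calls = {}
    for pos in range(len(reference) - 1):
        if reference[pos:pos + 2].upper() == "CG":    # a CpG site in the reference
            base = bisulfite_read[pos].upper()
            if base == "C":
                calls[pos] = "methylated"
            elif base == "T":
                calls[pos] = "unmethylated"
            else:
                calls[pos] = "ambiguous"              # sequencing error, SNP, etc.
    return calls

print(call_cpg_methylation("ACGTTCGA", "ACGTTTGA"))   # {1: 'methylated', 5: 'unmethylated'}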
All subsequent DNA methylation analysis techniques using bisulfite-treated DNA is based on this report by Frommer et al. (Figure 2). Although most other modalities are not true sequencing-based techniques, the term "bisulfite sequencing" is often used to describe bisulfite-conversion DNA methylation analysis techniques in general.
Pyrosequencing
Pyrosequencing has also been used to analyze bisulfite-treated DNA without using methylation-specific PCR. Following PCR amplification of the region of interest, pyrosequencing is used to determine the bisulfite-converted sequence of specific CpG sites in the region. The ratio of C-to-T at individual sites can be determined quantitatively based on the amount of C and T incorporation during the sequence extension. The main limitation of this method is the cost of the technology. However, pyrosequencing does lend itself well to extension to high-throughput screening methods.
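A minimal sketch of this quantitation step, using made-up C and T peak intensities, is given below: the methylation level at each interrogated CpG is simply the C signal divided by the combined C and T signal.

def methylation_fraction(c_signal, t_signal):
    # fraction of methylation at one CpG from relative C vs T incorporation
    total = c_signal + t_signal
    return c_signal / total if total else float("nan")

site_signals = {"CpG_1": (85.0, 15.0), "CpG_2": (40.0, 60.0)}   # (C, T) peak intensities
for site, (c, t) in site_signals.items():
    print(site, f"{100 * methylation_fraction(c, t):.1f}% methylated")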
A variant of this technique, described by Wong et al., uses allele-specific primers that incorporate single-nucleotide polymorphisms into the sequence of the sequencing primer, thus allowing for separate analysis of maternal and paternal alleles. This technique is of particular usefulness for genomic imprinting analysis.
Methylation-sensitive single-strand conformation analysis (MS-SSCA)
This method is based on the single-strand conformation polymorphism analysis (SSCA) method developed for single-nucleotide polymorphism (SNP) analysis. SSCA differentiates between single-stranded DNA fragments of identical size but distinct sequence based on differential migration in non-denaturating electrophoresis. In MS-SSCA, this is used to distinguish between bisulfite-treated, PCR-amplified regions containing the CpG sites of interest. Although SSCA lacks sensitivity when only a single nucleotide difference is present, bisulfite treatment frequently makes a number of C-to-T conversions in most regions of interest, and the resulting sensitivity approaches 100%. MS-SSCA also provides semi-quantitative analysis of the degree of DNA methylation based on the ratio of band intensities. However, this method is designed to assess all CpG sites as a whole in the region of interest rather than individual methylation sites.
High resolution melting analysis (HRM)
A further method to differentiate converted from unconverted bisulfite-treated DNA is using high-resolution melting analysis (HRM), a quantitative PCR-based technique initially designed to distinguish SNPs. The PCR amplicons are analyzed directly by temperature ramping and resulting liberation of an intercalating fluorescent dye during melting. The degree of methylation, as represented by the C-to-T content in the amplicon, determines the rapidity of melting and consequent release of the dye. This method allows direct quantitation in a single-tube assay, but assesses methylation in the amplified region as a whole rather than at specific CpG sites.
Methylation-sensitive single-nucleotide primer extension (MS-SnuPE)
MS-SnuPE employs the primer extension method initially designed for analyzing single-nucleotide polymorphisms. DNA is bisulfite-converted, and bisulfite-specific primers are annealed to the sequence up to the base pair immediately before the CpG of interest. The primer is allowed to extend one base pair into the C (or T) using DNA polymerase terminating dideoxynucleotides, and the ratio of C to T is determined quantitatively.
A number of methods can be used to determine this C:T ratio. At the beginning, MS-SnuPE relied on radioactive ddNTPs as the reporter of the primer extension. Fluorescence-based methods or Pyrosequencing can also be used. However, matrix-assisted laser desorption ionization/time-of-flight (MALDI-TOF) mass spectrometry analysis to differentiate between the two polymorphic primer extension products can be used, in essence, based on the GOOD assay designed for SNP genotyping. Ion pair reverse-phase high-performance liquid chromatography (IP-RP-HPLC) has also been used to distinguish primer extension products.
Base-specific cleavage/MALDI-TOF
A recently described method by Ehrich et al. further takes advantage of bisulfite conversion by adding a base-specific cleavage step to enhance the information gained from the nucleotide changes. By first using in vitro transcription of the region of interest into RNA (by adding an RNA polymerase promoter site to the PCR primer in the initial amplification), RNase A can be used to cleave the RNA transcript at base-specific sites. As RNase A cleaves RNA specifically at cytosine and uracil ribonucleotides, base-specificity is achieved by incorporating cleavage-resistant dTTP when cytosine-specific (C-specific) cleavage is desired, and incorporating dCTP when uracil-specific (U-specific) cleavage is desired. The cleaved fragments can then be analyzed by MALDI-TOF. Bisulfite treatment results in either introduction/removal of cleavage sites by C-to-U conversions or shift in fragment mass by G-to-A conversions in the amplified reverse strand. C-specific cleavage will cut specifically at all methylated CpG sites. By analyzing the sizes of the resulting fragments, it is possible to determine the specific pattern of DNA methylation of CpG sites within the region, rather than determining the extent of methylation of the region as a whole. This method demonstrated efficacy for high-throughput screening, allowing for interrogation of numerous CpG sites in multiple tissues in a cost-efficient manner.
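The principle of the C-specific readout can be caricatured in a few lines of code: after bisulfite conversion, the only cytosines left on the converted strand are the methylated ones, so cutting after every remaining C produces a fragment-length pattern that depends on the methylation pattern. The sketch below illustrates only that idea, deliberately ignoring transcription, the reverse strand and the mass-spectrometric readout.

def c_specific_fragments(converted_seq):
    # Cut 3' of every remaining C and return the fragment lengths
    fragments, current = [], 0
    for base in converted_seq.upper():
        current += 1
        if base == "C":
            fragments.append(current)
            current = 0
    if current:
        fragments.append(current)
    return fragments

# Same region, two different methylation patterns -> different fragment sizes
print(c_specific_fragments("TTCGATTGTT"))   # [3, 7]  (one methylated CpG)
print(c_specific_fragments("TTTGATTGTT"))   # [10]    (no methylation)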
Methylation-specific PCR (MSP)
This alternative method of methylation analysis also uses bisulfite-treated DNA but avoids the need to sequence the area of interest. Instead, primer pairs are themselves designed to be "methylated-specific" by including sequences complementing only unconverted 5-methylcytosines, or, conversely, "unmethylated-specific", complementing thymines converted from unmethylated cytosines. Methylation is determined by the ability of the specific primer to achieve amplification. This method is particularly useful for interrogating CpG islands with possibly high methylation density, as increased numbers of CpG pairs in the primer increase the specificity of the assay. Placing the CpG pair at the 3'-end of the primer also improves the sensitivity. The initial report using MSP described sufficient sensitivity to detect methylation of 0.1% of alleles. In general, MSP and its related protocols are considered to be the most sensitive when interrogating the methylation status at a specific locus.
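The primer-design logic rests on the two reference sequences that bisulfite conversion produces from the same region, depending on its methylation state. The sketch below (a simplified top-strand model with a hypothetical sequence; real primer design would also weigh length, melting temperature and 3'-CpG placement) generates both converted templates from which methylated- and unmethylated-specific primers would be drawn.

```python
# Minimal sketch: in-silico bisulfite conversion of the top strand. Rule of
# thumb: every unmethylated C becomes T, while a methylated C in a CpG context
# is protected and stays C. The input sequence is hypothetical.

def bisulfite_convert(seq: str, methylated_cpgs: bool) -> str:
    """Convert the top strand of `seq`; CpG cytosines stay C only if methylated."""
    seq = seq.upper()
    out = []
    for i, base in enumerate(seq):
        if base != "C":
            out.append(base)
        elif methylated_cpgs and i + 1 < len(seq) and seq[i + 1] == "G":
            out.append("C")           # methylated CpG: protected from conversion
        else:
            out.append("T")           # unmethylated (or non-CpG) cytosine: C -> T
    return "".join(out)

region = "TTCGACCGTACCGTT"
print("methylated template:  ", bisulfite_convert(region, methylated_cpgs=True))
print("unmethylated template:", bisulfite_convert(region, methylated_cpgs=False))
```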
The MethyLight method is based on MSP, but provides a quantitative analysis using quantitative PCR. Methylated-specific primers are used, along with a methylated-specific fluorescence reporter probe that anneals to the amplified region. Alternatively, the primers or probe can be designed without methylation specificity if discrimination is needed between the CpG pairs within the involved sequences. Quantitation is made in reference to a methylated reference DNA. A modification to this protocol to increase the specificity of the PCR for successfully bisulfite-converted DNA (ConLight-MSP) uses an additional probe to bisulfite-unconverted DNA to quantify this non-specific amplification.
A further method analyzes MSP-amplified DNA by melting curve analysis (Mc-MSP). This method amplifies bisulfite-converted DNA with both methylated-specific and unmethylated-specific primers, and determines the quantitative ratio of the two products by comparing the differential peaks generated in a melting curve analysis. A high-resolution melting method that combines quantitative PCR and melting analysis has also been introduced, in particular for sensitive detection of low-level methylation.
Microarray-based methods
Microarray-based methods are a logical extension of the technologies available to analyze bisulfite-treated DNA to allow for genome-wide analysis of methylation. Oligonucleotide microarrays are designed using pairs of oligonucleotide hybridization probes targeting CpG sites of interest. One is complementary to the unaltered methylated sequence, and the other is complementary to the C-to-U-converted unmethylated sequence. The probes are also bisulfite-specific to prevent binding to DNA incompletely converted by bisulfite. The Illumina Methylation Assay is one such assay that applies the bisulfite sequencing technology on a microarray level to generate genome-wide methylation data.
Limitations
5-Hydroxymethylcytosine
Bisulfite sequencing is used widely across mammalian genomes; however, complications have arisen with the discovery of a new mammalian DNA modification, 5-hydroxymethylcytosine. 5-Hydroxymethylcytosine converts to cytosine-5-methylsulfonate upon bisulfite treatment, which then reads as a C when sequenced. Therefore, bisulfite sequencing cannot discriminate between 5-methylcytosine and 5-hydroxymethylcytosine. This means that the output from bisulfite sequencing can no longer be defined as solely DNA methylation, as it is the composite of 5-methylcytosine and 5-hydroxymethylcytosine.
Incomplete conversion
Bisulfite sequencing relies on the conversion of every single unmethylated cytosine residue to uracil. If conversion is incomplete, the subsequent analysis will incorrectly interpret the unconverted unmethylated cytosines as methylated cytosines, resulting in false positive results for methylation. Only cytosines in single-stranded DNA are susceptible to attack by bisulfite; therefore, denaturation of the DNA undergoing analysis is critical. It is important to ensure that reaction parameters such as temperature and salt concentration are suitable to maintain the DNA in a single-stranded conformation and allow for complete conversion. Embedding the DNA in agarose gel has been reported to improve the rate of conversion by keeping strands of DNA physically separate. Incomplete conversion rates can be estimated and adjusted for after sequencing by including an internal control in the sequencing library, such as lambda phage DNA (which is known to be unmethylated) or by aligning bisulfite sequencing reads to a known unmethylated region in the organism, such as the chloroplast genome.
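A common way to quantify incomplete conversion is from such an unmethylated internal control; the sketch below (with hypothetical counts) computes the conversion efficiency and the corresponding false-positive methylation rate.

```python
# Minimal sketch: estimating bisulfite conversion efficiency from reads aligned
# to an unmethylated control (e.g. spiked-in lambda phage DNA). Every cytosine in
# the control should read as T; any C call there reflects incomplete conversion.
# Counts are hypothetical; real pipelines take them from the methylation caller.

def conversion_rate(converted_c: int, unconverted_c: int) -> float:
    """Fraction of control cytosines successfully converted (C -> T)."""
    total = converted_c + unconverted_c
    return converted_c / total if total else float("nan")

lambda_c_as_t = 199_534     # control cytosines read as T (converted)
lambda_c_as_c = 466         # control cytosines still read as C (unconverted)
rate = conversion_rate(lambda_c_as_t, lambda_c_as_c)
print(f"conversion efficiency: {100 * rate:.2f}%")
print(f"expected false-positive methylation from incomplete conversion: {100 * (1 - rate):.2f}%")
```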
Degradation of DNA during bisulfite treatment
A major challenge in bisulfite sequencing is the degradation of DNA that takes place concurrently with the conversion. The conditions necessary for complete conversion, such as long incubation times, elevated temperature, and high bisulfite concentration, can lead to the degradation of about 90% of the incubated DNA. Given that the starting amount of DNA is often limited, such extensive degradation can be problematic. The degradation occurs as depurinations resulting in random strand breaks. Therefore, the longer the desired PCR amplicon, the more limited the number of intact template molecules will likely be. This could lead to the failure of the PCR amplification, or the loss of quantitatively accurate information on methylation levels resulting from the limited sampling of template molecules. Thus, it is important to assess the amount of DNA degradation resulting from the reaction conditions employed, and consider how this will affect the desired amplicon. Techniques can also be used to minimize DNA degradation, such as cycling the incubation temperature.
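A rough way to reason about how degradation limits amplicon length is to treat strand breaks as random, independent events; under that simple Poisson assumption (an illustrative model, not a claim from the original text, with a hypothetical break density), the fraction of templates that remain intact falls off exponentially with amplicon length.

```python
# Minimal sketch: a simple Poisson model of how random, bisulfite-induced strand
# breaks limit the number of intact templates for a PCR amplicon. If breaks occur
# independently at an average density of `breaks_per_kb`, the chance that a
# stretch of `amplicon_bp` bases is unbroken is exp(-rate * length), so longer
# amplicons sample far fewer intact molecules. All numbers are hypothetical.

import math

def intact_fraction(amplicon_bp: int, breaks_per_kb: float) -> float:
    return math.exp(-breaks_per_kb / 1000.0 * amplicon_bp)

breaks_per_kb = 5.0          # assumed average break density after conversion
input_copies = 10_000        # assumed number of template molecules entering PCR
for amplicon_bp in (100, 300, 600, 1000):
    frac = intact_fraction(amplicon_bp, breaks_per_kb)
    print(f"{amplicon_bp:>5} bp amplicon: {frac:5.1%} intact "
          f"(~{int(frac * input_copies)} usable templates)")
```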
In 2020, New England Biolabs developed NEBNext Enzymatic Methyl-seq, an alternative enzymatic approach to minimize DNA damage. Instead of bisulfite, APOBEC is used to convert C into U. Distinction between C, 5mC, and 5hmC is achieved by further enzymatic modifications that "protect" the modified bases from APOBEC.
Other concerns
A potentially significant problem following bisulfite treatment is incomplete desulfonation of pyrimidine residues due to inadequate alkalization of the solution. This may inhibit some DNA polymerases, rendering subsequent PCR difficult. However, this situation can be avoided by monitoring the pH of the solution to ensure that desulfonation will be complete.
A final concern is that bisulfite treatment greatly reduces the level of complexity in the sample, which can be problematic if multiple PCR reactions are to be performed. Primer design is more difficult, and inappropriate cross-hybridization is more frequent.
Applications: genome-wide methylation analysis
The advances in bisulfite sequencing have led to the possibility of applying them at a genome-wide scale, where, previously, global measure of DNA methylation was feasible only using other techniques, such as Restriction landmark genomic scanning. The mapping of the human epigenome is seen by many scientists as the logical follow-up to the completion of the Human Genome Project. This epigenomic information will be important in understanding how the function of the genetic sequence is implemented and regulated. Since the epigenome is less stable than the genome, it is thought to be important in gene-environment interactions.
Epigenomic mapping is inherently more complex than genome sequencing, however, since the epigenome is much more variable than the genome. One's epigenome varies with age, differs between tissues, is altered by environmental factors, and shows aberrations in diseases. Such rich epigenomic mapping, however, representing different ages, tissue types, and disease states, would yield valuable information on the normal function of epigenetic marks as well as the mechanisms leading to aging and disease.
Direct benefits of epigenomic mapping include probable advances in cloning technology. It is believed that failures to produce cloned animals with normal viability and lifespan result from inappropriate patterns of epigenetic marks. Also, aberrant methylation patterns are well characterized in many cancers. Global hypomethylation results in decreased genomic stability, while local hypermethylation of tumour suppressor gene promoters often accounts for their loss of function. Specific patterns of methylation are indicative of specific cancer types, have prognostic value, and can help to guide the best course of treatment.
Large-scale epigenome mapping efforts are under way around the world and have been organized under the Human Epigenome Project. This is based on a multi-tiered strategy, whereby bisulfite sequencing is used to obtain high-resolution methylation profiles for a limited number of reference epigenomes, while less thorough analysis is performed on a wider spectrum of samples. This approach is intended to maximize the insight gained from a given amount of resources, as high-resolution genome-wide mapping remains a costly undertaking.
Gene-set analysis (for example using tools like DAVID and GoSeq) has been shown to be severely biased when applied to high-throughput methylation data (e.g. genome-wide bisulfite sequencing); it has been suggested that this can be corrected using sample label permutations or using a statistical model to control for differences in the numbers of CpG probes/CpG sites that target each gene.
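One of the suggested corrections, sample-label permutation, can be sketched as follows (hypothetical data and a deliberately simple gene-set score; real analyses use dedicated packages). Because each permutation preserves which CpG sites map to the gene set, a gene set covered by many sites is no more likely to appear significant by chance, which addresses the bias described above.

```python
# Minimal sketch: a permutation null distribution for a gene-set score in a
# methylation study. Case/control labels are shuffled while the gene-to-CpG-site
# structure stays fixed. All data and the scoring function are placeholders.

import numpy as np

rng = np.random.default_rng(0)
n_cases, n_controls, n_sites = 10, 10, 500
beta = rng.uniform(0, 1, size=(n_cases + n_controls, n_sites))   # methylation levels
labels = np.array([1] * n_cases + [0] * n_controls)
gene_set_sites = rng.choice(n_sites, size=40, replace=False)     # sites mapped to the gene set

def gene_set_score(values: np.ndarray, labels: np.ndarray, sites: np.ndarray) -> float:
    """Mean absolute case-control methylation difference over the gene set's sites."""
    diff = values[labels == 1].mean(axis=0) - values[labels == 0].mean(axis=0)
    return float(np.abs(diff[sites]).mean())

observed = gene_set_score(beta, labels, gene_set_sites)
null = [gene_set_score(beta, rng.permutation(labels), gene_set_sites) for _ in range(1000)]
p_value = (1 + sum(s >= observed for s in null)) / (1 + len(null))
print(f"observed score {observed:.4f}, permutation p = {p_value:.3f}")
```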
Oxidative bisulfite sequencing
5-Methylcytosine and 5-hydroxymethylcytosine both read as a C in bisulfite sequencing. In oxidative bisulfite sequencing (oxBS), a chemical oxidant (potassium perruthenate, KRuO4) is used to convert 5-hydroxymethylcytosine to 5-formylcytosine, which subsequently converts to uracil during bisulfite treatment. The only base that then reads as a C is 5-methylcytosine, giving a map of the true methylation status in the DNA sample. Levels of 5-hydroxymethylcytosine can also be quantified by measuring the difference between bisulfite and oxidative bisulfite sequencing.
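The subtraction is straightforward in principle; the sketch below (hypothetical read counts, ignoring the binomial error modelling that serious oxBS pipelines apply) estimates 5hmC at a single CpG site from matched BS and oxBS libraries.

```python
# Minimal sketch: per-site 5hmC estimation from paired bisulfite (BS) and
# oxidative bisulfite (oxBS) data. BS reports 5mC + 5hmC, oxBS reports 5mC only,
# so their difference estimates 5hmC; sampling noise can make the naive
# difference negative, so it is clipped to zero here. Counts are hypothetical.

def methylation_level(c_reads: int, t_reads: int) -> float:
    return c_reads / (c_reads + t_reads)

bs_c, bs_t = 78, 22        # BS library: C calls report 5mC + 5hmC
oxbs_c, oxbs_t = 61, 39    # oxBS library: C calls report 5mC only

total_modified = methylation_level(bs_c, bs_t)
true_5mc = methylation_level(oxbs_c, oxbs_t)
hydroxymethyl = max(0.0, total_modified - true_5mc)

print(f"BS (5mC + 5hmC): {total_modified:.2f}")
print(f"oxBS (5mC):      {true_5mc:.2f}")
print(f"estimated 5hmC:  {hydroxymethyl:.2f}")
```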
Another method, Tet-assisted bisulfite sequencing (TAB-seq), developed by Chuan He's group at the University of Chicago, converts the bases differently: 5hmC reads as C, while 5mC and C both read as T. To achieve this, 5hmC bases are first "protected" by conversion to β-glucosyl-5-hydroxymethylcytosine (5gmC). The Tet enzyme is then introduced to convert all 5mC to 5caC. Bisulfite then converts both C and 5caC into uracil, while 5gmC is read as C in PCR amplification.
See also
Reduced representation bisulfite sequencing
References
External links
Bisulfite conversion protocol
Human Epigenome Project (HEP) - Data — by the Sanger Institute
The Epigenome Network of Excellence
Molecular biology
Epigenetics
Genomics techniques | Bisulfite sequencing | [
"Chemistry",
"Biology"
] | 4,262 | [
"Genetics techniques",
"Genomics techniques",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry"
] |
9,774,017 | https://en.wikipedia.org/wiki/Biomolecular%20Object%20Network%20Databank | The Biomolecular Object Network Databank is a bioinformatics databank containing information on small molecule structures and interactions. The databank integrates a number of existing databases to provide a comprehensive overview of the information currently available for a given molecule.
Background
The Blueprint Initiative started as a research program in the lab of Dr. Christopher Hogue at the Samuel Lunenfeld Research Institute at Mount Sinai Hospital in Toronto. On December 14, 2005, Unleashed Informatics Limited acquired the commercial rights to The Blueprint Initiative intellectual property. This included rights to the protein interaction database BIND, the small molecule interaction database SMID, as well as the data warehouse SeqHound. Unleashed Informatics is a data management service provider and is overseeing the management and curation of The Blueprint Initiative under the guidance of Dr. Hogue.
Construction
BOND integrates the original Blueprint Initiative databases as well as other databases, such as GenBank, combined with many tools required to analyze these data. Annotation links for sequences, including taxon identifiers, redundant sequences, Gene Ontology descriptions, Online Mendelian Inheritance in Man identifiers, conserved domains, database cross-references, LocusLink identifiers and complete genomes are also available. BOND facilitates cross-database queries and is an open-access resource that integrates interaction and sequence data.
Small Molecule Interaction Database (SMID)
The Small Molecule Interaction Database is a database containing protein domain-small molecule interactions. It uses a domain-based approach to identify domain families, found in the Conserved Domain Database (CDD), which interact with a query small molecule. The CDD from NCBI amalgamates data from several different sources; Protein FAMilies (PFAM), Simple Modular Architecture Research Tool (SMART), Cluster of Orthologous Genes (COGs), and NCBI's own curated sequences. The data in SMID is derived from the Protein Data Bank (PDB), a database of known protein crystal structures.
SMID can be queried by entering a protein GI, domain identifier, PDB ID or SMID ID. The results of a search provide small molecule, protein, and domain information for each interaction identified in the database. Interactions with non-biological contacts are normally screened out by default.
SMID-BLAST is a tool developed to annotate known small-molecule binding sites as well as to predict binding sites in proteins whose crystal structures have not yet been determined. The prediction is based on extrapolation of known interactions, found in the PDB, to interactions between an uncrystallized protein and a small molecule of interest. SMID-BLAST was validated against a test set of known small molecule interactions from the PDB. It was shown to be an accurate predictor of protein-small molecule interactions; 60% of predicted interactions identically matched the PDB-annotated binding site, and of these 73% had greater than 80% of the binding residues of the protein correctly identified. Hogue et al. estimated that 45% of predictions that were not observed in the PDB data do in fact represent true positives.
Biomolecular Interaction Network Database (BIND)
Introduction
The idea of a database to document all known molecular interactions was originally put forth by Tony Pawson in the 1990s and was later developed by scientists at the University of Toronto in collaboration with the University of British Columbia. The development of the Biomolecular Interaction Network Database (BIND) has been supported by grants from the Canadian Institutes of Health Research (CIHR), Genome Canada, the Canadian Foundation for Innovation and the Ontario Research and Development Fund. BIND was originally designed to be a constantly growing depository for information regarding biomolecular interactions, molecular complexes and pathways. As proteomics is a rapidly advancing field, there is a need to have information from scientific journals readily available to researchers. BIND facilitates the understanding of molecular interactions and pathways involved in cellular processes and will eventually give scientists a better understanding of developmental processes and disease pathogenesis
The major goals of the BIND project are: to create a public proteomics resource that is available to all; to create a platform to enable datamining from other sources (PreBIND); to create a platform capable of presenting visualizations of complex molecular interactions. From the beginning, BIND has been open access and software can be freely distributed and modified. Currently, BIND includes a data specification, a database and associated data mining and visualization tools. Eventually, it is hoped that BIND will be a collection of all the interactions occurring in each of the major model organisms.
Database structure
BIND contains information on three types of data: interactions, molecular complexes and pathways.
Interactions are the basic component of BIND and describe how 2 or more objects (A and B) interact with each other. The objects can be a variety of things: DNA, RNA, genes, proteins, ligands, or photons. The interaction entry contains the most information about a molecule; it provides information on its name and synonyms, where it is found (e.g. where in the cell, what species, when it is active, etc.), and its sequence or where its sequence can be found. The interaction entry also outlines the experimental conditions required to observe binding in vitro, as well as chemical dynamics (including thermodynamics and kinetics).
The second type of BIND entries are the molecular complexes. Molecular complexes are defined as an aggregate of molecules that are stable and have a function when bound to each other. The record may also contain some information on the role of the complex in various interactions and the molecular complex entry links data from 2 or more interaction records.
The third component of BIND is the pathway record section. A pathway consists of a network of interactions that are involved in the regulation of cellular processes. This section may also contain information on phenotypes and diseases related to the pathway.
The minimum amount of information needed to create an entry in BIND is a PubMed publication reference and an entry in another database (e.g. GenBank). Each entry within the database provides references/authors for the data. As BIND is a constantly growing database, all components of BIND track updates and changes.
BIND is based on a data specification written using Abstract Syntax Notation 1 (ASN.1) language. ASN.1 is used also by NCBI when storing data for their Entrez system and because of this BIND uses the same standards as NCBI for data representation. The ASN.1 language is preferred because it can be easily translated into other data specification languages (e.g. XML), can easily handle complex data and can be applied to all biological interactions – not just proteins. Bader and Hogue (2000) have prepared a detailed manuscript on the ASN.1 data specification used by BIND.
Data submission and curation
User submission to the database is encouraged. To contribute to the database, one must submit: contact info, a PubMed identifier and the two molecules that interact. The person who submits a record is the owner of it. All records are validated before being made public and BIND is curated for quality assurance. BIND curation has two tracks: high-throughput (HTP) and low-throughput (LTP). HTP records are from papers which have reported more than 40 interaction results from one experimental methodology. HTP curators typically have a bioinformatics background. The HTP curators are responsible for the collection and storage of experimental data and they also create scripts to update BIND based on new publications. LTP records are curated by individuals with either an MSc or PhD and laboratory experience in interaction research. LTP curators are given further training through the Canadian Bioinformatics Workshops. Information on small molecule chemistry is curated separately by chemists to ensure the curator is knowledgeable about the subject. The priority for BIND curation is to focus on LTP to collect information as it is published. Although HTP studies provide more information at once, there are more LTP studies being reported and similar numbers of interactions are being reported by both tracks. In 2004, BIND collected data from 110 journals.
Database growth
BIND has grown significantly since its conception; in fact, the database saw a 10-fold increase in entries between 2003 and 2004. By September 2004, there were over 100,000 interaction records (including 58,266 protein-protein, 4,225 genetic, 874 protein-small molecule, 25,857 protein-DNA, and 19,348 biopolymer interactions). The database also contains sequence information for 31,972 proteins, 4,560 DNA samples and 759 RNA samples. These entries have been collected from 11,649 publications; therefore, the database represents an important amalgamation of data. The organisms with entries in the database include: Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, Mus musculus, Caenorhabditis elegans, Helicobacter pylori, Bos taurus, HIV-1, Gallus gallus, Arabidopsis thaliana, as well as others. In total, 901 taxa were included by September 2004 and BIND has been split up into BIND-Metazoa, BIND-Fungi, and BIND-Taxroot.
Not only is the information contained within the database continually updated, the software itself has gone through several revisions. Version 1.0 of BIND was released in 1999 and based on user feedback it was modified to include additional detail on experimental conditions required for binding and a hierarchical description of cellular location of the interaction. Version 2.0 was released in 2001 and included the capability to link to information available in other databases. Version 3.0 (2002) expanded the database from physical/biochemical interactions to also include genetic interactions. Version 3.5 (2004) included a refined user-interface that aimed to simplify information retrieval. In 2006, BIND was incorporated into the Biomolecular Object Network Database (BOND) where it continues to be updated and improved.
Special features
BIND was the first database of its kind to contain info on biomolecular interactions, reactions and pathways in one schema. It is also the first to base its ontology on chemistry which allows 3D representation of molecular interactions. The underlying chemistry allows molecular interactions to be described down to the atomic level of resolution.
PreBIND is an associated system for data mining to locate biomolecular interaction information in the scientific literature. The name or accession number of a protein can be entered and PreBIND will scan the literature and return a list of potentially interacting proteins. BIND BLAST is also available to find interactions with proteins that are similar to the one specified in the query.
BIND offers several “features” that many other proteomics databases do not include. The authors of this program have created an extension to traditional IUPAC nomenclature to help describe post-translational modifications that occur to amino acids. These modifications include: acetylation, formylation, methylation, palmitoylation, etc. The extension of the traditional IUPAC codes allows these amino acids to be represented in sequence form as well. BIND also utilizes a unique visualization tool known as OntoGlyphs. The OntoGlyphs were developed based on Gene Ontology (GO) and provide a link back to the original GO information. A number of GO terms have been grouped into categories, each one representing a specific function, binding specificity, or localization in the cell. There are 83 OntoGlyph characters in total. There are 34 functional OntoGlyphs which contain information about the role of the molecule (e.g. cell physiology, ion transport, signaling). There are 25 binding OntoGlyphs which describe what the molecule binds (e.g. ligands, DNA, ions). The other 24 OntoGlyphs provide information about the location of the molecule within a cell (e.g. nucleus, cytoskeleton). The OntoGlyphs can be selected and manipulated to include or exclude certain characteristics from search results. The visual nature of the OntoGlyphs also facilitates pattern recognition when looking at search results. ProteoGlyphs are graphical representations of the structural and binding properties of proteins at the level of conserved domains. The protein is diagrammed as a straight horizontal line and glyphs are inserted to represent conserved domains. Each glyph is displayed to represent the relative position and length of its alignment in the protein sequence.
Accessing the database
The database user interface is web-based and can be queried using text or accession numbers/identifiers. Since its integration with the other components of BOND, sequences have been added to interactions, molecular complexes and pathways in the results. Records include information on: BIND ID, description of the interaction/complex/pathway, publications, update records, organism, OntoGlyphs, ProteoGlyphs, and links to other databases where additional information can be found. BIND records include various viewing formats (e.g. HTML, ASN.1, XML, FASTA), various formats for exporting results (e.g. ASN.1, XML, GI list, PDF), and visualizations (e.g. Cytoscape). The exact viewing and exporting options vary depending on what type of data has been retrieved.
User statistics
The number of Unleashed Registrants has increased 10 fold since the integration of BIND. As of December 2006 registration fell just short of 10,000. Subscribers to the commercial versions of BOND fall into six general categories; agriculture and food, biotechnology, pharmaceuticals, informatics, materials and other. The biotechnology sector is the largest of these groups, holding 28% of subscriptions. Pharmaceuticals and informatics follow with 22% and 18% respectively. The United States holds the bulk of these subscriptions, 69%. Other countries with access to the commercial versions of BOND include Canada, the United Kingdom, Japan, China, Korea, Germany, France, India and Australia. All of these countries fall below 6% in user share.
References
Biochemistry databases | Biomolecular Object Network Databank | [
"Chemistry",
"Biology"
] | 2,879 | [
"Biochemistry",
"Biochemistry databases"
] |
9,775,312 | https://en.wikipedia.org/wiki/Wool%20insulation | Wool insulation is made from sheep wool fibres that are either mechanically held together or bonded using between 5% and 20% recycled polyester adhesive to form insulating batts, rolls and ropes. Some companies do not use any adhesives or bonding agents, but rather entangle the wool fibers into high R-value, air-capturing knops (or balls) that hold themselves together. Natural wool insulation is effective for both thermal and acoustic insulation. The wool is often sourced from the less expensive black wools of the UK and Europe. Batts are commonly used in the walls and ceilings of timber-frame buildings, rolls can be cut to size for lofts, and ropes can be used between the logs in log homes. Wool knops are installed loosely in attics or in walls as a blow-in-blanket system utilizing a fiber mesh to hold the wool in place during the blow-in process.
Natural wool insulation should be distinguished from mineral wool insulation, also called slag wool or rock wool, which only resembles natural wool fibers. It is actually made from rock, blast furnace slag, and other raw materials which are melted and spun into fibers.
Sheep wool is a natural, sustainable, recyclable material, which is biodegradable, and has low embodied energy. It does not endanger the health of people or the environment, and does not require protection to install, unlike fiberglass insulation. Wool is a highly effective insulating material which performs better than its rated R value because it can absorb and release moisture.
Mongolian nomads used felted and woven sheep wool pads as an insulating layer on the walls and floors of their dwellings, called ger or yurts. The use of wool for insulation is starting to rise in popularity. It is already popular in Australia, which produces 55% of the world's raw and processed wool, as well as in Europe and Canada, and is gaining ground in the United States
Building considerations
Wool insulation commonly comes in rolls of batts or ropes with varied widths and thicknesses depending on the manufacturer. Generally, wool batts have thicknesses of 50 mm (2 in) to 100mm (4 in), with widths of 400 mm (16 in) and 600 mm (24 in), and lengths of 4000 mm (13 ft 4 in), 5000 mm (16 ft 8 in), 6000 mm (20 ft) and 7200 mm (24 ft). The widths of 16 in and 24 in are the standard measurements between studs in a stud frame wall. Most manufacturers provide custom sizes as well and batts and ropes are easy to cut once on site.
Wool insulation costs significantly more than conventional fiberglass insulation, but does not require the use of protective gloves, and may have significantly lower health risks to both the building occupants and the installation crew. It can be used in the roof, walls and floors of any building type as long as there are spaces to put the insulation in. Installing wool insulation is very similar to installing conventional insulation batts; it can be held into place with staples or it can be friction-fit, which involves cutting the insulation slightly bigger than the space it occupies, using friction to hold it in place.
Environmental considerations
Some wool used to manufacture insulation is the wool discarded as waste by other industries due to its colour or grade. As a controlled waste product it cannot be disposed of until it is cleaned. Hence the energy footprint of washing the wool is attributed to the livestock industry under PAS2050. There are some primary environmental factors that need to be considered when looking for a source of wool, such as the way the flock is treated for pesticides, the chemicals used in the treatment of the wool after shearing, and the distance from the source to its final destination. Sheep are often treated with insecticide and fungicide in a process called dipping. This leaves a residue on the fleece and can result in groundwater contamination if used improperly. These residues are often washed off once the fleece is sheared, but this results in three byproducts: grease, liquor, and sludge. The first two can be safely disposed of, but the third contains remnants of the pesticides which cause a concern for disposal.
Sheep wool insulation is often treated with borax to enhance its fire-retardant and pest-repellent qualities. The level of borax is relatively low, only 4% dry weight, although the scouring baths have a higher load of 8–9%. Borax mining employs one of the cleanest mining techniques available, but borax is increasingly coming into focus as a suspected reproductive toxin, having been considered relatively safe for many years; animal ingestion studies in several species, at high doses, indicate that borates cause reproductive and developmental effects. The most significant exposure route for humans is inhalation, which raises questions over whether dust from wool insulation could lead to a significant exposure to humans, though this is probably unlikely for anyone except professional installers.
Some companies use DE (diatomaceous earth) as a pesticide. DE is believed to be harmless and is used in feed and on animals to stave off parasites. DE has the drawback of having to be applied in two steps to be effective, and the wool has to be installed loose.
Some companies use Thorlan IW (potassium fluorotitanate), which repels moths. It is a titanium-based treatment that remains with the fiber during its entire period of use. Thorlan IW is in the European Union chemical law register of certified products. A rubber solution is applied hot and allows a coating of Thorlan to stay with the product for its lifetime.
References
Sustainable building
Wool industry
Building insulation materials
Heating, ventilation, and air conditioning | Wool insulation | [
"Engineering"
] | 1,168 | [
"Building engineering",
"Sustainable building",
"Construction"
] |
9,775,613 | https://en.wikipedia.org/wiki/Superadobe | Superadobe is a form of earthbag construction that was developed by Iranian architect Nader Khalili. The technique uses layered long fabric tubes or bags filled with adobe to form a compression structure. The resulting beehive-shaped structures employ corbelled arches, corbelled domes, and vaults to create sturdy single and double-curved shells. It has received growing interest for the past two decades in the natural building and sustainability movements.
History
Although it is not known exactly for how long, earthbag shelters have been used for decades, primarily as implements of refuge in times of war. Military infantrymen used sand-filled sacks to create bunkers and barriers for protection even prior to World War I. In the last century, other earthbag buildings have undergone extensive research and are slowly beginning to gain worldwide recognition as a plausible solution for providing affordable housing.
German architect Frei Otto is said to have experimented with earthbags, as, more recently, has Gernot Minke. It was Nader Khalili who popularized earthbag construction. Initially, in 1984, in response to a NASA call for housing designs for future human settlements on the Moon and on Mars, he proposed using Moon dust to fill the plastic Superadobe tubes and velcroing the layers together (instead of using barbed wire). He came to term his particular technique of earthbag construction "Superadobe". Some projects have been done using bags as low-tech foundations for straw-bale construction; they can be covered in a waterproof membrane to keep the straw dry.
In 1995, 15 refugee shelters were built in Iran, by Nader Khalili and the United Nations Development Programme (UNDP) and the United Nations High Commissioner for Refugees (UNHCR) in response to refugees from the Persian Gulf War. According to Khalili the cluster of 15 domes that was built could have been repeated by the thousands. The government dismantled the camp a few years later.
Since then, the Superadobe Method has been put to use in Canada, Mexico, Brazil, Belize, Costa Rica, Chile, Iran, India, Russia, Mali, and Thailand, as well as in the U.S.
While Superadobe constructions have generally been limited to approximately 4 meters in diameter, larger structures have been created by grouping several "beehives" together to form a network of domes. There is a 32-foot (10 m) dome being constructed in the San Ignacio area of Belize, which, when finished, will be the center dome of an eco-resort complex.
BBC News reported in March 2019 that superadobe structures have withstood earthquakes as severe as 7.2 magnitude.
Methodology
Superadobe's earthbag technique lends itself to a wide range of materials. Polypropylene tubing is ideal, although burlap is also sufficient. Likewise, while sand, cement, or lime are preferred, virtually any fill material (e.g. gravel, crushed volcanic rock, or rice hulls) will work.
After materials are gathered and the dimensions of the building are decided upon, a circular foundation trench is dug, approximately 1 foot deep and 8–14 feet in diameter, giving room for at least two layers of earthbags to be laid down underground. A chain is anchored to the ground in the center of the circle and used as a pair of compasses to trace the shape of the base. Another chain is fastened just outside the dome wall: this is the fixed or height guide and provides an interior measurement for the layers as they corbel higher, ensuring the accuracy of each new layer as it is laid and tamped.
Between layers of tamped, filled tubes, a loop of barbed wire functions as mortar and holds the structure together. Window voids can be placed in several ways: either by rolling the filled tube back on itself around a circular plug (forming an arched header) or by sawing out a Gothic or pointed arch void after the filler material has set.
Once the corbelled dome is complete, it can be covered in several different kinds of exterior treatments, both for aesthetic reasons and to protect the structure from environmental damage such as that from ultraviolet radiation. Like the materials for the construction itself, there are multiple choices. While CalEarth names plaster as the most common finishing option, soil and living grass have also been used. Khalili has also used a mix of earth and plaster, further covered by a "reptile" layer of cement and earth balls that strengthen the finish by redirecting stress.
Emergency shelters
According to Khalili's website, in an emergency, impermanent shelters can be built with unskilled labor, using only dirt with no cement or lime, and for the sake of speed of construction, windows can be punched out later due to the strength of the compressive nature of the dome/beehive. Superadobe is not an exact art and similar materials may be substituted if the most ideal ones are not readily available. Ordinary sand bags can also be used to form the dome if no Superadobe tubes can be procured; this in fact was how the original design was developed.
In an interview with an AIA (American Institute of Architects) representative, Nader Khalili, Superadobe's founder and figurehead, said this about the emergency shelter aspects of Superadobe:
Usage of Superadobe in contemporary architecture
There exists a great number of Superadobe projects around the world. According to CalEarth, Superadobe domes and vaults have been built in at least 49 countries on six continents, including Algeria, Australia, Brazil, Canada, Colombia, Costa Rica, Guatemala, Hungary, India, Iran, Japan, Jordan, Mexico, Oman, Sierra Leone, Tanzania, United States, Venezuela, and the West Bank. However they range from backyard landscaping, to private homes to eco-resorts or community centres.
The superadobe technique was initially intended for temporary shelter and housing the displaced, because of its low-tech construction, the availability of its materials, and its resistance against natural forces. In 2004, the Aga Khan Award for Architecture went to a cluster of fourteen modest buildings in Baninajar, Iran, by Nader Khalili. These domes were built ten years earlier to house refugees from the Iran-Iraq war, and the construction was carried out by the United Nations Development Programme and the United Nations High Commissioner for Refugees. The award also recognized the potential of these buildings as a prototype for a new kind of temporary housing.
Since then, many permanent private homes were built using the superadobe technique. For instance, house Quetzalcoatl in Costa Rica is composed of five full domes and four half domes using the earthbag technique.
Some buildings use the technique for purposes other than residential. The 100 Classrooms for Refugee Children project by Emergency Architecture & Human Rights hosts Syrian and Jordanian children in Za’atari village, 10 km from the Syrian border.
The Langbos Children’s Centre, in the rural Eastern Cape of South Africa, was designed by Jason Erlank Architects to provide a multi-functional space for community-driven initiatives in Langbos. It consists of four domes of a total area of 217 square metres.
The Majara Residence on Hormuz Island in the South of Iran is a cluster of 200 small-scale interconnected superadobe domes that form a neighbourhood of 15 residences and public facilities and is designed by ZAV Architects. The domes and the landscape surrounding them covers an area of 10300 square metres, while the built space is around 4000 square metres. This project was preceded by a prototype called Rong Cultural Center which tested the Superadobe technique on the same island.
See also
Earthbag construction
References
External links
Earthbagstore.com - information, workshops, blog, and e-shop with earthbags for superadobe in Europe.
www.videterra.org (Superadobe Italia) - shares information and promotes superadobe domes and houses; Vide Terra provides superadobe workshops in Italy, Europe and the Mediterranean area.
Khalili's site describing his method, which he names "Superadobe", not "Super Adobe".
earthbagbuilding.com - Includes many photos of Superadobe projects.
calearth-superadobe
Masonry
Appropriate technology
Soil-based building materials
Foundations (buildings and structures) | Superadobe | [
"Engineering"
] | 2,128 | [
"Construction",
"Structural engineering",
"Foundations (buildings and structures)",
"Masonry"
] |
1,087,818 | https://en.wikipedia.org/wiki/Frobenius%20normal%20form | In linear algebra, the Frobenius normal form or rational canonical form of a square matrix A with entries in a field F is a canonical form for matrices obtained by conjugation by invertible matrices over F. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for A (i.e., spanned by some vector and its repeated images under A). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix B is similar to A if and only if it has the same rational canonical form as A. Since this form can be found without any operations that might change when extending the field F (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius.
Some authors use the term rational canonical form for a somewhat different form that is more properly called the primary rational canonical form. Instead of decomposing into a minimum number of cyclic subspaces, the primary form decomposes into a maximum number of cyclic subspaces. It is also defined over F, but has somewhat different properties: finding the form requires factorization of polynomials, and as a consequence the primary rational canonical form may change when the same matrix is considered over an extension field of F. This article mainly deals with the form that does not require factorization, and explicitly mentions "primary" when the form using factorization is meant.
Motivation
When trying to find out whether two square matrices A and B are similar, one approach is to try, for each of them, to decompose the vector space as far as possible into a direct sum of stable subspaces, and compare the respective actions on these subspaces. For instance if both are diagonalizable, then one can take the decomposition into eigenspaces (for which the action is as simple as it can get, namely by a scalar), and then similarity can be decided by comparing eigenvalues and their multiplicities. While in practice this is often a quite insightful approach, there are various drawbacks this has as a general method. First, it requires finding all eigenvalues, say as roots of the characteristic polynomial, but it may not be possible to give an explicit expression for them. Second, a complete set of eigenvalues might exist only in an extension of the field one is working over, and then one does not get a proof of similarity over the original field. Finally A and B might not be diagonalizable even over this larger field, in which case one must instead use a decomposition into generalized eigenspaces, and possibly into Jordan blocks.
But obtaining such a fine decomposition is not necessary to just decide whether two matrices are similar. The rational canonical form is based on instead using a direct sum decomposition into stable subspaces that are as large as possible, while still allowing a very simple description of the action on each of them. These subspaces must be generated by a single nonzero vector v and all its images by repeated application of the linear operator associated to the matrix; such subspaces are called cyclic subspaces (by analogy with cyclic subgroups) and they are clearly stable under the linear operator. A basis of such a subspace is obtained by taking v and its successive images as long as they are linearly independent. The matrix of the linear operator with respect to such a basis is the companion matrix of a monic polynomial; this polynomial (the minimal polynomial of the operator restricted to the subspace, which notion is analogous to that of the order of a cyclic subgroup) determines the action of the operator on the cyclic subspace up to isomorphism, and is independent of the choice of the vector v generating the subspace.
A direct sum decomposition into cyclic subspaces always exists, and finding one does not require factoring polynomials. However it is possible that cyclic subspaces do allow a decomposition as direct sum of smaller cyclic subspaces (essentially by the Chinese remainder theorem). Therefore, just having for both matrices some decomposition of the space into cyclic subspaces, and knowing the corresponding minimal polynomials, is not in itself sufficient to decide their similarity. An additional condition is imposed to ensure that for similar matrices one gets decompositions into cyclic subspaces that exactly match: in the list of associated minimal polynomials each one must divide the next (and the constant polynomial 1 is forbidden to exclude trivial cyclic subspaces). The resulting list of polynomials are called the invariant factors of (the K[X]-module defined by) the matrix, and two matrices are similar if and only if they have identical lists of invariant factors. The rational canonical form of a matrix A is obtained by expressing it on a basis adapted to a decomposition into cyclic subspaces whose associated minimal polynomials are the invariant factors of A; two matrices are similar if and only if they have the same rational canonical form.
Example
Consider the following matrix A, over Q:
A has minimal polynomial , so that the dimension of a subspace generated by the repeated images of a single vector is at most 6. The characteristic polynomial is , which is a multiple of the minimal polynomial by a factor . There always exist vectors such that the cyclic subspace that they generate has the same minimal polynomial as the operator has on the whole space; indeed most vectors will have this property, and in this case the first standard basis vector does so: the vectors for are linearly independent and span a cyclic subspace with minimal polynomial . There exist complementary stable subspaces (of dimension 2) to this cyclic subspace, and the space generated by vectors and is an example. In fact one has , so the complementary subspace is a cyclic subspace generated by ; it has minimal polynomial . Since is the minimal polynomial of the whole space, it is clear that must divide (and it is easily checked that it does), and we have found the invariant factors and of A. Then the rational canonical form of A is the block diagonal matrix with the corresponding companion matrices as diagonal blocks, namely
A basis on which this form is attained is formed by the vectors above, followed by for ; explicitly this means that for
,
one has
General case and theory
Fix a base field F and a finite-dimensional vector space V over F. Given a polynomial P ∈ F[X], there is associated to it a companion matrix CP whose characteristic polynomial and minimal polynomial are both equal to P.
Theorem: Let V be a finite-dimensional vector space over a field F, and A a square matrix over F. Then V (viewed as an F[X]-module with the action of X given by A) admits a F[X]-module isomorphism
V ≅ F[X]/f1 ⊕ … ⊕ F[X]/fk
where the fi ∈ F[X] may be taken to be monic polynomials of positive degree (so they are non-units in F[X]) that satisfy the relations
f1 | f2 | … | fk
(where "a | b" is notation for "a divides b"); with these conditions the list of polynomials fi is unique.
Sketch of Proof: Apply the structure theorem for finitely generated modules over a principal ideal domain to V, viewing it as an F[X]-module. The structure theorem provides a decomposition into cyclic factors, each of which is a quotient of F[X] by a proper ideal; the zero ideal cannot be present since the resulting free module would be infinite-dimensional as F vector space, while V is finite-dimensional. For the polynomials fi one then takes the unique monic generators of the respective ideals, and since the structure theorem ensures containment of every ideal in the preceding ideal, one obtains the divisibility conditions for the fi. See [DF] for details.
Given an arbitrary square matrix, the elementary divisors used in the construction of the Jordan normal form do not exist over F[X], so the invariant factors fi as given above must be used instead. The last of these factors fk is then the minimal polynomial, which all the invariant factors therefore divide, and the product of the invariant factors gives the characteristic polynomial. Note that this implies that the minimal polynomial divides the characteristic polynomial (which is essentially the Cayley-Hamilton theorem), and that every irreducible factor of the characteristic polynomial also divides the minimal polynomial (possibly with lower multiplicity).
For each invariant factor fi one takes its companion matrix Cfi, and the block diagonal matrix formed from these blocks yields the rational canonical form of A. When the minimal polynomial is identical to the characteristic polynomial (the case k = 1), the Frobenius normal form is the companion matrix of the characteristic polynomial. As the rational canonical form is uniquely determined by the unique invariant factors associated to A, and these invariant factors are independent of basis, it follows that two square matrices A and B are similar if and only if they have the same rational canonical form.
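To make the construction concrete, the sketch below assembles a Frobenius normal form from a hypothetical chain of invariant factors, using one common convention for the companion matrix (the transposed convention is equally valid), and checks that the characteristic polynomial of the block-diagonal matrix is the product of the invariant factors. Computing the invariant factors of a given matrix A (for example via the Smith normal form of xI - A over F[x]) is not shown.

```python
# Minimal sketch: build companion matrices of given invariant factors and stack
# them into a block-diagonal (Frobenius normal form) matrix with sympy.

import sympy as sp

x = sp.symbols("x")

def companion(poly: sp.Poly) -> sp.Matrix:
    """Companion matrix of a monic polynomial p = x^n + c_{n-1}x^{n-1} + ... + c_0."""
    coeffs = poly.all_coeffs()          # [1, c_{n-1}, ..., c_0]
    n = poly.degree()
    C = sp.zeros(n, n)
    for i in range(1, n):
        C[i, i - 1] = 1                 # subdiagonal of ones
    for i in range(n):
        C[i, n - 1] = -coeffs[n - i]    # last column: -c_0, -c_1, ..., -c_{n-1}
    return C

# Hypothetical invariant factors with f1 dividing f2.
f1 = sp.Poly(x**2 - 1, x)
f2 = sp.Poly(x**4 - x**2, x)
frobenius_form = sp.diag(*[companion(f) for f in (f1, f2)])

char_poly = frobenius_form.charpoly(x).as_expr()
print(frobenius_form)
print(sp.factor(char_poly - (f1 * f2).as_expr()))   # 0: char poly = product of invariant factors
```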
A rational normal form generalizing the Jordan normal form
The Frobenius normal form does not reflect any form of factorization of the characteristic polynomial, even if it does exist over the ground field F. This implies that it is invariant when F is replaced by a different field (as long as it contains the entries of the original matrix A). On the other hand, this makes the Frobenius normal form rather different from other normal forms that do depend on factoring the characteristic polynomial, notably the diagonal form (if A is diagonalizable) or more generally the Jordan normal form (if the characteristic polynomial splits into linear factors). For instance, the Frobenius normal form of a diagonal matrix with distinct diagonal entries is just the companion matrix of its characteristic polynomial.
There is another way to define a normal form, that, like the Frobenius normal form, is always defined over the same field F as A, but that does reflect a possible factorization of the characteristic polynomial (or equivalently the minimal polynomial) into irreducible factors over F, and which reduces to the Jordan normal form when this factorization only contains linear factors (corresponding to eigenvalues). This form is sometimes called the generalized Jordan normal form, or primary rational canonical form. It is based on the fact that the vector space can be canonically decomposed into a direct sum of stable subspaces corresponding to the distinct irreducible factors P of the characteristic polynomial (as stated by the ), where the characteristic polynomial of each summand is a power of the corresponding P. These summands can be further decomposed, non-canonically, as a direct sum of cyclic F[x]-modules (like is done for the Frobenius normal form above), where the characteristic polynomial of each summand is still a (generally smaller) power of P. The primary rational canonical form is a block diagonal matrix corresponding to such a decomposition into cyclic modules, with a particular form called generalized Jordan block in the diagonal blocks, corresponding to a particular choice of a basis for the cyclic modules. This generalized Jordan block is itself a block matrix of the form
where C is the companion matrix of the irreducible polynomial , and is a matrix whose sole nonzero entry is a 1 in the upper right-hand corner. For the case of a linear irreducible factor, these blocks are reduced to single entries, and one finds a (transposed) Jordan block. In any generalized Jordan block, all entries immediately below the main diagonal are 1. A basis of the cyclic module giving rise to this form is obtained by choosing a generating vector (one that is not annihilated by where the minimal polynomial of the cyclic module is ), and taking as basis
where .
See also
Smith normal form
References
[DF] David S. Dummit and Richard M. Foote. Abstract Algebra. 2nd edition, John Wiley & Sons. pp. 442, 446, 452–458.
External links
Rational Canonical Form (Mathworld)
Algorithms
An O(n3) Algorithm for Frobenius Normal Form
An Algorithm for the Frobenius Normal Form (pdf)
A rational canonical form Algorithm (pdf)
Linear algebra
Matrix normal forms | Frobenius normal form | [
"Mathematics"
] | 2,515 | [
"Linear algebra",
"Algebra"
] |
1,088,425 | https://en.wikipedia.org/wiki/Mahler%20measure | In mathematics, the Mahler measure M(p) of a polynomial p(z) with complex coefficients is defined as
M(p) = |a| \prod_{i=1}^{n} \max(1, |\alpha_i|),
where p(z) factorizes over the complex numbers as
p(z) = a(z - \alpha_1)(z - \alpha_2) \cdots (z - \alpha_n).
The Mahler measure can be viewed as a kind of height function. Using Jensen's formula, it can be proved that this measure is also equal to the geometric mean of |p(z)| for z on the unit circle (i.e., |z| = 1):
M(p) = \exp\left( \int_0^1 \log |p(e^{2\pi i\theta})| \, d\theta \right).
By extension, the Mahler measure of an algebraic number \alpha is defined as the Mahler measure of the minimal polynomial of \alpha over \mathbb{Q}. In particular, if \alpha is a Pisot number or a Salem number, then its Mahler measure is simply \alpha.
The Mahler measure is named after the German-born Australian mathematician Kurt Mahler.
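The two formulas above are easy to compare numerically. The sketch below is an illustrative check, using x^3 - x - 1 (the minimal polynomial of the smallest Pisot number) as a test case; it evaluates the root-product definition and the geometric mean over the unit circle, and both come out close to 1.3247.

```python
# Minimal sketch: numerically comparing the two expressions for the Mahler
# measure of a one-variable polynomial.

import numpy as np

coeffs = [1, 0, -1, -1]                       # p(x) = x^3 - x - 1

# Definition via the roots: M(p) = |a| * prod(max(1, |root|)).
roots = np.roots(coeffs)
mahler_roots = abs(coeffs[0]) * np.prod(np.maximum(1.0, np.abs(roots)))

# Jensen's formula: geometric mean of |p| on the unit circle
# (mean of log|p| over uniformly spaced points approximates the integral).
theta = np.linspace(0.0, 2.0 * np.pi, 1 << 14, endpoint=False)
on_circle = np.abs(np.polyval(coeffs, np.exp(1j * theta)))
mahler_jensen = np.exp(np.mean(np.log(on_circle)))

print(f"product over roots  : {mahler_roots:.12f}")
print(f"geometric mean on S1: {mahler_jensen:.12f}")
```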
Properties
The Mahler measure is multiplicative: M(p\,q) = M(p)\,M(q).
M(p) = \lim_{\tau \to 0^+} \|p\|_\tau, where \|p\|_\tau = \left( \frac{1}{2\pi} \int_0^{2\pi} |p(e^{i\theta})|^\tau \, d\theta \right)^{1/\tau} is the L_\tau norm of p.
Kronecker's Theorem: If p is an irreducible monic integer polynomial with M(p) = 1, then either p(z) = z or p is a cyclotomic polynomial.
(Lehmer's conjecture) There is a constant \mu > 1 such that if p is an irreducible integer polynomial, then either M(p) = 1 or M(p) \ge \mu.
The Mahler measure of a monic integer polynomial is a Perron number.
Higher-dimensional Mahler measure
The Mahler measure of a multi-variable polynomial p(x_1, \ldots, x_n) is defined similarly by the formula
M(p) = \exp\left( \frac{1}{(2\pi)^n} \int_0^{2\pi} \cdots \int_0^{2\pi} \log |p(e^{i\theta_1}, \ldots, e^{i\theta_n})| \, d\theta_1 \cdots d\theta_n \right).
It inherits the above three properties of the Mahler measure for a one-variable polynomial.
The multi-variable Mahler measure has been shown, in some cases, to be related to special values of zeta-functions and L-functions. For example, in 1981, Smyth proved the formulas
m(1 + x + y) = \frac{3\sqrt{3}}{4\pi} L(\chi_{-3}, 2),
where L(\chi_{-3}, s) is a Dirichlet L-function, and
m(1 + x + y + z) = \frac{7}{2\pi^2} \zeta(3),
where \zeta is the Riemann zeta function. Here m(p) = \log M(p) is called the logarithmic Mahler measure.
Some results by Lawton and Boyd
From the definition, the Mahler measure is viewed as the integrated values of polynomials over the torus (also see Lehmer's conjecture). If vanishes on the torus , then the convergence of the integral defining is not obvious, but it is known that does converge and is equal to a limit of one-variable Mahler measures, which had been conjectured by Boyd.
This is formulated as follows: Let denote the integers and define . If is a polynomial in variables and define the polynomial of one variable by
and define by
where .
Boyd's proposal
Boyd provided more general statements than the above theorem. He pointed out that the classical Kronecker's theorem, which characterizes monic polynomials with integer coefficients all of whose roots are inside the unit disk, can be regarded as characterizing those polynomials of one variable whose measure is exactly 1, and that this result extends to polynomials in several variables.
Define an extended cyclotomic polynomial to be a polynomial of the form
where is the m-th cyclotomic polynomial, the are integers, and the are chosen minimally so that is a polynomial in the . Let be the set of polynomials that are products of monomials and extended cyclotomic polynomials.
This led Boyd to consider the set of values
and the union . He made the far-reaching conjecture that the set of is a closed subset of . An immediate consequence of this conjecture would be the truth of Lehmer's conjecture, albeit without an explicit lower bound. As Smyth's result suggests that , Boyd further conjectures that
Mahler measure and entropy
An action of by automorphisms of a compact metrizable abelian group may be associated via duality to any countable module over the ring . The topological entropy (which is equal to the measure-theoretic entropy) of this action, , is given by a Mahler measure (or is infinite). In the case of a cyclic module for a non-zero polynomial the formula proved by Lind, Schmidt, and Ward gives , the logarithmic Mahler measure of . In the general case, the entropy of the action is expressed as a sum of logarithmic Mahler measures over the generators of the principal associated prime ideals of the module. As pointed out earlier by Lind in the case of a single compact group automorphism, this means that the set of possible values of the entropy of such actions is either all of or a countable set depending on the solution to Lehmer's problem. Lind also showed that the infinite-dimensional torus either has ergodic automorphisms of finite positive entropy or only has automorphisms of infinite entropy depending on the solution to Lehmer's problem.
See also
Bombieri norm
Height of a polynomial
Notes
References
Everest, Graham and Ward, Thomas (1999). "Heights of polynomials and entropy in algebraic dynamics". Springer-Verlag London, Ltd., London. xii+211 pp. ISBN: 1-85233-125-9
.
External links
Mahler Measure on MathWorld
Jensen's Formula on MathWorld
Analytic number theory
Polynomials | Mahler measure | [
"Mathematics"
] | 983 | [
"Analytic number theory",
"Polynomials",
"Algebra",
"Number theory"
] |
1,089,161 | https://en.wikipedia.org/wiki/Euler%E2%80%93Tricomi%20equation | In mathematics, the Euler–Tricomi equation is a linear partial differential equation useful in the study of transonic flow. It is named after mathematicians Leonhard Euler and Francesco Giacomo Tricomi.
The equation reads ∂²u/∂x² = x ∂²u/∂y². It is hyperbolic in the half plane x > 0, parabolic at x = 0 and elliptic in the half plane x < 0.
Its characteristics are
dy² = x dx², i.e. dy/dx = ±√x,
which have the integral
y = C ± (2/3) x^(3/2),
where C is a constant of integration. The characteristics thus comprise two families of semicubical parabolas, with cusps on the line x = 0, the curves lying on the right hand side of the y-axis.
Particular solutions
A general expression for particular solutions to the Euler–Tricomi equations is:
where
These can be linearly combined to form further solutions such as:
for k = 0:
for k = 1:
etc.
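A short symbolic check of a few low-order polynomial particular solutions, assuming the standard form ∂²u/∂x² = x ∂²u/∂y² (the specific polynomials below are illustrative and need not coincide with those listed above):

import sympy as sp

x, y = sp.symbols('x y')

def solves_euler_tricomi(u):
    # returns True if u_xx - x*u_yy simplifies to zero
    return sp.simplify(sp.diff(u, x, 2) - x * sp.diff(u, y, 2)) == 0

candidates = [sp.Integer(1), x, y, x*y, y**2 + x**3/3, y**3 + x**3*y]
print([solves_euler_tricomi(u) for u in candidates])   # all True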
The Euler–Tricomi equation is a limiting form of Chaplygin's equation.
See also
Burgers equation
Chaplygin's equation
Bibliography
A. D. Polyanin, Handbook of Linear Partial Differential Equations for Engineers and Scientists, Chapman & Hall/CRC Press, 2002.
External links
Tricomi and Generalized Tricomi Equations at EqWorld: The World of Mathematical Equations.
Partial differential equations
Equations of fluid dynamics
Leonhard Euler | Euler–Tricomi equation | [
"Physics",
"Chemistry"
] | 266 | [
"Equations of fluid dynamics",
"Equations of physics",
"Fluid dynamics"
] |
1,089,172 | https://en.wikipedia.org/wiki/Chaplygin%27s%20equation | In gas dynamics, Chaplygin's equation, named after Sergei Alekseevich Chaplygin (1902), is a partial differential equation useful in the study of transonic flow. It is
Here, c is the speed of sound, determined by the equation of state of the fluid and conservation of energy. For polytropic gases, we have c² = (γ − 1)(h0 − v²/2), where γ is the specific heat ratio and h0 is the stagnation enthalpy, in which case Chaplygin's equation reduces to
The Bernoulli equation (see the derivation below) states that maximum velocity occurs when specific enthalpy is at the smallest value possible; one can take the specific enthalpy to be zero corresponding to absolute zero temperature as the reference value, in which case is the maximum attainable velocity. The particular integrals of above equation can be expressed in terms of hypergeometric functions.
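For reference, a commonly quoted hodograph form of the equation, written in terms of the speed v and the flow angle θ for the function Φ(v, θ) (the notation here may differ from that used elsewhere in this article), is:
∂²Φ/∂θ² + [v² / (1 − v²/c²)] ∂²Φ/∂v² + v ∂Φ/∂v = 0, with c² = (γ − 1)(h0 − v²/2) for a polytropic gas.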
Derivation
For two-dimensional potential flow, the continuity equation and the Euler equations (in fact, the compressible Bernoulli's equation due to irrotationality) in Cartesian coordinates involving the variables fluid velocity , specific enthalpy and density are
with the equation of state acting as third equation. Here is the stagnation enthalpy, is the magnitude of the velocity vector and is the entropy. For isentropic flow, density can be expressed as a function only of enthalpy , which in turn using Bernoulli's equation can be written as .
Since the flow is irrotational, a velocity potential exists and its differential is simply . Instead of treating and as dependent variables, we use a coordinate transform such that and become new dependent variables. Similarly the velocity potential is replaced by a new function (Legendre transformation)
such then its differential is , therefore
Introducing another coordinate transformation for the independent variables from to according to the relation and , where is the magnitude of the velocity vector and is the angle that the velocity vector makes with the -axis, the dependent variables become
The continuity equation in the new coordinates become
For isentropic flow, , where is the speed of sound. Using the Bernoulli's equation we find
where . Hence, we have
See also
Euler–Tricomi equation
References
Partial differential equations
Fluid dynamics | Chaplygin's equation | [
"Chemistry",
"Engineering"
] | 458 | [
"Piping",
"Chemical engineering",
"Fluid dynamics"
] |
1,089,282 | https://en.wikipedia.org/wiki/Ultrafast%20monochromator | An ultrafast monochromator is a monochromator that preserves the duration of an ultrashort pulse (in the femtosecond, or lower, time-scale). Monochromators are devices that select for a particular wavelength, typically using a diffraction grating to disperse the light and a slit to select the desired wavelength; however, a diffraction grating introduces path delays that measurably lengthen the duration of an ultrashort pulse. An ultrafast monochromator uses a second diffraction grating to compensate time delays introduced to the pulse by the first grating and other dispersive optical elements.
Diffraction grating
Diffraction gratings are constructed such that the angle of the incident ray, θi, is related to the angle of the mth outgoing ray, θm, by the grating equation
d (sin θi + sin θm) = mλ,
where d is the groove spacing and λ is the wavelength.
Two rays diffracted by adjacent grooves will differ in path length by a distance mλ. The total difference between the longest and shortest path within a beam is computed by multiplying mλ by the total number of grooves illuminated.
For instance, a beam of width 10 mm illuminating a grating with 1200 grooves/mm uses 12,000 grooves. At a wavelength of 10 nm, the first order diffracted beam, m = 1, will have a path length variation across the beam of 120 μm. This corresponds to a time difference in the arrival of 400 femtoseconds. This is often negligible for picosecond pulses but not for those of femtosecond duration.
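The arithmetic of this example can be reproduced with a few lines of Python (an illustrative sketch using the numbers above):

beam_width = 10e-3        # m
groove_density = 1200e3   # grooves per metre (1200 grooves/mm)
wavelength = 10e-9        # m
m = 1                     # diffraction order
c = 3.0e8                 # speed of light, m/s

n_grooves = beam_width * groove_density       # 12,000 grooves illuminated
path_spread = m * wavelength * n_grooves      # 1.2e-4 m = 120 micrometres
time_spread = path_spread / c                 # ~4e-13 s = 400 femtoseconds
print(n_grooves, path_spread, time_spread)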
Applications
A major application is the extraction, without time-broadening, of a single high-order harmonic pulse out of the many generated by an ultrafast laser pulse interacting with a gas target.
See also
Ultrashort pulse
DESY
References
Optical devices | Ultrafast monochromator | [
"Materials_science",
"Engineering"
] | 380 | [
"Glass engineering and science",
"Materials science stubs",
"Electromagnetism stubs",
"Optical devices"
] |
1,090,018 | https://en.wikipedia.org/wiki/Newton%27s%20cradle | Newton's cradle is a device, usually made of metal, that demonstrates the principles of conservation of momentum and conservation of energy in physics with swinging spheres. When one sphere at the end is lifted and released, it strikes the stationary spheres, compressing them and thereby transmitting a pressure wave through the stationary spheres, which creates a force that pushes the last sphere upward. The last sphere swings back and strikes the stationary spheres, repeating the effect in the opposite direction. The device is named after 17th-century English scientist Sir Isaac Newton and was designed by French scientist Edme Mariotte. It is also known as Newton's pendulum, Newton's balls, Newton's rocker or executive ball clicker (since the device makes a click each time the balls collide, which they do repeatedly in a steady rhythm).
Operation
When one of the balls at the end ("the first") is pulled sideways, the attached string constrains it along an upward arc. When released, it strikes the second ball and comes nearly, but not entirely, to a dead stop. The succeeding ball acquires most of the velocity of the first ball and propagates the slightly diminished momentum down the line. Eventually the last ball, having received a successively diminished portion of the first's energy and momentum, begins the process anew in the opposite direction. Each impact produces a sonic wave that propagates through the medium of the intermediate balls. (Any efficiently elastic material such as steel suffices, as long as the kinetic energy is temporarily stored as potential energy in the compression of the material rather than being lost as heat. This is similar to knocking one coin out of a line of touching coins by striking the line with another coin, which happens even if the first struck coin is constrained by pressing on its center such that it cannot move.) In each phase of the process, some mechanical energy is lost; Newton's cradle is not a perpetual motion machine. This would hold true even in the absence of air resistance, as in a vacuum.
There are slight movements in all the balls after the initial strike, but the last ball receives most of the initial energy from the impact of the first ball. When two (or three) balls are dropped, the two (or three) balls on the opposite side swing out. Some say that this behavior demonstrates the conservation of momentum and kinetic energy in elastic collisions. However, if the colliding balls behave as described above with the same mass possessing the same velocity before and after the collisions, then any function of mass and velocity is conserved in such an event. Thus, this first-level explanation is a true, but incomplete, description of the motion.
Physics explanation
Newton's cradle can be modeled fairly accurately with simple mathematical equations with the assumption that the balls always collide in pairs. If one ball strikes four stationary balls that are already touching, these simple equations cannot explain the resulting movements in all five balls, which are not due to friction losses. For example, in a real Newton's cradle the fourth ball has some movement and the first ball has a slight reverse movement. All the animations in this article show idealized action (simple solution) that only occurs if the balls are not touching initially and only collide in pairs.
Simple solution
The conservation of momentum and kinetic energy can be used to find the resulting velocities for two colliding perfectly elastic objects. These two equations are used to determine the resulting velocities of the two objects. For the case of two balls constrained to a straight path by the strings in the cradle, the velocities are a single number instead of a 3D vector for 3D space, so the math requires only two equations to solve for two unknowns. When the two objects have the same mass, the solution is simple: the moving object stops relative to the stationary one and the stationary one picks up all the other's initial velocity. This assumes perfectly elastic objects, so there is no need to account for heat and sound energy losses.
Steel does not compress much, but its elasticity is very efficient, so it does not cause much waste heat. The simple effect from two same-mass efficiently elastic colliding objects constrained to a straight path is the basis of the effect seen in the cradle and gives an approximate solution to all its activities.
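The two-body solution can be written out explicitly; a minimal Python sketch (masses and speeds are illustrative) shows that for equal masses the moving ball stops and the struck ball carries away its velocity:

def elastic_collision(m1, v1, m2, v2):
    # final velocities after a 1-D perfectly elastic collision (momentum and kinetic energy conserved)
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

print(elastic_collision(1.0, 1.0, 1.0, 0.0))   # (0.0, 1.0): a complete exchange of velocities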
For a sequence of same-mass elastic objects constrained to a straight path, the effect continues to each successive object. For example, when two balls are dropped to strike three stationary balls in a cradle, there is an unnoticed but crucial small distance between the two dropped balls, and the action is as follows: the first moving ball that strikes the first stationary ball (the second ball striking the third ball) transfers all of its momentum to the third ball and stops. The third ball then transfers the momentum to the fourth ball and stops, and then the fourth to the fifth ball.
Right behind this sequence, the second moving ball is transferring its momentum to the first moving ball that just stopped, and the sequence repeats immediately and imperceptibly behind the first sequence, ejecting the fourth ball right behind the fifth ball with the same small separation that was between the two initial striking balls. If they are simply touching when they strike the third ball, precision requires the more complete solution below.
Other examples of this effect
The effect of the last ball ejecting with a velocity nearly equal to the first ball can be seen in sliding a coin on a table into a line of identical coins, as long as the striking coin and its twin targets are in a straight line. The effect can similarly be seen in billiard balls. The effect can also be seen when a sharp and strong pressure wave strikes a dense homogeneous material immersed in a less-dense medium. If the identical atoms, molecules, or larger-scale sub-volumes of the dense homogeneous material are at least partially elastically connected to each other by electrostatic forces, they can act as a sequence of colliding identical elastic balls.
The surrounding atoms, molecules, or sub-volumes experiencing the pressure wave act to constrain each other similarly to how the string constrains the cradle's balls to a straight line. As a medical example, lithotripsy shock waves can be sent through the skin and tissue without harm to burst kidney stones. The side of the stones opposite to the incoming pressure wave bursts, not the side receiving the initial strike. In the Indian game carrom, a striker stops after hitting a stationary playing piece, transferring all of its momentum into the piece that was hit.
When the simple solution applies
For the simple solution to precisely predict the action, no pair in the midst of colliding may touch the third ball, because the presence of the third ball effectively makes the struck ball appear more massive. Applying the two conservation equations to solve the final velocities of three or more balls in a single collision results in many possible solutions, so these two principles are not enough to determine resulting action.
Even when there is a small initial separation, a third ball may become involved in the collision if the initial separation is not large enough. When this occurs, the complete solution method described below must be used.
Small steel balls work well because they remain efficiently elastic with little heat loss under strong strikes and do not compress much (up to about 30 μm in a small Newton's cradle). The small, stiff compressions mean they occur rapidly, less than 200 microseconds, so steel balls are more likely to complete a collision before touching a nearby third ball. Softer elastic balls require a larger separation to maximize the effect from pair-wise collisions.
More complete solution
A cradle that best follows the simple solution needs to have an initial separation between the balls that measures at least twice the amount that any one ball compresses, but most do not. This section describes the action when the initial separation is not enough and in subsequent collisions that involve more than two balls even when there is an initial separation. This solution simplifies to the simple solution when only two balls touch during a collision. It applies to all perfectly elastic identical balls that have no energy losses due to friction and can be approximated by materials such as steel, glass, plastic, and rubber.
For two balls colliding, only the two equations for conservation of momentum and energy are needed to solve the two unknown resulting velocities. For three or more simultaneously colliding elastic balls, the relative compressibilities of the colliding surfaces are the additional variables that determine the outcome. For example, five balls have four colliding points and scaling (dividing) three of them by the fourth gives the three extra variables needed to solve for all five post-collision velocities.
Newtonian, Lagrangian, Hamiltonian, and stationary action are the different ways of mathematically expressing classical mechanics. They describe the same physics but must be solved by different methods. All enforce the conservation of energy and momentum. Newton's law has been used in research papers. It is applied to each ball and the sum of forces is made equal to zero. So there are five equations, one for each ball—and five unknowns, one for each velocity. If the balls are identical, the absolute compressibility of the surfaces becomes irrelevant, because it can be divided out of both sides of all five equations, producing zero.
Determining the velocities for the case of one ball striking four initially touching balls is found by modeling the balls as weights with non-traditional springs on their colliding surfaces. Most materials, like steel, that are efficiently elastic approximately follow Hooke's force law for springs, F = k·x, but because the area of contact for a sphere increases as the force increases, colliding elastic balls follow Hertz's adjustment to Hooke's law, F = k·x^(3/2). This and Newton's law for motion (F = m·a) are applied to each ball, giving five simple but interdependent differential equations that can be solved numerically.
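A minimal numerical sketch of this model is given below; the mass, contact stiffness and time span are illustrative values chosen only to show the method:

import numpy as np
from scipy.integrate import solve_ivp

N = 5            # number of balls
mass = 0.1       # kg (illustrative)
k = 5e9          # Hertzian contact stiffness, N/m^1.5 (illustrative)
v0 = 1.0         # m/s, initial speed of the first ball

def rhs(t, y):
    x, v = y[:N], y[N:]
    f = np.zeros(N)
    for i in range(N - 1):
        overlap = x[i] - x[i + 1]        # mutual compression of balls i and i+1
        if overlap > 0:
            F = k * overlap ** 1.5       # Hertz contact law, F = k x^(3/2)
            f[i] -= F
            f[i + 1] += F
    return np.concatenate([v, f / mass])

y0 = np.zeros(2 * N)    # balls initially touching (zero gaps), all at rest...
y0[N] = v0              # ...except the first, which moves with speed v0
sol = solve_ivp(rhs, [0, 5e-4], y0, max_step=1e-7)
print(sol.y[N:, -1])    # final velocities: most of the speed ends up in the last ball,
                        # with small residual motions in the others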
When the fifth ball begins accelerating, it is receiving momentum and energy from the third and fourth balls through the spring action of their compressed surfaces. For identical elastic balls of any type with initially touching balls, the action is the same for the first strike, except the time to complete a collision increases in softer materials. Forty to fifty percent of the kinetic energy of the initial ball from a single-ball strike is stored in the ball surfaces as potential energy for most of the collision process. Of the initial velocity, 13% is imparted to the fourth ball (which can be seen as a 3.3-degree movement if the fifth ball moves out 25 degrees) and there is a slight reverse velocity in the first three balls, the first ball having the largest at −7% of the initial velocity. This separates the balls, but they come back together just before the fifth ball returns. This is due to the pendulum phenomenon of different small angle disturbances having approximately the same time to return to the center.
The Hertzian differential equations predict that if two balls strike three, the fifth and fourth balls will leave with velocities of 1.14 and 0.80 times the initial velocity. This is 2.03 times more kinetic energy in the fifth ball than the fourth ball, which means the fifth ball would swing twice as high in the vertical direction as the fourth ball. But in a real Newton's cradle, the fourth ball swings out as far as the fifth ball. To explain the difference between theory and experiment, the two striking balls must have at least ≈ 10 μm separation (given steel, 100 g, and 1 m/s). This shows that in the common case of steel balls, unnoticed separations can be important and must be included in the Hertzian differential equations, or the simple solution gives a more accurate result.
Effect of pressure waves
The forces in the Hertzian solution above were assumed to propagate in the balls immediately, which is not the case. Sudden changes in the force between the atoms of material build up to form a pressure wave. Pressure waves (sound) in steel travel about 5 cm in 10 microseconds, which is about 10 times faster than the time between the first ball striking and the last ball being ejected. The pressure waves reflect back and forth through all five balls about ten times, although dispersing to less of a wavefront with more reflections. This is fast enough for the Hertzian solution to not require a substantial modification to adjust for the delay in force propagation through the balls. In less-rigid but still very elastic balls such as rubber, the propagation speed is slower, but the duration of collisions is longer, so the Hertzian solution still applies. The error introduced by the limited speed of the force propagation biases the Hertzian solution towards the simple solution because the collisions are not affected as much by the inertia of the balls that are further away.
Identically shaped balls help the pressure waves converge on the contact point of the last ball: at the initial strike point one pressure wave goes forward to the other balls while another goes backward to reflect off the opposite side of the first ball, and then it follows the first wave, being exactly one ball's diameter behind. The two waves meet up at the last contact point because the first wave reflects off the opposite side of the last ball and it meets up at the last contact point with the second wave. Then they reverberate back and forth like this about 10 times until the first ball stops connecting with the second ball. Then the reverberations reflect off the contact point between the second and third balls, but they still converge at the last contact point, until the last ball is ejected —but this is a lessening of a wavefront with each reflection.
Effect of different types of balls
Using different types of material does not change the action as long as the material is efficiently elastic. The size of the spheres does not change the results unless the increased weight exceeds the elastic limit of the material. If the solid balls are too large, energy is being lost as heat, because the elastic limit increases with the radius raised to the power 1.5, but the energy which had to be absorbed and released increases as the cube of the radius. Making the contact surfaces flatter can overcome this to an extent by distributing the compression to a larger amount of material but it can introduce an alignment problem. Steel is better than most materials because it allows the simple solution to apply more often in collisions after the first strike, its elastic range for storing energy remains good despite the higher energy caused by its weight, and the higher weight decreases the effect of air resistance.
Uses
The most common application is that of a desktop executive toy. Another use is as an educational physics demonstration, as an example of conservation of momentum and conservation of energy.
History
The principle demonstrated by the device, the law of impacts between bodies, was first demonstrated by the French physicist Abbé Mariotte in the 17th century. His work on the topic was first presented to the French Academy of Sciences in 1671; it was published in 1673 as Traité de la percussion ou choc des corps ("Treatise on percussion or shock of bodies").
Newton acknowledged Mariotte's work, along with Wren, Wallis and Huygens as the pioneers of experiments on the collisions of pendulum balls, in his Principia.
Christiaan Huygens used pendulums to study collisions. His work, De Motu Corporum ex Percussione (On the Motion of Bodies by Collision) published posthumously in 1703, contains a version of Newton's first law and discusses the collision of suspended bodies including two bodies of equal mass with the motion of the moving body being transferred to the one at rest.
There is much confusion over the origins of the modern Newton's cradle. Marius J. Morin has been credited as being the first to name and make this popular executive toy. However, in early 1967, an English actor, Simon Prebble, coined the name "Newton's cradle" (now used generically) for the wooden version manufactured by his company, Scientific Demonstrations Ltd. After some initial resistance from retailers, they were first sold by Harrods of London, thus creating the start of an enduring market for executive toys. Later a very successful chrome design for the Carnaby Street store Gear was created by the sculptor and future film director Richard Loncraine.
The largest cradle device in the world was designed by MythBusters and consisted of five one-ton concrete and steel rebar-filled buoys suspended from a steel truss. The buoys also had a steel plate inserted in between their two-halves to act as a "contact point" for transferring the energy; this cradle device did not function well because concrete is not elastic so most of the energy was lost to a heat buildup in the concrete. A smaller-scale version constructed by them consists of five chrome steel ball bearings, each weighing , and is nearly as efficient as a desktop model.
The cradle device with the largest-diameter collision balls on public display was visible for more than a year in Milwaukee, Wisconsin, at the retail store American Science and Surplus (see photo). Each ball was an inflatable exercise ball in diameter (encased in steel rings), and was supported from the ceiling using extremely strong magnets. It was dismantled in early August 2010 due to maintenance concerns.
In popular culture
Newton's cradle appears in some films, often as a trope on the desk of a lead villain such as Paul Newman's role in The Hudsucker Proxy, Magneto in X-Men, and the Kryptonians in Superman II. It was used to represent the unyielding position of the NFL towards head injuries in Concussion. It has also been used as a relaxing diversion on the desk of lead intelligent/anxious/sensitive characters such as Henry Winkler's role in Night Shift, Dustin Hoffman's role in Straw Dogs, and Gwyneth Paltrow's role in Iron Man 2. It was featured more prominently as a series of clay pots in Rosencrantz and Guildenstern Are Dead, and as a row of 1968 Eero Aarnio bubble chairs with scantily clad women in them in Gamer. In Storks, Hunter, the CEO of Cornerstore, has one not with balls, but with little birds. Newton's cradle is an item in Nintendo's Animal Crossing where it is referred to as "executive toy". In 2017, an episode of the Omnibus podcast, featuring Jeopardy! champion Ken Jennings and musician John Roderick, focused on the history of Newton's cradle. Newton's cradle is also featured on the desk of Deputy White House Communications Director Sam Seaborn in The West Wing. In the Futurama episode "The Day the Earth Stood Stupid", professor Hubert Farnsworth is shown with his head in a Newton's cradle and saying he's a genius as Philip J. Fry walks by.
Progressive rock band Dream Theater uses the cradle as imagery in album art of their 2005 release Octavarium. Rock band Jefferson Airplane used the cradle on the 1968 album Crown of Creation as a rhythm device to create polyrhythms on an instrumental track.
See also
Galilean cannon
Pendulum wave – another demonstration with pendulums swinging in parallel without collision
References
Literature
B. Brogliato: Nonsmooth Mechanics. Models, Dynamics and Control, Springer, 2nd Edition, 1999.
External links
Educational toys
Office toys
Novelty items
Metal toys
Physics education
Science demonstrations
Science education materials
Office equipment | Newton's cradle | [
"Physics"
] | 3,978 | [
"Applied and interdisciplinary physics",
"Physics education"
] |
1,090,861 | https://en.wikipedia.org/wiki/Glass%20brick | Glass brick, also known as glass block, is an architectural element made from glass. The appearance of glass blocks can vary in color, size, texture and form. Glass bricks provide visual obscuration while admitting light. The modern glass block was developed from pre-existing prism lighting principles in the early 1900s to provide natural light in manufacturing plants. Today glass blocks are used in walls, skylights, and sidewalk lights.
Attributes
Appearance
The texture and color of glass blocks can vary in order to provide a range of transparency. Patterns can be pressed into either the inner void or the outside surface of the glass when it is cooling in order to provide differing effects. Glazes or inserts may also be added in order to create a desired private or decorative effect.
Standards and grading
Glass blocks in Europe are manufactured in accordance with the European Standard EN1052-2. The International Standard is ISO TC 160/SG1. The Standards allow for variation in sizes and production irregularity. Blocks fall within three classifications; Class 1, Class 2 and Class 3 with Class 1 being the highest and best rating with a maximum permissible deviation from designed size and rectangularity of 1 mm.
Insulation
Glass brick has an R-value between 1.75 and 1.96, close to that of thermopane windows. There are newer glass blocks injected with argon gas and having a layer of low-emissivity glass between the halves, which improves the insulation, giving a U-value of 1.5 W/m²·K, between that of triple-glazed windows (1.8 W/m²·K) and specialty double-glazed windows with advanced frames and coatings (1.2 W/m²·K).
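For comparison, assuming the quoted R-values are in the customary imperial units (h·ft²·°F/Btu), they can be converted to SI U-values with a short Python sketch:

# 1 h*ft^2*F/Btu corresponds to about 0.1761 m^2*K/W
for r_imperial in (1.75, 1.96):
    r_si = r_imperial * 0.1761             # m^2*K/W
    u_value = 1.0 / r_si                   # W/(m^2*K)
    print(r_imperial, round(u_value, 2))   # about 3.2 and 2.9 W/(m^2*K)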
Applications
Wall blocks
Glass blocks can provide light and serve as a decorative addition to an architectural structure, but hollow glass blocks are non load-bearing unless stated otherwise. Hollow glass wall blocks are manufactured as two separate halves and, while the glass is still molten, the two pieces are pressed together and annealed. The resulting glass blocks will have a partial vacuum at the hollow center. Due to the hollow center, wall glass blocks do not have the load-bearing capacity of masonry bricks and therefore are utilized in curtain walls. Glass block walls are constrained based on the framing in which they are set. If a masonry or steel frame exists, the maximum area of the wall can be , whereas the maximum area without a frame is .
The William Lescaze House and Office at 211 East 48th Street in New York City, built in 1934, was the city's first house to use glass blocks as walls.
Skylights and sidewalk lights
Glass blocks used in flooring are normally manufactured as a single solid piece, or as a hollow glass block with thicker side walls than the standard wall blocks. These blocks are normally cast into a reinforced concrete gridwork or set into a metal frame, allowing multiple units to be combined to span over openings in basements and roofs to create skylights. Glass wall blocks should not be used in flooring applications because the way in which they are manufactured does not allow them to support a load.
Construction methods
Glass wall blocks are fixed together to form complete walls by several methods – the most common method of construction is to bed the blocks together in a Portland cement-based mortar with reinforcing rods of steel placed within the mortar as recommended by the project architect or block manufacturer.
Other methods of construction include several proprietary systems whereby the mortar is replaced by timber or PVC extrusions.
Specialty types
Specialist glass blocks are produced for various applications including:
Bullet and vandal resistance
Bullet and vandal resistant blocks are generally solid glass or have very thick side walls similar to pavement blocks.
Fire resistant
Fire resistance of varying degrees can be achieved by several methods. Standard production hollow wall block will offer little fire resistance; however, resistance is improved by utilizing specially produced hollow blocks with thicker sidewalls, or the inclusion of a special layer of fire-resisting material between the two halves of the block during manufacture. Some manufacturers of glass blocks have developed a method of bonding two glass blocks together with adhesive, producing blocks of up to 160 mm (6½") thick with enhanced fire resistance. It is important that the block manufacturer's recommendations are followed with regards to the installation of fire resisting glass block walls, as without special construction techniques, the wall will not achieve the desired fire resistance.
Gas insulated
A recent innovation in the manufacture of glass blocks is the inclusion of argon gas within the hollow center of glass wall blocks. This advancement in production technique has resulted in a glass block which is able to offer significantly improved thermal insulation properties.
Colored
Some hollow glass wall blocks are available in colored variants. These colored variants fall into two categories. The first type is manufactured with UV stable colored glass and can be used in the same locations as standard clear glass blocks. The second type utilizes a colored material (dye or transparent paint) which is injected into the hollow center of the blocks to form a permanent coating, enabling vibrant colors to be achieved which are not possible with colored glass. However, the colored coating may not be UV stable and can fade in bright sunshine over time, and may therefore not be suitable for all locations.
19th century precursors
Falconnier
Modern glass bricks were preceded by Falconnier Hollow Glass Bricks in the late nineteenth century. Falconnier Bricks were blown glass bricks available in multiple colors and were formed in molds while the glass was molten. They could be used for walls or roofs and were joined with wire and cement. The suggested use for Falconnier glass bricks was in greenhouse construction due to the non-conductivity of the glass for temperature control and lack of porosity of glass for moisture control. They were touted for not tarnishing, trapping dust, or retaining water.
Prisms
Vault lights in sidewalks, which utilized prism lighting, were one of the first steps towards the modern hollow glass brick. At the end of the nineteenth century glass prisms became a popular way to diffuse light into spaces that would otherwise be difficult or unsafe to light via flame-based oil lamps (e.g. basements underneath sidewalks).
Examples of architectural use
Real-Time Control Building #3 in Edmonton, Canada.
Crown Fountain in Chicago, United States.
Maison de Verre (for House of Glass) in Paris, France
Michigan State Capitol in Lansing, Michigan
Hermès luxury retail space in Ginza, Tokyo, Japan by Renzo Piano
Streamline Moderne
Ibrox Stadium, Glasgow
Österreichische Postsparkasse in Vienna, Austria by Otto Wagner
Raphael's Refuge, outside of Flatonia, Texas
See also
Bottle wall
References
External links
"Architects are Rediscovering Glass Block" —Masonry Magazine, 2003
Masonry
Glass architecture
Building materials | Glass brick | [
"Physics",
"Materials_science",
"Engineering"
] | 1,361 | [
"Glass engineering and science",
"Masonry",
"Building engineering",
"Architecture",
"Glass architecture",
"Construction",
"Materials",
"Matter",
"Building materials"
] |
1,091,018 | https://en.wikipedia.org/wiki/Gas%20electron%20diffraction | Gas electron diffraction (GED) is one of the applications of electron diffraction techniques. The target of this method is the determination of the structure of gaseous molecules, i.e., the geometrical arrangement of the atoms from which a molecule is built up. GED is one of two experimental methods (besides microwave spectroscopy) to determine the structure of free molecules, undistorted by intermolecular forces, which are omnipresent in the solid and liquid state. The determination of accurate molecular structures by GED studies is fundamental for an understanding of structural chemistry.
Introduction
Diffraction occurs because the wavelength of electrons accelerated by a potential of a few thousand volts is of the same order of magnitude as internuclear distances in molecules. The principle is the same as that of other electron diffraction methods such as LEED and RHEED, but the obtainable diffraction pattern is considerably weaker than those of LEED and RHEED because the density of the target is about one thousand times smaller. Since the orientation of the target molecules relative to the electron beams is random, the internuclear distance information obtained is one-dimensional. Thus only relatively simple molecules can be completely structurally characterized by electron diffraction in the gas phase. It is possible to combine information obtained from other sources, such as rotational spectra, NMR spectroscopy or high-quality quantum-mechanical calculations with electron diffraction data, if the latter are not sufficient to determine the molecule's structure completely.
The total scattering intensity in GED is given as a function of the momentum transfer, which is defined as the difference between the wave vector of the incident electron beam and that of the scattered electron beam and has the reciprocal dimension of length. The total scattering intensity is composed of two parts: the atomic scattering intensity and the molecular scattering intensity. The former decreases monotonically and contains no information about the molecular structure. The latter has sinusoidal modulations as a result of the interference of the scattering spherical waves generated by the scattering from the atoms included in the target molecule. The interferences reflect the distributions of the atoms composing the molecules, so the molecular structure is determined from this part.
Experiment
Figure 1 shows a drawing and a photograph of an electron diffraction apparatus. Scheme 1 shows the schematic procedure of an electron diffraction experiment. A fast electron beam is generated in an electron gun and enters a diffraction chamber, typically at a vacuum of 10^−7 mbar. The electron beam hits a perpendicular stream of a gaseous sample effusing from a nozzle of small diameter (typically 0.2 mm). At this point, the electrons are scattered. Most of the sample is immediately condensed and frozen onto the surface of a cold trap held at -196 °C (liquid nitrogen). The scattered electrons are detected on the surface of a suitable detector at a well-defined distance from the point of scattering.
The scattering pattern consists of diffuse concentric rings (see Figure 2). The steep descent of intensity can be compensated for by passing the electrons through a rapidly rotating sector (Figure 3). This is cut in such a way that electrons with small scattering angles are more shadowed than those at wider scattering angles. The detector can be a photographic plate, an electron imaging plate (usual technique today) or other position sensitive devices such as hybrid pixel detectors (future technique).
The intensities generated from reading out the plates or processing intensity data from other detectors are then corrected for the sector effect. They are initially a function of distance between primary beam position and intensity, and then converted into a function of scattering angle. The so-called atomic intensity and the experimental background are subtracted to give the final experimental molecular scattering intensities as a function of s (the change of momentum).
These data are then processed by suitable fitting software like UNEX for refining a suitable model for the compound and to yield precise structural information in terms of bond lengths, angles and torsional angles.
Theory
GED can be described by scattering theory. The outcome if applied to gases with randomly oriented molecules is provided here in short:
Scattering occurs at each individual atom (), but also at pairs (also called molecular scattering) (), or triples (), of atoms.
s is the scattering variable or change of electron momentum, and its absolute value is defined as
|s| = (4π / λ) sin(θ/2),
with λ being the electron wavelength defined above, and θ being the scattering angle.
The above-mentioned contributions of scattering add up to the total scattering
where is the experimental background intensity, which is needed to describe the experiment completely.
The contribution of individual atom scattering is called atomic scattering and easy to calculate:
with , being the distance between the point of scattering and the detector, being the intensity of the primary electron beam, and being the scattering amplitude of the i-th atom. In essence, this is a summation over the scattering contributions of all atoms independent of the molecular structure. is the main contribution and easily obtained if the atomic composition of the gas (sum formula) is known.
The most interesting contribution is the molecular scattering, because it contains information about the distance between all pairs of atoms in a molecule (bonded or non-bonded):
with being the parameter of main interest: the atomic distance between two atoms, being the mean square amplitude of vibration between the two atoms, the anharmonicity constant (correcting the vibration description for deviations from a purely harmonic model), and is a phase factor, which becomes important if a pair of atoms with very different nuclear charge is involved.
The first part is similar to the atomic scattering, but contains two scattering factors of the involved atoms. Summation is performed over all atom pairs.
is negligible in most cases and not described here in more detail. is mostly determined by fitting and subtracting smooth functions to account for the background contribution.
So it is the molecular scattering intensity that is of interest, and this is obtained by calculation all other contributions and subtracting them from the experimentally measured total scattering function.
Results
Figure 5 shows two typical examples of results. The molecular scattering intensity curves are used to refine a structural model by means of a least squares fitting program. This yields precise structural information. The Fourier transformation of the molecular scattering intensity curves gives the radial distribution curves (RDC). These represent the probability of finding a certain distance between two nuclei of a molecule. The curves below the RDC represent the difference between the experiment and the model, i.e. the quality of the fit.
The very simple example in Figure 5 shows the results for evaporated white phosphorus, P4. It is a perfectly tetrahedral molecule and has thus only one P-P distance. This makes the molecular scattering intensity curve a very simple one; a sine curve which is damped due to molecular vibration. The radial distribution curve (RDC) shows a maximum at 2.1994 Å with a least-squares error of 0.0003 Å, represented as 2.1994(3) Å. The width of the peak represents the molecular vibration and is the result of Fourier transformation of the damping part. This peak width means that the P-P distance varies by this vibration within a certain range given as a vibrational amplitude u, in this example uT(P‒P) = 0.0560(5) Å.
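A simplified numerical sketch of this relationship, using the P-P distance and vibrational amplitude quoted above and neglecting scattering-factor phases and anharmonicity, is:

import numpy as np

r0 = 2.1994     # P-P distance, Angstrom (value quoted above)
u = 0.0560      # vibrational amplitude, Angstrom
s = np.linspace(0.05, 36.0, 6000)      # scattering variable, 1/Angstrom

# modified molecular scattering for a single internuclear distance: a damped sine curve
sM = np.exp(-0.5 * (u * s) ** 2) * np.sin(s * r0) / r0

# radial distribution curve via a sine (Fourier) transform of sM(s)
r_grid = np.linspace(0.5, 4.0, 701)
ds = s[1] - s[0]
rdc = np.array([(sM * np.sin(s * r)).sum() * ds for r in r_grid])
print(r_grid[np.argmax(rdc)])   # peaks near 2.20 Angstrom, i.e. at the P-P distance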
The slightly more complicated molecule P3As has two different distances P-P and P-As. Because their contributions overlap in the RDC, the peak is broader (also seen in a more rapid damping in the molecular scattering). The determination of these two independent parameters is more difficult and results in less precise parameter values than for P4.
Some selected other examples of important contributions to the structural chemistry of molecules are provided here:
Structure of diborane B2H6
Structure of the planar trisilylamine
Determinations of the structures of gaseous elemental phosphorus P4 and of the binary P3As
Determination of the structure of C60 and C70
Structure of tetranitromethane
Absence of local C3 symmetry in the simplest phosphonium ylide H2C=PMe3 and in amino-phosphanes like P(NMe2)3 and ylides H2C=P(NMe2)3
Determination of intramolecular London dispersion interaction effects on gas-phase and solid-state structures of diamondoid dimers
Links
http://molwiki.org/wiki/Main_Page—A free encyclopaedia, mainly focused on molecular structure and dynamics.
The story of gas-phase electron diffraction (GED) in Norway
References
Diffraction | Gas electron diffraction | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,770 | [
"Crystallography",
"Diffraction",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
1,091,100 | https://en.wikipedia.org/wiki/Similitude | Similitude is a concept applicable to the testing of engineering models. A model is said to have similitude with the real application if the two share geometric similarity, kinematic similarity and dynamic similarity. Similarity and similitude are interchangeable in this context.
The term dynamic similitude is often used as a catch-all because it implies that geometric and kinematic similitude have already been met.
Similitude's main application is in hydraulic and aerospace engineering to test fluid flow conditions with scaled models. It is also the primary theory behind many textbook formulas in fluid mechanics.
The concept of similitude is strongly tied to dimensional analysis.
Overview
Engineering models are used to study complex fluid dynamics problems where calculations and computer simulations aren't reliable. Models are usually smaller than the final design, but not always. Scale models allow testing of a design prior to building, and in many cases are a critical step in the development process.
Construction of a scale model, however, must be accompanied by an analysis to determine what conditions it is tested under. While the geometry may be simply scaled, other parameters, such as pressure, temperature or the velocity and type of fluid may need to be altered. Similitude is achieved when testing conditions are created such that the test results are applicable to the real design.
The following criteria are required to achieve similitude;
Geometric similarity – the model is the same shape as the application, usually scaled.
Kinematic similarity – fluid flow of both the model and real application must undergo similar time rates of change motions. (fluid streamlines are similar)
Dynamic similarity – ratios of all forces acting on corresponding fluid particles and boundary surfaces in the two systems are constant.
To satisfy the above conditions the application is analyzed;
All parameters required to describe the system are identified using principles from continuum mechanics.
Dimensional analysis is used to express the system with as few independent variables and as many dimensionless parameters as possible.
The values of the dimensionless parameters are held to be the same for both the scale model and application. This can be done because they are dimensionless and will ensure dynamic similitude between the model and the application. The resulting equations are used to derive scaling laws which dictate model testing conditions.
It is often impossible to achieve strict similitude during a model test. The greater the departure from the application's operating conditions, the more difficult achieving similitude is. In these cases some aspects of similitude may be neglected, focusing on only the most important parameters.
The design of marine vessels remains more of an art than a science in large part because dynamic similitude is especially difficult to attain for a vessel that is partially submerged: a ship is affected by wind forces in the air above it, by hydrodynamic forces within the water under it, and especially by wave motions at the interface between the water and the air. The scaling requirements for each of these phenomena differ, so models cannot replicate what happens to a full sized vessel nearly so well as can be done for an aircraft or submarine, each of which operates entirely within one medium.
Similitude is a term used widely in fracture mechanics relating to the strain life approach. Under given loading conditions the fatigue damage in an un-notched specimen is comparable to that of a notched specimen. Similitude suggests that the component fatigue life of the two objects will also be similar.
An example
Consider a submarine modeled at 1/40th scale. The application operates in sea water at 0.5 °C, moving at 5 m/s. The model will be tested in fresh water at 20 °C. Find the power required for the submarine to operate at the stated speed.
A free body diagram is constructed and the relevant relationships of force and velocity are formulated using techniques from continuum mechanics. The variables which describe the system are:
This example has five independent variables and three fundamental units. The fundamental units are: meter, kilogram, second.
Invoking the Buckingham π theorem shows that the system can be described with two dimensionless numbers and one independent variable.
Dimensional analysis is used to rearrange the units to form the Reynolds number () and pressure coefficient (). These dimensionless numbers account for all the variables listed above except F, which will be the test measurement. Since the dimensionless parameters will stay constant for both the test and the real application, they will be used to formulate scaling laws for the test.
Scaling laws:
The pressure () is not one of the five variables, but the force () is. The pressure difference (Δ) has thus been replaced with () in the pressure coefficient. This gives a required test velocity of:
.
A model test is then conducted at that velocity and the force that is measured in the model () is then scaled to find the force that can be expected for the real application ():
The power in watts required by the submarine is then:
Note that even though the model is scaled smaller, the water velocity needs to be increased for testing. This remarkable result shows how similitude in nature is often counterintuitive.
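A numerical sketch of the scaling described above is given below; the fluid property values are typical handbook figures and are assumptions of this sketch, not data from the example itself:

# Reynolds-number matching for the 1/40-scale submarine test
scale = 1 / 40
V_prototype = 5.0                      # m/s, full-scale speed

rho_sea, mu_sea = 1028.0, 1.88e-3      # sea water near 0.5 C: kg/m^3, Pa*s (assumed values)
rho_fresh, mu_fresh = 998.0, 1.00e-3   # fresh water at 20 C (assumed values)

nu_sea = mu_sea / rho_sea
nu_fresh = mu_fresh / rho_fresh

# Equal Reynolds numbers: V_m L_m / nu_m = V_p L_p / nu_p
V_model = V_prototype * (1 / scale) * (nu_fresh / nu_sea)
print(V_model)          # roughly 110 m/s - much faster than the full-scale speed

# Equal pressure (force) coefficients, F ~ rho V^2 L^2, give the force scaling:
force_ratio = (rho_sea / rho_fresh) * (V_prototype / V_model) ** 2 / scale ** 2
print(force_ratio)      # multiply the measured model force by this factor;
                        # the full-scale power is then F_prototype * V_prototype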
Typical applications
Fluid mechanics
Similitude has been well documented for a large number of engineering problems and is the basis of many textbook formulas and dimensionless quantities. These formulas and quantities are easy to use without having to repeat the laborious task of dimensional analysis and formula derivation. Simplification of the formulas (by neglecting some aspects of similitude) is common, and needs to be reviewed by the engineer for each application.
Similitude can be used to predict the performance of a new design based on data from an existing, similar design. In this case, the model is the existing design. Another use of similitude and models is in validation of computer simulations with the ultimate goal of eliminating the need for physical models altogether.
Another application of similitude is to replace the operating fluid with a different test fluid. Wind tunnels, for example, have trouble with air liquefying in certain conditions so helium is sometimes used. Other applications may operate in dangerous or expensive fluids so the testing is carried out in a more convenient substitute.
Some common applications of similitude and associated dimensionless numbers;
Solid mechanics: structural similitude
Similitude analysis is a powerful engineering tool to design the scaled-down structures. Although both dimensional analysis and direct use of the governing equations may be used to derive the scaling laws, the latter results in more specific scaling laws. The design of the scaled-down composite structures can be successfully carried out using the complete and partial similarities. In the design of the scaled structures under complete similarity condition, all the derived scaling laws must be satisfied between the model and prototype which yields the perfect similarity between the two scales. However, the design of a scaled-down structure which is perfectly similar to its prototype has the practical limitation, especially for laminated structures. Relaxing some of the scaling laws may eliminate the limitation of the design under complete similarity condition and yields the scaled models that are partially similar to their prototype. However, the design of the scaled structures under the partial similarity condition must follow a deliberate methodology to ensure the accuracy of the scaled structure in predicting the structural response of the prototype. Scaled models can be designed to replicate the dynamic characteristic (e.g. frequencies, mode shapes and damping ratios) of their full-scale counterparts. However, appropriate response scaling laws need to be derived to predict the dynamic response of the full-scale prototype from the experimental data of the scaled model.
See also
Similitude of ship models
References
Further reading
External links
MIT open courseware lecture notes on Similitude for marine engineering
Dimensional analysis
Conceptual modelling | Similitude | [
"Engineering"
] | 1,546 | [
"Dimensional analysis",
"Mechanical engineering"
] |
1,091,136 | https://en.wikipedia.org/wiki/Von%20Mises%20yield%20criterion | In continuum mechanics, the maximum distortion energy criterion (also von Mises yield criterion) states that yielding of a ductile material begins when the second invariant of deviatoric stress reaches a critical value. It is a part of plasticity theory that mostly applies to ductile materials, such as some metals. Prior to yield, material response can be assumed to be of a linear elastic, nonlinear elastic, or viscoelastic behavior.
In materials science and engineering, the von Mises yield criterion is also formulated in terms of the von Mises stress or equivalent tensile stress, . This is a scalar value of stress that can be computed from the Cauchy stress tensor. In this case, a material is said to start yielding when the von Mises stress reaches a value known as yield strength, . The von Mises stress is used to predict yielding of materials under complex loading from the results of uniaxial tensile tests. The von Mises stress satisfies the property where two stress states with equal distortion energy have an equal von Mises stress.
Because the von Mises yield criterion is independent of the first stress invariant, , it is applicable for the analysis of plastic deformation for ductile materials such as metals, as onset of yield for these materials does not depend on the hydrostatic component of the stress tensor.
Although it has been believed it was formulated by James Clerk Maxwell in 1865, Maxwell only described the general conditions in a letter to William Thomson (Lord Kelvin). Richard Edler von Mises rigorously formulated it in 1913. Tytus Maksymilian Huber (1904), in a paper written in Polish, anticipated to some extent this criterion by properly relying on the distortion strain energy, not on the total strain energy as his predecessors. Heinrich Hencky formulated the same criterion as von Mises independently in 1924. For the above reasons this criterion is also referred to as the "Maxwell–Huber–Hencky–von Mises theory".
Mathematical formulation
Mathematically the von Mises yield criterion is expressed as:
J2 = k²
Here k is the yield stress of the material in pure shear. As shown later in this article, at the onset of yielding, the magnitude of the shear yield stress in pure shear is √3 times lower than the tensile yield stress in the case of simple tension. Thus, we have:
k = σy / √3
where σy is the tensile yield strength of the material. If we set the von Mises stress equal to the yield strength and combine the above equations, the von Mises yield criterion is written as:
σv = √(3 J2) = σy
or
σv² = 3 J2 = σy²
Substituting J2 with the Cauchy stress tensor components, we get
σv² = ½[(σ11 − σ22)² + (σ22 − σ33)² + (σ33 − σ11)²] + 3(σ12² + σ23² + σ31²) = 3k² = σy²,
where the σij are the components of the Cauchy stress tensor and J2 = ½ sij sij in terms of the deviatoric stress s = σ − (tr σ / 3)·I. This equation defines the yield surface as a circular cylinder (see Figure) whose yield curve, or intersection with the deviatoric plane, is a circle with radius √2·k, or √(2/3)·σy. This implies that the yield condition is independent of hydrostatic stresses.
Reduced von Mises equation for different stress conditions
Uniaxial (1D) stress
In the case of uniaxial stress or simple tension, σ1 ≠ 0, σ2 = σ3 = 0, the von Mises criterion simply reduces to
σ1 = σy,
which means the material starts to yield when σ1 reaches the yield strength of the material σy, in agreement with the definition of tensile (or compressive) yield strength.
Multi-axial (2D or 3D) stress
An equivalent tensile stress or equivalent von-Mises stress, is used to predict yielding of materials under multiaxial loading conditions using results from simple uniaxial tensile tests. Thus, we define
where are components of stress deviator tensor :
.
In this case, yielding occurs when the equivalent stress, , reaches the yield strength of the material in simple tension, . As an example, the stress state of a steel beam in compression differs from the stress state of a steel axle under torsion, even if both specimens are of the same material. In view of the stress tensor, which fully describes the stress state, this difference manifests in six degrees of freedom, because the stress tensor has six independent components. Therefore, it is difficult to tell which of the two specimens is closer to the yield point or has even reached it. However, by means of the von Mises yield criterion, which depends solely on the value of the scalar von Mises stress, i.e., one degree of freedom, this comparison is straightforward: A larger von Mises value implies that the material is closer to the yield point.
In the case of pure shear stress, σ12 = σ21 ≠ 0, while all other σij = 0, the von Mises criterion becomes:
σ12 = σy / √3.
This means that, at the onset of yielding, the magnitude of the shear stress in pure shear is √3 times lower than the yield stress in the case of simple tension. The von Mises yield criterion for pure shear stress, expressed in principal stresses, is
In the case of principal plane stress, $\sigma_3 = 0$ and $\sigma_{12} = \sigma_{23} = \sigma_{31} = 0$, the von Mises criterion becomes:
$$\sigma_1^2 - \sigma_1\sigma_2 + \sigma_2^2 = \sigma_y^2$$
This equation represents an ellipse in the plane $(\sigma_1, \sigma_2)$.
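As a short worked check (added here for clarity), substituting the in-plane principal stresses of pure shear, $\sigma_1 = -\sigma_2 = \tau$ with $\sigma_3 = 0$, into this ellipse recovers the shear yield stress quoted earlier:

```latex
\tau^2 - \tau(-\tau) + \tau^2 = 3\tau^2 = \sigma_y^2
\quad\Longrightarrow\quad
\tau = \frac{\sigma_y}{\sqrt{3}} = k .
```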
Summary
Physical interpretation of the von Mises yield criterion
Hencky (1924) offered a physical interpretation of the von Mises criterion, suggesting that yielding begins when the elastic energy of distortion reaches a critical value. For this reason, the von Mises criterion is also known as the maximum distortion strain energy criterion. This comes from the relation between $J_2$ and the elastic strain energy of distortion $W_D$:
$$W_D = \frac{J_2}{2G},$$
with the elastic shear modulus $G = \frac{E}{2(1+\nu)}$.
In 1937 Arpad L. Nadai suggested that yielding begins when the octahedral shear stress reaches a critical value, i.e. the octahedral shear stress of the material at yield in simple tension. In this case, the von Mises yield criterion is also known as the maximum octahedral shear stress criterion in view of the direct proportionality that exists between $\sigma_v$ and the octahedral shear stress, $\tau_{oct}$, which by definition is
$$\tau_{oct} = \frac{\sqrt{2}}{3}\,\sigma_v,$$
thus we have, at the onset of yielding,
$$\tau_{oct} = \frac{\sqrt{2}}{3}\,\sigma_y.$$
Strain energy density consists of two components: volumetric (or dilatational) and distortional. The volumetric component is responsible for a change in volume without any change in shape; the distortional component is responsible for shear deformation or change in shape.
Practical engineering usage of the von Mises yield criterion
As shown in the equations above, the use of the von Mises criterion as a yield criterion is exactly applicable only when the material properties are isotropic and the ratio of the shear yield strength to the tensile yield strength has the following value:
$$\frac{\sigma_{sy}}{\sigma_y} = \frac{1}{\sqrt{3}} \approx 0.577$$
Since no material will have this ratio precisely, in practice it is necessary to use engineering judgement to decide what failure theory is appropriate for a given material. Alternately, for use of the Tresca theory, the same ratio is defined as 0.5.
The yield margin of safety is written as
$$MS_{yld} = \frac{\sigma_y}{\sigma_v} - 1.$$
See also
Yield surface
Huber's equation
Henri Tresca
Stephen Timoshenko
Mohr–Coulomb theory
Hoek–Brown failure criterion
Yield (engineering)
Stress
Strain
3-D elasticity
Bigoni–Piccolroaz yield criterion
References
Materials science
Plasticity (physics)
Yield criteria
Structural analysis | Von Mises yield criterion | [
"Physics",
"Materials_science",
"Engineering"
] | 1,393 | [
"Structural engineering",
"Applied and interdisciplinary physics",
"Deformation (mechanics)",
"Structural analysis",
"Materials science",
"Plasticity (physics)",
"nan",
"Aerospace engineering",
"Mechanical engineering"
] |
2,317,010 | https://en.wikipedia.org/wiki/Bromine%20trifluoride | Bromine trifluoride is an interhalogen compound with the formula BrF3. At room temperature, it is a straw-coloured liquid with a pungent odor which decomposes violently on contact with water and organic compounds. It is a powerful fluorinating agent and an ionizing inorganic solvent. It is used to produce uranium hexafluoride (UF6) in the processing and reprocessing of nuclear fuel.
Synthesis
Bromine trifluoride was first described by Paul Lebeau in 1906, who obtained the material by the reaction of bromine with fluorine at 20 °C:
Br2 + 3 F2 → 2 BrF3
The disproportionation of bromine monofluoride also gives bromine trifluoride:
3 BrF → BrF3 + Br2
Structure
Like ClF3 and IF3, the BrF3 molecule is T-shaped and planar. In the VSEPR formalism, the bromine center is assigned two electron lone pairs. The distance from the bromine atom to each axial fluorine atom is 1.81 Å and to the equatorial fluorine atom is 1.72 Å. The angle between an axial fluorine atom and the equatorial fluorine atom is slightly smaller than 90° — the 86.2° angle observed is due to the repulsion generated by the electron pairs being greater than that of the Br-F bonds.
Chemical properties
In a highly exothermic reaction, BrF3 reacts with water to form hydrobromic acid and hydrofluoric acid:
BrF3 is a fluorinating agent, but less reactive than ClF3. Even at −196 °C, it reacts with acetonitrile to give 1,1,1-trifluoroethane:
2 CH3CN + 2 BrF3 → 2 CH3CF3 + Br2 + N2
The liquid is conducting, owing to autoionisation:
2 BrF3 ⇌ BrF2+ + BrF4−
Fluoride salts dissolve readily in BrF3, forming tetrafluorobromates:
KF + BrF3 → KBrF4
It reacts as a fluoride ion donor toward strong fluoride acceptors:
BrF3 + SbF5 → [BrF2][SbF6]
References
External links
WebBook page for BrF3
Bromine(III) compounds
Fluorides
Interhalogen compounds
Fluorinating agents
Oxidizing agents
Substances discovered in the 1900s | Bromine trifluoride | [
"Chemistry"
] | 435 | [
"Redox",
"Interhalogen compounds",
"Oxidizing agents",
"Salts",
"Fluorinating agents",
"Reagents for organic chemistry",
"Fluorides"
] |
2,317,437 | https://en.wikipedia.org/wiki/Synonymous%20substitution | A synonymous substitution (often called a silent substitution though they are not always silent) is the evolutionary substitution of one base for another in an exon of a gene coding for a protein, such that the produced amino acid sequence is not modified. This is possible because the genetic code is "degenerate", meaning that some amino acids are coded for by more than one three-base-pair codon; since some of the codons for a given amino acid differ by just one base pair from others coding for the same amino acid, a mutation that replaces the "normal" base by one of the alternatives will result in incorporation of the same amino acid into the growing polypeptide chain when the gene is translated. Synonymous substitutions and mutations affecting noncoding DNA are often considered silent mutations; however, it is not always the case that the mutation is silent.
Since there are 22 codes for 64 codons, we should roughly expect a random substitution to be synonymous with a probability of about 22/64 = 34%. The actual value is around 20%.
A synonymous mutation can affect transcription, splicing, mRNA transport, and translation, any of which could alter the resulting phenotype, rendering the synonymous mutation non-silent. The substrate specificity of the tRNA to the rare codon can affect the timing of translation, and in turn the co-translational folding of the protein. This is reflected in the codon usage bias that is observed in many species. A nonsynonymous substitution results in a change in amino acid that may be arbitrarily further classified as conservative (a change to an amino acid with similar physiochemical properties), semi-conservative (e.g. negatively to positively charged amino acid), or radical (vastly different amino acid).
Degeneracy of the genetic code
Protein translation involves a set of twenty amino acids. Each of these amino acids is coded for by a sequence of three DNA base pairs called a codon. Because there are 64 possible codons, but only 20-22 encoded amino acids (in nature) and a stop signal (i.e. up to three codons that do not code for any amino acid and are known as stop codons, indicating that translation should stop), some amino acids are coded for by 2, 3, 4, or 6 different codons. For example, the codons TTT and TTC both code for the amino acid phenylalanine. This is often referred to as redundancy of the genetic code. There are two mechanisms for redundancy: several different transfer RNAs can deliver the same amino acid, or one tRNA can have a non-standard wobble base in position three of the anti-codon, which recognises more than one base in the codon.
In the above phenylalanine example, suppose that the base in position 3 of a TTT codon got substituted to a C, leaving the codon TTC. The amino acid at that position in the protein will remain a phenylalanine. Hence, the substitution is a synonymous one.
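The codon arithmetic above can be checked mechanically. The sketch below (an illustration, assuming the Biopython package is available for its standard codon table) enumerates every single-base substitution of a codon and reports which fraction leaves the encoded meaning unchanged:

```python
# Count which single-base substitutions of a codon are synonymous,
# using Biopython's standard genetic code (assumes Biopython is installed).
from Bio.Data import CodonTable

table = CodonTable.unambiguous_dna_by_id[1]       # the standard genetic code
meaning = dict(table.forward_table)               # codon -> amino acid letter
for stop in table.stop_codons:                    # treat stop codons as their own "meaning"
    meaning[stop] = "*"

def synonymous_fraction(codon):
    """Fraction of the 9 possible single-base substitutions that are synonymous."""
    same = total = 0
    for pos in range(3):
        for base in "ACGT":
            if base == codon[pos]:
                continue
            mutant = codon[:pos] + base + codon[pos + 1:]
            total += 1
            same += meaning[mutant] == meaning[codon]
    return same / total

print(synonymous_fraction("TTT"))   # e.g. TTT -> TTC (both Phe) counts as synonymous
# Averaging over all 64 codons gives a naive genome-wide expectation:
print(sum(synonymous_fraction(c) for c in meaning) / len(meaning))
```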
Evolution
When a synonymous or silent mutation occurs, the change is often assumed to be neutral, meaning that it does not affect the fitness of the individual carrying the new gene to survive and reproduce.
Synonymous changes may not be neutral because certain codons are translated more efficiently (faster and/or more accurately) than others. For example, when a handful of synonymous changes in the fruit fly alcohol dehydrogenase gene were introduced, changing several codons to sub-optimal synonyms, production of the encoded enzyme was reduced and the adult flies showed lower ethanol tolerance.
Many organisms, from bacteria through animals, display biased use of certain synonymous codons. Such codon usage bias may arise for different reasons, some selective, and some neutral. In Saccharomyces cerevisiae synonymous codon usage has been shown to influence mRNA folding stability, with mRNA encoding different protein secondary structure preferring different codons.
Another reason why synonymous changes are not always neutral is the fact that exon sequences close to exon-intron borders function as RNA splicing signals. When the splicing signal is destroyed by a synonymous mutation, the exon does not appear in the final protein. This results in a truncated protein. One study found that about a quarter of synonymous variations affecting exon 12 of the cystic fibrosis transmembrane conductance regulator gene result in that exon being skipped.
See also
Ka/Ks ratio
Missense mutation
Nonsynonymous substitution
Point mutation
Expanded genetic code, where more than 20-22 natural encoded amino acids are used
References
Molecular evolution
Molecular biology
Protein biosynthesis
Gene expression
Mutation
Neutral theory | Synonymous substitution | [
"Chemistry",
"Biology"
] | 967 | [
"Evolutionary processes",
"Protein biosynthesis",
"Molecular evolution",
"Gene expression",
"Neutral theory",
"Biosynthesis",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry",
"Non-Darwinian evolution",
"Biology theories"
] |
2,318,333 | https://en.wikipedia.org/wiki/RRKM%20theory | The Rice–Ramsperger–Kassel–Marcus (RRKM) theory is a theory of chemical reactivity. It was developed by Rice and Ramsperger in 1927 and Kassel in 1928 (RRK theory) and generalized (into the RRKM theory) in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable the computation of simple estimates of the unimolecular reaction rates from a few characteristics of the potential energy surface.
Assumption
Assume that the molecule consists of harmonic oscillators, which are connected and can exchange energy with each other.
Assume the excitation energy of the molecule to be $E^*$, which enables the reaction to occur.
The rate of intramolecular energy redistribution is much faster than that of the reaction itself.
As a corollary to the above, the potential energy surface does not have any "bottlenecks" for which certain vibrational modes may be trapped for longer than the average time of the reaction
Derivation
Assume that $A^*$ is an energized molecule:
$$A^* \longrightarrow A^{\ddagger} \longrightarrow P$$
where $P$ stands for the product, and $A^{\ddagger}$ for the critical atomic configuration with the maximum energy along the reaction coordinate.
The unimolecular rate constant is obtained as follows:
where $k(E,J)$ is the microcanonical transition state theory rate constant, $N^{\ddagger}(E,J)$ is the sum of states for the active degrees of freedom in the transition state, $J$ is the quantum number of angular momentum, $\omega$ is the collision frequency between the energized molecule and bath molecules, and $Q_1$ and $Q_2$ are the molecular vibrational and external rotational partition functions.
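At the heart of the RRKM expression is the microcanonical rate $k(E) = N^{\ddagger}(E - E_0)/(h\,\rho(E))$, where $N^{\ddagger}$ is the sum of states of the transition state and $\rho$ the density of states of the energized molecule. The sketch below evaluates this for harmonic oscillators with the Beyer–Swinehart direct-count algorithm; the frequencies and barrier height are illustrative placeholders, not data for any particular molecule:

```python
import numpy as np

C_CM_PER_S = 2.99792458e10  # speed of light in cm/s; converts N/rho (in cm^-1) into a rate in s^-1

def beyer_swinehart(freqs_cm, e_max_cm, grain_cm=10.0):
    """Direct count of harmonic-oscillator vibrational states (Beyer-Swinehart).

    Returns an array t where t[i] is the number of states in the energy grain
    [i*grain, (i+1)*grain); the cumulative sum gives the sum of states N(E),
    and t[i]/grain approximates the density of states rho(E) per cm^-1.
    """
    n_bins = int(e_max_cm / grain_cm) + 1
    t = np.zeros(n_bins)
    t[0] = 1.0
    for nu in freqs_cm:
        step = max(1, int(round(nu / grain_cm)))
        for i in range(step, n_bins):
            t[i] += t[i - step]
    return t

# Illustrative (made-up) frequencies in cm^-1; E0 is the barrier, E the internal energy.
freqs_reactant = [3000, 1500, 1200, 900, 600, 400]
freqs_ts = [3000, 1400, 1100, 800, 500]   # one oscillator fewer: it became the reaction coordinate
e0_cm, e_cm, grain = 15000.0, 25000.0, 10.0

rho_E = beyer_swinehart(freqs_reactant, e_cm, grain)[int(e_cm / grain)] / grain  # states per cm^-1 at E
N_ts = beyer_swinehart(freqs_ts, e_cm - e0_cm, grain).sum()                      # sum of TS states up to E - E0

k_E = C_CM_PER_S * N_ts / rho_E   # k(E) = N(E - E0) / (h * rho(E)), with energies in wavenumbers
print(f"k(E) ~ {k_E:.2e} s^-1")
```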
See also
Transition state theory
References
External links
An RRKM online calculator
Chemical physics
Quantum chemistry
Molecular physics
Chemical kinetics | RRKM theory | [
"Physics",
"Chemistry"
] | 323 | [
"Chemical reaction engineering",
"Applied and interdisciplinary physics",
"Quantum chemistry",
"Molecular physics",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
"Chemical kinetics",
"Chemical physics",
" and optical physics"
] |
2,318,488 | https://en.wikipedia.org/wiki/Classification%20of%20electromagnetic%20fields | In differential geometry and theoretical physics, the classification of electromagnetic fields is a pointwise classification of bivectors at each point of a Lorentzian manifold. It is used in the study of solutions of Maxwell's equations and has applications in Einstein's theory of relativity.
The classification theorem
The electromagnetic field at a point p (i.e. an event) of a Lorentzian spacetime is represented by a real bivector defined over the tangent space at p.
The tangent space at p is isometric as a real inner product space to E1,3. That is, it has the same notion of vector magnitude and angle as Minkowski spacetime. To simplify the notation, we will assume the spacetime is Minkowski spacetime. This tends to blur the distinction between the tangent space at p and the underlying manifold; fortunately, nothing is lost by this specialization, for reasons we discuss at the end of the article.
The classification theorem for electromagnetic fields characterizes the bivector F in relation to the Lorentzian metric by defining and examining the so-called "principal null directions". Let us explain this.
The bivector Fab yields a skew-symmetric linear operator $F^{a}{}_{b} = F^{ac}\eta_{cb}$, defined by lowering one index with the metric. It acts on the tangent space at p by $r^{a} \mapsto F^{a}{}_{b}\,r^{b}$. We will use the symbol F to denote either the bivector or the operator, according to context.
We mention a dichotomy drawn from exterior algebra. A bivector that can be written as $F = v \wedge w$, where v, w are linearly independent, is called simple. Any nonzero bivector over a 4-dimensional vector space either is simple, or can be written as $F = v \wedge w + x \wedge y$, where v, w, x, and y are linearly independent; the two cases are mutually exclusive. Stated like this, the dichotomy makes no reference to the metric η, only to exterior algebra. But it is easily seen that the associated skew-symmetric linear operator Fab has rank 2 in the former case and rank 4 in the latter case.
To state the classification theorem, we consider the eigenvalue problem for F, that is, the problem of finding eigenvalues λ and eigenvectors r which satisfy the eigenvalue equation
$$F^{a}{}_{b}\, r^{b} = \lambda\, r^{a}.$$
The skew-symmetry of F implies that:
either the eigenvector r is a null vector (i.e. $\eta(r,r) = 0$), or the eigenvalue λ is zero, or both.
A 1-dimensional subspace generated by a null eigenvector is called a principal null direction of the bivector.
The classification theorem characterizes the possible principal null directions of a bivector. It states that one of the following must hold for any nonzero bivector:
the bivector has one "repeated" principal null direction; in this case, the bivector itself is said to be null,
the bivector has two distinct principal null directions; in this case, the bivector is called non-null.
Furthermore, for any non-null bivector, the two eigenvalues associated with the two distinct principal null directions have the same magnitude but opposite sign, $\lambda = \pm\nu$, so we have three subclasses of non-null bivectors:
spacelike: ν = 0
timelike: ν ≠ 0 and rank F = 2
non-simple: ν ≠ 0 and rank F = 4,
where the rank refers to the rank of the linear operator F.
Physical interpretation
The algebraic classification of bivectors given above has an important application in relativistic physics: the electromagnetic field is represented by a skew-symmetric second rank tensor field (the electromagnetic field tensor) so we immediately obtain an algebraic classification of electromagnetic fields.
In a cartesian chart on Minkowski spacetime, the electromagnetic field tensor has components
where $E_i$ and $B_i$ denote respectively the components of the electric and magnetic fields, as measured by an inertial observer (at rest in our coordinates). As usual in relativistic physics, we will find it convenient to work with geometrised units in which $c = 1$. In the "Index gymnastics" formalism of special relativity, the Minkowski metric $\eta$ is used to raise and lower indices.
Invariants
The fundamental invariants of the electromagnetic field are:
$$P = \tfrac{1}{2}F_{ab}F^{ab} = \lVert\vec B\rVert^2 - \lVert\vec E\rVert^2,
\qquad
Q = \vec E\cdot\vec B \;\;(\text{proportional to the pseudoscalar } \epsilon_{abcd}F^{ab}F^{cd}).$$
(Fundamental means that every other invariant can be expressed in terms of these two.)
A null electromagnetic field is characterised by $P = Q = 0$. In this case, the invariants reveal that the electric and magnetic fields are perpendicular and that they are of the same magnitude (in geometrised units). An example of a null field is a plane electromagnetic wave in Minkowski space.
A non-null field is characterised by $P^2 + Q^2 \neq 0$. If $Q = \vec E\cdot\vec B = 0$, there exists an inertial reference frame for which either the electric or magnetic field vanishes. (These correspond respectively to magnetostatic and electrostatic fields.) If $Q \neq 0$, there exists an inertial frame in which the electric and magnetic fields are proportional.
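A minimal numerical sketch of this classification (using geometrised units with $c = 1$, and taking $P = \lVert\vec B\rVert^2 - \lVert\vec E\rVert^2$ and $Q = \vec E\cdot\vec B$ as the two invariants, as above):

```python
import numpy as np

def classify_em_field(E, B, tol=1e-12):
    """Pointwise algebraic classification of an electromagnetic field from the
    two fundamental invariants P = |B|^2 - |E|^2 and Q = E . B
    (geometrised units; E and B are 3-vectors in some inertial frame)."""
    E, B = np.asarray(E, float), np.asarray(B, float)
    P = B @ B - E @ E
    Q = E @ B
    if abs(P) < tol and abs(Q) < tol:
        return "null (e.g. a plane wave: |E| = |B|, E perpendicular to B)"
    if abs(Q) < tol:
        return "non-null, E.B = 0: a frame exists where E or B vanishes"
    return "non-null, E.B != 0: a frame exists where E and B are proportional"

print(classify_em_field([1, 0, 0], [0, 1, 0]))   # plane-wave-like -> null
print(classify_em_field([0, 0, 0], [0, 0, 2]))   # purely magnetic -> non-null
```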
Curved Lorentzian manifolds
So far we have discussed only Minkowski spacetime. According to the (strong) equivalence principle, if we simply replace "inertial frame" above with a frame field, everything works out exactly the same way on curved manifolds.
See also
Electromagnetic peeling theorem
Electrovacuum solution
Lorentz group
Petrov classification
Notes
References
See section 25.
Mathematical physics
Electromagnetism
Lorentzian manifolds
electromagnetic fields | Classification of electromagnetic fields | [
"Physics",
"Mathematics"
] | 1,087 | [
"Electromagnetism",
"Physical phenomena",
"Applied mathematics",
"Theoretical physics",
"Fundamental interactions",
"Mathematical physics"
] |
2,320,078 | https://en.wikipedia.org/wiki/Anosov%20diffeomorphism | In mathematics, more particularly in the fields of dynamical systems and geometric topology, an Anosov map on a manifold M is a certain type of mapping, from M to itself, with rather clearly marked local directions of "expansion" and "contraction". Anosov systems are a special case of Axiom A systems.
Anosov diffeomorphisms were introduced by Dmitri Victorovich Anosov, who proved that their behaviour was in an appropriate sense generic (when they exist at all).
Overview
Three closely related definitions must be distinguished:
If a differentiable map f on M has a hyperbolic structure on the tangent bundle, then it is called an Anosov map. Examples include the Bernoulli map, and Arnold's cat map.
If the map is a diffeomorphism, then it is called an Anosov diffeomorphism.
If a flow on a manifold splits the tangent bundle into three invariant subbundles, with one subbundle that is exponentially contracting, and one that is exponentially expanding, and a third, non-expanding, non-contracting one-dimensional sub-bundle (spanned by the flow direction), then the flow is called an Anosov flow.
A classical example of Anosov diffeomorphism is the Arnold's cat map.
Anosov proved that Anosov diffeomorphisms are structurally stable and form an open subset of mappings (flows) with the C1 topology.
Not every manifold admits an Anosov diffeomorphism; for example, there are no such diffeomorphisms on the sphere. The simplest examples of compact manifolds admitting them are the tori: they admit the so-called linear Anosov diffeomorphisms, which are automorphisms of the torus induced by integer matrices having no eigenvalue of modulus 1. It was proved that any other Anosov diffeomorphism on a torus is topologically conjugate to one of this kind.
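A minimal numeric illustration of such a linear Anosov diffeomorphism is Arnold's cat map, induced on the 2-torus by the integer matrix $\begin{pmatrix}2&1\\1&1\end{pmatrix}$; its eigenvalues $(3\pm\sqrt 5)/2$ give the expanding and contracting directions:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 1]])            # Arnold's cat map on the 2-torus: x -> A x (mod 1)

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                    # ~2.618 (expanding) and ~0.382 (contracting); det A = 1

x = np.array([0.1234, 0.5678])    # an arbitrary point on the torus
for _ in range(5):
    x = (A @ x) % 1.0             # iterate the map
    print(x)
```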
The problem of classifying manifolds that admit Anosov diffeomorphisms turned out to be very difficult, and still has no answer for dimension over 3. The only known examples are infranilmanifolds, and it is conjectured that they are the only ones.
A sufficient condition for transitivity is that all points are nonwandering: $\Omega(f) = M$. This in turn holds for codimension-one Anosov diffeomorphisms (i.e., those for which the contracting or the expanding subbundle is one-dimensional) and for codimension one Anosov flows on manifolds of dimension greater than three, as well as Anosov flows whose Mather spectrum is contained in two sufficiently thin annuli. It is not known whether Anosov diffeomorphisms are transitive (except on infranilmanifolds), but Anosov flows need not be topologically transitive.
Also, it is unknown if every volume-preserving Anosov diffeomorphism is ergodic. Anosov proved it under a C² smoothness assumption. It is also true for C^{1+α} volume-preserving Anosov diffeomorphisms.
For a transitive Anosov diffeomorphism there exists a unique SRB measure (the acronym stands for Sinai, Ruelle and Bowen) supported on M such that its basin is of full volume, where the basin is the set of points whose orbit time-averages of continuous functions converge to the corresponding averages with respect to the measure.
Anosov flow on (tangent bundles of) Riemann surfaces
As an example, this section develops the case of the Anosov flow on the tangent bundle of a Riemann surface of negative curvature. This flow can be understood in terms of the flow on the tangent bundle of the Poincaré half-plane model of hyperbolic geometry. Riemann surfaces of negative curvature may be defined as Fuchsian models, that is, as quotients of the upper half-plane by a Fuchsian group. For the following, let H be the upper half-plane; let Γ be a Fuchsian group; let M = H/Γ be a Riemann surface of negative curvature, the quotient of H by the action of the group Γ; let Q be the tangent bundle of unit-length vectors on the manifold M, and let P be the tangent bundle of unit-length vectors on H. Note that a bundle of unit-length vectors on a surface is the principal bundle of a complex line bundle.
Lie vector fields
One starts by noting that P is isomorphic to the Lie group PSL(2,R). This group is the group of orientation-preserving isometries of the upper half-plane. The Lie algebra of PSL(2,R) is sl(2,R), and is represented by the matrices
which have the algebra
The exponential maps
define right-invariant flows on the manifold of , and likewise on . Defining and , these flows define vector fields on P and Q, whose vectors lie in TP and TQ. These are just the standard, ordinary Lie vector fields on the manifold of a Lie group, and the presentation above is a standard exposition of a Lie vector field.
Anosov flow
The connection to the Anosov flow comes from the realization that is the geodesic flow on P and Q. Lie vector fields being (by definition) left invariant under the action of a group element, one has that these fields are left invariant under the specific elements of the geodesic flow. In other words, the spaces TP and TQ are split into three one-dimensional spaces, or subbundles, each of which are invariant under the geodesic flow. The final step is to notice that vector fields in one subbundle expand (and expand exponentially), those in another are unchanged, and those in a third shrink (and do so exponentially).
More precisely, the tangent bundle TQ may be written as the direct sum
or, at a point , the direct sum
corresponding to the Lie algebra generators Y, J and X, respectively, carried, by the left action of group element g, from the origin e to the point q. That is, one has and . These spaces are each subbundles, and are preserved (are invariant) under the action of the geodesic flow; that is, under the action of group elements .
To compare the lengths of vectors in TQ at different points q, one needs a metric. Any inner product at the identity extends to a left-invariant Riemannian metric on P, and thus to a Riemannian metric on Q. The length of a vector in the expanding subbundle grows exponentially as exp(t) under the action of the geodesic flow, the length of a vector in the contracting subbundle shrinks exponentially as exp(−t), and vectors in the flow-direction subbundle are unchanged. This may be seen by examining how the group elements commute. The geodesic flow is invariant,
but the other two shrink and expand:
and
where we recall that a tangent vector in TQ is given by the derivative, with respect to t, of the corresponding curve, evaluated at t = 0.
Geometric interpretation of the Anosov flow
When acting on the point of the upper half-plane, corresponds to a geodesic on the upper half plane, passing through the point . The action is the standard Möbius transformation action of SL(2,R) on the upper half-plane, so that
A general geodesic is given by
with a, b, c and d real, with . The curves and are called horocycles. Horocycles correspond to the motion of the normal vectors of a horosphere on the upper half-plane.
See also
Ergodic flow
Morse–Smale system
Pseudo-Anosov map
Notes
References
Anthony Manning, Dynamics of geodesic and horocycle flows on surfaces of constant negative curvature, (1991), appearing as Chapter 3 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, Eds. Oxford University Press, Oxford (1991). (Provides an expository introduction to the Anosov flow on SL(2,R).)
Toshikazu Sunada, Magnetic flows on a Riemann surface, Proc. KAIST Math. Workshop (1993), 93–108.
Diffeomorphisms
Dynamical systems
Hyperbolic geometry | Anosov diffeomorphism | [
"Physics",
"Mathematics"
] | 1,709 | [
"Mechanics",
"Dynamical systems"
] |
2,320,130 | https://en.wikipedia.org/wiki/Quantum%20Monte%20Carlo | Quantum Monte Carlo encompasses a large family of computational methods whose common aim is the study of complex quantum systems. One of the major goals of these approaches is to provide a reliable solution (or an accurate approximation) of the quantum many-body problem. The diverse flavors of quantum Monte Carlo approaches all share the common use of the Monte Carlo method to handle the multi-dimensional integrals that arise in the different formulations of the many-body problem.
Quantum Monte Carlo methods allow for a direct treatment and description of complex many-body effects encoded in the wave function, going beyond mean-field theory. In particular, there exist numerically exact and polynomially-scaling algorithms to exactly study static properties of boson systems without geometrical frustration. For fermions, there exist very good approximations to their static properties and numerically exact exponentially scaling quantum Monte Carlo algorithms, but none that are both.
Background
In principle, any physical system can be described by the many-body Schrödinger equation as long as the constituent particles are not moving "too" fast; that is, they are not moving at a speed comparable to that of light, and relativistic effects can be neglected. This is true for a wide range of electronic problems in condensed matter physics, in Bose–Einstein condensates and superfluids such as liquid helium. The ability to solve the Schrödinger equation for a given system allows prediction of its behavior, with important applications ranging from materials science to complex biological systems.
The difficulty is however that solving the Schrödinger equation requires the knowledge of the many-body wave function in the many-body Hilbert space, which typically has an exponentially large size in the number of particles. Its solution for a reasonably large number of particles is therefore typically impossible, even for modern parallel computing technology in a reasonable amount of time. Traditionally, approximations for the many-body wave function as an antisymmetric function of one-body orbitals have been used, in order to have a manageable treatment of the Schrödinger equation. However, this kind of formulation has several drawbacks, either limiting the effect of quantum many-body correlations, as in the case of the Hartree–Fock (HF) approximation, or converging very slowly, as in configuration interaction applications in quantum chemistry.
Quantum Monte Carlo is a way to directly study the many-body problem and the many-body wave function beyond these approximations. The most advanced quantum Monte Carlo approaches provide an exact solution to the many-body problem for non-frustrated interacting boson systems, while providing an approximate description of interacting fermion systems. Most methods aim at computing the ground state wavefunction of the system, with the exception of path integral Monte Carlo and finite-temperature auxiliary-field Monte Carlo, which calculate the density matrix. In addition to static properties, the time-dependent Schrödinger equation can also be solved, albeit only approximately, restricting the functional form of the time-evolved wave function, as done in the time-dependent variational Monte Carlo.
From a probabilistic point of view, the computation of the top eigenvalues and the corresponding ground state eigenfunctions associated with the Schrödinger equation relies on the numerical solving of Feynman–Kac path integration problems.
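To illustrate the flavour of such Monte Carlo estimates (this is a toy example, not any particular production method), the sketch below performs variational Monte Carlo for a one-dimensional harmonic oscillator with a Gaussian trial wavefunction, sampling $|\psi|^2$ with the Metropolis algorithm and averaging the local energy; units are chosen with $\hbar = m = \omega = 1$:

```python
import numpy as np

def local_energy(x, alpha):
    """E_L = (H psi)/psi for psi(x) = exp(-alpha x^2), H = -1/2 d^2/dx^2 + 1/2 x^2."""
    return alpha + x * x * (0.5 - 2.0 * alpha * alpha)

def vmc_energy(alpha, n_steps=100_000, step=1.0, seed=0):
    """Metropolis sampling of |psi|^2, averaging the local energy."""
    rng = np.random.default_rng(seed)
    x, energies = 0.0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)
        # acceptance ratio |psi(x_new)|^2 / |psi(x)|^2 = exp(-2 alpha (x_new^2 - x^2))
        if rng.random() < np.exp(-2.0 * alpha * (x_new**2 - x**2)):
            x = x_new
        energies.append(local_energy(x, alpha))
    return np.mean(energies)

for alpha in (0.3, 0.5, 0.7):
    print(alpha, vmc_energy(alpha))   # the variational minimum (exactly 0.5) occurs at alpha = 0.5
```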
Quantum Monte Carlo methods
There are several quantum Monte Carlo methods, each of which uses Monte Carlo in different ways to solve the many-body problem.
Zero-temperature (only ground state)
Variational Monte Carlo: A good place to start; it is commonly used in many sorts of quantum problems.
Diffusion Monte Carlo: The most common high-accuracy method for electrons (that is, chemical problems), since it comes quite close to the exact ground-state energy fairly efficiently. Also used for simulating the quantum behavior of atoms, etc.
Reptation Monte Carlo: Recent zero-temperature method related to path integral Monte Carlo, with applications similar to diffusion Monte Carlo but with some different tradeoffs.
Gaussian quantum Monte Carlo
Path integral ground state: Mainly used for boson systems; for those it allows calculation of physical observables exactly, i.e. with arbitrary accuracy
Finite-temperature (thermodynamic)
Auxiliary-field Monte Carlo: Usually applied to lattice problems, although there has been recent work on applying it to electrons in chemical systems.
Continuous-time quantum Monte Carlo
Determinant quantum Monte Carlo or Hirsch–Fye quantum Monte Carlo
Hybrid quantum Monte Carlo
Path integral Monte Carlo: Finite-temperature technique mostly applied to bosons where temperature is very important, especially superfluid helium.
Stochastic Green function algorithm: An algorithm designed for bosons that can simulate any complicated lattice Hamiltonian that does not have a sign problem.
World-line quantum Monte Carlo
Real-time dynamics (closed quantum systems)
Time-dependent variational Monte Carlo: An extension of the variational Monte Carlo to study the dynamics of pure quantum states.
See also
Monte Carlo method
QMC@Home
Quantum chemistry
Quantum Markov chain
Density matrix renormalization group
Time-evolving block decimation
Metropolis–Hastings algorithm
Wavefunction optimization
Monte Carlo molecular modeling
Quantum chemistry computer programs
Numerical analytic continuation
Notes
References
External links
QMC in Cambridge and around the world Large amount of general information about QMC with links.
Quantum Monte Carlo simulator (Qwalk)
Quantum chemistry
Electronic structure methods | Quantum Monte Carlo | [
"Physics",
"Chemistry"
] | 1,097 | [
"Quantum chemistry",
"Quantum mechanics",
"Computational physics",
"Theoretical chemistry",
"Electronic structure methods",
"Computational chemistry",
" molecular",
"Atomic",
"Quantum Monte Carlo",
" and optical physics"
] |
2,321,375 | https://en.wikipedia.org/wiki/Elastic%20recoil%20detection | Elastic recoil detection analysis (ERDA), also referred to as forward recoil scattering or spectrometry, is an ion beam analysis technique, in materials science, to obtain elemental concentration depth profiles in thin films. This technique can be achieved using many processes.
In the technique of ERDA, an energetic ion beam is directed at a sample to be characterized and (as in Rutherford backscattering) there is an elastic nuclear interaction between the ions of the beam and the atoms of the target sample. Such interactions are commonly of Coulomb nature. Depending on the kinetics of the ions, cross section area, and the loss of energy of the ions in the matter, ERDA helps determine the quantification of the elemental analysis. It also provides information about the depth profile of the sample.
The energy of incident energetic ions can vary from 2 MeV to 200 MeV, depending on the studied sample. The energy of the beam should be enough to kick out (“recoil”) the atoms of the sample. Thus, ERDA usually employs appropriate source and detectors to detect recoiled atoms.
The ERDA setup is large, expensive and difficult to operate. Therefore, although it is commercially available, it is relatively uncommon in materials characterization. The angle of incidence that the ion beam makes with the sample must also be taken into account for correct analysis of the sample, because the collection of the recoiled atoms depends on this angle.
ERDA has been used since 1974. It has similar theory to Rutherford backscattering spectrometry (RBS), but there are minor differences in the set-up of the experiment. In case of RBS, the detector is placed in the back of the sample whereas in ERDA, the detector is placed in the front.
Characteristics of ERDA
The main characteristics of ERDA are listed below.
A variety of elements can be analyzed simultaneously as long as the atomic number of recoiled ion is smaller than the atomic number of the primary ion.
The sensitivity of this technique primarily depends upon scattering cross-section area, and the method has almost equal sensitivity to all light elements.
Depth resolution depends upon stopping power of heavy ions after interactions with sample, and the detection of scattered primary ions is reduced due to the narrow scattering cone of heavy ions scattering from light elements.
Gaseous ionization detectors provide efficient recoil detection, minimizing the exposure of the sample to the ion beam and making this a non-destructive technique. This is important for accurate measurement of hydrogen, which is unstable under the beam and is often removed from the sample.
History
ERDA was first demonstrated by L’Ecuyer et al. in 1976. They used 25–40 MeV 35Cl ions to detect the recoils in the sample. Later, ERDA has been divided into two main groups. First is the light incident ion ERDA (LI-ERDA) and the second is the heavy incident ion ERDA (HI-ERDA). These techniques provide similar information and differ only in the type of ion beam used as a source.
LI-ERDA uses low voltage single-ended accelerators, whereas the HI-ERDA uses large tandem accelerators. These techniques were mainly developed after heavy ion accelerators were introduced in the materials research. LI-ERDA is also often performed using a relatively low energy (2 MeV) helium beam for measuring the depth profile of hydrogen. In this technique, multiple detectors are used: backscattering detector for heavier elements and forward (recoil) detector to simultaneously detect the recoiled hydrogen. The recoil detector for LI-ERDA typically has a “range foil”. It is usually a Mylar foil placed in front of the detector, which blocks scattered incident ions, but allows lighter recoiling target atoms to pass through to the detector. Usually a 10 μm thick Mylar foil completely stops 2.6 MeV helium ions but allows the recoiled protons to go through with a low energy loss.
HI-ERDA is more widely used than LI-ERDA because it can probe more elements. It is used to detect recoiled target atoms and scattered beam ions using several detectors, such as silicon diode detectors, time-of-flight detectors, gaseous ionization detectors, etc. The main advantage of HI-ERDA is its ability to obtain quantitative depth profiling information for all the sample elements in one measurement. Depth resolution of less than 1 nm can be obtained with good quantitative accuracy, thus giving these techniques significant advantages over other surface analysis methods. Additionally, a depth of 300 nm can be accessed using this technique. A wide range of ion beams, including 35Cl, 63Cu, 127I, and 197Au, with different energies can be used in this technique.
The setup and the experimental conditions affect the performances of both of these techniques. Factors such as multiple scattering and ion beam induced damage must be taken into account before obtaining the data because these processes can affect the data interpretation, quantification, and accuracy of the study. Additionally, the incident angle and the scattered angle help determine the sample surface topography.
Prominent features of ERDA
ERDA is very similar to RBS, but instead of detecting the projectile at the back angle, the recoils are detected in the forward direction. Doyle and Peercey in 1979 established the use of this technique for hydrogen depth profiling. Some of the prominent features of ERDA with high energy heavy ions are:
Large recoil cross-section with heavy ions provides good sensitivity. Moreover, all chemical elements, including hydrogen, can be detected simultaneously with similar sensitivity and depth resolution.
Concentrations of 0.1 atomic percent can be easily detected. The sampling depth depends on the sample material and is of the order of 1.5–2.5 μm. For the surface region, a depth resolution of 10 nm can be achieved. The resolution deteriorates with increasing depth due to several physical processes, mainly the energy straggling and multiple scattering of the ions in the sample.
Same recoil cross-section for a wide mass range of target atoms.
The unique characteristic of this technique is depth-profiling of a wide range of elements from hydrogen to rare earth elements.
ERDA can overcome some of the limitations of RBS. ERDA has enabled depth profiling of elements from lightest elements like hydrogen up to heavy elements with high resolution in the light mass region as discussed above. Also, this technique has been highly sensitive because of the use of large area position sensitive telescope detectors. Such detectors are used especially when the elements in the sample have similar masses.
Principles of ERDA
The calculations that model this process are relatively simple, assuming the projectile energy is in the range corresponding to Rutherford scattering. The projectile energy range for light incident ions is 0.5–3.0 MeV. For heavier projectile ions such as 127I the energy range is usually between 60 and 120 MeV; for medium-heavy ion beams, 36Cl is a common choice with an energy of approximately 30 MeV. For the instrumentation section, the focus will be on heavy ion bombardment. The energy E2 transferred by projectile ions of mass m1 and energy E1 to sample atoms of mass m2 recoiling at an angle ϕ, with respect to the incidence direction, is given by the following equation:
$$E_2 = \frac{4\,m_1 m_2}{(m_1 + m_2)^2}\,E_1\cos^2\phi \qquad (1)$$
Eq. 1 models the energy transfer from the incident ions striking the sample atoms and the recoiling of the target atoms at an angle ϕ. For heavier ions in elastic recoil detection analysis, if m2/m1 << 1, all recoiling ions have similar velocities. From the previous equation the maximum scattering angle of the projectile, θ'max, can be deduced, as Eq. 2 describes:
$$\theta'_{max} = \arcsin\!\left(\frac{m_2}{m_1}\right) \qquad (2)$$
Using these parameters, absorber foils do not need to be incorporated into the instrument design. When using heavy ion beams and the parameters above, the geometry can be chosen so that incident particles that collide and scatter are deflected away from the detector. This prevents degradation of the detector by the more intense beam energies.
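A small numeric sketch of Eqs. 1 and 2 (the beam, target and angle below are illustrative choices, not data from a specific experiment):

```python
import numpy as np

def recoil_energy(E1_MeV, m1, m2, phi_deg):
    """Recoil energy E2 transferred to a target atom (Eq. 1), elastic two-body kinematics."""
    phi = np.radians(phi_deg)
    return 4.0 * m1 * m2 / (m1 + m2) ** 2 * E1_MeV * np.cos(phi) ** 2

def max_scattering_angle_deg(m1, m2):
    """Maximum laboratory scattering angle of a projectile heavier than the target (Eq. 2)."""
    return np.degrees(np.arcsin(m2 / m1))

# Illustrative numbers: a 30 MeV 36Cl beam recoiling hydrogen (m2 = 1) at phi = 30 deg.
print(recoil_energy(30.0, 36, 1, 30.0))    # recoil energy in MeV
print(max_scattering_angle_deg(36, 1))     # ~1.6 deg: the scattered beam stays close to 0 deg
```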
The differential elastic recoil cross-section σERD is given by:
$$\sigma_{ERD} = \left(\frac{Z_1 Z_2 e^2}{2E_1}\right)^{\!2}\left(1 + \frac{m_1}{m_2}\right)^{\!2}\frac{1}{\cos^3\phi} \qquad (3)$$
where Z1 and Z2 are the atomic numbers of projectile and sample atoms, respectively. For m2/m1 << 1 and with the approximation m ≈ 2Z for both projectile and target, two essential consequences can be seen in Eq. 3: first, the sensitivity is roughly the same for all elements, and second, the cross-section has a Z1⁴ dependence on the atomic number of the projectile ion. This allows the use of low-energy beam currents in HI-ERDA, preventing sample degradation and excessive heating of the specimen.
When using heavy ion beams, care must be taken for beam-induced damage in sample such as sputtering or amorphization. If only nuclear interaction is taken into account, it has been shown that the ratio of recoiling to displaced atoms is independent of Z1 and only weakly dependent on the projectile mass of the incident ion. With heavy ion bombardment, it has been shown that the sputter yield by the ion beam on the sample increases for nonmetallic samples and enhanced radiation damage in superconductors. In any case, the acceptance angle of the detector system should be as large as possible to minimize the radiation damage. However, it may reduce the depth profiling and elemental analysis due to the ion beam not being able to penetrate the sample.
This demand for a large acceptance angle, however, conflicts with the dependence of the optimum depth resolution on the detection geometry. In the surface approximation and assuming constant energy loss, the depth resolution δx can be written:
(4)
where Srel is the relative energy loss factor defined by:
(5)
here, α and β are the incidence angles of the beam and exit angle of the recoiling ion respectively, connected to the scattering angle ϕ by ϕ=α+β. It should be noticed here that the depth resolution depends on the relative energy resolution only, as well as the relative stopping power of incoming and outgoing ions. The detector resolution and energy broadening associated with the measuring geometry contribute to the energy spread, δE. The detector acceptance angle and the finite beam spot size define a scattering angle range δϕ causing a kinematic energy spread δEkin according to Eq. 6:
$$\delta E_{kin} = 2\,E_2\tan\phi\;\delta\phi \qquad (6)$$
A detailed analysis of the different contributions to depth resolution shows that this kinematic effect is the predominant term near the surface, severely limiting the permitted detector acceptance angle, whereas energy straggling dominates the resolution at larger depth. For example, if one estimates δϕ for a scattering angle of 37.5° causing a kinematic energy shift comparable to typical detector energy resolutions of 1%, the angular spread δψ must be less than 0.4°. The angular spread can be maintained within this range by contributions from the beam spot size; however, the solid angle geometry of the detector is only 0.04 msr. Therefore, a detector system with large solid angle as well as high depth resolution may enable corrections for the kinematic energy shift.
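Assuming the form of Eq. 6 reconstructed above, the quoted limit can be checked directly:

```latex
\delta\phi \;\le\; \frac{\delta E_{\mathrm{kin}}/E_2}{2\tan\phi}
= \frac{0.01}{2\tan 37.5^{\circ}} \approx 6.5\ \mathrm{mrad} \approx 0.37^{\circ},
```

consistent with the stated bound of about 0.4°.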
In an elastic scattering event, the kinematics require that the target atom recoil with significant energy. Eq. 7 models the recoil kinematical factor during ion bombardment:
$$E = K\,E_1 \qquad (7)$$
$$K_s = \left[\frac{r\cos\theta + \sqrt{1 - r^2\sin^2\theta}}{1 + r}\right]^2 \qquad (8)$$
$$K_r = \frac{4r}{(1+r)^2}\cos^2\Phi \qquad (9)$$
$$r = \frac{m_1}{m_2} \qquad (10)$$
Eq. 7 gives a mathematical model of the collision event when the heavier ions in the beam strike the specimen: the energy after the collision is the incident energy E1 scaled by a kinematical factor K. Ks is the kinematical factor for the scattered particle (Eq. 8) with a scattering angle of θ, and Kr is the factor for the recoiled particle (Eq. 9) with a recoil angle of Φ. The variable r is the ratio of the mass of the incident nucleus to that of the target nucleus (Eq. 10). To achieve this recoil of particles, the specimen needs to be very thin and the geometry needs to be precisely optimized to obtain accurate recoil detection. Since the ERD beam intensity can damage the specimen, there has been growing interest in the development of low-energy beams to reduce specimen damage.
The cathode is divided into two insulated halves, where particle entrance position is derived from charges induced on the left, l, and right, r, halves of the cathode. Using the following equation, x-coordinates of particle positions, as they enter the detector, can be calculated from charges l and r:
(11)
Furthermore, the y-coordinate is calculated from the following equation due to the position independence of the anode pulses:
(12)
For transformation of the (x, y) information into the scattering angle ϕ, a removable calibration mask in front of the entrance window is used. This mask also allows correction for x and y distortions. As a practical detail, the cathode has an ion drift time on the order of a few milliseconds. To prevent ion saturation of the detector, a limit of 1 kHz must be applied to the number of particles entering the detector.
Instrumentation
Elastic recoil detection analysis was originally developed for hydrogen detection or a light element (H, He, Li, C, O, Mg, K) profiling with an absorber foil in front of the energy detector for beam suppression. Using an absorber foil prevents the higher energy ion beam from striking the detector and causing degradation. Absorber foils increase the lifetime of the detector. More advanced techniques have been implemented to negate the use of absorber foils and the associated difficulties that arise through the use of it. In most cases, medium heavy ion beams, typically 36Cl ions, have been used for ERDA so far with energies around 30 MeV. Depth resolution and element profiling of thin films has been greatly advanced using elastic recoil detection analysis.
Ion source and interactions
Particle accelerators, such as a magnetron or cyclotron, use electromagnetic fields to accelerate ions. Atoms must be electrically charged (ionized) before they can be accelerated. Ionization involves the removal of electrons from the target atoms. A magnetron can be used to produce hydrogen ions. Van de Graaff generators have also been integrated with particle accelerators for light ion beam generation.
For heavier ion production, for example, an electron cyclotron resonance (ECR) source can be used. At the National Superconducting Cyclotron Laboratory, neutral atoms have their electrons removed using an ECR ion source. ECR works by ionizing the vapor of a desired element such as chlorine and iodine. Further, utilizing this technique, metals (Au, Ag, etc.) can also be ionized using a small oven to achieve a vapor phase. The vapor is maintained within a magnetic field long enough for the atoms to be ionized by collisions with electrons. Microwaves are applied to the chamber as to keep the electrons in motion.
The vapor is introduced via injection directly into the “magnetic bottle” or the magnetic field. Circular coils provide the shape for the magnetic bottle. The coils are found at the top and bottom of the chamber with a hexapole magnet around the sides. A hexapole magnet consists of permanent magnets or superconducting coils. The plasma is contained within the magnetic trap that is formed from electric current flowing in solenoids located on the sides of the chamber. A radial magnetic field, exerted by the hexapole magnetic, is applied to the system that also confines the plasma. Acceleration of the electrons is achieved using resonance. For this to occur, the electrons must pass through a resonance zone. In this zone, their gyrofrequency or cyclotron frequency is equal to the frequency of the microwave injected into the plasma chamber. Cyclotron frequency is defined as the frequency of a charged particle moving perpendicular to the direction of a uniform magnetic field B. Since the motion is always circular, cyclotron frequency-ω in radians/second-can be described by the following equation:
$$\omega = \frac{qB}{m} \qquad (13)$$
where m is the mass of the particle, q its charge, and v its velocity. Ionization is a step-by-step process arising from collisions of the accelerated electrons with the desired vapor atoms. The gyrofrequency of an electron is calculated to be 1.76×10⁷·B rad/second (with B expressed in gauss).
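A quick numeric check of Eq. 13 (a sketch; B is taken in tesla here, so the figure quoted above in gauss corresponds to dividing by 10⁴):

```python
# omega = qB/m for an electron; B in tesla.
Q_E = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31     # electron mass, kg

def cyclotron_frequency(B_tesla, q=Q_E, m=M_E):
    """Angular cyclotron frequency in rad/s."""
    return q * B_tesla / m

print(cyclotron_frequency(1.0))   # ~1.76e11 rad/s per tesla, i.e. ~1.76e7 rad/s per gauss
```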
Now that the vapor of the desired has been ionized, they must be removed from the magnetic bottle. To do this, a high voltage is between the hexapoles applied to pull out the ions from the magnetic field. The extraction of the ions, from the chamber, is carried out using an electrode system through a hole in a positively biased plasma chamber. Once the ions have been extracted from the chamber, they are then sent to the cyclotron for acceleration. It is very important that the ion source used is optimal for the experiment being carried out. To perform an experiment in a practical amount of time, the ions provided from the accelerator complex should have the correct desired energy. The quality and stability of the ion beam needs to be considered carefully, due to the fact that only the ions with the correct flight trajectory can be injected in the cyclotron and accelerated to the desired energy.
During ERDA, the idea is to place an ion beam source at a grazing angle to the sample. In this set up, the angle is calculated as to allow the incident ions to scatter off of the sample so that there is no contact made with the detector. The physical basis that has given the method its name stems from the elastic scattering of incident ions on a sample surface and detecting the recoiling sample atoms while the incident ions backscatter at such an angle, that they do not reach the detector; this is typically in reflection geometry.
Another method for preventing incident ions from making contact with the detector is to use an absorber foil. During analysis of the elastically recoiled particles, an absorber foil of selected thickness can be used to "stop" the heavy recoil and beam ions from reaching the detector, reducing the background noise. Incorporating an absorber into the experimental set-up can be the most difficult part to achieve. Stopping the beam, by either direct or scattered paths, can only be accomplished without also stopping the light impurity atoms if the beam ions are heavier than the impurity atoms being analyzed. There are advantages when using absorber films:
The large beam Z1 gives rise to a large Rutherford cross section and because of the kinematics of heavy-on-light collisions that cross section is nearly independent of the target, if M1>> M2 and M ~2Z; this helps in reducing the background.
The higher stopping power provides a good depth resolution of ~300 Angstroms, limited in fact by straggling in the absorber.
The major criterion for absorber foils used in ERDA is whether a recoiling impurity atom can be transmitted through the absorber, preferably a commercially available metal foil, while stopping heavy particles. Since the lighter atoms leave the absorber with smaller energies, the kinematic calculations do not provide much help. Favorable results have been obtained by using heavier ion beams of approximately 1 MeV/ nucleon. The best overall candidate is the 35Cl ion beam; although, 79Br would give better sensitivity by one order of magnitude compared to the 35Cl ion beam. The mass resolution, of the detector at θ= 0°, of thin samples is ΔM/Δx ~ 0.3 amu/1000 Angstroms of the profile width. With thick samples, the mass resolution is feasible at θ≤30°. In thicker samples there is some degradation of mass resolution and slight loss of sensitivity. The detector solid angle has to be closed, but the thick sample can take more current without heating, which decreases sample degradation.
Detectors
Once the ion beam has ionized target sample atoms, the sample ions are recoiled toward the detector. The beam ions are scattered at an angle that does not permit them to reach the detector. The sample ions pass through an entrance window of the detector, and depending on the type of detector used, the signal is converted into a spectrum.
Silicon diode detector
In elastic recoil detection analysis, a silicon diode is the most common detector. This type of detector is widely used; however, there are some major disadvantages. For example, the energy resolution decreases significantly with a Si detector when detecting heavy recoiled ions. There is also a possibility of damage to the detector by radiation exposure. These detectors have a short functional lifetime (5–10 years) when doing heavy ion analysis. One of the main advantages of silicon detectors is their simplicity. However, they have to be used with a so-called "range foil" to range out the forward-scattered heavy beam ions. Therefore, the simple range-foil ERD has two major disadvantages: first, the loss of energy resolution due to energy straggling and thickness inhomogeneity of the range foil, and second, the intrinsic indistinguishability of the signals for the various recoiled target elements. Aside from the listed disadvantages, ERDA with range foils and silicon detectors is still a powerful method and is relatively simple to work with.
Time of flight detector
Another method of detection for ERDA is time of flight (TOF)-ERD. This method does not present the same issues, as those for the silicon detector. However, the throughput of TOF detectors is limited; the detection is performed in a serial fashion (one ion in the detector at a time). The longer the TOF for ions, the better the time resolution (equivalent to energy resolution) will be. TOF spectrometers that have an incorporated solid state detector must be confined to small solid angles. When performing HI-ERDA, TOF detectors are often used and/or ∆E/E detectors-such as ionization chambers. These types of detectors usually implement small solid angles for higher depth resolution. Heavier ions have a longer flight time than the lighter ions. Detectors in modern time-of-flight instruments have improved sensitivity, temporal and spatial resolution, and lifetimes. Hi mass bipolar (high mass ion detection), Gen 2 Ultra Fast (twice as fast as traditional detectors), and High temperature (operated up to 150 °C) TOF are just a few of the commercially available detectors integrated with time-of-flight instruments. Linear and reflectron-TOF are the more common instruments used.
Ionization detector
A third type of detector is the gas ionization detector. Gas ionization detectors have some advantages over silicon detectors; for example, they are completely impervious to beam damage, since the gas can be replenished continuously. Large-area ionization chambers that increase the particle and position resolution have been used in nuclear experiments for many years and can easily be adapted to any specific geometry. The limiting factor on energy resolution using this type of detector is the entrance window, which needs to be strong enough to withstand the atmospheric pressure of the gas, 20–90 mbar. Ultra-thin silicon nitride windows have been introduced, together with dramatic simplifications in design, which have been demonstrated to be nearly as good as more complex designs for low-energy ERD. These detectors have also been implemented in heavy-ion Rutherford backscattering spectrometry.
The energy resolution obtained from this detector is better than a silicon detector when using ion beams heavier than helium ions. There are various designs of ionization detectors but a general schematic of the detector consists of a transversal field ionization chamber with a Frisch grid positioned between anode and cathode electrodes. The anode is subdivided into two plates separated by a specific distance. From the anode, signals ∆E(energy lost), Erest(residual energy after loss), and Etot (the total energy Etot= ΔΕ+Erest) as well as the atomic number Z can be deduced. For this specific design, the gas used was isobutane at pressures of 20–90 mbar with a flow rate that was electronically controlled. A polypropylene foil was used as the entrance window. It has to be noted that the foil thickness homogeneity is of more importance for the detector energy resolution than the absolute thickness. If heavy ions are used and detected, the effect of energy loss straggling will be easily surpassed by the energy loss variation, which is a direct consequence of different foil thicknesses. The cathode electrode is divided in two insulated halves, thus information of particle entrance position is derived from charges induced at the right and left halves.
ERDA and energy detection of recoiled sample atoms
ERDA in transmission geometry, where only the energy of the recoiling sample atoms is measured, was extensively used for contamination analysis of target foils for nuclear physics experiments. This technique is excellent for discerning different contaminants of foils used in sensitive experiments, such as carbon contamination. Using a 127I ion beam, a profile of various elements can be obtained and the amount of contamination can be determined. High levels of carbon contamination could be associated with beam excursions on the support, such as a graphite support. This could be corrected by using a different support material. Using a Mo support, the carbon content could be reduced from the 20–100 at.% level to the 1–2 at.% level, with the oxygen contamination probably originating from residual gas components. For nuclear experiments, high carbon contamination would result in an extremely high background, and the experimental results would be skewed or less distinguishable from the background. With ERDA and heavy ion projectiles, valuable information can be obtained on the light element content of thin foils even if only the energy of the recoils is measured.
ERDA and particle identification
Generally, the energy spectra of different recoil elements overlap due to finite sample thickness, therefore particle identification is necessary to separate the contributions of different elements. Common examples of analysis are thin films of TiNxOy-Cu and BaBiKO. TiNxOy-Cu films were developed at the University of Munich and are used as tandem solar absorbers. The copper coating and the glass substrate were also identified. ERDA can also be coupled with Rutherford backscattering spectrometry, which is a similar process. Using a solid angle of 7.5 msr, recoils can be detected for this specific analysis of TiNxOy-Cu. It is important when designing an experiment to always consider the geometry of the system so as to achieve recoil detection. In this geometry, and with Cu being the heaviest component of the sample, according to Eq. 2, scattered projectiles could not reach the detector. To prevent pile-up of signals from these recoiled ions, a limit of 500 Hz needed to be set on the count rate of ΔΕ pulses. This corresponded to beam currents of less than 20 particle pA.
Another example of thin-film analysis is BaBiKO. This type of film showed superconductivity at one of the highest temperatures for oxide superconductors. Elemental analysis of this film was carried out using heavy-ion ERDA. The elemental constituents of the film (Bi, K, Mg, O, along with carbon contamination) were detected using an ionization chamber. Other than potassium, the lighter elements are clearly separated in the matrix. From the matrix, there is evidence of a strong carbon contamination within the film. Some films showed a 1:1 ratio of K to carbon contamination. For this specific film analysis, the source of contamination was traced to an oil diffusion pump, which was replaced with an oil-free pumping system.
ERDA and position resolution
In the above examples, the main focus was the identification of constituent particles found in thin films, and depth resolution was of less significance. Depth resolution is of great importance in applications where a profile of a sample's elemental composition in different sample layers has to be measured. This is a powerful tool for materials characterization. Being able to quantify the elemental concentration in sub-surface layers can provide a great deal of information pertaining to chemical properties. High sensitivity, i.e. a large detector solid angle, can be combined with high depth resolution only if the related kinematic energy shift is compensated.
Physical processes of ERDA
The basic physics of the forward recoil scattering process is that of charged-particle interaction with matter. To understand forward recoil spectrometry, it is instructive to review the physics involved in elastic and inelastic collisions. In an elastic collision, only kinetic energy is conserved in the scattering process, and the internal energy of the particles plays no role. In an inelastic collision, by contrast, both kinetic energy and internal energy participate in the scattering process. The physical concepts of two-body elastic scattering are the basis of several nuclear methods for elemental material characterization.
Fundamentals of recoil (backscattering) spectrometry
The fundamental aspects of recoil spectroscopy involve the backscattering process in matter such as thin films and solid materials. The energy loss of particles in target materials is evaluated by assuming that the target sample is laterally uniform and constituted by a monoisotopic element. This allows a simple relationship between the penetration depth profile and the elastic scattering yield.
Main assumptions in physical concepts of Back scattering spectrometry
An elastic collision between two bodies involves the transfer of energy from a projectile to a target molecule. This process depends on the concepts of kinematics and mass perceptibility.
The probability of occurrence of a collision provides information about the scattering cross section.
The average loss of energy of an atom moving through a dense medium gives an idea of the stopping cross section and of the capability of depth perception.
Statistical fluctuations are caused by the energy loss of an atom while moving through a dense medium. This process leads to the concept of energy straggling and to a limitation on the ultimate depth and mass resolution in backscattering spectroscopy.
Physical concepts that are highly important in interpretation of forward recoil spectrum are depth profile, energy straggling, and multiple scattering. These concepts are described in detail in the following sections:
Depth profile and resolution analysis
A key parameter that characterizes recoil spectrometry is the depth resolution. This parameter is defined as the ability of an analytical technique to measure a variation in atomic distribution as a function of depth in a sample layer.
In terms of low energy forward recoil spectrometry, hydrogen and deuterium depth profiling can be expressed in the following mathematical notation:
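The expression itself was not carried over into this text; a plausible form consistent with the definitions that follow (an assumption on our part, not a quotation of the original formula) is

```latex
\delta x \;\approx\; \frac{\delta E_{\mathrm{det}}}{\mathrm{d}E_{\mathrm{det}}/\mathrm{d}x}
```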
where δEdet is defined as the energy width of a channel in a multichannel analyzer, and dEdet/dx is the effective stopping power of the recoiled particles.
Consider incoming and outgoing ion beams whose paths are calculated as a function of collisional depth, by assuming that the two trajectories lie in a plane perpendicular to the target surface and that the incoming and outgoing paths are the shortest possible ones for a given collision depth and given scattering and recoil angles.
Impinging ions reach the surface, making an angle θ1 with the inward-pointing normal to the surface. After the collision their velocity makes an angle θ2 with the outward surface normal, and the atom initially at rest recoils, making an angle θ3 with this normal. Detection is possible at one of these angles, such that the particle crosses the target surface.
Paths of particles are related to collisional depth x, measured along a normal to the surface.
For the impinging ion, length of the incoming path L1 is given by:
The outgoing path length L2 of the scattered projectile is:
And finally the outgoing path L3 of the recoil is:
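The explicit path-length expressions are missing from this text; under the geometry just described, with the collision depth x measured along the surface normal, a plausible reconstruction (our assumption) is

```latex
L_1 = \frac{x}{\cos\theta_1}, \qquad
L_2 = \frac{x}{\cos\theta_2}, \qquad
L_3 = \frac{x}{\cos\theta_3}.
```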
In this simple case the collisional plane is perpendicular to the target surface, the scattering angle of the impinging ion is θ = π − θ1 − θ2, and the recoil angle is φ = π − θ1 − θ3.
The target angle with the collisional plane is taken as α, and the path is augmented by a factor of 1/cos α.
For the purpose of converting the outgoing particle energy into collision depth, geometrical factors are chosen.
For recoil, R(φ, α) is defined as:
For forward scattering of the projectile, R(φ,α) is given by:
The paths of the particles are taken to be L1 for the incident beam, L2 for the scattered particle, and L3 for the recoiled atom.
Energy depth relationship
The energy E0(x) of the incident particle at the depth x at which scattering occurs, relative to its initial energy E0, is given by the following equations.
Similarly, energy expression for the scattered particle is:
and for the recoil atom is:
The energy loss per unit path length is usually defined as the stopping power, and it is represented by S = dE/dx.
Specifically, stopping power S(E) is known as a function of the energy E of an ion.
Starting point for the energy loss calculations is illustrated by the expression:
Applying the above equation together with energy conservation gives expressions for three cases:
Here, the two quantities involved are the stopping powers for the projectile and for the recoil in the target material. Finally, the stopping cross-section is defined by ε(E) = S(E)/N, where ε is the stopping cross-section factor.
To obtain the energy path scale, we need to evaluate the energy variation δE2 of the outgoing beam of energy E2 from the target surface for an increment δx of collisional depth, while E0 remains fixed. This causes changes in path lengths L1 and L3. The variation of the path around the collision point x is related to the corresponding variation in energy before scattering:
Moreover, particles with slight energy differences after scattering from a depth x undergo slight energy losses on their outgoing path. Then the change δL3 of the path length L3 can be written as:
δL1 is the path variation due to the energy variation just after the collision, and δL3 is the path variation due to the variation of the energy loss along the outward path. The above equations can be solved assuming δx = 0 for the derivative:
In elastic spectrometry, this term is called the energy loss factor:
Finally, the stopping cross section is defined by ε(E) ≡ S(E)/N, where N is the atomic density of the target material.
The stopping cross-section factor is given by:
Depth resolution
An important parameter that characterizes a recoil spectrometer is the depth resolution. It is defined as the ability of an analytical technique to detect a variation in the atomic distribution as a function of depth, i.e. the capability of the recoil system to separate in energy the signals arising from small depth intervals. The expression for the depth resolution is given as:
Here, δET is the total energy resolution of the system, and the expression in the denominator is the sum of the path integrals of initial, scattered and recoil ion beams.
Practical importance of depth resolution
The concept of depth resolution represents the ability of recoil spectrometry to separate the energies of particles scattered at slightly different depths; δRx is interpreted as an absolute limit for determining the concentration profile. From this point of view, features of a concentration profile separated by a depth interval of the order of δRx would be indistinguishable in the spectrum, and it is obviously impossible to assign a depth profile with an accuracy better than δRx. In particular, signals corresponding to features of the concentration profile separated by less than δRx strongly overlap in the spectrum.
The finite final depth resolution, which results from both theoretical and experimental limitations, deviates from the value expected for an ideal situation. The final resolution does not coincide with a theoretical evaluation such as the classical depth resolution δRx, precisely because it is affected by three contributions that escape theoretical estimation:
Uncertainty due to approximations of the energy spread among molecules.
Inconsistency in data on stopping powers and cross section values.
Statistical fluctuations of recoil yield (counting noise).
Influence of energy broadening on a recoil spectrum
The energy loss of a particle in a dense medium is statistical in nature, owing to the large number of individual collisions between the particle and the sample. The evolution of an initially monoenergetic and monodirectional beam therefore leads to a dispersion of energy and direction. The resulting statistical energy distribution, or deviation from the initial energy, is called energy straggling. Energy straggling data are plotted as a function of depth in the material.
The energy distribution of straggling is divided into three domains depending on the ratio ΔE/E, where ΔE is the mean energy loss and E is the average energy of the particle along the trajectory.
1. Low fraction of energy loss: for very thin films with small path lengths, where ΔE/E ≤ 0.01, Landau and Vavilov showed that infrequent single collisions with large energy transfers contribute a certain amount of the energy loss.
2. Medium fraction of energy loss: for regions where 0.01< ΔE/E ≤ 0.2. Bohr’s model based on electronic interactions is useful for estimating energy straggling for this case, and this model includes the amount of energy straggling in terms of the areal density of electrons traversed by the beam.
The square of the standard deviation, Ω²B, of the energy distribution is determined by NZ2Δx, the number of electrons per unit area over the path length increment Δx.
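The formula referred to here is not reproduced above; Bohr's classic result, which the surrounding description appears to match (an assumption on our part), reads

```latex
\Omega_B^{2} = 4\pi\, Z_1^{2} e^{4}\, N Z_2\, \Delta x ,
```

where Z1 is the atomic number of the moving particle, e the elementary charge, and N Z2 Δx the areal electron density mentioned in the text.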
3. Large fraction of energy loss: for fractional energy loss in the region 0.2 < ΔE/E ≤ 0.8, the energy dependence of the stopping power causes the energy-loss distribution to differ from Bohr's straggling function. This case cannot be described by the Bohr theory and has been treated using alternative approaches.
An expression for energy straggling was proposed by Symon for the region 0.2 < ΔE/E ≤ 0.5.
Tschalar et al. derived a straggling function in which σ2(E) represents the energy straggling per unit length (or the variance of the energy-loss distribution per unit length) for particles of energy E, and E(x) is the mean energy at depth x. Tschalar's expression is valid for nearly symmetrical energy-loss spectra.
Mass resolution
In a similar way, mass resolution is a parameter that characterizes the capability of recoil spectrometry to separate two signals arising from two neighbouring elements in the target. The difference δE2 in the energy of recoil atoms after the collision, when the two types of atoms differ in their masses by a quantity δM2, is:
The mass resolution is δMR (≡ δE2/δM2).
A main limitation of using low beam energies is the reduced mass resolution. The energy separation of different masses is, in fact, directly proportional to the incident energy. The mass resolution is limited by the relative uncertainties in the energy E and the velocity v.
The expression for the mass resolution is

```latex
\Delta M = \sqrt{\left(\frac{\partial M}{\partial E}\,\Delta E\right)^{2} + \left(\frac{\partial M}{\partial v}\,\Delta v\right)^{2}}
= M\,\sqrt{\left(\frac{\Delta E}{E}\right)^{2} + \left(\frac{2\,\Delta v}{v}\right)^{2}}
```
Here, E is the energy, M is the mass, v is the velocity of the particle beam, and ΔM is the resulting mass resolution.
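As a rough numerical illustration of the reconstructed expression above (the sample values below are illustrative assumptions, not measured data):

```python
import math

def mass_resolution(M, dE_over_E, dv_over_v):
    """Mass resolution dM = M * sqrt((dE/E)^2 + (2*dv/v)^2).

    M          -- mass of the recoil atom (any mass unit; result is in the same unit)
    dE_over_E  -- relative energy resolution of the detection system
    dv_over_v  -- relative velocity (timing) resolution
    """
    return M * math.sqrt(dE_over_E**2 + (2.0 * dv_over_v)**2)

# Illustrative numbers only: a mass-28 recoil, 1% energy resolution, 0.5% velocity resolution.
print(mass_resolution(28.0, 0.01, 0.005))  # ~0.40 mass units
```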
Multiple scattering scheme in forward recoil spectrometry
When an ion beam penetrates into matter, the ions undergo successive scattering events and deviate from their original direction. The ion beam is initially well collimated (single direction), but after passing through a thickness Δx of a random medium its direction of propagation certainly differs from the initial direction. As a result, both angular and lateral deviations from the initial direction can occur; these two parameters are discussed below. The path length is therefore larger than expected, causing fluctuations in the ion beam. This process is called multiple scattering, and it is statistical in nature due to the large number of collisions.
Theory and experiment of multiple scattering phenomena
In the study of the multiple scattering phenomenon, the angular distribution of a beam is an important quantity for consideration. The lateral distribution is closely related to the angular one but secondary to it, since lateral displacement is a consequence of angular divergence. The lateral distribution represents the beam profile in the matter. Both the lateral and the angular multiple scattering distributions are interdependent.
The analysis of multiple scattering was started by Walther Bothe and Gregor Wentzel in the early 1920s using the well-known small-angle approximation. The physics of energy straggling and multiple scattering was developed further by Williams between 1929 and 1945. Williams devised a theory which fits the multiple scattering distribution as a Gaussian-like portion due to small scattering angles and a single-collision tail due to the large angles. Williams studied beta-particle straggling, the multiple scattering of fast electrons and alpha particles, and the curvature of cloud-chamber tracks due to scattering, in order to explain multiple scattering in different scenarios, and he proposed a mean projected deflection due to scattering. His theory was later extended to the multiple scattering of alpha particles.
Goudsmit and Saunderson provided a more complete treatment of multiple scattering, including large angles. For large angles Goudsmit considered a series of Legendre polynomials which are numerically evaluated for the distribution of scattering. The angular distribution arising from Coulomb scattering was studied by Molière in the 1940s and then by Marion and coworkers, who tabulated the energy loss of charged particles in matter, the multiple scattering of charged particles, the range straggling of protons, deuterons and alpha particles, the equilibrium charge states of ions in solids, and the energies of elastically scattered particles. Scott presents a complete review of the basic theory, mathematical methods, as well as results and applications.
A comparative development of multiple scattering at small angles was presented by Meyer, based on a classical calculation of the single-scattering cross section. Sigmund and Winterbon extended Meyer's calculation to a more general case. Marwick and Sigmund treated lateral spreading by multiple scattering, which resulted in a simple scaling relation with the angular distribution.
Applications
ERDA has applications in the areas of polymer science, semiconductor materials, electronics, and thin film characterization. ERDA is widely used in polymer science. This is because polymers are hydrogen-rich materials which can be easily studied by LI-ERDA. One can examine surface properties of polymers, polymer blends and evolution of polymer composition induced by irradiation. HI-ERDA can also be used in the field of new materials processed for microelectronics and opto-electronic applications. Moreover, elemental analysis and depth profiling in thin film can also be performed using ERDA.
ERDA is also used to characterize hydrogen transport near interfaces induced by corrosion and wear.
Characterizing how polymer molecules behave at interfaces between incompatible polymers and at interfaces with inorganic solid substances is crucial to our fundamental understanding and for improving the performance of polymers in applications. For example, the adhesion of two polymers strongly depends on the interactions occurring at the interface between polymer segments. LI-ERDA is one of the most attractive methods for investigating these aspects of polymer science quantitatively.
Electronic devices are usually composed of sequential thin layers made up of oxides, nitrides, silicides, metals, polymers, or doped semiconductor–based media coated on a single-crystalline substrate (Si, Ge or GaAs). These structures can be studied by HI-ERDA. This technique has one major advantage over other methods. The profile of impurities can be found in a one-shot measurement at a constant incident energy. Moreover, this technique offers an opportunity to study the density profiles of hydrogen, carbon and oxygen in various materials, as well as the absolute hydrogen, carbon and oxygen content.
References
Materials science | Elastic recoil detection | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 8,875 | [
"Ion beam methods",
"Applied and interdisciplinary physics",
"Materials science",
"Surface science",
"nan"
] |
376,800 | https://en.wikipedia.org/wiki/Critical%20phenomena | In physics, critical phenomena is the collective name associated with the
physics of critical points. Most of them stem from the divergence of the
correlation length, but also the dynamics slows down. Critical phenomena include scaling relations among different quantities, power-law divergences of some quantities (such as the magnetic susceptibility in the ferromagnetic phase transition) described by critical exponents, universality, fractal behaviour, and ergodicity breaking. Critical phenomena take place in second order phase transitions, although not exclusively.
The critical behavior is usually different from the mean-field approximation which is valid away from the phase transition, since the latter neglects correlations, which become increasingly important as the system approaches the critical point where the correlation length diverges. Many properties of the critical behavior of a system can be derived in the framework of the renormalization group.
In order to explain the physical origin of these phenomena, we shall use the Ising model as a pedagogical example.
Critical point of the 2D Ising model
Consider a square array of classical spins which may only take two positions, +1 and −1, at a certain temperature T, interacting through the classical Ising Hamiltonian:
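The Hamiltonian itself is not reproduced in this text; its standard form, consistent with the description of nearest-neighbour pairs and a coupling constant (the symbols H, J and S are our assumed notation), is

```latex
H = -J \sum_{\langle i j \rangle} S_i S_j , \qquad S_i = \pm 1 ,
```

with the sum running over nearest-neighbour pairs ⟨i j⟩.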
where the sum is extended over the pairs of nearest neighbours and J is a coupling constant, which we will consider to be fixed. There is a certain temperature, called the Curie temperature or critical temperature, below which the system presents ferromagnetic long-range order. Above it, it is paramagnetic and apparently disordered.
At temperature zero, the system may only take one global sign, either +1 or -1. At higher temperatures, but below , the state is still globally magnetized, but clusters of the opposite sign appear. As the temperature increases, these clusters start to contain smaller clusters themselves, in a typical Russian dolls picture. Their typical size, called the correlation length, grows with temperature until it diverges at . This means that the whole system is such a cluster, and there is no global magnetization. Above that temperature, the system is globally disordered, but with ordered clusters within it, whose size is again called correlation length, but it is now decreasing with temperature. At infinite temperature, it is again zero, with the system fully disordered.
Divergences at the critical point
The correlation length diverges at the critical point: as T → Tc, ξ → ∞. This divergence poses no physical problem. Other physical observables diverge at this point, leading to some confusion at the beginning.
The most important is the susceptibility. Let us apply a very small magnetic field to the system at the critical point. A very small magnetic field is not able to magnetize a large coherent cluster, but with these fractal clusters the picture changes. The field easily affects the smallest clusters, since they have a nearly paramagnetic behaviour. But this change, in its turn, affects the next-scale clusters, and the perturbation climbs the ladder until the whole system changes radically. Thus, critical systems are very sensitive to small changes in the environment.
Other observables, such as the specific heat, may also diverge at this point. All these divergences stem from that of the correlation length.
Critical exponents and universality
As we approach the critical point, these diverging observables behave as |T − Tc|^(−α) for some exponent α, where, typically, the value of the exponent α is the same above and below Tc. These exponents are called critical exponents and are robust observables. Even more, they take the same values for very different physical systems. This intriguing phenomenon, called universality, is explained, qualitatively and also quantitatively, by the renormalization group.
Critical dynamics
Critical phenomena may also appear for dynamic quantities, not only for static ones. In fact, the divergence of the characteristic time τ of a system is directly related to the divergence of the thermal correlation length ξ by the introduction of a dynamical exponent z and the relation τ = ξ^z. The voluminous static universality class of a system splits into different, less voluminous dynamic universality classes with different values of z
but a common static critical behaviour, and by approaching the critical point one may observe all kinds of slowing-down phenomena. The divergence of relaxation time at criticality leads to singularities in various collective transport quantities, e.g., the interdiffusivity, shear viscosity , and bulk viscosity . The dynamic critical exponents follow certain scaling relations, viz., , where d is the space dimension. There is only one independent dynamic critical exponent. Values of these exponents are dictated by several universality classes. According to the Hohenberg−Halperin nomenclature, for the model H universality class (fluids) .
Ergodicity breaking
Ergodicity is the assumption that a system, at a given temperature, explores the full phase space, with each state merely taking a different probability. In an Ising ferromagnet below Tc this does not happen. Below Tc, no matter how close to it, the system has chosen a global magnetization, and the phase space is divided into two regions. From one of them it is impossible to reach the other, unless a magnetic field is applied, or the temperature is raised above Tc.
See also superselection sector
Mathematical tools
The main mathematical tools to study critical points are renormalization group, which takes advantage of the Russian dolls picture or the self-similarity to explain universality and predict numerically the critical exponents, and variational perturbation theory, which converts divergent perturbation expansions into convergent strong-coupling expansions relevant to critical phenomena. In two-dimensional systems, conformal field theory is a powerful tool which has discovered many new properties of 2D critical systems, employing the fact that scale invariance, along with a few other requisites, leads to an infinite symmetry group.
Critical point in renormalization group theory
The critical point is described by a conformal field theory. According to the renormalization group theory, the defining property of criticality is that the characteristic length scale of the structure of the physical system, also known as the correlation length ξ, becomes infinite. This can happen along critical lines in phase space. This effect is the cause of the critical opalescence that can be observed as a binary fluid mixture approaches its liquid–liquid critical point.
In systems in equilibrium, the critical point is reached only by precisely tuning a control parameter. However, in some non-equilibrium systems, the critical point is an attractor of the dynamics in a manner that is robust with respect to system parameters, a phenomenon referred to as self-organized criticality.
Applications
Applications arise in physics and chemistry, but also in fields such as sociology. For example, it is natural to describe a system of two political parties by an Ising model. Thereby, at a transition from one majority to the other, the above-mentioned critical phenomena may appear.
See also
Ising model
Catastrophe theory
Critical point
Critical exponent
Critical opalescence
Variational perturbation theory
Conformal field theory
Ergodicity
Self-organized criticality
Rushbrooke inequality
Widom scaling
Critical brain hypothesis
Bibliography
Phase Transitions and Critical Phenomena, vol. 1-20 (1972–2001), Academic Press, Ed.: C. Domb, M.S. Green, J.L. Lebowitz
J.J. Binney et al. (1993): The theory of critical phenomena, Clarendon press.
N. Goldenfeld (1993): Lectures on phase transitions and the renormalization group, Addison-Wesley.
H. Kleinert and V. Schulte-Frohlinde, Critical Properties of φ4-Theories, World Scientific (Singapore, 2001); Paperback
J. M. Yeomans, Statistical Mechanics of Phase Transitions (Oxford Science Publications, 1992)
M.E. Fisher, Renormalization Group in Theory of Critical Behavior, Reviews of Modern Physics, vol. 46, p. 597-616 (1974)
H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena
References
External links
Physical phenomena
Conformal field theory
Renormalization group | Critical phenomena | [
"Physics",
"Materials_science",
"Mathematics"
] | 1,692 | [
"Physical phenomena",
"Critical phenomena",
"Renormalization group",
"Condensed matter physics",
"Statistical mechanics",
"Dynamical systems"
] |
376,845 | https://en.wikipedia.org/wiki/Cooper%20pair | In condensed matter physics, a Cooper pair or BCS pair (Bardeen–Cooper–Schrieffer pair) is a pair of electrons (or other fermions) bound together at low temperatures in a certain manner first described in 1956 by American physicist Leon Cooper.
Description
Cooper showed that an arbitrarily small attraction between electrons in a metal can cause a paired state of electrons to have a lower energy than the Fermi energy, which implies that the pair is bound. In conventional superconductors, this attraction is due to the electron–phonon interaction. The Cooper pair state is responsible for superconductivity, as described in the BCS theory developed by John Bardeen, Leon Cooper, and John Schrieffer for which they shared the 1972 Nobel Prize.
Although Cooper pairing is a quantum effect, the reason for the pairing can be seen from a simplified classical explanation. An electron in a metal normally behaves as a free particle. The electron is repelled from other electrons due to their negative charge, but it also attracts the positive ions that make up the rigid lattice of the metal. This attraction distorts the ion lattice, moving the ions slightly toward the electron, increasing the positive charge density of the lattice in the vicinity. This positive charge can attract other electrons. At long distances, this attraction between electrons due to the displaced ions can overcome the electrons' repulsion due to their negative charge, and cause them to pair up. The rigorous quantum mechanical explanation shows that the effect is due to electron–phonon interactions, with the phonon being the collective motion of the positively-charged lattice.
The energy of the pairing interaction is quite weak, of the order of 10−3 eV, and thermal energy can easily break the pairs. So only at low temperatures, in metal and other substrates, are a significant number of the electrons bound in Cooper pairs.
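To put that energy scale in perspective, a quick back-of-the-envelope calculation (the 10⁻³ eV figure is the one quoted above; the rest is just unit conversion) gives the temperature at which thermal energy kB·T matches the pairing energy:

```python
# Temperature at which k_B * T equals a pairing energy of ~1e-3 eV.
K_B_EV_PER_K = 8.617e-5     # Boltzmann constant in eV/K
pairing_energy_ev = 1e-3    # order-of-magnitude pairing energy quoted in the text

temperature_k = pairing_energy_ev / K_B_EV_PER_K
print(f"{temperature_k:.1f} K")  # roughly 11.6 K -- why pairing survives only at low temperature
```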
The electrons in a pair are not necessarily close together; because the interaction is long range, paired electrons may still be many hundreds of nanometers apart. This distance is usually greater than the average interelectron distance so that many Cooper pairs can occupy the same space. Electrons have spin-1/2, so they are fermions, but the total spin of a Cooper pair is integer (0 or 1), so it is a composite boson. This means the wave functions are symmetric under particle interchange. Therefore, unlike electrons, multiple Cooper pairs are allowed to be in the same quantum state, which is responsible for the phenomenon of superconductivity.
The BCS theory is also applicable to other fermion systems, such as helium-3. Indeed, Cooper pairing is responsible for the superfluidity of helium-3 at low temperatures. In 2008 it was proposed that pairs of bosons in an optical lattice may be similar to Cooper pairs.
Relationship to superconductivity
The tendency for all the Cooper pairs in a body to "condense" into the same ground quantum state is responsible for the peculiar properties of superconductivity.
Cooper originally considered only the case of an isolated pair's formation in a metal. When one considers the more realistic state of many electronic pair formations, as is elucidated in the full BCS theory, one finds that the pairing opens a gap in the continuous spectrum of allowed energy states of the electrons, meaning that all excitations of the system must possess some minimum amount of energy. This gap to excitations leads to superconductivity, since small excitations such as scattering of electrons are forbidden.
The gap appears due to many-body effects between electrons feeling the attraction.
R. A. Ogg Jr. was the first to suggest that electrons might act as pairs coupled by lattice vibrations in the material. This was indicated by the isotope effect observed in superconductors. The isotope effect showed that materials with heavier ions (different nuclear isotopes) had lower superconducting transition temperatures. This can be explained by the theory of Cooper pairing: heavier ions are harder for the electrons to attract and move (how Cooper pairs are formed), which results in smaller binding energy for the pairs.
The theory of Cooper pairs is quite general and does not depend on the specific electron-phonon interaction. Condensed matter theorists have proposed pairing mechanisms based on other attractive interactions such as electron–exciton interactions or electron–plasmon interactions. Currently, none of these other pairing interactions has been observed in any material.
It should be mentioned that Cooper pairing does not involve individual electrons pairing up to form "quasi-bosons". The paired states are energetically favored, and electrons go in and out of those states preferentially. This is a fine distinction that John Bardeen makes:
"The idea of paired electrons, though not fully accurate, captures the sense of it."
The mathematical description of the second-order coherence involved here is given by Yang.
See also
Color–flavor locking
Superinsulator
Lone pair
Electron pair
References
Further reading
Michael Tinkham, Introduction to Superconductivity,
Schmidt, Vadim Vasil'evich. The physics of superconductors: Introduction to fundamentals and applications. Springer Science & Business Media, 2013.
Superconductivity
Superconductors
Spintronics
Quantum phases
Charge carriers | Cooper pair | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,082 | [
"Quantum phases",
"Physical phenomena",
"Matter",
"Physical quantities",
"Charge carriers",
"Spintronics",
"Superconductivity",
"Phases of matter",
"Quantum mechanics",
"Materials science",
"Electrical phenomena",
"Condensed matter physics",
"Superconductors",
"Electrical resistance and co... |
376,885 | https://en.wikipedia.org/wiki/Diffraction-limited%20system | In optics, any optical instrument or systema microscope, telescope, or camerahas a principal limit to its resolution due to the physics of diffraction. An optical instrument is said to be diffraction-limited if it has reached this limit of resolution performance. Other factors may affect an optical system's performance, such as lens imperfections or aberrations, but these are caused by errors in the manufacture or calculation of a lens, whereas the diffraction limit is the maximum resolution possible for a theoretically perfect, or ideal, optical system.
The diffraction-limited angular resolution, in radians, of an instrument is proportional to the wavelength of the light being observed, and inversely proportional to the diameter of its objective's entrance aperture. For telescopes with circular apertures, the size of the smallest feature in an image that is diffraction limited is the size of the Airy disk. As one decreases the size of the aperture of a telescopic lens, diffraction proportionately increases. At small apertures, such as f/22, most modern lenses are limited only by diffraction and not by aberrations or other imperfections in the construction.
For microscopic instruments, the diffraction-limited spatial resolution is proportional to the light wavelength, and to the numerical aperture of either the objective or the object illumination source, whichever is smaller.
In astronomy, a diffraction-limited observation is one that achieves the resolution of a theoretically ideal objective in the size of instrument used. However, most observations from Earth are seeing-limited due to atmospheric effects. Optical telescopes on the Earth work at a much lower resolution than the diffraction limit because of the distortion introduced by the passage of light through several kilometres of turbulent atmosphere. Advanced observatories have started using adaptive optics technology, resulting in greater image resolution for faint targets, but it is still difficult to reach the diffraction limit using adaptive optics.
Radio telescopes are frequently diffraction-limited, because the wavelengths they use (from millimeters to meters) are so long that the atmospheric distortion is negligible. Space-based telescopes (such as Hubble, or a number of non-optical telescopes) always work at their diffraction limit, if their design is free of optical aberration.
The beam from a laser with near-ideal beam propagation properties may be described as being diffraction-limited. A diffraction-limited laser beam, passed through diffraction-limited optics, will remain diffraction-limited, and will have a spatial or angular extent essentially equal to the resolution of the optics at the wavelength of the laser.
Calculation of diffraction limit
The Abbe diffraction limit for a microscope
The observation of sub-wavelength structures with microscopes is difficult because of the Abbe diffraction limit. Ernst Abbe found in 1873, and expressed as a formula in 1882, that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with half-angle θ, will have a minimum resolvable distance of d = λ / (2 n sin θ).
The portion n sin θ of the denominator is called the numerical aperture (NA) and can reach about 1.4–1.6 in modern optics, hence the Abbe limit is d = λ/2.8.
The same formula had been proven by Hermann von Helmholtz in 1874.
Considering green light around 500 nm and an NA of 1, the Abbe limit is roughly d = λ/2 = 250 nm (0.25 μm), which is small compared to most biological cells (1 μm to 100 μm), but large compared to viruses (100 nm), proteins (10 nm) and less complex molecules (1 nm). To increase the resolution, shorter wavelengths can be used such as UV and X-ray microscopes. These techniques offer better resolution but are expensive, suffer from lack of contrast in biological samples and may damage the sample.
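A small script makes the scaling explicit (the wavelengths and NA values below are the illustrative ones from the text):

```python
def abbe_limit(wavelength_nm, numerical_aperture):
    """Minimum resolvable distance d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

print(abbe_limit(500, 1.0))   # 250.0 nm, the green-light example above
print(abbe_limit(500, 1.4))   # ~178.6 nm with a high-NA objective
```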
Digital photography
In a digital camera, diffraction effects interact with the effects of the regular pixel grid. The combined effect of the different parts of an optical system is determined by the convolution of the point spread functions (PSF). The point spread function of a diffraction limited circular-aperture lens is simply the Airy disk. The point spread function of the camera, otherwise called the instrument response function (IRF) can be approximated by a rectangle function, with a width equivalent to the pixel pitch. A more complete derivation of the modulation transfer function (derived from the PSF) of image sensors is given by Fliegel. Whatever the exact instrument response function, it is largely independent of the f-number of the lens. Thus at different f-numbers a camera may operate in three different regimes, as follows:
In the case where the spread of the IRF is small with respect to the spread of the diffraction PSF, in which case the system may be said to be essentially diffraction limited (so long as the lens itself is diffraction limited).
In the case where the spread of the diffraction PSF is small with respect to the IRF, in which case the system is instrument limited.
In the case where the spread of the PSF and IRF are similar, in which case both impact the available resolution of the system.
The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk, d = 2.44 λ N, where λ is the wavelength of the light and N is the f-number of the imaging optics, i.e., NA ≈ 1/(2N) in the Abbe diffraction limit formula. For instance, for an f/8 lens (N = 8) and green light (λ ≈ 0.5 μm), the focusing spot diameter will be d = 9.76 μm, or 19.5λ. This is similar to the pixel size for the majority of commercially available 'full frame' (43mm sensor diagonal) cameras, and so these will operate in regime 3 for f-numbers around 8 (few lenses are close to diffraction limited at f-numbers smaller than 8). Cameras with smaller sensors will tend to have smaller pixels, but their lenses will be designed for use at smaller f-numbers, and it is likely that they will also operate in regime 3 for those f-numbers for which their lenses are diffraction limited.
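The regime classification above can be checked numerically; the pixel pitch used below is an illustrative assumed value, not a property of any particular camera:

```python
def airy_first_null_diameter_um(wavelength_um, f_number):
    """Diameter of the first null of the Airy disk: d = 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

pixel_pitch_um = 6.0  # assumed pixel pitch of a hypothetical full-frame sensor
for n in (2.8, 8, 22):
    d = airy_first_null_diameter_um(0.5, n)  # green light, 0.5 um
    print(f"f/{n}: Airy diameter {d:.2f} um vs pixel pitch {pixel_pitch_um} um")
# Small d relative to the pixel -> instrument limited; large d -> diffraction limited;
# comparable values -> regime 3, where both effects matter.
```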
Obtaining higher resolution
There are techniques for producing images that appear to have higher resolution than allowed by simple use of diffraction-limited optics. Although these techniques improve some aspect of resolution, they generally come at an enormous increase in cost and complexity. Usually the technique is only appropriate for a small subset of imaging problems, with several general approaches outlined below.
Extending numerical aperture
The effective resolution of a microscope can be improved by illuminating from the side.
In conventional microscopes such as bright-field or differential interference contrast, this is achieved by using a condenser. Under spatially incoherent conditions, the image is understood as a composite of images illuminated from each point on the condenser, each of which covers a different portion of the object's spatial frequencies. This effectively improves the resolution by, at most, a factor of two.
Simultaneously illuminating from all angles (fully open condenser) drives down interferometric contrast. In conventional microscopes, the maximum resolution (fully open condenser, at N = 1) is rarely used. Further, under partially coherent conditions, the recorded image is often non-linear with object's scattering potential—especially when looking at non-self-luminous (non-fluorescent) objects. To boost contrast, and sometimes to linearize the system, unconventional microscopes (with structured illumination) synthesize the condenser illumination by acquiring a sequence of images with known illumination parameters. Typically, these images are composited to form a single image with data covering a larger portion of the object's spatial frequencies when compared to using a fully closed condenser (which is also rarely used).
Another technique, 4Pi microscopy, uses two opposing objectives to double the effective numerical aperture, effectively halving the diffraction limit, by collecting the forward and backward scattered light. When imaging a transparent sample, with a combination of incoherent or structured illumination, as well as collecting both forward, and backward scattered light it is possible to image the complete scattering sphere.
Unlike methods relying on localization, such systems are still limited by the diffraction limit of the illumination (condenser) and collection optics (objective), although in practice they can provide substantial resolution improvements compared to conventional methods.
Near-field techniques
The diffraction limit is only valid in the far field as it assumes that no evanescent fields reach the detector. Various near-field techniques that operate less than ≈1 wavelength of light away from the image plane can obtain substantially higher resolution. These techniques exploit the fact that the evanescent field contains information beyond the diffraction limit which can be used to construct very high resolution images, in principle beating the diffraction limit by a factor proportional to how well a specific imaging system can detect the near-field signal. For scattered light imaging, instruments such as near-field scanning optical microscopes and nano-FTIR, which are built atop atomic force microscope systems, can be used to achieve up to 10-50 nm resolution. The data recorded by such instruments often requires substantial processing, essentially solving an optical inverse problem for each image.
Metamaterial-based superlenses can image with a resolution better than the diffraction limit by locating the objective lens extremely close (typically hundreds of nanometers) to the object.
In fluorescence microscopy the excitation and emission are typically on different wavelengths. In total internal reflection fluorescence microscopy a thin portion of the sample located immediately on the cover glass is excited with an evanescent field, and recorded with a conventional diffraction-limited objective, improving the axial resolution.
However, because these techniques cannot image beyond 1 wavelength, they cannot be used to image into objects thicker than 1 wavelength which limits their applicability.
Far-field techniques
Far-field imaging techniques are most desirable for imaging objects that are large compared to the illumination wavelength but that contain fine structure. This includes nearly all biological applications in which cells span multiple wavelengths but contain structure down to molecular scales. In recent years several techniques have shown that sub-diffraction limited imaging is possible over macroscopic distances. These techniques usually exploit optical nonlinearity in a material's reflected light to generate resolution beyond the diffraction limit.
Among these techniques, the STED microscope has been one of the most successful. In STED, multiple laser beams are used to first excite, and then quench fluorescent dyes. The nonlinear response to illumination caused by the quenching process in which adding more light causes the image to become less bright generates sub-diffraction limited information about the location of dye molecules, allowing resolution far beyond the diffraction limit provided high illumination intensities are used.
Laser beams
The limits on focusing or collimating a laser beam are very similar to the limits on imaging with a microscope or telescope. The only difference is that laser beams are typically soft-edged beams. This non-uniformity in light distribution leads to a coefficient slightly different from the 1.22 value familiar in imaging. However, the scaling with wavelength and aperture is exactly the same.
The beam quality of a laser beam is characterized by how well its propagation matches an ideal Gaussian beam at the same wavelength. The beam quality factor M squared (M2) is found by measuring the size of the beam at its waist, and its divergence far from the waist, and taking the product of the two, known as the beam parameter product. The ratio of this measured beam parameter product to that of the ideal is defined as M2, so that M2=1 describes an ideal beam. The M2 value of a beam is conserved when it is transformed by diffraction-limited optics.
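The beam parameter product comparison can be written out directly; the convention assumed here is waist radius and half-angle divergence, and the numbers below are made-up sample values:

```python
import math

def m_squared(waist_radius_m, half_divergence_rad, wavelength_m):
    """M^2 = measured beam parameter product / ideal Gaussian BPP (= lambda / pi)."""
    measured_bpp = waist_radius_m * half_divergence_rad
    ideal_bpp = wavelength_m / math.pi
    return measured_bpp / ideal_bpp

# Sample numbers: 0.5 mm waist radius, 0.8 mrad half-angle divergence, 1064 nm laser.
print(m_squared(0.5e-3, 0.8e-3, 1064e-9))  # ~1.18, i.e. close to diffraction limited
```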
The outputs of many low and moderately powered lasers have M2 values of 1.2 or less, and are essentially diffraction-limited.
Other waves
The same equations apply to other wave-based sensors, such as radar and the human ear.
As opposed to light waves (i.e., photons), massive particles have a different relationship between their quantum mechanical wavelength and their energy. This relationship indicates that the effective "de Broglie" wavelength is inversely proportional to the momentum of the particle. For example, an electron at an energy of 10 keV has a wavelength of 0.01 nm, allowing the electron microscope (SEM or TEM) to achieve high resolution images. Other massive particles such as helium, neon, and gallium ions have been used to produce images at resolutions beyond what can be attained with visible light. Such instruments provide nanometer scale imaging, analysis and fabrication capabilities at the expense of system complexity.
See also
Rayleigh criterion
References
External links
Describes the Leica APO-Telyt-R 280mm f/4, a diffraction-limited photographic lens.
Diffraction
Telescopes
Microscopes | Diffraction-limited system | [
"Physics",
"Chemistry",
"Materials_science",
"Astronomy",
"Technology",
"Engineering"
] | 2,666 | [
"Spectrum (physical sciences)",
"Telescopes",
"Astronomical instruments",
"Measuring instruments",
"Crystallography",
"Diffraction",
"Microscopes",
"Microscopy",
"Spectroscopy"
] |
377,537 | https://en.wikipedia.org/wiki/Hankel%20matrix | In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a n x m matrix in which each ascending skew-diagonal from left to right is constant. For example,
More generally, a Hankel matrix is any matrix of the form
In terms of the components, if the (i, j) element of A is denoted A_{i,j}, then constancy along the ascending skew-diagonals means that A_{i,j} = A_{i+k, j−k} for all admissible k; equivalently, A_{i,j} depends only on the sum i + j.
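The worked example matrix was lost from the text above, so here is a small substitute sketch (the particular sequence 1…7 is an arbitrary choice) showing the constant anti-diagonals and the component relation:

```python
import numpy as np
from scipy.linalg import hankel

# Build a 4x4 Hankel matrix from the sequence 1..7:
# first column (1, 2, 3, 4), last row (4, 5, 6, 7).
H = hankel([1, 2, 3, 4], [4, 5, 6, 7])
print(H)
# [[1 2 3 4]
#  [2 3 4 5]
#  [3 4 5 6]
#  [4 5 6 7]]

# Each entry depends only on i + j (ascending skew-diagonals are constant).
n = H.shape[0]
assert all(H[i, j] == H[i + 1, j - 1]
           for i in range(n - 1) for j in range(1, n))
```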
Properties
Any Hankel matrix is symmetric.
Let J be the exchange matrix. If H is a Hankel matrix, then H = T J, where T is a Toeplitz matrix.
If T is real symmetric, then H = T J will have the same eigenvalues as T up to sign.
The Hilbert matrix is an example of a Hankel matrix.
The determinant of a Hankel matrix is called a catalecticant.
Hankel operator
Given a formal Laurent series
the corresponding Hankel operator is defined as
This takes a polynomial and sends it to the product , but discards all powers of with a non-negative exponent, so as to give an element in , the formal power series with strictly negative exponents. The map is in a natural way -linear, and its matrix with respect to the elements and is the Hankel matrix
Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if is a rational function, that is, a fraction of two polynomials
Approximations
We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.
Note that the matrix does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation is a Hankel matrix, which can be shown with AAK theory.
Hankel matrix transform
The Hankel matrix transform, or simply Hankel transform, of a sequence is the sequence of the determinants of the Hankel matrices formed from that sequence. Given an integer n > 0, the corresponding n-dimensional Hankel matrix has as its (i, j) matrix element the (i + j)-th term of the sequence, and the n-th term of the Hankel transform is its determinant. The Hankel transform is invariant under the binomial transform of a sequence: if one sequence is the binomial transform of another, the two have the same Hankel transform.
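A short numerical sketch of the transform (using the Catalan numbers purely as a convenient test sequence; their Hankel transform is the all-ones sequence, which the code reproduces, and the invariance under the binomial transform is checked as well):

```python
import numpy as np
from math import comb

def hankel_transform(seq):
    """Return the sequence of determinants of the Hankel matrices of `seq`."""
    out = []
    for n in range(1, len(seq) // 2 + 1):
        B = np.array([[seq[i + j] for j in range(n)] for i in range(n)], dtype=float)
        out.append(round(np.linalg.det(B)))
    return out

catalan = [comb(2 * k, k) // (k + 1) for k in range(10)]  # 1, 1, 2, 5, 14, ...
print(hankel_transform(catalan))                          # [1, 1, 1, 1, 1]

# Invariance under the binomial transform: c_n = sum_k C(n, k) * b_k.
binomial = [sum(comb(n, k) * catalan[k] for k in range(n + 1)) for n in range(10)]
print(hankel_transform(binomial))                         # [1, 1, 1, 1, 1] again
```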
Applications of Hankel matrices
Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization. The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
Method of moments for polynomial distributions
The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.
Positive Hankel matrices and the Hamburger moment problems
See also
Cauchy matrix
Jacobi operator
Toeplitz matrix, an "upside down" (that is, row-reversed) Hankel matrix
Vandermonde matrix
Notes
References
Brent R.P. (1999), "Stability of fast algorithms for structured linear systems", Fast Reliable Algorithms for Matrices with Structure (editors—T. Kailath, A.H. Sayed), ch.4 (SIAM).
Matrices
Transforms | Hankel matrix | [
"Mathematics"
] | 731 | [
"Functions and mappings",
"Mathematical objects",
"Matrices (mathematics)",
"Mathematical relations",
"Transforms"
] |
377,876 | https://en.wikipedia.org/wiki/Time%20of%20flight | Time of flight (ToF) is the measurement of the time taken by an object, particle or wave (be it acoustic, electromagnetic, etc.) to travel a distance through a medium. This information can then be used to measure velocity or path length, or as a way to learn about the particle or medium's properties (such as composition or flow rate). The traveling object may be detected directly (direct time of flight, dToF, e.g., via an ion detector in mass spectrometry) or indirectly (indirect time of flight, iToF, e.g., by light scattered from an object in laser doppler velocimetry). Time of flight technology has found valuable applications in the monitoring and characterization of material and biomaterials, hydrogels included.
Overview
In electronics, one of the earliest devices using the principle are ultrasonic distance-measuring devices, which emit an ultrasonic pulse and are able to measure the distance to a solid object based on the time taken for the wave to bounce back to the emitter. The ToF method is also used to estimate the electron mobility. Originally, it was designed for measurement of low-conductive thin films, later adjusted for common semiconductors. This experimental technique is used for metal-dielectric-metal structures as well as organic field-effect transistors. The excess charges are generated by application of the laser or voltage pulse.
For Magnetic Resonance Angiography (MRA), ToF is a major underlying method. In this method, blood entering the imaged area is not yet saturated, giving it a much higher signal when using short echo time and flow compensation. It can be used in the detection of aneurysm, stenosis or dissection.
In time-of-flight mass spectrometry, ions are accelerated by an electrical field to the same kinetic energy with the velocity of the ion depending on the mass-to-charge ratio. Thus the time-of-flight is used to measure velocity, from which the mass-to-charge ratio can be determined. The time-of-flight of electrons is used to measure their kinetic energy.
In near-infrared spectroscopy, the ToF method is used to measure the media-dependent optical pathlength over a range of optical wavelengths, from which composition and properties of the media can be analyzed.
In ultrasonic flow meter measurement, ToF is used to measure speed of signal propagation upstream and downstream of flow of a media, in order to estimate total flow velocity. This measurement is made in a collinear direction with the flow.
In planar Doppler velocimetry (optical flow meter measurement), ToF measurements are made perpendicular to the flow by timing when individual particles cross two or more locations along the flow (collinear measurements would require generally high flow velocities and extremely narrow-band optical filters).
In optical interferometry, the pathlength difference between sample and reference arms can be measured by ToF methods, such as frequency modulation followed by phase shift measurement or cross correlation of signals. Such methods are used in laser radar and laser tracker systems for medium-long range distance measurement.
In neutron time-of-flight scattering, a pulsed monochromatic neutron beam is scattered by a sample. The energy spectrum of the scattered neutrons is measured via time of flight.
In kinematics, ToF is the duration in which a projectile is traveling through the air. Given the initial velocity u of a particle launched from the ground, the downward (i.e. gravitational) acceleration g, and the projectile's angle of projection θ (measured relative to the horizontal), a simple rearrangement of the SUVAT equation s = ut + (1/2)at² results in the equation

```latex
t = \frac{2 u \sin\theta}{g}
```

for the time of flight of a projectile.
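A direct numerical check of the formula above (the launch parameters are arbitrary sample values):

```python
import math

def time_of_flight(initial_speed, launch_angle_deg, g=9.81):
    """Time of flight t = 2 * u * sin(theta) / g for a projectile launched from the ground."""
    theta = math.radians(launch_angle_deg)
    return 2.0 * initial_speed * math.sin(theta) / g

print(time_of_flight(20.0, 45.0))  # ~2.88 s for a 20 m/s launch at 45 degrees
```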
In mass spectrometry
The time-of-flight principle can be applied for mass spectrometry. Ions are accelerated by an electric field of known strength. This acceleration results in an ion having the same kinetic energy as any other ion that has the same charge. The velocity of the ion depends on the mass-to-charge ratio. The time that it subsequently takes for the particle to reach a detector at a known distance is measured. This time will depend on the mass-to-charge ratio of the particle (heavier particles reach lower speeds). From this time and the known experimental parameters one can find the mass-to-charge ratio of the ion, the elapsed time being measured from the instant the particle leaves the source to the instant it reaches the detector.
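A minimal sketch of that relation, assuming an idealized linear ToF instrument with a single acceleration potential U and a field-free drift length L (all numerical values below are illustrative assumptions):

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
AMU = 1.66053906660e-27      # atomic mass unit, kg

def drift_time(mass_amu, charge_states, accel_voltage_v, drift_length_m):
    """Flight time t = L * sqrt(m / (2 q U)) for an ion accelerated through potential U."""
    m = mass_amu * AMU
    q = charge_states * E_CHARGE
    return drift_length_m * math.sqrt(m / (2.0 * q * accel_voltage_v))

# Singly charged ions, 20 kV acceleration, 1 m drift tube (assumed numbers).
for mass in (100, 101, 500):
    print(mass, f"{drift_time(mass, 1, 20e3, 1.0) * 1e6:.3f} us")
```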
In flow meters
An ultrasonic flow meter measures the velocity of a liquid or gas through a pipe using acoustic sensors. This has some advantages over other measurement techniques. The results are slightly affected by temperature, density or conductivity. Maintenance is inexpensive because there are no moving parts.
Ultrasonic flow meters come in three different types: transmission (contrapropagating transit time) flowmeters, reflection (Doppler) flowmeters, and open-channel flowmeters. Transit time flowmeters work by measuring the time difference between an ultrasonic pulse sent in the flow direction and an ultrasound pulse sent opposite the flow direction. Doppler flowmeters measure the doppler shift resulting in reflecting an ultrasonic beam off either small particles in the fluid, air bubbles in the fluid, or the flowing fluid's turbulence. Open channel flow meters measure upstream levels in front of flumes or weirs.
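For the transit-time variety, the flow speed can be recovered from the two transit times alone; the sketch below assumes the simplest collinear-path geometry (acoustic path of length L along the flow), which is an idealization of real meter geometries:

```python
def flow_velocity(path_length_m, t_downstream_s, t_upstream_s):
    """Collinear transit-time estimate: v = (L / 2) * (1/t_down - 1/t_up).

    Derivation: t_down = L / (c + v) and t_up = L / (c - v), so the sound
    speed c cancels when the reciprocals are subtracted.
    """
    return 0.5 * path_length_m * (1.0 / t_downstream_s - 1.0 / t_upstream_s)

# Illustrative values: 0.3 m path, water-like sound speed (~1480 m/s), ~2 m/s flow.
print(flow_velocity(0.3, 0.3 / (1480 + 2), 0.3 / (1480 - 2)))  # ~2.0 m/s
```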
Optical time-of-flight sensors consist of two light beams projected into the fluid whose detection is either interrupted or instigated by the passage of small particles (which are assumed to be following the flow). This is not dissimilar from the optical beams used as safety devices in motorized garage doors or as triggers in alarm systems. The speed of the particles is calculated by knowing the spacing between the two beams. If there is only one detector, then the time difference can be measured via autocorrelation. If there are two detectors, one for each beam, then direction can also be known. Since the location of the beams is relatively easy to determine, the precision of the measurement depends primarily on how small the setup can be made. If the beams are too far apart, the flow could change substantially between them, thus the measurement becomes an average over that space. Moreover, multiple particles could reside between them at any given time, and this would corrupt the signal since the particles are indistinguishable. For such a sensor to provide valid data, it must be small relative to the scale of the flow and the seeding density. MOEMS approaches yield extremely small packages, making such sensors applicable in a variety of situations.
In physics
Usually the time-of-flight tube used in mass spectrometry is praised for simplicity, but for precision measurements of charged low-energy particles the electric and magnetic fields in the tube have to be controlled to within 10 mV and 1 nT respectively.
The work function homogeneity of the tube can be controlled by a Kelvin probe. The magnetic field can be measured by a fluxgate compass. High frequencies are passively shielded and damped by radar-absorbent material. To generate an arbitrary low-frequency field, the screen is divided into plates (overlapping and connected by capacitors), with a bias voltage on each plate and a bias current in a coil behind each plate, whose flux is closed by an outer core. In this way the tube can be configured to act as a weak achromatic quadrupole lens with an aperture, with a grid and a delay-line detector in the diffraction plane to perform angle-resolved measurements. By changing the field, the angle of the field of view can be changed, and a deflecting bias can be superimposed to scan through all angles.
When no delay line detector is used, the ions can be focused onto a detector using two or three einzel lenses placed in the vacuum tube between the ion source and the detector.
To perform magnetic experiments and to control the electrons from their start, the sample should be immersed in the tube, which has holes and apertures for admitting and blocking stray light.
Camera
Detector
See also
Propagation delay
Round-trip time
Time of arrival
Time of transmission
References
Mass spectrometry
Spectroscopy
Time measurement systems | Time of flight | [
"Physics",
"Chemistry"
] | 1,680 | [
"Molecular physics",
"Physical quantities",
"Time",
"Time measurement systems",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"Spacetime",
"Spectroscopy",
"Matter"
] |
378,010 | https://en.wikipedia.org/wiki/Planning | Planning is the process of thinking regarding the activities required to achieve a desired goal. Planning is based on foresight, the fundamental capacity for mental time travel. Some researchers regard the evolution of forethought - the capacity to think ahead - as a prime mover in human evolution.
Planning is a fundamental property of intelligent behavior. It involves the use of logic and imagination to visualize not only a desired result, but the steps necessary to achieve that result.
An important aspect of planning is its relationship to forecasting. Forecasting aims to predict what the future will look like, while planning imagines what the future could look like.
Planning according to established principles - most notably since the early-20th century - forms a core part of many professional occupations, particularly in fields such as management and business. Once people have developed a plan, they can measure and assess progress, efficiency and effectiveness. As circumstances change, plans may need to be modified or even abandoned.
In light of the popularity of the concept of planning, some adherents of the idea advocate planning for unplannable eventualities.
Psychology
Planning has been modeled in terms of intentions: deciding what tasks one might wish to do; tenacity: continuing towards a goal in the face of difficulty; and flexibility: adapting one's approach in response to how implementation unfolds. An implementation intention specifies that a behavior an individual believes to be correlated with a goal will take place under particular conditions, such as at a particular time or in a particular place. Implementation intentions are distinguished from goal intentions, which specify an outcome, such as running a marathon.
Neurology
Planning is one of the executive functions of the brain, encompassing the neurological processes involved in the formulation, evaluation and selection of a sequence of thoughts and actions to achieve a desired goal. Various studies utilizing a combination of neuropsychological, neuropharmacological and functional neuroimaging approaches have suggested there is a positive relationship between impaired planning ability and damage to the frontal lobe.
A specific area within the mid-dorsolateral frontal cortex located in the frontal lobe has been implicated as playing an intrinsic role in both cognitive planning and associated executive traits such as working memory.
Disruption of the neural pathways between this area of the frontal cortex and the basal ganglia, specifically the striatum (the corticostriatal pathway), via various mechanisms such as traumatic brain injury or the effects of neurodegenerative diseases, may disrupt the processes required for normal planning function.
Individuals who were born at very low birth weight (<1,500 grams) or extremely low birth weight are at greater risk for various cognitive deficits, including impaired planning ability.
The other region activated in the planning process is the default mode network, which contributes to remembering the past and imagining the future. This network is a distributed set of regions involving the association cortex and paralimbic regions while sparing the sensory and motor cortex, which makes it possible for the planning process to be disrupted by an active task that engages sensory and motor regions.
Neuropsychological tests
There are a variety of neuropsychological tests which can be used to measure variance of planning ability between the subject and controls.
Tower of Hanoi, a puzzle invented in 1883 by the French mathematician Édouard Lucas. There are different variations of the puzzle: the classic version consists of three rods and usually seven to nine discs of successively smaller size. Planning is a key component of the problem-solving skills necessary to achieve the objective, which is to move the entire stack to another rod (a minimal recursive solution is sketched after the rules below), obeying the following rules:
Only one disk may be moved at a time.
Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
No disk may be placed on top of a smaller disk.
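The puzzle's optimal solution has a simple recursive structure, which is one reason it is used to probe planning: solving it efficiently requires holding a hierarchy of subgoals in mind. The following is a minimal illustrative sketch in Python (the rod labels are arbitrary and not part of any test protocol):

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Return the minimal move list for transferring n disks from source to target."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # subgoal: move the n-1 smaller disks out of the way
    moves.append((source, target))               # move the largest remaining disk
    hanoi(n - 1, spare, target, source, moves)   # subgoal: re-stack the smaller disks on top of it
    return moves

print(len(hanoi(3)))   # 7 moves, the minimum 2**3 - 1 for three disks
```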
Tower of London is another test that was developed in 1992 by Tim Shallice specifically to detect deficits in planning as may occur with damage to the frontal lobe. Test participants with damage to the left anterior frontal lobe demonstrated planning deficits (i.e., greater number of moves required for solution).
Test participants with damage to the right anterior, and left or right posterior areas of the frontal lobes, showed no impairment. The results implicating the left anterior frontal lobes involvement in solving the Tower of London were supported in concomitant neuroimaging studies which also showed a reduction in regional cerebral blood flow to the left pre-frontal lobe. For the number of moves, a significant negative correlation was observed for the left prefrontal area: i.e. subjects that took more time planning their moves showed greater activation in the left prefrontal area.
Planning theories
Business
Patrick Montana and Bruce Charnov outline a three-step result-oriented process for planning:
Choosing a destination
Evaluating alternative routes
Deciding the specific course of the plan
In organizations, planning can become a management process, concerned with defining goals for a future direction and determining the missions and resources needed to achieve those targets. To meet the goals, managers may develop plans such as a business plan or a marketing plan. Planning always has a purpose. The purpose may involve the achievement of certain goals or targets: efficient use of resources, reducing risk, expanding the organization and its assets, etc.
Public policy
Public policies include laws, rules, decisions, and decrees. Public policy can be defined as efforts to tackle social issues via policymaking. A policy is crafted with a specific goal in mind in order to address a societal problem that has been prioritized by the government.
Public policy planning includes environmental, land use, regional, urban and spatial planning. In many countries, the operation of a town and country planning system is often referred to as "planning" and the professionals which operate the system are known as "planners".
Planning is both a conscious and a sub-conscious activity. It is "an anticipatory decision making process" that helps in coping with complexities. It is deciding on a future course of action from among alternatives. It is a process that involves making and evaluating each of a set of interrelated decisions. It is the selection of missions and objectives and the "translation of knowledge into action." A planned performance brings better results than an unplanned one. A manager's job is planning, monitoring and controlling. Planning and goal setting are important traits of an organization. Planning is done at all levels of the organization and includes the plan, the thought process, action, and implementation. Planning gives more power over the future. It is deciding in advance what to do, how to do it, when to do it, and who should do it; this bridges the gap between where the organization is and where it wants to be. The planning function involves establishing goals and arranging them in logical order. An organization that plans well achieves goals faster than one that does not plan before implementation.
Personal
Planning is not just a professional activity: it is a feature of everyday life, whether for career advancement, organizing an event or even just getting through a busy day.
Alternatives to planning
Opportunism can supplement or replace planning.
Types of planning
Automated planning and scheduling
Business plan
Central planning
Collaborative planning, forecasting, and replenishment
Comprehensive planning
Contingency planning
Economic planning
Enterprise architecture planning
Environmental planning
Event planning
Family planning
Financial planning
Land use planning
Landscape planning
Lesson planning
Marketing plan
Network resource planning
Operational planning
Planning Domain Definition Language
Regional planning
Site planning
Spatial planning
Strategic planning
Succession planning
Time management
Urban planning
See also
Futures studies
Learning theory (education)
Planning fallacy
Project management
Time management
References
Further reading
Bazin, A. (2012). Bilateral and multilateral planning: Best practices and lessons learned. Strategos.
Das, J. P., Binod C. Kar, and Rauno K. Parrila. Cognitive Planning: The Psychological Basis of Intelligent Behaviour. Sage Publications Pvt. Ltd; illustrated edition.
Yiftachel, Oren, 1995, "The Dark Side of Modernism: Planning as Control of an Ethnic Minority," in Sophie Watson and Katherine Gibson, eds., Postmodern Cities and Spaces (Oxford and Cambridge, MA: Blackwell), pp. 216–240.
Neuropsychological assessment
Systems engineering | Planning | [
"Engineering"
] | 1,644 | [
"Systems engineering"
] |
378,661 | https://en.wikipedia.org/wiki/Thermoregulation | Thermoregulation is the ability of an organism to keep its body temperature within certain boundaries, even when the surrounding temperature is very different. A thermoconforming organism, by contrast, simply adopts the surrounding temperature as its own body temperature, thus avoiding the need for internal thermoregulation. The internal thermoregulation process is one aspect of homeostasis: a state of dynamic stability in an organism's internal conditions, maintained far from thermal equilibrium with its environment (the study of such processes in zoology has been called physiological ecology). If the body is unable to maintain a normal temperature and it increases significantly above normal, a condition known as hyperthermia occurs. Humans may also experience lethal hyperthermia when the wet bulb temperature is sustained above 35°C (95°F) for six hours.
Work in 2022 established by experiment that a wet-bulb temperature exceeding 30.55°C caused uncompensable heat stress in young, healthy adult humans. The opposite condition, when body temperature decreases below normal levels, is known as hypothermia. It results when the homeostatic control mechanisms of heat within the body malfunction, causing the body to lose heat faster than it produces it. Normal body temperature is around 37°C (98.6°F), and hypothermia sets in when the core body temperature drops below about 35°C (95°F). Usually caused by prolonged exposure to cold temperatures, hypothermia is usually treated by methods that attempt to raise the body temperature back to a normal range.
It was not until the introduction of thermometers that any exact data on the temperature of animals could be obtained. It was then found that local differences were present, since heat production and heat loss vary considerably in different parts of the body, although the circulation of the blood tends to bring about a mean temperature of the internal parts. Hence it is important to identify the parts of the body that most closely reflect the temperature of the internal organs. Also, for such results to be comparable, the measurements must be conducted under comparable conditions. The rectum has traditionally been considered to reflect most accurately the temperature of internal parts, or in some cases of sex or species, the vagina, uterus or bladder. Some animals undergo one of various forms of dormancy where the thermoregulation process temporarily allows the body temperature to drop, thereby conserving energy. Examples include hibernating bears and torpor in bats.
Classification of animals by thermal characteristics
Endothermy vs. ectothermy
Thermoregulation in organisms runs along a spectrum from endothermy to ectothermy. Endotherms create most of their heat via metabolic processes and are colloquially referred to as warm-blooded. When the surrounding temperatures are cold, endotherms increase metabolic heat production to keep their body temperature constant, thus making the internal body temperature of an endotherm more or less independent of the temperature of the environment. Endotherms possess a larger number of mitochondria per cell than ectotherms, enabling them to generate more heat by increasing the rate at which they metabolize fats and sugars. Ectotherms use external sources of temperature to regulate their body temperatures. They are colloquially referred to as cold-blooded despite the fact that body temperatures often stay within the same temperature ranges as warm-blooded animals. Ectotherms are the opposite of endotherms when it comes to regulating internal temperatures. In ectotherms, the internal physiological sources of heat are of negligible importance; the biggest factor that enables them to maintain adequate body temperatures is due to environmental influences. Living in areas that maintain a constant temperature throughout the year, like the tropics or the ocean, has enabled ectotherms to develop behavioral mechanisms that respond to external temperatures, such as sun-bathing to increase body temperature, or seeking the cover of shade to lower body temperature.
Ectotherms
Ectothermic cooling
Vaporization:
Evaporation of sweat and other bodily fluids.
Convection:
Increasing blood flow to body surfaces to maximize heat transfer across the advective gradient.
Conduction:
Losing heat by being in contact with a colder surface. For instance:
Lying on cool ground.
Staying wet in a river, lake or sea.
Covering in cool mud.
Radiation:
Releasing heat by radiating it away from the body.
Ectothermic heating (or minimizing heat loss)
Convection:
Climbing to higher ground up trees, ridges, rocks.
Entering a warm water or air current.
Building an insulated nest or burrow.
Conduction:
Lying on a hot surface.
Radiation:
Lying in the sun (heating this way is affected by the body's angle in relation to the sun).
Folding skin to reduce exposure.
Concealing wing surfaces.
Exposing wing surfaces.
Insulation:
Changing shape to alter surface/volume ratio.
Inflating the body.
To cope with low temperatures, some fish have developed the ability to remain functional even when the water temperature is below freezing; some use natural antifreeze or antifreeze proteins to resist ice crystal formation in their tissues. Amphibians and reptiles cope with heat gain by evaporative cooling and behavioral adaptations. An example of behavioral adaptation is that of a lizard lying in the sun on a hot rock in order to heat through radiation and conduction.
Endothermy
An endotherm is an animal that regulates its own body temperature, typically by keeping it at a constant level. To regulate body temperature, an organism may need to prevent heat gains in arid environments. Evaporation of water, either across respiratory surfaces or across the skin in those animals possessing sweat glands, helps in cooling body temperature to within the organism's tolerance range. Animals with a body covered by fur have limited ability to sweat, relying heavily on panting to increase evaporation of water across the moist surfaces of the lungs and the tongue and mouth. Mammals like cats, dogs and pigs, rely on panting or other means for thermal regulation and have sweat glands only in foot pads and snout. The sweat produced on pads of paws and on palms and soles mostly serves to increase friction and enhance grip. Birds also counteract overheating by gular fluttering, or rapid vibrations of the gular (throat) skin. Down feathers trap warm air acting as excellent insulators just as hair in mammals acts as a good insulator. Mammalian skin is much thicker than that of birds and often has a continuous layer of insulating fat beneath the dermis. In marine mammals, such as whales, or animals that live in very cold regions, such as the polar bears, this is called blubber. Dense coats found in desert endotherms also aid in preventing heat gain such as in the case of the camels.
A cold weather strategy is to temporarily decrease metabolic rate, decreasing the temperature difference between the animal and the air and thereby minimizing heat loss. Furthermore, having a lower metabolic rate is less energetically expensive. Many animals survive cold frosty nights through torpor, a short-term temporary drop in body temperature. Organisms, when presented with the problem of regulating body temperature, have not only behavioural, physiological, and structural adaptations but also a feedback system to trigger these adaptations to regulate temperature accordingly. The main features of this system are stimulus, receptor, modulator, effector and then the feedback of the newly adjusted temperature to the stimulus. This cyclical process aids in homeostasis.
Homeothermy compared with poikilothermy
Homeothermy and poikilothermy refer to how stable an organism's deep-body temperature is. Most endothermic organisms are homeothermic, like mammals. However, animals with facultative endothermy are often poikilothermic, meaning their temperature can vary considerably. Most fish are ectotherms, as most of their heat comes from the surrounding water. However, almost all fish are poikilothermic.
Beetles
The physiology of the Dendroctonus micans beetle encompasses a suite of adaptations crucial for its survival and reproduction. Flight capabilities enable them to disperse and locate new host trees, while sensory organs aid in detecting environmental cues and food sources. Of particular importance is their ability to thermoregulate, ensuring optimal body temperature in fluctuating forest conditions. This physiological mechanism, coupled with thermosensation, allows them to thrive across diverse environments. Overall, these adaptations underscore the beetle's remarkable resilience and highlight the significance of understanding their physiology for effective management and conservation efforts.
Vertebrates
By numerous observations upon humans and other animals, John Hunter showed that the essential difference between the so-called warm-blooded and cold-blooded animals lies in observed constancy of the temperature of the former, and the observed variability of the temperature of the latter. Almost all birds and mammals have a high temperature almost constant and independent of that of the surrounding air (homeothermy). Almost all other animals display a variation of body temperature, dependent on their surroundings (poikilothermy).
Brain control
Thermoregulation in both ectotherms and endotherms is controlled mainly by the preoptic area of the anterior hypothalamus. Such homeostatic control is separate from the sensation of temperature.
In birds and mammals
In cold environments, birds and mammals employ the following adaptations and strategies to minimize heat loss:
Using small smooth muscles (arrector pili in mammals), which are attached to feather or hair shafts; this distorts the surface of the skin making feather/hair shaft stand erect (called goose bumps or goose pimples) which slows the movement of air across the skin and minimizes heat loss.
Increasing body size to more easily maintain core body temperature (warm-blooded animals in cold climates tend to be larger than similar species in warmer climates (see Bergmann's rule))
Having the ability to store energy as fat for metabolism
Have shortened extremities
Have countercurrent blood flow in extremities – this is where the warm arterial blood travelling to the limb passes the cooler venous blood from the limb and heat is exchanged warming the venous blood and cooling the arterial (e.g., Arctic wolf or penguins)
In warm environments, birds and mammals employ the following adaptations and strategies to maximize heat loss:
Behavioural adaptations like living in burrows during the day and being nocturnal
Evaporative cooling by perspiration and panting
Storing fat reserves in one place (e.g., camel's hump) to avoid its insulating effect
Elongated, often vascularized extremities to conduct body heat to the air
In humans
As in other mammals, thermoregulation is an important aspect of human homeostasis. Most body heat is generated in the deep organs, especially the liver, brain, and heart, and in contraction of skeletal muscles. Humans have been able to adapt to a great diversity of climates, including hot humid and hot arid. High temperatures pose serious stresses for the human body, placing it in great danger of injury or even death. For example, one of the most common reactions to hot temperatures is heat exhaustion, which is an illness that could happen if one is exposed to high temperatures, resulting in some symptoms such as dizziness, fainting, or a rapid heartbeat. For humans, adaptation to varying climatic conditions includes both physiological mechanisms resulting from evolution and behavioural mechanisms resulting from conscious cultural adaptations. The physiological control of the body's core temperature takes place primarily through the hypothalamus, which assumes the role as the body's "thermostat". This organ possesses control mechanisms as well as key temperature sensors, which are connected to nerve cells called thermoreceptors. Thermoreceptors come in two subcategories; ones that respond to cold temperatures and ones that respond to warm temperatures. Scattered throughout the body in both peripheral and central nervous systems, these nerve cells are sensitive to changes in temperature and are able to provide useful information to the hypothalamus through the process of negative feedback, thus maintaining a constant core temperature.
There are four avenues of heat loss: evaporation, convection, conduction, and radiation. If skin temperature is greater than that of the surrounding air temperature, the body can lose heat by convection and conduction. However, if air temperature of the surroundings is greater than that of the skin, the body gains heat by convection and conduction. In such conditions, the only means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise. During intense physical activity (e.g. sports), evaporation becomes the main avenue of heat loss. Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss.
In reptiles
Thermoregulation is also an integral part of a reptile's life, specifically lizards such as Microlophus occipitalis and Ctenophorus decresii who must change microhabitats to keep a constant body temperature. By moving to cooler areas when it is too hot and to warmer areas when it is cold, they can thermoregulate their temperature to stay within their necessary bounds.
In plants
Thermogenesis occurs in the flowers of many plants in the family Araceae as well as in cycad cones. In addition, the sacred lotus (Nelumbo nucifera) is able to thermoregulate itself, remaining on average above air temperature while flowering. Heat is produced by breaking down the starch that was stored in their roots, which requires the consumption of oxygen at a rate approaching that of a flying hummingbird.
One possible explanation for plant thermoregulation is to provide protection against cold temperature. For example, the skunk cabbage is not frost-resistant, yet it begins to grow and flower when there is still snow on the ground. Another theory is that thermogenicity helps attract pollinators, which is borne out by observations that heat production is accompanied by the arrival of beetles or flies.
Some plants are known to protect themselves against colder temperatures using antifreeze proteins. This occurs in wheat (Triticum aestivum), potatoes (Solanum tuberosum) and several other angiosperm species.
Behavioral temperature regulation
Animals other than humans regulate and maintain their body temperature with physiological adjustments and behavior. Desert lizards are ectotherms, and therefore are unable to regulate their internal temperature themselves. To regulate their internal temperature, many lizards relocate themselves to a more environmentally favorable location. They may do this in the morning only by raising their head from its burrow and then exposing their entire body. By basking in the sun, the lizard absorbs solar heat. It may also absorb heat by conduction from heated rocks that have stored radiant solar energy. To lower their temperature, lizards exhibit varied behaviors. Sand seas, or ergs, can reach very high surface temperatures, and the sand lizard will hold its feet up in the air to cool down, seek cooler objects with which to contact, find shade, or return to its burrow. They also go to their burrows to avoid cooling when the temperature falls. Aquatic animals can also regulate their temperature behaviorally by changing their position in the thermal gradient. Sprawling prone in a cool shady spot, "splooting," has been observed in squirrels on hot days.
Animals also engage in kleptothermy in which they share or steal each other's body warmth. Kleptothermy is observed, particularly amongst juveniles, in endotherms such as bats and birds (such as the mousebird and emperor penguin). This allows the individuals to increase their thermal inertia (as with gigantothermy) and so reduce heat loss. Some ectotherms share burrows of ectotherms. Other animals exploit termite mounds.
Some animals living in cold environments maintain their body temperature by preventing heat loss. Their fur grows more densely to increase the amount of insulation. Some animals are regionally heterothermic and are able to allow their less insulated extremities to cool to temperatures much lower than their core temperature—nearly to . This minimizes heat loss through less insulated body parts, like the legs, feet (or hooves), and nose.
Different species of Drosophila found in the Sonoran Desert will exploit different species of cacti based on the thermotolerance differences between species and hosts. For example, Drosophila mettleri is found in cacti like the saguaro and senita; these two cacti remain cool by storing water. Over time, the genes selecting for higher heat tolerance were reduced in the population due to the cooler host climate the fly is able to exploit.
Some flies, such as Lucilia sericata, lay their eggs en masse. The resulting group of larvae, depending on its size, is able to thermoregulate and keep itself at the optimum temperature for development.
Koalas also can behaviorally thermoregulate by seeking out cooler portions of trees on hot days. They preferentially wrap themselves around the coolest portions of trees, typically near the bottom, to increase their passive radiation of internal body heat.
Hibernation, estivation and daily torpor
To cope with limited food resources and low temperatures, some mammals hibernate during cold periods. To remain in "stasis" for long periods, these animals build up brown fat reserves and slow all body functions. True hibernators (e.g., groundhogs) keep their body temperatures low throughout hibernation whereas the core temperature of false hibernators (e.g., bears) varies; occasionally the animal may emerge from its den for brief periods. Some bats are true hibernators and rely upon a rapid, non-shivering thermogenesis of their brown fat deposit to bring them out of hibernation.
Estivation is similar to hibernation, however, it usually occurs in hot periods to allow animals to avoid high temperatures and desiccation. Both terrestrial and aquatic invertebrate and vertebrates enter into estivation. Examples include lady beetles (Coccinellidae), North American desert tortoises, crocodiles, salamanders, cane toads, and the water-holding frog.
Daily torpor occurs in small endotherms like bats and hummingbirds, which temporarily reduces their high metabolic rates to conserve energy.
Variation in animals
Normal human temperature
Previously, average oral temperature for healthy adults had been considered , while normal ranges are . In Poland and Russia, the temperature had been measured axillarily (under the arm). was considered "ideal" temperature in these countries, while normal ranges are .
Recent studies suggest that the average temperature for healthy adults is (same result in three different studies). Variations (one standard deviation) from three other studies are:
for males, for females
Measured temperature varies according to thermometer placement, with rectal temperature being higher than oral temperature, while axillary temperature is lower than oral temperature. The average difference between oral and axillary temperatures of Indian children aged 6–12 was found to be only 0.1 °C (standard deviation 0.2 °C), and the mean difference in Maltese children aged 4–14 between oral and axillary temperature was 0.56 °C, while the mean difference between rectal and axillary temperature for children under 4 years old was 0.38 °C.
Variations due to circadian rhythms
In humans, a diurnal variation has been observed dependent on the periods of rest and activity, lowest at 11 p.m. to 3 a.m. and peaking at 10 a.m. to 6 p.m. Monkeys also have a well-marked and regular diurnal variation of body temperature that follows periods of rest and activity, and is not dependent on the incidence of day and night; nocturnal monkeys reach their highest body temperature at night and lowest during the day. Sutherland Simpson and J.J. Galbraith observed that all nocturnal animals and birds – whose periods of rest and activity are naturally reversed through habit and not from outside interference – experience their highest temperature during the natural period of activity (night) and lowest during the period of rest (day). Those diurnal temperatures can be reversed by reversing their daily routine.
In essence, the temperature curve of diurnal birds is similar to that of humans and other homeothermic animals, except that the maximum occurs earlier in the afternoon and the minimum earlier in the morning. Also, the curves obtained from rabbits, guinea pigs, and dogs were quite similar to those from humans. These observations indicate that body temperature is partially regulated by circadian rhythms.
Variations due to human menstrual cycles
During the follicular phase (which lasts from the first day of menstruation until the day of ovulation), the average basal body temperature in women ranges from . Within 24 hours of ovulation, women experience an elevation of due to the increased metabolic rate caused by sharply elevated levels of progesterone. The basal body temperature ranges between throughout the luteal phase, and drops down to pre-ovulatory levels within a few days of menstruation. Women can chart this phenomenon to determine whether and when they are ovulating, so as to aid conception or contraception.
Variations due to fever
Fever is a regulated elevation of the set point of core temperature in the hypothalamus, caused by circulating pyrogens produced by the immune system. To the subject, a rise in core temperature due to fever may result in feeling cold in an environment where people without fever do not.
Variations due to biofeedback
Some monks are known to practice Tummo, biofeedback meditation techniques, that allow them to raise their body temperatures substantially.
Effect on lifespan
The effects of such a genetic change in body temperature on longevity are difficult to study in humans.
Limits compatible with life
There are limits both of heat and cold that an endothermic animal can bear and other far wider limits that an ectothermic animal may endure and yet live. The effect of too extreme a cold is to decrease metabolism, and hence to lessen the production of heat. Both catabolic and anabolic pathways share in this metabolic depression, and, though less energy is used up, still less energy is generated. The effects of this diminished metabolism become telling on the central nervous system first, especially the brain and those parts concerning consciousness; both heart rate and respiration rate decrease; judgment becomes impaired as drowsiness supervenes, becoming steadily deeper until the individual loses consciousness; without medical intervention, death by hypothermia quickly follows. Occasionally, however, convulsions may set in towards the end, and death is caused by asphyxia.
In experiments on cats performed by Sutherland Simpson and Percy T. Herring, the animals were unable to survive when rectal temperature fell below . At this low temperature, respiration became increasingly feeble; heart-impulse usually continued after respiration had ceased, the beats becoming very irregular, appearing to cease, then beginning again. Death appeared to be mainly due to asphyxia, and the only certain sign that it had taken place was the loss of knee-jerks.
However, too high a temperature speeds up the metabolism of different tissues to such a rate that their metabolic capital is soon exhausted. Blood that is too warm produces dyspnea by exhausting the metabolic capital of the respiratory centre; heart rate is increased; the beats then become arrhythmic and eventually cease. The central nervous system is also profoundly affected by hyperthermia and delirium, and convulsions may set in. Consciousness may also be lost, propelling the person into a comatose condition. These changes can sometimes also be observed in patients experiencing an acute fever. Mammalian muscle becomes rigid with heat rigor at about 50 °C, with the sudden rigidity of the whole body rendering life impossible.
H.M. Vernon performed work on the death temperature and paralysis temperature (temperature of heat rigor) of various animals. He found that species of the same class showed very similar temperature values, those from the Amphibia examined being 38.5 °C, fish 39 °C, reptiles 45 °C, and various molluscs 46 °C. Also, in the case of pelagic animals, he showed a relation between death temperature and the quantity of solid constituents of the body. In higher animals, however, his experiments tend to show that there is greater variation in both the chemical and physical characteristics of the protoplasm and, hence, greater variation in the extreme temperature compatible with life.
A 2022 study on the effect of heat on young people found that the critical wet-bulb temperature at which heat stress can no longer be compensated, Twb,crit, in young, healthy adults performing tasks at modest metabolic rates mimicking basic activities of daily life was much lower than the 35°C usually assumed, at about 30.55°C in 36–40°C humid environments, but progressively decreased in hotter, dry ambient environments.
Arthropoda
The maximum temperatures tolerated by certain thermophilic arthropods exceeds the lethal temperatures for most vertebrates.
The most heat-resistant insects are three genera of desert ants recorded from three different parts of the world. The ants have developed a lifestyle of scavenging for short durations during the hottest hours of the day, in excess of , for the carcasses of insects and other forms of life which have died from heat stress.
In April 2014, the South Californian mite Paratarsotomus macropalpis has been recorded as the world's fastest land animal relative to body length, at a speed of 322 body lengths per second. Besides the unusually great speed of the mites, the researchers were surprised to find the mites running at such speeds on concrete at temperatures up to , which is significant because this temperature is well above the lethal limit for the majority of animal species. In addition, the mites are able to stop and change direction very quickly.
Spiders like Nephila pilipes exhibit active thermoregulatory behavior. On hot, sunny days, the spider aligns its body with the direction of sunlight to reduce the body area exposed to direct sunlight.
See also
Human body temperature
Innate heat
Insect thermoregulation
Thermal neutral zone
References
Further reading
Guyton's Textbook of Medical Physiology (earlier editions, back to at least the 5th edition of 1976, contain useful information on the subject of thermoregulation, the concepts of which have changed little in that time).
Weldon Owen Pty Ltd. (1993). Encyclopedia of animals – Mammals, Birds, Reptiles, Amphibians. Reader's Digest Association, Inc. Pages 567–568. .
External links
Royal Institution Christmas Lectures 1998
Human homeostasis
Animal physiology
Heat transfer
Articles containing video clips
Mathematics in medicine | Thermoregulation | [
"Physics",
"Chemistry",
"Mathematics",
"Biology"
] | 5,585 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Animals",
"Animal physiology",
"Human homeostasis",
"Applied mathematics",
"Thermoregulation",
"Thermodynamics",
"Homeostasis",
"Mathematics in medicine"
] |
378,744 | https://en.wikipedia.org/wiki/Calcium%20in%20biology | Calcium ions (Ca2+) contribute to the physiology and biochemistry of organisms' cells. They play an important role in signal transduction pathways, where they act as a second messenger, in neurotransmitter release from neurons, in contraction of all muscle cell types, and in fertilization. Many enzymes require calcium ions as a cofactor, including several of the coagulation factors. Extracellular calcium is also important for maintaining the potential difference across excitable cell membranes, as well as proper bone formation.
Plasma calcium levels in mammals are tightly regulated, with bone acting as the major mineral storage site. Calcium ions, Ca2+, are released from bone into the bloodstream under controlled conditions. Calcium is transported through the bloodstream as dissolved ions or bound to proteins such as serum albumin. Parathyroid hormone secreted by the parathyroid gland regulates the resorption of Ca2+ from bone, reabsorption in the kidney back into circulation, and increases in the activation of vitamin D3 to calcitriol. Calcitriol, the active form of vitamin D3, promotes absorption of calcium from the intestines and bones. Calcitonin secreted from the parafollicular cells of the thyroid gland also affects calcium levels by opposing parathyroid hormone; however, its physiological significance in humans is dubious.
Intracellular calcium is stored in organelles which repetitively release and then reaccumulate Ca2+ ions in response to specific cellular events: storage sites include mitochondria and the endoplasmic reticulum.
Characteristic concentrations of calcium in model organisms are: in E. coli 3 mM (bound), 100 nM (free), in budding yeast 2 mM (bound), in mammalian cell 10–100 nM (free) and in blood plasma 2 mM.
Humans
In 2022, calcium was the 277th most commonly prescribed medication in the United States, with more than 700,000 prescriptions.
Dietary recommendations
The US Institute of Medicine (IOM) established Recommended Dietary Allowances (RDAs) for calcium in 1997 and updated those values in 2011. The European Food Safety Authority (EFSA) uses the term Population Reference Intake (PRI) instead of RDA and sets slightly different numbers: ages 4–10 800 mg, ages 11–17 1150 mg, ages 18–24 1000 mg, and >25 years 950 mg.
Because of concerns of long-term adverse side effects such as calcification of arteries and kidney stones, the IOM and EFSA both set Tolerable Upper Intake Levels (ULs) for the combination of dietary and supplemental calcium. From the IOM, people ages 9–18 years are not supposed to exceed 3,000 mg/day; for ages 19–50 not to exceed 2,500 mg/day; for ages 51 and older, not to exceed 2,000 mg/day. The EFSA set UL at 2,500 mg/day for adults but decided the information for children and adolescents was not sufficient to determine ULs.
Labeling
For US food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For calcium labeling purposes, 100% of the Daily Value was 1000 mg, but as of 27 May 2016, it was revised to 1300 mg to bring it into agreement with the RDA. A table of the old and new adult daily values is provided at Reference Daily Intake.
Health claims
Although as a general rule, dietary supplement labeling and marketing are not allowed to make disease prevention or treatment claims, the FDA has for some foods and dietary supplements reviewed the science, concluded that there is significant scientific agreement, and published specifically worded allowed health claims. An initial ruling allowing a health claim for calcium dietary supplements and osteoporosis was later amended to include calcium and vitamin D supplements, effective 1 January 2010. Examples of allowed wording are shown below. In order to qualify for the calcium health claim, a dietary supplement must contain at least 20% of the Reference Dietary Intake, which for calcium means at least 260 mg/serving.
"Adequate calcium throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis."
"Adequate calcium as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life."
"Adequate calcium and vitamin D throughout life, as part of a well-balanced diet, may reduce the risk of osteoporosis."
"Adequate calcium and vitamin D as part of a healthful diet, along with physical activity, may reduce the risk of osteoporosis in later life."
In 2005, the FDA approved a Qualified Health Claim for calcium and hypertension, with suggested wording "Some scientific evidence suggests that calcium supplements may reduce the risk of hypertension. However, FDA has determined that the evidence is inconsistent and not conclusive." Evidence for pregnancy-induced hypertension and preeclampsia was considered inconclusive. The same year, the FDA approved a QHC for calcium and colon cancer, with suggested wording "Some evidence suggests that calcium supplements may reduce the risk of colon/rectal cancer, however, FDA has determined that this evidence is limited and not conclusive." Evidence for breast cancer and prostate cancer was considered inconclusive. Proposals for QHCs for calcium as protective against kidney stones or against menstrual disorders or pain were rejected.
The European Food Safety Authority (EFSA) concluded that "Calcium contributes to the normal development of bones." The EFSA rejected a claim that a cause-and-effect relationship existed between the dietary intake of calcium and potassium and maintenance of normal acid-base balance. The EFSA also rejected claims for calcium and nails, hair, blood lipids, premenstrual syndrome and body weight maintenance.
Food sources
The United States Department of Agriculture (USDA) web site has a very complete searchable table of calcium content (in milligrams) in foods, per common measures such as per 100 grams or per a normal serving.
Measurement in blood
The amount of calcium in blood (more specifically, in blood plasma) can be measured as total calcium, which includes both protein-bound and free calcium. In contrast, ionized calcium is a measure of free calcium. An abnormally high level of calcium in plasma is termed hypercalcemia and an abnormally low level is termed hypocalcemia, with "abnormal" generally referring to levels outside the reference range.
The main methods to measure serum calcium are:
O-Cresolphthalein Complexone Method; a disadvantage of this method is that the volatile nature of the 2-amino-2-methyl-1-propanol used in this method makes it necessary to calibrate the method every few hours in a clinical laboratory setup.
Arsenazo III Method; This method is more robust, but the arsenic in the reagent is a health hazard.
The total amount of Ca2+ present in a tissue may be measured using Atomic absorption spectroscopy, in which the tissue is vaporized and combusted. To measure Ca2+ concentration or spatial distribution within the cell cytoplasm in vivo or in vitro, a range of fluorescent reporters may be used. These include cell permeable, calcium-binding fluorescent dyes such as Fura-2 or genetically engineered variant of green fluorescent protein (GFP) named Cameleon.
Corrected calcium
As a measurement of ionized calcium is not always available, a corrected calcium may be used instead. To calculate corrected calcium in mmol/L, one takes the total calcium in mmol/L and adds 0.02 multiplied by (40 minus the serum albumin in g/L). There is, however, controversy around the usefulness of corrected calcium, as it may be no better than total calcium. It may be more useful to correct total calcium for both albumin and the anion gap.
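Expressed as code, the adjustment described above is a one-line calculation. The sketch below implements only the stated formula and takes no position on the controversy about its usefulness; the example input values are assumed for illustration.

```python
def corrected_calcium(total_ca_mmol_l, albumin_g_l):
    """Albumin-corrected calcium in mmol/L, per the adjustment described above:
    corrected = total + 0.02 * (40 - albumin), with albumin in g/L and a
    reference albumin of 40 g/L."""
    return total_ca_mmol_l + 0.02 * (40.0 - albumin_g_l)

# Example (assumed values): total calcium 2.10 mmol/L with a low albumin of 30 g/L
print(f"{corrected_calcium(2.10, 30.0):.2f} mmol/L")   # 2.30 mmol/L
```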
Other animals
Vertebrates
In vertebrates, calcium ions, like many other ions, are of such vital importance to many physiological processes that its concentration is maintained within specific limits to ensure adequate homeostasis. This is evidenced by human plasma calcium, which is one of the most closely regulated physiological variables in the human body. Normal plasma levels vary between 1 and 2% over any given time. Approximately half of all ionized calcium circulates in its unbound form, with the other half being complexed with plasma proteins such as albumin, as well as anions including bicarbonate, citrate, phosphate, and sulfate.
Different tissues contain calcium in different concentrations. For instance, Ca2+ (mostly calcium phosphate and some calcium sulfate) is the most important (and specific) element of bone and calcified cartilage. In humans, the total body content of calcium is present mostly in the form of bone mineral (roughly 99%). In this state, it is largely unavailable for exchange/bioavailability. The way to overcome this is through the process of bone resorption, in which calcium is liberated into the bloodstream through the action of bone osteoclasts. The remainder of calcium is present within the extracellular and intracellular fluids.
Within a typical cell, the intracellular concentration of ionized calcium is roughly 100 nM, but is subject to increases of 10- to 100-fold during various cellular functions. The intracellular calcium level is kept relatively low with respect to the extracellular fluid, by an approximate magnitude of 12,000-fold. This gradient is maintained through various plasma membrane calcium pumps that utilize ATP for energy, as well as a sizable storage within intracellular compartments. In electrically excitable cells, such as skeletal and cardiac muscles and neurons, membrane depolarization leads to a Ca2+ transient with cytosolic Ca2+ concentration reaching around 1 μM. Mitochondria are capable of sequestering and storing some of that Ca2+. It has been estimated that mitochondrial matrix free calcium concentration rises to the tens of micromolar levels in situ during neuronal activity.
Effects
The effects of calcium on human cells are specific, meaning that different types of cells respond in different ways. However, in certain circumstances, its action may be more general. Ca2+ ions are one of the most widespread second messengers used in signal transduction. They make their entrance into the cytoplasm either from outside the cell through the cell membrane via calcium channels (such as calcium-binding proteins or voltage-gated calcium channels), or from some internal calcium storages such as the endoplasmic reticulum and mitochondria. Levels of intracellular calcium are regulated by transport proteins that remove it from the cell. For example, the sodium-calcium exchanger uses energy from the electrochemical gradient of sodium by coupling the influx of sodium into cell (and down its concentration gradient) with the transport of calcium out of the cell. In addition, the plasma membrane Ca2+ ATPase (PMCA) obtains energy to pump calcium out of the cell by hydrolysing adenosine triphosphate (ATP). In neurons, voltage-dependent, calcium-selective ion channels are important for synaptic transmission through the release of neurotransmitters into the synaptic cleft by vesicle fusion of synaptic vesicles.
Calcium's function in muscle contraction was found as early as 1882 by Ringer. Subsequent investigations were to reveal its role as a messenger about a century later. Because its action is interconnected with cAMP, they are called synarchic messengers. Calcium can bind to several different calcium-modulated proteins such as troponin-C (the first one to be identified) and calmodulin, proteins that are necessary for promoting contraction in muscle.
In the endothelial cells which line the inside of blood vessels, Ca2+ ions can regulate several signaling pathways which cause the smooth muscle surrounding blood vessels to relax. Some of these Ca2+-activated pathways include the stimulation of eNOS to produce nitric oxide, as well as the stimulation of Kca channels to efflux K+ and cause hyperpolarization of the cell membrane. Both nitric oxide and hyperpolarization cause the smooth muscle to relax in order to regulate the amount of tone in blood vessels. However, dysfunction within these Ca2+-activated pathways can lead to an increase in tone caused by unregulated smooth muscle contraction. This type of dysfunction can be seen in cardiovascular diseases, hypertension, and diabetes.
Calcium coordination plays an important role in defining the structure and function of proteins. An example of a protein with calcium coordination is von Willebrand factor (vWF), which has an essential role in the blood clot formation process. It was discovered using single-molecule optical tweezers measurements that calcium-bound vWF acts as a shear force sensor in the blood. Shear force leads to unfolding of the A2 domain of vWF, whose refolding rate is dramatically enhanced in the presence of calcium.
Adaptation
Ca2+ ion flow regulates several secondary messenger systems involved in neural adaptation in the visual, auditory, and olfactory systems. It is often bound to calmodulin, as in the olfactory system, to either enhance or repress cation channels. At other times, a change in calcium level can release guanylyl cyclase from inhibition, as in the photoreception system. Ca2+ ions can also determine the speed of adaptation in a neural system, depending on the receptors and proteins involved, which have varied affinities for calcium and open or close channels at the high or low calcium concentrations present in the cell at a given time.
Negative effects and pathology
Substantial decreases in extracellular Ca2+ ion concentrations may result in a condition known as hypocalcemic tetany, which is marked by spontaneous motor neuron discharge. In addition, severe hypocalcaemia will begin to affect aspects of blood coagulation and signal transduction.
Ca2+ ions can damage cells if they enter in excessive numbers (for example, in the case of excitotoxicity, or over-excitation of neural circuits, which can occur in neurodegenerative diseases, or after insults such as brain trauma or stroke). Excessive entry of calcium into a cell may damage it or even cause it to undergo apoptosis, or death by necrosis. Calcium also acts as one of the primary regulators of osmotic stress (osmotic shock). Chronically elevated plasma calcium (hypercalcemia) is associated with cardiac arrhythmias and decreased neuromuscular excitability. One cause of hypercalcemia is a condition known as hyperparathyroidism.
Invertebrates
Some invertebrates use calcium compounds for building their exoskeleton (shells and carapaces) or endoskeleton (echinoderm plates and poriferan calcareous spicules).
Plants
Stomata closing
When abscisic acid signals the guard cells, free Ca2+ ions enter the cytosol from both outside the cell and internal stores, reversing the concentration gradient so the K+ ions begin exiting the cell. The loss of solutes makes the cell flaccid and closes the stomatal pores.
Cellular division
Calcium is a necessary ion in the formation of the mitotic spindle. Without the mitotic spindle, cellular division cannot occur. Although young leaves have a higher need for calcium, older leaves contain higher amounts of calcium because calcium is relatively immobile through the plant. It is not transported through the phloem because it can bind with other nutrient ions and precipitate out of liquid solutions.
Structural roles
Ca2+ ions are an essential component of plant cell walls and cell membranes, and are used as cations to balance organic anions in the plant vacuole. The Ca2+ concentration of the vacuole may reach millimolar levels. The most striking use of Ca2+ ions as a structural element in algae occurs in the marine coccolithophores, which use Ca2+ to form the calcium carbonate plates, with which they are covered.
Calcium is needed to form the pectin in the middle lamella of newly formed cells.
Calcium is needed to stabilize the permeability of cell membranes. Without calcium, the cell walls are unable to stabilize and hold their contents. This is particularly important in developing fruits. Without calcium, the cell walls are weak and unable to hold the contents of the fruit.
Some plants accumulate Ca in their tissues, thus making them more firm. Calcium is stored as Ca-oxalate crystals in plastids.
Cell signaling
Ca2+ ions are usually kept at nanomolar levels in the cytosol of plant cells, and act in a number of signal transduction pathways as second messengers.
See also
Biology and pharmacology of chemical elements
References
External links
United States Department of Agriculture: Vitamin D and Calcium
National Osteoporosis Foundation: Calcium and vitamin D
Biological systems
Biology and pharmacology of chemical elements
Biology
Calcium signaling
Dietary minerals
Nutrition
Signal transduction | Calcium in biology | [
"Chemistry",
"Biology"
] | 3,536 | [
"Pharmacology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"Signal transduction",
"nan",
"Biochemistry",
"Neurochemistry",
"Calcium signaling"
] |
378,912 | https://en.wikipedia.org/wiki/Potassium%20in%20biology | Potassium is the main intracellular ion for all types of cells, while having a major role in maintenance of fluid and electrolyte balance. Potassium is necessary for the function of all living cells and is thus present in all plant and animal tissues. It is found in especially high concentrations within plant cells, and in a mixed diet, it is most highly concentrated in fruits. The high concentration of potassium in plants, associated with comparatively very low amounts of sodium there, historically resulted in potassium first being isolated from the ashes of plants (potash), which in turn gave the element its modern name. The high concentration of potassium in plants means that heavy crop production rapidly depletes soils of potassium, and agricultural fertilizers consume 93% of the potassium chemical production of the modern world economy.
The functions of potassium and sodium in living organisms are quite different. Animals, in particular, employ sodium and potassium differentially to generate electrical potentials in animal cells, especially in nervous tissue. Potassium depletion in animals, including humans, results in various neurological dysfunctions. Characteristic concentrations of potassium in model organisms are: 30–300 mM in E. coli, 300 mM in budding yeast, 100 mM in mammalian cell and 4 mM in blood plasma.
Function in plants
The main role of potassium in plants is to provide the ionic environment for metabolic processes in the cytosol, and as such functions as a regulator of various processes including growth regulation. Plants require potassium ions (K+) for protein synthesis and for the opening and closing of stomata, which is regulated by proton pumps to make surrounding guard cells either turgid or flaccid. A deficiency of potassium ions can impair a plant's ability to maintain these processes. Potassium also functions in other physiological processes such as photosynthesis, protein synthesis, activation of some enzymes, phloem solute transport of photoassimilates into source organs, and maintenance of cation:anion balance in the cytosol and vacuole.
Function in animals
Potassium is the major cation (K+, a positive ion) inside animal cells, while sodium (Na+) is the major cation outside animal cells. The difference between the concentrations of these charged particles causes a difference in electric potential between the inside and outside of cells, known as the membrane potential. The balance between potassium and sodium is maintained by ion transporters in the cell membrane. All potassium ion channels are tetramers with several conserved secondary structural elements. A number of potassium channel structures have been solved including voltage gated, ligand gated, tandem-pore, and inwardly rectifying channels, from prokaryotes and eukaryotes. The cell membrane potential created by potassium and sodium ions allows the cell to generate an action potential—a "spike" of electrical discharge. The ability of cells to produce electrical discharge is critical for body functions such as neurotransmission, muscle contraction, and heart function.
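As an illustration of how the opposing K+ and Na+ concentration gradients set the membrane potential, the minimal sketch below evaluates the Nernst equilibrium potential for each ion; the concentration figures are typical textbook values assumed for illustration, not data from this article.

```python
import math

def nernst_potential_mV(z, conc_out_mM, conc_in_mM, temp_K=310.0):
    """Nernst equilibrium potential E = (RT / zF) * ln([out]/[in]), in millivolts."""
    R = 8.314      # gas constant, J/(mol*K)
    F = 96485.0    # Faraday constant, C/mol
    return 1000.0 * (R * temp_K) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Illustrative mammalian values: K+ is concentrated inside the cell, Na+ outside.
E_K = nernst_potential_mV(z=1, conc_out_mM=4.0, conc_in_mM=140.0)     # about -95 mV
E_Na = nernst_potential_mV(z=1, conc_out_mM=145.0, conc_in_mM=12.0)   # about +66 mV
print(f"E_K ≈ {E_K:.0f} mV, E_Na ≈ {E_Na:.0f} mV")
```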
Dietary recommendations
The U.S. National Academy of Medicine (NAM), on behalf of both the U.S. and Canada, sets Dietary Reference Intakes, including Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs), or Adequate Intakes (AIs) for when there is not sufficient information to set EARs and RDAs.
For both males and females under 9 years of age, the AIs for potassium are: 400mg of potassium for 0 to 6-month-old infants, 860mg of potassium for 7 to 12-month-old infants, 2,000mg of potassium for 1 to 3-year-old children, and 2,300mg of potassium for 4 to 8-year-old children.
For males 9 years of age and older, the AIs for potassium are: 2,500mg of potassium for 9 to 13-year-old males, 3,000mg of potassium for 14 to 18-year-old males, and 3,400mg for males that are 19 years of age and older.
For females 9 years of age and older, the AIs for potassium are: 2,300mg of potassium for 9 to 18-year-old females, and 2,600mg of potassium for females that are 19 years of age and older.
For pregnant and lactating females, the AIs for potassium are: 2,600mg of potassium for 14 to 18-year-old pregnant females, 2,900mg for pregnant females that are 19 years of age and older; furthermore, 2,500mg of potassium for 14 to 18-year-old lactating females, and 2,800mg for lactating females that are 19 years of age and older. As for safety, the NAM also sets tolerable upper intake levels (ULs) for vitamins and minerals, but for potassium the evidence was insufficient, so no UL was established.
In 2019, the National Academies of Sciences, Engineering, and Medicine revised the Adequate Intake for potassium to 2,600 mg/day for females 19 years of age and older who are not pregnant or lactating, and 3,400 mg/day for males 19 years of age and older.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For people ages 15 and older, the AI is set at 3,500 mg/day. AIs for pregnancy is 3,500 mg/day, for lactation 4,000 mg/day. For children ages 1–14 years, the AIs increase with age from 800 to 2,700 mg/day. These AIs are lower than the U.S. RDAs. The EFSA reviewed the same safety question and decided that there was insufficient data to establish a UL for potassium.
Labeling
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of Daily Value (%DV). For potassium labeling purposes, 100% of the Daily Value was 3500 mg, but as of May 2016, it has been revised to 4700 mg. A table of the old and new adult Daily Values is provided at Reference Daily Intake.
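A percent Daily Value is simply the amount per serving divided by the reference value; the short sketch below applies both the old (3,500 mg) and revised (4,700 mg) potassium reference values mentioned above to a hypothetical serving.

```python
OLD_DV_MG = 3500   # pre-2016 U.S. Daily Value for potassium
NEW_DV_MG = 4700   # revised Daily Value

def percent_dv(amount_mg, reference_mg):
    """Percent Daily Value for a serving containing amount_mg of potassium."""
    return 100.0 * amount_mg / reference_mg

serving_mg = 422   # illustrative figure, roughly one medium banana
print(f"{percent_dv(serving_mg, OLD_DV_MG):.0f}% DV on the old basis")   # ~12%
print(f"{percent_dv(serving_mg, NEW_DV_MG):.0f}% DV on the new basis")   # ~9%
```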
Supplements
20 mEq (781 mg) potassium from potassium gluconate (4680 mg), or potassium citrate (2040 mg), mixed with a half-cup (1.12 dL) water, taken two to four times a day, may be used on a daily basis.
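The milliequivalent figures above convert to milligrams of elemental potassium as mg = mEq × atomic weight / valence; the sketch below reproduces the roughly 781 mg figure quoted for a 20 mEq dose (small differences come from rounding of the atomic weight).

```python
K_ATOMIC_WEIGHT = 39.1   # g/mol for potassium
K_VALENCE = 1            # K+ carries a single positive charge

def meq_to_mg(meq, atomic_weight=K_ATOMIC_WEIGHT, valence=K_VALENCE):
    """Convert milliequivalents of an ion to milligrams of the element."""
    return meq * atomic_weight / valence

print(meq_to_mg(20))   # 782.0 mg, matching the ~781 mg figure in the text
```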
Labeling
Because of the risk of small-bowel lesions, the US FDA requires some potassium salts (for example potassium chloride) containing more than 99 mg (about 1.3 mEq) to be labeled with a warning.
Food sources
Eating a variety of foods that contain potassium is the best way to get an adequate amount.
Foods with high sources of potassium include kiwifruit, orange juice, potatoes, coconut, avocados, apricots, parsnips and turnips, although many other fruits, vegetables, legumes, and meats contain potassium.
Common foods very high in potassium:
beans (white beans and others)
dark leafy greens (spinach, Swiss chard, and others)
potatoes
dried fruit (apricots, peaches, prunes, raisins; figs and dates)
squash
yogurt
fish (salmon)
avocado
nuts (pistachios, almonds, walnuts, etc.)
seeds (squash, pumpkin, sunflower)
Foods containing the highest concentration:
dried herbs
sun dried tomatoes
cocoa solids
whey powder
paprika
yeast extract
rice bran
molasses
dry roasted soybeans
Deficiency
High blood pressure/Hypertension
Diets low in potassium increase risk of hypertension, stroke and cardiovascular disease.
Hypokalemia
A severe shortage of potassium in body fluids may cause a potentially fatal condition known as hypokalemia. Hypokalemia typically results from loss of potassium through diarrhea, diuresis, or vomiting. Symptoms are related to alterations in membrane potential and cellular metabolism. Symptoms include muscle weakness and cramps, paralytic ileus, ECG abnormalities, intestinal paralysis, decreased reflex response and (in severe cases) respiratory paralysis, alkalosis and arrhythmia.
In rare cases, habitual consumption of large amounts of black licorice has resulted in hypokalemia. Licorice contains a compound (glycyrrhizin) that increases urinary excretion of potassium.
Insufficient intake
Adult women in the United States consume on average about half the AI; adult men about two-thirds. For all adults, fewer than 5% exceed the AI. Similarly, in the European Union, insufficient potassium intake is widespread.
Side effects and toxicity
Gastrointestinal symptoms are the most common side effects of potassium supplements, including nausea, vomiting, abdominal discomfort, and diarrhea. Taking potassium with meals or taking a microencapsulated form of potassium may reduce gastrointestinal side effects.
Hyperkalemia is the most serious adverse reaction to potassium. Hyperkalemia occurs when potassium builds up faster than the kidneys can remove it. It is most common in individuals with renal failure. Symptoms of hyperkalemia may include tingling of the hands and feet, muscular weakness, and temporary paralysis. The most serious complication of hyperkalemia is the development of an abnormal heart rhythm (arrhythmia), which can lead to cardiac arrest.
Although hyperkalemia is rare in healthy individuals, oral doses greater than 18 grams taken at one time in individuals not accustomed to high intakes can lead to hyperkalemia.
See also
Biology and pharmacology of chemical elements
References
Further reading
External links
Brooks/Cole publishers – Sodium Potassium pump
Oregon State University – Micronutrient Information Center
Potassium at Lab Tests Online
Potassium: analyte monograph - the Association for Clinical Biochemistry and Laboratory Medicine.
Biological systems
Biology and pharmacology of chemical elements
Dietary minerals
Nutrition
Biology | Potassium in biology | [
"Chemistry",
"Biology"
] | 2,064 | [
"Pharmacology",
"Properties of chemical elements",
"Biology and pharmacology of chemical elements",
"nan",
"Biochemistry"
] |
378,938 | https://en.wikipedia.org/wiki/Magnesium%20in%20biology | Magnesium is an essential element in biological systems. Magnesium occurs typically as the Mg2+ ion. It is an essential mineral nutrient (i.e., element) for life and is present in every cell type in every organism. For example, adenosine triphosphate (ATP), the main source of energy in cells, must bind to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP. As such, magnesium plays a role in the stability of all polyphosphate compounds in the cells, including those associated with the synthesis of DNA and RNA.
Over 300 enzymes require the presence of magnesium ions for their catalytic action, including all enzymes utilizing or synthesizing ATP, or those that use other nucleotides to synthesize DNA and RNA.
In plants, magnesium is necessary for synthesis of chlorophyll and photosynthesis.
Function
A balance of magnesium is vital to the well-being of all organisms. Magnesium is a relatively abundant ion in Earth's crust and mantle and is highly bioavailable in the hydrosphere. This availability, in combination with a useful and very unusual chemistry, may have led to its utilization in evolution as an ion for signaling, enzyme activation, and catalysis. However, the unusual nature of ionic magnesium has also led to a major challenge in the use of the ion in biological systems. Biological membranes are impermeable to magnesium (and other ions), so transport proteins must facilitate the flow of magnesium, both into and out of cells and intracellular compartments.
Human health
Inadequate magnesium intake frequently causes muscle spasms, and has been associated with cardiovascular disease, diabetes, high blood pressure, anxiety disorders, migraines, osteoporosis, and cerebral infarction. Acute deficiency (see hypomagnesemia) is rare, and is more common as a drug side-effect (such as chronic alcohol or diuretic use) than from low food intake per se, but it can occur in people fed intravenously for extended periods of time.
The most common symptom of excess oral magnesium intake is diarrhea. Supplements based on amino acid chelates (such as glycinate, lysinate etc.) are much better-tolerated by the digestive system and do not have the side-effects of the older compounds used, while sustained-release dietary supplements prevent the occurrence of diarrhea. Since the kidneys of adult humans excrete excess magnesium efficiently, oral magnesium poisoning in adults with normal renal function is very rare. Infants, which have less ability to excrete excess magnesium even when healthy, should not be given magnesium supplements, except under a physician's care.
Pharmaceutical preparations with magnesium are used to treat conditions including magnesium deficiency and hypomagnesemia, as well as eclampsia. Such preparations are usually in the form of magnesium sulfate or chloride when given parenterally. Magnesium is absorbed with reasonable efficiency (30% to 40%) by the body from any soluble magnesium salt, such as the chloride or citrate. Magnesium is similarly absorbed from Epsom salts, although the sulfate in these salts adds to their laxative effect at higher doses. Magnesium absorption from the insoluble oxide and hydroxide salts (milk of magnesia) is erratic and of poorer efficiency, since it depends on the neutralization and solution of the salt by the acid of the stomach, which may not be (and usually is not) complete.
Magnesium orotate may be used as adjuvant therapy in patients on optimal treatment for severe congestive heart failure, increasing survival rate and improving clinical symptoms and patient's quality of life.
In 2022, magnesium salts were the 207th most commonly prescribed medication in the United States, with more than 1 million prescriptions.
Nerve conduction
Magnesium can affect muscle relaxation through direct action on cell membranes. Mg2+ ions close certain types of calcium channels, which conduct positively charged calcium ions into neurons. With an excess of magnesium, more channels will be blocked and nerve cells activity will decrease.
Hypertension
Intravenous magnesium sulphate is used in treating pre-eclampsia. For other than pregnancy-related hypertension, a meta-analysis of 22 clinical trials with dose ranges of 120 to 973 mg/day and a mean dose of 410 mg, concluded that magnesium supplementation had a small but statistically significant effect, lowering systolic blood pressure by 3–4 mm Hg and diastolic blood pressure by 2–3 mm Hg. The effect was larger when the dose was more than 370 mg/day.
Diabetes and glucose tolerance
Higher dietary intakes of magnesium correspond to lower diabetes incidence. For people with diabetes or at high risk of diabetes, magnesium supplementation lowers fasting glucose.
Mitochondria
Magnesium is essential as part of the process that generates adenosine triphosphate.
Mitochondria are often referred to as the "powerhouses of the cell" because their primary role is generating energy for cellular processes. They achieve this by breaking down nutrients, primarily glucose, through a series of chemical reactions known as cellular respiration. This process ultimately produces adenosine triphosphate (ATP), the cell's main energy currency.
Vitamin D
Magnesium and vitamin D have a synergistic relationship in the body, meaning they work together to optimize each other's functions:
Magnesium activates vitamin D
Vitamin D influences magnesium absorption.
Bone health: They play crucial roles in calcium absorption and bone metabolism.
Muscle function: They contribute to muscle contraction and relaxation, impacting physical performance and overall well-being.
Immune function: They support a healthy immune system and may help reduce inflammation.
Overall, maintaining adequate levels of both magnesium and vitamin D is essential for optimal health and well-being.
Testosterone
It is theorized that the process of making testosterone from cholesterol needs magnesium to function properly.
Studies have shown that significant gains in testosterone occur after taking 10 mg magnesium/kg body weight/day.
Dietary recommendations
The U.S. Institute of Medicine (IOM) updated Estimated Average Requirements (EARs) and Recommended Dietary Allowances (RDAs) for magnesium in 1997. If there is not sufficient information to establish EARs and RDAs, an estimate designated Adequate Intake (AI) is used instead. The current EARs for magnesium for women and men ages 31 and up are 265 mg/day and 350 mg/day, respectively. The RDAs are 320 and 420 mg/day. RDAs are higher than EARs so as to identify amounts that will cover people with higher than average requirements. RDA for pregnancy is 350 to 400 mg/day depending on age of the woman. RDA for lactation ranges 310 to 360 mg/day for same reason. For children ages 1–13 years, the RDA increases with age from 65 to 200 mg/day. As for safety, the IOM also sets Tolerable upper intake levels (ULs) for vitamins and minerals when evidence is sufficient. In the case of magnesium the UL is set at 350 mg/day. The UL is specific to magnesium consumed as a dietary supplement, the reason being that too much magnesium consumed at one time can cause diarrhea. The UL does not apply to food-sourced magnesium. Collectively the EARs, RDAs and ULs are referred to as Dietary Reference Intakes.
The European Food Safety Authority (EFSA) refers to the collective set of information as Dietary Reference Values, with Population Reference Intake (PRI) instead of RDA, and Average Requirement instead of EAR. AI and UL are defined the same as in the United States. For women and men ages 18 and older, the AIs are set at 300 and 350 mg/day, respectively. AIs for pregnancy and lactation are also 300 mg/day. For children ages 1–17 years, the AIs increase with age from 170 to 250 mg/day. These AIs are lower than the U.S. RDAs. The European Food Safety Authority reviewed the same safety question and set its UL at 250 mg/day lower than the U.S. value. The magnesium UL is unique in that it is lower than some of the RDAs. It applies to intake from a pharmacological agent or dietary supplement only and does not include intake from food and water.
Labeling
For U.S. food and dietary supplement labeling purposes, the amount in a serving is expressed as a percent of daily value (%DV). For magnesium labeling purposes, 100% of the daily value was 400 mg, but as of May 27, 2016, it was revised to 420 mg to bring it into agreement with the RDA. A table of the old and new adult Daily Values is provided at Reference Daily Intake.
Food sources
Green vegetables such as spinach provide magnesium because of the abundance of chlorophyll molecules, which contain the ion. Nuts (especially Brazil nuts, cashews and almonds), seeds (e.g., pumpkin seeds), dark chocolate, roasted soybeans, bran, and some whole grains are also good sources of magnesium.
Although many foods contain magnesium, it is usually found in low levels. As with most nutrients, daily needs for magnesium are unlikely to be met by one serving of any single food. Eating a wide variety of fruits, vegetables, and grains will help ensure adequate intake of magnesium.
Because magnesium readily dissolves in water, refined foods, which are often processed or cooked in water and dried, in general, are poor sources of the nutrient. For example, whole-wheat bread has twice as much magnesium as white bread because the magnesium-rich germ and bran are removed when white flour is processed. The table of food sources of magnesium suggests many dietary sources of magnesium.
"Hard" water can also provide magnesium, but "soft" water contains less of the ion. Dietary surveys do not assess magnesium intake from water, which may lead to underestimating total magnesium intake and its variability.
Too much magnesium may make it difficult for the body to absorb calcium. Not enough magnesium can lead to hypomagnesemia as described above, with irregular heartbeats, high blood pressure (a sign in humans but not some experimental animals such as rodents), insomnia, and muscle spasms (fasciculation). However, as noted, symptoms of low magnesium from pure dietary deficiency are thought to be rarely encountered.
Following are some foods and the amount of magnesium in them:
Pumpkin seeds, no hulls ( cup) = 303 mg
Chia seeds, ( cup) = 162 mg
Buckwheat flour ( cup) = 151 mg
Brazil nuts ( cup) = 125 mg
Oat bran, raw ( cup) = 110 mg
Cocoa powder ( cup) = 107 mg
Halibut (3 oz) = 103 mg
Almonds ( cup) = 99 mg
Cashews ( cup) = 89 mg
Whole wheat flour ( cup) = 83 mg
Spinach, boiled ( cup) = 79 mg
Swiss chard, boiled ( cup) = 75 mg
Chocolate, 70% cocoa (1 oz) = 73 mg
Tofu, firm ( cup) = 73 mg
Black beans, boiled ( cup) = 60 mg
Quinoa, cooked ( cup) = 59 mg
Peanut butter (2 tablespoons) = 50 mg
Walnuts ( cup) = 46 mg
Sunflower seeds, hulled ( cup) = 41 mg
Chickpeas, boiled ( cup) = 39 mg
Kale, boiled ( cup) = 37 mg
Lentils, boiled ( cup) = 36 mg
Oatmeal, cooked ( cup) = 32 mg
Fish sauce (1 Tbsp) = 32 mg
Milk, non fat (1 cup) = 27 mg
Coffee, espresso (1 oz) = 24 mg
Whole wheat bread (1 slice) = 23 mg
Biological range, distribution, and regulation
In animals, it has been shown that different cell types maintain different concentrations of magnesium. It seems likely that the same is true for plants. This suggests that different cell types may regulate influx and efflux of magnesium in different ways based on their unique metabolic needs. Interstitial and systemic concentrations of free magnesium must be delicately maintained by the combined processes of buffering (binding of ions to proteins and other molecules) and muffling (the transport of ions to storage or extracellular spaces).
In plants, and more recently in animals, magnesium has been recognized as an important signaling ion, both activating and mediating many biochemical reactions. The best example of this is perhaps the regulation of carbon fixation in chloroplasts in the Calvin cycle.
Magnesium is very important in cellular function. Deficiency of the nutrient causes disease of the affected organism. In single-cell organisms such as bacteria and yeast, low levels of magnesium manifests in greatly reduced growth rates. In magnesium transport knockout strains of bacteria, healthy rates are maintained only with exposure to very high external concentrations of the ion. In yeast, mitochondrial magnesium deficiency also leads to disease.
Plants deficient in magnesium show stress responses. The first observable signs of both magnesium starvation and overexposure in plants is a decrease in the rate of photosynthesis. This is due to the central position of the Mg2+ ion in the chlorophyll molecule. The later effects of magnesium deficiency on plants are a significant reduction in growth and reproductive viability. Magnesium can also be toxic to plants, although this is typically seen only in drought conditions.
In animals, magnesium deficiency (hypomagnesemia) is seen when the environmental availability of magnesium is low. In ruminant animals, particularly vulnerable to magnesium availability in pasture grasses, the condition is known as 'grass tetany'. Hypomagnesemia is identified by a loss of balance due to muscle weakness. A number of genetically attributable hypomagnesemia disorders have also been identified in humans.
Overexposure to magnesium may be toxic to individual cells, though these effects have been difficult to show experimentally. Hypermagnesemia, an overabundance of magnesium in the blood, is usually caused by loss of kidney function. Healthy animals rapidly excrete excess magnesium in the urine and stool. Urinary magnesium is called magnesuria. Characteristic concentrations of magnesium in model organisms are: in E. coli 30-100mM (bound), 0.01-1mM (free), in budding yeast 50mM, in mammalian cell 10mM (bound), 0.5mM (free) and in blood plasma 1mM.
Biological chemistry
Mg2+ is the fourth-most-abundant metal ion in cells (per moles) and the most abundant free divalent cation — as a result, it is deeply and intrinsically woven into cellular metabolism. Indeed, Mg2+-dependent enzymes appear in virtually every metabolic pathway: Specific binding of Mg2+ to biological membranes is frequently observed, Mg2+ is also used as a signalling molecule, and much of nucleic acid biochemistry requires Mg2+, including all reactions that require release of energy from ATP. In nucleotides, the triple-phosphate moiety of the compound is invariably stabilized by association with Mg2+ in all enzymatic processes.
Chlorophyll
In photosynthetic organisms, Mg2+ has the additional vital role of being the coordinating ion in the chlorophyll molecule. This role was discovered by Richard Willstätter, who received the 1915 Nobel Prize in Chemistry for his work on the purification and structure of chlorophyll.
Enzymes
The chemistry of the Mg2+ ion, as applied to enzymes, uses the full range of this ion's unusual reaction chemistry to fulfill a range of functions. Mg2+ interacts with substrates, enzymes, and occasionally both (Mg2+ may form part of the active site). In general, Mg2+ interacts with substrates through inner sphere coordination, stabilising anions or reactive intermediates, also including binding to ATP and activating the molecule to nucleophilic attack. When interacting with enzymes and other proteins, Mg2+ may bind using inner or outer sphere coordination, to either alter the conformation of the enzyme or take part in the chemistry of the catalytic reaction. In either case, because Mg2+ is only rarely fully dehydrated during ligand binding, it may be a water molecule associated with the Mg2+ that is important rather than the ion itself. The Lewis acidity of Mg2+ (pKa 11.4) is used to allow both hydrolysis and condensation reactions (most common ones being phosphate ester hydrolysis and phosphoryl transfer) that would otherwise require pH values greatly removed from physiological values.
Essential role in the biological activity of ATP
ATP (adenosine triphosphate), the main source of energy in cells, must be bound to a magnesium ion in order to be biologically active. What is called ATP is often actually Mg-ATP.
Nucleic acids
Nucleic acids have an important range of interactions with Mg2+. The binding of Mg2+ to DNA and RNA stabilises structure; this can be observed in the increased melting temperature (Tm) of double-stranded DNA in the presence of Mg2+. In addition, ribosomes contain large amounts of Mg2+ and the stabilisation provided is essential to the complexation of this ribo-protein. A large number of enzymes involved in the biochemistry of nucleic acids bind Mg2+ for activity, using the ion for both activation and catalysis. Finally, the autocatalysis of many ribozymes (enzymes containing only RNA) is Mg2+ dependent (e.g. the yeast mitochondrial group II self splicing introns).
Magnesium ions can be critical in maintaining the positional integrity of closely clustered phosphate groups. These clusters appear in numerous and distinct parts of the cell nucleus and cytoplasm. For instance, hexahydrated Mg2+ ions bind in the deep major groove and at the outer mouth of A-form nucleic acid duplexes.
Cell membranes and walls
Biological cell membranes and cell walls are polyanionic surfaces. This has important implications for the transport of ions, in particular because it has been shown that different membranes preferentially bind different ions. Both Mg2+ and Ca2+ regularly stabilize membranes by the cross-linking of carboxylated and phosphorylated head groups of lipids. However, the envelope membrane of E. coli has also been shown to bind Na+, K+, Mn2+ and Fe3+. The transport of ions is dependent on both the concentration gradient of the ion and the electric potential (ΔΨ) across the membrane, which will be affected by the charge on the membrane surface. For example, the specific binding of Mg2+ to the chloroplast envelope has been implicated in a loss of photosynthetic efficiency by the blockage of K+ uptake and the subsequent acidification of the chloroplast stroma.
Proteins
The Mg2+ ion tends to bind only weakly to proteins (Ka ≤ 105) and this can be exploited by the cell to switch enzymatic activity on and off by changes in the local concentration of Mg2+. Although the concentration of free cytoplasmic Mg2+ is on the order of 1 mmol/L, the total Mg2+ content of animal cells is 30 mmol/L and in plants the content of leaf endodermal cells has been measured at values as high as 100 mmol/L (Stelzer et al., 1990), much of which buffered in storage compartments. The cytoplasmic concentration of free Mg2+ is buffered by binding to chelators (e.g., ATP), but also, what is more important, it is buffered by storage of Mg2+ in intracellular compartments. The transport of Mg2+ between intracellular compartments may be a major part of regulating enzyme activity. The interaction of Mg2+ with proteins must also be considered for the transport of the ion across biological membranes.
Manganese
In biological systems, only manganese (Mn2+) is readily capable of replacing Mg2+, but only in a limited set of circumstances. Mn2+ is very similar to Mg2+ in terms of its chemical properties, including inner and outer shell complexation. Mn2+ effectively binds ATP and allows hydrolysis of the energy molecule by most ATPases. Mn2+ can also replace Mg2+ as the activating ion for a number of Mg2+-dependent enzymes, although some enzyme activity is usually lost. Sometimes such enzyme metal preferences vary among closely related species: For example, the reverse transcriptase enzyme of lentiviruses like HIV, SIV and FIV is typically dependent on Mg2+, whereas the analogous enzyme for other retroviruses prefers Mn2+.
Measuring magnesium in biological samples
By radioactive isotopes
The use of radioactive tracer elements in ion uptake assays allows the calculation of km, Ki and Vmax and determines the initial change in the ion content of the cells. 28Mg decays by the emission of a high-energy beta or gamma particle, which can be measured using a scintillation counter. However, the radioactive half-life of 28Mg, the most stable of the radioactive magnesium isotopes, is only 21 hours. This severely restricts the experiments involving the nuclide. Also, since 1990, no facility has routinely produced 28Mg, and the price per mCi is now predicted to be approximately US$30,000. The chemical nature of Mg2+ is such that it is closely approximated by few other cations. However, Co2+, Mn2+ and Ni2+ have been used successfully to mimic the properties of Mg2+ in some enzyme reactions, and radioactive forms of these elements have been employed successfully in cation transport studies. The difficulty of using metal ion replacement in the study of enzyme function is that the relationship between the enzyme activities with the replacement ion compared to the original is very difficult to ascertain.
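To see why the 21-hour half-life is so restrictive, the sketch below computes how much of a 28Mg tracer survives over the course of a multi-day uptake experiment (simple exponential decay; the time points are illustrative).

```python
HALF_LIFE_H = 21.0   # approximate half-life of 28Mg, in hours

def fraction_remaining(hours, half_life=HALF_LIFE_H):
    """Fraction of the original 28Mg activity left after the given number of hours."""
    return 0.5 ** (hours / half_life)

for h in (21, 48, 72):
    print(f"after {h:3d} h: {fraction_remaining(h) * 100:.1f}% of the tracer remains")
# after 21 h: 50.0%, after 48 h: ~20.5%, after 72 h: ~9.3%
```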
By fluorescent indicators
A number of chelators of divalent cations have different fluorescence spectra in the bound and unbound states. Chelators for Ca2+ are well established, have high affinity for the cation, and low interference from other ions. Mg2+ chelators lag behind and the major fluorescence dye for Mg2+ (mag-fura 2) actually has a higher affinity for Ca2+. This limits the application of this dye to cell types where the resting level of Ca2+ is < 1 μM and does not vary with the experimental conditions under which Mg2+ is to be measured. Recently, Otten et al. (2001) have described work into a new class of compounds that may prove more useful, having significantly better binding affinities for Mg2+. The use of the fluorescent dyes is limited to measuring the free Mg2+. If the ion concentration is buffered by the cell by chelation or removal to subcellular compartments, the measured rate of uptake will give only minimum values of km and Vmax.
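Free (rather than total) ion concentration is back-calculated from the indicator signal using the dye's dissociation constant; a minimal single-wavelength version of that calibration is sketched below. The Kd and fluorescence values are placeholders for illustration, not properties of mag-fura 2 or data from this article; ratiometric dyes use the analogous ratio form of the same equation.

```python
def free_ion_from_fluorescence(F, F_min, F_max, Kd_mM):
    """Estimate free ion concentration from a single-wavelength fluorescent indicator.

    Uses the standard calibration form [ion] = Kd * (F - Fmin) / (Fmax - F),
    where Fmin and Fmax are the signals of the fully unbound and fully bound dye.
    """
    return Kd_mM * (F - F_min) / (F_max - F)

# Placeholder calibration numbers, for illustration only.
free_mg = free_ion_from_fluorescence(F=420.0, F_min=100.0, F_max=900.0, Kd_mM=1.5)
print(f"estimated free Mg2+ ≈ {free_mg:.2f} mM")   # ≈ 1.00 mM
```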
By electrophysiology
First, ion-specific microelectrodes can be used to measure the internal free ion concentration of cells and organelles. The major advantages are that readings can be made from cells over relatively long periods of time, and that unlike dyes very little extra ion buffering capacity is added to the cells.
Second, the technique of two-electrode voltage-clamp allows the direct measurement of the ion flux across the membrane of a cell. The membrane is held at an electric potential and the responding current is measured. All ions passing across the membrane contribute to the measured current.
Third, the technique of patch-clamp uses isolated sections of natural or artificial membrane in much the same manner as voltage-clamp but without the secondary effects of a cellular system. Under ideal conditions the conductance of individual channels can be quantified. This methodology gives the most direct measurement of the action of ion channels.
By absorption spectroscopy
Flame atomic absorption spectroscopy (AAS) determines the total magnesium content of a biological sample. This method is destructive; biological samples must be broken down in concentrated acids to avoid clogging the fine nebulising apparatus. Beyond this, the only limitation is that samples must be in a volume of approximately 2 mL and at a concentration range of 0.1 – 0.4 μmol/L for optimum accuracy. As this technique cannot distinguish between Mg2+ already present in the cell and that taken up during the experiment, only content not uptaken can be quantified.
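Because the optimal working window quoted above is only 0.1–0.4 μmol/L, digested samples normally have to be diluted into that range before measurement; a small helper for picking a dilution factor is sketched below (the sample concentration is an assumed example).

```python
def dilution_factor(sample_umol_per_L, target_low=0.1, target_high=0.4):
    """Dilution factor that brings a sample to the middle of the AAS working range."""
    target_mid = (target_low + target_high) / 2.0
    return max(1.0, sample_umol_per_L / target_mid)

# An acid digest at, say, 50 umol/L total Mg would need roughly a 200-fold dilution.
print(f"{dilution_factor(50.0):.0f}-fold dilution")
```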
Inductively coupled plasma (ICP) using either the mass spectrometry (MS) or atomic emission spectroscopy (AES) modifications also allows the determination of the total ion content of biological samples.
Magnesium transport
The chemical and biochemical properties of Mg2+ present the cellular system with a significant challenge when transporting the ion across biological membranes. The dogma of ion transport states that the transporter recognises the ion then progressively removes the water of hydration, removing most or all of the water at a selective pore before releasing the ion on the far side of the membrane. Due to the properties of Mg2+, large volume change from hydrated to bare ion, high energy of hydration and very low rate of ligand exchange in the inner coordination sphere, these steps are probably more difficult than for most other ions. To date, only the ZntA protein of Paramecium has been shown to be a Mg2+ channel. The mechanisms of Mg2+ transport by the remaining proteins are beginning to be uncovered with the first three-dimensional structure of a Mg2+ transport complex being solved in 2004.
The hydration shell of the Mg2+ ion has a very tightly bound inner shell of six water molecules and a relatively tightly bound second shell containing 12–14 water molecules (Markham et al., 2002). Thus, it is presumed that recognition of the Mg2+ ion requires some mechanism to interact initially with the hydration shell of Mg2+, followed by a direct recognition/binding of the ion to the protein.
In spite of the mechanistic difficulty, Mg2+ must be transported across membranes, and a large number of Mg2+ fluxes across membranes from a variety of systems have been described. However, only a small selection of Mg2+ transporters have been characterised at the molecular level.
Ligand ion channel blockade
Magnesium ions (Mg2+) in cellular biology are in almost all senses opposite to Ca2+ ions: although they are also divalent, they have greater electronegativity and thus exert a greater pull on water molecules, preventing passage through the channel (even though the magnesium ion itself is smaller). Thus, Mg2+ ions block Ca2+ channels such as NMDA channels and have been shown to affect gap junction channels forming electrical synapses.
Plant physiology of magnesium
The previous sections have dealt in detail with the chemical and biochemical aspects of Mg2+ and its transport across cellular membranes. This section will apply this knowledge to aspects of whole plant physiology, in an attempt to show how these processes interact with the larger and more complex environment of the multicellular organism.
Nutritional requirements and interactions
Mg2+ is essential for plant growth and is present in higher plants in amounts on the order of 80 μmol g−1 dry weight. The amounts of Mg2+ vary in different parts of the plant and are dependent upon nutritional status. In times of plenty, excess Mg2+ may be stored in vascular cells (Stelzer et al., 1990), and in times of starvation Mg2+ is redistributed, in many plants, from older to newer leaves.
Mg2+ is taken up into plants via the roots. Interactions with other cations in the rhizosphere can have a significant effect on the uptake of the ion (Kurvits and Kirkby, 1980). The structure of root cell walls is highly permeable to water and ions, and hence ion uptake into root cells can occur anywhere from the root hairs to cells located almost in the centre of the root (limited only by the Casparian strip). Plant cell walls and membranes carry a great number of negative charges, and the interactions of cations with these charges is key to the uptake of cations by root cells, allowing a local concentrating effect. Mg2+ binds relatively weakly to these charges, and can be displaced by other cations, impeding uptake and causing deficiency in the plant.
Within individual plant cells, the Mg2+ requirements are largely the same as for all cellular life; Mg2+ is used to stabilise membranes, is vital to the utilisation of ATP, is extensively involved in the nucleic acid biochemistry, and is a cofactor for many enzymes (including the ribosome). Also, Mg2+ is the coordinating ion in the chlorophyll molecule. It is the intracellular compartmentalisation of Mg2+ in plant cells that leads to additional complexity. Four compartments within the plant cell have reported interactions with Mg2+. Initially, Mg2+ will enter the cell into the cytoplasm (by an as yet unidentified system), but free Mg2+ concentrations in this compartment are tightly regulated at relatively low levels (≈2 mmol/L) and so any excess Mg2+ is either quickly exported or stored in the second intracellular compartment, the vacuole. The requirement for Mg2+ in mitochondria has been demonstrated in yeast and it seems highly likely that the same will apply in plants. The chloroplasts also require significant amounts of internal Mg2+, and low concentrations of cytoplasmic Mg2+. In addition, it seems likely that the other subcellular organelles (e.g., Golgi, endoplasmic reticulum, etc.) also require Mg2+.
Distributing magnesium ions within the plant
Once in the cytoplasmic space of root cells Mg2+, along with the other cations, is probably transported radially into the stele and the vascular tissue. From the cells surrounding the xylem the ions are released or pumped into the xylem and carried up through the plant. In the case of Mg2+, which is highly mobile in both the xylem and phloem, the ions will be transported to the top of the plant and back down again in a continuous cycle of replenishment. Hence, uptake and release from vascular cells is probably a key part of whole plant Mg2+ homeostasis. Figure 1 shows how few processes have been connected to their molecular mechanisms (only vacuolar uptake has been associated with a transport protein, AtMHX).
The diagram shows a schematic of a plant and the putative processes of Mg2+ transport at the root and leaf where Mg2+ is loaded and unloaded from the vascular tissues. Mg2+ is taken up into the root cell wall space (1) and interacts with the negative charges associated with the cell walls and membranes. Mg2+ may be taken up into cells immediately (symplastic pathway) or may travel as far as the Casparian band (4) before being absorbed into cells (apoplastic pathway; 2). The concentration of Mg2+ in the root cells is probably buffered by storage in root cell vacuoles (3). Note that cells in the root tip do not contain vacuoles. Once in the root cell cytoplasm, Mg2+ travels toward the centre of the root by plasmodesmata, where it is loaded into the xylem (5) for transport to the upper parts of the plant. When the Mg2+ reaches the leaves it is unloaded from the xylem into cells (6) and again is buffered in vacuoles (7). Whether cycling of Mg2+ into the phloem occurs via general cells in the leaf (8) or directly from xylem to phloem via transfer cells (9) is unknown. Mg2+ may return to the roots in the phloem sap.
When a Mg2+ ion has been absorbed by a cell requiring it for metabolic processes, it is generally assumed that the ion stays in that cell for as long as the cell is active. In vascular cells, this is not always the case; in times of plenty, Mg2+ is stored in the vacuole, takes no part in the day-to-day metabolic processes of the cell (Stelzer et al., 1990), and is released at need. But for most cells it is death by senescence or injury that releases Mg2+ and many of the other ionic constituents, recycling them into healthy parts of the plant. In addition, when Mg2+ in the environment is limiting, some species are able to mobilise Mg2+ from older tissues. These processes involve the release of Mg2+ from its bound and stored states and its transport back into the vascular tissue, where it can be distributed to the rest of the plant. In times of growth and development, Mg2+ is also remobilised within the plant as source and sink relationships change.
The homeostasis of Mg2+ within single plant cells is maintained by processes occurring at the plasma membrane and at the vacuole membrane (see Figure 2). The major driving force for the translocation of ions in plant cells is ΔpH. H+-ATPases pump H+ ions against their concentration gradient to maintain the pH differential that can be used for the transport of other ions and molecules. H+ ions are pumped out of the cytoplasm into the extracellular space or into the vacuole. The entry of Mg2+ into cells may occur through one of two pathways, via channels using the ΔΨ (negative inside) across this membrane or by symport with H+ ions. To transport the Mg2+ ion into the vacuole requires a Mg2+/H+ antiport transporter (such as AtMHX). The H+-ATPases are dependent on Mg2+ (bound to ATP) for activity, so that Mg2+ is required to maintain its own homeostasis.
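For a rough sense of the energetics behind this ΔpH-driven transport, the sketch below evaluates the electrochemical free-energy change Δμ = RT·ln(c2/c1) + zFΔΨ for moving an ion between compartments; the pH, concentration, and voltage figures are illustrative assumptions, not values reported in this article.

```python
import math

R = 8.314      # gas constant, J/(mol*K)
F = 96485.0    # Faraday constant, C/mol
T = 298.0      # temperature, K

def delta_mu_kJ(z, c_from_M, c_to_M, delta_psi_V):
    """Free energy (kJ/mol) to move an ion from c_from to c_to when the destination
    compartment sits at delta_psi volts relative to the origin."""
    return (R * T * math.log(c_to_M / c_from_M) + z * F * delta_psi_V) / 1000.0

# Illustrative: moving Mg2+ (z = +2) from a 2 mM cytoplasm into an 80 mM vacuole
# that is ~+30 mV relative to the cytoplasm costs energy ...
cost = delta_mu_kJ(z=2, c_from_M=0.002, c_to_M=0.080, delta_psi_V=0.030)
# ... which the antiporter recovers from H+ flowing down a ~2-unit pH gradient.
gain = delta_mu_kJ(z=1, c_from_M=10**-5.5, c_to_M=10**-7.5, delta_psi_V=-0.030)
print(f"cost per Mg2+: {cost:+.1f} kJ/mol, released per H+: {gain:+.1f} kJ/mol")
```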
A schematic of a plant cell is shown including the four major compartments currently recognised as interacting with Mg2+. H+-ATPases maintain a constant ΔpH across the plasma membrane and the vacuole membrane. Mg2+ is transported into the vacuole using the energy of ΔpH (in A. thaliana by AtMHX). Transport of Mg2+ into cells may use either the negative ΔΨ or the ΔpH. The transport of Mg2+ into mitochondria probably uses ΔΨ as in the mitochondria of yeast, and it is likely that chloroplasts take Mg2+ by a similar system. The mechanism and the molecular basis for the release of Mg2+ from vacuoles and from the cell is not known. Likewise, the light-regulated Mg2+ concentration changes in chloroplasts are not fully understood, but do require the transport of H+ ions across the thylakoid membrane.
Magnesium, chloroplasts and photosynthesis
Mg2+ is the coordinating metal ion in the chlorophyll molecule, and in plants where the ion is in high supply about 6% of the total Mg2+ is bound to chlorophyll. Thylakoid stacking is stabilised by Mg2+ and is important for the efficiency of photosynthesis, allowing phase transitions to occur.
Mg2+ is probably taken up into chloroplasts to the greatest extent during the light-induced development from proplastid to chloroplast or etioplast to chloroplast. At these times, the synthesis of chlorophyll and the biogenesis of the thylakoid membrane stacks absolutely require the divalent cation.
Whether Mg2+ is able to move into and out of chloroplasts after this initial developmental phase has been the subject of several conflicting reports. Deshaies et al. (1984) found that Mg2+ did move in and out of isolated chloroplasts from young pea plants, but Gupta and Berkowitz (1989) were unable to reproduce the result using older spinach chloroplasts. Deshaies et al. had stated in their paper that older pea chloroplasts showed less significant changes in Mg2+ content than those used to form their conclusions. The relative proportion of immature chloroplasts present in the preparations may explain these observations.
The metabolic state of the chloroplast changes considerably between night and day. During the day, the chloroplast is actively harvesting the energy of light and converting it into chemical energy. The activation of the metabolic pathways involved comes from the changes in the chemical nature of the stroma on the addition of light. H+ is pumped out of the stroma (into both the cytoplasm and the lumen) leading to an alkaline pH. Mg2+ (along with K+) is released from the lumen into the stroma, in an electroneutralisation process to balance the flow of H+. Finally, thiol groups on enzymes are reduced by a change in the redox state of the stroma. Examples of enzymes activated in response to these changes are fructose 1,6-bisphosphatase, sedoheptulose bisphosphatase and ribulose-1,5-bisphosphate carboxylase. During the dark period, if these enzymes were active a wasteful cycling of products and substrates would occur.
Two major classes of the enzymes that interact with Mg2+ in the stroma during the light phase can be identified. Firstly, enzymes in the glycolytic pathway most often interact with two atoms of Mg2+. The first atom is as an allosteric modulator of the enzymes' activity, while the second forms part of the active site and is directly involved in the catalytic reaction. The second class of enzymes includes those where the Mg2+ is complexed to nucleotide di- and tri-phosphates (ADP and ATP), and the chemical change involves phosphoryl transfer. Mg2+ may also serve in a structural maintenance role in these enzymes (e.g., enolase).
Magnesium stress
Plant stress responses can be observed in plants that are under- or over-supplied with Mg2+. The first observable signs of Mg2+ stress in plants for both starvation and toxicity is a depression of the rate of photosynthesis, it is presumed because of the strong relationships between Mg2+ and chloroplasts/chlorophyll. In pine trees, even before the visible appearance of yellowing and necrotic spots, the photosynthetic efficiency of the needles drops markedly. In Mg2+ deficiency, reported secondary effects include carbohydrate immobility, loss of RNA transcription and loss of protein synthesis. However, due to the mobility of Mg2+ within the plant, the deficiency phenotype may be present only in the older parts of the plant. For example, in Pinus radiata starved of Mg2+, one of the earliest identifying signs is the chlorosis in the needles on the lower branches of the tree. This is because Mg2+ has been recovered from these tissues and moved to growing (green) needles higher in the tree.
A Mg2+ deficit can be caused by the lack of the ion in the media (soil), but more commonly comes from inhibition of its uptake. Mg2+ binds quite weakly to the negatively charged groups in the root cell walls, so that excesses of other cations such as K+, NH4+, Ca2+, and Mn2+ can all impede uptake (Kurvits and Kirkby, 1980). In acid soils Al3+ is a particularly strong inhibitor of Mg2+ uptake. The inhibition by Al3+ and Mn2+ is more severe than can be explained by simple displacement, hence it is possible that these ions bind to the Mg2+ uptake system directly. In bacteria and yeast, such binding by Mn2+ has already been observed. Stress responses in the plant develop as cellular processes halt due to a lack of Mg2+ (e.g. maintenance of ΔpH across the plasma and vacuole membranes). In Mg2+-starved plants under low light conditions, the percentage of Mg2+ bound to chlorophyll has been recorded at 50%. Presumably, this imbalance has detrimental effects on other cellular processes.
Mg2+ toxicity stress is more difficult to develop. When Mg2+ is plentiful, in general the plants take up the ion and store it (Stelzer et al., 1990). However, if this is followed by drought then ionic concentrations within the cell can increase dramatically. High cytoplasmic Mg2+ concentrations block a K+ channel in the inner envelope membrane of the chloroplast, in turn inhibiting the removal of H+ ions from the chloroplast stroma. This leads to an acidification of the stroma that inactivates key enzymes in carbon fixation, which all leads to the production of oxygen free radicals in the chloroplast that then cause oxidative damage.
See also
Biology and pharmacology of chemical elements
Magnesium deficiency (agriculture)
Notes
References
External links
Magnesium Deficiency
List of foods rich in Magnesium
The Magnesium Website – includes full text papers and textbook chapters by leading magnesium authorities Mildred Seelig, Jean Durlach, Burton M. Altura and Bella T. Altura. Links to over 300 articles discussing magnesium and magnesium deficiency.
Dietary Reference Intake
Physiology
Plant physiology
Magnesium
Biology and pharmacology of chemical elements
Biological systems | Magnesium in biology | [
"Chemistry",
"Biology"
] | 8,696 | [
"Plant physiology",
"Pharmacology",
"Properties of chemical elements",
"Plants",
"Physiology",
"Biology and pharmacology of chemical elements",
"nan",
"Biochemistry"
] |
379,241 | https://en.wikipedia.org/wiki/Schmitt%20trigger | In electronics, a Schmitt trigger is a comparator circuit with hysteresis implemented by applying positive feedback to the noninverting input of a comparator or differential amplifier. It is an active circuit which converts an analog input signal to a digital output signal. The circuit is named a trigger because the output retains its value until the input changes sufficiently to trigger a change. In the non-inverting configuration, when the input is higher than a chosen threshold, the output is high. When the input is below a different (lower) chosen threshold the output is low, and when the input is between the two levels the output retains its value. This dual threshold action is called hysteresis and implies that the Schmitt trigger possesses memory and can act as a bistable multivibrator (latch or flip-flop). There is a close relation between the two kinds of circuits: a Schmitt trigger can be converted into a latch and a latch can be converted into a Schmitt trigger.
Schmitt trigger devices are typically used in signal conditioning applications to remove noise from signals used in digital circuits, particularly mechanical contact bounce in switches. They are also used in closed loop negative feedback configurations to implement relaxation oscillators, used in function generators and switching power supplies.
In signal theory, a Schmitt trigger is essentially a one-bit quantizer.
History
The Schmitt trigger was invented by American scientist Otto H. Schmitt in 1934 while he was a graduate student, later described in his doctoral dissertation (1937) as a thermionic trigger. It was a direct result of Schmitt's study of the neural impulse propagation in squid nerves.
Implementation
Fundamental idea
Circuits with hysteresis are based on positive feedback. Any active circuit can be made to behave as a Schmitt trigger by applying positive feedback so that the loop gain is more than one. The positive feedback is introduced by adding a part of the output voltage to the input voltage. These circuits contain an attenuator (the B box in the figure on the right) and an adder (the circle with "+" inside) in addition to an amplifier acting as a comparator. There are three specific techniques for implementing this general idea. The first two of them are dual versions (series and parallel) of the general positive feedback system. In these configurations, the output voltage increases the effective difference input voltage of the comparator by "decreasing the threshold" or by "increasing the circuit input voltage"; the threshold and memory properties are incorporated in one element. In the third technique, the threshold and memory properties are separated.
Dynamic threshold (series feedback): when the input voltage crosses the threshold in either direction, the circuit itself changes its own threshold to the opposite direction. For this purpose, it subtracts a part of its output voltage from the threshold (it is equal to adding voltage to the input voltage). Thus the output affects the threshold and does not affect the input voltage. These circuits are implemented by a differential amplifier with "series positive feedback" where the input is connected to the inverting input and the inverted output to the non-inverting input. In this arrangement, attenuation and summation are separated: a voltage divider acts as an attenuator and the loop acts as a simple series voltage summer. Examples are the classic transistor emitter-coupled Schmitt trigger, the op-amp inverting Schmitt trigger, etc.
Modified input voltage (parallel feedback): when the input voltage crosses the threshold in either direction the circuit changes its input voltage in the same direction (now it adds a part of its output voltage directly to the input voltage). Thus the output augments the input voltage and does not affect the threshold. These circuits can be implemented by a single-ended non-inverting amplifier with "parallel positive feedback" where the input and the output sources are connected through resistors to the input. The two resistors form a weighted parallel summer incorporating both the attenuation and summation. Examples are the less familiar collector-base coupled Schmitt trigger, the op-amp non-inverting Schmitt trigger, etc.
Some circuits and elements exhibiting negative resistance can also act in a similar way: negative impedance converters (NIC), neon lamps, tunnel diodes (e.g., a diode with an N-shaped current–voltage characteristic in the first quadrant), etc. In the last case, an oscillating input will cause the diode to move from one rising leg of the "N" to the other and back again as the input crosses the rising and falling switching thresholds.
Two different unidirectional thresholds are assigned in this case to two separate open-loop comparators (without hysteresis) driving a bistable multivibrator (latch) or flip-flop. The trigger is toggled high when the input voltage crosses the high threshold going upward and low when the input voltage crosses the low threshold going downward. Again, there is a positive feedback, but now it is concentrated only in the memory cell. Examples are the 555 timer and the switch debouncing circuit.
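This two-threshold-plus-latch behaviour maps directly onto a small state machine; a software analogue, of the kind often used to debounce noisy sensor readings, is sketched below (thresholds and input values are arbitrary illustrations).

```python
class SchmittTrigger:
    """Two-threshold comparator with memory: the output changes only when the
    input crosses the high threshold going up or the low threshold going down."""

    def __init__(self, low, high, initial=False):
        assert low < high
        self.low, self.high = low, high
        self.state = initial

    def update(self, x):
        if x >= self.high:
            self.state = True
        elif x <= self.low:
            self.state = False
        # Between the thresholds the previous state is retained (hysteresis).
        return self.state

trigger = SchmittTrigger(low=1.0, high=2.0)
noisy_input = [0.5, 1.4, 2.1, 1.6, 1.2, 0.9, 1.5, 2.3]
print([trigger.update(v) for v in noisy_input])
# [False, False, True, True, True, False, False, True]
```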
The symbol for Schmitt triggers in circuit diagrams is a triangle with a symbol inside representing its ideal hysteresis curve.
Transistor Schmitt triggers
Classic emitter-coupled circuit
The original Schmitt trigger is based on the dynamic threshold idea that is implemented by a voltage divider with a switchable upper leg (the collector resistors RC1 and RC2) and a steady lower leg (RE). Q1 acts as a comparator with a differential input (Q1 base-emitter junction) consisting of an inverting (Q1 base) and a non-inverting (Q1 emitter) inputs. The input voltage is applied to the inverting input; the output voltage of the voltage divider is applied to the non-inverting input thus determining its threshold. The comparator output drives the second common collector stage Q2 (an emitter follower) through the voltage divider R1-R2. The emitter-coupled transistors Q1 and Q2 actually compose an electronic double throw switch that switches over the upper legs of the voltage divider and changes the threshold in a different (to the input voltage) direction.
This configuration can be considered as a differential amplifier with series positive feedback between its non-inverting input (Q2 base) and output (Q1 collector) that forces the transition process. There is also a smaller negative feedback introduced by the emitter resistor RE. To make the positive feedback dominate over the negative one and to obtain hysteresis, the proportion between the two collector resistors is chosen so that RC1 > RC2. Thus less current flows through RE, and there is a smaller voltage drop across it, when Q1 is switched on than when Q2 is switched on. As a result, the circuit has two different thresholds with respect to ground (V− in the image).
Operation
Initial state. For the NPN transistors shown on the right, imagine the input voltage is below the shared emitter voltage (high threshold for concreteness) so that the Q1 base-emitter junction is reverse-biased and Q1 does not conduct. The Q2 base voltage is determined by the divider described above so that Q2 is conducting and the trigger output is in the low state. The two resistors RC2 and RE form another voltage divider that determines the high threshold. Neglecting VBE, the high threshold value is approximately
VHT ≈ V+ · RE / (RE + RC2).
The output voltage is low but well above ground. It is approximately equal to the high threshold and may not be low enough to be a logical zero for subsequent digital circuits. This may require an additional level shifting circuit following the trigger circuit.
Crossing up the high threshold. When the input voltage (Q1 base voltage) rises slightly above the voltage across the emitter resistor RE (the high threshold), Q1 begins conducting. Its collector voltage goes down and Q2 starts toward cutoff, because the voltage divider now provides lower Q2 base voltage. The common emitter voltage follows this change and goes down, making Q1 conduct more. The current begins to steer from the right leg of the circuit to the left one. Although Q1 is conducting more, it passes less current through RE (since RC1 > RC2); the emitter voltage continues dropping and the effective Q1 base-emitter voltage continuously increases. This avalanche-like process continues until Q1 becomes completely turned on (saturated) and Q2 turned off. The trigger transitions to the high state and the output (Q2's collector) voltage is close to V+. Now the two resistors RC1 and RE form a voltage divider that determines the low threshold. Its value is approximately
VLT ≈ V+ · RE / (RE + RC1).
Crossing down the low threshold. With the trigger now in the high state, if the input voltage drops enough (below the low threshold), Q1 begins cutting off. Its collector current reduces; as a result, the shared emitter voltage drops slightly and Q1's collector voltage rises significantly. The R1-R2 voltage divider conveys this change to the Q2 base voltage and it begins conducting. The voltage across RE rises, further reducing the Q1 base-emitter potential in the same avalanche-like manner, and Q1 ceases to conduct. Q2 becomes completely turned on (saturated) and the output voltage becomes low again.
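With the same idealisations (VBE and base currents neglected), both thresholds are plain voltage-divider outputs; the sketch below evaluates them for an assumed set of component values, chosen only to illustrate the RC1 > RC2 condition.

```python
def divider(v_supply, r_top, r_bottom):
    """Output of a resistive divider from v_supply to ground, taken across r_bottom."""
    return v_supply * r_bottom / (r_top + r_bottom)

# Assumed component values, not taken from the article.
V_PLUS = 5.0                            # supply voltage, volts
RC1, RC2, RE = 2200.0, 1000.0, 470.0    # ohms, with RC1 > RC2 to obtain hysteresis

v_high = divider(V_PLUS, RC2, RE)   # threshold in effect while Q2 conducts
v_low = divider(V_PLUS, RC1, RE)    # threshold in effect while Q1 conducts
print(f"high threshold ≈ {v_high:.2f} V, low threshold ≈ {v_low:.2f} V")
# high ≈ 1.60 V, low ≈ 0.88 V, giving roughly 0.72 V of hysteresis
```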
Variations
Non-inverting circuit. The classic non-inverting Schmitt trigger can be turned into an inverting trigger by taking Vout from the emitters instead of from a Q2 collector. In this configuration, the output voltage is equal to the dynamic threshold (the shared emitter voltage) and both the output levels stay away from the supply rails. Another disadvantage is that the load changes the thresholds, so its impedance has to be high enough. The base resistor RB is obligatory to prevent the input voltage from affecting the emitter voltage through the Q1 base-emitter junction.
Direct-coupled circuit. To simplify the circuit, the R1–R2 voltage divider can be omitted connecting Q1 collector directly to Q2 base. The base resistor RB can be omitted as well so that the input voltage source drives directly Q1's base. In this case, the common emitter voltage and Q1 collector voltage are not suitable for outputs. Only Q2 collector should be used as an output since, when the input voltage exceeds the high threshold and Q1 saturates, its base-emitter junction is forward biased and transfers the input voltage variations directly to the emitters. As a result, the common emitter voltage and Q1 collector voltage follow the input voltage. This situation is typical for over-driven transistor differential amplifiers and ECL gates.
Collector-base coupled circuit
Like every latch, the fundamental collector-base coupled bistable circuit operates with hysteresis. It can be converted to a Schmitt trigger by connecting an additional base resistor R to one of the inputs (Q1's base in the figure). The two resistors R and R4 form a parallel voltage summer (the circle in the block diagram above) that sums the output (Q2's collector) voltage and the input voltage, and drives the single-ended transistor "comparator" Q1. When the base voltage crosses the threshold (VBE0 ≈ 0.65 V) in either direction, a part of Q2's collector voltage is added in the same direction to the input voltage. Thus the output modifies the input voltage by means of parallel positive feedback and does not affect the threshold (the base-emitter voltage).
Comparison between emitter- and collector-coupled circuit
The emitter-coupled version has the advantage that the input transistor is reverse biased when the input voltage is well below the high threshold, so the transistor is definitely cut off. This was important when germanium transistors were used for implementing the circuit, and this configuration has remained popular. The input base resistor can be omitted, since the emitter resistor limits the current when the input base-emitter junction is forward-biased.
An emitter-coupled Schmitt trigger logical zero output level may not be low enough and might need an additional output level shifting circuit. The collector-coupled Schmitt trigger has extremely low (almost zero) output at logical zero.
Op-amp implementations
Schmitt triggers are commonly implemented using an operational amplifier or a dedicated comparator. An open-loop op-amp or comparator may be considered as an analog-digital device having analog inputs and a digital output that extracts the sign of the voltage difference between its two inputs. The positive feedback is applied by adding a part of the output voltage to the input voltage in a series or parallel manner. Due to the extremely high op-amp gain, the loop gain is also high enough to provide the avalanche-like process.
Non-inverting Schmitt trigger
In this circuit, the two resistors R1 and R2 form a parallel voltage summer. It adds a part of the output voltage to the input voltage thus augmenting it during and after switching that occurs when the resulting voltage is near ground. This parallel positive feedback creates the needed hysteresis that is controlled by the proportion between the resistances of R1 and R2. The output of the parallel voltage summer is single-ended (it produces voltage with respect to ground) so the circuit does not need an amplifier with a differential input. Since conventional op-amps have a differential input, the inverting input is grounded to make the reference point zero volts.
The output voltage always has the same sign as the op-amp input voltage but it does not always have the same sign as the circuit input voltage (the signs of the two input voltages can differ). When the circuit input voltage is above the high threshold or below the low threshold, the output voltage has the same sign as the circuit input voltage (the circuit is non-inverting). It acts like a comparator that switches at a different point depending on whether the output of the comparator is high or low. When the circuit input voltage is between the thresholds, the output voltage is undefined and it depends on the last state (the circuit behaves as an elementary latch).
For instance, if the Schmitt trigger is currently in the high state, the output will be at the positive power supply rail (+VS). The output voltage V+ of the resistive summer can be found by applying the superposition theorem:
V+ = R2/(R1 + R2) · Vin + R1/(R1 + R2) · Vout
The comparator will switch when V+ = 0. Then Vin = −(R1/R2)·VS (the same result can be obtained by applying the current conservation principle). So the input voltage must drop below −(R1/R2)·VS to get the output to switch. Once the comparator output has switched to −VS, the threshold becomes +(R1/R2)·VS to switch back to high. So this circuit creates a switching band centered on zero, with trigger levels ±(R1/R2)·VS (it can be shifted to the left or the right by applying a bias voltage to the inverting input). The input voltage must rise above the top of the band, and then below the bottom of the band, for the output to switch on (plus) and then back off (minus). If R1 is zero or R2 is infinity (i.e., an open circuit), the band collapses to zero width, and it behaves as a standard comparator. The transfer characteristic is shown in the picture on the left. The value of the threshold T is given by T = (R1/R2)·VS and the maximum value of the output M is the power supply rail.
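As a numerical illustration of the result above, the following sketch evaluates the summer and the trigger band; the supply and resistor values are assumed purely for the example.

```python
# Non-inverting op-amp Schmitt trigger with parallel (summing) positive feedback:
# R1 from the input source and R2 from the output, both to the non-inverting pin.
# All values are illustrative assumptions.
VS = 12.0      # supply rail magnitude, volts
R1 = 10_000.0  # ohms
R2 = 47_000.0  # ohms

def summer(v_in, v_out):
    """Non-inverting input voltage found by superposition."""
    return (R2 * v_in + R1 * v_out) / (R1 + R2)

threshold = R1 / R2 * VS   # switching occurs where the summer output crosses zero
print(f"trigger levels: +/- {threshold:.2f} V")
# With the output at +VS, an input of -threshold brings the summer output to zero.
print(f"summer(-threshold, +VS) = {summer(-threshold, VS):.2e} V")
```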
A unique property of circuits with parallel positive feedback is the impact on the input source. In circuits with negative parallel feedback (e.g., an inverting amplifier), the virtual ground at the inverting input separates the input source from the op-amp output. Here there is no virtual ground, and the steady op-amp output voltage is applied through the R1–R2 network to the input source. The op-amp output passes an opposite current through the input source (it injects current into the source when the input voltage is positive and it draws current from the source when it is negative).
A practical Schmitt trigger with precise thresholds is shown in the figure on the right. The transfer characteristic has exactly the same shape as that of the previous basic configuration, and the threshold values are the same as well. On the other hand, in the previous case the output voltage depended on the power supply, while now it is defined by the Zener diodes (which could also be replaced with a single double-anode Zener diode). In this configuration, the output levels can be modified by appropriate choice of Zener diode, and these levels are resistant to power supply fluctuations (i.e., they increase the PSRR of the comparator). The resistor R3 is there to limit the current through the diodes, and the resistor R4 minimizes the input voltage offset caused by the comparator's input leakage currents (see limitations of real op-amps).
Inverting Schmitt trigger
In the inverting version, the attenuation and summation are separated. The two resistors R1 and R2 act only as a "pure" attenuator (voltage divider). The input loop acts as a series voltage summer that adds a part of the output voltage in series to the circuit input voltage. This series positive feedback creates the needed hysteresis that is controlled by the proportion between the resistance of R1 and the total resistance (R1 + R2). The effective voltage applied to the op-amp input is floating, so the op-amp must have a differential input.
The circuit is named inverting since the output voltage always has an opposite sign to the input voltage when it is out of the hysteresis cycle (when the input voltage is above the high threshold or below the low threshold). However, if the input voltage is within the hysteresis cycle (between the high and low thresholds), the circuit can be inverting as well as non-inverting. The output voltage is undefined and it depends on the last state so the circuit behaves like an elementary latch.
To compare the two versions, the circuit operation will be considered under the same conditions as above. If the Schmitt trigger is currently in the high state, the output will be at the positive power supply rail (+VS). The output voltage V+ of the voltage divider is:
V+ = VS · R1 / (R1 + R2)
The comparator will switch when Vin = V+. So the input voltage must exceed VS·R1/(R1 + R2) to get the output to switch. Once the comparator output has switched to −VS, the threshold becomes −VS·R1/(R1 + R2) to switch back to high. So this circuit creates a switching band centered on zero, with trigger levels ±VS·R1/(R1 + R2) (it can be shifted to the left or the right by connecting R1 to a bias voltage). The input voltage must rise above the top of the band, and then below the bottom of the band, for the output to switch off (minus) and then back on (plus). If R1 is zero (i.e., a short circuit) or R2 is infinity, the band collapses to zero width, and it behaves as a standard comparator.
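A corresponding sketch for the inverting configuration, again with assumed component values:

```python
# Inverting op-amp Schmitt trigger: an R1-R2 divider from the output sets the
# reference on the non-inverting input. Values are illustrative assumptions.
VS = 12.0      # supply rail magnitude, volts
R1 = 10_000.0  # divider resistor to ground, ohms
R2 = 47_000.0  # divider resistor to the output, ohms

beta = R1 / (R1 + R2)   # fraction of the output fed back
threshold = beta * VS   # trigger levels are +/- beta * VS
print(f"feedback fraction beta = {beta:.3f}")
print(f"trigger levels: +/- {threshold:.2f} V")
```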
In contrast with the parallel version, this circuit does not impact on the input source since the source is separated from the voltage divider output by the high op-amp input differential impedance.
In the inverting configuration, the voltage drop across resistor R1 sets the reference voltages, i.e., the upper threshold voltage (V+) and the lower threshold voltage (V−), against which the input signal is compared. These voltages are fixed because the output voltage and the resistor values are fixed, so changing the drop across R1 changes the threshold voltages. Adding a bias voltage in series with resistor R1 varies the drop across it, and thereby the thresholds; the desired reference voltages can be obtained by varying the bias voltage.
The above equations can be modified as:
Applications
Schmitt triggers are typically used in open loop configurations for noise immunity and closed loop configurations to implement function generators.
Analog-to-digital conversion: The Schmitt trigger is effectively a one-bit analog-to-digital converter. When the signal reaches a given level, it switches from its low to high state.
Level detection: The Schmitt trigger circuit is able to provide level detection. In this application, the hysteresis voltage must be taken into account so that the circuit switches at the required voltage.
Line reception: When running a data line that may have picked up noise into a logic gate, it is necessary to ensure that the logic output level only changes as the data changes, not as a result of spurious noise picked up along the line. Using a Schmitt trigger means that, broadly, the peak-to-peak noise has to reach the level of the hysteresis before spurious triggering can occur.
Noise immunity
One application of a Schmitt trigger is to increase the noise immunity in a circuit with only a single input threshold. With only one input threshold, a noisy input signal near that threshold could cause the output to switch rapidly back and forth from noise alone. A noisy input signal near one Schmitt trigger threshold can cause only one switch in output value, after which the signal would have to move beyond the other threshold in order to cause another switch.
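A small simulation makes this concrete. In the sketch below, the signal shape, noise amplitude, and threshold values are all assumed for illustration; the point is only the comparison between a single threshold and a pair of thresholds.

```python
import random

random.seed(0)
SINGLE_T = 0.5              # single comparator threshold (assumed)
LOW_T, HIGH_T = 0.45, 0.55  # Schmitt trigger thresholds (assumed)

def single_threshold_switches(samples):
    """Count output transitions for a comparator with one threshold."""
    state, switches = False, 0
    for x in samples:
        new_state = x > SINGLE_T
        switches += new_state != state
        state = new_state
    return switches

def schmitt_switches(samples):
    """Count output transitions for a two-threshold (hysteresis) comparator."""
    state, switches = False, 0
    for x in samples:
        if not state and x > HIGH_T:
            state, switches = True, switches + 1
        elif state and x < LOW_T:
            state, switches = False, switches + 1
    return switches

# A slow ramp with additive noise smaller than the hysteresis band.
signal = [i / 1000 + random.uniform(-0.04, 0.04) for i in range(1000)]
print("single threshold switches:", single_threshold_switches(signal))
print("hysteresis switches:      ", schmitt_switches(signal))
```

With noise smaller than the band, the hysteresis comparator switches once, while the single-threshold comparator chatters repeatedly around its threshold.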
For example, an amplified infrared photodiode may generate an electric signal that switches frequently between its absolute lowest value and its absolute highest value. This signal is then low-pass filtered to form a smooth signal that rises and falls corresponding to the relative amount of time the switching signal is on and off. That filtered output passes to the input of a Schmitt trigger. The net effect is that the output of the Schmitt trigger only passes from low to high after a received infrared signal excites the photodiode for longer than some known period, and once the Schmitt trigger is high, it only moves low after the infrared signal ceases to excite the photodiode for longer than a similar known period. Whereas the photodiode is prone to spurious switching due to noise from the environment, the delay added by the filter and Schmitt trigger ensures that the output only switches when there is certainly an input stimulating the device.
Schmitt triggers are common in many switching circuits for similar reasons (e.g., for switch debouncing).
The following 7400 series devices include a Schmitt trigger on their input(s): (see List of 7400-series integrated circuits)
7413: Dual Schmitt trigger 4-input NAND Gate
7414: Hex Schmitt trigger Inverter
7418: Dual Schmitt trigger 4-input NAND Gate
7419: Hex Schmitt trigger Inverter
74121: Monostable Multivibrator with Schmitt Trigger Inputs
74132: Quad 2-input NAND Schmitt Trigger
74221: Dual Monostable Multivibrator with Schmitt Trigger Input
74232: Quad NOR Schmitt Trigger
74310: Octal Buffer with Schmitt Trigger Inputs
74340: Octal Buffer with Schmitt Trigger Inputs and three-state inverted outputs
74341: Octal Buffer with Schmitt Trigger Inputs and three-state noninverted outputs
74344: Octal Buffer with Schmitt Trigger Inputs and three-state noninverted outputs
74(HC/HCT)7541 Octal Buffer with Schmitt Trigger Inputs and Three-State Noninverted Outputs
SN74LV8151 is a 10-bit universal Schmitt-trigger buffer with 3-state outputs
A number of 4000 series devices include a Schmitt trigger on their input(s): (see List of 4000-series integrated circuits)
4017: Decade Counter with Decoded Outputs
4020: 14-Stage Binary Ripple Counter
4022: Octal Counter with Decoded Outputs
4024: 7-Stage Binary Ripple Counter
4040: 12-Stage Binary Ripple Counter
4093: Quad 2-Input NAND
4538: Dual Monostable Multivibrator
4584: Hex inverting Schmitt trigger
40106: Hex Inverter
Schmitt input configurable single-gate chips: (see List of 7400-series integrated circuits#One gate chips)
NC7SZ57 Fairchild
NC7SZ58 Fairchild
SN74LVC1G57 Texas Instruments
SN74LVC1G58 Texas Instruments
Use as an oscillator
A Schmitt trigger is a bistable multivibrator, and it can be used to implement another type of multivibrator, the relaxation oscillator. This is achieved by connecting a single RC integrating circuit between the output and the input of an inverting Schmitt trigger. The output will be a continuous square wave whose frequency depends on the values of R and C, and the threshold points of the Schmitt trigger. Since multiple Schmitt trigger circuits can be provided by a single integrated circuit (e.g. the 4000 series CMOS device type 40106 contains 6 of them), a spare section of the IC can be quickly pressed into service as a simple and reliable oscillator with only two external components.
Here, a comparator-based Schmitt trigger is used in its inverting configuration. Additionally, slow negative feedback is added with an integrating RC network. The result, which is shown on the right, is that the output automatically oscillates from VSS to VDD as the capacitor charges from one Schmitt trigger threshold to the other.
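For a rough estimate of the oscillation frequency, the capacitor voltage charges exponentially toward the current output level until it reaches the opposite threshold. The sketch below assumes idealized output levels and threshold voltages; all numeric values are illustrative, not taken from a specific device.

```python
import math

# Relaxation oscillator: inverting Schmitt trigger with an RC network from its
# output back to its input. All numeric values are illustrative assumptions.
R = 100_000.0             # feedback resistor, ohms
C = 10e-9                 # timing capacitor, farads
V_LOW, V_HIGH = 0.0, 5.0  # output swing (e.g. VSS to VDD)
T_LOW, T_HIGH = 2.0, 3.0  # assumed Schmitt trigger thresholds

def rc_time(v_target, v_start, v_stop):
    """Time for an RC node charging toward v_target to go from v_start to v_stop."""
    return R * C * math.log((v_target - v_start) / (v_target - v_stop))

t_high = rc_time(V_HIGH, T_LOW, T_HIGH)  # output high: capacitor charges up
t_low = rc_time(V_LOW, T_HIGH, T_LOW)    # output low: capacitor discharges
period = t_high + t_low
print(f"period ~ {period * 1e3:.2f} ms, frequency ~ {1 / period:.0f} Hz")
```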
See also
Operational amplifier applications
Threshold detector with hysteresis
List of 4000-series integrated circuits - includes logic chips with Schmitt trigger inputs
List of 7400-series integrated circuits - includes logic chips with Schmitt trigger inputs
Notes
References
External links
Inverting Schmitt Trigger Calculator
Non-Inverting Schmitt Trigger Calculator
Digital electronics
Electronic circuits
Hysteresis | Schmitt trigger | [
"Physics",
"Materials_science",
"Engineering"
] | 5,471 | [
"Physical phenomena",
"Digital electronics",
"Electronic circuits",
"Materials science",
"Electronic engineering",
"Hysteresis"
] |
379,303 | https://en.wikipedia.org/wiki/Gas%20exchange | Gas exchange is the physical process by which gases move passively by diffusion across a surface. For example, this surface might be the air/water interface of a water body, the surface of a gas bubble in a liquid, a gas-permeable membrane, or a biological membrane that forms the boundary between an organism and its extracellular environment.
Gases are constantly consumed and produced by cellular and metabolic reactions in most living things, so an efficient system for gas exchange between, ultimately, the interior of the cell(s) and the external environment is required. Small, particularly unicellular organisms, such as bacteria and protozoa, have a high surface-area to volume ratio. In these creatures the gas exchange membrane is typically the cell membrane. Some small multicellular organisms, such as flatworms, are also able to perform sufficient gas exchange across the skin or cuticle that surrounds their bodies. However, in most larger organisms, which have small surface-area to volume ratios, specialised structures with convoluted surfaces such as gills, pulmonary alveoli and spongy mesophylls provide the large area needed for effective gas exchange. These convoluted surfaces may sometimes be internalised into the body of the organism. This is the case with the alveoli, which form the inner surface of the mammalian lung, the spongy mesophyll, which is found inside the leaves of some kinds of plant, or the gills of those molluscs that have them, which are found in the mantle cavity.
In aerobic organisms, gas exchange is particularly important for respiration, which involves the uptake of oxygen (O2) and release of carbon dioxide (CO2). Conversely, in oxygenic photosynthetic organisms such as most land plants, uptake of carbon dioxide and release of both oxygen and water vapour are the main gas-exchange processes occurring during the day. Other gas-exchange processes are important in less familiar organisms: e.g. carbon dioxide, methane and hydrogen are exchanged across the cell membrane of methanogenic archaea. In nitrogen fixation by diazotrophic bacteria, and denitrification by heterotrophic bacteria (such as Paracoccus denitrificans and various pseudomonads), nitrogen gas is exchanged with the environment, being taken up by the former and released into it by the latter, while giant tube worms rely on bacteria to oxidize hydrogen sulfide extracted from their deep sea environment, using dissolved oxygen in the water as an electron acceptor.
Diffusion only takes place with a concentration gradient. Gases will flow from a high concentration to a low concentration.
A high oxygen concentration in the alveoli and low oxygen concentration in the capillaries causes oxygen to move into the capillaries.
A high carbon dioxide concentration in the capillaries and low carbon dioxide concentration in the alveoli causes carbon dioxide to move into the alveoli.
Physical principles of gas-exchange
Diffusion and surface area
The exchange of gases occurs as a result of diffusion down a concentration gradient. Gas molecules move from a region in which they are at high concentration to one in which they are at low concentration. Diffusion is a passive process, meaning that no energy is required to power the transport, and it follows Fick's law:
J = −D dφ/dx
In relation to a typical biological system, where two compartments ('inside' and 'outside'), are separated by a membrane barrier, and where a gas is allowed to spontaneously diffuse down its concentration gradient:
J is the flux, the amount of gas diffusing per unit area of membrane per unit time. Note that this is already scaled for the area of the membrane.
D is the diffusion coefficient, which will differ from gas to gas, and from membrane to membrane, according to the size of the gas molecule in question, and the nature of the membrane itself (particularly its viscosity, temperature and hydrophobicity).
φ is the concentration of the gas.
x is the position across the thickness of the membrane.
dφ/dx is therefore the concentration gradient across the membrane. If the two compartments are individually well-mixed, then this simplifies to the difference in concentration of the gas between the inside and outside compartments divided by the thickness of the membrane.
The negative sign indicates that the diffusion is always in the direction that - over time - will destroy the concentration gradient, i.e. the gas moves from high concentration to low concentration until eventually the inside and outside compartments reach equilibrium.
Gases must first dissolve in a liquid in order to diffuse across a membrane, so all biological gas exchange systems require a moist environment. In general, the higher the concentration gradient across the gas-exchanging surface, the faster the rate of diffusion across it. Conversely, the thinner the gas-exchanging surface (for the same concentration difference), the faster the gases will diffuse across it.
In the equation above, J is the flux expressed per unit area, so increasing the area makes no difference to its value. However, an increase in the available surface area will increase the amount of gas that can diffuse in a given time. This is because the amount of gas diffusing per unit time (dq/dt) is the product of J and the area of the gas-exchanging surface, A:
dq/dt = J · A
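A minimal numerical sketch of these two relations is shown below; the diffusion coefficient, concentrations, membrane thickness and area are arbitrary illustrative values, not physiological data.

```python
# Fick's law for a thin membrane separating two well-mixed compartments,
# with x measured from the outside to the inside of the membrane.
# All numeric values are illustrative assumptions.
D = 2.0e-9       # diffusion coefficient, m^2/s
dx = 0.5e-6      # membrane thickness, m
phi_out = 0.20   # gas concentration outside, mol/m^3
phi_in = 0.05    # gas concentration inside, mol/m^3
A = 1.0e-4       # gas-exchange surface area, m^2

J = -D * (phi_in - phi_out) / dx   # flux (positive = inward), mol m^-2 s^-1
dq_dt = J * A                      # amount of gas transferred per unit time, mol/s
print(f"flux J      = {J:.2e} mol m^-2 s^-1")
print(f"dq/dt = J*A = {dq_dt:.2e} mol/s")
```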
Single-celled organisms such as bacteria and amoebae do not have specialised gas exchange surfaces, because they can take advantage of the high surface area they have relative to their volume. The amount of gas an organism produces (or requires) in a given time will be in rough proportion to the volume of its cytoplasm. The volume of a unicellular organism is very small; thus, it produces (and requires) a relatively small amount of gas in a given time. In comparison to this small volume, the surface area of its cell membrane is very large, and adequate for its gas-exchange needs without further modification. However, as an organism increases in size, its surface area and volume do not scale in the same way. Consider an imaginary organism that is a cube of side-length, L. Its volume increases with the cube (L3) of its length, but its external surface area increases only with the square (L2) of its length. This means the external surface rapidly becomes inadequate for the rapidly increasing gas-exchange needs of a larger volume of cytoplasm. Additionally, the thickness of the surface that gases must cross (dx in Fick's law) can also be larger in larger organisms: in the case of a single-celled organism, a typical cell membrane is only 10 nm thick; but in larger organisms such as roundworms (Nematoda) the equivalent exchange surface - the cuticle - is substantially thicker at 0.5 μm.
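The scaling argument can be illustrated with the cube example; the side lengths below are arbitrary.

```python
# Surface-area-to-volume ratio of a cube-shaped "organism" of side L:
# volume grows as L**3 but surface area only as 6 * L**2, so the membrane
# area available per unit of gas-consuming volume falls as 6 / L.
for L in (1e-6, 1e-5, 1e-4, 1e-3):   # side length in metres (illustrative)
    surface_to_volume = 6 * L**2 / L**3
    print(f"L = {L:.0e} m  ->  surface/volume = {surface_to_volume:.1e} per metre")
```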
Interaction with circulatory systems
In multicellular organisms therefore, specialised respiratory organs such as gills or lungs are often used to provide the additional surface area for the required rate of gas exchange with the external environment. However the distances between the gas exchanger and the deeper tissues are often too great for diffusion to meet gaseous requirements of these tissues. The gas exchangers are therefore frequently coupled to gas-distributing circulatory systems, which transport the gases evenly to all the body tissues regardless of their distance from the gas exchanger.
Some multicellular organisms such as flatworms (Platyhelminthes) are relatively large but very thin, allowing their outer body surface to act as a gas exchange surface without the need for a specialised gas exchange organ. Flatworms therefore lack gills or lungs, and also lack a circulatory system. Other multicellular organisms such as sponges (Porifera) have an inherently high surface area, because they are very porous and/or branched. Sponges do not require a circulatory system or specialised gas exchange organs, because their feeding strategy involves one-way pumping of water through their porous bodies using flagellated collar cells. Each cell of the sponge's body is therefore exposed to a constant flow of fresh oxygenated water. They can therefore rely on diffusion across their cell membranes to carry out the gas exchange needed for respiration.
In organisms that have circulatory systems associated with their specialized gas-exchange surfaces, a great variety of systems are used for the interaction between the two.
In a countercurrent flow system, air (or, more usually, the water containing dissolved air) is drawn in the opposite direction to the flow of blood in the gas exchanger. A countercurrent system such as this maintains a steep concentration gradient along the length of the gas-exchange surface (see lower diagram in Fig. 2). This is the situation seen in the gills of fish and many other aquatic creatures. The gas-containing environmental water is drawn unidirectionally across the gas-exchange surface, with the blood-flow in the gill capillaries beneath flowing in the opposite direction. Although this theoretically allows almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood.
Alternative arrangements are cross-current systems found in birds, and dead-end air-filled sac systems found in the lungs of mammals. In a cocurrent flow system, the blood and gas (or the fluid containing the gas) move in the same direction through the gas exchanger. This means the magnitude of the gradient is variable along the length of the gas-exchange surface, and the exchange will eventually stop when an equilibrium has been reached (see upper diagram in Fig. 2).
Cocurrent flow gas exchange systems are not known to be used in nature.
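The difference between the two arrangements can be illustrated with a toy steady-state model: the exchanger is divided into segments, and in each segment a fixed fraction of the local concentration difference is transferred from the water to the blood. The segment count and transfer fraction below are arbitrary assumptions; the model ignores solubility, flow-rate differences and other real-world factors.

```python
# Toy comparison of cocurrent and countercurrent exchange (normalised units).

def cocurrent(n=50, k=0.15):
    """Both streams enter at the same end; returns the blood outlet concentration."""
    blood, water = 0.0, 1.0
    for _ in range(n):
        transfer = k * (water - blood)
        blood += transfer
        water -= transfer
    return blood

def countercurrent(n=50, k=0.15, sweeps=1000):
    """Streams flow in opposite directions; solved by fixed-point iteration."""
    water = [1.0] * (n + 1)   # water[i]: water entering segment i (inlet at i = n)
    blood = [0.0] * (n + 1)   # blood[i]: blood entering segment i (inlet at i = 0)
    for _ in range(sweeps):
        for i in range(n):                 # blood marches forward
            blood[i + 1] = blood[i] + k * (water[i + 1] - blood[i])
        for i in range(n - 1, -1, -1):     # water marches backward
            water[i] = water[i + 1] - k * (water[i + 1] - blood[i])
    return blood[n]

print(f"cocurrent blood outlet:      {cocurrent():.2f}")
print(f"countercurrent blood outlet: {countercurrent():.2f}")
```

In this toy model the cocurrent exchanger levels off at roughly the mean of the two inlet concentrations, while the countercurrent exchanger delivers blood much closer to the concentration of the incoming water, mirroring the behaviour described above.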
Mammals
The gas exchanger in mammals is internalized to form lungs, as it is in most of the larger land animals. Gas exchange occurs in microscopic dead-end air-filled sacs called alveoli, where a very thin membrane (called the blood-air barrier) separates the blood in the alveolar capillaries (in the walls of the alveoli) from the alveolar air in the sacs.
Exchange membrane
The membrane across which gas exchange takes place in the alveoli (i.e. the blood-air barrier) is extremely thin (in humans, on average, 2.2 μm thick). It consists of the alveolar epithelial cells, their basement membranes and the endothelial cells of the pulmonary capillaries (Fig. 4). The large surface area of the membrane comes from the folding of the membrane into about 300 million alveoli, with diameters of approximately 75–300 μm each. This provides an extremely large surface area (approximately 145 m2) across which gas exchange can occur.
Alveolar air
Air is brought to the alveoli in small doses (called the tidal volume), by breathing in (inhalation) and out (exhalation) through the respiratory airways, a set of relatively narrow and moderately long tubes which start at the nose or mouth and end in the alveoli of the lungs in the chest. Air moves in and out through the same set of tubes, in which the flow is in one direction during inhalation, and in the opposite direction during exhalation.
During each inhalation, at rest, approximately 500 ml of fresh air flows in through the nose. It is warmed and moistened as it flows through the nose and pharynx. By the time it reaches the trachea the inhaled air's temperature is 37 °C and it is saturated with water vapor. On arrival in the alveoli it is diluted and thoroughly mixed with the approximately 2.5–3.0 liters of air that remained in the alveoli after the last exhalation. This relatively large volume of air that is semi-permanently present in the alveoli throughout the breathing cycle is known as the functional residual capacity (FRC).
At the beginning of inhalation the airways are filled with unchanged alveolar air, left over from the last exhalation. This is the dead space volume, which is usually about 150 ml. It is the first air to re-enter the alveoli during inhalation. Only after the dead space air has returned to the alveoli does the remainder of the tidal volume (500 ml - 150 ml = 350 ml) enter the alveoli. The entry of such a small volume of fresh air with each inhalation, ensures that the composition of the FRC hardly changes during the breathing cycle (Fig. 5). The alveolar partial pressure of oxygen remains very close to 13–14 kPa (100 mmHg), and the partial pressure of carbon dioxide varies minimally around 5.3 kPa (40 mmHg) throughout the breathing cycle (of inhalation and exhalation). The corresponding partial pressures of oxygen and carbon dioxide in the ambient (dry) air at sea level are 21 kPa (160 mmHg) and 0.04 kPa (0.3 mmHg) respectively.
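The dilution argument can be checked with the approximate resting figures quoted above; the sketch below simply restates that arithmetic.

```python
# How little of the alveolar air is exchanged per breath, using the
# approximate resting values quoted in the text.
tidal_volume = 500.0   # ml inhaled per breath
dead_space = 150.0     # ml of unchanged airway air that re-enters the alveoli first
frc = 3000.0           # ml of functional residual capacity (2.5-3.0 litres)

fresh_air = tidal_volume - dead_space
fraction_replaced = fresh_air / frc
print(f"fresh air reaching the alveoli per breath: {fresh_air:.0f} ml")
print(f"fraction of the FRC replaced per breath:   {fraction_replaced:.1%}")
```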
This alveolar air, which constitutes the FRC, completely surrounds the blood in the alveolar capillaries (Fig. 6). Gas exchange in mammals occurs between this alveolar air (which differs significantly from fresh air) and the blood in the alveolar capillaries. The gases on either side of the gas exchange membrane equilibrate by simple diffusion. This ensures that the partial pressures of oxygen and carbon dioxide in the blood leaving the alveolar capillaries, which ultimately circulates throughout the body, are the same as those in the FRC.
The marked difference between the composition of the alveolar air and that of the ambient air can be maintained because the functional residual capacity is contained in dead-end sacs connected to the outside air by long, narrow, tubes (the airways: nose, pharynx, larynx, trachea, bronchi and their branches and sub-branches down to the bronchioles). This anatomy, and the fact that the lungs are not emptied and re-inflated with each breath, provides mammals with a "portable atmosphere", whose composition differs significantly from the present-day ambient air.
The composition of the air in the FRC is carefully monitored, by measuring the partial pressures of oxygen and carbon dioxide in the arterial blood. If either gas pressure deviates from normal, reflexes are elicited that change the rate and depth of breathing in such a way that normality is restored within seconds or minutes.
Pulmonary circulation
All the blood returning from the body tissues to the right side of the heart flows through the alveolar capillaries before being pumped around the body again. On its passage through the lungs the blood comes into close contact with the alveolar air, separated from it by a very thin diffusion membrane which is only, on average, about 2 μm thick. The gas pressures in the blood will therefore rapidly equilibrate with those in the alveoli, ensuring that the arterial blood that circulates to all the tissues throughout the body has an oxygen tension of 13−14 kPa (100 mmHg), and a carbon dioxide tension of 5.3 kPa (40 mmHg). These arterial partial pressures of oxygen and carbon dioxide are homeostatically controlled. A rise in the arterial carbon dioxide tension, and, to a lesser extent, a fall in the arterial oxygen tension, will reflexly cause deeper and faster breathing until the blood gas tensions return to normal. The converse happens when the carbon dioxide tension falls, or, again to a lesser extent, the oxygen tension rises: the rate and depth of breathing are reduced until blood gas normality is restored.
Since the blood arriving in the alveolar capillaries has an oxygen tension of, on average, 6 kPa (45 mmHg), while the oxygen tension in the alveolar air is 13 kPa (100 mmHg), there will be a net diffusion of oxygen into the capillary blood, changing the composition of the 3 liters of alveolar air slightly. Similarly, since the blood arriving in the alveolar capillaries has a carbon dioxide tension of also about 6 kPa (45 mmHg), whereas that of the alveolar air is 5.3 kPa (40 mmHg), there is a net movement of carbon dioxide out of the capillaries into the alveoli. The changes brought about by these net flows of individual gases into and out of the functional residual capacity necessitate the replacement of about 15% of the alveolar air with ambient air every 5 seconds or so. This is very tightly controlled by the continuous monitoring of the arterial blood gas tensions (which accurately reflect the partial pressures of the respiratory gases in the alveolar air) by the aortic bodies, the carotid bodies, and the blood gas and pH sensor on the anterior surface of the medulla oblongata in the brain. There are also oxygen and carbon dioxide sensors in the lungs, but they primarily determine the diameters of the bronchioles and pulmonary capillaries, and are therefore responsible for directing the flow of air and blood to different parts of the lungs.
It is only as a result of accurately maintaining the composition of the 3 liters of alveolar air that with each breath some carbon dioxide is discharged into the atmosphere and some oxygen is taken up from the outside air. If more carbon dioxide than usual has been lost by a short period of hyperventilation, respiration will be slowed down or halted until the alveolar carbon dioxide tension has returned to 5.3 kPa (40 mmHg). It is therefore strictly speaking untrue that the primary function of the respiratory system is to rid the body of carbon dioxide "waste". In fact the total concentration of carbon dioxide in arterial blood is about 26 mM (or 58 ml per 100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml per 100 ml blood). This large concentration of carbon dioxide plays a pivotal role in the determination and maintenance of the pH of the extracellular fluids. The carbon dioxide that is breathed out with each breath could probably more correctly be seen as a byproduct of the body's extracellular fluid carbon dioxide and pH homeostats.
If these homeostats are compromised, then a respiratory acidosis, or a respiratory alkalosis will occur. In the long run these can be compensated by renal adjustments to the H+ and HCO3− concentrations in the plasma; but since this takes time, the hyperventilation syndrome can, for instance, occur when agitation or anxiety cause a person to breathe fast and deeply thus blowing off too much CO2 from the blood into the outside air, precipitating a set of distressing symptoms which result from an excessively high pH of the extracellular fluids.
Oxygen has a very low solubility in water, and is therefore carried in the blood loosely combined with hemoglobin. The oxygen is held on the hemoglobin by four ferrous iron-containing heme groups per hemoglobin molecule. When all the heme groups carry one O2 molecule each the blood is said to be "saturated" with oxygen, and no further increase in the partial pressure of oxygen will meaningfully increase the oxygen concentration of the blood. Most of the carbon dioxide in the blood is carried as HCO3− ions in the plasma. However the conversion of dissolved CO2 into HCO3− (through the addition of water) is too slow for the rate at which the blood circulates through the tissues on the one hand, and alveolar capillaries on the other. The reaction is therefore catalyzed by carbonic anhydrase, an enzyme inside the red blood cells. The reaction can go in either direction depending on the prevailing partial pressure of carbon dioxide. A small amount of carbon dioxide is carried on the protein portion of the hemoglobin molecules as carbamino groups. The total concentration of carbon dioxide (in the form of bicarbonate ions, dissolved CO2, and carbamino groups) in arterial blood (i.e. after it has equilibrated with the alveolar air) is about 26 mM (or 58 ml/100 ml), compared to the concentration of oxygen in saturated arterial blood of about 9 mM (or 20 ml/100 ml blood).
Other vertebrates
Fish
The dissolved oxygen content in fresh water is approximately 8–10 milliliters per liter compared to that of air which is 210 milliliters per liter. Water is 800 times more dense than air and 100 times more viscous. Therefore, oxygen has a diffusion rate in air 10,000 times greater than in water. The use of sac-like lungs to remove oxygen from water would therefore not be efficient enough to sustain life. Rather than using lungs, gaseous exchange takes place across the surface of highly vascularized gills. Gills are specialised organs containing filaments, which further divide into lamellae. The lamellae contain capillaries that provide a large surface area and short diffusion distances, as their walls are extremely thin. Gill rakers are found within the exchange system in order to filter out food, and keep the gills clean.
Gills use a countercurrent flow system that increases the efficiency of oxygen-uptake (and waste gas loss). Oxygenated water is drawn in through the mouth and passes over the gills in one direction while blood flows through the lamellae in the opposite direction. This countercurrent maintains steep concentration gradients along the entire length of each capillary (see the diagram in the "Interaction with circulatory systems" section above). Oxygen is able to continually diffuse down its gradient into the blood, and the carbon dioxide down its gradient into the water. The deoxygenated water will eventually pass out through the operculum (gill cover). Although countercurrent exchange systems theoretically allow an almost complete transfer of a respiratory gas from one side of the exchanger to the other, in fish less than 80% of the oxygen in the water flowing over the gills is generally transferred to the blood.
Amphibians
Amphibians have three main organs involved in gas exchange: the lungs, the skin, and the gills, which can be used singly or in a variety of different combinations. The relative importance of these structures differs according to the age, the environment and species of the amphibian. The skin of amphibians and their larvae are highly vascularised, leading to relatively efficient gas exchange when the skin is moist. The larvae of amphibians, such as the pre-metamorphosis tadpole stage of frogs, also have external gills. The gills are absorbed into the body during metamorphosis, after which the lungs will then take over. The lungs are usually simpler than in the other land vertebrates, with few internal septa and larger alveoli; however, toads, which spend more time on land, have a larger alveolar surface with more developed lungs. To increase the rate of gas exchange by diffusion, amphibians maintain the concentration gradient across the respiratory surface using a process called buccal pumping. The lower floor of the mouth is moved in a "pumping" manner, which can be observed by the naked eye.
Reptiles
All reptiles breathe using lungs. In squamates (the lizards and snakes) ventilation is driven by the axial musculature, but this musculature is also used during movement, so some squamates rely on buccal pumping to maintain gas exchange efficiency.
Due to the rigidity of turtle and tortoise shells, significant expansion and contraction of the chest is difficult. Turtles and tortoises depend on muscle layers attached to their shells, which wrap around their lungs to fill and empty them. Some aquatic turtles can also pump water into a highly vascularised mouth or cloaca to achieve gas-exchange.
Crocodiles have a structure similar to the mammalian diaphragm - the diaphragmaticus - but this muscle helps create a unidirectional flow of air through the lungs rather than a tidal flow: this is more similar to the air-flow seen in birds than that seen in mammals. During inhalation, the diaphragmaticus pulls the liver back, inflating the lungs into the space this creates. Air flows into the lungs from the bronchus during inhalation, but during exhalation, air flows out of the lungs into the bronchus by a different route: this one-way movement of gas is achieved by aerodynamic valves in the airways.
Birds
Birds have lungs but no diaphragm. They rely mostly on air sacs for ventilation. These air sacs do not play a direct role in gas exchange, but help to move air unidirectionally across the gas exchange surfaces in the lungs. During inhalation, fresh air is taken from the trachea down into the posterior air sacs and into the parabronchi which lead from the posterior air sacs into the lung. The air that enters the lungs joins the air which is already in the lungs, and is drawn forward across the gas exchanger into anterior air sacs. During exhalation, the posterior air sacs force air into the same parabronchi of the lungs, flowing in the same direction as during inhalation, allowing continuous gas exchange irrespective of the breathing cycle. Air exiting the lungs during exhalation joins the air being expelled from the anterior air sacs (both consisting of "spent air" that has passed through the gas exchanger) entering the trachea to be exhaled (Fig. 10). Selective bronchoconstriction at the various bronchial branch points ensures that the air does not ebb and flow through the bronchi during inhalation and exhalation, as it does in mammals, but follows the paths described above.
The unidirectional airflow through the parabronchi exchanges respiratory gases with a crosscurrent blood flow (Fig. 9). The partial pressure of O2 () in the parabronchioles declines along their length as O2 diffuses into the blood. The capillaries leaving the exchanger near the entrance of airflow take up more O2 than capillaries leaving near the exit end of the parabronchi. When the contents of all capillaries mix, the final of the mixed pulmonary venous blood is higher than that of the exhaled air, but lower than that of the inhaled air.
Plants
Gas exchange in plants is dominated by the roles of carbon dioxide, oxygen and water vapor. Carbon dioxide is the only carbon source for autotrophic growth by photosynthesis, and when a plant is actively photosynthesising in the light, it will be taking up carbon dioxide, and losing water vapor and oxygen. At night, plants respire, and gas exchange partly reverses: water vapor is still lost (but to a smaller extent), but oxygen is now taken up and carbon dioxide released.
Plant gas exchange occurs mostly through the leaves. Gas exchange between a leaf and the atmosphere occurs simultaneously through two pathways: 1) epidermal cells and cuticular waxes (usually referred to as the 'cuticle'), which are always present at each leaf surface, and 2) stomata, which typically control the majority of the exchange. Gases enter the photosynthetic tissue of the leaf through dissolution onto the moist surface of the palisade and spongy mesophyll cells. The spongy mesophyll cells are loosely packed, allowing for an increased surface area, and consequently an increased rate of gas exchange. Uptake of carbon dioxide necessarily results in some loss of water vapor, because both molecules enter and leave by the same stomata, so plants experience a gas exchange dilemma: gaining enough carbon dioxide without losing too much water. Therefore, water loss from other parts of the leaf is minimised by the waxy cuticle on the leaf's epidermis. The size of a stoma is regulated by the opening and closing of its two guard cells: the turgidity of these cells determines the state of the stomatal opening, and this itself is regulated by water stress. Plants showing crassulacean acid metabolism are drought-tolerant xerophytes and perform almost all their gas exchange at night, because it is only during the night that these plants open their stomata. By opening the stomata only at night, the water vapor loss associated with carbon dioxide uptake is minimised. However, this comes at the cost of slow growth: the plant has to store the carbon dioxide in the form of malic acid for use during the day, and it cannot store unlimited amounts.
Gas exchange measurements are important tools in plant science: this typically involves sealing the plant (or part of a plant) in a chamber and measuring changes in the concentration of carbon dioxide and water vapour with an infrared gas analyzer. If the environmental conditions (humidity, carbon dioxide concentration, light and temperature) are fully controlled, the measurements of carbon dioxide uptake and water release reveal important information about the carbon dioxide assimilation and transpiration rates. The intercellular carbon dioxide concentration reveals important information about the photosynthetic condition of the plants. Simpler methods can be used in specific circumstances: hydrogencarbonate indicator can be used to monitor the consumption of carbon dioxide in a solution containing a single plant leaf at different levels of light intensity, and oxygen generation by the pondweed Elodea can be measured by simply collecting the gas in a submerged test-tube containing a small piece of the plant.
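A highly simplified version of the underlying calculation is sketched below: net CO2 assimilation is the air flow through the chamber multiplied by the CO2 drawdown, divided by the leaf area. Real instruments also correct for water-vapour dilution and leaks; every numeric value here is an illustrative assumption.

```python
# Simplified net CO2 assimilation rate from an open gas-exchange chamber.
# All values are illustrative assumptions.
flow = 500e-6        # air flow through the chamber, mol of air per second
co2_in = 400e-6      # CO2 mole fraction of the air entering (400 ppm)
co2_out = 380e-6     # CO2 mole fraction of the air leaving (380 ppm)
leaf_area = 6e-4     # enclosed leaf area, m^2

assimilation = flow * (co2_in - co2_out) / leaf_area   # mol CO2 m^-2 s^-1
print(f"net assimilation ~ {assimilation * 1e6:.1f} umol CO2 m^-2 s^-1")
```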
Invertebrates
The mechanism of gas exchange in invertebrates depends on their size, feeding strategy, and habitat (aquatic or terrestrial).
The sponges (Porifera) are sessile creatures, meaning they are unable to move on their own and normally remain attached to their substrate. They obtain nutrients through the flow of water across their cells, and they exchange gases by simple diffusion across their cell membranes. Pores called ostia draw water into the sponge and the water is subsequently circulated through the sponge by cells called choanocytes which have hair-like structures that move the water through the sponge.
The cnidarians include corals, sea anemones, jellyfish and hydras. These animals are always found in aquatic environments, ranging from fresh water to salt water. They do not have any dedicated respiratory organs; instead, every cell in their body can absorb oxygen from the surrounding water, and release waste gases to it. One key disadvantage of this feature is that cnidarians can die in environments where water is stagnant, as they deplete the water of its oxygen supply. Corals often form symbiosis with other organisms, particularly photosynthetic dinoflagellates. In this symbiosis, the coral provides shelter and the other organism provides nutrients to the coral, including oxygen.
The roundworms (Nematoda), flatworms (Platyhelminthes), and many other small invertebrate animals living in aquatic or otherwise wet habitats do not have a dedicated gas-exchange surface or circulatory system. They instead rely on diffusion of oxygen and carbon dioxide directly across their cuticle. The cuticle is the semi-permeable outermost layer of their bodies.
Other aquatic invertebrates such as most molluscs (Mollusca) and larger crustaceans (Crustacea) such as lobsters, have gills analogous to those of fish, which operate in a similar way.
Unlike the invertebrate groups mentioned so far, insects are usually terrestrial, and exchange gases across a moist surface in direct contact with the atmosphere, rather than in contact with surrounding water. The insect's exoskeleton is impermeable to gases, including water vapor, so they have a more specialised gas exchange system, requiring gases to be directly transported to the tissues via a complex network of tubes. This respiratory system is separated from their circulatory system. Gases enter and leave the body through openings called spiracles, located laterally along the thorax and abdomen. Similar to plants, insects are able to control the opening and closing of these spiracles, but instead of relying on turgor pressure, they rely on muscle contractions. These contractions result in an insect's abdomen being pumped in and out. The spiracles are connected to tubes called tracheae, which branch repeatedly and ramify into the insect's body. These branches terminate in specialised tracheole cells which provide a thin, moist surface for efficient gas exchange, directly with cells.
The other main group of terrestrial arthropods, the arachnids (spiders, scorpions, mites, and their relatives), typically perform gas exchange with a book lung.
Summary of main gas exchange systems
See also
References
Biological processes | Gas exchange | [
"Physics",
"Chemistry",
"Biology"
] | 6,762 | [
"Matter",
"Phases of matter",
"nan",
"Statistical mechanics",
"Gases"
] |
1,659,646 | https://en.wikipedia.org/wiki/Initiation%20%28chemistry%29 | In chemistry, initiation is a chemical reaction that triggers one or more secondary reactions. Initiation creates a reactive centre on a molecule which produces a chain reaction. The reactive centre generated by initiation is usually a radical, but can also be cations or anions. Once the reaction is initiated, the species goes through propagation where the reactive species reacts with stable molecules, producing stable species and reactive species. This process can produce very long chains of molecules called polymers, which are the building blocks for many materials. After propagation, the reaction is then terminated. There are different types of initiation, with the two main ways being thermal initiation and photo-initiation (light).
Thermal initiation
Thermal initiation involves initiating a reaction in the presence of heat, usually at very high temperatures. Heating a reaction can result in radical initiation of the substrate(s). In the presence of heat, a monomer can self-initiate and react with other monomers or pairs of monomers. This process is called spontaneous polymerization and requires a lot of heat to occur (up to 200°C). For monomers to initiate and polymerize with the same type of monomer (called homopolymerization), ~180°C is needed for the monomers to initiate. Copolymerization, which is when different kinds of monomers are initiated and react with each other, is more stable and can happen at lower temperatures than homopolymerization. Self-initiation between homo-monomers is a difficult mechanism to observe because the species that are initiated are not always the same kind of monomer. Sometimes impurities found in the reaction flask with the monomers get initiated and polymerize with the monomers, instead of the monomer itself getting initiated.
Photoinitiation (light)
Photo-initiation occurs when monomers are initiated by light irradiation. LED light passes through the reaction flask and excites the monomers, turning them into reactive species, mainly radicals and ions, which can then polymerize. There are two mechanistic classifications of photo-initiation reactions: photoredox processes and intramolecular photochemical processes. This type of initiation can happen at much lower temperatures, mainly room temperature, than thermal initiation, which makes photo-initiation much more practical. Photo-initiation also produces fewer side reactions and fewer impurities than thermal initiation. While thermal initiation is hard to maintain, photo-initiation provides an easy way to initiate monomers to polymerize. Photo-initiation is used in applications such as making various coatings, adhesives, inks, and microelectronics.
See also
Free radical addition
References
Sources
R. G. Compton. 1992. Mechanism and Kinetics of Addition Polymerizations, 30, 75-162.
Britannica, The Editors of Encyclopaedia. "chain reaction". Encyclopedia Britannica, 2 May. 2017, https://www.britannica.com/science/chain-reaction. Accessed 29 March 2023.
Britannica, The Editors of Encyclopaedia. "polymer". Encyclopedia Britannica, 2 Jan. 2023, https://www.britannica.com/science/polymer. Accessed 31 March 2023.
Graeme Moad and David H. Solomon. 1989. Comprehensive Polymer Science and Supplements. 141-146.
Yagçi, Y., Jockusch, S., Turro, N.J. Photoinitiated Polymerization: Advances, Challenges, and Opportunities. Macromolecules 2010, 43, 6245–6260.
Gijsman, P., Hensen, G., Manon, M. Thermal initiation of the oxidation of thermoplastic polymers (Polyamides, Polyesters and UHMwPE). Polymer Degradation and Stability 2021, 183.
Chen, M., Zhong, M., Johnson, J. A. Light-Controlled Radical Polymerization: Mechanisms, Methods, and Applications. Chemical Reviews, 2016,116(17), 10167–1021.
Khojczyk (2011-20-09), English: Hofmann-Löffler-Freytag reaction mechanism, retrieved 2023-03-31.
Reaction mechanisms | Initiation (chemistry) | [
"Chemistry"
] | 875 | [
"Reaction mechanisms",
"Chemical kinetics",
"Physical organic chemistry"
] |
1,661,177 | https://en.wikipedia.org/wiki/Accretion%20%28astrophysics%29 | In astrophysics, accretion is the accumulation of particles into a massive object by gravitationally attracting more matter, typically gaseous matter, into an accretion disk. Most astronomical objects, such as galaxies, stars, and planets, are formed by accretion processes.
Overview
The accretion model that Earth and the other terrestrial planets formed from meteoric material was proposed in 1944 by Otto Schmidt, followed by the protoplanet theory of William McCrea (1960) and finally the capture theory of Michael Woolfson. In 1978, Andrew Prentice resurrected the initial Laplacian ideas about planet formation and developed the modern Laplacian theory. None of these models proved completely successful, and many of the proposed theories were descriptive.
The 1944 accretion model by Otto Schmidt was further developed in a quantitative way in 1969 by Viktor Safronov. He calculated, in detail, the different stages of terrestrial planet formation. Since then, the model has been further developed using intensive numerical simulations to study planetesimal accumulation. It is now accepted that stars form by the gravitational collapse of interstellar gas. Prior to collapse, this gas is mostly in the form of molecular clouds, such as the Orion Nebula. As the cloud collapses, losing potential energy, it heats up, gaining kinetic energy, and the conservation of angular momentum ensures that the cloud forms a flattened disk—the accretion disk.
Accretion of galaxies
A few hundred thousand years after the Big Bang, the Universe cooled to the point where atoms could form. As the Universe continued to expand and cool, the atoms lost enough kinetic energy, and dark matter coalesced sufficiently, to form protogalaxies. As further accretion occurred, galaxies formed. Indirect evidence is widespread. Galaxies grow through mergers and smooth gas accretion. Accretion also occurs inside galaxies, forming stars.
Accretion of stars
Stars are thought to form inside giant clouds of cold molecular hydrogen—giant molecular clouds of roughly and in diameter. Over millions of years, giant molecular clouds are prone to collapse and fragmentation. These fragments then form small, dense cores, which in turn collapse into stars. The cores range in mass from a fraction to several times that of the Sun and are called protostellar (protosolar) nebulae. They possess diameters of and a particle number density of roughly . Compare it with the particle number density of the air at the sea level—.
The initial collapse of a solar-mass protostellar nebula takes around 100,000 years. Every nebula begins with a certain amount of angular momentum. Gas in the central part of the nebula, with relatively low angular momentum, undergoes fast compression and forms a hot hydrostatic (non-contracting) core containing a small fraction of the mass of the original nebula. This core forms the seed of what will become a star. As the collapse continues, conservation of angular momentum dictates that the rotation of the infalling envelope accelerates, which eventually forms a disk.
As the infall of material from the disk continues, the envelope eventually becomes thin and transparent and the young stellar object (YSO) becomes observable, initially in far-infrared light and later in the visible. Around this time the protostar begins to fuse deuterium. If the protostar is sufficiently massive (above ), hydrogen fusion follows. Otherwise, if its mass is too low, the object becomes a brown dwarf. This birth of a new star occurs approximately 100,000 years after the collapse begins. Objects at this stage are known as Class I protostars, which are also called young T Tauri stars, evolved protostars, or young stellar objects. By this time, the forming star has already accreted much of its mass; the total mass of the disk and remaining envelope does not exceed 10–20% of the mass of the central YSO.
At the next stage, the envelope completely disappears, having been gathered up by the disk, and the protostar becomes a classical T Tauri star. The latter have accretion disks and continue to accrete hot gas, which manifests itself by strong emission lines in their spectrum. The former do not possess accretion disks. Classical T Tauri stars evolve into weakly lined T Tauri stars. This happens after about 1 million years. The mass of the disk around a classical T Tauri star is about 1–3% of the stellar mass, and it is accreted at a rate of 10−7 to per year. A pair of bipolar jets is usually present as well. The accretion explains all peculiar properties of classical T Tauri stars: strong flux in the emission lines (up to 100% of the intrinsic luminosity of the star), magnetic activity, photometric variability and jets. The emission lines actually form as the accreted gas hits the "surface" of the star, which happens around its magnetic poles. The jets are byproducts of accretion: they carry away excessive angular momentum. The classical T Tauri stage lasts about 10 million years (there are only a few examples of so-called Peter Pan disks, where the accretion continues to persist for much longer periods, sometimes lasting for more than 40 million years). The disk eventually disappears due to accretion onto the central star, planet formation, ejection by jets, and photoevaporation by ultraviolet radiation from the central star and nearby stars. As a result, the young star becomes a weakly lined T Tauri star, which, over hundreds of millions of years, evolves into an ordinary Sun-like star, dependent on its initial mass.
Accretion of planets
Self-accretion of cosmic dust accelerates the growth of the particles into boulder-sized planetesimals. The more massive planetesimals accrete some smaller ones, while others shatter in collisions. Accretion disks are common around smaller stars, stellar remnants in a close binary, or black holes surrounded by material (such as those at the centers of galaxies). Some dynamics in the disk, such as dynamical friction, are necessary to allow orbiting gas to lose angular momentum and fall onto the central massive object. Occasionally, this can result in stellar surface fusion (see Bondi accretion).
In the formation of terrestrial planets or planetary cores, several stages can be considered. First, when gas and dust grains collide, they agglomerate by microphysical processes like van der Waals forces and electromagnetic forces, forming micrometer-sized particles. During this stage, accumulation mechanisms are largely non-gravitational in nature. However, planetesimal formation in the centimeter-to-meter range is not well understood, and no convincing explanation is offered as to why such grains would accumulate rather than simply rebound. In particular, it is still not clear how these objects grow to become sized planetesimals; this problem is known as the "meter size barrier": As dust particles grow by coagulation, they acquire increasingly large relative velocities with respect to other particles in their vicinity, as well as a systematic inward drift velocity, that leads to destructive collisions, and thereby limit the growth of the aggregates to some maximum size. Ward (1996) suggests that when slow moving grains collide, the very low, yet non-zero, gravity of colliding grains impedes their escape. It is also thought that grain fragmentation plays an important role replenishing small grains and keeping the disk thick, but also in maintaining a relatively high abundance of solids of all sizes.
A number of mechanisms have been proposed for crossing the 'meter-sized' barrier. Local concentrations of pebbles may form, which then gravitationally collapse into planetesimals the size of large asteroids. These concentrations can occur passively due to the structure of the gas disk, for example, between eddies, at pressure bumps, at the edge of a gap created by a giant planet, or at the boundaries of turbulent regions of the disk. Or, the particles may take an active role in their concentration via a feedback mechanism referred to as a streaming instability. In a streaming instability the interaction between the solids and the gas in the protoplanetary disk results in the growth of local concentrations, as new particles accumulate in the wake of small concentrations, causing them to grow into massive filaments. Alternatively, if the grains that form due to the agglomeration of dust are highly porous their growth may continue until they become large enough to collapse due to their own gravity. The low density of these objects allows them to remain strongly coupled with the gas, thereby avoiding high velocity collisions which could result in their erosion or fragmentation.
Grains eventually stick together to form mountain-size (or larger) bodies called planetesimals. Collisions and gravitational interactions between planetesimals combine to produce Moon-size planetary embryos (protoplanets) over roughly 0.1–1 million years. Finally, the planetary embryos collide to form planets over 10–100 million years. The planetesimals are massive enough that mutual gravitational interactions must be taken into account when computing their evolution. Growth is aided by orbital decay of smaller bodies due to gas drag, which prevents them from being stranded between orbits of the embryos. Further collisions and accumulation lead to terrestrial planets or the cores of giant planets.
If the planetesimals formed via the gravitational collapse of local concentrations of pebbles, their growth into planetary embryos and the cores of giant planets is dominated by the further accretions of pebbles. Pebble accretion is aided by the gas drag felt by objects as they accelerate toward a massive body. Gas drag slows the pebbles below the escape velocity of the massive body causing them to spiral toward and to be accreted by it. Pebble accretion may accelerate the formation of planets by a factor of 1000 compared to the accretion of planetesimals, allowing giant planets to form before the dissipation of the gas disk. However, core growth via pebble accretion appears incompatible with the final masses and compositions of Uranus and Neptune. Direct calculations indicate that, in a typical protoplanetary disk, the formation time of a giant planet via pebble accretion is comparable to the formation times resulting from planetesimal accretion.
The formation of terrestrial planets differs from that of giant gas planets, also called Jovian planets. The particles that make up the terrestrial planets are made from metal and rock that condensed in the inner Solar System. However, Jovian planets began as large, icy planetesimals, which then captured hydrogen and helium gas from the solar nebula. Differentiation between these two classes of planetesimals arises due to the frost line of the solar nebula.
Accretion of asteroids
Meteorites contain a record of accretion and impacts during all stages of asteroid origin and evolution; however, the mechanism of asteroid accretion and growth is not well understood. Evidence suggests the main growth of asteroids can result from gas-assisted accretion of chondrules, which are millimeter-sized spherules that form as molten (or partially molten) droplets in space before being accreted to their parent asteroids. In the inner Solar System, chondrules appear to have been crucial for initiating accretion. The tiny mass of asteroids may be partly due to inefficient chondrule formation beyond 2 AU, or less-efficient delivery of chondrules from near the protostar. Also, impacts controlled the formation and destruction of asteroids, and are thought to be a major factor in their geological evolution.
Chondrules, metal grains, and other components likely formed in the solar nebula. These accreted together to form parent asteroids. Some of these bodies subsequently melted, forming metallic cores and olivine-rich mantles; others were aqueously altered. After the asteroids had cooled, they were eroded by impacts for 4.5 billion years, or disrupted.
For accretion to occur, impact velocities must be less than about twice the escape velocity, which is about for a radius asteroid. Simple models for accretion in the asteroid belt generally assume micrometer-sized dust grains sticking together and settling to the midplane of the nebula to form a dense layer of dust, which, because of gravitational forces, was converted into a disk of kilometer-sized planetesimals. But, several arguments suggest that asteroids may not have accreted this way.
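As a rough illustration of the escape-velocity criterion, the sketch below computes the escape velocity of a small homogeneous body; the radius and density used are assumed, illustrative values rather than figures from the article.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def escape_velocity(radius_m: float, density_kg_m3: float) -> float:
    """v_esc = sqrt(2GM/r) for a homogeneous sphere of the given radius and density."""
    mass = (4.0 / 3.0) * math.pi * radius_m ** 3 * density_kg_m3
    return math.sqrt(2.0 * G * mass / radius_m)

# Assumed example: a 100 km radius body with bulk density 2000 kg/m^3 has an
# escape velocity of roughly 100 m/s, so accretion onto it would require
# impact speeds below a few hundred m/s.
print(escape_velocity(100e3, 2000.0))
```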
Accretion of comets
Comets, or their precursors, formed in the outer Solar System, possibly millions of years before planet formation. How and when comets formed is debated, with distinct implications for Solar System formation, dynamics, and geology. Three-dimensional computer simulations indicate the major structural features observed on cometary nuclei can be explained by pairwise low velocity accretion of weak cometesimals. The currently favored formation mechanism is that of the nebular hypothesis, which states that comets are probably a remnant of the original planetesimal "building blocks" from which the planets grew.
Astronomers think that comets originate in both the Oort cloud and the scattered disk. The scattered disk was created when Neptune migrated outward into the proto-Kuiper belt, which at the time was much closer to the Sun, and left in its wake a population of dynamically stable objects that could never be affected by its orbit (the Kuiper belt proper), and a population whose perihelia are close enough that Neptune can still disturb them as it travels around the Sun (the scattered disk). Because the scattered disk is dynamically active and the Kuiper belt relatively dynamically stable, the scattered disk is now seen as the most likely point of origin for periodic comets. The classic Oort cloud theory states that the Oort cloud, a sphere measuring about in radius, formed at the same time as the solar nebula and occasionally releases comets into the inner Solar System as a giant planet or star passes nearby and causes gravitational disruptions. Examples of such comet clouds may already have been seen in the Helix Nebula.
The Rosetta mission to comet 67P/Churyumov–Gerasimenko determined in 2015 that when the Sun's heat penetrates the surface, it triggers evaporation (sublimation) of buried ice. While some of the resulting water vapour may escape from the nucleus, 80% of it recondenses in layers beneath the surface. This observation implies that the thin ice-rich layers exposed close to the surface may be a consequence of cometary activity and evolution, and that global layering does not necessarily occur early in the comet's formation history. While most scientists had thought that all the evidence indicated that the structure of cometary nuclei is that of processed rubble piles of smaller ice planetesimals of a previous generation, the Rosetta mission confirmed the idea that comets are "rubble piles" of disparate material. Comets appear to have formed as ~100-km bodies, then to have been overwhelmingly ground down and recontacted into their present states.
See also
Quasi-star
References
Concepts in astrophysics
Celestial mechanics
Solar System dynamic theories | Accretion (astrophysics) | [
"Physics"
] | 3,087 | [
"Celestial mechanics",
"Classical mechanics",
"Astrophysics",
"Concepts in astrophysics"
] |
1,661,604 | https://en.wikipedia.org/wiki/Quebec%20%E2%80%93%20New%20England%20Transmission | The Quebec – New England Transmission (officially known in Quebec as the Réseau multiterminal à courant continu (RMCC) and also known as Phase I / Phase II and the Radisson - Nicolet - Des Cantons circuit, and known in New England as the Northern Pass) is a long-distance high-voltage direct current (HVDC) line between Radisson, Quebec and Westford Road in Ayer, Massachusetts. As of 2012, it remains one of only two Multi-terminal HVDC systems in the world (the other one being the Sardinia–Corsica–Italy system, completed in the same year) and is "the only multi-terminal bipole HVDC system in the world where three stations are interconnected and operate under a common master control system".
History
Initially, the Quebec – New England Transmission consisted of the section between the Des Cantons station near Windsor, Quebec and the Frank D. Comerford Dam near Monroe, New Hampshire, which, because of the asynchronous operation of the American and Québec power grids, had to be implemented as HVDC. This bipolar electricity transmission line, which is overhead for its whole length except the crossing of the Saint Lawrence River, went into service in 1986. It could transfer a maximum power of 690 megawatts. The operating voltage was ±450 kV, or 900 kV from line to line.
The line was planned to extend beyond the two terminals at Des Cantons and Comerford to the hydroelectric power plants of the La Grande Complex, in the James Bay region of Québec, and to the high consumption area around Boston, Massachusetts — specifically, to the north toward the converter station at Radisson Substation, and to the south to the converter station at Sandy Pond in Massachusetts. The transmission power was increased by extending the existing converter stations to 2,000 megawatts, with the value of the transmission voltage remaining unchanged at ±450 kV. For the connection of the Montreal area, a further converter station at Nicolet was put into service in 1992 with a transmission capacity of 2,000 megawatts.
The line crosses the Saint Lawrence River between Grondines and Lotbinière via a tunnel. Until the tunnel was built, the line crossed the river via an overhead lattice tower electricity pylon—portions of one of these towers would later be used as part of the observation tower at La Cité de l'Énergie in Shawinigan.
Failed Northern Pass initiative
In December 2008, Hydro-Québec, along with American utilities Northeast Utilities (parent company of Public Service of New Hampshire) and NSTAR (parent company of Boston Edison), created a joint venture to build a new HVDC line from Windsor, Quebec to Deerfield, New Hampshire, with an HVDC converter terminal intended to be built in Franklin, New Hampshire. Hydro-Québec would have owned the segment within Quebec, while the segment within the US would have been owned by Northern Pass Transmission LLC, a partnership between Northeast Utilities (75%) and NSTAR (25%). Estimated to cost US$1.1 billion to build, it was projected that the line would either run in existing right-of-way adjacent to the HVDC line that runs through New Hampshire, or it would have connected to a right-of-way in northern New Hampshire that runs through the White Mountains. This line, projected to carry 1,200 megawatts, would have brought electricity to approximately one million homes.
In order to go ahead, the project needed to receive regulatory approval in Quebec and the United States. The proposed transmission line could have been in operation in 2015. According to Jim Robb, a senior executive from Northeast Utilities, New England could have met one third of its Regional Greenhouse Gas Initiative commitments with the hydropower coming through this new power line alone.
In October 2010, Northeast Utilities announced that it would merge with NSTAR. In effect, Northern Pass Transmission would have become a wholly owned subsidiary of Northeast Utilities, which was renamed Eversource Energy in 2015.
The purchase of power from Hydro-Québec was an issue during the Massachusetts gubernatorial election of 2010.
In July 2019, Eversource issued a statement that the Northern Pass project was now "off the table" after investing $318 million over a decade to develop and promote the project.
New England Clean Energy Connect
Construction of the New England Clean Energy Connect, a similar project, started in February 2021.
Massachusetts pursued it as an option to bring Canadian hydropower through transmission lines in Maine, estimated to cost $1 billion. The citizens of Maine voted in a 2021 referendum to revoke the project's permit, forcing a halt to construction that was already underway. In August 2022, the Supreme Court of Maine ruled that the retroactive revocation of the permit was unconstitutional, but remanded the case to lower courts for further consideration.
Sites
Important waypoints of the line.
Radisson to Nicolet
Nicolet to Des Cantons
Des Cantons to Comerford
Comerford to Ayer
Des Cantons to Deerfield
The route listed here reflects the primary route as currently projected.
Grounding electrodes
The Quebec – New England Transmission has two grounding electrodes: one at Des Cantons and the other near the Radisson substation.
Opposition
2004 Hydro tower bombing
In 2004, shortly before U.S. President George W. Bush's visit to Canada, a tower along the Quebec–New England Transmission circuit in the Eastern Townships near the Canada–US border was damaged by explosive charges detonated at its base. The CBC reported that a message, purportedly from the Résistance internationaliste and issued to the La Presse and Le Journal de Montréal newspapers and CKAC radio, stated that the attack had been carried out to "denounce the 'pillaging' of Quebec's resources by the United States".
2015: Sierra Club of New Hampshire
In November 2015, the Sierra Club of New Hampshire expressed opposition to the new line, saying that it would benefit Connecticut and Massachusetts residents more than those in New Hampshire, and expressing concerns about the flooding of boreal forests during the construction of Hydro-Québec's dams in northern Quebec, disputes with the Innu First Nations, and the effects on tourism and the environment within the White Mountain National Forest.
2011-Present: Local government and community opposition
A coalition of New Hampshire communities and local government officials oppose the construction of the expanded transmission line. Elected representatives from New Hampshire's 10 counties have expressed opposition, including 114 officials in the New Hampshire House of Representatives and 5 members of the New Hampshire Senate. United States Congressional Representative Carol Shea-Porter and Senators Maggie Hassan and Jeanne Shaheen also oppose expansion of the line. Some of the incumbent power companies in New England oppose it, while other companies favor it.
Scientific analysis
A 2024 study by Gazar, Borsuk, and Calder used Bayesian network modeling of historical data (1979–2021) to investigate whether proposed expansions of transborder transmission capacity between Quebec and the northeastern United States lead directly to new hydropower reservoir construction in Quebec. Their analysis found that while increased transmission capacity can indirectly affect generation investments by facilitating exports, the primary drivers of hydropower expansion appear to be domestic demand in Quebec and electricity price differences relative to the United States, rather than the construction of new transmission corridors in and of itself. The authors concluded that environmental impacts of large reservoirs are therefore not necessarily a direct consequence of new transmission lines, especially if exports are settled primarily on the short-term market rather than through long-term power purchase agreements.
See also
Hydro-Québec
Hydro-Québec's electricity transmission system
Saint Lawrence River HVDC Powerline Crossing
Champlain Hudson Power Express
References
Bibliography
External links
The HVDC Transmission Québec - New England
Pictures of the Quebec - New England line in New England
Northern Pass Transmission
Bonneville Power Administration: Quebec - New England Transmission (via Internet Archive)
Bonneville Power Administration: Schematics of Quebec - New England Transmission system (via Internet Archive)
Electric power transmission systems in Canada
Energy in New England
Energy in Quebec
Electric power transmission systems in the United States
HVDC transmission lines
Hydro-Québec
James Bay Project | Quebec – New England Transmission | [
"Engineering"
] | 1,666 | [
"James Bay Project",
"Macro-engineering"
] |
1,664,427 | https://en.wikipedia.org/wiki/Mereotopology | In formal ontology, a branch of metaphysics, and in ontological computer science, mereotopology is a first-order theory, embodying mereological and topological concepts, of the relations among wholes, parts, parts of parts, and the boundaries between parts.
History and motivation
Mereotopology begins in philosophy with theories articulated by A. N. Whitehead in several books and articles he published between 1916 and 1929, drawing in part on the mereogeometry of De Laguna (1922). The first to have proposed the idea of a point-free definition of the concept of topological space in mathematics was Karl Menger in his book Dimensionstheorie (1928) -- see also his (1940). The early historical background of mereotopology is documented in Bélanger and Marquis (2013) and Whitehead's early work is discussed in Kneebone (1963: ch. 13.5) and Simons (1987: 2.9.1). The theory of Whitehead's 1929 Process and Reality augmented the part-whole relation with topological notions such as contiguity and connection. Despite Whitehead's acumen as a mathematician, his theories were insufficiently formal, even flawed. By showing how Whitehead's theories could be fully formalized and repaired, Clarke (1981, 1985) founded contemporary mereotopology. The theories of Clarke and Whitehead are discussed in Simons (1987: 2.10.2), and Lucas (2000: ch. 10). The entry Whitehead's point-free geometry includes two contemporary treatments of Whitehead's theories, due to Giangiacomo Gerla, each different from the theory set out in the next section.
Although mereotopology is a mathematical theory, we owe its subsequent development to logicians and theoretical computer scientists. Lucas (2000: ch. 10) and Casati and Varzi (1999: ch. 4,5) are introductions to mereotopology that can be read by anyone having done a course in first-order logic. More advanced treatments of mereotopology include Cohn and Varzi (2003) and, for the mathematically sophisticated, Roeper (1997). For a mathematical treatment of point-free geometry, see Gerla (1995). Lattice-theoretic (algebraic) treatments of mereotopology as contact algebras have been applied to separate the topological from the mereological structure, see Stell (2000), Düntsch and Winter (2004).
Applications
Barry Smith, Anthony Cohn, Achille Varzi and their co-authors have shown that mereotopology can be useful in formal ontology and computer science, by allowing the formalization of relations such as contact, connection, boundaries, interiors, holes, and so on. Mereotopology has been applied also as a tool for qualitative spatial-temporal reasoning, with constraint calculi such as the Region Connection Calculus (RCC). It provides the starting point for the theory of fiat boundaries developed by Smith and Varzi, which grew out of the attempt to distinguish formally between
boundaries (in geography, geopolitics, and other domains) which reflect more or less arbitrary human demarcations and
boundaries which reflect bona fide physical discontinuities (Smith 1995, 2001).
Mereotopology is being applied by Salustri in the domain of digital manufacturing (Salustri, 2002) and by Smith and Varzi to the formalization of basic notions of ecology and environmental biology (Smith and Varzi, 1999, 2002). It has been applied also to deal with vague boundaries in geography (Smith and Mark, 2003), and in the study of vagueness and granularity (Smith and Brogaard, 2002, Bittner and Smith, 2001, 2001a).
Preferred approach of Casati & Varzi
Casati and Varzi (1999: ch.4) set out a variety of mereotopological theories in a consistent notation. This section sets out several nested theories that culminate in their preferred theory GEMTC, and follows their exposition closely. The mereological part of GEMTC is the conventional theory GEM. Casati and Varzi do not say if the models of GEMTC include any conventional topological spaces.
We begin with some domain of discourse, whose elements are called individuals (a synonym for mereology is "the calculus of individuals"). Casati and Varzi prefer limiting the ontology to physical objects, but others freely employ mereotopology to reason about geometric figures and events, and to solve problems posed by research in machine intelligence.
An upper case Latin letter denotes both a relation and the predicate letter referring to that relation in first-order logic. Lower case letters from the end of the alphabet denote variables ranging over the domain; letters from the start of the alphabet are names of arbitrary individuals. If a formula begins with an atomic formula followed by the biconditional, the subformula to the right of the biconditional is a definition of the atomic formula, whose variables are unbound. Otherwise, variables not explicitly quantified are tacitly universally quantified. The axiom Cn below corresponds to axiom C.n in Casati and Varzi (1999: ch. 4).
We begin with a topological primitive, a binary relation called connection; the atomic formula Cxy denotes that "x is connected to y." Connection is governed, at minimum, by the axioms:
C1. Cxx (reflexive)
C2. Cxy → Cyx (symmetric)
Let E, the binary relation of enclosure, be defined as Exy ↔ ∀z (Czx → Czy).
Exy is read as "y encloses x" and is also topological in nature. A consequence of C1-2 is that E is reflexive and transitive, and hence a preorder. If E is also assumed to be extensional, then E can be proved antisymmetric and thus becomes a partial order.
then E can be proved antisymmetric and thus becomes a partial order. Enclosure, notated xKy, is the single primitive relation of the theories in Whitehead (1919, 1920), the starting point of mereotopology.
Let parthood be the defining primitive binary relation of the underlying mereology, and let the atomic formula Pxy denote that "x is part of y". We assume that P is a partial order. Call the resulting minimalist mereological theory M.
If x is part of y, we postulate that y encloses x:
C3. Pxy → Exy
C3 nicely connects mereological parthood to topological enclosure.
Let O, the binary relation of mereological overlap, be defined as Oxy ↔ ∃z (Pzx ∧ Pzy).
Let Oxy denote that "x and y overlap." With O in hand, a consequence of C3 is Oxy → Cxy.
Note that the converse does not necessarily hold. While things that overlap are necessarily connected, connected things do not necessarily overlap. If this were not the case, topology would merely be a model of mereology (in which "overlap" is always either primitive or defined).
Ground mereotopology (MT) is the theory consisting of primitive C and P, defined E and O, the axioms C1-3, and axioms assuring that P is a partial order. Replacing the M in MT with the standard extensional mereology GEM results in the theory GEMT.
Let IPxy denote that "x is an internal part of y." IP is defined as IPxy ↔ (Pxy ∧ ∀z (Czx → Ozy)).
Let σx φ(x) denote the mereological sum (fusion) of all individuals in the domain satisfying φ(x). σ is a variable binding prefix operator. The axioms of GEM assure that this sum exists if φ(x) is a first-order formula. With σ and the relation IP in hand, we can define the interior of x, as the mereological sum of all interior parts z of x, or:
ix =df σz IPzx.
Two easy consequences of this definition are iW = W,
where W is the universal individual, and
C5. P(ix)x (Inclusion)
The operator i has two more axiomatic properties:
C6. i(ix) = ix (Idempotence)
C7. i(a×b) = (ia)×(ib)
where a×b is the mereological product of a and b, not defined when Oab is false. i distributes over product.
It can now be seen that i is isomorphic to the interior operator of topology. Hence the dual of i, the topological closure operator c, can be defined in terms of i, and Kuratowski's axioms for c are theorems. Likewise, given an axiomatization of c that is analogous to C5-7, i may be defined in terms of c, and C5-7 become theorems. Adding C5-7 to GEMT results in Casati and Varzi's preferred mereotopological theory, GEMTC.
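One way to make this duality concrete, in a sketch that assumes the usual GEM complement ~x (defined whenever x is not the universal individual W), is to set cx =df ~i(~x); with that reading, for instance, idempotence of c follows directly from C6. This is one standard construction rather than the only possible one.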
x is self-connected if it satisfies the following predicate: SCx ↔ ∀y∀z ((x = y + z) → Cyz), where y + z is the binary mereological sum of y and z.
Note that the primitive and defined predicates of MT alone suffice for this definition. The predicate SC enables formalizing the necessary condition given in Whitehead's Process and Reality for the mereological sum of two individuals to exist: they must be connected. Formally:
C8.
Given some mereotopology X, adding C8 to X results in what Casati and Varzi call the Whiteheadian extension of X, denoted WX. Hence the theory whose axioms are C1-8 is WGEMTC.
The converse of C8 is a GEMTC theorem. Hence given the axioms of GEMTC, C is a defined predicate if O and SC are taken as primitive predicates.
If the underlying mereology is atomless and weaker than GEM, the axiom that assures the absence of atoms (P9 in Casati and Varzi 1999) may be replaced by C9, which postulates that no individual has a topological boundary:
C9.
When the domain consists of geometric figures, the boundaries can be points, curves, and surfaces. What boundaries could mean, given other ontologies, is not an easy matter and is discussed in Casati and Varzi (1999: ch. 5).
See also
Mereology
Pointless topology
Point-set topology
Topology
Topological space (with links to T0 through T6)
Whitehead's point-free geometry
Notes
References
Biacino L., and Gerla G., 1991, "Connection Structures," Notre Dame Journal of Formal Logic 32: 242–47.
Casati, Roberto, and Varzi, Achille, 1999. Parts and places: the structures of spatial representation. MIT Press.
Stell J. G., 2000, "Boolean connection algebras: A new approach to the Region-Connection Calculus," Artificial Intelligence 122: 111–136.
External links
Stanford Encyclopedia of Philosophy: Boundary—by Achille Varzi. With many references.
Mathematical axioms
Mereology
Ontology
Topology | Mereotopology | [
"Physics",
"Mathematics"
] | 2,207 | [
"Mathematical logic",
"Mathematical axioms",
"Topology",
"Space",
"Geometry",
"Spacetime"
] |
1,664,428 | https://en.wikipedia.org/wiki/UniProt | UniProt is a freely accessible database of protein sequence and functional information, many entries being derived from genome sequencing projects. It contains a large amount of information about the biological function of proteins derived from the research literature. It is maintained by the UniProt consortium, which consists of several European bioinformatics organisations and a foundation from Washington, DC, USA.
The UniProt consortium
The UniProt consortium comprises the European Bioinformatics Institute (EBI), the Swiss Institute of Bioinformatics (SIB), and the Protein Information Resource (PIR). EBI, located at the Wellcome Trust Genome Campus in Hinxton, UK, hosts a large resource of bioinformatics databases and services. SIB, located in Geneva, Switzerland, maintains the ExPASy (Expert Protein Analysis System) servers that are a central resource for proteomics tools and databases. PIR, hosted by the National Biomedical Research Foundation (NBRF) at the Georgetown University Medical Center in Washington, DC, US, is heir to the oldest protein sequence database, Margaret Dayhoff's Atlas of Protein Sequence and Structure, first published in 1965. In 2002, EBI, SIB, and PIR joined forces as the UniProt consortium.
The roots of the UniProt databases
Each consortium member is heavily involved in protein database maintenance and annotation. Until recently, EBI and SIB together produced the Swiss-Prot and TrEMBL databases, while PIR produced the Protein Sequence Database (PIR-PSD). These databases coexisted with differing protein sequence coverage and annotation priorities.
Swiss-Prot was created in 1986 by Amos Bairoch during his PhD and developed by the Swiss Institute of Bioinformatics; it was subsequently also developed by Rolf Apweiler at the European Bioinformatics Institute. Swiss-Prot aimed to provide reliable protein sequences associated with a high level of annotation (such as the description of the function of a protein, its domain structure, post-translational modifications, variants, etc.), a minimal level of redundancy and a high level of integration with other databases. Recognizing that sequence data were being generated at a pace exceeding Swiss-Prot's ability to keep up, TrEMBL (Translated EMBL Nucleotide Sequence Data Library) was created to provide automated annotations for those proteins not in Swiss-Prot. Meanwhile, PIR maintained the PIR-PSD and related databases, including iProClass, a database of protein sequences and curated families.
The consortium members pooled their overlapping resources and expertise, and launched UniProt in December 2003.
Organization of the UniProt databases
UniProt provides four core databases: UniProtKB (with sub-parts Swiss-Prot and TrEMBL), UniParc, UniRef and Proteome.
UniProtKB
UniProt Knowledgebase (UniProtKB) is a protein database partially curated by experts, consisting of two sections: UniProtKB/Swiss-Prot (containing reviewed, manually annotated entries) and UniProtKB/TrEMBL (containing unreviewed, automatically annotated entries). In release "2023_01", UniProtKB/Swiss-Prot contains 569,213 sequence entries (comprising 205,728,242 amino acids abstracted from 291,046 references) and UniProtKB/TrEMBL contains 245,871,724 sequence entries (comprising 85,739,380,194 amino acids).
UniProtKB/Swiss-Prot
UniProtKB/Swiss-Prot is a manually annotated, non-redundant protein sequence database. It combines information extracted from scientific literature and biocurator-evaluated computational analysis. The aim of UniProtKB/Swiss-Prot is to provide all known relevant information about a particular protein. Annotation is regularly reviewed to keep up with current scientific findings. The manual annotation of an entry involves detailed analysis of the protein sequence and of the scientific literature.
Sequences from the same gene and the same species are merged into the same database entry. Differences between sequences are identified, and their cause documented (for example alternative splicing, natural variation, incorrect initiation sites, incorrect exon boundaries, frameshifts, unidentified conflicts). A range of sequence analysis tools is used in the annotation of UniProtKB/Swiss-Prot entries. Computer-predictions are manually evaluated, and relevant results selected for inclusion in the entry. These predictions include post-translational modifications, transmembrane domains and topology, signal peptides, domain identification, and protein family classification.
Relevant publications are identified by searching databases such as PubMed. The full text of each paper is read, and information is extracted and added to the entry. Annotation arising from the scientific literature includes, but is not limited to:
Protein and gene names
Function
Enzyme-specific information such as catalytic activity, cofactors and catalytic residues
Subcellular location
Protein-protein interactions
Pattern of expression
Locations and roles of significant domains and sites
Ion-, substrate- and cofactor-binding sites
Protein variant forms produced by natural genetic variation, RNA editing, alternative splicing, proteolytic processing, and post-translational modification
Annotated entries undergo quality assurance before inclusion into UniProtKB/Swiss-Prot. When new data becomes available, entries are updated.
UniProtKB/TrEMBL
UniProtKB/TrEMBL contains high-quality computationally analyzed records, which are enriched with automatic annotation. It was introduced in response to increased dataflow resulting from genome projects, as the time- and labour-consuming manual annotation process of UniProtKB/Swiss-Prot could not be broadened to include all available protein sequences. The translations of annotated coding sequences in the EMBL-Bank/GenBank/DDBJ nucleotide sequence database are automatically processed and entered in UniProtKB/TrEMBL.
UniProtKB/TrEMBL also contains sequences from PDB, and from gene prediction, including Ensembl, RefSeq and CCDS. Since 22 July 2021 it also includes structures predicted with AlphaFold2.
UniParc
UniProt Archive (UniParc) is a comprehensive and non-redundant database, which contains all the protein sequences from the main, publicly available protein sequence databases. Proteins may exist in several different source databases, and in multiple copies in the same database. In order to avoid redundancy, UniParc stores each unique sequence only once. Identical sequences are merged, regardless of whether they are from the same or different species. Each sequence is given a stable and unique identifier (UPI), making it possible to identify the same protein from different source databases. UniParc contains only protein sequences, with no annotation. Database cross-references in UniParc entries allow further information about the protein to be retrieved from the source databases. When sequences in the source databases change, these changes are tracked by UniParc and history of all changes is archived.
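A minimal sketch of the sequence-level deduplication idea follows; the hashing scheme and "TOYPI" identifiers are hypothetical and are not UniParc's actual UPI assignment mechanism.

```python
import hashlib

class SequenceArchive:
    """Toy archive: each distinct sequence is stored once under a stable identifier,
    while cross-references from source databases accumulate on that single record."""

    def __init__(self):
        self._by_digest = {}   # sequence digest -> (toy identifier, set of cross-references)
        self._counter = 0

    def add(self, sequence: str, source_ref: str) -> str:
        digest = hashlib.sha256(sequence.encode()).hexdigest()
        if digest not in self._by_digest:
            self._counter += 1
            self._by_digest[digest] = (f"TOYPI{self._counter:010d}", set())
        toy_id, refs = self._by_digest[digest]
        refs.add(source_ref)   # identical sequences merge, whatever their origin
        return toy_id

archive = SequenceArchive()
a = archive.add("MKTAYIAKQR", "EMBL:X1")
b = archive.add("MKTAYIAKQR", "RefSeq:NP_0001")  # same sequence, different source database
assert a == b                                    # stored once, with two cross-references
```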
Source databases
Currently UniParc contains protein sequences from the following publicly available databases:
INSDC EMBL-Bank/DDBJ/GenBank nucleotide sequence databases
Ensembl
European Patent Office (EPO)
FlyBase: the primary repository of genetic and molecular data for the insect family Drosophilidae (FlyBase)
H-Invitational Database (H-Inv)
International Protein Index (IPI)
Japan Patent Office (JPO)
Protein Information Resource (PIR-PSD)
Protein Data Bank (PDB)
Protein Research Foundation (PRF)
RefSeq
Saccharomyces Genome Database (SGD)
The Arabidopsis Information Resource (TAIR)
TROME
US Patent Office (USPTO)
UniProtKB/Swiss-Prot, UniProtKB/Swiss-Prot protein isoforms, UniProtKB/TrEMBL
Vertebrate and Genome Annotation Database (VEGA)
WormBase
UniRef
The UniProt Reference Clusters (UniRef) consist of three databases of clustered sets of protein sequences from UniProtKB and selected UniParc records. The UniRef100 database combines identical sequences and sequence fragments (from any organism) into a single UniRef entry. The sequence of a representative protein, the accession numbers of all the merged entries and links to the corresponding UniProtKB and UniParc records are displayed. UniRef100 sequences are clustered using the CD-HIT algorithm to build UniRef90 and UniRef50. Each cluster is composed of sequences that have at least 90% or 50% sequence identity, respectively, to the longest sequence. Clustering sequences significantly reduces database size, enabling faster sequence searches.
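The sketch below illustrates representative-based clustering at an identity threshold, in the spirit of how UniRef90 and UniRef50 are built. It is not the CD-HIT algorithm itself (which relies on short-word filtering and banded alignment), and the sequences and identifiers are made up.

```python
def identity(a: str, b: str) -> float:
    """Crude identity: fraction of matching positions over the shorter length."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def greedy_cluster(seqs: dict[str, str], threshold: float) -> dict[str, list[str]]:
    """Longest sequences are considered first; each sequence joins the first cluster
    whose representative it matches at >= threshold, otherwise it founds a new cluster."""
    reps: dict[str, list[str]] = {}  # representative id -> member ids
    for sid, seq in sorted(seqs.items(), key=lambda kv: len(kv[1]), reverse=True):
        for rep_id, members in reps.items():
            if identity(seqs[rep_id], seq) >= threshold:
                members.append(sid)
                break
        else:
            reps[sid] = [sid]        # the sequence becomes its own representative
    return reps

toy = {
    "P1": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "P2": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",  # ~97% identical to P1
    "P3": "MSTNPKPQRKTKRNTNRRPQDVKFPGG",
}
print(greedy_cluster(toy, 0.90))  # {'P1': ['P1', 'P2'], 'P3': ['P3']}
```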
UniRef is available from the UniProt FTP site.
Funding
UniProt is funded by grants from the National Human Genome Research Institute, the National Institutes of Health (NIH), the European Commission, the Swiss Federal Government through the Federal Office of Education and Science, NCI-caBIG, and the US Department of Defense.
References
External links
UniProt
Protein databases
Online databases
Proteomics
Science and technology in Cambridgeshire
South Cambridgeshire District
Bioinformatics
Computational biology | UniProt | [
"Engineering",
"Biology"
] | 1,970 | [
"Bioinformatics",
"Biological engineering",
"Computational biology"
] |
5,785,677 | https://en.wikipedia.org/wiki/Landau%27s%20problems | At the 1912 International Congress of Mathematicians, Edmund Landau listed four basic problems about prime numbers. These problems were characterised in his speech as "unattackable at the present state of mathematics" and are now known as Landau's problems. They are as follows:
Goldbach's conjecture: Can every even integer greater than 2 be written as the sum of two primes?
Twin prime conjecture: Are there infinitely many primes p such that p + 2 is prime?
Legendre's conjecture: Does there always exist at least one prime between consecutive perfect squares?
Are there infinitely many primes p such that p − 1 is a perfect square? In other words: Are there infinitely many primes of the form n² + 1?
All four problems remain unresolved.
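Although none of the conjectures is settled, each is easy to probe numerically over small ranges. The sketch below, with bounds chosen arbitrarily for illustration, checks the first, third and fourth problems by brute force and enumerates twin primes for the second.

```python
def is_prime(n: int) -> bool:
    """Trial division; adequate for the small ranges used here."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# 1. Goldbach: every even n in [4, 10000) is a sum of two primes.
assert all(any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))
           for n in range(4, 10_000, 2))

# 2. Twin primes below 10000.
twins = [(p, p + 2) for p in range(2, 10_000) if is_prime(p) and is_prime(p + 2)]

# 3. Legendre: at least one prime between n^2 and (n+1)^2 for 1 <= n < 100.
assert all(any(is_prime(k) for k in range(n * n, (n + 1) * (n + 1)))
           for n in range(1, 100))

# 4. Primes of the form n^2 + 1 with n < 100.
near_square = [n * n + 1 for n in range(1, 100) if is_prime(n * n + 1)]

print(len(twins), near_square[:6])  # near_square begins 2, 5, 17, 37, 101, 197
```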
Progress toward solutions
Goldbach's conjecture
Goldbach's weak conjecture, that every odd number greater than 5 can be expressed as the sum of three primes, is a consequence of Goldbach's conjecture. Ivan Vinogradov proved it for large enough n (Vinogradov's theorem) in 1937, and Harald Helfgott extended this to a full proof of Goldbach's weak conjecture in 2013.
Chen's theorem, another weakening of Goldbach's conjecture, proves that every sufficiently large even number n can be written as n = p + q, where p is prime and q is either prime or semiprime. Bordignon, Johnston, and Starichkova, correcting and improving on Yamada, proved an explicit version of Chen's theorem: every even number greater than is the sum of a prime and a product of at most two primes. Bordignon and Starichkova reduce this to assuming the Generalized Riemann hypothesis (GRH) for Dirichlet L-functions. Johnston and Starichkova give a version working for all n ≥ 4 at the cost of using a number which is the product of at most 369 primes rather than a prime or semiprime; under GRH they improve 369 to 33.
Montgomery and Vaughan showed that the exceptional set of even numbers not expressible as the sum of two primes has density zero, although the set is not proven to be finite. The best current bounds on the exceptional set are due to Pintz (for large enough x) and, under RH, to Goldston.
Linnik proved that every large enough even number can be expressed as the sum of two primes and at most K powers of 2, for some (ineffective) constant K. Following many advances (see Pintz for an overview), Pintz and Ruzsa improved this to K = 8. Assuming the GRH, this can be improved to K = 7.
Twin prime conjecture
In 2013 Yitang Zhang showed that there are infinitely many prime pairs with gap bounded by 70 million, and this result has been improved to gaps of length 246 by a collaborative effort of the Polymath Project. Under the generalized Elliott–Halberstam conjecture this was improved to 6, extending earlier work by Maynard and Goldston, Pintz and Yıldırım.
In 1966 Chen showed that there are infinitely many primes p (later called Chen primes) such that p + 2 is either a prime or a semiprime.
Legendre's conjecture
It suffices to check that each prime gap starting at p is smaller than . A table of maximal prime gaps shows that the conjecture holds up to 2⁶⁴ ≈ 1.8 × 10¹⁹. A counterexample near that size would require a prime gap a hundred million times the size of the average gap.
Järviniemi, improving on work by Heath-Brown and by Matomäki, shows that there are at most exceptional primes followed by gaps larger than .
A result due to Ingham shows that there is a prime between n³ and (n + 1)³ for every large enough n.
Near-square primes
Landau's fourth problem asked whether there are infinitely many primes which are of the form n² + 1 for integer n. (The list of known primes of this form is .) The existence of infinitely many such primes would follow as a consequence of other number-theoretic conjectures such as the Bunyakovsky conjecture and Bateman–Horn conjecture. The problem remains open.
One example of near-square primes is the Fermat primes. Henryk Iwaniec showed that there are infinitely many numbers of the form with at most two prime factors. Ankeny and Kubilius proved that, assuming the extended Riemann hypothesis for L-functions on Hecke characters, there are infinitely many primes of the form with . Landau's conjecture is for the stronger . The best unconditional result is due to Harman and Lewis and it gives .
Merikoski, improving on previous works, showed that there are infinitely many numbers of the form with greatest prime factor at least . Replacing the exponent with 2 would yield Landau's conjecture.
The Friedlander–Iwaniec theorem shows that infinitely many primes are of the form a² + b⁴.
Baier and Zhao prove that there are infinitely many primes of the form with ; the exponent can be improved to under the Generalized Riemann Hypothesis for L-functions and to under a certain Elliott-Halberstam type hypothesis.
The Brun sieve establishes an upper bound on the density of primes having the form : there are such primes up to . Hence almost all numbers of the form are composite.
See also
List of unsolved problems in mathematics
Hilbert's problems
Notes
References
External links
Conjectures about prime numbers
Unsolved problems in number theory | Landau's problems | [
"Mathematics"
] | 1,134 | [
"Unsolved problems in mathematics",
"Mathematical problems",
"Number theory",
"Unsolved problems in number theory"
] |
5,786,821 | https://en.wikipedia.org/wiki/Rothe%E2%80%93Hagen%20identity | In mathematics, the Rothe–Hagen identity is a mathematical identity valid for all complex numbers () except where its denominators vanish:
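In the form in which the identity is usually quoted, with x, y, z complex parameters and n a nonnegative integer, it reads

\sum_{k=0}^{n} \frac{x}{x+kz}\binom{x+kz}{k}\,\frac{y}{y+(n-k)z}\binom{y+(n-k)z}{n-k} = \frac{x+y}{x+y+nz}\binom{x+y+nz}{n}.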
It is a generalization of Vandermonde's identity, and is named after Heinrich August Rothe and Johann Georg Hagen.
References
Factorial and binomial topics
Algebraic identities
Complex analysis | Rothe–Hagen identity | [
"Mathematics"
] | 91 | [
"Factorial and binomial topics",
"Algebraic identities",
"Applied mathematics",
"Combinatorics",
"Applied mathematics stubs",
"Mathematical identities"
] |
5,787,012 | https://en.wikipedia.org/wiki/Lerche%E2%80%93Newberger%20sum%20rule | The Lerche–Newberger, or Newberger, sum rule, discovered by B. S. Newberger in 1982, finds the sum of certain infinite series involving Bessel functions Jα of the first kind.
It states that if μ is any non-integer complex number, , and Re(α + β) > −1, then
Newberger's formula generalizes a formula of this type proven by Lerche in 1966; Newberger discovered it independently. Lerche's formula has γ =1; both extend a standard rule for the summation of Bessel functions, and are useful in plasma physics.
References
Special functions
Mathematical identities | Lerche–Newberger sum rule | [
"Mathematics"
] | 135 | [
"Mathematical analysis",
"Special functions",
"Mathematical analysis stubs",
"Combinatorics",
"Mathematical problems",
"Mathematical identities",
"Mathematical theorems",
"Algebra"
] |
5,787,851 | https://en.wikipedia.org/wiki/Casein%20kinase%202 | Casein kinase 2 (CK2/CSNK2) is a serine/threonine-selective protein kinase that has been implicated in cell cycle control, DNA repair, regulation of the circadian rhythm, and other cellular processes. De-regulation of CK2 has been linked to tumorigenesis as a potential protection mechanism for mutated cells. Proper CK2 function is necessary for survival of cells as no knockout models have been successfully generated.
Structure
CK2 typically appears as a tetramer of two catalytic α subunits (the α isoform being 42 kDa and the α’ isoform 38 kDa) and two β subunits, each of 28 kDa. The β regulatory subunit has only one isoform, so the tetramer always contains two copies of β. The catalytic subunits occur as α or α’ and can combine either as homodimers (α & α, or α’ & α’) or as a heterodimer (α & α’). Other β isoforms have been found in other organisms but not in humans.
The α subunits do not require the β regulatory subunits to function, which allows dimers of the catalytic domains to form independently of β subunit transcription. The presence of these α subunits does have an effect on the phosphorylation targets of CK2. A functional difference between α and α’ has been found, but the exact nature of the differences is not yet fully understood. An example is that caspase 3 is preferentially phosphorylated by α’-based tetramers over α-based tetramers.
Function
CK2 is a protein kinase responsible for phosphorylation of substrates in various pathways within a cell; ATP or GTP can be used as the phosphate source. CK2 has a dual functionality, with involvement in cell growth/proliferation and in suppression of apoptosis. CK2's anti-apoptotic function lies in the continuation of the cell cycle through the G1/S and G2/M checkpoints. This function is achieved by protecting proteins from caspase-mediated apoptosis via phosphorylation of sites adjacent to the caspase cleavage site, blocking the activity of caspase proteins. CK2 also protects from drug-induced apoptosis via similar methods, but this is not as well understood. Knockdown studies of both α and α’ sub-units have been used to verify this anti-apoptotic function.
Important phosphorylation events also regulated by CK2 are found in DNA damage repair pathways, and multiple stress-signaling pathways. Examples are phosphorylation of p53 or MAPK, which both regulate many interactions within their respective cellular pathways.
Another indication of the separate functions of the α subunits is that mice lacking CK2α’ have a defect in the morphology of developing sperm.
Regulation
Although the targets of CK2 are predominantly nucleus-based, the protein itself is localized to both the nucleus and the cytoplasm. Casein kinase 2 activity has been reported to increase following activation of the Wnt signaling pathway. A Pertussis toxin-sensitive G protein and Dishevelled appear to be intermediaries between Wnt-mediated activation of the Frizzled receptor and activation of CK2. Further studies need to be done on the regulation of this protein due to the complexity of CK2 function and localization.
Phosphorylation of CK2α T344 has been shown to inhibit its proteasomal degradation and support binding to Pin1. O-GlcNAcylation at S347 antagonizes this phosphorylation and accelerates CK2α degradation. O-GlcNAcylation of CK2α has also been shown to alter the phosphoproteome, notably including many chromatin regulators such as HDAC1, HDAC2, and HCFC1.
Role in tumorigenesis
Among the array of substrates that can be altered by CK2, many have been found at increased prevalence in cancers of the breast, lung, colon, and prostate. An increased concentration of substrates in cancerous cells implies a likely survival benefit to the cell, and activation of many of these substrates requires CK2. In addition, the anti-apoptotic function of CK2 allows the cancerous cell to escape cell death and continue proliferating. Its roles in cell cycle regulation may also indicate that CK2 allows cell cycle progression when it would normally be halted. This also promotes CK2 as a possible therapeutic target for cancer drugs. When combined with other potent anti-cancer therapies, a CK2 inhibitor may increase the effectiveness of the other therapy by allowing drug-induced apoptosis to occur at a normal rate.
Role in viral infection
In SARS-CoV-2 (COVID-19) infected Caco-2 cells, the kinase activity of CK2 is increased, resulting in phosphorylation of several cytoskeletal proteins. These infected cells also display CK2-containing filopodia protrusions associated with budding viral particles. Hence the protrusions may assist the virus in infecting adjacent cells. In these same cells, the CK2 inhibitor silmitasertib displayed potent antiviral activity. Senhwa Biosciences and the US National Institutes of Health have announced that they will evaluate the efficacy of silmitasertib in treating COVID-19 infections.
Protein subunits
See also
CSNK2A1
CSNK2A2
Casein kinase 1 — a distinct protein kinase family
References
Signal transduction
Protein kinases | Casein kinase 2 | [
"Chemistry",
"Biology"
] | 1,162 | [
"Biochemistry",
"Neurochemistry",
"Signal transduction"
] |
5,788,287 | https://en.wikipedia.org/wiki/Breaking%20wave | In fluid dynamics and nautical terminology, a breaking wave or breaker is a wave with enough energy to "break" at its peak, reaching a critical level at which linear energy transforms into wave turbulence energy with a distinct forward curve. At this point, simple physical models that describe wave dynamics often become invalid, particularly those that assume linear behaviour.
The most generally familiar sort of breaking wave is the breaking of water surface waves on a coastline. Wave breaking generally occurs where the amplitude reaches the point that the crest of the wave actually overturns. Certain other effects in fluid dynamics have also been termed "breaking waves", partly by analogy with water surface waves. In meteorology, atmospheric gravity waves are said to break when the wave produces regions where the potential temperature decreases with height, leading to energy dissipation through convective instability; likewise, Rossby waves are said to break when the potential vorticity gradient is overturned. Wave breaking also occurs in plasmas, when the particle velocities exceed the wave's phase speed. Another application in plasma physics is plasma expansion into a vacuum, in which the process of wave breaking and the subsequent development of a fast ion peak is described by the Sack-Schamel equation.
A reef or spot of shallow water such as a shoal against which waves break may also be known as a breaker.
Types
Breaking of water surface waves may occur anywhere that the amplitude is sufficient, including in mid-ocean. However, it is particularly common on beaches because wave heights are amplified in the region of shallower water (because the group velocity is lower there). See also waves and shallow water.
There are four basic types of breaking water waves. They are spilling, plunging, collapsing, and surging.
Spilling breakers
When the ocean floor has a gradual slope, the wave will steepen until the crest becomes unstable, resulting in turbulent whitewater spilling down the face of the wave. This continues as the wave approaches the shore, and the wave's energy is slowly dissipated in the whitewater. Because of this, spilling waves break for a longer time than other waves, and create a relatively gentle wave. Onshore wind conditions make spillers more likely.
Plunging breakers
A plunging wave occurs when the ocean floor is steep or has sudden depth changes, such as from a reef or sandbar. The crest of the wave becomes much steeper than a spilling wave, becomes vertical, then curls over and drops onto the trough of the wave, releasing most of its energy at once in a relatively violent impact. A plunging wave breaks with more energy than a significantly larger spilling wave. The wave can trap and compress the air under the lip, which creates the "crashing" sound associated with waves. With large waves, this crash can be felt by beachgoers on land. Offshore wind conditions can make plungers more likely.
If a plunging wave is not parallel to the beach (or the ocean floor), the section of the wave which reaches shallow water will break first, and the breaking section (or curl) will move laterally across the face of the wave as the wave continues. This is the "tube" that is so highly sought after by surfers (also called a "barrel", a "pit", and "the greenroom", among other terms). The surfer tries to stay near or under the crashing lip, often trying to stay as "deep" in the tube as possible while still being able to shoot forward and exit the barrel before it closes. A plunging wave that is parallel to the beach can break along its whole length at once, rendering it unrideable and dangerous. Surfers refer to these waves as "closed out".
Collapsing
Collapsing waves are a cross between plunging and surging, in which the crest never fully breaks, yet the bottom face of the wave gets steeper and collapses, resulting in foam.
Surging
Surging breakers originate from long period, low steepness waves and/or steep beach profiles. The outcome is the rapid movement of the base of the wave up the swash slope and the disappearance of the wave crest. The front face and crest of the wave remain relatively smooth with little foam or bubbles, resulting in a very narrow surf zone, or no breaking waves at all. The short, sharp burst of wave energy means that the swash/backwash cycle completes before the arrival of the next wave, leading to a low value of Kemp's phase difference (< 0.5). Surging waves are typical of reflective beach states. On steeper beaches, the energy of the wave can be reflected by the bottom back into the ocean, causing standing waves.
Physics
During breaking, a deformation (usually a bulge) forms at the wave crest, the leading side of which is known as the "toe". Parasitic capillary waves are formed, with short wavelengths. Those above the "toe" tend to have much longer wavelengths. This theory is anything but perfect, however, as it is linear. There have been a couple of non-linear theories of wave motion. One put forth uses a perturbation method to expand the description all the way to the third order, and better solutions have been found since then. As for wave deformation, methods much like the boundary integral method and the Boussinesq model have been created.
It has been found that high-frequency detail present in a breaking wave plays a part in crest deformation and destabilization. The same theory expands on this, stating that the valleys of the capillary waves create a source for vorticity. It is said that surface tension (and viscosity) are significant for waves up to about in wavelength.
These models are flawed, however, as they cannot take into account what happens to the water after the wave breaks. The eddies that form after breaking and the turbulence created by the breaking remain mostly un-researched. Understandably, it might be difficult to glean predictable results from the ocean.
After the tip of the wave overturns and the jet collapses, it creates a very coherent and defined horizontal vortex. The plunging breakers create secondary eddies down the face of the wave. Small horizontal random eddies that form on the sides of the wave suggest that, perhaps, prior to breaking, the water's velocity is more or less two dimensional. This becomes three dimensional upon breaking.
The main vortex along the front of the wave diffuses rapidly into the interior of the wave after breaking, as the eddies on the surface become more viscous. Advection and molecular diffusion play a part in stretching the vortex and redistributing the vorticity, as well as in the formation of turbulence cascades. The energy of the large vortices is, by this method, transferred to much smaller isotropic vortices.
Experiments have been conducted to deduce the evolution of turbulence after break, both in deep water and on a beach.
See also
References
External links
Oceans and margins, Earth Science Australia
Water waves
Articles containing video clips | Breaking wave | [
"Physics",
"Chemistry"
] | 1,424 | [
"Water waves",
"Waves",
"Physical phenomena",
"Fluid dynamics"
] |
5,790,527 | https://en.wikipedia.org/wiki/Li%20Kui%20%28legalist%29 | Li Kui (455–395 BC) was a Chinese hydraulic engineer, philosopher, and politician. He served as government minister and court advisor to Marquis Wen (r. 403–387 BC) in the state of Wei. In 407 BC, he wrote the Book of Law (Fajing, 法经). Said to have been a main influence on Shang Yang, it served as the basis for the codified laws of the Qin and Han dynasties.
His political agendas, as well as the Book of Law, had a deep influence on later thinkers such as Han Feizi and Shang Yang, who would later develop the philosophy of Legalism based on Li Kui's reforms.
Life and reforms
Li Kui was in the service of the Marquis Wen of Wei even before the state of Wei was officially recognized, though little else is known of his early life. He was appointed as Chancellor of the Wei-controlled lands in 422 BC, in order to begin administrative and political reforms; Wei would therefore be the first of the Seven Warring States to embark on the creation of a bureaucratic, rather than a noble-dominated, form of government.
The main agendas of Li Kui's reforms included:
The institution of meritocracy, rather than inheritance, as the key principle for the selection of officials. By doing this, Li Kui undermined the nobility while enhancing the effectiveness of government. He was responsible for recommending Ximen Bao to oversee Wei's water conservancy projects in the vicinity of Ye, and recommending Wu Qi as a military commander when Wu Qi sought asylum in Wei.
Giving the state an active role in encouraging agriculture, by 'maximising instruction and agricultural productivity' (盡地力之教). While the precise contents of this reform are unclear, they could have included the spreading of information about agricultural practices, thus encouraging more productive methods of farming.
Instituting the 'Law of Equalising Purchases' (平籴法), wherein the state purchased grain to fill its granaries in years of good harvest, to ease price fluctuations and serve as a guarantee against famine.
Codifying the laws of the state, thus creating the Book of Law. The text was in turn subdivided, with laws dealing with theft, banditry, procedures of arrest and imprisonment, and miscellaneous criminal activities.
Legacy
The direct result of these pioneering reform measures was the dominance of Wei in the early decades of the Warring States era. Leveraging its improved economy, Wei achieved considerable military successes under Marquis Wen, including victories against the states of Qin between 413 and 409 BC, Qi in 404 BC, and joint expeditions against Chu with Zhao and Han as its allies.
At the same time, the main tenets of Li Kui's reforms - supporting law over ritual, agrarian production, meritocratic and bureaucratic government and an active role of the state in economic and social affairs - proved an inspiration for later generations of reform-minded thinkers. When Shang Yang sought service in Qin, three decades after Li Kui's death, he brought with him a copy of the Book of Law, which was eventually adapted and became the legal code of Qin.
Along with his contemporary Ximen Bao, he was given oversight in construction of canal and irrigation projects in the State of Wei.
See also
Warring States
Sunshu Ao
Notes
References
Zhang, Guohua, "Li Kui". Encyclopedia of China (Law Edition), 1st ed.
Needham, Joseph (1986). Science and Civilization in China: Volume 4, Part 3. Taipei: Caves Books, Ltd.
455 BC births
395 BC deaths
5th-century BC Chinese philosophers
4th-century BC Chinese people
4th-century BC Chinese philosophers
Chinese hydrologists
Chinese reformers
Engineers from Shanxi
Hydraulic engineering
People from Yuncheng
Philosophers of law
Philosophers from Shanxi
Politicians from Shanxi
Legalism (Chinese philosophy)
Writers from Shanxi
Zhou dynasty essayists
Zhou dynasty philosophers
Zhou dynasty government officials | Li Kui (legalist) | [
"Physics",
"Engineering",
"Environmental_science"
] | 800 | [
"Hydrology",
"Physical systems",
"Hydraulics",
"Civil engineering",
"Hydraulic engineering"
] |
5,791,186 | https://en.wikipedia.org/wiki/Bone%20density | Bone density, or bone mineral density, is the amount of bone mineral in bone tissue. The concept is of mass of mineral per volume of bone (relating to density in the physics sense), although clinically it is measured by proxy according to optical density per square centimetre of bone surface upon imaging. Bone density measurement is used in clinical medicine as an indirect indicator of osteoporosis and fracture risk. It is measured by a procedure called densitometry, often performed in the radiology or nuclear medicine departments of hospitals or clinics. The measurement is painless and non-invasive and involves low radiation exposure. Measurements are most commonly made over the lumbar spine and over the upper part of the hip. The forearm may be scanned if the hip and lumbar spine are not accessible.
There is a statistical association between poor bone density and higher probability of fracture. Fractures of the legs and pelvis due to falls are a significant public health problem, especially in elderly women, leading to substantial medical costs, inability to live independently and even risk of death. Bone density measurements are used to screen people for osteoporosis risk and to identify those who might benefit from measures to improve bone strength.
Testing
A bone density test may detect osteoporosis or osteopenia. The usual response to either of these indications is consultation with a physician. Bone density tests are not recommended for people without risk factors for weak bones, since testing in that group is more likely to result in unnecessary treatment than in the discovery of a real problem.
Indications for testing
The risk factors for low bone density and primary considerations for a bone density test include:
females age 65 or older.
males age 70 or older.
people over age 50 with:
previous bone fracture from minor trauma.
rheumatoid arthritis.
low body weight.
a parent with a hip fracture.
individuals with vertebral abnormalities.
individuals receiving, or planning to receive, long-term glucocorticoid (steroid) therapy.
individuals with primary hyperparathyroidism.
individuals being monitored to assess the response or efficacy of an approved osteoporosis drug therapy.
when androgen deprivation therapy is being planned for prostate cancer.
individuals with a history of eating disorders.
Other considerations that are related to risk of low bone density and the need for a test include smoking habits, drinking habits, the long-term use of corticosteroid drugs, and a vitamin D deficiency.
Test result terms
Results of the test are reported in three forms:
Measured areal density in g cm−2.
Z-score: the number of standard deviations above or below the mean for the patient's age, sex and ethnicity.
T-score: the number of standard deviations above or below the mean for a healthy 30-year-old adult of the same sex and ethnicity as the patient.
Types of tests
While there are many types of bone mineral density tests, all are non-invasive. The tests differ according to which bones are measured to determine the test result.
These tests include:
Dual-energy X-ray absorptiometry (DXA or DEXA)
Trabecular bone score
Dual X-ray Absorptiometry and Laser (DXL)
Quantitative computed tomography (QCT)
Quantitative ultrasound (QUS)
Single photon absorptiometry (SPA)
Dual photon absorptiometry (DPA)
Digital X-ray radiogrammetry (DXR)
Single energy X-ray absorptiometry (SEXA)
DXA is the most commonly used testing method. The DXA test works by measuring a specific bone or bones, usually the spine, hip, and wrist. The density of these bones is then compared with an average index based on age, sex, and size. The resulting comparison is used to determine the risk for fractures and the stage of osteoporosis (if any) in an individual.
Quantitative ultrasound (QUS) has been described as a more cost-effective approach for measuring bone density, as compared to DXA.
Average areal bone mineral density = BMC / W [g/cm2]
where BMC = bone mineral content (in g/cm)
and W = width of the scanned line (in cm)
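The calculation above is simple division of the two scan quantities. The following is a minimal sketch of that arithmetic; the numeric values are illustrative placeholders, not real patient data.

```python
# Minimal sketch of the areal bone-mineral-density calculation described above.
# The numeric values are illustrative only, not real patient data.

def areal_bmd(bmc_g_per_cm: float, width_cm: float) -> float:
    """Average areal BMD in g/cm^2 from bone mineral content per scanned
    line (g/cm) and the width of the scanned line (cm)."""
    return bmc_g_per_cm / width_cm

print(areal_bmd(bmc_g_per_cm=3.9, width_cm=4.0))  # ~0.975 g/cm^2
```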
Interpretation
Results are generally scored by two measures, the T-score and the Z-score. Scores indicate the amount one's bone mineral density varies from the mean. Negative scores indicate lower bone density, and positive scores indicate higher.
Less than 0.5% of patients who underwent DXA scanning were found to have a T- or Z-score of more than +4.0, usually indicative of unusually high bone mass (HBM), which is associated with mild skeletal dysplasia and an inability to float in water.
T-score
The T-score is the relevant measure when screening for osteoporosis. It is the bone mineral density at the site when compared to the "young normal reference mean". It is a comparison of a patient's bone mineral density to that of a healthy 30-year-old. The US standard is to use data for a 30-year-old of the same sex and ethnicity, but the WHO recommends using data for a 30-year-old white female for everyone. Values for 30-year-olds are used in post-menopausal women and men over age 50 because they better predict risk of future fracture. The criteria of the World Health Organization are:
Normal is a T-score of −1.0 or higher
Osteopenia is defined as between −1.0 and −2.5
Osteoporosis is defined as −2.5 or lower, meaning a bone density that is two and a half standard deviations below the mean of a 30-year-old man/woman.
Z-score
The Z-score for bone density is the comparison to the "age-matched normal" and is usually used in cases of severe osteoporosis. This is the standard score or number of standard deviations a patient's bone mineral density differs from the average for their age, sex, and ethnicity. This value is used in premenopausal women, men under the age of 50, and in children and adolescents. It is most useful when the score is more than 2 standard deviations below this normal (a Z-score below −2.0). In this setting, it is helpful to scrutinize for coexisting illnesses or treatments that may contribute to osteoporosis such as glucocorticoid therapy, hyperparathyroidism, or alcoholism.
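Because both scores are simply standard deviations from a reference mean, they are easy to compute once a reference database is available. The sketch below illustrates the T- and Z-score arithmetic and the WHO classification; the reference mean and standard deviation are hypothetical placeholders, since real values come from manufacturer- and population-specific reference databases.

```python
# Illustrative T-score / Z-score calculation and WHO classification.
# The reference means and standard deviations below are hypothetical placeholders.

def score(measured_bmd, reference_mean, reference_sd):
    """Number of standard deviations the measurement lies from the reference mean."""
    return (measured_bmd - reference_mean) / reference_sd

def who_classification(t_score):
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

t = score(measured_bmd=0.78, reference_mean=1.00, reference_sd=0.11)  # vs. healthy 30-year-old
z = score(measured_bmd=0.78, reference_mean=0.85, reference_sd=0.11)  # vs. age-matched mean
print(round(t, 1), round(z, 1), who_classification(t))  # -2.0 -0.6 osteopenia
```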
Prevention
To prevent low bone density it is recommended to have sufficient calcium and vitamin D. Sufficient calcium is defined as 1,000 mg per day, increasing to 1,200 mg for women above 50 and men above 70. Sufficient vitamin D is defined as 600 IUs per day for adults 19 to 70, increasing to 800 IUs per day for those over 71. Exercise, especially weight-bearing and resistance exercise, is most effective for building bone. Weight-bearing exercise includes walking, jogging, dancing, and hiking. Resistance exercise is often accomplished through lifting weights. Other therapies, such as estrogens (e.g., estradiol, conjugated estrogens), selective estrogen receptor modulators (e.g., raloxifene, bazedoxifene), and bisphosphonates (e.g., alendronic acid, risedronic acid), can also be used to improve or maintain bone density. Tobacco use and excessive alcohol consumption have detrimental effects on bone density. Excessive alcohol consumption is defined as more than one standard-sized alcoholic beverage per day for women and two or more alcoholic beverages per day for men.
Genetics
Bone mineral density is highly variable between individuals. While there are many environmental factors that affect bone mineral density, genetic factors play the largest role. Bone mineral density variation has been estimated to have 0.6–0.8 heritability factor, meaning that 60–80% of its variation is inherited from parents. Because of the heritability of bone mineral density, family history of fractures is considered as a risk factor for osteoporosis. Bone mineral density is polygenic and many of the genetic mechanisms remain poorly understood.
Genetic diseases associated with bone mineral density
There are several rare genetic diseases that have been associated with pathologic changes in bone mineral density. The table summarizes these diseases:
References
Mass density
Bones | Bone density | [
"Physics"
] | 1,736 | [
"Mechanical quantities",
"Physical quantities",
"Mass",
"Intensive quantities",
"Volume-specific quantities",
"Density",
"Mass density",
"Matter"
] |
5,791,679 | https://en.wikipedia.org/wiki/Mechanical%20singularity | In engineering, a mechanical singularity is a position or configuration of a mechanism or a machine where the subsequent behaviour cannot be predicted, or the forces or other physical quantities involved become infinite or nondeterministic.
When the underlying engineering equations of a mechanism or machine are evaluated at the singular configuration (if any exists), then those equations exhibit mathematical singularity.
Examples of mechanical singularities are gimbal lock and in static mechanical analysis, an under-constrained system.
Types of singularities
There are three types of singularities that can be found in mechanisms: direct-kinematics singularities, inverse-kinematics singularities, and combined singularities. These singularities occur when one or both Jacobian matrices of the mechanism become singular or rank-deficient. The relationship between the input and output velocities of the mechanism is defined by the following general equation:
$\mathbf{J}_x\,\dot{\mathbf{x}} = \mathbf{J}_q\,\dot{\mathbf{q}}$
where $\dot{\mathbf{x}}$ is the vector of output velocities, $\dot{\mathbf{q}}$ is the vector of input velocities, $\mathbf{J}_x$ is the direct-kinematics Jacobian, and $\mathbf{J}_q$ is the inverse-kinematics Jacobian.
Type-I: Inverse-kinematics singularities
This first kind of singularity occurs when the inverse-kinematics Jacobian becomes singular, i.e. when $\det(\mathbf{J}_q) = 0$.
Type-II: Direct-kinematics singularities
This second kind of singularity occurs when the direct-kinematics Jacobian becomes singular, i.e. when $\det(\mathbf{J}_x) = 0$.
Type-III: Combined singularities
This kind of singularity occurs when, for a particular configuration, both $\mathbf{J}_x$ and $\mathbf{J}_q$ become singular simultaneously.
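Since each singularity type reduces to a rank or determinant test on one of the Jacobians, the classification can be checked numerically. The following is a minimal sketch using the $\mathbf{J}_x$ / $\mathbf{J}_q$ notation introduced above; the matrices are arbitrary illustrations, not a model of any particular mechanism.

```python
# Numerical check of the three singularity types as determinant tests on the
# two Jacobians. The matrices here are arbitrary illustrations.
import numpy as np

def singularity_type(J_x: np.ndarray, J_q: np.ndarray, tol: float = 1e-9) -> str:
    direct_singular = abs(np.linalg.det(J_x)) < tol    # Type II condition
    inverse_singular = abs(np.linalg.det(J_q)) < tol   # Type I condition
    if direct_singular and inverse_singular:
        return "Type III (combined)"
    if direct_singular:
        return "Type II (direct-kinematics)"
    if inverse_singular:
        return "Type I (inverse-kinematics)"
    return "non-singular configuration"

J_x = np.array([[1.0, 2.0], [2.0, 4.0]])   # rank-deficient -> direct-kinematics singularity
J_q = np.array([[1.0, 0.0], [0.0, 3.0]])
print(singularity_type(J_x, J_q))          # Type II (direct-kinematics)
```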
References
Mechanical engineering | Mechanical singularity | [
"Physics",
"Engineering"
] | 286 | [
"Mechanical engineering stubs",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
5,792,005 | https://en.wikipedia.org/wiki/Perchloryl%20fluoride | Perchloryl fluoride is a reactive gas with the chemical formula . It has a characteristic sweet odor that resembles gasoline and kerosene. It is toxic and is a powerful oxidizing and fluorinating agent. It is the acid fluoride of perchloric acid.
In spite of its small enthalpy of formation (ΔfH° = ), it is kinetically stable, decomposing only at 400 °C. It is quite reactive towards reducing agents and anions, however, with the chlorine atom acting as an electrophile. It reacts explosively with reducing agents such as metal amides, metals, hydrides, etc. Its hydrolysis in water occurs very slowly, unlike that of chloryl fluoride.
Synthesis and chemistry
Perchloryl fluoride is produced primarily by the fluorination of perchlorates. The initial syntheses in the early 1950s used fluorine gas or fluorides and anodic oxidation as the fluorinating agents, but these give explosive gaseous mixtures. A common fluorinator in modern syntheses is antimony pentafluoride:
+ 3 HF + 2 → + + 2
Alternatively, potassium perchlorate reacts with excess fluorosulfuric acid to give potassium bisulfate and perchloryl fluoride:
KClO4 + HFSO3 → KHSO4 + FClO3
Perchloryl fluoride reacts with alcohols to produce alkyl perchlorates, which are extremely shock-sensitive explosives. In the presence of a Lewis acid, it can be used for introducing the perchloryl group (–ClO3) into aromatic rings via electrophilic aromatic substitution.
Applications
Perchloryl fluoride is used in organic chemistry as a mild fluorinating agent. It was the first industrially relevant electrophilic fluorinating agent, used since the 1960s for producing fluorinated steroids. In the presence of aluminum trichloride, it has also been used as an electrophilic perchlorylation reagent for aromatic compounds.
Perchloryl fluoride was investigated as a high performance liquid rocket fuel oxidizer. In comparison with chlorine pentafluoride and bromine pentafluoride, it has significantly lower specific impulse, but does not tend to corrode tanks. It does not require cryogenic storage. Rocket fuel chemist John Drury Clark reported in his book Ignition! that perchloryl fluoride is completely miscible with all-halogen oxidizers such as chlorine trifluoride and chlorine pentafluoride, and such a mixture provides the needed oxygen to properly burn carbon-containing fuels. It can also be used in flame photometry as an excitation source.
Safety
Perchloryl fluoride is toxic, with a TLV of 3 ppm. It is a strong lung- and eye-irritant capable of producing burns on exposed skin. Its IDLH level is 100 ppm. Symptoms of exposure include dizziness, headaches, syncope, and cyanosis. Exposure to toxic levels causes severe respiratory tract inflammation and pulmonary edema.
See also
Periodyl fluoride
Perbromyl fluoride
References
Inorganic chlorine compounds
Oxyfluorides
Rocket oxidizers
Fluorinating agents
Perchloryl compounds
Sweet-smelling chemicals | Perchloryl fluoride | [
"Chemistry"
] | 688 | [
"Inorganic compounds",
"Oxidizing agents",
"Fluorinating agents",
"Rocket oxidizers",
"Inorganic chlorine compounds",
"Reagents for organic chemistry"
] |
5,792,085 | https://en.wikipedia.org/wiki/Reduction%20drive | A reduction drive is a mechanical device to shift rotational speed. A planetary reduction drive is a small scale version using ball bearings in an epicyclic arrangement instead of toothed gears.
Reduction drives are used in engines of all kinds to increase the amount of torque per revolution of a shaft: the gearbox of any car is a ubiquitous example of a reduction drive. Common household uses are washing machines, food blenders and window-winders. Reduction drives are also used to decrease the rotational speed of an input shaft to an appropriate output speed. Reduction drives can be a gear train design or belt driven.
Planetary reduction drives are typically attached between the shaft of the variable capacitor and the tuning knob of any radio, to allow fine adjustments of the tuning capacitor with smooth movements of the knob. Planetary drives are used in this situation to avoid "backlash", which makes tuning easier. If the capacitor drive has backlash, when one attempts to tune in a station, the tuning knob will feel sloppy and it will be hard to perform small adjustments. Gear-drives can be made to have no backlash by using split gears and spring tension but the shaft bearings have to be very precise.
Application
Reduction gear in light aircraft
Piston-engined light aircraft may have direct-drive to the propeller or may use a reduction drive. The advantages of direct-drive are simplicity, lightness and reliability, but a direct-drive engine may never achieve full output, as the propeller might exceed its maximum permissible rpm. For instance, a direct-drive aero engine (such as the Jabiru 2200) has a nominal maximum output of 64 kW (85 bhp) at 3,300 RPM, but if the propeller cannot exceed 2,600 rpm, the maximum output would be only about 70 bhp. By contrast, a Rotax 912 has an engine capacity of only 56% of the Jabiru 2200, but its reduction gear (of 1 : 2.273 or 1 : 2.43) allows the full output of 80 bhp to be exploited.
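The power figures quoted above follow from a rough proportionality between power and shaft speed near the top of the rev range. The sketch below illustrates that estimate; the linear power-versus-rpm scaling is a simplifying assumption, not an exact engine characteristic.

```python
# Rough illustration of why a direct-drive engine cannot use its full rated power:
# assuming output near the top of the rev range scales roughly linearly with rpm
# (a simplification), capping the propeller speed caps the usable power.
rated_power_bhp = 85.0
rated_rpm = 3300.0
propeller_limit_rpm = 2600.0

usable_power_bhp = rated_power_bhp * propeller_limit_rpm / rated_rpm
print(round(usable_power_bhp))   # ~67 bhp, i.e. roughly the "about 70 bhp" quoted above
```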
The Midwest twin-rotor wankel engine has an eccentric shaft that spins up to 7,800 rpm, so a 2.96:1 reduction gear is used.
Aero-engine reduction gears are typically of the gear type, but smaller two-stroke engines such as the Rotax 582 use belt drive with toothed belts, which is a cheap and lightweight option with built-in damping of power surges.
Reduction Drives on Marine Vessels
Most of the world's ships are powered by diesel engines which can be split into three categories, low speed (<400 rpm), medium speed (400-1200 rpm), and high speed (1200+ rpm). Low speed diesels operate at speeds within the optimum range for propeller usage. Thus it is acceptable to directly transmit power from the engine to the propeller. For medium and high speed diesels, the rotational speed of the crankshaft within the engine must be reduced in order to reach the optimum speed for use by a propeller.
Reduction drives operate by making the engine turn a high speed pinion against a gear, turning the high rotational speed from the engine to lower rotational speed for the propeller. The amount of reduction is based on the number of teeth on each gear. For example, a pinion with 25 teeth, turning a gear with 100 teeth, must turn 4 times in order for the larger gear to turn once. This reduces the speed by a factor of 4 while raising the torque 4 fold. This reduction factor changes depending on the needs and operating speeds of the machinery. The reduction gear aboard the Training Ship Golden Bear has a ratio of 3.6714:1. So when the two Enterprise R5 V-16 diesel engines operate at their standard 514 rpm, the propeller turns at 140 rpm.
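The tooth-count arithmetic above can be written down directly: the ratio is the gear tooth count over the pinion tooth count, the output speed is the input speed divided by that ratio, and (for an ideal, lossless drive) the torque is multiplied by the same factor. The sketch below reproduces both the 4:1 example and the Golden Bear figures quoted above; the input torque values are arbitrary placeholders.

```python
# Worked example of the gear-reduction arithmetic described above (ideal, lossless drive).
def reduction(input_rpm, input_torque, pinion_teeth=None, gear_teeth=None, ratio=None):
    """Output speed and torque for an ideal single reduction."""
    if ratio is None:
        ratio = gear_teeth / pinion_teeth
    return input_rpm / ratio, input_torque * ratio

# 25-tooth pinion driving a 100-tooth gear: 4:1 reduction.
print(reduction(input_rpm=1000.0, input_torque=50.0, pinion_teeth=25, gear_teeth=100))

# Training Ship Golden Bear: 3.6714:1 reduction, 514 rpm engine speed.
rpm_out, _ = reduction(input_rpm=514.0, input_torque=1.0, ratio=3.6714)
print(round(rpm_out))   # ~140 rpm at the propeller
```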
A large variety of reduction gear arrangements are used in the industry. The three arrangements most commonly used are: double reduction utilizing two pinion nested, double reduction utilizing two-pinion articulated, and double reduction utilizing two-pinion locked train.
The gears used in a ship's reduction gearbox are usually double helical gears. This design helps lower the amount of required maintenance and increase the lifetime of the gears. Helical gears are used because the load upon it is more distributed than in other types. The double helical gear set can also be called a herringbone gear and consists of two oppositely angled sets of teeth. A single set of helical teeth will produce a thrust parallel to the axle of the gear (known as axial thrust) due to the angular nature of the teeth. By adding a second set opposed to the first set, the axial thrust created by both sets cancels each other out.
When installing reduction gears on ships, the alignment of the gear is critical. Correct alignment helps ensure a uniform distribution of load upon each pinion and gear. When manufactured, the gears are assembled in such a way as to obtain uniform load distribution and tooth contact. After completion of construction and delivery to the shipyard, the gears must again achieve proper alignment when first operated under load. Some shipbuilders have the gears transported and installed as a complete assembly; others have the gears dismantled, shipped, reassembled in their shops, and lowered as a complete assembly into the ship; still others have the gears dismantled, shipped, and reassembled in the ship. These three methods are the most common used by shipbuilders to achieve proper alignment, and each of them works on the assumption that proper alignment was correctly achieved at the manufacturer.
Because aligning reduction drives involves several parties, responsibility for achieving proper alignment is shared between the shipbuilder and the gear manufacturer. The shipbuilder must provide a foundation that is strong and rigid enough that the gear mounting surface does not deflect significantly under operating conditions; a shaft alignment drawing that details the positions of the line bearings and the method for aligning the forward piece of line shafting to the reduction gear coupling; and a stern tube located such that its normal wear-down will not move the reduction gear coupling significantly out of alignment.
The gear manufacturer is in turn responsible for ensuring basic gear alignment, taking and recording the final assembly measurements carefully so that the reduction drive can be installed correctly; for establishing proper tooth contact in the factory, where the gears and pinions are assembled accurately and precisely; and for documenting all steps performed, measuring parts at the different stages and at final assembly, and forwarding this data to the shipbuilder so that the degree of accuracy required by the gear designer can be assured in the resulting shipboard assembly.
Thrust bearings do not commonly appear on reduction drives on ships because axial loading is handled by a thrust bearing separate from the reduction drive assembly. But on smaller reduction drives attached to auxiliary machinery or if the design of the ship demands it, one can find thrust bearings as a part of the assembly.
In order to ensure a reduction drive's smooth operation and long lifetime, it is vital to have lubricating oil. A reduction drive run with oil free of impurities such as water, dirt, grit and flakes of metal requires little care in comparison to other types of engine-room machinery. In order to ensure that the lube oil in the reduction gears stays this way, a lube oil purifier is installed with the drive.
Types
Types of reduction drives include cycloidal, strain wave gear, and worm gear drives.
References
Hardware (mechanical) | Reduction drive | [
"Physics",
"Technology",
"Engineering"
] | 1,541 | [
"Physical systems",
"Machines",
"Hardware (mechanical)",
"Construction"
] |
9,073,820 | https://en.wikipedia.org/wiki/PVLAS | PVLAS (Polarizzazione del Vuoto con LASer, "polarization of the vacuum with laser") aims to carry out a test of quantum electrodynamics and possibly detect dark matter at the Department of Physics and National Institute of Nuclear Physics in Ferrara, Italy. It searches for vacuum polarization causing nonlinear optical behavior in magnetic fields. Experiments began in 2001 at the INFN Laboratory in Legnaro (Padua, Italy) and continue today with new equipment.
Background
Nonlinear electrodynamic effects in vacuum have been predicted since the earliest days of quantum electrodynamics (QED), a few years after the discovery of positrons. One such effect is vacuum magnetic birefringence, closely connected to elastic light-by-light interaction. The effect is extremely small and has never yet been observed directly.
Although today QED is a very well-tested theory, the importance of detecting light-by-light interaction remains. First, QED has always been tested in the presence of charged particles either in the initial state or the final state; no tests exist in systems with only photons. More generally, no interaction has ever been observed directly with only gauge bosons present in the initial and final states. Second, to date, the evidence for zero-point quantum fluctuations relies entirely on the observation of the Casimir effect, which applies to photons only. PVLAS deals with the fluctuations of virtual charged particle–antiparticle pairs (of any nature, including hypothetical millicharged particles) and therefore with the structure of the fermionic quantum vacuum: to leading order, it would be a direct detection of loop diagrams. Finally, the observation of light-by-light interaction would be evidence of the breakdown of the superposition principle and of Maxwell's equations. One important consequence of such a nonlinearity is that the velocity of light would depend on whether other electromagnetic fields are present. PVLAS carries out its search by looking at changes in the polarisation state of a linearly polarised laser beam after it passes through a vacuum with an intense magnetic field. The first general derivation of vacuum birefringence induced by an external field in quantum electrodynamics is generally credited to Stephen L. Adler, who presented it in "Photon splitting and photon dispersion in a strong magnetic field" (1971). Experimental investigation of photon splitting in atomic fields was carried out at the ROKK-1 facility at the Budker Institute in 1993–96.
Design
PVLAS uses a high-finesse Fabry–Perot optical cavity. The first setup, used until 2005, sent a linearly polarized laser beam through a vacuum with a 5 T magnetic field from a superconducting magnet to an ellipsometer. After upgrades to avoid fringe fields, several runs were done at 2.3 T and 5 T, excluding a prior claim of axion detection. It was determined that an optimized optical setup was needed for discovery potential. A prototype with much improved sensitivity was tested in 2010. In 2013 the upgraded apparatus, with permanent magnets and a horizontal ellipsometer, was set up at INFN Ferrara; it began taking data in 2014.
Results
PVLAS investigated vacuum polarization induced by external magnetic fields. An observation of the rotation of light polarization by the vacuum in a magnetic field was published in 2006. Data taken with an upgraded setup excluded the previous magnetic rotation in 2008 and set limits on photon-photon scattering. An improved limit on nonlinear vacuum effects was set in 2012: Ae < 2.9·10−21 T−2 @ 95% C.L.
See also
DAMA/NaI
DAMA/LIBRA
CAST
External links
PVLAS website - Istituto Nazionale di Fisica Nucleare (INFN) – Trieste
OSQAR experiment – CERN
PVLAS experiment record on INSPIRE-HEP
References and notes
Experiments for dark matter search
Particle experiments | PVLAS | [
"Physics"
] | 794 | [
"Dark matter",
"Experiments for dark matter search",
"Unsolved problems in physics"
] |
9,075,104 | https://en.wikipedia.org/wiki/Dark%20Energy%20Survey | The Dark Energy Survey (DES) is an astronomical survey designed to constrain the properties of dark energy. It uses images taken in the near-ultraviolet, visible, and near-infrared to measure the expansion of the universe using Type Ia supernovae, baryon acoustic oscillations, the number of galaxy clusters, and weak gravitational lensing. The collaboration is composed of research institutions and universities from the United States, Australia, Brazil, the United Kingdom, Germany, Spain, and Switzerland. The collaboration is divided into several scientific working groups. The director of DES is Josh Frieman.
The DES began by developing and building Dark Energy Camera (DECam), an instrument designed specifically for the survey. This camera has a wide field of view and high sensitivity, particularly in the red part of the visible spectrum and in the near infrared. Observations were performed with DECam mounted on the 4-meter Víctor M. Blanco Telescope, located at the Cerro Tololo Inter-American Observatory (CTIO) in Chile. Observing sessions ran from 2013 to 2019; the DES collaboration has published results from the first three years of the survey.
DECam
DECam, short for the Dark Energy Camera, is a large camera built to replace the previous prime focus camera on the Victor M. Blanco Telescope. The camera consists of three major components: mechanics, optics, and CCDs.
Mechanics
The mechanics of the camera consists of a filter changer with an 8-filter capacity and shutter. There is also an optical barrel that supports 5 corrector lenses, the largest of which is 98 cm in diameter. These components are attached to the CCD focal plane which is cooled to with liquid nitrogen in order to reduce thermal noise in the CCDs. The focal plane is also kept in an extremely low vacuum of to prevent the formation of condensation on the sensors. The entire camera with lenses, filters, and CCDs weighs approximately 4 tons. When mounted at the prime focus it was supported with a hexapod system allowing for real time focal adjustment.
Optics
The camera is outfitted with u, g, r, i, z, and Y filters spanning roughly from 340–1070 nm, similar to those used in the Sloan Digital Sky Survey (SDSS). This allows DES to obtain photometric redshift measurements to z≈1. DECam also contains five lenses acting as corrector optics to extend the telescope's field of view to a diameter of 2.2°, one of the widest fields of view available for ground-based optical and infrared imaging. One significant difference between previous charge-coupled devices (CCD) at the Victor M. Blanco Telescope and DECam is the improved quantum efficiency in the red and near-infrared wavelengths.
CCDs
The scientific sensor array on DECam is an array of 62 2048×4096 pixel back-illuminated CCDs totaling 520 megapixels; an additional 12 2048×2048 pixel CCDs (50 Mpx) are used for guiding the telescope, monitoring focus, and alignment. The full DECam focal plane contains 570 megapixels. The CCDs for DECam use high-resistivity silicon manufactured by Dalsa and LBNL with 15×15 micron pixels. By comparison, the OmniVision Technologies back-illuminated CCD that was used in the iPhone 4 has a 1.75×1.75 micron pixel with 5 megapixels. The larger pixels allow DECam to collect more light per pixel, improving low-light sensitivity, which is desirable for an astronomical instrument. DECam's CCDs also have a 250-micron crystal depth; this is significantly larger than in most consumer CCDs. The additional crystal depth increases the path length travelled by entering photons. This, in turn, increases the probability of interaction and allows the CCDs to have an increased sensitivity to lower-energy photons, extending the wavelength range to 1050 nm. Scientifically this is important because it allows one to look for objects at higher redshift, increasing statistical power in the studies mentioned above. When placed in the telescope's focal plane, each pixel has a width of 0.27 arcseconds on the sky, resulting in a total field of view of 3 square degrees.
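The pixel count and field of view quoted above are mutually consistent, as a quick back-of-the-envelope check shows; the sketch below only reuses the numbers stated in this section.

```python
# Back-of-the-envelope check of the DECam numbers quoted above.
science_ccds = 62
pixels_per_ccd = 2048 * 4096
science_pixels = science_ccds * pixels_per_ccd
print(science_pixels / 1e6)                # ~520 megapixels

pixel_scale_arcsec = 0.27                  # width of one pixel on the sky
pixel_area_deg2 = (pixel_scale_arcsec / 3600.0) ** 2
print(science_pixels * pixel_area_deg2)    # ~2.9 square degrees, i.e. the ~3 deg^2 field of view
```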
Survey
DES imaged 5,000 square degrees of the southern sky in a footprint that overlaps with the South Pole Telescope and Stripe 82 (in large part avoiding the Milky Way). The survey took 758 observing nights spread over six annual sessions between August and February to complete, covering the survey footprint ten times in five photometric bands (g, r, i, z, and Y). The survey reached a depth of 24th magnitude in the i band over the entire survey area. Longer exposure times and faster observing cadence were made in five smaller patches totaling 30 square degrees to search for supernovae.
First light was achieved on 12 September 2012; after a verification and testing period, scientific survey observations started in August 2013. The last observing session was completed on 9 January 2019.
Other surveys using DECam
After completion of the Dark Energy Survey, the Dark Energy Camera was used for other sky surveys:
Dark Energy Camera Legacy Survey (DECaLS) covers the sky below 32° declination, not including the Milky Way. This survey covers over 9000 square degrees.
The DESI Legacy Imaging Surveys (Legacy Surveys), as of data release 10, include DECaLS, BASS and MzLS. They also incorporate additional DECam data, which means that they cover almost the entire extragalactic southern sky, including parts of the Magellanic Clouds. The purpose of the Legacy Surveys is to find targets for the Dark Energy Spectroscopic Instrument.
Dark Energy Camera Plane Survey (DECaPS), covers the Milky Way in the southern sky.
Observing
Each year from August through February, observers stay in dormitories on the mountain. During a weeklong period of work, observers sleep during the day and use the telescope and camera at night. Some DES members work at the telescope console to monitor operations, while others monitor camera operations and data processing.
For the wide-area footprint observations, DES takes a new image roughly every two minutes: the exposures are typically 90 seconds long, with another 30 seconds for reading out the camera data and slewing the telescope to its next target. In addition to these timing constraints, the team must also account for changing sky conditions, such as moonlight and cloud cover.
To obtain better images, the DES team uses a computer algorithm called the "Observing Tactician" (ObsTac) to help sequence observations. It optimizes among different factors, such as the date and time, weather conditions, and the position of the moon. ObsTac automatically points the telescope in the best direction and selects the exposure, using the best light filter. It also decides whether to take a wide-area or time-domain survey image, depending on whether or not the exposure will also be used for supernova searches.
Results
Cosmology
The Dark Energy Survey collaboration has published several papers presenting its cosmology results, most of them based on the first-year and third-year data. These results were obtained with a multi-probe methodology that combines galaxy–galaxy lensing, cosmic shear and other weak lensing measurements, galaxy clustering, and the photometric data set.
For the first-year data collected by DES, the collaboration presented cosmological constraints from the combination of galaxy clustering and weak lensing, and from cosmic shear measurements, reporting parameter constraints at 68% confidence limits for the ΛCDM and wCDM models. Combining the most significant measurements of cosmic shear in a galaxy survey, the collaboration reported corresponding constraints at 68% confidence limits for ΛCDM. Other cosmological analyses from the first-year data included a derivation and validation of redshift distribution estimates and their uncertainties for the galaxies used as weak lensing sources. The DES team also published a paper summarizing the full first-year photometric data set for cosmology.
For the third-year data collected by DES, the collaboration updated its cosmological constraints using the new cosmic shear measurements and the combination of galaxy clustering and weak lensing, again reporting constraints in ΛCDM at 68% confidence limits. Similarly, the DES team published its third-year photometric data set for cosmology, comprising nearly 5000 deg2 of imaging in the south Galactic cap and including nearly 390 million objects, with depth reaching S/N ~ 10 for extended objects up to ~ 23.0 and top-of-the-atmosphere photometric uniformity < 3 mmag.
Weak lensing
Weak lensing was measured statistically by measuring the shear–shear correlation function, a two-point function, or its Fourier transform, the shear power spectrum. In April 2015, the Dark Energy Survey released mass maps using cosmic shear measurements of about 2 million galaxies from the science verification data taken between August 2012 and February 2013. In 2021 weak lensing was used to map the dark matter in a region of the southern-hemisphere sky; in 2022 it was combined with galaxy clustering data to give new cosmological constraints; and in 2023 it was combined with data from the Planck telescope and the South Pole Telescope to give further improved constraints.
Another major part of the weak lensing analysis is calibrating the redshifts of the source galaxies. In December 2020 and June 2021, the DES team published two papers presenting its methods for calibrating the redshifts of the source galaxies used to map the matter density field with gravitational lensing.
Gravitational waves
After LIGO detected the gravitational-wave signal GW170817, DES made follow-up observations using DECam. Having independently discovered the optical source with DECam, the DES team established its association with GW170817 by showing that none of the 1500 other sources found within the event localization region could plausibly be associated with the event. The team monitored the source for over two weeks and provided the light-curve data as a machine-readable file. From the observational data set, DES concluded that the optical counterpart it identified near NGC 4993 is associated with GW170817. This discovery ushered in the era of multi-messenger astronomy with gravitational waves and demonstrated the power of DECam to identify the optical counterparts of gravitational-wave sources.
Dwarf galaxies
In March 2015, two teams released their discoveries of several new potential dwarf galaxy candidates found in Year 1 DES data. In August 2015, the Dark Energy Survey team announced the discovery of eight additional candidates in Year 2 DES data. The team later found more dwarf galaxies, and with these additional discoveries it was able to examine the properties of the detected dwarf galaxies in greater detail, such as their chemical abundances, the structure of their stellar populations, and their stellar kinematics and metallicities. In February 2019, the team also discovered a sixth star cluster in the Fornax dwarf spheroidal galaxy and a tidally disrupted ultra-faint dwarf galaxy.
Baryon acoustic oscillations
The signature of baryon acoustic oscillations (BAO) can be observed in the distribution of tracers of the matter density field and used to measure the expansion history of the Universe. BAO can also be measured using purely photometric data, though at lower significance. The DES observation sample consists of 7 million galaxies distributed over a footprint of 4100 deg2, with a typical redshift uncertainty of 0.03(1+z). The team combined the likelihoods derived from angular correlations and spherical harmonics to constrain the ratio of the comoving angular diameter distance at the effective redshift of the sample to the sound horizon scale at the drag epoch.
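The constrained quantity is the distance ratio D_M(z_eff)/r_d. The sketch below shows how that ratio is computed for a flat ΛCDM model; the cosmological parameter values, the sound horizon, and the effective redshift used here are illustrative assumptions, not the DES measurement.

```python
# Minimal sketch of the BAO observable described above: the ratio of the comoving
# angular diameter distance D_M(z_eff) to the sound horizon r_d at the drag epoch,
# computed for a flat LambdaCDM model.  All parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

c_km_s = 299792.458
H0 = 70.0          # km/s/Mpc, illustrative
Omega_m = 0.3      # illustrative
r_d = 147.0        # Mpc, illustrative sound horizon at the drag epoch

def E(z):
    return np.sqrt(Omega_m * (1 + z) ** 3 + (1 - Omega_m))

def comoving_distance(z):
    """Comoving distance in Mpc (equal to the comoving angular diameter distance
    for a spatially flat universe)."""
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return c_km_s / H0 * integral

z_eff = 0.835      # illustrative effective redshift for a photometric BAO sample
print(comoving_distance(z_eff) / r_d)   # the distance ratio D_M(z_eff) / r_d
```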
Type Ia supernova observations
In May 2019, the Dark Energy Survey team published its first cosmology results using Type Ia supernovae, based on the DES-SN3YR sample. The team found Ωm = 0.331 ± 0.038 with a flat ΛCDM model and Ωm = 0.321 ± 0.018, w = −0.978 ± 0.059 with a flat wCDM model. Analyzing the same DES-SN3YR data, the team also reported a new measurement of the Hubble constant, in excellent agreement with the Hubble constant measurement from the Planck satellite collaboration in 2018. In June 2019, a follow-up paper was published by the DES team discussing the systematic uncertainties and the validation of using the supernovae to measure the cosmology results mentioned above. The team also published its photometric pipeline and light-curve data in another paper published in the same month.
Minor planets
Several minor planets were discovered by DECam in the course of the Dark Energy Survey, including high-inclination trans-Neptunian objects (TNOs).
{| class="wikitable" style="font-size:89%; float:left; text-align:center; width:27em; margin-right:1em; line-height:1.65em !important; height:155px;"
|+ List of DES discovered minor planets
|-
! Numbered MPdesignation !! Discoverydate
!style="width:3em;" |
! Ref
|-
|
| 19 November 2012
|
|
|-
|
| 8 September 2013
|
|
|-
|
| 18 August 2014
|
|
|-
|
| 19 August 2014
|
|
|-
|
| 15 November 2012
|
|
|-
|
| 15 November 2012
|
|
|-
|
| 28 September 2012
|
|
|-
|
| 12 November 2012
|
|
|-
|
| 13 October 2013
|
|
|-
!colspan=4 style="font-weight:normal; text-align:center; padding:4px 12px;"| Discoveries are credited either to"DECam" or "Dark Energy Survey".
|}
The MPC has assigned the IAU code W84 for DECam's observations of small Solar System bodies. As of October 2019, the MPC inconsistently credits the discovery of nine numbered minor planets, all of them trans-Neptunian objects, to either "DECam" or "Dark Energy Survey". The list does not contain any unnumbered minor planets potentially discovered by DECam, as discovery credits are only given upon a body's numbering, which in turn depends on a sufficiently secure orbit determination.
Gallery
See also
Cosmic Evolution Survey
References
External links
Dark Energy Survey website
Dark Energy Survey Science Program (PDF)
Dark Energy Survey Data Management
Dark Energy Camera (DECam)
Astronomical surveys
Dark energy
Fermilab experiments
Minor-planet discovering observatories | Dark Energy Survey | [
"Physics",
"Astronomy"
] | 3,080 | [
"Unsolved problems in astronomy",
"Physical quantities",
"Concepts in astronomy",
"Astronomical surveys",
"Unsolved problems in physics",
"Works about astronomy",
"Energy (physics)",
"Dark energy",
"Wikipedia categories named after physical quantities",
"Astronomical objects"
] |
191,101 | https://en.wikipedia.org/wiki/Phase%20space | The phase space of a physical system is the set of all possible physical states of the system when described by a given parameterization. Each possible state corresponds uniquely to a point in the phase space. For mechanical systems, the phase space usually consists of all possible values of the position and momentum parameters. It is the direct product of direct space and reciprocal space. The concept of phase space was developed in the late 19th century by Ludwig Boltzmann, Henri Poincaré, and Josiah Willard Gibbs.
Principles
In a phase space, every degree of freedom or parameter of the system is represented as an axis of a multidimensional space; a one-dimensional system is called a phase line, while a two-dimensional system is called a phase plane. For every possible state of the system or allowed combination of values of the system's parameters, a point is included in the multidimensional space. The system's evolving state over time traces a path (a phase-space trajectory for the system) through the high-dimensional space. The phase-space trajectory represents the set of states compatible with starting from one particular initial condition, located in the full phase space that represents the set of states compatible with starting from any initial condition. As a whole, the phase diagram represents all that the system can be, and its shape can easily elucidate qualities of the system that might not be obvious otherwise. A phase space may contain a great number of dimensions. For instance, a gas containing many molecules may require a separate dimension for each particle's x, y and z positions and momenta (6 dimensions for an idealized monatomic gas), and for more complex molecular systems additional dimensions are required to describe vibrational modes of the molecular bonds, as well as spin around 3 axes. Phase spaces are easier to use when analyzing the behavior of mechanical systems restricted to motion around and along various axes of rotation or translation e.g. in robotics, like analyzing the range of motion of a robotic arm or determining the optimal path to achieve a particular position/momentum result.
Conjugate momenta
In classical mechanics, any choice of generalized coordinates qi for the position (i.e. coordinates on configuration space) defines conjugate generalized momenta pi, which together define co-ordinates on phase space. More abstractly, in classical mechanics phase space is the cotangent bundle of configuration space, and in this interpretation the procedure above expresses that a choice of local coordinates on configuration space induces a choice of natural local Darboux coordinates for the standard symplectic structure on a cotangent space.
Statistical ensembles in phase space
The motion of an ensemble of systems in this space is studied by classical statistical mechanics. The local density of points in such systems obeys Liouville's theorem, and so can be taken as constant. Within the context of a model system in classical mechanics, the phase-space coordinates of the system at any given time are composed of all of the system's dynamic variables. Because of this, it is possible to calculate the state of the system at any given time in the future or the past, through integration of Hamilton's or Lagrange's equations of motion.
In low dimensions
For simple systems, there may be as few as one or two degrees of freedom. One degree of freedom occurs when one has an autonomous ordinary differential equation in a single variable, with the resulting one-dimensional system being called a phase line, and the qualitative behaviour of the system being immediately visible from the phase line. The simplest non-trivial examples are the exponential growth model/decay (one unstable/stable equilibrium) and the logistic growth model (two equilibria, one stable, one unstable).
The phase space of a two-dimensional system is called a phase plane, which occurs in classical mechanics for a single particle moving in one dimension, and where the two variables are position and velocity. In this case, a sketch of the phase portrait may give qualitative information about the dynamics of the system, such as the limit cycle of the Van der Pol oscillator shown in the diagram.
Here the horizontal axis gives the position, and the vertical axis the velocity. As the system evolves, its state follows one of the lines (trajectories) on the phase diagram.
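A phase-plane trajectory such as the Van der Pol limit cycle mentioned above is obtained by integrating the equations of motion and plotting velocity against position. The sketch below does this numerically; the damping parameter and initial condition are illustrative choices.

```python
# Sketch: generate a phase-plane trajectory (position vs. velocity) for the
# Van der Pol oscillator, whose limit cycle is mentioned above.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0   # damping parameter (illustrative value)

def van_der_pol(t, state):
    x, v = state
    return [v, mu * (1 - x**2) * v - x]

sol = solve_ivp(van_der_pol, (0.0, 30.0), [0.5, 0.0], dense_output=True)
t = np.linspace(0.0, 30.0, 2000)
x, v = sol.sol(t)

# Each (x, v) pair is one point of the phase-space trajectory; plotting v against x
# traces the curve that spirals onto the limit cycle.
# import matplotlib.pyplot as plt; plt.plot(x, v); plt.show()
print(x[-1], v[-1])
```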
Related concepts
Phase plot
A plot of position and momentum variables as a function of time is sometimes called a phase plot or a phase diagram. However the latter expression, "phase diagram", is more usually reserved in the physical sciences for a diagram showing the various regions of stability of the thermodynamic phases of a chemical system, which consists of pressure, temperature, and composition.
Phase portrait
Phase integral
In classical statistical mechanics (continuous energies) the concept of phase space provides a classical analog to the partition function (sum over states) known as the phase integral. Instead of summing the Boltzmann factor over discretely spaced energy states (defined by appropriate integer quantum numbers for each degree of freedom), one may integrate over continuous phase space. Such integration essentially consists of two parts: integration of the momentum component of all degrees of freedom (momentum space) and integration of the position component of all degrees of freedom (configuration space). Once the phase integral is known, it may be related to the classical partition function by multiplication of a normalization constant representing the number of quantum energy states per unit phase space. This normalization constant is simply the inverse of the Planck constant raised to a power equal to the number of degrees of freedom for the system.
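For a single one-dimensional harmonic oscillator the phase-integral prescription can be checked in closed form: integrating the Boltzmann factor over position and momentum and dividing by one factor of the Planck constant reproduces the classical partition function kT/(ħω). The sketch below verifies this numerically; the mass, frequency, and temperature values are arbitrary illustrations.

```python
# Numerical check of the phase-integral prescription for one classical 1-D
# harmonic oscillator, H = p^2/(2m) + m w^2 q^2 / 2.  Integrating exp(-H/kT)
# over (q, p) and dividing by Planck's constant h (one factor per degree of
# freedom) gives the classical partition function Z = kT / (hbar * w).
import numpy as np

k_B = 1.380649e-23      # J/K
h = 6.62607015e-34      # J s
hbar = h / (2 * np.pi)

m, w, T = 1.0e-26, 1.0e12, 300.0   # illustrative mass (kg), angular frequency (rad/s), temperature (K)

# The Gaussian integrals factorize: ∫exp(-p^2/(2mkT))dp = sqrt(2*pi*m*kT),
# ∫exp(-m w^2 q^2/(2kT))dq = sqrt(2*pi*kT/(m w^2)).
phase_integral = np.sqrt(2 * np.pi * m * k_B * T) * np.sqrt(2 * np.pi * k_B * T / (m * w**2))
print(phase_integral / h)            # partition function from the phase integral
print(k_B * T / (hbar * w))          # analytic result kT / (hbar * w) -- should match
```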
Applications
Chaos theory
Classic examples of phase diagrams from chaos theory are:
the Lorenz attractor
population growth (i.e. logistic map)
parameter plane of complex quadratic polynomials with Mandelbrot set.
Quantum mechanics
In quantum mechanics, the coordinates p and q of phase space normally become Hermitian operators in a Hilbert space.
But they may alternatively retain their classical interpretation, provided functions of them compose in novel algebraic ways (through Groenewold's 1946 star product). This is consistent with the uncertainty principle of quantum mechanics.
Every quantum mechanical observable corresponds to a unique function or distribution on phase space, and conversely, as specified by Hermann Weyl (1927) and supplemented by John von Neumann (1931); Eugene Wigner (1932); and, in a grand synthesis, by H. J. Groenewold (1946).
With J. E. Moyal (1949), these completed the foundations of the phase-space formulation of quantum mechanics, a complete and logically autonomous reformulation of quantum mechanics. (Its modern abstractions include deformation quantization and geometric quantization.)
Expectation values in phase-space quantization are obtained isomorphically to tracing operator observables with the density matrix in Hilbert space: they are obtained by phase-space integrals of observables, with the Wigner quasi-probability distribution effectively serving as a measure.
Thus, by expressing quantum mechanics in phase space (the same ambit as for classical mechanics), the Weyl map facilitates recognition of quantum mechanics as a deformation (generalization) of classical mechanics, with deformation parameter ħ/S, where S is the action of the relevant process. (Other familiar deformations in physics involve the deformation of classical Newtonian into relativistic mechanics, with deformation parameter v/c; or the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild radius/characteristic dimension.)
Classical expressions, observables, and operations (such as Poisson brackets) are modified by ħ-dependent quantum corrections, as the conventional commutative multiplication applying in classical mechanics is generalized to the noncommutative star-multiplication characterizing quantum mechanics and underlying its uncertainty principle.
Thermodynamics and statistical mechanics
In thermodynamics and statistical mechanics contexts, the term "phase space" has two meanings: for one, it is used in the same sense as in classical mechanics. If a thermodynamic system consists of N particles, then a point in the 6N-dimensional phase space describes the dynamic state of every particle in that system, as each particle is associated with 3 position variables and 3 momentum variables. In this sense, as long as the particles are distinguishable, a point in phase space is said to be a microstate of the system. (For indistinguishable particles a microstate consists of a set of N! points, corresponding to all possible exchanges of the N particles.) N is typically on the order of the Avogadro number, thus describing the system at a microscopic level is often impractical. This leads to the use of phase space in a different sense.
The phase space can also refer to the space that is parameterized by the macroscopic states of the system, such as pressure, temperature, etc. For instance, one may view the pressure–volume diagram or temperature–entropy diagram as describing part of this phase space. A point in this phase space is correspondingly called a macrostate. There may easily be more than one microstate with the same macrostate. For example, for a fixed temperature, the system could have many dynamic configurations at the microscopic level. When used in this sense, a phase is a region of phase space where the system in question is in, for example, the liquid phase, or solid phase, etc.
Since there are many more microstates than macrostates, the phase space in the first sense is usually a manifold of much larger dimensions than in the second sense. Clearly, many more parameters are required to register every detail of the system down to the molecular or atomic scale than to simply specify, say, the temperature or the pressure of the system.
Optics
Phase space is extensively used in nonimaging optics, the branch of optics devoted to illumination. It is also an important concept in Hamiltonian optics.
Medicine
In medicine and bioengineering, the phase space method is used to visualize multidimensional physiological responses.
See also
Configuration space (mathematics)
Minisuperspace
Phase line, 1-dimensional case
Phase plane, 2-dimensional case
Phase portrait
Phase space method
Parameter space
Separatrix
Applications
Optical phase space
State space (controls) for information about state space (similar to phase state) in control engineering.
State space for information about state space with discrete states in computer science.
Molecular dynamics
Mathematics
Cotangent bundle
Dynamic system
Symplectic manifold
Wigner–Weyl transform
Physics
Classical mechanics
Hamiltonian mechanics
Lagrangian mechanics
State space (physics) for information about state space in physics
Phase-space formulation of quantum mechanics
Characteristics in phase space of quantum mechanics
References
Further reading
External links
Concepts in physics
Dynamical systems
Dimensional analysis
Hamiltonian mechanics | Phase space | [
"Physics",
"Mathematics",
"Engineering"
] | 2,196 | [
"Dimensional analysis",
"Theoretical physics",
"Classical mechanics",
"Hamiltonian mechanics",
"Mechanics",
"nan",
"Mechanical engineering",
"Dynamical systems"
] |
191,123 | https://en.wikipedia.org/wiki/Planck%27s%20law | In physics, Planck's law (also Planck radiation law) describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature , when there is no net flow of matter or energy between the body and its environment.
At the end of the 19th century, physicists were unable to explain why the observed spectrum of black-body radiation, which by then had been accurately measured, diverged significantly at higher frequencies from that predicted by existing theories. In 1900, German physicist Max Planck heuristically derived a formula for the observed spectrum by assuming that a hypothetical electrically charged oscillator in a cavity that contained black-body radiation could only change its energy in a minimal increment, , that was proportional to the frequency of its associated electromagnetic wave. While Planck originally regarded the hypothesis of dividing energy into increments as a mathematical artifice, introduced merely to get the correct answer, other physicists including Albert Einstein built on his work, and Planck's insight is now recognized to be of fundamental importance to quantum theory.
The law
Every physical body spontaneously and continuously emits electromagnetic radiation and the spectral radiance of a body, $B_\nu$, describes the spectral emissive power per unit area, per unit solid angle and per unit frequency for particular radiation frequencies. The relationship given by Planck's radiation law, given below, shows that with increasing temperature, the total radiated energy of a body increases and the peak of the emitted spectrum shifts to shorter wavelengths. According to Planck's distribution law, the spectral energy density (energy per unit volume per unit frequency) at given temperature $T$ is given by:
$$u_\nu(\nu, T) = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/(k_\mathrm{B}T)} - 1}.$$
Alternatively, the law can be expressed for the spectral radiance of a body for frequency $\nu$ at absolute temperature $T$, given as:
$$B_\nu(\nu, T) = \frac{2h\nu^3}{c^2}\,\frac{1}{e^{h\nu/(k_\mathrm{B}T)} - 1},$$
where $k_\mathrm{B}$ is the Boltzmann constant, $h$ is the Planck constant, and $c$ is the speed of light in the medium, whether material or vacuum. The cgs units of spectral radiance are erg·s−1·sr−1·cm−2·Hz−1. The terms $B_\nu$ and $u_\nu$ are related to each other by a factor of $4\pi/c$, since $B_\nu$ is independent of direction and radiation travels at speed $c$.
The spectral radiance can also be expressed per unit wavelength instead of per unit frequency. In addition, the law may be expressed in other terms, such as the number of photons emitted at a certain wavelength, or the energy density in a volume of radiation.
In the limit of low frequencies (i.e. long wavelengths), Planck's law tends to the Rayleigh–Jeans law, while in the limit of high frequencies (i.e. short wavelengths) it tends to the Wien approximation.
Max Planck developed the law in 1900 with only empirically determined constants, and later showed that, expressed as an energy distribution, it is the unique stable distribution for radiation in thermodynamic equilibrium. As an energy distribution, it is one of a family of thermal equilibrium distributions which include the Bose–Einstein distribution, the Fermi–Dirac distribution and the Maxwell–Boltzmann distribution.
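The formula above is straightforward to evaluate numerically. The sketch below computes the spectral radiance $B_\nu(\nu, T)$ and compares it with the Rayleigh–Jeans expression at low frequency; the temperature and the two sample frequencies are illustrative choices, and SI units and vacuum propagation are assumed.

```python
# Numerical sketch of the spectral radiance formula given above, together with
# the Rayleigh-Jeans expression it approaches at low frequencies.
import numpy as np

h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e8        # speed of light, m/s
k_B = 1.380649e-23      # Boltzmann constant, J/K

def planck_B_nu(nu, T):
    """Spectral radiance B_nu(nu, T) in W m^-2 sr^-1 Hz^-1."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k_B * T))

def rayleigh_jeans_B_nu(nu, T):
    return 2 * nu**2 * k_B * T / c**2

T = 5800.0                       # roughly the solar surface temperature
nu_low, nu_peak = 1e9, 3.4e14    # 1 GHz (radio) and a frequency near the radiance peak
print(planck_B_nu(nu_low, T) / rayleigh_jeans_B_nu(nu_low, T))   # ~1: Rayleigh-Jeans regime
print(planck_B_nu(nu_peak, T), rayleigh_jeans_B_nu(nu_peak, T))  # Planck curve falls below RJ here
```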
Black-body radiation
A black-body is an idealised object which absorbs and emits all radiation frequencies. Near thermodynamic equilibrium, the emitted radiation is closely described by Planck's law and because of its dependence on temperature, Planck radiation is said to be thermal radiation, such that the higher the temperature of a body the more radiation it emits at every wavelength.
Planck radiation has a maximum intensity at a wavelength that depends on the temperature of the body. For example, at room temperature (~300 K), a body emits thermal radiation that is mostly infrared and invisible. At higher temperatures the amount of infrared radiation increases and can be felt as heat, and more visible radiation is emitted so the body glows visibly red. At still higher temperatures, the body is bright yellow or blue-white and emits significant amounts of short-wavelength radiation, including ultraviolet and even x-rays. The surface of the Sun (~5,800 K) emits large amounts of both infrared and ultraviolet radiation; its emission is peaked in the visible spectrum. This shift due to temperature is called Wien's displacement law.
Planck radiation is the greatest amount of radiation that any body at thermal equilibrium can emit from its surface, whatever its chemical composition or surface structure. The passage of radiation across an interface between media can be characterized by the emissivity of the interface (the ratio of the actual radiance to the theoretical Planck radiance), usually denoted by the symbol $\varepsilon$. It is in general dependent on chemical composition and physical structure, on temperature, on the wavelength, on the angle of passage, and on the polarization. The emissivity of a natural interface is always between 0 and 1.
A body that interfaces with another medium which both has $\varepsilon = 1$ and absorbs all the radiation incident upon it is said to be a black body. The surface of a black body can be modelled by a small hole in the wall of a large enclosure which is maintained at a uniform temperature with opaque walls that, at every wavelength, are not perfectly reflective. At equilibrium, the radiation inside this enclosure is described by Planck's law, as is the radiation leaving the small hole.
Just as the Maxwell–Boltzmann distribution is the unique maximum entropy energy distribution for a gas of material particles at thermal equilibrium, so is Planck's distribution for a gas of photons. By contrast to a material gas where the masses and number of particles play a role, the spectral radiance, pressure and energy density of a photon gas at thermal equilibrium are entirely determined by the temperature.
If the photon gas is not Planckian, the second law of thermodynamics guarantees that interactions (between photons and other particles or even, at sufficiently high temperatures, between the photons themselves) will cause the photon energy distribution to change and approach the Planck distribution. In such an approach to thermodynamic equilibrium, photons are created or annihilated in the right numbers and with the right energies to fill the cavity with a Planck distribution until they reach the equilibrium temperature. It is as if the gas is a mixture of sub-gases, one for every band of wavelengths, and each sub-gas eventually attains the common temperature.
The quantity $B_\nu(\nu, T)$ is the spectral radiance as a function of temperature and frequency. It has units of W·m⁻²·sr⁻¹·Hz⁻¹ in the SI system. An infinitesimal amount of power is radiated in the direction described by the angle from the surface normal from infinitesimal surface area into infinitesimal solid angle in an infinitesimal frequency band of width centered on frequency . The total power radiated into any solid angle is the integral of $B_\nu(\nu, T)$ over those three quantities, and is given by the Stefan–Boltzmann law. The spectral radiance of Planckian radiation from a black body has the same value for every direction and angle of polarization, and so the black body is said to be a Lambertian radiator.
Different forms
Planck's law can be encountered in several forms depending on the conventions and preferences of different scientific fields. The various forms of the law for spectral radiance are summarized in the table below. Forms on the left are most often encountered in experimental fields, while those on the right are most often encountered in theoretical fields.
In the fractional bandwidth formulation, the spectral variable is the logarithm of the frequency (or of the wavelength), and the integration is with respect to $\ln\nu$.
Planck's law can also be written in terms of the spectral energy density ($u$) by multiplying $B$ by $4\pi/c$:
$$u_\nu(\nu, T) = \frac{4\pi}{c}\,B_\nu(\nu, T) = \frac{8\pi h\nu^3}{c^3}\,\frac{1}{e^{h\nu/k_\mathrm{B}T} - 1}.$$
These distributions represent the spectral radiance of blackbodies—the power emitted from the emitting surface, per unit projected area of emitting surface, per unit solid angle, per spectral unit (frequency, wavelength, wavenumber or their angular equivalents, or fractional frequency or wavelength). Since the radiance is isotropic (i.e. independent of direction), the power emitted at an angle to the normal is proportional to the projected area, and therefore to the cosine of that angle as per Lambert's cosine law, and is unpolarized.
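As a brief check of the factor $4\pi/c$ relating radiance and energy density noted above, here is a small sketch (spectral_energy_density is an illustrative name; the chosen frequency and temperature are arbitrary).

```python
import numpy as np
from scipy.constants import h, c, k, pi

def planck_nu(nu, T):
    """Spectral radiance B_nu in W · m^-2 · sr^-1 · Hz^-1."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def spectral_energy_density(nu, T):
    """u_nu in J · m^-3 · Hz^-1, obtained from the radiance through the factor 4*pi/c."""
    return (4 * pi / c) * planck_nu(nu, T)

nu, T = 5e14, 5778.0   # a frequency near the solar peak, and an arbitrary temperature
direct = (8 * pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))
print(np.isclose(spectral_energy_density(nu, T), direct))   # True: the two expressions agree
```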
Correspondence between spectral variable forms
Different spectral variables require different corresponding forms of expression of the law. In general, one may not convert between the various forms of Planck's law simply by substituting one variable for another, because this would not take into account that the different forms have different units. Wavelength and frequency units are reciprocal.
Corresponding forms of expression are related because they express one and the same physical fact: for a particular physical spectral increment, a corresponding particular physical energy increment is radiated.
This is so whether it is expressed in terms of an increment of frequency, $d\nu$, or, correspondingly, of wavelength, $d\lambda$, or of fractional bandwidth, $d\nu/\nu$ or $d\lambda/\lambda$. Introduction of a minus sign can indicate that an increment of frequency corresponds with a decrement of wavelength.
In order to convert the corresponding forms so that they express the same quantity in the same units, we multiply by the spectral increment. Then, for a particular spectral increment, the particular physical energy increment may be written
$$B_\lambda(\lambda, T)\,d\lambda = -B_\nu(\nu, T)\,d\nu,$$
which leads to
$$B_\lambda(\lambda, T) = -\frac{d\nu}{d\lambda}\,B_\nu(\nu, T).$$
Also, $\nu = c/\lambda$, so that $d\nu/d\lambda = -c/\lambda^2$. Substitution gives the correspondence between the frequency and wavelength forms, with their different dimensions and units.
Consequently,
$$B_\lambda(\lambda, T) = \frac{c}{\lambda^2}\,B_\nu\!\left(\frac{c}{\lambda}, T\right).$$
Evidently, the location of the peak of the spectral distribution for Planck's law depends on the choice of spectral variable. Nevertheless, in a manner of speaking, this formula means that the shape of the spectral distribution is independent of temperature, according to Wien's displacement law, as detailed below in § Properties §§ Percentiles.
The fractional bandwidth form is related to the other forms by
$$\nu\,B_\nu(\nu, T) = \lambda\,B_\lambda(\lambda, T),$$
since an increment of fractional bandwidth satisfies $d\nu/\nu = -d\lambda/\lambda$.
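The change of spectral variable can be verified numerically. The following sketch (illustrative function names, arbitrary grid and temperature) confirms that $B_\lambda = (c/\lambda^2)\,B_\nu(c/\lambda, T)$ and that $\nu B_\nu = \lambda B_\lambda$, and shows that the two forms nevertheless peak at different wavelengths.

```python
import numpy as np
from scipy.constants import h, c, k

def planck_nu(nu, T):
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

def planck_lam(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

T = 5778.0
lam = np.linspace(100e-9, 10e-6, 200_000)   # wavelength grid, 0.1 to 10 micrometres

# Same physical quantity: B_lambda equals B_nu(c/lambda) times |dnu/dlambda| = c/lambda^2,
# and the fractional-bandwidth combination nu*B_nu = lambda*B_lambda is the same in both variables.
assert np.allclose(planck_lam(lam, T), planck_nu(c / lam, T) * c / lam**2)
assert np.allclose((c / lam) * planck_nu(c / lam, T), lam * planck_lam(lam, T))

# The peaks nevertheless fall at different wavelengths (the two common forms of Wien's displacement law).
print(lam[np.argmax(planck_lam(lam, T))] * T)      # ~2.90e-3 m·K, wavelength-form peak
print(lam[np.argmax(planck_nu(c / lam, T))] * T)   # ~5.10e-3 m·K, frequency-form peak
```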
First and second radiation constants
In the above variants of Planck's law, the wavelength and wavenumber variants use the terms $2hc^2$ and $hc/k_\mathrm{B}$, which comprise physical constants only. Consequently, these terms can be considered as physical constants themselves, and are therefore referred to as the first radiation constant $c_{1L}$ and the second radiation constant $c_2$, with
$$c_{1L} = 2hc^2$$
and
$$c_2 = \frac{hc}{k_\mathrm{B}}.$$
Using the radiation constants, the wavelength variant of Planck's law can be simplified to
$$L(\lambda, T) = \frac{c_{1L}}{\lambda^5}\,\frac{1}{e^{c_2/(\lambda T)} - 1},$$
and the wavenumber variant can be simplified correspondingly.
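The numerical values of the radiation constants follow directly from $h$, $c$ and $k_\mathrm{B}$. This sketch computes them and writes the wavelength form of the law with them (the split between $c_{1L}$ for radiance and $c_1 = \pi c_{1L}$ for exitance is the usual convention assumed here).

```python
import numpy as np
from scipy.constants import h, c, k, pi

c1 = 2 * pi * h * c**2    # first radiation constant (exitance form), ~3.742e-16 W·m^2
c1L = 2 * h * c**2        # radiance form, c1L = c1 / pi, ~1.191e-16 W·m^2·sr^-1
c2 = h * c / k            # second radiation constant, ~1.439e-2 m·K

def planck_radiance_lam(lam, T):
    """Wavelength form of the law written with the radiation constants."""
    return (c1L / lam**5) / np.expm1(c2 / (lam * T))

print(c1, c1L, c2)
print(planck_radiance_lam(500e-9, 5778.0))   # W · m^-2 · sr^-1 · m^-1 near the solar peak
```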
$L$ is used here instead of $B$ because it is the SI symbol for spectral radiance; the L in $c_{1L}$ refers to that. This reference is necessary because Planck's law can be reformulated to give spectral radiant exitance $M(\lambda, T)$ rather than spectral radiance $L(\lambda, T)$, in which case $c_1$ replaces $c_{1L}$, with
$$c_1 = 2\pi hc^2 = \pi c_{1L},$$
so that Planck's law for spectral radiant exitance can be written as
$$M(\lambda, T) = \frac{c_1}{\lambda^5}\,\frac{1}{e^{c_2/(\lambda T)} - 1}.$$
As measuring techniques have improved, the General Conference on Weights and Measures has revised its estimate of $c_2$.
Physics
Planck's law describes the unique and characteristic spectral distribution for electromagnetic radiation in thermodynamic equilibrium, when there is no net flow of matter or energy. Its physics is most easily understood by considering the radiation in a cavity with rigid opaque walls. Motion of the walls can affect the radiation. If the walls are not opaque, then the thermodynamic equilibrium is not isolated. It is of interest to explain how the thermodynamic equilibrium is attained. There are two main cases: (a) when the approach to thermodynamic equilibrium is in the presence of matter, when the walls of the cavity are imperfectly reflective for every wavelength or when the walls are perfectly reflective while the cavity contains a small black body (this was the main case considered by Planck); or (b) when the approach to equilibrium is in the absence of matter, when the walls are perfectly reflective for all wavelengths and the cavity contains no matter. For matter not enclosed in such a cavity, thermal radiation can be approximately explained by appropriate use of Planck's law.
Classical physics led, via the equipartition theorem, to the ultraviolet catastrophe, a prediction that the total blackbody radiation intensity was infinite. If supplemented by the classically unjustifiable assumption that for some reason the radiation is finite, classical thermodynamics provides an account of some aspects of the Planck distribution, such as the Stefan–Boltzmann law, and the Wien displacement law. For the case of the presence of matter, quantum mechanics provides a good account, as found below in the section headed Einstein coefficients. This was the case considered by Einstein, and is nowadays used for quantum optics. For the case of the absence of matter, quantum field theory is necessary, because non-relativistic quantum mechanics with fixed particle numbers does not provide a sufficient account.
Photons
Quantum theoretical explanation of Planck's law views the radiation as a gas of massless, uncharged, bosonic particles, namely photons, in thermodynamic equilibrium. Photons are viewed as the carriers of the electromagnetic interaction between electrically charged elementary particles. Photon numbers are not conserved. Photons are created or annihilated in the right numbers and with the right energies to fill the cavity with the Planck distribution. For a photon gas in thermodynamic equilibrium, the internal energy density is entirely determined by the temperature; moreover, the pressure is entirely determined by the internal energy density. This is unlike the case of thermodynamic equilibrium for material gases, for which the internal energy is determined not only by the temperature, but also, independently, by the respective numbers of the different molecules, and independently again, by the specific characteristics of the different molecules. For different material gases at given temperature, the pressure and internal energy density can vary independently, because different molecules can carry independently different excitation energies.
Planck's law arises as a limit of the Bose–Einstein distribution, the energy distribution describing non-interactive bosons in thermodynamic equilibrium. In the case of massless bosons such as photons and gluons, the chemical potential is zero and the Bose–Einstein distribution reduces to the Planck distribution. There is another fundamental equilibrium energy distribution: the Fermi–Dirac distribution, which describes fermions, such as electrons, in thermal equilibrium. The two distributions differ because multiple bosons can occupy the same quantum state, while multiple fermions cannot. At low densities, the number of available quantum states per particle is large, and this difference becomes irrelevant. In the low density limit, the Bose–Einstein and the Fermi–Dirac distribution each reduce to the Maxwell–Boltzmann distribution.
Kirchhoff's law of thermal radiation
Kirchhoff's law of thermal radiation is a succinct account of a complicated physical situation. The following is an introductory sketch of that situation, and is very far from being a rigorous physical argument. The purpose here is only to summarize the main physical factors in the situation, and the main conclusions.
Spectral dependence of thermal radiation
There is a difference between conductive heat transfer and radiative heat transfer. Radiative heat transfer can be filtered to pass only a definite band of radiative frequencies.
It is generally known that the hotter a body becomes, the more heat it radiates at every frequency.
In a cavity in an opaque body with rigid walls that are not perfectly reflective at any frequency, in thermodynamic equilibrium, there is only one temperature, and it must be shared in common by the radiation of every frequency.
One may imagine two such cavities, each in its own isolated radiative and thermodynamic equilibrium. One may imagine an optical device that allows radiative heat transfer between the two cavities, filtered to pass only a definite band of radiative frequencies. If the values of the spectral radiances of the radiations in the cavities differ in that frequency band, heat may be expected to pass from the hotter to the colder. One might propose to use such a filtered transfer of heat in such a band to drive a heat engine. If the two bodies are at the same temperature, the second law of thermodynamics does not allow the heat engine to work. It may be inferred that for a temperature common to the two bodies, the values of the spectral radiances in the pass-band must also be common. This must hold for every frequency band. This became clear to Balfour Stewart and later to Kirchhoff. Balfour Stewart found experimentally that of all surfaces, one of lamp-black emitted the greatest amount of thermal radiation for every quality of radiation, judged by various filters.
Thinking theoretically, Kirchhoff went a little further and pointed out that this implied that the spectral radiance, as a function of radiative frequency, of any such cavity in thermodynamic equilibrium must be a unique universal function of temperature. He postulated an ideal black body that interfaced with its surrounds in just such a way as to absorb all the radiation that falls on it. By the Helmholtz reciprocity principle, radiation from the interior of such a body would pass unimpeded directly to its surroundings without reflection at the interface. In thermodynamic equilibrium, the thermal radiation emitted from such a body would have that unique universal spectral radiance as a function of temperature. This insight is the root of Kirchhoff's law of thermal radiation.
Relation between absorptivity and emissivity
One may imagine a small homogeneous spherical material body labeled at a temperature , lying in a radiation field within a large cavity with walls of material labeled at a temperature . The body emits its own thermal radiation. At a particular frequency , the radiation emitted from a particular cross-section through the centre of in one sense in a direction normal to that cross-section may be denoted , characteristically for the material of . At that frequency , the radiative power from the walls into that cross-section in the opposite sense in that direction may be denoted , for the wall temperature . For the material of , defining the absorptivity as the fraction of that incident radiation absorbed by , that incident energy is absorbed at a rate .
The rate of accumulation of energy in one sense into the cross-section of the body can then be expressed
Kirchhoff's seminal insight, mentioned just above, was that, at thermodynamic equilibrium at temperature , there exists a unique universal radiative distribution, nowadays denoted , that is independent of the chemical characteristics of the materials and , that leads to a very valuable understanding of the radiative exchange equilibrium of any body at all, as follows.
When there is thermodynamic equilibrium at temperature , the cavity radiation from the walls has that unique universal value, so that . Further, one may define the emissivity of the material of the body just so that at thermodynamic equilibrium at temperature , one has .
When thermal equilibrium prevails at temperature , the rate of accumulation of energy vanishes so that . It follows that in thermodynamic equilibrium, when ,
Kirchhoff pointed out that it follows that in thermodynamic equilibrium, when ,
Introducing the special notation for the absorptivity of material at thermodynamic equilibrium at temperature (justified by a discovery of Einstein, as indicated below), one further has the equality
at thermodynamic equilibrium.
The equality of absorptivity and emissivity here demonstrated is specific for thermodynamic equilibrium at temperature and is in general not to be expected to hold when conditions of thermodynamic equilibrium do not hold. The emissivity and absorptivity are each separately properties of the molecules of the material but they depend differently upon the distributions of states of molecular excitation on the occasion, because of a phenomenon known as "stimulated emission", that was discovered by Einstein. On occasions when the material is in thermodynamic equilibrium or in a state known as local thermodynamic equilibrium, the emissivity and absorptivity become equal. Very strong incident radiation or other factors can disrupt thermodynamic equilibrium or local thermodynamic equilibrium. Local thermodynamic equilibrium in a gas means that molecular collisions far outweigh light emission and absorption in determining the distributions of states of molecular excitation.
Kirchhoff pointed out that he did not know the precise character of , but he thought it important that it should be found out. Four decades after Kirchhoff's insight of the general principles of its existence and character, Planck's contribution was to determine the precise mathematical expression of that equilibrium distribution .
Black body
In physics, one considers an ideal black body, here labeled , defined as one that completely absorbs all of the electromagnetic radiation falling upon it at every frequency (hence the term "black"). According to Kirchhoff's law of thermal radiation, this entails that, for every frequency , at thermodynamic equilibrium at temperature , one has , so that the thermal radiation from a black body is always equal to the full amount specified by Planck's law. No physical body can emit thermal radiation that exceeds that of a black body, since if it were in equilibrium with a radiation field, it would be emitting more energy than was incident upon it.
Though perfectly black materials do not exist, in practice a black surface can be accurately approximated. As to its material interior, a body of condensed matter, liquid, solid, or plasma, with a definite interface with its surroundings, is completely black to radiation if it is completely opaque. That means that it absorbs all of the radiation that penetrates the interface of the body with its surroundings, and enters the body. This is not too difficult to achieve in practice. On the other hand, a perfectly black interface is not found in nature. A perfectly black interface reflects no radiation, but transmits all that falls on it, from either side. The best practical way to make an effectively black interface is to simulate an 'interface' by a small hole in the wall of a large cavity in a completely opaque rigid body of material that does not reflect perfectly at any frequency, with its walls at a controlled temperature. Beyond these requirements, the component material of the walls is unrestricted. Radiation entering the hole has almost no possibility of escaping the cavity without being absorbed by multiple impacts with its walls.
Lambert's cosine law
As explained by Planck, a radiating body has an interior consisting of matter, and an interface with its contiguous neighbouring material medium, which is usually the medium from within which the radiation from the surface of the body is observed. The interface is not composed of physical matter but is a theoretical conception, a mathematical two-dimensional surface, a joint property of the two contiguous media, strictly speaking belonging to neither separately. Such an interface can neither absorb nor emit, because it is not composed of physical matter; but it is the site of reflection and transmission of radiation, because it is a surface of discontinuity of optical properties. The reflection and transmission of radiation at the interface obey the Stokes–Helmholtz reciprocity principle.
At any point in the interior of a black body located inside a cavity in thermodynamic equilibrium at temperature the radiation is homogeneous, isotropic and unpolarized. A black body absorbs all and reflects none of the electromagnetic radiation incident upon it. According to the Helmholtz reciprocity principle, radiation from the interior of a black body is not reflected at its surface, but is fully transmitted to its exterior. Because of the isotropy of the radiation in the body's interior, the spectral radiance of radiation transmitted from its interior to its exterior through its surface is independent of direction.
This is expressed by saying that radiation from the surface of a black body in thermodynamic equilibrium obeys Lambert's cosine law. This means that the spectral flux from a given infinitesimal element of area of the actual emitting surface of the black body, detected from a given direction that makes an angle with the normal to the actual emitting surface at , into an element of solid angle of detection centred on the direction indicated by , in an element of frequency bandwidth , can be represented as
where denotes the flux, per unit area per unit frequency per unit solid angle, that area would show if it were measured in its normal direction .
The factor is present because the area to which the spectral radiance refers directly is the projection, of the actual emitting surface area, onto a plane perpendicular to the direction indicated by . This is the reason for the name cosine law.
Taking into account the independence of direction of the spectral radiance of radiation from the surface of a black body in thermodynamic equilibrium, one has and so
Thus Lambert's cosine law expresses the independence of direction of the spectral radiance of the surface of a black body in thermodynamic equilibrium.
Stefan–Boltzmann law
The total power emitted per unit area at the surface of a black body () may be found by integrating the black body spectral flux found from Lambert's law over all frequencies, and over the solid angles corresponding to a hemisphere () above the surface.
The infinitesimal solid angle can be expressed in spherical polar coordinates: $d\Omega = \sin\theta\,d\theta\,d\varphi$.
So that:
$$\frac{P}{A} = \int_0^\infty d\nu \int_0^{2\pi} d\varphi \int_0^{\pi/2} d\theta\; B_\nu(\nu, T)\,\cos\theta\,\sin\theta = \sigma T^4,$$
where
$$\sigma = \frac{2\pi^5 k_\mathrm{B}^4}{15\,c^2 h^3} \approx 5.670 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}$$
is known as the Stefan–Boltzmann constant.
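The Stefan–Boltzmann integration can be checked numerically. This sketch (quadrature range and temperature are arbitrary choices) substitutes $x = h\nu/k_\mathrm{B}T$, integrates, and multiplies by the hemispheric factor $\pi$ from Lambert's cosine law.

```python
import numpy as np
from scipy.constants import h, c, k, pi, Stefan_Boltzmann
from scipy.integrate import quad

T = 1000.0   # kelvin; any value works, since the result scales as T**4

# Substituting x = h*nu/(k*T), the frequency integral of B_nu becomes
# (2 k^4 T^4 / (h^3 c^2)) times the integral of x^3/(e^x - 1), whose exact value is pi^4/15.
dimensionless, _ = quad(lambda x: x**3 / np.expm1(x), 1e-9, 100.0)
radiance_integral = (2 * k**4 * T**4 / (h**3 * c**2)) * dimensionless

# Integrating cos(theta)*sin(theta) over the outward hemisphere contributes a factor pi
# (Lambert's cosine law), giving the total power per unit area.
power_per_area = pi * radiance_integral

print(power_per_area / T**4)                   # ~5.670e-8 W m^-2 K^-4
print(2 * pi**5 * k**4 / (15 * c**2 * h**3))   # closed form for sigma, same value
print(Stefan_Boltzmann)                        # SciPy's CODATA value, same
```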
Radiative transfer
The equation of radiative transfer describes the way in which radiation is affected as it travels through a material medium. For the special case in which the material medium is in thermodynamic equilibrium in the neighborhood of a point in the medium, Planck's law is of special importance.
For simplicity, we can consider the linear steady state, without scattering. The equation of radiative transfer states that for a beam of light going through a small distance , energy is conserved: The change in the (spectral) radiance of that beam () is equal to the amount removed by the material medium plus the amount gained from the material medium. If the radiation field is in equilibrium with the material medium, these two contributions will be equal. The material medium will have a certain emission coefficient and absorption coefficient.
The absorption coefficient is the fractional change in the intensity of the light beam as it travels the distance , and has units of length−1. It is composed of two parts, the decrease due to absorption and the increase due to stimulated emission. Stimulated emission is emission by the material body which is caused by and is proportional to the incoming radiation. It is included in the absorption term because, like absorption, it is proportional to the intensity of the incoming radiation. Since the amount of absorption will generally vary linearly as the density of the material, we may define a "mass absorption coefficient" which is a property of the material itself. The change in intensity of a light beam due to absorption as it traverses a small distance will then be
The "mass emission coefficient" is equal to the radiance per unit volume of a small volume element divided by its mass (since, as for the mass absorption coefficient, the emission is proportional to the emitting mass) and has units of power⋅solid angle−1⋅frequency−1⋅density−1. Like the mass absorption coefficient, it too is a property of the material itself. The change in a light beam as it traverses a small distance will then be
The equation of radiative transfer will then be the sum of these two contributions:
$$\frac{dI_\nu}{ds} = j_\nu \rho - \kappa_\nu \rho\, I_\nu.$$
If the radiation field is in equilibrium with the material medium, then the radiation will be homogeneous (independent of position) so that $dI_\nu/ds = 0$ and:
$$\kappa_\nu \rho\, I_\nu = j_\nu \rho,$$
which is another statement of Kirchhoff's law, relating two material properties of the medium, and which yields the radiative transfer equation at a point around which the medium is in thermodynamic equilibrium:
$$\frac{dI_\nu}{ds} = \kappa_\nu \rho\,\bigl[B_\nu(\nu, T) - I_\nu\bigr].$$
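A small numerical illustration of this relaxation, under assumed values (the coefficient kappa_rho and the frequency are made-up numbers, not taken from the text): a beam entering a uniform medium held in thermodynamic equilibrium relaxes exponentially toward the Planck radiance.

```python
import numpy as np
from scipy.constants import h, c, k

def planck_nu(nu, T):
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))

nu, T = 3e14, 1500.0      # an arbitrary frequency and medium temperature
B = planck_nu(nu, T)      # the Planck radiance the beam should relax towards
kappa_rho = 2.0           # absorption coefficient times density, per metre (made-up value)

I = 0.0                   # incoming radiance, starting far from its equilibrium value
ds = 1e-3                 # path-length step in metres
for _ in range(10_000):   # integrate dI/ds = kappa_rho * (B - I) over 10 metres
    I += kappa_rho * (B - I) * ds

print(I / B)              # ~1: the beam has relaxed to the Planck radiance
```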
Einstein coefficients
The principle of detailed balance states that, at thermodynamic equilibrium, each elementary process is equilibrated by its reverse process.
In 1916, Albert Einstein applied this principle on an atomic level to the case of an atom radiating and absorbing radiation due to transitions between two particular energy levels, giving a deeper insight into the equation of radiative transfer and Kirchhoff's law for this type of radiation. If level 1 is the lower energy level with energy $E_1$, and level 2 is the upper energy level with energy $E_2$, then the frequency $\nu$ of the radiation radiated or absorbed will be determined by Bohr's frequency condition:
$$h\nu = E_2 - E_1.$$
If and are the number densities of the atom in states 1 and 2 respectively, then the rate of change of these densities in time will be due to three processes:
Spontaneous emission
Stimulated emission
Photo-absorption
where $u_\nu$ is the spectral energy density of the radiation field. The three parameters $A_{21}$, $B_{21}$ and $B_{12}$, known as the Einstein coefficients, are associated with the photon frequency $\nu$ produced by the transition between two energy levels (states). As a result, each line in a spectrum has its own set of associated coefficients. When the atoms and the radiation field are in equilibrium, the radiance will be given by Planck's law and, by the principle of detailed balance, the sum of these rates must be zero:
$$0 = A_{21} n_2 + B_{21} n_2 u_\nu - B_{12} n_1 u_\nu.$$
Since the atoms are also in equilibrium, the populations of the two levels are related by the Boltzmann factor:
$$\frac{n_2}{n_1} = \frac{g_2}{g_1}\,e^{-h\nu/k_\mathrm{B}T},$$
where $g_1$ and $g_2$ are the multiplicities of the respective energy levels. Combining the above two equations with the requirement that they be valid at any temperature yields two relationships between the Einstein coefficients:
$$\frac{A_{21}}{B_{21}} = \frac{8\pi h\nu^3}{c^3}, \qquad \frac{B_{21}}{B_{12}} = \frac{g_1}{g_2},$$
so that knowledge of one coefficient will yield the other two.
For the case of isotropic absorption and emission, the emission coefficient ($j_\nu$) and absorption coefficient ($\kappa_\nu$) defined in the radiative transfer section above can be expressed in terms of the Einstein coefficients. The relationships between the Einstein coefficients will yield the expression of Kirchhoff's law expressed in the Radiative transfer section above, namely that
$$\frac{j_\nu}{\kappa_\nu} = B_\nu(\nu, T).$$
These coefficients apply to both atoms and molecules.
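The detailed-balance argument can be checked numerically. This sketch assumes the common convention in which the B coefficients multiply the spectral energy density $u_\nu$ (so that $A_{21}/B_{21} = 8\pi h\nu^3/c^3$ and $g_1 B_{12} = g_2 B_{21}$), and verifies that, with Boltzmann-distributed populations, the net transition rate vanishes exactly when $u_\nu$ is the Planck spectral energy density.

```python
import numpy as np
from scipy.constants import h, c, k, pi

nu, T = 5e14, 3000.0    # transition frequency and temperature (arbitrary)
g1, g2 = 1, 3           # level multiplicities (illustrative)

B12 = 1.0                                  # absorption coefficient, arbitrary units
B21 = g1 * B12 / g2                        # from g1 * B12 = g2 * B21
A21 = (8 * pi * h * nu**3 / c**3) * B21    # spontaneous-emission coefficient

# Boltzmann-distributed populations of the two levels.
n1 = 1.0
n2 = n1 * (g2 / g1) * np.exp(-h * nu / (k * T))

# Planck spectral energy density at the transition frequency.
u = (8 * pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

net_rate = n2 * A21 + n2 * B21 * u - n1 * B12 * u   # emission minus absorption
print(net_rate / (n1 * B12 * u))                    # ~0 (to rounding): detailed balance holds
```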
Properties
Peaks
The distributions expressed per unit frequency, angular frequency, wavenumber or angular wavenumber peak at a photon energy of
$$E = \bigl[3 + W(-3e^{-3})\bigr]\,k_\mathrm{B}T \approx 2.821\,k_\mathrm{B}T,$$
where $W$ is the Lambert W function and $e$ is Euler's number.
However, the distribution per unit wavelength peaks at a different energy,
$$E = \bigl[5 + W(-5e^{-5})\bigr]\,k_\mathrm{B}T \approx 4.965\,k_\mathrm{B}T.$$
The reason for this is that, as mentioned above, one cannot go from (for example) the frequency form to the wavelength form simply by substituting $\nu$ by $c/\lambda$. In addition, one must also multiply by $|d\nu/d\lambda| = c/\lambda^2$, which shifts the peak of the distribution to higher energies. These peaks are the mode energy of a photon, when binned using equal-size bins of frequency or wavelength, respectively. Dividing $hc$ by these energy expressions gives the wavelength of the peak.
The spectral radiance at these peaks is given by corresponding closed-form expressions, proportional to $T^3$ for the frequency form and to $T^5$ for the wavelength form.
Meanwhile, the average energy of a photon from a blackbody is
$$E = \frac{\pi^4}{30\,\zeta(3)}\,k_\mathrm{B}T \approx 2.701\,k_\mathrm{B}T,$$
where $\zeta$ is the Riemann zeta function.
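These peak and mean energies can be evaluated directly; here is a sketch using SciPy's Lambert W and zeta functions (the temperature is chosen arbitrarily).

```python
import numpy as np
from scipy.special import lambertw, zeta
from scipy.constants import h, c, k

# Peak photon energy, in units of k_B*T, for the frequency-type and wavelength-type distributions.
x_nu = 3 + lambertw(-3 * np.exp(-3)).real    # ~2.821
x_lam = 5 + lambertw(-5 * np.exp(-5)).real   # ~4.965
print(x_nu, x_lam)

T = 5778.0
print(x_nu * k * T / h)          # peak frequency of B_nu, ~3.4e14 Hz
print(h * c / (x_lam * k * T))   # peak wavelength of B_lambda, ~5.0e-7 m (Wien displacement law)

# Mean photon energy of black-body radiation, in units of k_B*T.
print(np.pi**4 / (30 * zeta(3)))   # ~2.701
```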
Approximations
In the limit of low frequencies (i.e. long wavelengths), Planck's law becomes the Rayleigh–Jeans law
$$B_\nu(\nu, T) \approx \frac{2\nu^2 k_\mathrm{B}T}{c^2}$$
or
$$B_\lambda(\lambda, T) \approx \frac{2c\,k_\mathrm{B}T}{\lambda^4}.$$
The radiance increases as the square of the frequency, illustrating the ultraviolet catastrophe. In the limit of high frequencies (i.e. small wavelengths) Planck's law tends to the Wien approximation:
$$B_\nu(\nu, T) \approx \frac{2h\nu^3}{c^2}\,e^{-h\nu/k_\mathrm{B}T}$$
or
$$B_\lambda(\lambda, T) \approx \frac{2hc^2}{\lambda^5}\,e^{-hc/(\lambda k_\mathrm{B}T)}.$$
Percentiles
Wien's displacement law in its stronger form states that the shape of Planck's law is independent of temperature. It is therefore possible to list the percentile points of the total radiation as well as the peaks for wavelength and frequency, in a form which gives the wavelength when divided by temperature . The second column of the following table lists the corresponding values of , that is, those values of for which the wavelength is micrometers at the radiance percentile point given by the corresponding entry in the first column.
That is, 0.01% of the radiation is at a wavelength below μm, 20% below , etc. The wavelength and frequency peaks are in bold and occur at 25.0% and 64.6% respectively. The 41.8% point is the wavelength-frequency-neutral peak (i.e. the peak in power per unit change in logarithm of wavelength or frequency). These are the points at which the respective Planck-law functions , and , respectively, divided by attain their maxima. The much smaller gap in ratio of wavelengths between 0.1% and 0.01% (1110 is 22% more than 910) than between 99.9% and 99.99% (113374 is 120% more than 51613) reflects the exponential decay of energy at short wavelengths (left end) and polynomial decay at long.
Which peak to use depends on the application. The conventional choice is the wavelength peak at 25.0% given by Wien's displacement law in its weak form. For some purposes the median or 50% point dividing the total radiation into two halves may be more suitable. The latter is closer to the frequency peak than to the wavelength peak because the radiance drops exponentially at short wavelengths and only polynomially at long. The neutral peak occurs at a shorter wavelength than the median for the same reason.
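The percentile points can be recomputed by cumulative integration of the wavelength form of the law. The sketch below (arbitrary grid and temperature; the $\lambda T$ products used are the standard Wien-type constants for the three peaks) reproduces, for instance, the 25.0%, 41.8% and 64.6% figures quoted above.

```python
import numpy as np
from scipy.constants import h, c, k

def planck_lam(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

T = 6000.0                                    # any temperature; the percentiles depend only on lambda*T
lam = np.linspace(20e-9, 400e-6, 2_000_000)   # wavelength grid spanning essentially all the power
cumulative = np.cumsum(planck_lam(lam, T))
cumulative /= cumulative[-1]                  # cumulative fraction of the total radiated power

def fraction_below(lam_T):
    """Fraction of the total power at wavelengths below lam_T / T."""
    return np.interp(lam_T / T, lam, cumulative)

print(fraction_below(2.898e-3))   # ~0.250: wavelength peak (Wien displacement law, weak form)
print(fraction_below(3.670e-3))   # ~0.418: wavelength-frequency-neutral peak
print(fraction_below(5.100e-3))   # ~0.646: frequency peak
```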
Comparison to solar spectrum
Solar radiation can be compared to black-body radiation at about 5778 K (but see graph). The table on the right shows how the radiation of a black body at this temperature is partitioned, and also how sunlight is partitioned for comparison. Also for comparison a planet modeled as a black body is shown, radiating at a nominal 288 K (15 °C) as a representative value of the Earth's highly variable temperature. Its wavelengths are more than twenty times those of the Sun, tabulated in the third column in micrometers (thousands of nanometers).
That is, only 1% of the Sun's radiation is at wavelengths shorter than 296 nm, and only 1% at longer than 3728 nm. Expressed in micrometers this puts 98% of the Sun's radiation in the range from 0.296 to 3.728 μm. The corresponding 98% of energy radiated from a 288 K planet is from 5.03 to 79.5 μm, well above the range of solar radiation (or below if expressed in terms of frequencies instead of wavelengths ).
A consequence of this more-than-order-of-magnitude difference in wavelength between solar and planetary radiation is that filters designed to pass one and block the other are easy to construct. For example, windows fabricated of ordinary glass or transparent plastic pass at least 80% of the incoming 5778 K solar radiation, which is below 1.2 μm in wavelength, while blocking over 99% of the outgoing 288 K thermal radiation from 5 μm upwards, wavelengths at which most kinds of glass and plastic of construction-grade thickness are effectively opaque.
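The band fractions quoted in this section can be reproduced by integrating the Planck spectrum over wavelength; a sketch follows (band edges and temperatures as stated in the text; the total power is taken from the Stefan–Boltzmann law).

```python
import numpy as np
from scipy.constants import h, c, k, pi, Stefan_Boltzmann as sigma
from scipy.integrate import quad

def planck_lam(lam, T):
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

def band_fraction(lam1, lam2, T):
    """Fraction of the total black-body power emitted between wavelengths lam1 and lam2."""
    band, _ = quad(planck_lam, lam1, lam2, args=(T,), limit=200)
    return band / (sigma * T**4 / pi)   # the total radiance integral equals sigma*T^4/pi

print(band_fraction(296e-9, 3728e-9, 5778.0))   # ~0.98 of the solar black-body power
print(band_fraction(5.03e-6, 79.5e-6, 288.0))   # ~0.98 of a 288 K planet's power
print(band_fraction(100e-9, 1.2e-6, 5778.0))    # ~0.80: the part of the solar spectrum below 1.2 um
```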
The Sun's radiation is that arriving at the top of the atmosphere (TOA). As can be read from the table, radiation below 400 nm, or ultraviolet, is about 8%, while that above 700 nm, or infrared, starts at about the 48% point and so accounts for 52% of the total. Hence only 40% of the TOA insolation is visible to the human eye. The atmosphere shifts these percentages substantially in favor of visible light as it absorbs most of the ultraviolet and significant amounts of infrared.
Derivations
Photon gas
Consider a cube of side $L$ with conducting walls filled with electromagnetic radiation in thermal equilibrium at temperature $T$. If there is a small hole in one of the walls, the radiation emitted from the hole will be characteristic of a perfect black body. We will first calculate the spectral energy density within the cavity and then determine the spectral radiance of the emitted radiation.
At the walls of the cube, the parallel component of the electric field and the orthogonal component of the magnetic field must vanish. Analogous to the wave function of a particle in a box, one finds that the fields are superpositions of periodic functions. The three wavelengths $\lambda_1$, $\lambda_2$ and $\lambda_3$, in the three directions orthogonal to the walls, can be:
$$\lambda_i = \frac{2L}{n_i},$$
where the $n_i$ are positive integers. For each set of integers $n_1, n_2, n_3$ there are two linearly independent solutions (known as modes). The two modes for each set of integers correspond to the two polarization states of the photon, which has a spin of 1. According to quantum theory, the total energy of a mode is given by:
$$E_{n_1,n_2,n_3}(r) = \left(r + \tfrac{1}{2}\right)\frac{hc}{2L}\sqrt{n_1^2 + n_2^2 + n_3^2}.$$
The number $r$ can be interpreted as the number of photons in the mode. For $r = 0$ the energy of the mode is not zero. This vacuum energy of the electromagnetic field is responsible for the Casimir effect. In the following we will calculate the internal energy of the box at absolute temperature $T$.
According to statistical mechanics, the equilibrium probability distribution over the energy levels of a particular mode is given by:where we use the reciprocal temperatureThe denominator , is the partition function of a single mode. It makes properly normalized, and can be evaluated aswith
being the energy of a single photon. The average energy in a mode can be obtained from the partition function:This formula, apart from the first vacuum energy term, is a special case of the general formula for particles obeying Bose–Einstein statistics. Since there is no restriction on the total number of photons, the chemical potential is zero.
If we measure the energy relative to the ground state, the total energy in the box follows by summing over all allowed single photon states. This can be done exactly in the thermodynamic limit as approaches infinity. In this limit, becomes continuous and we can then integrate over this parameter. To calculate the energy in the box in this way, we need to evaluate how many photon states there are in a given energy range. If we write the total number of single photon states with energies between and as , where is the density of states (which is evaluated below), then the total energy is given by
To calculate the density of states we rewrite equation () as follows:where is the norm of the vector .
For every vector with integer components larger than or equal to zero, there are two photon states. This means that the number of photon states in a certain region of -space is twice the volume of that region. An energy range of corresponds to shell of thickness in -space. Because the components of have to be positive, this shell spans an octant of a sphere. The number of photon states , in an energy range , is thus given by:Inserting this in Eq. () and dividing by volume gives the total energy densitywhere the frequency-dependent spectral energy density is given bySince the radiation is the same in all directions, and propagates at the speed of light, the spectral radiance of radiation exiting the small hole iswhich yields the Planck's lawOther forms of the law can be obtained by change of variables in the total energy integral. The above derivation is based on .
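As a numerical sketch of the final step of this derivation, integrating the spectral energy density obtained from the mode counting reproduces the familiar total energy density $u = (4\sigma/c)\,T^4$ (grid choices are arbitrary).

```python
import numpy as np
from scipy.constants import h, c, k, pi, Stefan_Boltzmann as sigma

def u_nu(nu, T):
    """Spectral energy density from the mode counting: 8*pi*h*nu^3/c^3 / (exp(h*nu/(k*T)) - 1)."""
    return (8 * pi * h * nu**3 / c**3) / np.expm1(h * nu / (k * T))

T = 300.0
nu = np.linspace(1e10, 2e14, 2_000_000)            # frequency grid; the peak is near 1.8e13 Hz at 300 K
total_u = np.sum(u_nu(nu, T)) * (nu[1] - nu[0])    # simple rectangle-rule integral

print(total_u)                 # J m^-3
print(4 * sigma * T**4 / c)    # the radiation constant a = 4*sigma/c times T^4, same value
```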
Dipole approximation and Einstein coefficients
For the non-degenerate case, the A and B coefficients can be calculated using the dipole approximation in time-dependent perturbation theory in quantum mechanics. The calculation of A also requires second quantization, since semi-classical theory cannot explain spontaneous emission, which does not go to zero as the perturbing field goes to zero. The transition rates hence calculated are (in SI units):
Note that the transition-rate formula depends on the dipole moment operator. For higher-order approximations, it involves the quadrupole moment and other similar terms. The A and B coefficients (which correspond to the angular frequency energy distribution) are hence:
where , and the A and B coefficients satisfy the following ratios for the non-degenerate case:
$$\frac{A_{21}}{B_{21}} = \frac{\hbar\omega^3}{\pi^2 c^3} \qquad \text{and} \qquad B_{12} = B_{21}.$$
Another useful ratio comes from the Maxwell–Boltzmann distribution, which says that the number of particles in an energy level is proportional to the exponential of the negative of that energy divided by $k_\mathrm{B}T$. Mathematically:
$$\frac{n_2}{n_1} = e^{-(E_2 - E_1)/k_\mathrm{B}T} = e^{-h\nu/k_\mathrm{B}T},$$
where $n_1$ and $n_2$ are the numbers of atoms occupying the energy levels $E_1$ and $E_2$ respectively, with $E_2 > E_1$. Then, using:
Solving for the spectral energy density under the equilibrium condition $dn_1/dt = dn_2/dt = 0$, and using the derived ratios, we get Planck's law:
$$u_\omega(\omega, T) = \frac{\hbar\omega^3}{\pi^2 c^3}\,\frac{1}{e^{\hbar\omega/k_\mathrm{B}T} - 1}.$$
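This last step can also be carried out symbolically. A sketch with SymPy, using the coefficient ratios stated above (angular-frequency energy-density convention, non-degenerate levels), recovers the Planck form.

```python
import sympy as sp

hbar, omega, c, k, T = sp.symbols('hbar omega c k T', positive=True)
B12, u = sp.symbols('B12 u', positive=True)

# Coefficient ratios for the non-degenerate case (angular-frequency energy-density convention).
B21 = B12
A21 = hbar * omega**3 / (sp.pi**2 * c**3) * B21

# Boltzmann ratio of the level populations, n2/n1 = exp(-hbar*omega/(k*T)).
ratio = sp.exp(-hbar * omega / (k * T))

# Equilibrium: upward and downward transition rates balance,
#   n1 * B12 * u = n2 * (A21 + B21 * u).
balance = sp.Eq(B12 * u, ratio * (A21 + B21 * u))
u_solution = sp.simplify(sp.solve(balance, u)[0])
print(u_solution)
# equivalent to hbar*omega**3 / (pi**2 * c**3 * (exp(hbar*omega/(k*T)) - 1))
```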
History
Balfour Stewart
In 1858, Balfour Stewart described his experiments on the thermal radiative emissive and absorptive powers of polished plates of various substances, compared with the powers of lamp-black surfaces, at the same temperature. Stewart chose lamp-black surfaces as his reference because of various previous experimental findings, especially those of Pierre Prevost and of John Leslie. He wrote "Lamp-black, which absorbs all the rays that fall upon it, and therefore possesses the greatest possible absorbing power, will possess also the greatest possible radiating power."
Stewart measured radiated power with a thermo-pile and sensitive galvanometer read with a microscope. He was concerned with selective thermal radiation, which he investigated with plates of substances that radiated and absorbed selectively for different qualities of radiation rather than maximally for all qualities of radiation. He discussed the experiments in terms of rays which could be reflected and refracted, and which obeyed the Helmholtz reciprocity principle (though he did not use an eponym for it). He did not in this paper mention that the qualities of the rays might be described by their wavelengths, nor did he use spectrally resolving apparatus such as prisms or diffraction gratings. His work was quantitative within these constraints. He made his measurements in a room temperature environment, and quickly so as to catch his bodies in a condition near the thermal equilibrium in which they had been prepared by heating to equilibrium with boiling water. His measurements confirmed that substances that emit and absorb selectively respect the principle of selective equality of emission and absorption at thermal equilibrium.
Stewart offered a theoretical proof that this should be the case separately for every selected quality of thermal radiation, but his mathematics was not rigorously valid. According to historian D. M. Siegel: "He was not a practitioner of the more sophisticated techniques of nineteenth-century mathematical physics; he did not even make use of the functional notation in dealing with spectral distributions." He made no mention of thermodynamics in this paper, though he did refer to conservation of vis viva. He proposed that his measurements implied that radiation was both absorbed and emitted by particles of matter throughout depths of the media in which it propagated. He applied the Helmholtz reciprocity principle to account for the material interface processes as distinct from the processes in the interior material. He concluded that his experiments showed that, in the interior of an enclosure in thermal equilibrium, the radiant heat, reflected and emitted combined, leaving any part of the surface, regardless of its substance, was the same as would have left that same portion of the surface if it had been composed of lamp-black. He did not mention the possibility of ideally perfectly reflective walls; in particular he noted that highly polished real physical metals absorb very slightly.
Gustav Kirchhoff
In 1859, not knowing of Stewart's work, Gustav Robert Kirchhoff reported the coincidence of the wavelengths of spectrally resolved lines of absorption and of emission of visible light. Importantly for thermal physics, he also observed that bright lines or dark lines were apparent depending on the temperature difference between emitter and absorber.
Kirchhoff then went on to consider bodies that emit and absorb heat radiation, in an opaque enclosure or cavity, in equilibrium at temperature .
Here is used a notation different from Kirchhoff's. Here, the emitting power denotes a dimensioned quantity, the total radiation emitted by a body labeled by index at temperature . The total absorption ratio of that body is dimensionless, the ratio of absorbed to incident radiation in the cavity at temperature . (In contrast with Balfour Stewart's, Kirchhoff's definition of his absorption ratio did not refer in particular to a lamp-black surface as the source of the incident radiation.) Thus the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power, because is dimensionless. Also here the wavelength-specific emitting power of the body at temperature is denoted by and the wavelength-specific absorption ratio by . Again, the ratio of emitting power to absorption ratio is a dimensioned quantity, with the dimensions of emitting power.
In a second report made in 1859, Kirchhoff announced a new general principle or law for which he offered a theoretical and mathematical proof, though he did not offer quantitative measurements of radiation powers. His theoretical proof was and still is considered by some writers to be invalid. His principle, however, has endured: it was that for heat rays of the same wavelength, in equilibrium at a given temperature, the wavelength-specific ratio of emitting power to absorption ratio has one and the same common value for all bodies that emit and absorb at that wavelength. In symbols, the law stated that the wavelength-specific ratio has one and the same value for all bodies, that is for all values of index . In this report there was no mention of black bodies.
In 1860, still not knowing of Stewart's measurements for selected qualities of radiation, Kirchhoff pointed out that it was long established experimentally that for total heat radiation, of unselected quality, emitted and absorbed by a body in equilibrium, the dimensioned total radiation ratio , has one and the same value common to all bodies, that is, for every value of the material index . Again without measurements of radiative powers or other new experimental data, Kirchhoff then offered a fresh theoretical proof of his new principle of the universality of the value of the wavelength-specific ratio at thermal equilibrium. His fresh theoretical proof was and still is considered by some writers to be invalid.
But more importantly, it relied on a new theoretical postulate of "perfectly black bodies", which is the reason why one speaks of Kirchhoff's law. Such black bodies showed complete absorption in their infinitely thin most superficial surface. They correspond to Balfour Stewart's reference bodies, with internal radiation, coated with lamp-black. They were not the more realistic perfectly black bodies later considered by Planck. Planck's black bodies radiated and absorbed only by the material in their interiors; their interfaces with contiguous media were only mathematical surfaces, capable neither of absorption nor emission, but only of reflecting and transmitting with refraction.
Kirchhoff's proof considered an arbitrary non-ideal body labeled as well as various perfect black bodies labeled . It required that the bodies be kept in a cavity in thermal equilibrium at temperature . His proof intended to show that the ratio was independent of the nature of the non-ideal body, however partly transparent or partly reflective it was.
His proof first argued that for wavelength and at temperature , at thermal equilibrium, all perfectly black bodies of the same size and shape have the one and the same common value of emissive power , with the dimensions of power. His proof noted that the dimensionless wavelength-specific absorption ratio of a perfectly black body is by definition exactly 1. Then for a perfectly black body, the wavelength-specific ratio of emissive power to absorption ratio is again just , with the dimensions of power. Kirchhoff considered, successively, thermal equilibrium with the arbitrary non-ideal body, and with a perfectly black body of the same size and shape, in place in his cavity in equilibrium at temperature . He argued that the flows of heat radiation must be the same in each case. Thus he argued that at thermal equilibrium the ratio was equal to , which may now be denoted , a continuous function, dependent only on at fixed temperature , and an increasing function of at fixed wavelength , at low temperatures vanishing for visible but not for longer wavelengths, with positive values for visible wavelengths at higher temperatures, which does not depend on the nature of the arbitrary non-ideal body. (Geometrical factors, taken into detailed account by Kirchhoff, have been ignored in the foregoing.)
Thus Kirchhoff's law of thermal radiation can be stated: For any material at all, radiating and absorbing in thermodynamic equilibrium at any given temperature , for every wavelength , the ratio of emissive power to absorptive ratio has one universal value, which is characteristic of a perfect black body, and is an emissive power which we here represent by . (For our notation , Kirchhoff's original notation was simply .)
Kirchhoff announced that the determination of the function was a problem of the highest importance, though he recognized that there would be experimental difficulties to be overcome. He supposed that like other functions that do not depend on the properties of individual bodies, it would be a simple function. That function has occasionally been called 'Kirchhoff's (emission, universal) function', though its precise mathematical form would not be known for another forty years, till it was discovered by Planck in 1900. The theoretical proof for Kirchhoff's universality principle was worked on and debated by various physicists over the same time, and later. Kirchhoff stated later in 1860 that his theoretical proof was better than Balfour Stewart's, and in some respects it was so. Kirchhoff's 1860 paper did not mention the second law of thermodynamics, and of course did not mention the concept of entropy which had not at that time been established. In a more considered account in a book in 1862, Kirchhoff mentioned the connection of his law with "Carnot's principle", which is a form of the second law.
According to Helge Kragh, "Quantum theory owes its origin to the study of thermal radiation, in particular to the "blackbody" radiation that Robert Kirchhoff had first defined in 1859–1860."
Empirical and theoretical ingredients for the scientific induction of Planck's law
In 1860, Kirchhoff predicted experimental difficulties for the empirical determination of the function that described the dependence of the black-body spectrum as a function only of temperature and wavelength. And so it turned out. It took some forty years of development of improved methods of measurement of electromagnetic radiation to get a reliable result.
In 1865, John Tyndall described radiation from electrically heated filaments and from carbon arcs as visible and invisible. Tyndall spectrally decomposed the radiation by use of a rock salt prism, which passed heat as well as visible rays, and measured the radiation intensity by means of a thermopile.
In 1880, André-Prosper-Paul Crova published a diagram of the three-dimensional appearance of the graph of the strength of thermal radiation as a function of wavelength and temperature. He determined the spectral variable by use of prisms. He analyzed the surface through what he called "isothermal" curves, sections for a single temperature, with a spectral variable on the abscissa and a power variable on the ordinate. He put smooth curves through his experimental data points. They had one peak at a spectral value characteristic for the temperature, and fell either side of it towards the horizontal axis. Such spectral sections are widely shown even today.
In a series of papers from 1881 to 1886, Samuel Pierpont Langley reported measurements of the spectrum of heat radiation, using diffraction gratings and prisms, and the most sensitive detectors that he could make. He reported that there was a peak intensity that increased with temperature, that the shape of the spectrum was not symmetrical about the peak, that there was a strong fall-off of intensity when the wavelength was shorter than an approximate cut-off value for each temperature, that the approximate cut-off wavelength decreased with increasing temperature, and that the wavelength of the peak intensity decreased with temperature, so that the intensity increased strongly with temperature for short wavelengths that were longer than the approximate cut-off for the temperature.
Having read Langley, in 1888, Russian physicist V.A. Michelson published a consideration of the idea that the unknown Kirchhoff radiation function could be explained physically and stated mathematically in terms of "complete irregularity of the vibrations of ... atoms". At this time, Planck was not studying radiation closely, and believed in neither atoms nor statistical physics. Michelson produced a formula for the spectrum for temperature:
where denotes specific radiative intensity at wavelength and temperature , and where and are empirical constants.
In 1898, Otto Lummer and Ferdinand Kurlbaum published an account of their cavity radiation source. Their design has been used largely unchanged for radiation measurements to the present day. It was a platinum box, divided by diaphragms, with its interior blackened with iron oxide. It was an important ingredient for the progressively improved measurements that led to the discovery of Planck's law. A version described in 1901 had its interior blackened with a mixture of chromium, nickel, and cobalt oxides.
The importance of the Lummer and Kurlbaum cavity radiation source was that it was an experimentally accessible source of black-body radiation, as distinct from radiation from a simply exposed incandescent solid body, which had been the nearest available experimental approximation to black-body radiation over a suitable range of temperatures. The simply exposed incandescent solid bodies, that had been used before, emitted radiation with departures from the black-body spectrum that made it impossible to find the true black-body spectrum from experiments.
Planck's views before the empirical facts led him to find his eventual law
Planck first turned his attention to the problem of black-body radiation in 1897.
Theoretical and empirical progress enabled Lummer and Pringsheim to write in 1899 that available experimental evidence was approximately consistent with the specific intensity law $C\lambda^{-5}e^{-c/(\lambda T)}$, where $C$ and $c$ denote empirically measurable constants, and where $\lambda$ and $T$ denote wavelength and temperature respectively. For theoretical reasons, Planck at that time accepted this formulation, which has an effective cut-off of short wavelengths.
Gustav Kirchhoff was Max Planck's teacher and surmised that there was a universal law for blackbody radiation; this became known as "Kirchhoff's challenge". Planck, a theorist, believed that Wilhelm Wien had discovered this law, and Planck expanded on Wien's work, presenting it in 1899 to the meeting of the German Physical Society. Experimentalists Otto Lummer, Ferdinand Kurlbaum, Ernst Pringsheim Sr., and Heinrich Rubens did experiments that appeared to support Wien's law, especially at higher frequencies (short wavelengths), which Planck so wholly endorsed at the German Physical Society that it began to be called the Wien–Planck law. However, by September 1900, the experimentalists had proven beyond a doubt that the Wien–Planck law failed at the longer wavelengths. They would present their data on October 19. Planck was informed by his friend Rubens and quickly created a formula within a few days. In June of that same year, Lord Rayleigh had created a formula that would work for long wavelengths (low frequencies), based on the widely accepted theory of equipartition. So Planck submitted a formula combining both Rayleigh's law (or a similar equipartition theory) and Wien's law, weighted toward one or the other depending on wavelength, to match the experimental data. However, although this equation worked, Planck himself said that unless he could explain the formula, derived from a "lucky intuition", as one of "true meaning" in physics, it did not have true significance. Planck explained that thereafter followed the hardest work of his life. Planck did not believe in atoms, nor did he think the second law of thermodynamics should be statistical, because probability does not provide an absolute answer and Boltzmann's entropy law rested on the hypothesis of atoms and was statistical. But Planck was unable to find a way to reconcile his black-body equation with continuous laws such as Maxwell's wave equations. So in what Planck called "an act of desperation", he turned to Boltzmann's atomic law of entropy, as it was the only one that made his equation work. Therefore, he used the Boltzmann constant k and his new constant h to explain the black-body radiation law, which became widely known through his published paper.
Finding the empirical law
Max Planck produced his law on 19 October 1900 as an improvement upon the Wien approximation, published in 1896 by Wilhelm Wien, which fit the experimental data at short wavelengths (high frequencies) but deviated from it at long wavelengths (low frequencies). In June 1900, based on heuristic theoretical considerations, Rayleigh had suggested a formula that he proposed might be checked experimentally. The suggestion was that the Stewart–Kirchhoff universal function might be of the form . This was not the celebrated Rayleigh–Jeans formula , which did not emerge until 1905, though it did reduce to the latter for long wavelengths, which are the relevant ones here. According to Klein, one may speculate that it is likely that Planck had seen this suggestion though he did not mention it in his papers of 1900 and 1901. Planck would have been aware of various other proposed formulas which had been offered. On 7 October 1900, Rubens told Planck that in the complementary domain (long wavelength, low frequency), and only there, Rayleigh's 1900 formula fitted the observed data well.
For long wavelengths, Rayleigh's 1900 heuristic formula approximately meant that energy was proportional to temperature, . It is known that and this leads to and thence to for long wavelengths. But for short wavelengths, the Wien formula leads to and thence to for short wavelengths. Planck perhaps patched together these two heuristic formulas, for long and for short wavelengths, to produce a formula
This led Planck to the formula
where Planck used the symbols and to denote empirical fitting constants.
Planck sent this result to Rubens, who compared it with his and Kurlbaum's observational data and found that it fitted for all wavelengths remarkably well. On 19 October 1900, Rubens and Kurlbaum briefly reported the fit to the data, and Planck added a short presentation to give a theoretical sketch to account for his formula. Within a week, Rubens and Kurlbaum gave a fuller report of their measurements confirming Planck's law. Their technique for spectral resolution of the longer wavelength radiation was called the residual ray method. The rays were repeatedly reflected from polished crystal surfaces, and the rays that made it all the way through the process were 'residual', and were of wavelengths preferentially reflected by crystals of suitably specific materials.
Trying to find a physical explanation of the law
Once Planck had discovered the empirically fitting function, he constructed a physical derivation of this law. His thinking revolved around entropy rather than being directly about temperature. Planck considered a cavity with perfectly reflective walls; inside the cavity, there are finitely many distinct but identically constituted resonant oscillatory bodies of definite magnitude, with several such oscillators at each of finitely many characteristic frequencies. These hypothetical oscillators were for Planck purely imaginary theoretical investigative probes, and he said of them that such oscillators do not need to "really exist somewhere in nature, provided their existence and their properties are consistent with the laws of thermodynamics and electrodynamics." Planck did not attribute any definite physical significance to his hypothesis of resonant oscillators but rather proposed it as a mathematical device that enabled him to derive a single expression for the black-body spectrum that matched the empirical data at all wavelengths. He tentatively mentioned the possible connection of such oscillators with atoms. In a sense, the oscillators corresponded to Planck's speck of carbon; the size of the speck could be small regardless of the size of the cavity, provided the speck effectively transduced energy between radiative wavelength modes.
Partly following a heuristic method of calculation pioneered by Boltzmann for gas molecules, Planck considered the possible ways of distributing electromagnetic energy over the different modes of his hypothetical charged material oscillators. This acceptance of the probabilistic approach, following Boltzmann, for Planck was a radical change from his former position, which till then had deliberately opposed such thinking proposed by Boltzmann. In Planck's words, "I considered the [quantum hypothesis] a purely formal assumption, and I did not give it much thought except for this: that I had obtained a positive result under any circumstances and at whatever cost." Heuristically, Boltzmann had distributed the energy in arbitrary merely mathematical quanta , which he had proceeded to make tend to zero in magnitude, because the finite magnitude had served only to allow definite counting for the sake of mathematical calculation of probabilities, and had no physical significance. Referring to a new universal constant of nature, , Planck supposed that, in the several oscillators of each of the finitely many characteristic frequencies, the total energy was distributed to each in an integer multiple of a definite physical unit of energy, , characteristic of the respective characteristic frequency. His new universal constant of nature, , is now known as the Planck constant.
Planck explained further that the respective definite unit, ε, of energy should be proportional to the respective characteristic oscillation frequency ν of the hypothetical oscillator, and in 1901 he expressed this with the constant of proportionality h:

ε = hν
Planck did not propose that light propagating in free space is quantized. The idea of quantization of the free electromagnetic field was developed later, and eventually incorporated into what we now know as quantum field theory.
In 1906, Planck acknowledged that his imaginary resonators, having linear dynamics, did not provide a physical explanation for energy transduction between frequencies. Present-day physics explains the transduction between frequencies in the presence of atoms by their quantum excitability, following Einstein. Planck believed that in a cavity with perfectly reflecting walls and with no matter present, the electromagnetic field cannot exchange energy between frequency components. This is because of the linearity of Maxwell's equations. Present-day quantum field theory predicts that, in the absence of matter, the electromagnetic field obeys nonlinear equations and in that sense does self-interact. Such interaction in the absence of matter has not yet been directly measured because it would require very high intensities and very sensitive and low-noise detectors, which are still in the process of being constructed. Planck believed that a field with no interactions neither obeys nor violates the classical principle of equipartition of energy, and instead remains exactly as it was when introduced, rather than evolving into a black body field. Thus, the linearity of his mechanical assumptions precluded Planck from having a mechanical explanation of the maximization of the entropy of the thermodynamic equilibrium thermal radiation field. This is why he had to resort to Boltzmann's probabilistic arguments.
Planck's law may be regarded as fulfilling the prediction of Gustav Kirchhoff that his law of thermal radiation was of the highest importance. In his mature presentation of his own law, Planck offered a thorough and detailed theoretical proof of Kirchhoff's law, a proof which until then had sometimes been debated, partly because it was said to rely on unphysical theoretical objects, such as Kirchhoff's perfectly absorbing infinitely thin black surface.
Subsequent events
It was not until five years after Planck made his heuristic assumption of abstract elements of energy or of action that Albert Einstein conceived of really existing quanta of light in 1905 as a revolutionary explanation of black-body radiation, of photoluminescence, of the photoelectric effect, and of the ionization of gases by ultraviolet light. In 1905, "Einstein believed that Planck's theory could not be made to agree with the idea of light quanta, a mistake he corrected in 1906." Contrary to Planck's beliefs of the time, Einstein proposed a model and formula whereby light was emitted, absorbed, and propagated in free space in energy quanta localized in points of space. As an introduction to his reasoning, Einstein recapitulated Planck's model of hypothetical resonant material electric oscillators as sources and sinks of radiation, but then he offered a new argument, disconnected from that model, but partly based on a thermodynamic argument of Wien, in which Planck's formula played no role. Einstein gave the energy content of such quanta as hν. Thus Einstein was contradicting the undulatory theory of light held by Planck. In 1910, criticizing a manuscript sent to him by Planck, knowing that Planck was a steady supporter of Einstein's theory of special relativity, Einstein wrote to Planck: "To me it seems absurd to have energy continuously distributed in space without assuming an aether."
According to Thomas Kuhn, it was not till 1908 that Planck more or less accepted part of Einstein's arguments for physical as distinct from abstract mathematical discreteness in thermal radiation physics. Still in 1908, considering Einstein's proposal of quantal propagation, Planck opined that such a revolutionary step was perhaps unnecessary. Until then, Planck had been consistent in thinking that discreteness of action quanta was to be found neither in his resonant oscillators nor in the propagation of thermal radiation. Kuhn wrote that, in Planck's earlier papers and in his 1906 monograph, there is no "mention of discontinuity, [nor] of talk of a restriction on oscillator energy, [nor of] any formula like ." Kuhn pointed out that his study of Planck's papers of 1900 and 1901, and of his monograph of 1906, had led him to "heretical" conclusions, contrary to the widespread assumptions of others who saw Planck's writing only from the perspective of later, anachronistic, viewpoints. Kuhn's conclusions, finding a period till 1908, when Planck consistently held his 'first theory', have been accepted by other historians.
In the second edition of his monograph, in 1912, Planck sustained his dissent from Einstein's proposal of light quanta. He proposed in some detail that absorption of light by his virtual material resonators might be continuous, occurring at a constant rate in equilibrium, as distinct from quantal absorption. Only emission was quantal. This has at times been called Planck's "second theory".
It was not till 1919 that Planck in the third edition of his monograph more or less accepted his 'third theory', that both emission and absorption of light were quantal.
The colourful term "ultraviolet catastrophe" was given by Paul Ehrenfest in 1911 to the paradoxical result that the total energy in the cavity tends to infinity when the equipartition theorem of classical statistical mechanics is (mistakenly) applied to black-body radiation. But this had not been part of Planck's thinking, because he had not tried to apply the doctrine of equipartition: when he made his discovery in 1900, he had not noticed any sort of "catastrophe". It was first noted by Lord Rayleigh in 1900, and then in 1901 by Sir James Jeans; and later, in 1905, by Rayleigh, by Jeans, and by Einstein, when Einstein wanted to support the idea that light propagates as discrete packets, later called 'photons'.
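The divergence that Ehrenfest named can be seen numerically by comparing the Rayleigh–Jeans and Planck expressions for spectral radiance. The sketch below uses the standard forms of both formulas; the temperature and the list of frequencies are illustrative values only.

```python
import math

h = 6.62607015e-34   # Planck constant, J s
c = 2.99792458e8     # speed of light, m/s
k = 1.380649e-23     # Boltzmann constant, J/K

def planck(nu, T):
    """Planck spectral radiance B_nu(T), W sr^-1 m^-2 Hz^-1."""
    return (2.0 * h * nu**3 / c**2) / math.expm1(h * nu / (k * T))

def rayleigh_jeans(nu, T):
    """Rayleigh-Jeans expression, which grows without bound as nu increases."""
    return 2.0 * nu**2 * k * T / c**2

T = 5000.0  # illustrative temperature, kelvin
for nu in (1e13, 1e14, 1e15, 1e16):
    print(f"nu = {nu:.0e} Hz   Planck = {planck(nu, T):.3e}   Rayleigh-Jeans = {rayleigh_jeans(nu, T):.3e}")
```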
In 1913, Bohr gave another formula with a further different physical meaning to the quantity hν. In contrast to Planck's and Einstein's formulas, Bohr's formula referred explicitly and categorically to energy levels of atoms. Bohr's formula was hν = E1 − E2, where E1 and E2 denote the energy levels of quantum states of an atom, with quantum numbers n1 and n2. The symbol ν denotes the frequency of a quantum of radiation that can be emitted or absorbed as the atom passes between those two quantum states. In contrast to Planck's model, the frequency ν has no immediate relation to frequencies that might describe those quantum states themselves.
Later, in 1924, Satyendra Nath Bose developed the theory of the statistical mechanics of photons, which allowed a theoretical derivation of Planck's law. The actual word 'photon' was invented still later, by G.N. Lewis in 1926, who mistakenly believed that photons were conserved, contrary to Bose–Einstein statistics; nevertheless the word 'photon' was adopted to express the Einstein postulate of the packet nature of light propagation. In an electromagnetic field isolated in a vacuum in a vessel with perfectly reflective walls, such as was considered by Planck, indeed the photons would be conserved according to Einstein's 1905 model, but Lewis was referring to a field of photons considered as a system closed with respect to ponderable matter but open to exchange of electromagnetic energy with a surrounding system of ponderable matter, and he mistakenly imagined that still the photons were conserved, being stored inside atoms.
Ultimately, Planck's law of black-body radiation contributed to Einstein's concept of quanta of light carrying linear momentum, which became the fundamental basis for the development of quantum mechanics.
The above-mentioned linearity of Planck's mechanical assumptions, not allowing for energetic interactions between frequency components, was superseded in 1925 by Heisenberg's original quantum mechanics. In his paper submitted on 29 July 1925, Heisenberg's theory accounted for Bohr's above-mentioned formula of 1913. It admitted non-linear oscillators as models of atomic quantum states, allowing energetic interaction between their own multiple internal discrete Fourier frequency components, on the occasions of emission or absorption of quanta of radiation. The frequency of a quantum of radiation was that of a definite coupling between internal atomic meta-stable oscillatory quantum states. At that time, Heisenberg knew nothing of matrix algebra, but Max Born read the manuscript of Heisenberg's paper and recognized the matrix character of Heisenberg's theory. Then Born and Jordan published an explicitly matrix theory of quantum mechanics, based on, but in form distinctly different from, Heisenberg's original quantum mechanics; it is the Born and Jordan matrix theory that is today called matrix mechanics. Heisenberg's explanation of the Planck oscillators, as non-linear effects apparent as Fourier modes of transient processes of emission or absorption of radiation, showed why Planck's oscillators, viewed as enduring physical objects such as might be envisaged by classical physics, did not give an adequate explanation of the phenomena.
Nowadays, as a statement of the energy of a light quantum, often one finds the formula E = ħω, where ħ = h/2π and ω denotes angular frequency, and less often the equivalent formula E = hν. This statement about a really existing and propagating light quantum, based on Einstein's, has a physical meaning different from that of Planck's above statement about the abstract energy units to be distributed amongst his hypothetical resonant material oscillators.
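A minimal numerical check of the equivalence of the two formulas, assuming only the standard value of the Planck constant and an illustrative optical frequency:

```python
import math

h = 6.62607015e-34           # Planck constant, J s
hbar = h / (2.0 * math.pi)   # reduced Planck constant, J s

nu = 5.0e14                  # illustrative frequency, Hz (visible light)
omega = 2.0 * math.pi * nu   # corresponding angular frequency, rad/s

print(h * nu, hbar * omega)  # E = h*nu and E = hbar*omega give the same energy, ~3.3e-19 J
```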
An article by Helge Kragh published in Physics World gives an account of this history.
See also
Emissivity
Radiance
Sakuma–Hattori equation
References
Bibliography
External links
Summary of Radiation
Radiation of a Blackbody – interactive simulation to play with Planck's law
Scienceworld entry on Planck's Law
Statistical mechanics
Foundational quantum physics
Max Planck
Old quantum theory
1900 in science
1900 in Germany | Planck's law | [
"Physics"
] | 14,404 | [
"Old quantum theory",
"Statistical mechanics",
"Foundational quantum physics",
"Quantum mechanics"
] |
191,282 | https://en.wikipedia.org/wiki/Coand%C4%83%20effect | The Coandă effect is the tendency of a fluid jet to stay attached to a surface of any form. Merriam-Webster describes it as "the tendency of a jet of fluid emerging from an orifice to follow an adjacent flat or curved surface and to entrain fluid from the surroundings so that a region of lower pressure develops."
It is named after Romanian inventor Henri Coandă, who was the first to recognize the practical application of the phenomenon in aircraft design around 1910. It was first documented explicitly in two patents issued in 1936.
Discovery
An early description of this phenomenon was provided by Thomas Young in a lecture given to The Royal Society in 1800:
A hundred years later, Henri Coandă identified an application of the effect during experiments with his Coandă-1910 aircraft, which mounted an unusual engine he designed. The motor-driven turbine pushed hot air rearward, and Coandă noticed that the airflow was attracted to nearby surfaces. In 1934, Coandă obtained a patent in France for a "method and apparatus for deviation of a fluid into another fluid". The effect was described as the "deviation of a plain jet of a fluid that penetrates another fluid in the vicinity of a convex wall". The first official documents that explicitly mention the Coandă effect were two 1936 patents by Henri Coandă. This name was accepted by the leading aerodynamicist Theodore von Kármán, who had a long scientific relationship with Coandă on aerodynamics problems.
Mechanism
A free jet of air entrains molecules of air from its immediate surroundings causing an axisymmetrical "tube" or "sleeve" of low pressure around the jet (see Diagram 1). The resultant forces from this low pressure tube end up balancing any perpendicular flow instability, which stabilises the jet in a straight line. However, if a solid surface is placed close, and approximately parallel to the jet (Diagram 2), then the entrainment (and therefore removal) of air from between the solid surface and the jet causes a reduction in air pressure on that side of the jet that cannot be balanced as rapidly as the low pressure region on the "open" side of the jet. The pressure difference across the jet causes the jet to deviate towards the nearby surface, and then to adhere to it (Diagram 3). The jet adheres even better to curved surfaces (Diagram 4), because each (infinitesimally small) incremental change in direction of the surface brings about the effects described for the initial bending of the jet towards the surface. If the surface is not too sharply curved, the jet can, under the right circumstances, adhere to the surface even after flowing 180° around a cylindrically curved surface, and thus travel in a direction opposite to its initial direction. The forces that cause these changes in the direction of flow of the jet cause an equal and opposite force on the surface along which the jet flows. These Coandă effect induced forces can be harnessed to cause lift and other forms of motion, depending on the orientation of the jet and the surface to which the jet adheres. A small surface "lip" at the point where the jet starts to flow over that surface (Diagram 5) increases the initial deviation of the jet flow direction. This results from the fact that a low pressure vortex forms behind the lip, promoting the dip towards the surface.
The Coandă effect can be induced in any fluid, and is therefore equally effective in water and air. A heated airfoil significantly reduces drag.
Existence conditions
Early sources provide theoretical and experimental information needed to derive a detailed explanation of the effect. The Coandă effect may occur along a curved wall either in a free- or wall-jet.
On the left image of the preceding section: "The mechanism of Coandă effect", the effect as described, in the terms of T. Young as "the lateral pressure which eases the inflection of a current of air near an obstacle", represents a free jet emerging from an orifice and an obstacle in the surroundings. It includes the tendency of a free jet emerging from an orifice to entrain fluid from the surroundings confined with limited access, without developing any region of lower pressure when there is no obstacle in the surroundings, as is the case on the opposite side where turbulent mixing occurs at ambient pressure.
On the right image, the effect occurs along the curved wall as a wall jet. The image here on the right represents a two dimensional wall jet between two parallel plane walls, where the "obstacle" is a quarter cylindrical portion following the flat horizontal rectangular orifice, so that no fluid at all is entrained from the surroundings along the wall, but only on the opposite side in turbulent mixing with ambient air.
Wall jet
To compare experiment with a theoretical model, a two-dimensional plane wall jet of width (h) along a circular wall of radius (r) is referred to. A wall jet follows a flat horizontal wall (say of infinite radius, or rather one whose radius is the radius of the Earth) without separation, because the surface pressure, like the external pressure in the mixing zone, is everywhere equal to atmospheric pressure and the boundary layer does not separate from the wall.
With a much smaller radius (12 centimeters in the image on the right), a transverse difference arises between the external and wall surface pressures of the jet, creating a pressure gradient depending upon h/r, the relative curvature. This pressure gradient can appear in a zone before and after the origin of the jet, where it gradually arises, and disappear at the point where the jet boundary layer separates from the wall, where the wall pressure reaches atmospheric pressure (and the transverse gradient becomes zero).
Experiments made in 1956 with turbulent air jets at a Reynolds number of 10^6 at various jet widths (h) show the pressures measured along a circularly curved wall of radius (r) at a series of horizontal distances from the origin of the jet (see the diagram on the right).
Above a critical ratio h/r of 0.5, only local effects at the origin of the jet are seen, extending over a small angle of 18° along the curved wall. The jet then immediately separates from the curved wall. A Coandă effect is therefore not seen here but only a local attachment: a pressure smaller than atmospheric pressure appears on the wall along a distance corresponding to a small angle of 9°, followed by an equal angle of 9° where this pressure increases up to atmospheric pressure at the separation of the boundary layer, subject to this positive longitudinal gradient. However, if the ratio h/r is smaller than the critical value of 0.5, the lower-than-ambient pressure measured on the wall at the origin of the jet continues along the wall (until the wall ends; see diagram on the right). This is "a true Coandă effect" as the jet clings to the wall "at a nearly constant pressure" as in a conventional wall jet.
A calculation made by Woods in 1954 of an inviscid flow along a circular wall shows that an inviscid solution exists with any curvature and any given deflection angle up to a separation point on the wall, where a singular point appears with an infinite slope of the surface pressure curve.
Introducing into the calculation the angle at separation found in the preceding experiments for each value of the relative curvature h/r, the image here was recently obtained, and shows inertial effects represented by the inviscid solution: the calculated pressure field is similar to the experimental one described above, outside the nozzle. The flow curvature is caused exclusively by the transverse pressure gradient, as described by T. Young. Then, viscosity only produces a boundary layer along the wall and turbulent mixing with ambient air as in a conventional wall jet, except that this boundary layer separates under the action of the difference between the finally ambient pressure and a smaller surface pressure along the wall. According to Van Dyke, as quoted in Lift, the derivation of his equation (4c) also shows that the contribution of viscous stress to flow turning is negligible.
An alternative way would be to calculate the deflection angle at which the boundary layer subjected to the inviscid pressure field separates. A rough calculation has been tried that gives the separation angle as a function of h/r and the Reynolds number. The results are reported on the image, e.g., 54° calculated instead of 60° measured for h/r = 0.25. More experiments and a more accurate boundary layer calculation would be desirable.
Other experiments made in 2004 with a wall jet along a circular wall show that the Coandă effect does not occur in a laminar flow, and the critical ratios for small Reynolds numbers are much smaller than those for turbulent flow: down to h/r = 0.14 at a Reynolds number of 500, and h/r = 0.05 at a Reynolds number of 100.
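The critical ratios quoted in this section (roughly 0.5 for turbulent jets, 0.14 near a Reynolds number of 500, and 0.05 near 100) can be gathered into a rough attachment check. The sketch below is only an illustration of the reported experimental trend, not a general design rule; the regime boundaries used for the interpolation are assumptions.

```python
def attaches_to_wall(width, radius, reynolds):
    """Rough check of whether a plane wall jet of the given width stays attached
    to a circular wall of the given radius, using the critical h/r ratios quoted
    in the experiments above. Illustrative only; the regime boundaries are assumed."""
    ratio = width / radius
    if reynolds >= 1e5:      # fully turbulent regime discussed above
        critical = 0.5
    elif reynolds >= 500:
        critical = 0.14      # value reported near Re = 500
    else:
        critical = 0.05      # value reported near Re = 100
    return ratio < critical

# Example: a 1 cm wide turbulent jet on a 12 cm radius wall (h/r ~ 0.083)
print(attaches_to_wall(width=0.01, radius=0.12, reynolds=1e6))   # True
```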
Free jet
L. C. Woods also made the calculation of the inviscid two-dimensional flow of a free jet of width h, deflected round a circularly cylindrical surface of radius r, between a first contact A and separation at B, including a given deflection angle. Again a solution exists for any value of the relative curvature h/r and of the deflection angle.
Moreover, in the case of a free jet the equation can be solved in closed form, giving the distribution of velocity along the circular wall. The surface pressure distribution is then calculated using Bernoulli equation. Let us note the pressure () and the velocity () along the free streamline at the ambient pressure, and the angle along the wall which is zero in A and in B. Then the velocity () is found to be:
An image of the surface pressure distribution of the jet round the cylindrical surface, using the same values of the relative curvature h/r and the same angle as those found for the wall jet reported in the image on the right side here, has been established: it may be found in reference (15), p. 104, and both images are quite similar: the Coandă effect of a free jet is inertial, the same as the Coandă effect of a wall jet. However, an experimental measurement of the corresponding surface pressure distribution is not known.
Experiments in 1959 by Bourque and Newmann concern the reattachment of a two-dimensional turbulent jet to an offset parallel plate, enclosing a separation bubble in which a low-pressure vortex is confined (as in image 5 in the preceding section), and also a two-dimensional jet followed by a single flat plate inclined at an angle, instead of the circularly curved wall in the diagram on the right describing the experiment of a wall jet: the jet separates from the plate, then curves towards the plate when the surrounding fluid is entrained and the pressure lowered, and eventually reattaches to it, enclosing a separation bubble. The jet remains free if the angle is greater than 62°.
In this last case, which is the geometry proposed by Coandă, the claim of the inventor is that the quantity of fluid entrained by the jet from the surroundings is increased when the jet is deflected, a feature exploited to improve the scavenging of internal combustion engines and to increase the maximum lift coefficient of a wing, as indicated in the applications below.
The surface pressure distribution as well as the reattachment distance have been duly measured in both cases, and two approximate theories have been developed for the mean pressure within the separation bubble, the position of reattachment and the increase in volume flow from the orifice: the agreement with experiment was satisfactory.
Applications
Aircraft
The Coandă effect has applications in various high-lift devices on aircraft, where air moving over the wing can be "bent down" towards the ground using flaps and a jet sheet blowing over the curved surface of the top of the wing. The bending of the flow results in aerodynamic lift. The flow from a high-speed jet engine mounted in a pod over the wing produces increased lift by dramatically increasing the velocity gradient in the shear flow in the boundary layer. In this velocity gradient, particles are blown away from the surface, thus lowering the pressure there. Closely following the work of Coandă on applications of his research, and in particular the work on his "Aerodina Lenticulară," John Frost of Avro Canada also spent considerable time researching the effect, leading to a series of "inside out" hovercraft-like aircraft from which the air exited in a ring around the outside of the aircraft and was directed by being "attached" to a flap-like ring.
This is in contrast to a traditional hovercraft design, in which the air is blown into a central area, the plenum, and directed down with the use of a fabric "skirt". Only one of Frost's designs was ever built, the Avro Canada VZ-9 Avrocar.
The Avrocar (often listed as 'VZ-9') was a Canadian vertical takeoff and landing (VTOL) aircraft developed by Avro Aircraft Ltd. as part of a secret United States military project carried out in the early years of the Cold War. The Avrocar was intended to exploit the Coandă effect to provide lift and thrust from a single "turborotor" blowing exhaust out the rim of the disk-shaped aircraft, providing anticipated VTOL-like performance. In the air, it would have resembled a flying saucer. Two prototypes were built as "proof-of-concept" test vehicles for a more advanced U.S. Air Force fighter and also for a U.S. Army tactical combat aircraft requirement.
Avro's 1956 Project 1794 for the U.S. military designed a larger-scale flying saucer based on the Coandă effect and intended to reach speeds between Mach 3 and Mach 4. Project documents remained classified until 2012.
The effect was also implemented during the U.S. Air Force's Advanced Medium STOL Transport (AMST) project. Several aircraft, notably the Boeing YC-14 (the first modern type to exploit the effect), NASA's Quiet Short-Haul Research Aircraft, and the National Aerospace Laboratory of Japan's Asuka research aircraft, have been built to take advantage of this effect, by mounting turbofans on the top of the wings to provide high-speed air even at low flying speeds, but to date only one aircraft, the Antonov An-72, has gone into production using this system to a major degree. The Shin Meiwa US-1A flying boat utilizes a similar system, directing the propwash from its four turboprop engines over the top of the wing to generate low-speed lift. More unusually, it incorporates a fifth turboshaft engine inside the wing center-section solely to provide air for powerful blown flaps. The addition of these two systems gives the aircraft an impressive STOL capability.
The experimental McDonnell Douglas YC-15 and its production derivative, the Boeing C-17 Globemaster III, also employ the effect. The NOTAR helicopter replaces the conventional propeller tail rotor with a Coandă effect tail (diagram on the left).
A better understanding of the Coandă effect was provided by the scientific literature produced by the ACHEON EU FP7 project. This project used a particular symmetric nozzle to produce an effective modeling of the Coandă effect, and determined innovative STOL aircraft configurations based on the effect. This work has been extended by Dragan into the turbomachinery sector, through the Romanian Comoti Research Centre's work on turbomachinery, with the objective of better optimizing the shape of rotating blades.
A practical use of the Coandă effect is for inclined hydropower screens, which separate debris, fish, etc., that would otherwise pass into the input flow to the turbines. Due to the slope, the debris falls from the screens without mechanical clearing, and due to the wires of the screen optimizing the Coandă effect, the water flows through the screen to the penstocks leading the water to the turbines.
The Coandă effect is used in dual-pattern fluid dispensers in automobile windshield washers.
The operation principle of oscillatory flowmeters also relies on the Coandă phenomenon. The incoming liquid enters a chamber that contains two "islands". Due to the Coandă effect, the main stream splits up and goes under one of the islands. This flow then feeds itself back into the main stream, making it split up again, but in the direction of the second island. This process repeats itself as long as the liquid circulates through the chamber, resulting in a self-induced oscillation whose frequency is directly proportional to the velocity of the liquid and consequently to the volume of substance flowing through the meter. A sensor picks up the frequency of this oscillation and transforms it into an analog signal yielding the volume passing through.
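Because the oscillation frequency is stated to be directly proportional to the flow velocity, converting a measured frequency into a volumetric flow reduces to a single calibration factor. In the sketch below the function name and the calibration value are invented placeholders, not data for any real meter.

```python
def volume_flow_litres_per_second(frequency_hz, pulses_per_litre):
    """Convert the measured Coanda-oscillation frequency to volumetric flow.
    pulses_per_litre is a meter-specific calibration constant; the value
    used in the example below is purely illustrative."""
    return frequency_hz / pulses_per_litre

print(volume_flow_litres_per_second(frequency_hz=250.0, pulses_per_litre=1200.0))  # ~0.21 L/s
```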
Air conditioning
In air conditioning, the Coandă effect is exploited to increase the throw of a ceiling mounted diffuser. Because the Coandă effect causes air discharged from the diffuser to "stick" to the ceiling, it travels farther before dropping for the same discharge velocity than it would if the diffuser were mounted in free air, without the neighbouring ceiling. Lower discharge velocity means lower noise levels and, in the case of variable air volume (VAV) air conditioning systems, permits greater turndown ratios. Linear diffusers and slot diffusers that present a greater length of contact with the ceiling exhibit a greater Coandă effect.
Health care
In cardiovascular medicine, the Coandă effect accounts for the separate streams of blood in the fetal right atrium. It also explains why eccentric mitral regurgitation jets are attracted and dispersed along adjacent left atrial wall surfaces (so called "wall-hugging jets" as seen on echocardiographic color-doppler interrogation). This is clinically relevant because the visual area (and thus severity) of these eccentric wall-hugging jets is often underestimated compared to the more readily apparent central jets. In these cases, volumetric methods such as the proximal isovelocity surface area (PISA) method are preferred to quantify the severity of mitral regurgitation.
In medicine, the Coandă effect is used in ventilators.
Meteorology
In meteorology, the Coandă effect theory has also been applied to some air streams flowing out of mountain ranges such as the Carpathian Mountains and Transylvanian Alps, where effects on agriculture and vegetation have been noted. It also appears to be an effect in the Rhone Valley in France and near Big Delta in Alaska.
Auto-racing
In Formula One automobile racing, the Coandă effect has been exploited by the McLaren, Sauber, Ferrari and Lotus teams, after its first introduction by Adrian Newey (Red Bull Team) in 2011, to help redirect exhaust gases to run through the rear diffuser with the intention of increasing downforce at the rear of the car. Due to changes in regulations set in place by the FIA from the beginning of the 2014 Formula One season, the intention of redirecting exhaust gases to exploit the Coandă effect has been negated, due to the mandatory requirement that the car exhaust not have bodywork intended to contribute to aerodynamic effect situated directly behind it.
Fluidics
In fluidics, the Coandă effect was used to build bistable multivibrators, where the working stream (compressed air) stuck to one curved wall or another and control beams could switch the stream between the walls.
Mixer
The Coandă effect is also used to mix two different fluids in a mixer.
Demonstration
The Coandă effect can be demonstrated by directing a small jet of air upwards at an angle over a ping pong ball. The jet is drawn to and follows the upper surface of the ball curving around it, due to the (radial) acceleration (slowing and turning) of the air around the ball. With enough airflow, this change in momentum is balanced by the equal and opposite force on the ball supporting its weight. This demonstration can be performed using a hairdryer on the lowest setting or a vacuum cleaner if the outlet can be attached to the pipe and aimed upwards at an angle.
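The momentum balance mentioned above can be checked with a rough order-of-magnitude estimate. Every number in the sketch below (jet diameter, jet speed, air density) is an assumed illustrative value; only the nominal mass of a ping pong ball is standard.

```python
import math

g = 9.81             # gravitational acceleration, m/s^2
rho_air = 1.2        # air density, kg/m^3 (assumed)
ball_mass = 2.7e-3   # nominal ping pong ball mass, kg

jet_diameter = 0.03  # assumed jet diameter, m
jet_speed = 8.0      # assumed jet speed, m/s

area = math.pi * (jet_diameter / 2.0) ** 2
mass_flow = rho_air * area * jet_speed     # kg/s carried by the jet
momentum_flux = mass_flow * jet_speed      # N available if the jet were fully turned

print(f"ball weight   = {ball_mass * g:.3f} N")
print(f"momentum flux = {momentum_flux:.3f} N")
# Only a fraction of this momentum flux needs to act vertically to support the ball.
```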
A common misconception is that the Coandă effect is demonstrated when a stream of tap water flows over the back of a spoon held lightly in the stream and the spoon is pulled into the stream (the Coandă effect is sometimes invoked, for example, to explain the deflection of water around a cylinder). While the flow looks very similar to the air flow over the ping pong ball above (if one could see the air flow), the cause is not really the Coandă effect. Here, because it is a flow of water into air, there is little entrainment of the surrounding fluid (the air) into the jet (the stream of water). This particular demonstration is dominated by surface tension. (One account states that the water deflection "actually demonstrates molecular attraction and surface tension.")
Another demonstration is to direct the air flow from, e.g., a vacuum cleaner operating in reverse, tangentially past a round cylinder. A waste basket works well. The air flow seems to "wrap around" the cylinder and can be detected at more than 180° from the incoming flow. Under the right conditions (flow rate, weight of the cylinder, smoothness of the surface it sits on), the cylinder actually moves. Note that the cylinder does not move directly into the flow, as a misapplication of the Bernoulli effect would predict, but at a diagonal.
The Coandă effect can also be demonstrated by placing a can in front of a lit candle, such that when one's line of sight is along the top of the can, the candle flame is completely hidden from view behind it. If one then blows directly at the can, the candle will be extinguished despite the can being "in the way". This is because the airflow directed at the can bends around it and still reaches the candle to extinguish it, in accordance with the Coandă effect.
Problems caused
The engineering use of Coandă effect has disadvantages as well as advantages.
In marine propulsion, the efficiency of a propeller or thruster can be severely curtailed by the Coandă effect. The force on the vessel generated by a propeller is a function of the speed, volume and direction of the water jet leaving the propeller. Under certain conditions (e.g., when a ship moves through water) the Coandă effect changes the direction of a propeller jet, causing it to follow the shape of the ship's hull. The side force from a tunnel thruster at the bow of a ship decreases rapidly with forward speed. The side thrust may completely disappear at speeds above about 3 knots.
If the Coandă effect is applied to symmetrically shaped nozzles, it presents resonance problems.
See also
Aerodynamics
Airfoil
Boundary layer
Circulation control wing
Fluid dynamics
Fluid friction
Lift (force)
Magnus effect
Microelectromechanical systems
Microfluidics
NOTAR
Teapot effect
Tesla valve
Trench effect
References
Notes
Citations
Sources
External links
Flight 1945
Coandă effect video (1)
Coandă effect video (2)
Information on the patents of Coandă
New UK based UAV project utilising the Coandă effect
Report on the Coandă Effect and lift
How to see the Coandă effect at home (www.physics.org comic)
Aerodynamics
Boundary layers
Microfluidics
Physical phenomena
Romanian inventions
Effect | Coandă effect | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 4,719 | [
"Physical phenomena",
"Microfluidics",
"Microtechnology",
"Aerodynamics",
"Boundary layers",
"Aerospace engineering",
"Fluid dynamics"
] |
191,490 | https://en.wikipedia.org/wiki/Machine%20tool | A machine tool is a machine for handling or machining metal or other rigid materials, usually by cutting, boring, grinding, shearing, or other forms of deformation. Machine tools employ some sort of tool that does the cutting or shaping. All machine tools have some means of constraining the workpiece and provide a guided movement of the parts of the machine. Thus, the relative movement between the workpiece and the cutting tool (which is called the toolpath) is controlled or constrained by the machine to at least some extent, rather than being entirely "offhand" or "freehand". It is a power-driven metal-cutting machine which assists in managing the needed relative motion between the cutting tool and the job, a motion that changes the size and shape of the job material.
The precise definition of the term machine tool varies among users, as discussed below. While all machine tools are "machines that help people to make things", not all factory machines are machine tools.
Today machine tools are typically powered other than by the human muscle (e.g., electrically, hydraulically, or via line shaft), used to make manufactured parts (components) in various ways that include cutting or certain other kinds of deformation.
With their inherent precision, machine tools enabled the economical production of interchangeable parts.
Nomenclature and key concepts, interrelated
Many historians of technology consider that true machine tools were born when the toolpath first became guided by the machine itself in some way, at least to some extent, so that direct, freehand human guidance of the toolpath (with hands, feet, or mouth) was no longer the only guidance used in the cutting or forming process. In this view of the definition, the term, arising at a time when all tools up till then had been hand tools, simply provided a label for "tools that were machines instead of hand tools". Early lathes, those prior to the late medieval period, and modern woodworking lathes and potter's wheels may or may not fall under this definition, depending on how one views the headstock spindle itself; but the earliest historical records of a lathe with direct mechanical control of the cutting tool's path are of a screw-cutting lathe dating to about 1483. This lathe "produced screw threads out of wood and employed a true compound slide rest".
The mechanical toolpath guidance grew out of various root concepts:
First is the spindle concept itself, which constrains workpiece or tool movement to rotation around a fixed axis. This ancient concept predates machine tools per se; the earliest lathes and potter's wheels incorporated it for the workpiece, but the movement of the tool itself on these machines was entirely freehand.
The machine slide (tool way), which has many forms, such as dovetail ways, box ways, or cylindrical column ways. Machine slides constrain tool or workpiece movement linearly. If a stop is added, the length of the line can also be accurately controlled. (Machine slides are essentially a subset of linear bearings, although the language used to classify these various machine elements may be defined differently by some users in some contexts, and some elements may be distinguished by contrasting with others)
Tracing, which involves following the contours of a model or template and transferring the resulting motion to the toolpath.
Cam operation, which is related in principle to tracing but can be a step or two removed from the traced element's matching the reproduced element's final shape. For example, several cams, no one of which directly matches the desired output shape, can actuate a complex toolpath by creating component vectors that add up to a net toolpath.
The van der Waals force between like materials is high; freehand manufacture of square plates produces only square, flat, machine-tool-building reference components, accurate to millionths of an inch, but of nearly no variety. The process of feature replication allows the flatness and squareness of a milling machine's cross-slide assembly, or the roundness, lack of taper, and squareness of the two axes of a lathe, to be transferred to a machined workpiece with accuracy and precision better than a thousandth of an inch, though not as fine as millionths of an inch. As the fit between sliding parts of a made product, machine, or machine tool approaches this critical thousandth-of-an-inch measurement, lubrication and capillary action combine to prevent the van der Waals force from welding like metals together, extending the lubricated life of sliding parts by a factor of thousands to millions; the damage that follows oil depletion in a conventional automotive engine is an accessible demonstration of the need, and in aerospace design unlike material pairings are used along with solid lubricants to prevent van der Waals welding from destroying mating surfaces. Given the modulus of elasticity of metals, the range of fit tolerances near one thousandth of an inch correlates to the relevant range of constraint between, at one extreme, permanent assembly of two mating parts and, at the other, a free sliding fit of those same two parts.
Abstractly programmable toolpath guidance began with mechanical solutions, such as in musical box cams and Jacquard looms. The convergence of programmable mechanical control with machine tool toolpath control was delayed many decades, in part because the programmable control methods of musical boxes and looms lacked the rigidity for machine tool toolpaths. Later, electromechanical solutions (such as servos) and soon electronic solutions (including computers) were added, leading to numerical control and computer numerical control.
When considering the difference between freehand toolpaths and machine-constrained toolpaths, the concepts of accuracy and precision, efficiency, and productivity become important in understanding why the machine-constrained option adds value.
Matter-Additive, Matter-Preserving, and Matter-Subtractive "Manufacturing" can proceed in sixteen ways. First, the work may be held either in a hand or in a clamp; second, the tool may be held either in a hand or in a clamp; third, the energy can come either from the hand(s) holding the tool and/or the work, or from some external source, for example a foot treadle worked by the same worker, or a motor, without limitation; and finally, the control can come either from the hand(s) holding the tool and/or the work, or from some other source, including computer numerical control. With two choices for each of four parameters, the types are enumerated to sixteen types of manufacturing, where matter-additive might mean painting on canvas as readily as it might mean 3D printing under computer control, matter-preserving might mean forging at the coal fire as readily as stamping license plates, and matter-subtractive might mean casually whittling a pencil point as readily as it might mean precision grinding the final form of a laser-deposited turbine blade.
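A short enumeration makes the counting of the sixteen combinations explicit; the labels follow the paragraph above, while the grouping into a dictionary is only a convenience of this sketch.

```python
from itertools import product

choices = {
    "work held by": ("hand", "clamp"),
    "tool held by": ("hand", "clamp"),
    "energy from":  ("the worker", "an external source"),
    "control from": ("the worker", "another source (e.g. CNC)"),
}

combinations = list(product(*choices.values()))
print(len(combinations))   # 16 combinations of the four two-way choices
for combo in combinations:
    print(dict(zip(choices, combo)))
```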
A precise description of what a machine tool is and does at an instant is given by a 12-component vector relating the linear and rotational degrees of freedom of the single workpiece and the single tool contacting that workpiece. To visualize this vector, it is convenient to arrange it in four rows of three columns, with the columns labelled x, y and z and the rows labelled "spin work", "move work", "spin tool" and "move tool"; the ordering of the labels is arbitrary, in that there is no agreed convention in the mechanical engineering literature, but there are 12 degrees of freedom in a machine tool. Such a vector describes an instant: it may be a preparatory moment before the tool makes contact with the workpiece, or an engaged moment during which contact between work and tool requires an input of rather large amounts of power to get work done, which is why machine tools are large, heavy and stiff. Because these vectors describe instantaneous degrees of freedom, the structure can express the changing mode of a machine tool as well as its fundamental arrangement, in the following way: imagine a lathe spinning a cylinder about a horizontal axis, with a tool ready to cut a face on that cylinder in some preparatory moment. The operator of such a lathe would lock the x-axis on the carriage of the lathe, establishing a new vector condition with a zero in the x-slide position for the tool; the operator would then unlock the y-axis on the cross slide (assuming the lathe is so equipped), and then traverse the facing tool across the face of the cylinder at a depth of cut and a rotational speed that keep the cutting load within the power range of the motor driving the lathe. The answer to what a machine tool is, then, is simple but highly technical, and it is independent of the history of machine tools.
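A minimal sketch of the representation just described, with the four rows and three columns written as a small data structure and the lathe facing example encoded as one instantaneous state; the notation is an illustration of the paragraph's wording, not an established engineering convention.

```python
# Four rows (spin work, move work, spin tool, move tool) by three axes (x, y, z);
# each entry marks whether that degree of freedom is free (1) or locked (0)
# in the instant being described.
def dof_state(**rows):
    axes = ("x", "y", "z")
    return {name: dict(zip(axes, values)) for name, values in rows.items()}

# Lathe about to face a cylinder: the work spins about the spindle (x) axis,
# the cross-slide (y) axis of the tool is free, everything else is locked.
facing_moment = dof_state(
    spin_work=(1, 0, 0),
    move_work=(0, 0, 0),
    spin_tool=(0, 0, 0),
    move_tool=(0, 1, 0),
)
print(facing_moment)
```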
The preceding gives an answer to what machine tools are; we may also consider what they do. Machine tools produce finished surfaces. They may produce any finish, from an arbitrary degree of very rough work to a specular, optical-grade finish whose further improvement is moot. Machine tools produce the surfaces comprising the features of machine parts by removing chips. These chips may be very rough or even as fine as dust. Every machine tool supports its removal process with a stiff, redundant and therefore vibration-resisting structure, because each chip is removed in a semi-synchronous way, creating multiple opportunities for vibration to interfere with precision.
Humans are generally quite talented in their freehand movements; the drawings, paintings, and sculptures of artists such as Michelangelo or Leonardo da Vinci, and of countless other talented people, show that human freehand toolpath has great potential. The value that machine tools added to these human talents is in the areas of rigidity (constraining the toolpath despite thousands of newtons (pounds) of force fighting against the constraint), accuracy and precision, efficiency, and productivity. With a machine tool, toolpaths that no human muscle could constrain can be constrained; and toolpaths that are technically possible with freehand methods, but would require tremendous time and skill to execute, can instead be executed quickly and easily, even by people with little freehand talent (because the machine takes care of it). The latter aspect of machine tools is often referred to by historians of technology as "building the skill into the tool", in contrast to the toolpath-constraining skill being in the person who wields the tool. As an example, it is physically possible to make interchangeable screws, bolts, and nuts entirely with freehand toolpaths. But it is economically practical to make them only with machine tools.
In the 1930s, the U.S. National Bureau of Economic Research (NBER) referenced the definition of a machine tool as "any machine operating by other than hand power which employs a tool to work on metal".
The narrowest colloquial sense of the term reserves it only for machines that perform metal cutting—in other words, the many kinds of [conventional] machining and grinding. These processes are a type of deformation that produces swarf. However, economists use a slightly broader sense that also includes metal deformation of other types that squeeze the metal into shape without cutting off swarf, such as rolling, stamping with dies, shearing, swaging, riveting, and others. Thus presses are usually included in the economic definition of machine tools. For example, this is the breadth of definition used by Max Holland in his history of Burgmaster and Houdaille, which is also a history of the machine tool industry in general from the 1940s through the 1980s; he was reflecting the sense of the term used by Houdaille itself and other firms in the industry. Many reports on machine tool export and import and similar economic topics use this broader definition.
The colloquial sense implying [conventional] metal cutting is also growing obsolete because of changing technology over the decades. The many more recently developed processes labeled "machining", such as electrical discharge machining, electrochemical machining, electron beam machining, photochemical machining, and ultrasonic machining, or even plasma cutting and water jet cutting, are often performed by machines that could most logically be called machine tools. In addition, some of the newly developed additive manufacturing processes, which are not about cutting away material but rather about adding it, are done by machines that are likely to end up labeled, in some cases, as machine tools. In fact, machine tool builders are already developing machines that include both subtractive and additive manufacturing in one work envelope, and retrofits of existing machines are underway.
The natural language use of the terms varies, with subtle connotative boundaries. Many speakers resist using the term "machine tool" to refer to woodworking machinery (joiners, table saws, routing stations, and so on), but it is difficult to maintain any true logical dividing line, and therefore many speakers accept a broad definition. It is common to hear machinists refer to their machine tools simply as "machines". Usually the mass noun "machinery" encompasses them, but sometimes it is used to imply only those machines that are being excluded from the definition of "machine tool". This is why the machines in a food-processing plant, such as conveyors, mixers, vessels, dividers, and so on, may be labeled "machinery", while the machines in the factory's tool and die department are instead called "machine tools" in contradistinction.
Regarding the 1930s NBER definition quoted above, one could argue that its specificity to metal is obsolete, as it is quite common today for particular lathes, milling machines, and machining centers (definitely machine tools) to work exclusively on plastic cutting jobs throughout their whole working lifespan. Thus the NBER definition above could be expanded to say "which employs a tool to work on metal or other materials of high hardness". And its specificity to "operating by other than hand power" is also problematic, as machine tools can be powered by people if appropriately set up, such as with a treadle (for a lathe) or a hand lever (for a shaper). Hand-powered shapers are clearly "the 'same thing' as shapers with electric motors except smaller", and it is trivial to power a micro lathe with a hand-cranked belt pulley instead of an electric motor. Thus one can question whether power source is truly a key distinguishing concept; but for economics purposes, the NBER's definition made sense, because most of the commercial value of the existence of machine tools comes about via those that are powered by electricity, hydraulics, and so on. Such are the vagaries of natural language and controlled vocabulary, both of which have their places in the business world.
History
Forerunners of machine tools included bow drills and potter's wheels, which had existed in ancient Egypt prior to 2500 BC, and lathes, known to have existed in multiple regions of Europe since at least 1000 to 500 BC. But it was not until the later Middle Ages and the Age of Enlightenment that the modern concept of a machine tool—a class of machines used as tools in the making of metal parts, and incorporating machine-guided toolpath—began to evolve. Clockmakers of the Middle Ages and renaissance men such as Leonardo da Vinci helped expand humans' technological milieu toward the preconditions for industrial machine tools. During the 18th and 19th centuries, and even in many cases in the 20th, the builders of machine tools tended to be the same people who would then use them to produce the end products (manufactured goods). However, from these roots also evolved an industry of machine tool builders as we define them today, meaning people who specialize in building machine tools for sale to others.
Historians of machine tools often focus on a handful of major industries that most spurred machine tool development. In order of historical emergence, they have been firearms (small arms and artillery); clocks; textile machinery; steam engines (stationary, marine, rail, and otherwise) (the story of how Watt's need for an accurate cylinder spurred Boulton's boring machine is discussed by Roe); sewing machines; bicycles; automobiles; and aircraft. Others could be included in this list as well, but they tend to be connected with the root causes already listed. For example, rolling-element bearings are an industry of themselves, but this industry's main drivers of development were the vehicles already listed—trains, bicycles, automobiles, and aircraft; and other industries, such as tractors, farm implements, and tanks, borrowed heavily from those same parent industries.
Machine tools filled a need created by textile machinery during the Industrial Revolution in England in the middle to late 1700s. Until that time, machinery was made mostly from wood, often including gearing and shafts. The increase in mechanization required more metal parts, which were usually made of cast iron or wrought iron. Cast iron could be cast in molds for larger parts, such as engine cylinders and gears, but was difficult to work with a file and could not be hammered. Red hot wrought iron could be hammered into shapes. Room temperature wrought iron was worked with a file and chisel and could be made into gears and other complex parts; however, hand working lacked precision and was a slow and expensive process.
James Watt was unable to have an accurately bored cylinder for his first steam engine, trying for several years until John Wilkinson invented a suitable boring machine in 1774, boring Boulton & Watt's first commercial engine in 1776.
The advance in the accuracy of machine tools can be traced to Henry Maudslay, whose work was refined by Joseph Whitworth. That Maudslay had established the manufacture and use of master plane gages in his shop (Maudslay & Field), located on Westminster Road south of the Thames River in London, about 1809, was attested to by James Nasmyth, who was employed by Maudslay in 1829 and documented their use in his autobiography.
The process by which the master plane gages were produced dates back to antiquity but was refined to an unprecedented degree in the Maudslay shop. The process begins with three square plates, each given an identification (e.g., 1, 2 and 3). The first step is to rub plates 1 and 2 together with a marking medium (called bluing today), revealing the high spots, which would be removed by hand scraping with a steel scraper until no irregularities were visible. This would not produce true plane surfaces but a "ball and socket" concave-concave and convex-convex fit, as this mechanical fit, like two perfect planes, can slide over each other and reveal no high spots. The rubbing and marking are repeated after rotating 2 relative to 1 by 90 degrees to eliminate concave-convex "potato-chip" curvature. Next, plate number 3 is compared and scraped to conform to plate number 1 in the same two trials. In this manner plates number 2 and 3 would be identical. Next, plates number 2 and 3 would be checked against each other to determine what condition existed: either both plates were "balls" or "sockets" or "chips", or a combination. These would then be scraped until no high spots existed and then compared to plate number 1. Repeating this process of comparing and scraping the three plates could produce plane surfaces accurate to within millionths of an inch (the thickness of the marking medium).
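The logic of the three-plate comparison can be illustrated with a toy model in which each plate's deviation from flatness is reduced to a single "bow" coefficient. The model, the scraping rate, and the initial bows below are all invented for illustration; the point is only that fitting two plates can leave matched convex/concave errors, while fitting all three pairs drives every bow towards zero.

```python
# Toy model: each plate's deviation from flatness is reduced to one "bow"
# coefficient (positive = convex, negative = concave). Two plates mate whenever
# their bows are equal and opposite, so fitting only two plates can leave a
# large matched error; fitting all three pairs drives every bow towards zero.
def scrape_pair(bow_a, bow_b, rate=0.5):
    # The marking medium reveals the mismatch (bow_a + bow_b); "scraping"
    # here crudely removes part of that mismatch from each plate.
    mismatch = bow_a + bow_b
    return bow_a - rate * mismatch / 2.0, bow_b - rate * mismatch / 2.0

plates = {"1": 40e-6, "2": 35e-6, "3": -20e-6}   # invented initial bows, metres

for _ in range(20):                               # repeated rounds of comparison
    plates["1"], plates["2"] = scrape_pair(plates["1"], plates["2"])
    plates["1"], plates["3"] = scrape_pair(plates["1"], plates["3"])
    plates["2"], plates["3"] = scrape_pair(plates["2"], plates["3"])

print({name: f"{bow:+.2e} m" for name, bow in plates.items()})  # all bows near zero
```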
The traditional method of producing the surface gages used an abrasive powder rubbed between the plates to remove the high spots, but it was Whitworth who contributed the refinement of replacing the grinding with hand scraping. Sometime after 1825, Whitworth went to work for Maudslay and it was there that Whitworth perfected the hand scraping of master surface plane gages. In his paper presented to the British Association for the Advancement of Science at Glasgow in 1840, Whitworth pointed out the inherent inaccuracy of grinding due to no control and thus unequal distribution of the abrasive material between the plates which would produce uneven removal of material from the plates.
With the creation of master plane gages of such high accuracy, all critical components of machine tools (i.e., guiding surfaces such as machine ways) could then be compared against them and scraped to the desired accuracy.
The first machine tools offered for sale (i.e., commercially available) were constructed by Matthew Murray in England around 1800. Others, such as Henry Maudslay, James Nasmyth, and Joseph Whitworth, soon followed the path of expanding their entrepreneurship from manufactured end products and millwright work into the realm of building machine tools for sale.
Important early machine tools included the slide rest lathe, screw-cutting lathe, turret lathe, milling machine, pattern tracing lathe, shaper, and metal planer, which were all in use before 1840. With these machine tools the decades-old objective of producing interchangeable parts was finally realized. An important early example of something now taken for granted was the standardization of screw fasteners such as nuts and bolts. Before about the beginning of the 19th century, these were used in pairs, and even screws of the same machine were generally not interchangeable. Methods were developed to cut screw thread to a greater precision than that of the feed screw in the lathe being used. This led to the bar length standards of the 19th and early 20th centuries.
American production of machine tools was a critical factor in the Allies' victory in World War II. Production of machine tools tripled in the United States in the war. No war was more industrialized than World War II, and it has been written that the war was won as much by machine shops as by machine guns.
The production of machine tools is concentrated in about 10 countries worldwide: China, Japan, Germany, Italy, South Korea, Taiwan, Switzerland, US, Austria, Spain and a few others. Machine tool innovation continues in several public and private research centers worldwide.
Drive power sources
Machine tools can be powered from a variety of sources. Human and animal power (via cranks, treadles, treadmills, or treadwheels) were used in the past, as was water power (via water wheel); however, following the development of high-pressure steam engines in the mid 19th century, factories increasingly used steam power. Factories also used hydraulic and pneumatic power. Many small workshops continued to use water, human and animal power until electrification after 1900.
Today most machine tools are powered by electricity; hydraulic and pneumatic power are sometimes used, but this is uncommon.
Automatic control
Machine tools can be operated manually, or under automatic control. Early machines used flywheels to stabilize their motion and had complex systems of gears and levers to control the machine and the piece being worked on. Soon after World War II, the numerical control (NC) machine was developed. NC machines used a series of numbers punched on paper tape or punched cards to control their motion. In the 1960s, computers were added to give even more flexibility to the process. Such machines became known as computerized numerical control (CNC) machines. NC and CNC machines could precisely repeat sequences over and over, and could produce much more complex pieces than even the most skilled tool operators.
Before long, the machines could automatically change the specific cutting and shaping tools that were being used. For example, a drill machine might contain a magazine with a variety of drill bits for producing holes of various sizes. Previously, machine operators would usually have to either manually change the bit or move the work piece to another station to perform these different operations. The next logical step was to combine several different machine tools together, all under computer control. These are known as machining centers, and have dramatically changed the way parts are made.
Examples
Examples of machine tools are:
Broaching machine
Drill press
Gear shaper
Hobbing machine
Hone
Lathe
Honing machine
Screw machines
Milling machine
Shear (sheet metal)
Shaper
Bandsaw
Saws
Planer
Stewart platform mills
Grinding machines
Multitasking machines (MTMs)—CNC machine tools with many axes that combine turning, milling, grinding, and material handling into one highly automated machine tool
When fabricating or shaping parts, several techniques are used to remove unwanted metal. Among these are:
Electrical discharge machining
Grinding (abrasive cutting)
Multiple edge cutting tools
Single edge cutting tools
Other techniques are used to add desired material. Devices that fabricate components by selective addition of material are called rapid prototyping machines.
Adverse effects on humans
Adverse effects mitigations
Regulations
Machine tool manufacturing industry
The worldwide market for machine tools was approximately $81 billion in production in 2014 according to a survey by market research firm Gardner Research. The largest producer of machine tools was China with $23.8 billion of production, followed by Germany and Japan neck and neck with $12.9 billion and $12.88 billion respectively. South Korea and Italy rounded out the top five producers with revenue of $5.6 billion and $5 billion respectively.
Safety
See also
References
Bibliography
A history most specifically of Burgmaster, which specialized in turret drills; but in telling Burgmaster's story, and that of its acquirer Houdaille, Holland provides a history of the machine tool industry in general between World War II and the 1980s that ranks with Noble's coverage of the same era (Noble 1984) as a seminal history. Later republished under the title From Industry to Alchemy: Burgmaster, a Machine Tool Company.
The Moore family firm, the Moore Special Tool Company, independently invented the jig borer (contemporaneously with its Swiss invention), and Moore's monograph is a seminal classic of the principles of machine tool design and construction that yield the highest possible accuracy and precision in machine tools (second only to that of metrological machines). The Moore firm epitomized the art and science of the tool and die maker.
A seminal classic of machine tool history. Extensively cited by later works.
Collection of previously published monographs bound as one volume. A collection of seminal classics of machine tool history.
Further reading
A memoir that contains quite a bit of general history of the industry.
A monograph with a focus on history, economics, and import and export policy. Original 1976 publication: LCCN 75-046133.
One of the most detailed histories of the machine tool industry from the late 18th century through 1932. Not comprehensive in terms of firm names and sales statistics (like Floud focuses on), but extremely detailed in exploring the development and spread of practicable interchangeability, and the thinking behind the intermediate steps. Extensively cited by later works.
One of the most detailed histories of the machine tool industry from World War II through the early 1980s, relayed in the context of the social impact of evolving automation via NC and CNC.
A biography of a machine tool builder that also contains some general history of the industry.
Ryder, Thomas and Son, Machines to Make Machines 1865 to 1968, a centenary booklet (Derby: Bemrose & Sons, 1968)
External links
Milestones in the History of Machine Tools
Industrial machinery
Machines
Machining
Tools
Woodworking | Machine tool | [
"Physics",
"Technology",
"Engineering"
] | 5,643 | [
"Machines",
"Machine tools",
"Physical systems",
"Mechanical engineering",
"Industrial machinery"
] |
191,646 | https://en.wikipedia.org/wiki/OLED | An organic light-emitting diode (OLED), also known as organic electroluminescent (organic EL) diode, is a type of light-emitting diode (LED) in which the emissive electroluminescent layer is an organic compound film that emits light in response to an electric current. This organic layer is situated between two electrodes; typically, at least one of these electrodes is transparent. OLEDs are used to create digital displays in devices such as television screens, computer monitors, and portable systems such as smartphones and handheld game consoles. A major area of research is the development of white OLED devices for use in solid-state lighting applications.
There are two main families of OLED: those based on small molecules and those employing polymers. Adding mobile ions to an OLED creates a light-emitting electrochemical cell (LEC) which has a slightly different mode of operation. An OLED display can be driven with a passive-matrix (PMOLED) or active-matrix (AMOLED) control scheme. In the PMOLED scheme, each row and line in the display is controlled sequentially, one by one, whereas AMOLED control uses a thin-film transistor (TFT) backplane to directly access and switch each individual pixel on or off, allowing for higher resolution and larger display sizes. OLEDs are fundamentally different from LEDs, which are based on a p-n diode crystalline solid structure. In LEDs, doping is used to create p- and n-regions by changing the conductivity of the host semiconductor. OLEDs do not employ a crystalline p-n structure. Doping of OLEDs is used to increase radiative efficiency by direct modification of the quantum-mechanical optical recombination rate. Doping is additionally used to determine the wavelength of photon emission.
OLED displays are made in a similar way to LCDs, including manufacturing of several displays on a mother substrate that is later thinned and cut into several displays. Substrates for OLED displays come in the same sizes as those used for manufacturing LCDs. For OLED manufacture, after the formation of TFTs (for active matrix displays), addressable grids (for passive matrix displays), or indium tin oxide (ITO) segments (for segment displays), the display is coated with hole injection, transport and blocking layers, as well as with electroluminescent material after the first two layers, after which ITO or metal may be applied again as a cathode. Later, the entire stack of materials is encapsulated. The TFT layer, addressable grid, or ITO segments serve as or are connected to the anode, which may be made of ITO or metal. OLEDs can be made flexible and transparent, with transparent displays being used in smartphones with optical fingerprint scanners and flexible displays being used in foldable smartphones.
History
André Bernanose and co-workers at the Nancy-Université in France made the first observations of electroluminescence in organic materials in the early 1950s. They applied high alternating voltages in air to materials such as acridine orange dye, either deposited on or dissolved in cellulose or cellophane thin films. The proposed mechanism was either direct excitation of the dye molecules or excitation of electrons.
In 1960, Martin Pope and some of his co-workers at New York University in the United States developed ohmic dark-injecting electrode contacts to organic crystals. They further described the necessary energetic requirements (work functions) for hole and electron injecting electrode contacts. These contacts are the basis of charge injection in all modern OLED devices. Pope's group also first observed direct current (DC) electroluminescence under vacuum on a single pure crystal of anthracene and on anthracene crystals doped with tetracene in 1963 using a small area silver electrode at 400 volts. The proposed mechanism was field-accelerated electron excitation of molecular fluorescence.
Pope's group reported in 1965 that in the absence of an external electric field, the electroluminescence in anthracene crystals is caused by the recombination of a thermalized electron and hole, and that the conducting level of anthracene is higher in energy than the exciton energy level. Also in 1965, Wolfgang Helfrich and W. G. Schneider of the National Research Council in Canada produced double injection recombination electroluminescence for the first time in an anthracene single crystal using hole and electron injecting electrodes, the forerunner of modern double-injection devices. In the same year, Dow Chemical researchers patented a method of preparing electroluminescent cells using high-voltage (500–1500 V) AC-driven (100–3000Hz) electrically insulated one millimetre thin layers of a melted phosphor consisting of ground anthracene powder, tetracene, and graphite powder. Their proposed mechanism involved electronic excitation at the contacts between the graphite particles and the anthracene molecules.
The first polymer LED (PLED) was created by Roger Partridge at the National Physical Laboratory in the United Kingdom. It used a film of polyvinylcarbazole up to 2.2 micrometers thick located between two charge-injecting electrodes. The light generated was readily visible in normal lighting conditions, though the polymer used had two limitations: low conductivity and the difficulty of injecting electrons. Later development of conjugated polymers would allow others to largely eliminate these problems. His contribution has often been overlooked due to the secrecy NPL imposed on the project. When it was patented in 1974 it was given a deliberately obscure "catch all" name, while the government's Department for Industry tried and failed to find industrial collaborators to fund further development.
Practical OLEDs
Chemists Ching Wan Tang and Steven Van Slyke at Eastman Kodak built the first practical OLED device in 1987. This device used a two-layer structure with separate hole transporting and electron transporting layers such that recombination and light emission occurred in the middle of the organic layer; this resulted in a reduction in operating voltage and improvements in efficiency.
Research into polymer electroluminescence culminated in 1990, with J. H. Burroughes at the Cavendish Laboratory at Cambridge University, UK, reporting a high-efficiency green light-emitting polymer-based device using 100nm thick films of poly(p-phenylene vinylene). Moving from molecular to macromolecular materials solved the problems previously encountered with the long-term stability of the organic films and enabled high-quality films to be easily made. Subsequent research developed multilayer polymers and the new field of plastic electronics and OLED research and device production grew rapidly. White OLEDs, pioneered by J. Kido et al. at Yamagata University, Japan in 1995, achieved the commercialization of OLED-backlit displays and lighting.
In 1999, Kodak and Sanyo entered into a partnership to jointly research, develop, and produce OLED displays. They announced the world's first 2.4-inch active-matrix, full-color OLED display in September the same year. In September 2002, they presented a prototype of a 15-inch HDTV-format display based on white OLEDs with color filters at CEATEC Japan.
Manufacturing of small molecule OLEDs was started in 1997 by Pioneer Corporation, followed by TDK in 2001 and Samsung-NEC Mobile Display (SNMD), which later became one of the world's largest OLED display manufacturers - Samsung Display, in 2002.
The Sony XEL-1, released in 2007, was the first OLED television. Universal Display Corporation, one of the OLED materials companies, holds a number of patents concerning the commercialization of OLEDs that are used by major OLED manufacturers around the world.
On 5 December 2017, JOLED, the successor of Sony and Panasonic's printable OLED business units, began the world's first commercial shipment of inkjet-printed OLED panels.
Working principle
A typical OLED is composed of a layer of organic materials situated between two electrodes, the anode and cathode, all deposited on a substrate. The organic molecules are electrically conductive as a result of delocalization of pi electrons caused by conjugation over part or all of the molecule. These materials have conductivity levels ranging from insulators to conductors, and are therefore considered organic semiconductors. The highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO) of organic semiconductors are analogous to the valence and conduction bands of inorganic semiconductors.
Originally, the most basic polymer OLEDs consisted of a single organic layer. One example was the first light-emitting device synthesised by J. H. Burroughes et al., which involved a single layer of poly(p-phenylene vinylene). However multilayer OLEDs can be fabricated with two or more layers in order to improve device efficiency. As well as conductive properties, different materials may be chosen to aid charge injection at electrodes by providing a more gradual electronic profile, or block a charge from reaching the opposite electrode and being wasted. Many modern OLEDs incorporate a simple bilayer structure, consisting of a conductive layer and an emissive layer. Developments in OLED architecture in 2011 improved quantum efficiency (up to 19%) by using a graded heterojunction. In the graded heterojunction architecture, the composition of hole and electron-transport materials varies continuously within the emissive layer with a dopant emitter. The graded heterojunction architecture combines the benefits of both conventional architectures by improving charge injection while simultaneously balancing charge transport within the emissive region.
During operation, a voltage is applied across the OLED such that the anode is positive with respect to the cathode. Anodes are picked based upon the quality of their optical transparency, electrical conductivity, and chemical stability. A current of electrons flows through the device from cathode to anode, as electrons are injected into the LUMO of the organic layer at the cathode and withdrawn from the HOMO at the anode. This latter process may also be described as the injection of electron holes into the HOMO. Electrostatic forces bring the electrons and the holes towards each other and they recombine forming an exciton, a bound state of the electron and hole. This happens closer to the electron-transport layer part of the emissive layer, because in organic semiconductors holes are generally more mobile than electrons. The decay of this excited state results in a relaxation of the energy levels of the electron, accompanied by emission of radiation whose frequency is in the visible region. The frequency of this radiation depends on the band gap of the material, in this case the difference in energy between the HOMO and LUMO.
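The relationship between the HOMO-LUMO gap and the emitted wavelength is just the Planck relation. The snippet below is a minimal sketch; the 2.3 eV example gap is illustrative, not taken from a specific material.

```python
# Convert an energy gap (eV) to the emitted photon wavelength via E = h*c / lambda.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
EV = 1.602e-19   # joules per electronvolt

def emission_wavelength_nm(gap_ev: float) -> float:
    """Photon wavelength in nm for a given HOMO-LUMO gap in eV."""
    return H * C / (gap_ev * EV) * 1e9

print(round(emission_wavelength_nm(2.3)))   # ~539 nm, in the green part of the spectrum
```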
As electrons and holes are fermions with half integer spin, an exciton may either be in a singlet state or a triplet state depending on how the spins of the electron and hole have been combined. Statistically three triplet excitons will be formed for each singlet exciton. Decay from triplet states (phosphorescence) is spin forbidden, increasing the timescale of the transition and limiting the internal efficiency of fluorescent OLED emissive layers and devices. Phosphorescent organic light-emitting diodes (PHOLEDs) or emissive layers make use of spin–orbit interactions to facilitate intersystem crossing between singlet and triplet states, thus obtaining emission from both singlet and triplet states and improving the internal efficiency.
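The 3:1 triplet-to-singlet ratio sets the well-known ceiling on the internal quantum efficiency of purely fluorescent emitters, since only singlets decay radiatively in that case; the sketch below just works through that arithmetic.

```python
# Spin statistics: roughly 1 singlet exciton is formed for every 3 triplet excitons.
singlets, triplets = 1, 3

# Fluorescent emitter: only singlets emit light.
fluorescent_iqe = singlets / (singlets + triplets)
print(f"fluorescent IQE ceiling: {fluorescent_iqe:.0%}")        # 25%

# Phosphorescent (or TADF) emitter: both singlet and triplet excitons can be harvested.
phosphorescent_iqe = (singlets + triplets) / (singlets + triplets)
print(f"phosphorescent IQE ceiling: {phosphorescent_iqe:.0%}")  # 100%
```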
Indium tin oxide (ITO) is commonly used as the anode material. It is transparent to visible light and has a high work function which promotes injection of holes into the HOMO level of the organic layer. A second conductive (injection) layer is typically added, which may consist of PEDOT:PSS, as the HOMO level of this material generally lies between the work function of ITO and the HOMO of other commonly used polymers, reducing the energy barriers for hole injection. Metals such as barium and calcium are often used for the cathode as they have low work functions which promote injection of electrons into the LUMO of the organic layer. Such metals are reactive, so they require a capping layer of aluminium to avoid degradation. Two secondary benefits of the aluminum capping layer include robustness to electrical contacts and the back reflection of emitted light out to the transparent ITO layer.
Experimental research has proven that the properties of the anode, specifically the anode/hole transport layer (HTL) interface topography plays a major role in the efficiency, performance, and lifetime of organic light-emitting diodes. Imperfections in the surface of the anode decrease anode-organic film interface adhesion, increase electrical resistance, and allow for more frequent formation of non-emissive dark spots in the OLED material adversely affecting lifetime. Mechanisms to decrease anode roughness for ITO/glass substrates include the use of thin films and self-assembled monolayers. Also, alternative substrates and anode materials are being considered to increase OLED performance and lifetime. Possible examples include single crystal sapphire substrates treated with gold (Au) film anodes yielding lower work functions, operating voltages, electrical resistance values, and increasing lifetime of OLEDs.
Single carrier devices are typically used to study the kinetics and charge transport mechanisms of an organic material and can be useful when trying to study energy transfer processes. As current through the device is composed of only one type of charge carrier, either electrons or holes, recombination does not occur and no light is emitted. For example, electron only devices can be obtained by replacing ITO with a lower work function metal which increases the energy barrier of hole injection. Similarly, hole only devices can be made by using a cathode made solely of aluminium, resulting in an energy barrier too large for efficient electron injection.
Carrier balance
Balanced charge injection and transfer are required to achieve high internal efficiency, pure emission from the luminescent layer without contaminated emission from the charge-transporting layers, and high stability. A common way to balance charge is to optimize the thickness of the charge-transporting layers, but this is hard to control. Another way is to use an exciplex, formed between hole-transporting (p-type) and electron-transporting (n-type) side chains, to localize electron-hole pairs; energy is then transferred to the luminophore, providing high efficiency. For example, grafting oxadiazole and carbazole side units onto a red diketopyrrolopyrrole-doped copolymer main chain improved external quantum efficiency and color purity in an unoptimized OLED.
Material technologies
Small molecules
Organic small-molecule electroluminescent materials offer a wide variety of structures, are easy to purify, and allow extensive chemical modification. To make the luminescent materials emit light as required, chromophores or unsaturated groups such as alkene bonds and benzene rings are usually introduced in the molecular structure design to change the extent of the material's conjugation, so that its photophysical properties change. In general, the larger the π-electron conjugation system, the longer the wavelength of light emitted by the material. For instance, as the number of benzene rings increases, the fluorescence emission peak of benzene, naphthalene, anthracene, and tetracene gradually red-shifts from 283 nm to 480 nm. Common organic small-molecule electroluminescent materials include aluminum complexes, anthracenes, biphenyl acetylene aryl derivatives, coumarin derivatives, and various fluorochromes. Efficient OLEDs using small molecules were first developed by Ching W. Tang et al. at Eastman Kodak. The term OLED traditionally refers specifically to this type of device, though the term SM-OLED is also in use.
Molecules commonly used in OLEDs include organometallic chelates (for example Alq3, used in the organic light-emitting device reported by Tang et al.), fluorescent and phosphorescent dyes and conjugated dendrimers. A number of materials are used for their charge transport properties, for example triphenylamine and derivatives are commonly used as materials for hole transport layers. Fluorescent dyes can be chosen to obtain light emission at different wavelengths, and compounds such as perylene, rubrene and quinacridone derivatives are often used. Alq3 has been used as a green light emitter, electron transport material and as a host for yellow light and red light emitting dyes.
Because of the structural flexibility of small-molecule electroluminescent materials, thin films can be prepared by vacuum vapor deposition, which is more expensive and of limited use for large-area devices. The vacuum coating system, however, can carry out the entire process, from film growth to OLED device preparation, in a controlled and complete operating environment, helping to obtain uniform and stable films and thus ensuring the final fabrication of high-performance OLED devices. However, small-molecule organic dyes are prone to fluorescence quenching in the solid state, resulting in lower luminescence efficiency. The doped OLED devices are also prone to crystallization, which reduces the luminescence and efficiency of the devices. Therefore, the development of devices based on small-molecule electroluminescent materials is limited by high manufacturing costs, poor stability, short life, and other shortcomings. Coherent emission from a laser dye-doped tandem SM-OLED device, excited in the pulsed regime, has been demonstrated. The emission is nearly diffraction limited with a spectral width similar to that of broadband dye lasers.
Researchers have reported luminescence from a single polymer molecule, representing the smallest possible organic light-emitting diode (OLED) device. This may allow scientists to optimize substances to produce more powerful light emissions. The work is also a first step towards making molecule-sized components that combine electronic and optical properties; similar components could form the basis of a molecular computer.
Polymer light-emitting diodes
Polymer light-emitting diodes (PLED, P-OLED), also light-emitting polymers (LEP), involve an electroluminescent conductive polymer that emits light when connected to an external voltage. They are used as a thin film for full-spectrum colour displays. Polymer OLEDs are quite efficient and require a relatively small amount of power for the amount of light produced.
Vacuum deposition is not a suitable method for forming thin films of polymers. If the polymeric OLED films are made by vacuum vapor deposition, the chain elements will be cut off and the original photophysical properties will be compromised. However, polymers can be processed in solution, and spin coating is a common method of depositing thin polymer films. This method is more suited to forming large-area films than thermal evaporation. No vacuum is required, and the emissive materials can also be applied on the substrate by a technique derived from commercial inkjet printing. However, as the application of subsequent layers tends to dissolve those already present, formation of multilayer structures is difficult with these methods. The metal cathode may still need to be deposited by thermal evaporation in vacuum. An alternative method to vacuum deposition is to deposit a Langmuir-Blodgett film.
Typical polymers used in PLED displays include derivatives of poly(p-phenylene vinylene) and polyfluorene. Substitution of side chains onto the polymer backbone may determine the colour of emitted light or the stability and solubility of the polymer for performance and ease of processing.
While unsubstituted poly(p-phenylene vinylene) (PPV) is typically insoluble, a number of PPVs and related poly(naphthalene vinylene)s (PNVs) that are soluble in organic solvents or water have been prepared via ring opening metathesis polymerization. These water-soluble polymers or conjugated poly electrolytes (CPEs) also can be used as hole injection layers alone or in combination with nanoparticles like graphene.
Phosphorescent materials
Phosphorescent organic light-emitting diodes use the principle of electrophosphorescence to convert electrical energy in an OLED into light in a highly efficient manner, with the internal quantum efficiencies of such devices approaching 100%. PHOLEDs can be deposited using vacuum deposition through a shadow mask.
Typically, a polymer such as poly(N-vinylcarbazole) is used as a host material to which an organometallic complex is added as a dopant. Iridium complexes such as Ir(mppy)3 as of 2004 were a focus of research, although complexes based on other heavy metals such as platinum have also been used.
The heavy metal atom at the centre of these complexes exhibits strong spin-orbit coupling, facilitating intersystem crossing between singlet and triplet states. By using these phosphorescent materials, both singlet and triplet excitons will be able to decay radiatively, hence improving the internal quantum efficiency of the device compared to a standard OLED where only the singlet states will contribute to emission of light.
Applications of OLEDs in solid state lighting require the achievement of high brightness with good CIE coordinates (for white emission). The use of macromolecular species like polyhedral oligomeric silsesquioxanes (POSS) in conjunction with the use of phosphorescent species such as Ir for printed OLEDs have exhibited brightnesses as high as 10,000cd/m2.
Device architectures
Structure
Bottom emission
The bottom-emission organic light-emitting diode (BE-OLED) is the architecture that was used in early-stage AMOLED displays. It had a transparent anode fabricated on a glass substrate, and a shiny reflective cathode. Light is emitted from the transparent anode direction. To reflect all the light towards the anode direction, a relatively thick metal cathode such as aluminum is used. For the anode, high-transparency indium tin oxide (ITO) was a typical choice to emit as much light as possible. Organic thin-films, including the emissive layer that actually generates the light, are then sandwiched between the ITO anode and the reflective metal cathode. The downside of the bottom-emission structure is that the light has to travel through the pixel drive circuitry on the thin-film transistor (TFT) substrate, so the area from which light can be extracted is limited and the light-emission efficiency is reduced.
Top emission
An alternative configuration is to switch the mode of emission. A reflective anode, and a transparent (or more often semi-transparent) cathode are used so that the light emits from the cathode side, and this configuration is called top-emission OLED (TE-OLED). Unlike BEOLEDs where the anode is made of transparent conductive ITO, this time the cathode needs to be transparent, and the ITO material is not an ideal choice for the cathode because of a damage issue due to the sputtering process. Thus, a thin metal film such as pure Ag and the Mg:Ag alloy are used for the semi-transparent cathode due to their high transmittance and high conductivity. In contrast to the bottom emission, light is extracted from the opposite side in top emission without the need of passing through multiple drive circuit layers. Thus, the light generated can be extracted more efficiently.
Improvements
Deuterium
Using deuterium instead of hydrogen (in other words, deuterated compounds) in the red, green, blue, and white light-emitting material layers of OLED displays, and in other nearby layers, can improve their brightness by up to 30%. This is achieved by improving the current-handling capacity and lifespan of these materials.
Micro Lens Array (MLA)
Making indentations shaped like lenses on a transparent layer through which light passes from an OLED light-emitting material reduces the amount of scattered light within the display and directs it forward, improving brightness.
Micro-cavity theory
When light waves meet while traveling along the same medium, wave interference occurs. This interference can be constructive or destructive. It is sometimes desirable for several waves of the same frequency to sum up into a wave with higher amplitudes.
Since both electrodes are reflective in TEOLED, light reflections can happen within the diode, and they cause more complex interferences than those in BEOLEDs. In addition to the two-beam interference, there exists a multi-resonance interference between the two electrodes. Because the structure of TEOLEDs is similar to that of the Fabry-Perot resonator or laser resonator (which contains two parallel mirrors comparable to the two reflective electrodes), this effect is especially strong in TEOLED. This two-beam interference and the Fabry-Perot interferences are the main factors in determining the output spectral intensity of OLED. This optical effect is called the "micro-cavity effect."
In the case of OLED, that means the cavity in a TEOLED can be specifically designed to enhance the light output intensity and color purity within a narrow band of wavelengths, without consuming more power. In TEOLEDs the microcavity effect commonly occurs, and knowing when and how to restrain or exploit this effect is indispensable for device design. To match the conditions of constructive interference, different layer thicknesses are applied according to the resonance wavelength of each color. The thickness conditions are carefully designed and engineered according to the peak resonance emitting wavelengths of the blue (460 nm), green (530 nm), and red (610 nm) pixels. This technology greatly improves the light-emission efficiency of OLEDs and achieves a wider color gamut due to high color purity.
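As a rough illustration, the resonance condition of an idealized Fabry-Perot cavity gives the order of magnitude of the required cavity thickness. The sketch below assumes a hypothetical effective refractive index of 1.8 and ignores the phase shifts at the electrode mirrors, which a real design must account for.

```python
# Idealized Fabry-Perot resonance: optical path = m * (lambda / 2),
# so physical thickness L = m * lambda / (2 * n_eff).
def cavity_thickness_nm(wavelength_nm: float, n_eff: float = 1.8, m: int = 1) -> float:
    """First-order physical cavity thickness in nm (mirror phase shifts ignored)."""
    return m * wavelength_nm / (2.0 * n_eff)

for color, wl in [("blue", 460), ("green", 530), ("red", 610)]:
    print(f"{color}: {cavity_thickness_nm(wl):.0f} nm")   # ~128, ~147, ~169 nm
```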
Color filters
In "white + color filter method", also known as WOLED, red, green, and blue emissions are obtained from the same white-light LEDs using different color filters. With this method, the OLED materials produce white light, which is then filtered to obtain the desired RGB colors. This method eliminated the need to deposit three different organic emissive materials side by side, so only one kind of OLED material per layer is used to produce white light. It also eliminated the uneven degradation rate of blue pixels vs. red and green pixels. Disadvantages of this method are low color purity and contrast. Also, the filters absorb most of the emitted light, requiring the background white light to be relatively strong to compensate for the drop in brightness, and thus the power consumption for such displays can be higher.
Color filters can also be implemented into bottom- and top-emission OLEDs. By adding the corresponding RGB color filters after the semi-transparent cathode, even purer wavelengths of light can be obtained. The use of a microcavity in top-emission OLEDs with color filters also contributes to an increase in the contrast ratio by reducing the reflection of incident ambient light. In a conventional panel, a circular polarizer was installed on the panel surface. While this was provided to prevent the reflection of ambient light, it also reduced the light output. By replacing this polarizing layer with color filters, the light intensity is not affected, and essentially all ambient reflected light can be cut, allowing a better contrast on the display panel. This potentially reduced the need for brighter pixels and can lower the power consumption.
Other architectures
Transparent OLEDs
Transparent OLEDs use transparent or semi-transparent contacts on both sides of the device to create displays that can be made to be both top and bottom emitting (transparent). TOLEDs can greatly improve contrast, making it much easier to view displays in bright sunlight. This technology can be used in Head-up displays, smart windows or augmented reality applications.
Graded heterojunction
Graded heterojunction OLEDs gradually decrease the ratio of electron holes to electron transporting chemicals. This results in almost double the quantum efficiency of existing OLEDs.
Stacked OLEDs
Stacked OLEDs use a pixel architecture that stacks the red, green, and blue subpixels on top of one another instead of next to one another, leading to substantial increase in gamut and color depth, and greatly reducing pixel gap. Other display technologies with RGB (and RGBW) pixels mapped next to each other, tend to decrease potential resolution.
Tandem OLEDs are similar but have two layers of the same color stacked together. This improves the brightness of OLED displays.
Inverted OLED
In contrast to a conventional OLED, in which the anode is placed on the substrate, an inverted OLED uses a bottom cathode that can be connected to the drain end of an n-channel TFT, especially for the low-cost amorphous silicon TFT backplane useful in the manufacturing of AMOLED displays.
All OLED displays (passive and active matrix) use a driver IC, often mounted using the chip-on-glass (COG) technology with an anisotropic conductive film.
Color patterning technologies
Shadow mask patterning method
The most commonly used patterning method for organic light-emitting displays is shadow masking during film deposition, also called the "RGB side-by-side" method or "RGB pixelation" method. Metal sheets with multiple apertures made of low thermal expansion material, such as nickel alloy, are placed between the heated evaporation source and substrate, so that the organic or inorganic material from the evaporation source is masked off, or blocked by the sheet from reaching the substrate in most locations, so the materials are deposited only on the desired locations on the substrate, and the rest is deposited and remains on the sheet. Almost all small OLED displays for smartphones have been manufactured using this method.
Fine metal masks (FMMs) made by photochemical machining, reminiscent of old CRT shadow masks, are used in this process. The dot density of the mask determines the pixel density of the finished display. Fine Hybrid Masks (FHMs) are lighter than FMMs, reducing bending caused by the mask's own weight, and are made using an electroforming process.
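The link between mask dot density and pixel density is simple geometry: the aperture pitch of the mask sets the pixel pitch. A minimal sketch, using a hypothetical 500 PPI panel as an example:

```python
# Pixel pitch (centre-to-centre spacing) from pixel density:
# 1 inch = 25,400 micrometres, so pitch_um = 25,400 / PPI.
def pixel_pitch_um(ppi: float) -> float:
    """Pixel pitch in micrometres for a given pixels-per-inch value."""
    return 25_400.0 / ppi

print(round(pixel_pitch_um(500), 1))   # hypothetical 500 PPI panel -> 50.8 um pitch
```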
This method requires heating the electroluminescent materials to 300 °C using a thermal method in a high vacuum of 10 Pa. An oxygen meter ensures that no oxygen enters the chamber, as it could damage the electroluminescent material (which is in powder form) through oxidation. The mask is aligned with the mother substrate before every use and is placed just below the substrate; the substrate and mask assembly are placed at the top of the deposition chamber. Afterwards, the electrode layer is deposited by subjecting silver and aluminum powder to 1000 °C using an electron beam. Shadow masks allow for the high pixel densities necessary for virtual reality headsets.
White + color filter method (WOLED)
Although the shadow-mask patterning method is a mature technology used from the first OLED manufacturing, it causes many issues like dark spot formation due to mask-substrate contact or misalignment of the pattern due to the deformation of shadow mask. Such defect formation can be regarded as trivial when the display size is small, however it causes serious issues when a large display is manufactured, which brings significant production yield loss. To circumvent such issues, white emission devices with 4-sub-pixel color filters (white, red, green and blue) have been used for large televisions. In spite of the light absorption by the color filter, state-of-the-art OLED televisions can reproduce color very well, such as 100% NTSC, and consume little power at the same time. This is done by using an emission spectrum with high human-eye sensitivity, special color filters with a low spectrum overlap, and performance tuning with color statistics into consideration.
This approach is also called the "Color-by-white" method.
Other color patterning approaches
There are other types of emerging patterning technologies to increase the manufacturability of OLEDs.
Patternable organic light-emitting devices use a light or heat activated electroactive layer. A latent material (PEDOT-TMA) is included in this layer that, upon activation, becomes highly efficient as a hole injection layer. Using this process, light-emitting devices with arbitrary patterns can be prepared.
Colour patterning can be accomplished by means of a laser, such as a radiation-induced sublimation transfer (RIST).
Organic vapour jet printing (OVJP) uses an inert carrier gas, such as argon or nitrogen, to transport evaporated organic molecules (as in organic vapour phase deposition). The gas is expelled through a micrometre-sized nozzle or nozzle array close to the substrate as it is being translated. This allows printing arbitrary multilayer patterns without the use of solvents.
Like ink jet material deposition, inkjet etching (IJE) deposits precise amounts of solvent onto a substrate designed to selectively dissolve the substrate material and induce a structure or pattern. Inkjet etching of polymer layers in OLEDs can be used to increase the overall out-coupling efficiency. In OLEDs, light produced from the emissive layers of the OLED is partially transmitted out of the device and partially trapped inside the device by total internal reflection (TIR). This trapped light is wave-guided along the interior of the device until it reaches an edge where it is dissipated by either absorption or emission. Inkjet etching can be used to selectively alter the polymeric layers of OLED structures to decrease overall TIR and increase out-coupling efficiency of the OLED. Compared to a non-etched polymer layer, the structured polymer layer in the OLED structure from the IJE process helps to decrease the TIR of the OLED device. IJE solvents are commonly organic instead of water-based due to their non-acidic nature and ability to effectively dissolve materials at temperatures under the boiling point of water.
Transfer-printing is an emerging technology to assemble large numbers of parallel OLED and AMOLED devices efficiently. It takes advantage of standard metal deposition, photolithography, and etching to create alignment marks, commonly on glass or other device substrates. Thin polymer adhesive layers are applied to enhance resistance to particles and surface defects. Microscale ICs are transfer-printed onto the adhesive surface and then baked to fully cure the adhesive layers. An additional photosensitive polymer layer is applied to the substrate to account for the topography caused by the printed ICs, reintroducing a flat surface. Photolithography and etching remove some polymer layers to uncover conductive pads on the ICs. Afterwards, the anode layer is applied to the device backplane to form the bottom electrode. OLED layers are applied to the anode layer with conventional vapor deposition, and covered with a conductive metal electrode layer. Transfer-printing has been capable of printing onto target substrates up to 500 mm × 400 mm. This size limit needs to expand for transfer-printing to become a common process for the fabrication of large OLED/AMOLED displays.
Experimental OLED displays using conventional photolithography techniques instead of FMMs have been demonstrated, allowing for large substrate sizes (as it eliminates the need for a mask that needs to be as large as the substrate) and good yield control. Visionox has announced the use of photolithography for depositing OLED emissive materials.
Thin-film transistor backplanes
For a high resolution display like a TV, a thin-film transistor (TFT) backplane is necessary to drive the pixels correctly. As of 2019, low-temperature polycrystalline silicon (LTPS) TFTs are widely used for commercial AMOLED displays due to their superior current-handling capacity over amorphous silicon (a-Si) TFTs. LTPS TFTs show performance variation across a display, so various compensation circuits have been reported. Due to the size limitation of the excimer laser used for LTPS, the AMOLED size was limited. To cope with the hurdle related to panel size, amorphous-silicon/microcrystalline-silicon backplanes have been reported with large display prototype demonstrations. An indium gallium zinc oxide (IGZO) backplane can also be used. Large OLED displays usually use AOS (amorphous oxide semiconductor) TFTs instead, also called oxide TFTs, and these are usually based on IGZO.
Many AMOLED displays use LTPO TFT transistors. These transistors offer stability at low refresh rates, and variable refresh rates, which allows for power saving displays that do not show visual artifacts.
Advantages
The different manufacturing process of OLEDs has several advantages over flat panel displays made with LCD technology.
Lower cost in the future: OLEDs can be printed onto any suitable substrate by an inkjet printer or even by screen printing, theoretically making them cheaper to produce than LCD or plasma displays. However, fabrication of the OLED substrate as of 2018 is costlier than that for TFT LCDs. Roll-to-roll vapor-deposition methods for organic devices do allow mass production of thousands of devices per minute for minimal cost; however, this technique also induces problems: devices with multiple layers can be challenging to make because of registration, i.e., lining up the different printed layers to the required degree of accuracy.
Lightweight and flexible plastic substrates: OLED displays can be fabricated on flexible plastic substrates, leading to the possible fabrication of flexible organic light-emitting diodes for other new applications, such as roll-up displays embedded in fabrics or clothing. If a substrate like polyethylene terephthalate (PET) can be used, the displays may be produced inexpensively. Furthermore, plastic substrates are shatter-resistant, unlike the glass displays used in LCD devices. Flexible OLED displays are made on polyimide plastic films which are bonded to glass panels during production. Once the OLED display is encapsulated, a laser is used to separate the plastic from the glass in a laser lift-off (LLO) process.
Power efficiency: LCDs filter the light emitted from a backlight, allowing a small fraction of light through. Thus, they cannot show true black. However, an inactive OLED element does not produce light or consume power, allowing true blacks. Removing the backlight also makes OLEDs lighter because some substrates are not needed.
Response time: OLEDs also have a much faster response time than an LCD. Using response time compensation technologies, the fastest modern LCDs can reach response times as low as 1 ms for their fastest color transition, and are capable of refresh frequencies as high as 240 Hz. According to LG, OLED response times are up to 1,000 times faster than LCD, putting conservative estimates at under 10 μs (0.01 ms), which could theoretically accommodate refresh frequencies approaching 100 kHz (100,000 Hz). Due to their extremely fast response time, OLED displays can also be easily designed to be strobed, creating an effect similar to CRT flicker in order to avoid the sample-and-hold behavior seen on both LCDs and some OLED displays, which creates the perception of motion blur.
High dynamic range support: Because OLEDs can turn off individual pixels, showing true black, the contrast ratio of an OLED display can be very large, which allows for representation of high dynamic range (HDR) images and video at high quality. Data must be encoded with an HDR format to display in HDR, and HDR format support varies by OLED display. Maximum (peak) brightness also varies by OLED display, which impacts the dynamic range that can be represented.
Disadvantages
Lifespan
The biggest technical problem for OLEDs is the limited lifetime of the organic materials. One 2008 technical report on an OLED TV panel found that after 1,000 hours, the blue luminance degraded by 12%, the red by 7% and the green by 8%. In particular, blue OLEDs at that time had a lifetime of around 14,000 hours to half original brightness (five years at eight hours per day) when used for flat-panel displays. This is lower than the typical lifetime of LCD, LED or PDP technology, each rated for about 25,000–40,000 hours to half brightness, depending on manufacturer and model. One major challenge for OLED displays is the formation of dark spots due to the ingress of oxygen and moisture, which degrades the organic material over time whether or not the display is powered. In 2016, LG Electronics reported an expected lifetime of 100,000 hours, up from 36,000 hours in 2013. A US Department of Energy paper shows that the expected lifespans of OLED lighting products go down with increasing brightness, with an expected lifespan of 40,000 hours at 25% brightness, or 10,000 hours at 100% brightness. Compared to LCDs, OLEDs may be more susceptible to screen burn-in and/or brightness degradation.
Degradation
Degradation occurs because of the accumulation of nonradiative recombination centers and luminescence quenchers in the emissive zone. It is said that the chemical breakdown in the semiconductors occurs in four steps:
recombination of charge carriers through the absorption of UV light
homolytic dissociation
subsequent radical addition reactions that form radicals
disproportionation between two radicals resulting in hydrogen-atom transfer reactions
In 2007, experimental OLEDs were created which could sustain 400 cd/m2 of luminance for over 198,000 hours for green OLEDs and 62,000 hours for blue OLEDs. In 2012, OLED lifetime to half of the initial brightness was improved to 900,000 hours for red, 1,450,000 hours for yellow and 400,000 hours for green at an initial luminance of 1,000 cd/m2. Proper encapsulation is critical for prolonging an OLED display's lifetime, as the OLED light-emitting electroluminescent materials are sensitive to oxygen and moisture. When exposed to moisture or oxygen, the electroluminescent materials in OLEDs degrade as they oxidize, generating black spots and shrinking the area that emits light, reducing light output. This reduction can occur on a pixel-by-pixel basis. This can also lead to delamination of the electrode layer, eventually leading to complete panel failure.
Degradation occurs three orders of magnitude faster when exposed to moisture than when exposed to oxygen. Encapsulation can be performed by applying an epoxy adhesive with desiccant, by laminating a glass sheet with epoxy glue and desiccant followed by vacuum degassing, or by using thin-film encapsulation (TFE), which is a multi-layer coating of alternating organic and inorganic layers. The organic layers are applied using inkjet printing, and the inorganic layers are applied using atomic layer deposition (ALD). The encapsulation process is carried out under a nitrogen environment, using UV-curable LOCA glue, and the electroluminescent and electrode material deposition processes are carried out under a high vacuum. The encapsulation and material deposition processes are carried out by a single machine, after the thin-film transistors have been applied. The transistors are applied in a process that is the same as for LCDs. The electroluminescent materials can also be applied using inkjet printing.
Color balance
The OLED material used to produce blue light degrades much more rapidly than the materials used to produce other colors; in other words, blue light output will decrease relative to the other colors of light. This variation in the differential color output will change the color balance of the display, and is much more noticeable than a uniform decrease in overall luminance. This can be avoided partially by adjusting the color balance, but this may require advanced control circuits and input from a knowledgeable user. More commonly, though, manufacturers optimize the size of the R, G and B subpixels to reduce the current density through the subpixel in order to equalize lifetime at full luminance. For example, a blue subpixel may be 75% larger than the green subpixel. The red subpixel may be 10% larger than the green.
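The subpixel-sizing trade-off follows from the fact that, for a given drive current, current density scales inversely with emitting area. A minimal sketch using the area ratios quoted above:

```python
# For a fixed drive current I, current density J = I / A, so enlarging a
# subpixel by a factor s lowers its current density to 1/s of the original.
def relative_current_density(area_scale: float) -> float:
    return 1.0 / area_scale

print(f"blue (+75% area): {relative_current_density(1.75):.2f}x")  # ~0.57x
print(f"red  (+10% area): {relative_current_density(1.10):.2f}x")  # ~0.91x
```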
Efficiency of blue OLEDs
Improvements to the efficiency and lifetime of blue OLEDs is vital to the success of OLEDs as replacements for LCD technology. Considerable research has been invested in developing blue OLEDs with high external quantum efficiency, as well as a deeper blue color.
Since 2012, research has focused on organic materials exhibiting thermally activated delayed fluorescence (TADF), discovered at Kyushu University OPERA and UC Santa Barbara CPOS. TADF would allow stable and high-efficiency solution-processable blue emitters (meaning that the organic materials are layered in solution, producing thinner layers), with internal quantum efficiencies reaching 100%. Early in 2017, TADF materials based on oxygen-based, fully bridged boron-type electron acceptors achieved a major breakthrough in their properties. The external quantum efficiency of TADF-OLEDs for blue and green light reached 38%, with a narrow full-width at half-maximum and high color purity. In 2022, Han et al. synthesized a new D-A type luminescent material, TDBA-Cz, and used the m-AC-DBNA synthesized by Meng et al. as a control to investigate how the substitution site of the carbazole unit (as an electron donor) on the oxygen-bridged triphenylboron electron acceptor unit affects the photophysical properties of the overall molecule. It was found that introducing two carbazole units onto the same benzene ring of the oxygen-bridged triphenylboron electron acceptor unit could effectively suppress the conformational relaxation of the molecule during the radiative transition, resulting in narrow-bandwidth blue light emission. In addition, TDBA-Cz is the first reported blue material to achieve both a FWHM down to 45 nm and a maximum EQE of 21.4% in a non-doped TADF-OLED.
Blue TADF emitters were expected to reach the market by 2020 and to be used for WOLED displays with phosphorescent color filters, as well as blue OLED displays with ink-printed QD color filters.
Water damage
Water can instantly damage the organic materials of the displays. Therefore, improved sealing processes are important for practical manufacturing. Water damage especially may limit the longevity of more flexible displays.
Outdoor performance
As an emissive display technology, OLEDs rely completely upon converting electricity to light, unlike most LCDs which are to some extent reflective. E-paper leads the way in efficiency with ~ 33% ambient light reflectivity, enabling the display to be used without any internal light source. The metallic cathode in an OLED acts as a mirror, with reflectance approaching 80%, leading to poor readability in bright ambient light such as outdoors. However, with the proper application of a circular polarizer and antireflective coatings, the diffuse reflectance can be reduced to less than 0.1%. With 10,000 fc incident illumination (typical test condition for simulating outdoor illumination), that yields an approximate photopic contrast of 5:1. Advances in OLED technologies, however, enable OLEDs to become actually better than LCDs in bright sunlight. The AMOLED display in the Galaxy S5, for example, was found to outperform all LCD displays on the market in terms of power usage, brightness and reflectance.
Power consumption
While an OLED will consume around 40% of the power of an LCD displaying an image that is primarily black, for the majority of images it will consume 60–80% of the power of an LCD. However, an OLED can use more than 300% power to display an image with a white background, such as a document or web site. This can lead to reduced battery life in mobile devices when white backgrounds are used.
Screen flicker
Many OLEDs use pulse-width modulation (PWM) to display colour/brightness gradations. For example, a pixel instructed to display gray will flicker on and off rapidly, creating a subtle strobe effect. The alternative way to decrease brightness would be to decrease power to the display, which would eliminate screen flicker but to the detriment of colour balance, which deteriorates as brightness decreases. However, use of PWM gradations may be more harmful to eye health.
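A brightness level under PWM dimming corresponds directly to a duty cycle, so the pixel is driven at full intensity for only a fraction of each PWM period. The sketch below assumes a hypothetical 240 Hz PWM frequency purely for illustration.

```python
# PWM dimming: perceived brightness is set by the duty cycle (fraction of the
# period the pixel is on), not by lowering the drive level.
def on_time_us(duty_cycle: float, pwm_hz: float = 240.0) -> float:
    """Microseconds the pixel stays lit in each PWM period."""
    period_us = 1e6 / pwm_hz
    return duty_cycle * period_us

# A pixel showing 50% gray is lit for roughly half of each ~4,167 us period.
print(round(on_time_us(0.5)))   # ~2083 us on, ~2083 us off
```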
Manufacturers and commercial uses
Almost all OLED manufacturers rely on material deposition equipment that is only made by a handful of companies, the most notable one being Canon Tokki, a unit of Canon Inc. although Ulvac and Sunic System are also notable. Canon Tokki is reported to have a near-monopoly of the giant OLED-manufacturing vacuum machines, notable for their size. Apple has relied solely on Canon Tokki in its bid to introduce its own OLED displays for the iPhones released in 2017. The electroluminescent materials needed for OLEDs are also made by a handful of companies, some of them being Merck, Universal Display Corporation and LG Chem. The machines that apply these materials can operate continuously for 5–6 days, and can process a mother substrate in 5 minutes.
OLED displays are mainly made by Samsung Display and LG Display. OLED technology is used in commercial applications such as displays for mobile phones and portable digital media players, car radios and digital cameras among others, as well as lighting. Such portable display applications favor the high light output of OLEDs for readability in sunlight and their low power drain. Portable displays are also used intermittently, so the lower lifespan of organic displays is less of an issue. Prototypes have been made of flexible and rollable displays which use OLEDs' unique characteristics. Applications in flexible signs and lighting are also being developed. OLED lighting offers several advantages over LED lighting, such as higher quality illumination, more diffuse light source, and panel shapes. Philips Lighting has made OLED lighting samples under the brand name "Lumiblade" available online and Novaled AG based in Dresden, Germany, introduced a line of OLED desk lamps called "Victory" in September, 2011.
Nokia introduced OLED mobile phones including the N85 and the N86 8MP, both of which feature an AMOLED display. OLEDs have also been used in most Motorola and Samsung color cell phones, as well as some HTC, LG and Sony Ericsson models. OLED technology can also be found in digital media players such as the Creative ZEN V, the iriver clix, the Zune HD and the Sony Walkman X Series.
The Google and HTC Nexus One smartphone includes an AMOLED screen, as does HTC's own Desire and Legend phones. However, due to supply shortages of the Samsung-produced displays, certain HTC models will use Sony's SLCD displays in the future, while the Google and Samsung Nexus S smartphone will use "Super Clear LCD" instead in some countries.
OLED displays were used in watches made by Fossil (JR-9465) and Diesel (DZ-7086). Other manufacturers of OLED panels include Anwell Technologies Limited (Hong Kong), AU Optronics (Taiwan), Chimei Innolux Corporation (Taiwan), LG (Korea), and others.
DuPont stated in a press release in May 2010 that it could produce a 50-inch OLED TV in two minutes with a new printing technology. If this can be scaled up in terms of manufacturing, the total cost of OLED TVs would be greatly reduced. DuPont also stated that OLED TVs made with this less expensive technology can last up to 15 years if left on for a normal eight-hour day.
The use of OLEDs may be subject to patents held by Universal Display Corporation, Eastman Kodak, DuPont, General Electric, Royal Philips Electronics, numerous universities and others. By 2008, thousands of patents associated with OLEDs had come from larger corporations and smaller technology companies.
Flexible OLED displays have been used by manufacturers to create curved displays such as the Galaxy S7 Edge, but these were not devices that could be flexed by the user. Samsung demonstrated a roll-out display in 2016.
On 31 October 2018, Royole, a Chinese electronics company, unveiled the world's first foldable screen phone featuring a flexible OLED display. On 20 February 2019, Samsung announced the Samsung Galaxy Fold with a foldable OLED display from Samsung Display, its majority-owned subsidiary. At MWC 2019 on 25 February 2019, Huawei announced the Huawei Mate X featuring a foldable OLED display from BOE.
The 2010s also saw the wide adoption of tracking gate-line in pixel (TGP), which moves the driving circuitry from the borders of the display to in between the display's pixels, allowing for narrow bezels.
In 2023, the German startup Inuru announced that it would manufacture low-cost printed OLEDs for packaging and fashion applications.
Fashion
Textiles incorporating OLEDs are an innovation in the fashion world and offer a way to integrate lighting into inert objects, bringing them to a whole new level of fashion. The hope is to combine the comfort and low-cost properties of textiles with the illumination and low energy consumption of OLEDs. Although this scenario of illuminated clothing is highly plausible, challenges remain. Some issues include the lifetime of the OLED, the rigidity of flexible foil substrates, and the lack of research into making more fabric-like photonic textiles.
Automotive
The use of OLEDs by automakers is still rare and limited to the high end of the market. For example, the 2010 Lexus RX features an OLED display instead of a thin-film-transistor (TFT-LCD) display.
The Japanese manufacturer Pioneer Electronic Corporation produced the first car stereos with a monochrome OLED display, which was also the world's first OLED product. The Aston Martin DB9 incorporated the world's first automotive OLED display, manufactured by Yazaki, followed by the 2004 Jeep Grand Cherokee and the Chevrolet Corvette C6. The 2015 Hyundai Sonata and Kia Soul EV use a 3.5-inch white PMOLED display.
Company-specific applications
Samsung
By 2004, Samsung Display, a subsidiary of South Korea's largest conglomerate and a former Samsung-NEC joint venture, was the world's largest OLED manufacturer, producing 40% of the OLED displays made in the world, and as of 2010 had a 98% share of the global AMOLED market. The company led the world OLED industry, generating $100.2 million out of the total $475 million in revenues in the global OLED market in 2006. As of 2006, it held more than 600 American patents and more than 2,800 international patents, making it the largest owner of AMOLED technology patents.
In 2005, Samsung SDI announced the world's largest OLED TV at the time. This OLED featured the highest resolution at the time, of 6.22 million pixels. In addition, the company adopted active-matrix-based technology for its low power consumption and high-resolution qualities. This was exceeded in January 2008, when Samsung showcased the world's largest and thinnest OLED TV at the time, at 31 inches (78 cm) and 4.3 mm.
In May 2008, Samsung unveiled an ultra-thin 12.1-inch (30 cm) laptop OLED display concept, with a 1,280×768 resolution and an infinite contrast ratio. According to Woo Jong Lee, Vice President of the Mobile Display Marketing Team at Samsung SDI, the company expected OLED displays to be used in notebook PCs as soon as 2010.
In October 2008, Samsung showcased the world's thinnest OLED display, also the first to be "flappable" and bendable. It measures just 0.05 mm (thinner than paper), yet a Samsung staff member said that it is "technically possible to make the panel thinner". To achieve this thickness, Samsung etched an OLED panel that uses a normal glass substrate. The drive circuit was formed by low-temperature polysilicon TFTs. Also, low-molecular organic EL materials were employed. The pixel count of the display is 480 × 272. The contrast ratio is 100,000:1, and the luminance is 200 cd/m2. The colour reproduction range is 100% of the NTSC standard.
At the Consumer Electronics Show (CES) in January 2010, Samsung demonstrated a laptop computer with a large, transparent OLED display featuring up to 40% transparency and an animated OLED display in a photo ID card.
Samsung's 2010 AMOLED smartphones used their Super AMOLED trademark, with the Samsung Wave S8500 and Samsung i9000 Galaxy S being launched in June 2010. In January 2011, Samsung announced their Super AMOLED Plus displays, which offer several advances over the older Super AMOLED displays: real stripe matrix (50% more sub pixels), thinner form factor, brighter image and an 18% reduction in energy consumption.
At CES 2012, Samsung introduced the first 55" TV screen that uses Super OLED technology.
On 8 January 2013, at CES Samsung unveiled a unique curved 4K Ultra S9 OLED television, which they state provides an "IMAX-like experience" for viewers.
On 13 August 2013, Samsung announced availability of a 55-inch curved OLED TV (model KN55S9C) in the US at a price point of $8999.99.
On 6 September 2013, Samsung launched its 55-inch curved OLED TV (model KE55S9C) in the United Kingdom with John Lewis.
Samsung introduced the Galaxy Round smartphone in the Korean market in October 2013. The device features a 1080p screen that curves on the vertical axis in a rounded case. The corporation has promoted the following advantages: a new feature called "Round Interaction" that allows users to look at information by tilting the handset on a flat surface with the screen off, and the feel of one continuous transition when the user switches between home screens.
Samsung released a new line of OLED TVs in 2022, its first using the technology since 2013. They use panels sourced from Samsung Display; previously, LG was the sole manufacturer of OLED panels for TVs.
Sony
The Sony CLIÉ PEG-VZ90 was released in 2004, being the first PDA to feature an OLED screen. Other Sony products to feature OLED screens include the MZ-RH1 portable minidisc recorder, released in 2006 and the Walkman X Series.
At the 2007 Consumer Electronics Show (CES) in Las Vegas, Sony showcased two OLED TV models, one with a resolution of 960×540 and one with full HD resolution. Both claimed 1,000,000:1 contrast ratios and total thicknesses (including bezels) of 5 mm. In April 2007, Sony announced it would manufacture 1,000 OLED TVs per month for market-testing purposes. On 1 October 2007, Sony announced that the model XEL-1 was the first commercial OLED TV; it was released in Japan in December 2007.
In May 2007, Sony publicly unveiled a video of a flexible OLED screen which is only 0.3 millimeters thick. At the Display 2008 exhibition, Sony demonstrated a 0.2 mm thick display with a resolution of 320×200 pixels and a 0.3 mm thick display with 960×540 pixel resolution, one-tenth the thickness of the XEL-1.
In July 2008, a Japanese government body said it would fund a joint project of leading firms to develop a key technology for producing large, energy-saving organic displays. The project involves one laboratory and 10 companies including Sony Corp. NEDO said the project was aimed at developing a core technology to mass-produce 40-inch or larger OLED displays in the late 2010s.
In October 2008, Sony published results of research it carried out with the Max Planck Institute over the possibility of mass-market bending displays, which could replace rigid LCDs and plasma screens. Eventually, bendable, see-through displays could be stacked to produce 3D images with much greater contrast ratios and viewing angles than existing products.
Sony exhibited a 24.5-inch (62 cm) prototype OLED 3D television during the Consumer Electronics Show in January 2010.
In January 2011, Sony announced that the PlayStation Vita handheld game console (the successor to the PSP) would feature a 5-inch OLED screen.
On 17 February 2011, Sony announced its 25-inch (63.5 cm) OLED Professional Reference Monitor aimed at the cinema and high-end drama post-production market.
On 25 June 2012, Sony and Panasonic announced a joint venture for creating low-cost, mass-produced OLED televisions by 2013.
Sony unveiled its first OLED TV since 2008 at CES 2017, called the A1E. It revealed two other models in 2018: the A8F at CES 2018, and a Master Series TV called the A9F. At CES 2019 it unveiled another two models, the A8G and another Bravia-series TV called the A9G. Then, at CES 2020, it revealed the A8H, which was effectively an A9G in terms of picture quality but with some compromises due to its lower cost. At the same event, it also revealed a 48-inch version of the A9G, making this its smallest OLED TV since the XEL-1.
LG
On 9 April 2009, LG acquired Kodak's OLED business and started to utilize white OLED technology. As of 2010, LG Electronics produced one model of OLED television, the 15EL9500, and had announced an OLED 3D television for March 2011. On 26 December 2011, LG officially announced the "world's largest OLED panel" and featured it at CES 2012. In late 2012, LG announced the launch of the 55EM9600 OLED television in Australia.
In January 2015, LG Display signed a long-term agreement with Universal Display Corporation for the supply of OLED materials and the right to use their patented OLED emitters.
As of 2022, LG produces the world's largest OLED TV, at 97 inches.
Mitsubishi
Lumiotec was the first company in the world to develop and sell, beginning in January 2011, mass-produced OLED lighting panels offering high brightness and a long lifetime. Lumiotec is a joint venture of Mitsubishi Heavy Industries, ROHM, Toppan Printing, and Mitsui & Co.
On 1 June 2011, Mitsubishi Electric installed a 6-meter OLED 'sphere' in Tokyo's Science Museum.
Recom Group
On 6 January 2011, the Los Angeles-based technology company Recom Group introduced the first small-screen consumer application of the OLED at the Consumer Electronics Show in Las Vegas: a 2.8-inch (7 cm) OLED display used as a wearable video name tag. At the Consumer Electronics Show in 2012, Recom Group introduced the world's first video mic flag, incorporating three 2.8-inch (7 cm) OLED displays on a standard broadcaster's mic flag. The video mic flag allowed video content and advertising to be shown on a broadcaster's standard mic flag.
Dell
On 6 January 2016, Dell announced the UltraSharp UP3017Q OLED monitor at the Consumer Electronics Show in Las Vegas. The monitor was announced to feature a 4K UHD OLED panel with a 120 Hz refresh rate, 0.1 millisecond response time, and a contrast ratio of 400,000:1. The monitor was set to sell at a price of $4,999 and to be released in March 2016, just a few months later. By the end of March, the monitor had not been released to the market, and Dell did not comment on the reasons for the delay. Reports suggested that Dell had canceled the monitor because the company was unhappy with the image quality of the OLED panel, especially the amount of color drift it displayed when the monitor was viewed from the side. On 13 April 2017, Dell finally released the UP3017Q OLED monitor to the market at a price of $3,499 ($1,500 less than its originally announced price of $4,999 at CES 2016). In addition to the price drop, the monitor featured a 60 Hz refresh rate and a contrast ratio of 1,000,000:1. As of June 2017, the monitor is no longer available to purchase from Dell's website.
Apple
Apple began using OLED panels in its watches in 2015 and in its laptops in 2016 with the introduction of an OLED touchbar to the MacBook Pro. In 2017, Apple announced the introduction of their tenth anniversary iPhone X with their own optimized OLED display licensed from Universal Display Corporation. With the exception of the iPhone SE line, iPhone XR and iPhone 11, all iPhones released since then have also featured OLED displays. In 2024, Apple announced the 7th generation iPad Pro, which featured a "tandem OLED" panel in an attempt to increase the panel's brightness.
Nintendo
A third model of Nintendo's Switch, a hybrid gaming system, features an OLED panel in place of the original model's LCD panel. Announced in the summer of 2021, it was released on 8 October 2021.
Research
In 2014, Mitsubishi Chemical Corporation (MCC), a subsidiary of Mitsubishi Chemical Holdings, developed an OLED panel with a 30,000-hour life, twice that of conventional OLED panels.
The search for efficient OLED materials has been extensively supported by simulation methods; it is possible to calculate important properties computationally, independent of experimental input, making materials development cheaper.
On 18 October 2018, Samsung showed off its research roadmap at its 2018 Samsung OLED Forum. This included Fingerprint on Display (FoD), Under Panel Sensor (UPS), Haptic on Display (HoD) and Sound on Display (SoD).
Various vendors are also researching cameras placed under OLED panels (under-display cameras). According to IHS Markit, Huawei has partnered with BOE, Oppo with China Star Optoelectronics Technology (CSOT), and Xiaomi with Visionox.
In 2020, researchers at the Queensland University of Technology (QUT) proposed using human hair, which is a source of carbon and nitrogen, to create OLED displays.
See also
Flexible organic light-emitting diode
List of emerging technologies
List of flat panel display manufacturers
(Dark Mode)
LED display
microLED
Mini LED
Notes
References
Further reading
T. Tsujimura, OLED Display Fundamentals and Applications, Wiley-SID Series in Display Technology, New York (2017). .
P. Chamorro-Posada, J. Martín-Gil, P. Martín-Ramos, L.M. Navas-Gracia, Fundamentos de la Tecnología OLED (Fundamentals of OLED Technology). University of Valladolid, Spain (2008). . Available online, with permission from the authors, at the webpage: Fundamentos de la Tecnología OLED
Shinar, Joseph (Ed.), Organic Light-Emitting Devices: A Survey. NY: Springer-Verlag (2004). .
Hari Singh Nalwa (Ed.), Handbook of Luminescence, Display Materials and Devices, Volume 1–3. American Scientific Publishers, Los Angeles (2003). . Volume 1: Organic Light-Emitting Diodes
Hari Singh Nalwa (Ed.), Handbook of Organic Electronics and Photonics, Volume 1–3. American Scientific Publishers, Los Angeles (2008). .
Müllen, Klaus (Ed.), Organic Light Emitting Devices: Synthesis, Properties and Applications. Wiley-VCH (2006).
Yersin, Hartmut (Ed.), Highly Efficient OLEDs with Phosphorescent Materials. Wiley-VCH (2007).
Kho, Mu-Jeong, Javed, T., Mark, R., Maier, E., and David, C. (2008) 'Final Report: OLED Solid State Lighting – Kodak European Research' MOTI (Management of Technology and Innovation) Project, Judge Business School of the University of Cambridge and Kodak European Research, Final Report presented on 4 March 2008 at Kodak European Research at Cambridge Science Park, Cambridge, UK., pages 1–12.
External links
OLED, LCD & TFT - Construction and Difference, advantages and disadvantages, 8 July 2020
Structure and working principle of OLEDs and electroluminescent displays
MIT introduction to OLED technology (video)
Historical list of OLED products from 1996 to present
American inventions
Conductive polymers
Electronic display devices
Display technology
Energy-saving lighting
Flexible electronics
Light-emitting diodes
Molecular electronics
Optical diodes
Organic electronics | OLED | [
"Chemistry",
"Materials_science",
"Engineering"
] | 14,348 | [
"Molecular physics",
"Molecular electronics",
"Electronic engineering",
"Display technology",
"Nanotechnology",
"Flexible electronics",
"Conductive polymers"
] |
191,897 | https://en.wikipedia.org/wiki/Electric-field%20screening | In physics, screening is the damping of electric fields caused by the presence of mobile charge carriers. It is an important part of the behavior of charge-carrying mediums, such as ionized gases (classical plasmas), electrolytes, and electronic conductors (semiconductors, metals).
In a fluid, with a given permittivity , composed of electrically charged constituent particles, each pair of particles (with charges and ) interacts through the Coulomb force as
where the vector is the relative position between the charges. This interaction complicates the theoretical treatment of the fluid. For example, a naive quantum mechanical calculation of the ground-state energy density yields infinity, which is unreasonable. The difficulty lies in the fact that even though the Coulomb force diminishes with distance as , the average number of particles at each distance is proportional to , assuming the fluid is fairly isotropic. As a result, a charge fluctuation at any one point has non-negligible effects at large distances.
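In the standard form (a sketch assuming charges $q_1$ and $q_2$, permittivity $\varepsilon$, and separation vector $\mathbf{r}$ of magnitude $r$), this interaction reads

$$\mathbf{F} = \frac{q_1 q_2}{4\pi\varepsilon}\,\frac{\hat{\mathbf{r}}}{r^{2}}.$$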
In reality, these long-range effects are suppressed by the flow of particles in response to electric fields. This flow reduces the effective interaction between particles to a short-range "screened" Coulomb interaction. This system corresponds to the simplest example of a renormalized interaction.
In solid-state physics, especially for metals and semiconductors, the screening effect describes the electrostatic field and Coulomb potential of an ion inside the solid. Just as the electric field of the nucleus is reduced inside an atom or ion due to the shielding effect, the electric fields of ions in conducting solids are further reduced by the cloud of conduction electrons.
Description
Consider a fluid composed of electrons moving in a uniform background of positive charge (one-component plasma). Each electron possesses a negative charge. According to Coulomb's interaction, negative charges repel each other. Consequently, this electron will repel other electrons creating a small region around itself in which there are fewer electrons. This region can be treated as a positively charged "screening hole". Viewed from a large distance, this screening hole has the effect of an overlaid positive charge which cancels the electric field produced by the electron. Only at short distances, inside the hole region, can the electron's field be detected. For a plasma, this effect can be made explicit by an -body calculation. If the background is made up of positive ions, their attraction by the electron of interest reinforces the above screening mechanism. In atomic physics, a germane effect exists for atoms with more than one electron shell: the shielding effect. In plasma physics, electric-field screening is also called Debye screening or shielding. It manifests itself on macroscopic scales by a sheath (Debye sheath) next to a material with which the plasma is in contact.
The screened potential determines the interatomic force and the phonon dispersion relation in metals. The screened potential is used to calculate the electronic band structure of a large variety of materials, often in combination with pseudopotential models. The screening effect leads to the independent electron approximation, which explains the predictive power of introductory models of solids like the Drude model, the free electron model and the nearly free electron model.
Theory and models
The first theoretical treatment of electrostatic screening, due to Peter Debye and Erich Hückel, dealt with a stationary point charge embedded in a fluid.
Consider a fluid of electrons in a background of heavy, positively charged ions. For simplicity, we ignore the motion and spatial distribution of the ions, approximating them as a uniform background charge. This simplification is permissible since the electrons are lighter and more mobile than the ions, provided we consider distances much larger than the ionic separation. In condensed matter physics, this model is referred to as jellium.
Screened Coulomb interactions
Let ρ denote the number density of electrons, and φ the electric potential. At first, the electrons are evenly distributed so that there is zero net charge at every point. Therefore, φ is initially a constant as well.
We now introduce a fixed point charge Q at the origin. The associated charge density is Qδ(r), where δ(r) is the Dirac delta function. After the system has returned to equilibrium, let the change in the electron density and electric potential be Δρ(r) and Δφ(r) respectively. The charge density and electric potential are related by Poisson's equation, which gives
where ε0 is the vacuum permittivity.
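Written out explicitly (a sketch assuming the electron charge is $-e$, so that the induced charge density is $-e\,\Delta\rho$), this relation is presumably

$$\nabla^{2}\,\Delta\phi(\mathbf{r}) = -\frac{1}{\varepsilon_0}\bigl[\,Q\,\delta(\mathbf{r}) - e\,\Delta\rho(\mathbf{r})\,\bigr].$$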
To proceed, we must find a second independent equation relating Δρ and Δφ. We consider two possible approximations, under which the two quantities are proportional: the Debye–Hückel approximation, valid at high temperatures (e.g. classical plasmas), and the Thomas–Fermi approximation, valid at low temperatures (e.g. electrons in metals).
Debye–Hückel approximation
In the Debye–Hückel approximation, we maintain the system in thermodynamic equilibrium, at a temperature T high enough that the fluid particles obey Maxwell–Boltzmann statistics. At each point in space, the density of electrons with energy j has the form
where kB is the Boltzmann constant. Perturbing in φ and expanding the exponential to first order, we obtain
where
The associated length is called the Debye length. The Debye length is the fundamental length scale of a classical plasma.
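For reference, the standard textbook forms of the linearized relation and of the Debye length (a sketch assuming unperturbed electron density $\rho$ and electron charge $-e$) are

$$\Delta\rho(\mathbf{r}) \simeq \frac{\rho\,e}{k_B T}\,\Delta\phi(\mathbf{r}), \qquad k_0^{2} = \frac{\rho\,e^{2}}{\varepsilon_0 k_B T}, \qquad \lambda_D = \frac{1}{k_0} = \sqrt{\frac{\varepsilon_0 k_B T}{\rho\,e^{2}}}.$$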
Thomas–Fermi approximation
In the Thomas–Fermi approximation, named after Llewellyn Thomas and Enrico Fermi, the system is maintained at a constant electron chemical potential (Fermi level) and at low temperature. The former condition corresponds, in a real experiment, to keeping the metal/fluid in electrical contact with a fixed potential difference with ground. The chemical potential μ is, by definition, the energy of adding an extra electron to the fluid. This energy may be decomposed into a kinetic energy T part and the potential energy −eφ part. Since the chemical potential is kept constant,
If the temperature is extremely low, the behavior of the electrons comes close to the quantum mechanical model of a Fermi gas. We thus approximate T by the kinetic energy of an additional electron in the Fermi gas model, which is simply the Fermi energy EF. The Fermi energy for a 3D system is related to the density of electrons (including spin degeneracy) by
where kF is the Fermi wavevector. Perturbing to first order, we find that
Inserting this into the above equation for Δμ yields
where
is called the Thomas–Fermi screening wave vector.
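The standard textbook expression for this quantity (a sketch in the same notation, with $E_F$ the Fermi energy and $m$ the electron mass) is

$$k_0^{2} = \frac{3\rho\,e^{2}}{2\varepsilon_0 E_F}, \qquad E_F = \frac{\hbar^{2}}{2m}\bigl(3\pi^{2}\rho\bigr)^{2/3}.$$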
This result follows from the equations of a Fermi gas, which is a model of non-interacting electrons, whereas the fluid, which we are studying, contains the Coulomb interaction. Therefore, the Thomas–Fermi approximation is only valid when the electron density is low, so that the particle interactions are relatively weak.
Result: Screened potential
Our results from the Debye–Hückel or Thomas–Fermi approximation may now be inserted into Poisson's equation. The result is
which is known as the screened Poisson equation. The solution is
which is called a screened Coulomb potential. It is a Coulomb potential multiplied by an exponential damping term, with the strength of the damping factor given by the magnitude of k0, the Debye or Thomas–Fermi wave vector. Note that this potential has the same form as the Yukawa potential. This screening yields a dielectric function .
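In the notation used above, the screened Poisson equation, its solution and the resulting static dielectric function presumably take the familiar forms

$$\bigl[\nabla^{2} - k_0^{2}\bigr]\Delta\phi(\mathbf{r}) = -\frac{Q}{\varepsilon_0}\,\delta(\mathbf{r}), \qquad \Delta\phi(r) = \frac{Q}{4\pi\varepsilon_0 r}\,e^{-k_0 r}, \qquad \varepsilon(k) = 1 + \frac{k_0^{2}}{k^{2}}.$$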
Many-body theory
Classical physics and linear response
A mechanical -body approach provides together the derivation of screening effect and of Landau damping. It deals with a single realization of a one-component plasma whose electrons have a velocity dispersion (for a thermal plasma, there must be many particles in a Debye sphere, a volume whose radius is the Debye length). On using the linearized motion of the electrons in their own electric field, it yields an equation of the type
where is a linear operator, is a source term due to the particles, and is the Fourier-Laplace transform of the electrostatic potential. When substituting an integral over a smooth distribution function for the discrete sum over the particles in , one gets
where is the plasma permittivity, or dielectric function, classically obtained by a linearized Vlasov-Poisson equation, is the wave vector, is the frequency, and is the sum of source terms due to the particles.
By inverse Fourier-Laplace transform, the potential due to each particle is the sum of two parts. One corresponds to the excitation of Langmuir waves by the particle, and the other one is its screened potential, as classically obtained by a linearized Vlasovian calculation involving a test particle. The screened potential is the above screened Coulomb potential for a thermal plasma and a thermal particle. For a faster particle, the potential is modified. Substituting an integral over a smooth distribution function for the discrete sum over the particles in , yields the Vlasovian expression enabling the calculation of Landau damping.
Quantum-mechanical approach
In real metals, the screening effect is more complex than described above in the Thomas–Fermi theory. The assumption that the charge carriers (electrons) can respond at any wavevector is just an approximation. However, it is not energetically possible for an electron within or on a Fermi surface to respond at wavevectors shorter than the Fermi wavevector. This constraint is related to the Gibbs phenomenon, where Fourier series for functions that vary rapidly in space are not good approximations unless a very large number of terms in the series are retained. In physics, this phenomenon is known as Friedel oscillations, and applies both to surface and bulk screening. In each case the net electric field does not fall off exponentially in space, but rather as an inverse power law multiplied by an oscillatory term. Theoretical calculations can be obtained from quantum hydrodynamics and density functional theory (DFT).
See also
Bjerrum length
Debye length
References
External links
Condensed matter physics
Electromagnetism concepts
Plasma phenomena | Electric-field screening | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,089 | [
"Physical phenomena",
"Plasma physics",
"Electromagnetism concepts",
"Plasma phenomena",
"Phases of matter",
"Materials science",
"Condensed matter physics",
"Matter"
] |
191,933 | https://en.wikipedia.org/wiki/Exponential%20growth | Exponential growth occurs when a quantity grows as an exponential function of time. The quantity grows at a rate directly proportional to its present size. For example, when it is 3 times as big as it is now, it will be growing 3 times as fast as it is now.
In more technical language, its instantaneous rate of change (that is, the derivative) of a quantity with respect to an independent variable is proportional to the quantity itself. Often the independent variable is time. Described as a function, a quantity undergoing exponential growth is an exponential function of time, that is, the variable representing time is the exponent (in contrast to other types of growth, such as quadratic growth). Exponential growth is the inverse of logarithmic growth.
Not all cases of growth at an always increasing rate are instances of exponential growth. For example, the function f(x) = x^3 grows at an ever increasing rate, but is much slower than growing exponentially. For example, when x = 1 it grows at 3 times its size, but when x = 10 it grows at only 30% of its size. If an exponentially growing function grows at a rate that is 3 times its present size, then it always grows at a rate that is 3 times its present size. When it is 10 times as big as it is now, it will grow 10 times as fast.
If the constant of proportionality is negative, then the quantity decreases over time, and is said to be undergoing exponential decay instead. In the case of a discrete domain of definition with equal intervals, it is also called geometric growth or geometric decay since the function values form a geometric progression.
The formula for exponential growth of a variable at the growth rate , as time goes on in discrete intervals (that is, at integer times 0, 1, 2, 3, ...), is
where is the value of at time 0. The growth of a bacterial colony is often used to illustrate it. One bacterium splits itself into two, each of which splits itself resulting in four, then eight, 16, 32, and so on. The amount of increase keeps increasing because it is proportional to the ever-increasing number of bacteria. Growth like this is observed in real-life activity or phenomena, such as the spread of virus infection, the growth of debt due to compound interest, and the spread of viral videos. In real cases, initial exponential growth often does not last forever, instead slowing down eventually due to upper limits caused by external factors and turning into logistic growth.
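In the usual notation (a sketch assuming $x_t$ denotes the value at time $t$, $x_0$ the initial value, and $r$ the growth rate per interval), the discrete-time formula described above is

$$x_t = x_0\,(1+r)^{t}.$$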
Terms like "exponential growth" are sometimes incorrectly interpreted as "rapid growth". Indeed, something that grows exponentially can in fact be growing slowly at first.
Examples
Biology
The number of microorganisms in a culture will increase exponentially until an essential nutrient is exhausted, so there is no more of that nutrient for more organisms to grow. Typically the first organism splits into two daughter organisms, who then each split to form four, who split to form eight, and so on. Because exponential growth indicates constant growth rate, it is frequently assumed that exponentially growing cells are at a steady-state. However, cells can grow exponentially at a constant rate while remodeling their metabolism and gene expression.
A virus (for example COVID-19, or smallpox) typically will spread exponentially at first, if no artificial immunization is available. Each infected person can infect multiple new people.
Physics
Avalanche breakdown within a dielectric material. A free electron becomes sufficiently accelerated by an externally applied electrical field that it frees up additional electrons as it collides with atoms or molecules of the dielectric media. These secondary electrons also are accelerated, creating larger numbers of free electrons. The resulting exponential growth of electrons and ions may rapidly lead to complete dielectric breakdown of the material.
Nuclear chain reaction (the concept behind nuclear reactors and nuclear weapons). Each uranium nucleus that undergoes fission produces multiple neutrons, each of which can be absorbed by adjacent uranium atoms, causing them to fission in turn. If the probability of neutron absorption exceeds the probability of neutron escape (a function of the shape and mass of the uranium), the production rate of neutrons and induced uranium fissions increases exponentially, in an uncontrolled reaction. "Due to the exponential rate of increase, at any point in the chain reaction 99% of the energy will have been released in the last 4.6 generations. It is a reasonable approximation to think of the first 53 generations as a latency period leading up to the actual explosion, which only takes 3–4 generations."
Positive feedback within the linear range of electrical or electroacoustic amplification can result in the exponential growth of the amplified signal, although resonance effects may favor some component frequencies of the signal over others.
Economics
Economic growth is expressed in percentage terms, implying exponential growth.
Finance
Compound interest at a constant interest rate provides exponential growth of the capital. See also rule of 72.
Pyramid schemes or Ponzi schemes also show this type of growth resulting in high profits for a few initial investors and losses among great numbers of investors.
Computer science
Processing power of computers. See also Moore's law and technological singularity. (Under exponential growth, there are no singularities. The singularity here is a metaphor, meant to convey an unimaginable future. The link of this hypothetical concept with exponential growth is most vocally made by futurist Ray Kurzweil.)
In computational complexity theory, computer algorithms of exponential complexity require an exponentially increasing amount of resources (e.g. time, computer memory) for only a constant increase in problem size. So for an algorithm of time complexity , if a problem of size requires 10 seconds to complete, and a problem of size requires 20 seconds, then a problem of size will require 40 seconds. This kind of algorithm typically becomes unusable at very small problem sizes, often between 30 and 100 items (most computer algorithms need to be able to solve much larger problems, up to tens of thousands or even millions of items in reasonable times, something that would be physically impossible with an exponential algorithm). Also, the effects of Moore's Law do not help the situation much because doubling processor speed merely increases the feasible problem size by a constant. E.g. if a slow processor can solve problems of size in time , then a processor twice as fast could only solve problems of size in the same time . So exponentially complex algorithms are most often impractical, and the search for more efficient algorithms is one of the central goals of computer science today.
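As a short illustration of the point about Moore's law (a hedged sketch, not from the source; the 10^9 operations-per-second baseline is an arbitrary assumption): for an algorithm needing 2^n operations, multiplying machine speed by some factor only adds a constant to the largest feasible n.

```python
# Sketch: the largest feasible problem size n for an algorithm needing 2**n
# operations, as the machine gets faster. The 1e9 ops/s baseline is assumed.
import math

def feasible_size(ops_per_second, time_budget_s=1.0):
    """Largest n such that 2**n operations finish within the time budget."""
    return math.floor(math.log2(ops_per_second * time_budget_s))

for speedup in (1, 2, 4, 1024):
    print(f"{speedup:>5}x faster -> n = {feasible_size(1e9 * speedup)}")
# A 2x faster machine gains only +1 on n; even 1024x faster gains only +10,
# because 2**10 = 1024.
```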
Internet phenomena
Internet contents, such as internet memes or videos, can spread in an exponential manner, often said to "go viral" as an analogy to the spread of viruses. With media such as social networks, one person can forward the same content to many people simultaneously, who then spread it to even more people, and so on, causing rapid spread. For example, the video Gangnam Style was uploaded to YouTube on 15 July 2012, reaching hundreds of thousands of viewers on the first day, millions on the twentieth day, and was cumulatively viewed by hundreds of millions in less than two months.
Basic formula
A quantity depends exponentially on time if
where the constant is the initial value of , the constant is a positive growth factor, and is the time constant—the time required for to increase by one factor of :
If and , then has exponential growth. If and , or and , then has exponential decay.
Example: If a species of bacteria doubles every ten minutes, starting out with only one bacterium, how many bacteria would be present after one hour? The question implies , and .
After one hour, or six ten-minute intervals, there would be sixty-four bacteria.
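The arithmetic behind this answer, in the notation of the basic formula above (assuming a growth factor $b = 2$ and a time constant $\tau = 10$ minutes), is

$$x(60\ \text{min}) = x_0\,b^{\,t/\tau} = 1\cdot 2^{60/10} = 2^{6} = 64.$$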
Many pairs of a dimensionless non-negative number and an amount of time (a physical quantity which can be expressed as the product of a number of units and a unit of time) represent the same growth rate, with proportional to . For any fixed not equal to 1 (e.g. e or 2), the growth rate is given by the non-zero time . For any non-zero time the growth rate is given by the dimensionless positive number .
Thus the law of exponential growth can be written in different but mathematically equivalent forms, by using a different base. The most common forms are the following:
where expresses the initial quantity .
Parameters (negative in the case of exponential decay):
The growth constant is the frequency (number of times per unit time) of growing by a factor ; in finance it is also called the logarithmic return, continuously compounded return, or force of interest.
The e-folding time τ is the time it takes to grow by a factor e.
The doubling time T is the time it takes to double.
The percent increase (a dimensionless number) in a period .
The quantities , , and , and for a given also , have a one-to-one connection given by the following equation (which can be derived by taking the natural logarithm of the above):
where corresponds to and to and being infinite.
If is the unit of time the quotient is simply the number of units of time. Using the notation for the (dimensionless) number of units of time rather than the time itself, can be replaced by , but for uniformity this has been avoided here. In this case the division by in the last formula is not a numerical division either, but converts a dimensionless number to the correct quantity including unit.
A popular approximated method for calculating the doubling time from the growth rate is the rule of 70,
that is, .
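Presumably the rule states that the doubling time is approximately $70/r$ for a growth rate of $r$ percent per period. A sketch of why it works: the exact doubling time satisfies $(1+r/100)^{T} = 2$, so

$$T = \frac{\ln 2}{\ln(1+r/100)} \approx \frac{100\,\ln 2}{r} \approx \frac{70}{r},$$

using $\ln(1+x) \approx x$ for small $x$ and $100\ln 2 \approx 69.3$.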
Reformulation as log-linear growth
If a variable exhibits exponential growth according to , then the log (to any base) of grows linearly over time, as can be seen by taking logarithms of both sides of the exponential growth equation:
This allows an exponentially growing variable to be modeled with a log-linear model. For example, if one wishes to empirically estimate the growth rate from intertemporal data on , one can linearly regress on .
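For instance, taking the continuous form $x(t) = x_0 e^{kt}$ (an assumption consistent with the rest of the article) and taking logarithms of both sides gives

$$\ln x(t) = \ln x_0 + k\,t,$$

so a linear regression of $\ln x$ on $t$ estimates the growth constant $k$ as the slope.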
Differential equation
The exponential function satisfies the linear differential equation:
saying that the change per instant of time of at time is proportional to the value of , and has the initial value .
The differential equation is solved by direct integration:
so that
In the above differential equation, if , then the quantity experiences exponential decay.
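Written out under the same assumptions (growth constant $k$, initial value $x_0$), the equation and its solution by direct integration are presumably

$$\frac{dx}{dt} = k\,x,\quad x(0)=x_0 \;\Longrightarrow\; \int_{x_0}^{x(t)}\frac{dx'}{x'} = \int_{0}^{t} k\,dt' \;\Longrightarrow\; x(t) = x_0\,e^{k t},$$

with exponential growth for $k>0$ and exponential decay for $k<0$.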
For a nonlinear variation of this growth model see logistic function.
Other growth rates
In the long run, exponential growth of any kind will overtake linear growth of any kind (that is the basis of the Malthusian catastrophe) as well as any polynomial growth, that is, for all :
There is a whole hierarchy of conceivable growth rates that are slower than exponential and faster than linear (in the long run). See .
Growth rates may also be faster than exponential. In the most extreme case, when growth increases without bound in finite time, it is called hyperbolic growth. In between exponential and hyperbolic growth lie more classes of growth behavior, like the hyperoperations beginning at tetration, and , the diagonal of the Ackermann function.
Logistic growth
In reality, initial exponential growth is often not sustained forever. After some period, it will be slowed by external or environmental factors. For example, population growth may reach an upper limit due to resource limitations. In 1845, the Belgian mathematician Pierre François Verhulst first proposed a mathematical model of growth like this, called the "logistic growth".
Limitations of models
Exponential growth models of physical phenomena only apply within limited regions, as unbounded growth is not physically realistic. Although growth may initially be exponential, the modelled phenomena will eventually enter a region in which previously ignored negative feedback factors become significant (leading to a logistic growth model) or other underlying assumptions of the exponential growth model, such as continuity or instantaneous feedback, break down.
Exponential growth bias
Studies show that human beings have difficulty understanding exponential growth. Exponential growth bias is the tendency to underestimate compound growth processes. This bias can have financial implications as well.
Rice on a chessboard
According to legend, vizier Sissa Ben Dahir presented an Indian King Sharim with a beautiful handmade chessboard. The king asked what he would like in return for his gift and the courtier surprised the king by asking for one grain of rice on the first square, two grains on the second, four grains on the third, and so on. The king readily agreed and asked for the rice to be brought. All went well at first, but the requirement for grains on the th square demanded over a million grains on the 21st square, more than a million million ( trillion) on the 41st and there simply was not enough rice in the whole world for the final squares. (From Swirski, 2006)
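The figures in the legend are easy to check (a short sketch, not from the source): square $n$ holds $2^{n-1}$ grains, so the whole board holds $2^{64}-1$ grains.

```python
# Quick check of the chessboard-rice arithmetic: square n holds 2**(n-1) grains,
# and the running total over the first n squares is 2**n - 1.
grains_on_square = lambda n: 2 ** (n - 1)
total_after = lambda n: 2 ** n - 1

print(grains_on_square(21))  # 1048576 -> "over a million grains" on the 21st square
print(grains_on_square(41))  # 1099511627776 -> "more than a million million" on the 41st
print(total_after(64))       # 18446744073709551615 grains for the whole board
```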
The "second half of the chessboard" refers to the time when an exponentially growing influence is having a significant economic impact on an organization's overall business strategy.
Water lily
French children are offered a riddle, which appears to be an aspect of exponential growth: "the apparent suddenness with which an exponentially growing quantity approaches a fixed limit". The riddle imagines a water lily plant growing in a pond. The plant doubles in size every day and, if left alone, it would smother the pond in 30 days killing all the other living things in the water. Day after day, the plant's growth is small, so it is decided that it won't be a concern until it covers half of the pond. Which day will that be? The 29th day, leaving only one day to save the pond.
See also
Accelerating change
Albert Allen Bartlett
Arthrobacter
Asymptotic notation
Bacterial growth
Bounded growth
Cell growth
Combinatorial explosion
Exponential algorithm
EXPSPACE
EXPTIME
Hausdorff dimension
Hyperbolic growth
Information explosion
Law of accelerating returns
List of exponential topics
Logarithmic growth
Logistic function
Malthusian growth model
Power law
Menger sponge
Moore's law
Quadratic growth
Stein's law
References
Sources
Meadows, Donella. Randers, Jorgen. Meadows, Dennis. The Limits to Growth: The 30-Year Update. Chelsea Green Publishing, 2004.
Meadows, Donella H., Dennis L. Meadows, Jørgen Randers, and William W. Behrens III. (1972) The Limits to Growth. New York: University Books.
Porritt, J. Capitalism as if the world matters, Earthscan 2005.
Swirski, Peter. Of Literature and Knowledge: Explorations in Narrative Thought Experiments, Evolution, and Game Theory. New York: Routledge.
Thomson, David G. Blueprint to a Billion: 7 Essentials to Achieve Exponential Growth, Wiley Dec 2005,
Tsirel, S. V. 2004. On the Possible Reasons for the Hyperexponential Growth of the Earth Population. Mathematical Modeling of Social and Economic Dynamics / Ed. by M. G. Dmitriev and A. P. Petrov, pp. 367–9. Moscow: Russian State Social University, 2004.
External links
Growth in a Finite World – Sustainability and the Exponential Function — Presentation
Dr. Albert Bartlett: Arithmetic, Population and Energy — streaming video and audio 58 min
Ordinary differential equations
Exponentials
Temporal exponentials
Mathematical modeling
Growth curves | Exponential growth | [
"Physics",
"Mathematics"
] | 3,111 | [
"Mathematical modeling",
"Physical quantities",
"Time",
"Applied mathematics",
"E (mathematical constant)",
"Exponentials",
"Temporal exponentials",
"Spacetime"
] |
191,941 | https://en.wikipedia.org/wiki/Bioethics | Bioethics is both a field of study and professional practice, interested in ethical issues related to health (primarily focused on the human, but also increasingly includes animal ethics), including those emerging from advances in biology, medicine, and technologies. It proposes the discussion about moral discernment in society (what decisions are "good" or "bad" and why) and it is often related to medical policy and practice, but also to broader questions as environment, well-being and public health. Bioethics is concerned with the ethical questions that arise in the relationships among life sciences, biotechnology, medicine, politics, law, theology and philosophy. It includes the study of values relating to primary care, other branches of medicine ("the ethics of the ordinary"), ethical education in science, animal, and environmental ethics, and public health.
Etymology
The term bioethics (Greek , "life"; , "moral nature, behavior") was coined in 1927 by Fritz Jahr in an article about a "bioethical imperative" regarding the use of animals and plants in scientific research. In 1970, the American biochemist, and oncologist Van Rensselaer Potter used the term to describe the relationship between the biosphere and a growing human population. Potter's work laid the foundation for global ethics, a discipline centered around the link between biology, ecology, medicine, and human values. Sargent Shriver, the spouse of Eunice Kennedy Shriver, claimed that he had invented the term "bioethics" in the living room of his home in Bethesda, Maryland, in 1970. He stated that he thought of the word after returning from a discussion earlier that evening at Georgetown University, where he discussed with others a possible Kennedy family sponsorship of an institute focused around the "application of moral philosophy to concrete medical dilemmas".
Purpose and scope
The discipline of bioethics has addressed a wide swathe of human inquiry, ranging from debates over the boundaries of life (e.g. abortion, euthanasia), surrogacy, and the allocation of scarce health care resources (e.g. organ donation, health care rationing), to the right to refuse medical care for religious or cultural reasons. Bioethicists disagree among themselves over the precise limits of their discipline, debating whether the field should concern itself with the ethical evaluation of all questions involving biology and medicine, or only a subset of these questions. Some bioethicists would narrow ethical evaluation only to the morality of medical treatments or technological innovations, and the timing of medical treatment of humans. Others would broaden the scope of moral assessment to encompass the morality of all actions that might assist or harm organisms capable of feeling fear.
The scope of bioethics has evolved past mere biotechnology to include topics such as cloning, gene therapy, life extension, human genetic engineering, astroethics and life in space, and manipulation of basic biology through altered DNA, XNA and proteins. These (and other) developments may affect future evolution and require new principles that address life at its core, such as biotic ethics that values life itself at its basic biological processes and structures, and seeks their propagation. Moving beyond the biological, issues raised in public health such as vaccination and resource allocation have also encouraged the development of novel ethics frameworks to address such challenges. A study published in 2022 based on the corpus of full papers from eight main bioethics journals demonstrated the heterogeneity of this field by distinguishing 91 topics that have been discussed in these journals over the past half a century.
Principles
One of the first areas addressed by modern bioethicists was human experimentation. According to the Declaration of Helsinki published by the World Medical Association, the essential principles in medical research involving human subjects are autonomy, beneficence, non-maleficence, and justice.
The autonomy of individuals to make decisions while assuming responsibility for them and respecting the autonomy of others ought to be respected. For people unable to exercise their autonomy, special measures ought to be taken to protect their rights and interests.
In the US, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was initially established in 1974 to identify the basic ethical principles that should underlie the conduct of biomedical and behavioral research involving human subjects. However, the fundamental principles announced in the Belmont Report (1979)—namely, respect for persons, beneficence and justice—have influenced the thinking of bioethicists across a wide range of issues. Others have added non-maleficence, human dignity, and the sanctity of life to this list of cardinal values.
Overall, the Belmont Report has guided research in a direction centered on protecting vulnerable subjects as well as pushing for transparency between the researcher and the subject. Research has flourished in the past 40 years, and due to advances in technology it is thought that human subjects research has outgrown the Belmont Report and that revision is needed.
Another essential principle of bioethics is the value it places on dialogue and presentation. Numerous dialogue-based bioethics organizations exist in universities throughout the United States to champion precisely such goals. Examples include the Ohio State Bioethics Society and the Bioethics Society of Cornell. Professional-level versions of these organizations also exist.
Many bioethicists, in particular scientific scholars, accord the highest priority to autonomy. They believe that each patient should decide which course of action they consider most in line with their beliefs. In other words, the patient should always have the freedom to choose their own treatment.
Medical ethics
Medical ethics is an applied branch of ethics that analyzes the practice of clinical medicine and related scientific research. Medical ethics is based on a set of values. These values include respect for autonomy, beneficence, and justice.
Ethics affects medical decisions made by healthcare providers and patients. Medical ethics is the study of moral values and judgments as they apply to medicine. The four main moral commitments are respect for autonomy, beneficence, nonmaleficence, and justice. Using these four principles and thinking about what the physicians' specific concern is for their scope of practice can help physicians make moral decisions. As a scholarly discipline, medical ethics encompasses its practical application in clinical settings as well as work on its history, philosophy, theology, and sociology.
Medical ethics tends to be understood narrowly as applied professional ethics, whereas bioethics has a more expansive application, touching upon the philosophy of science and issues of biotechnology. The two fields often overlap, and the distinction is more a matter of style than professional consensus. Medical ethics shares many principles with other branches of healthcare ethics, such as nursing ethics. A bioethicist assists the health care and research community in examining moral issues involved in our understanding of life and death, and in resolving ethical dilemmas in medicine and science. Examples of this would be the topic of equality in medicine, the intersection of cultural practices and medical care, ethical distribution of healthcare resources in pandemics, and issues of bioterrorism.
Medical ethical concerns frequently touch on matters of life and death. Patient rights, informed consent, confidentiality, competency, advance directives, carelessness, and many other topics are highlighted as serious health concerns.
Ethics is concerned with the proper actions to take in light of all the circumstances. It addresses the difference between what is right and what is wrong at a particular moment and in a particular society. Medical ethics is concerned with the duties that doctors, hospitals, and other healthcare providers have to patients, society, and other health professionals.
The health profession has a set of ethical standards that are relevant to various organizations of health workers and medical facilities. Ethical standards are never static and are always relevant. What is seen as acceptable ethics now may not have been so one hundred years ago. The hospital administrator is required to have a thorough awareness of their moral and legal obligations.
Medical sociology
The practice of bioethics in clinical care have been studied by medical sociology. Many scholars consider that bioethics arose in response to a perceived lack of accountability in medical care in the 1970s. Studying the clinical practice of ethics in medical care, Hauschildt and Vries found that ethical questions were often reframed as clinical judgments to allow clinicians to make decisions. Ethicists most often put key decisions in the hands of physicians rather than patients.
Communication strategies suggested by ethicists can act to decrease patient autonomy. Examples include clinicians discussing treatment options with one another before talking to patients or their families in order to present a united front, which limited patient autonomy by hiding uncertainty among clinicians. Decisions about overarching goals of treatment were reframed as technical matters, excluding patients and their families. Palliative care experts were used as intermediaries to guide patients towards less invasive end-of-life treatment. In their study, Hauschildt and Vries found that 76% of ethics consultants were trained as clinicians.
Studying informed consent, Corrigan found that some social processes resulted in limitations on patients' choice, but also that patients could at times find questions regarding consent to medical trials burdensome.
The most prevalent subject is how social stratification (based on socioeconomic status, gender, class, ethnicity, and age) affects patterns of behavior related to health and sickness, illness risk, disability, and other outcomes related to health care. The study of health care organization and provision, which encompasses the evolving organizational structures of health care organizations and the social psychology of health and health care, is another important approach. This latter research covers topics including relationships between doctors and patients, coping mechanisms, and social support. The description of other important fields of medical sociology study emphasizes how theory and research have changed in the twenty-first century.
Perspectives and methodology
Bioethicists come from a wide variety of backgrounds and have training in a diverse array of disciplines.
The field contains individuals trained in philosophy such as Baruch Brody of Rice University, Julian Savulescu of the University of Oxford, Arthur Caplan of NYU, Peter Singer of Princeton University, Frances Kamm of Rutgers University, Daniel Callahan of the Hastings Center, and Daniel Brock of Harvard University; medically trained clinician ethicists such as Mark Siegler of the University of Chicago and Joseph Fins of Cornell University; lawyers such as Nancy Dubler of Albert Einstein College of Medicine or Jerry Menikoff of the federal Office for Human Research Protections; political scientists like Francis Fukuyama; religious studies scholars including James Childress; and theologians like Lisa Sowle Cahill and Stanley Hauerwas.
The field, formerly dominated by formally trained philosophers, has become increasingly interdisciplinary, with some critics even claiming that the methods of analytic philosophy have harmed the field's development. Leading journals in the field include The Journal of Medicine and Philosophy, the Hastings Center Report, the American Journal of Bioethics, the Journal of Medical Ethics, Bioethics, the Kennedy Institute of Ethics Journal, Public Health Ethics, and the Cambridge Quarterly of Healthcare Ethics. Bioethics has also benefited from the process philosophy developed by Alfred North Whitehead.
Another discipline that discusses bioethics is the field of feminism; the International Journal of Feminist Approaches to Bioethics has played an important role in organizing and legitimizing feminist work in bioethics.
Many religious communities have their histories of inquiry into bioethical issues and have developed rules and guidelines on how to deal with these issues from within the viewpoint of their respective faiths. The Jewish, Christian and Muslim faiths have each developed a considerable body of literature on these matters. In the case of many non-Western cultures, a strict separation of religion from philosophy does not exist. In many Asian cultures, for example, there is a lively discussion on bioethical issues. Buddhist bioethics, in general, is characterized by a naturalistic outlook that leads to a rationalistic, pragmatic approach. Buddhist bioethicists include Damien Keown. In India, Vandana Shiva is a leading bioethicist speaking from the Hindu tradition.
In Africa, and partly also in Latin America, the debate on bioethics frequently focuses on its practical relevance in the context of underdevelopment and geopolitical power relations. In Africa, their bioethical approach is influenced by and similar to Western bioethics due to the colonization of many African countries. Some African bioethicists are calling for a shift in bioethics that utilizes indigenous African philosophy rather than western philosophy. Some African bioethicists also believe that Africans will be more likely to accept a bioethical approach grounded in their own culture, as well as empower African people.
Masahiro Morioka argues that in Japan the bioethics movement was first launched by disability activists and feminists in the early 1970s, while academic bioethics began in the mid-1980s. During this period, unique philosophical discussions on brain death and disability appeared both in the academy and journalism. In Chinese culture and bioethics, there is not as much of an emphasis on autonomy, as opposed to the heavy emphasis placed on autonomy in Western bioethics. Community, social values, and family are all heavily valued in Chinese culture, and contribute to the lack of emphasis on autonomy in Chinese bioethics. The Chinese believe that the family, community, and individual are all interdependent, so it is common for the family unit to collectively make healthcare and medical decisions for a loved one, instead of an individual making an independent decision for themselves.
Some argue that spirituality, and understanding one another as spiritual beings and moral agents, is an important aspect of bioethics, and that spirituality and bioethics are heavily intertwined. For healthcare providers, knowing and understanding varying world views and religious beliefs can help them better treat and serve their patients. Developing a connection with, and understanding of, a patient as a moral agent enhances the care provided. Without this connection or understanding, patients risk becoming "faceless units of work", regarded as a "set of medical conditions" rather than the storied and spiritual beings that they are.
Islamic bioethics
Bioethics in the realm of Islam differs from Western bioethics, although they share some similar viewpoints. Western bioethics is focused on rights, especially individual rights, whereas Islamic bioethics focuses more on religious duties and obligations, such as seeking treatment and preserving life. Islamic bioethics is heavily influenced by and connected to the teachings of the Qur'an as well as the teachings of Muhammad, which essentially make it an extension of Shariah, or Islamic law. In Islamic bioethics, passages from the Qur'an are often used to validate various medical practices. For example, a passage from the Qur'an states "whosoever killeth a human being ... it shall be as if he had killed all humankind, and whosoever saveth the life of one, it shall be as if he saved the life of all humankind." This excerpt can be used to encourage using medicine and medical practices to save lives, but can also be read as a protest against euthanasia and assisted suicide. A high value is placed on human life in Islam, and in turn human life is deeply valued in the practice of Islamic bioethics. Muslims believe all human life, even one of poor quality, must be given appreciation, cared for, and conserved.
To react to new technological and medical advancements, informed Islamic jurists regularly hold conferences to discuss new bioethical issues and come to an agreement on where they stand from an Islamic perspective. This allows Islamic bioethics to stay pliable and responsive to new advancements in medicine. The standpoints taken by Islamic jurists on bioethical issues are not always unanimous and at times may differ. There is much diversity among Muslims from country to country, and in the degrees to which they adhere to Shariah. Differences and disagreements regarding jurisprudence, theology, and ethics between the two main branches of Islam, Sunni and Shia, lead to differences in the ways Islamic bioethics is practiced throughout the Islamic world. One area where there is a lack of consensus is brain death. The Organization of the Islamic Conference's Islamic Fiqh Academy (OIC-IFA) holds that brain death is equivalent to cardiopulmonary death, and acknowledges a brain-dead individual as deceased. By contrast, the Islamic Organization of Medical Sciences (IOMS) states that brain death is an "intermediate state between life and death" and does not acknowledge a brain-dead individual as deceased.
Islamic bioethicists look to the Qur'an and religious leaders regarding their outlook on reproduction and abortion. It is firmly believed that the reproduction of a human child can only be proper and legitimate within marriage. This does not mean that a child can only be conceived through sexual intercourse between a married couple; rather, the only proper and legitimate way to have a child is as an act between husband and wife. It is permissible for a married couple to have a child artificially, using techniques of modern biotechnology rather than sexual intercourse, but to do this outside the context of marriage would be deemed immoral.
Islamic bioethics is strongly against abortion and strictly prohibits it. The IOMS states that "from the moment a zygote settles inside a woman's body, it deserves a unanimously recognized degree of respect." Abortion may only be permitted in unique situations where it is considered to be the "lesser evil".
Islamic bioethics may be used to find advice on practical matters relating to life in general and human life in particular. Because of the interdependence of Islamic law and Islamic ethics, Islamic bioethics must take into account both moral concerns and the requirements of Islamic law (Shari'ah). To avoid error, an issue must be examined thoroughly, first against moral criteria and then against legal ones; many writers on Islamic bioethics appear not to distinguish clearly between the two.
Although Islamic law and Islamic morality are in agreement with one another, they may issue distinct prescriptions because of their differing ends and objectives. One distinction, for instance, is that Islamic ethics seeks to teach those with higher aspirations how to become more perfect and closer to God, whereas Islamic law sets minimum standards of conduct that are attainable by the average person, or even by those below average.
So whatever is deemed essential or required by Islamic law is undoubtedly viewed the same way by Islamic ethics. However, there may be situations where something is not against Islamic law but is nonetheless condemned by Islamic ethics, or circumstances that, while not required by Islamic law, are essential from an ethical standpoint. For instance, while idle conversation is not strictly forbidden by Islamic law, it is considered morally objectionable because it wastes time and is detrimental to one's spiritual growth. Another illustration is the night prayers, which should be performed after midnight and before dawn.
Like any other inquiry into Islam, Islamic bioethics is founded on the Qur'an, the Sunnah, and reason (al-'aql). Sunni Muslims may add ijmaa' (consensus) and qiyas (analogy) to these sources; Shi'a do not recognize ijmaa' and qiyas as such, regarding them as insufficient proofs on their own.
Christian bioethics
In Christian bioethics it is noted that the Bible, especially the New Testament, teaches about healing by faith. Healing in the Bible is often associated with the ministry of specific individuals, including Elijah, Jesus and Paul. The largest group of miracles mentioned in the New Testament involves cures. The Gospels give varying amounts of detail for each episode; sometimes Jesus cures simply by saying a few words, while at other times he employs material such as spit and mud.
Christian physician Reginald B. Cherry views faith healing as a pathway of healing in which God uses both the natural and the supernatural to heal. Being healed has been described as a privilege of accepting Christ's redemption on the cross. Pentecostal writer Wilfred Graves Jr. views the healing of the body as a physical expression of salvation. One Gospel passage, after describing Jesus exorcising at sunset and healing all of the sick who were brought to him, presents these miracles as a fulfillment of prophecy: "He took up our infirmities and carried our diseases".
Jesus endorsed the use of the medical assistance of the time (medicines of oil and wine) when he told the parable of the Good Samaritan (Luke 10:25–37), who "bound up [an injured man's] wounds, pouring on oil and wine" (verse 34) as a physician would. Jesus then told the doubting teacher of the law (who had elicited this parable by his self-justifying question, "And who is my neighbor?" in verse 29) to "go, and do likewise" in loving others with whom he would never ordinarily associate (verse 37).
The principle of the sacredness of human life is at the basis of Catholic bioethics. On the subject of abortion, for example, Catholics and Orthodox hold very similar positions. Catholic bioethics insists on this principle without exception, while Anglicans, Waldensians and Lutherans hold positions closer to secular ones, for example with regard to the end of life.
In 1936, Ludwig Bieler argued that Jesus was stylized in the New Testament in the image of the "divine man" (Greek: theios aner), which was widespread in antiquity. It is said that many of the famous rulers and elders of the time had divine healing powers.
Contemporary bioethical and health care policy issues, including abortion, the distribution of limited resources, the nature of appropriate hospital chaplaincy, fetal experimentation, the use of fetal tissue in treatment, genetic engineering, the use of critical care units, distinctions between ordinary and extraordinary treatment, euthanasia, free and informed consent, competency determinations, and the meaning of life, are being examined within the framework of traditional Christian moral commitments.
Feminist bioethics
Feminist bioethics critiques the fields of bioethics and medicine for their lack of inclusion of women's and other marginalized groups' perspectives. This lack of perspective from women is thought to create power imbalances that favor men, imbalances theorized to arise from the androcentric nature of medicine. One example of a lack of consideration of women is in clinical drug trials that exclude women due to hormonal fluctuations and possible future birth defects, which has led to a gap in research on how pharmaceuticals affect women. Feminist bioethicists argue that feminist approaches to bioethics are necessary because the lack of diverse perspectives in bioethics and medicine can cause preventable harm to already vulnerable groups.
This study first gained prevalence in the field of reproductive medicine, as it was viewed as a "woman's issue". Since then, feminist approaches to bioethics have expanded to include bioethical topics in mental health, disability advocacy, healthcare accessibility, and pharmaceuticals. Lindemann notes the need for the future agenda of feminist approaches to bioethics to expand further to include healthcare organizational ethics, genetics, stem cell research, and more.
Notable figures in feminist bioethics include Carol Gilligan, Susan Sherwin, and the creators of the International Journal of Feminist Approaches to Bioethics, Mary C. Rawlinson and Anne Donchin. Sherwin's book No Longer Patient: Feminist Ethics in Health Care (1992) is credited with being one of the first full-length books published on the topic of feminist bioethics and points out the shortcomings of then-current bioethical theories. Sherwin's viewpoint incorporates models of oppression within healthcare, showing how it can further marginalize women, people of color, immigrants, and people with disabilities. Since its creation in 1992, the International Journal of Feminist Approaches to Bioethics has done much work to legitimize feminist work and theory in bioethics.
Feminist bioethics challenges mainstream bioethics by pointing out the male marking of its purportedly generic human subject and the tradition's failure to treat women's rights as human rights. On this view, the unseen gendering of the universal renders the other gender mute and invisible, and the dehumanization of "man" is a root cause of illness on both a social and a personal level. Feminist bioethicists have made a number of recommendations for how representations of women's experience and bodies could help to constructively reconsider fundamental ethical principles.
Environmental bioethics
Bioethics, the ethics of the life sciences in general, has expanded from the encounter between experts in medicine and the laity to include organizational and social ethics as well as environmental ethics. As of 2019, textbooks of green bioethics existed.
Ethical issues in gene therapy
Gene therapy raises ethical questions because scientists are making changes to genes, the building blocks of the human body. Currently, therapeutic gene therapy is available to treat specific genetic disorders by editing cells in specific body parts; for example, gene therapy can treat hematopoietic disease. There is also a controversial form called germline gene therapy, in which genes in a sperm or egg are edited to prevent genetic disorders in future generations. It is unknown how this type of gene therapy affects long-term human development. In the United States, federal funding cannot be used to research germline gene therapy.
The ethical challenges in gene therapy for rare childhood diseases underscore the complexity of initiating trials, determining dosage levels, and involving affected families. With over a third of gene therapies targeting rare, genetic, pediatric-onset, and life-limiting diseases, fair participant selection and transparent engagement with patient communities become crucial ethical considerations. Another concern involves the use of virus-derived vectors for gene transfer, raising safety and hereditary implications. Additionally, the ethical dilemma in gene therapy explores the potential harms of human enhancement, particularly regarding the birth of disabled individuals. Addressing these challenges is vital for responsible development, application, and equitable access to gene therapies. The experience with human growth hormone further illustrates the blurred lines between therapy and enhancement, emphasizing the importance of ethical considerations in balancing therapeutic benefits and potential enhancements, especially in the rapidly advancing field of genomic medicine. As gene therapies progress towards FDA approval, collaboration with clinical genetics providers becomes essential to navigate the ethical complexities of this new era in medicine.
Professional practice
Bioethics as a subject of professional practice (although not a formal profession) developed first in North America in the 1980s and 1990s, in the areas of clinical/medical ethics and research ethics. Slowly internationalizing as a field, professional bioethics has since the 2000s expanded to include other specialties, such as organizational ethics in health systems, public health ethics, and more recently the ethics of artificial intelligence. Professional ethicists may be called consultants, ethicists, coordinators, or analysts, and they may work in healthcare organizations, government agencies, and in both the public and private sectors. They may be full-time employees, independent consultants, or hold cross-appointments with academic institutions such as research centres or universities.
Models of bioethics
According to Igor Boyko's book "Bioethics", there are three models of bioethics in the world:
Model 1 is "liberal" when there are no restrictions.
Model 2 is "utilitarian", when what is prohibited is allowed for one person or a group of persons, if it is useful and beneficial for the majority of people.
Model 3 is "personalistic", where the human person is considered a supernatural and inviolable integrity.
Learned societies and professional associations
The field of bioethics has developed national and international learned societies and professional associations, such as the American Society for Bioethics and Humanities, the Canadian Bioethics Society, the Canadian Association of Research Ethics Boards, the Association of Bioethics Program Directors, the Bangladesh Bioethics Society and the International Association of Bioethics.
Education
Bioethics is taught in courses at the undergraduate and graduate level in different academic disciplines or programs, such as philosophy, medicine, law, and the social sciences. In many health professional programs (medicine, nursing, rehabilitation), obligatory training in ethics (e.g., professional ethics, medical ethics, clinical ethics, nursing ethics) has become a requirement for professional accreditation. Interest in the field and professional opportunities have led to the development of dedicated programs with concentrations in bioethics, largely in the United States, Canada (List of Canadian bioethics programs) and Europe, offering undergraduate majors/minors, graduate certificates, and master's and doctoral degrees.
Training in bioethics (usually clinical, medical, or professional ethics) is part of core competency requirements for health professionals in fields such as nursing, medicine and rehabilitation. For example, every medical school in Canada teaches bioethics so that students can gain an understanding of biomedical ethics and use that knowledge in their future careers to provide better patient care. Canadian residency training programs are required to teach bioethics as a condition of accreditation, and it is a requirement of both the College of Family Physicians of Canada and the Royal College of Physicians and Surgeons of Canada.
Criticism
As a field of study, bioethics has also drawn criticism. For instance, Paul Farmer noted that bioethics tends to focus its attention on problems that arise from "too much care" for patients in industrialized nations while giving little or no attention to the ethical problem of too little care for the poor. Farmer characterizes the bioethics of handling morally difficult clinical situations, normally in hospitals in industrialized countries, as "quandary ethics". He does not regard quandary ethics and clinical bioethics as unimportant; he argues, rather, that bioethics must be balanced and give due weight to the poor.
Additionally, bioethics has been condemned for its lack of diversity in thought, particularly concerning race. Even as the field has grown to include the areas of public opinion, policymaking, and medical decision-making, little to no academic writing has been authored concerning the intersection between race (especially the cultural values imbued in that construct) and bioethical literature. John Hoberman illustrates this in a 2016 critique, in which he points out that bioethicists have been traditionally resistant to expanding their discourse to include sociological and historically relevant applications. Central to this is the notion of white normativity, which establishes the dominance of white hegemonic structures in bioethical academia and tends to reinforce existing biases.
These points and critiques, along with the neglect of women's perspectives within bioethics, have also been discussed amongst feminist bioethical scholars.
However, differing views on bioethics' lack of diversity of thought and social inclusivity have also been advanced. For example, one historian has argued that diversity of thought and social inclusivity are the two essential cornerstones of bioethics, although they have not been fully realized.
To practice critical bioethics, bioethicists must base their investigations in empirical research, test ideas against facts, engage in self-reflection, and be skeptical of the assertions made by other bioethicists, scientists, and doctors. The aim is a thorough normative study of actual moral experience.
Issues
Research in bioethics is conducted by a broad and interdisciplinary community of scholars and is not restricted to those researchers who define themselves as "bioethicists": it includes researchers from the humanities, social sciences, health sciences and health professions, law, the fundamental sciences, and other fields. These researchers may work in specialized bioethics centers and institutes associated with university bioethics training programs, but they may also be based in disciplinary departments without a specific bioethics focus. Notable research centers include, amongst others, The Hastings Center, the Kennedy Institute of Ethics, the Yale Interdisciplinary Center for Bioethics, and the Centre for Human Bioethics.
Areas of bioethics research that are the subject of published, peer-reviewed bioethical analysis span a wide range of topics.
See also
References
Further reading
Ihor Boyko, Bioethics: Scripts for Students, Ukrainian Catholic University, Lviv, 2008, 180 pp. (Ігор Бойко, Біоетика: скрипти для студентів, Український Католицький Університет, Львів, 2008, 180 с.)
External links
Bioethics entry in the Internet Encyclopedia of Philosophy.
"Feminist Bioethics" at the Stanford Encyclopedia of Philosophy
"MyBioethics" – a free online resource (app) for learning bioethics through real cases.
Applied ethics
Philosophy of biology | Bioethics | [
"Technology",
"Biology"
] | 6,903 | [
"Bioethics",
"Behavior",
"Ethics of science and technology",
"Human behavior",
"Applied ethics"
] |
192,025 | https://en.wikipedia.org/wiki/Green%20roof | A green roof or living roof is a roof of a building that is partially or completely covered with vegetation and a growing medium, planted over a waterproofing membrane. It may also include additional layers such as a root barrier and drainage and irrigation systems. Container gardens on roofs, where plants are maintained in pots, are not generally considered to be true green roofs, although this is debated. Rooftop ponds are another form of green roof, used to treat greywater. Vegetation, soil, a drainage layer, a root barrier and an irrigation system constitute the green roof.
Green roofs serve several purposes for a building, such as absorbing rainwater, providing insulation, creating a habitat for wildlife, decreasing stress for people around the roof by providing a more aesthetically pleasing landscape, and helping to lower urban air temperatures and mitigate the heat island effect. Green roofs are suitable for retrofit or redevelopment projects as well as new buildings and can be installed on small garages or larger industrial, commercial and municipal buildings. They effectively use the natural functions of plants to filter water and treat air in urban and suburban landscapes. There are two types of green roof: intensive roofs, which are thicker, with a minimum depth of , and can support a wider variety of plants but are heavier and require more maintenance, and extensive roofs, which are shallow, ranging in depth from , lighter than intensive green roofs, and require minimal maintenance.
The term green roof may also be used to indicate roofs that use some form of green technology, such as a cool roof, a roof with solar thermal collectors or photovoltaic panels. Green roofs are also referred to as eco-roofs, oikosteges, vegetated roofs, living roofs, greenroofs and VCPH (Horizontal Vegetated Complex Partitions)
Environmental benefits
Thermal reduction and energy conservation
Green roofs improve a building's thermal performance and reduce energy consumption. They can reduce heating demand by adding mass and thermal resistance, and can reduce the heat island effect by increasing evapotranspiration. A 2005 study by Brad Bass of the University of Toronto showed that green roofs can also reduce heat loss and energy consumption in winter conditions. A modeling study found that adding green roofs to 50 percent of the available surfaces in downtown Toronto would cool the entire city by .
Through evaporative cooling, a green roof reduces cooling loads on a building by fifty to ninety percent, especially if it is glassed-in so as to act as a terrarium and passive solar heat reservoir.
A concentration of green roofs in an urban area can reduce the city's average temperatures during the summer, combating the urban heat island effect. Traditional building materials soak up the sun's radiation and re-emit it as heat, making cities at least hotter than surrounding areas. On Chicago's City Hall, by contrast, which features a green roof, roof temperatures on a hot day are typically cooler than they are on traditionally roofed buildings nearby. Green roofs are becoming common in Chicago, as well as in Atlanta, Portland, and other United States cities, where their use is encouraged by regulations to combat the urban heat-island effect. Green roofs are a type of low impact development. In the case of Chicago, the city has passed codes offering incentives to builders who put green roofs on their buildings. The Chicago City Hall green roof is one of the earliest and most well-known examples of green roofs in the United States; it was planted as an experiment to determine the effects a green roof would have on the microclimate of the roof. Following this and other studies, it has now been estimated that if all the roofs in a major city were greened, urban temperatures could be reduced by as much as .
Water management
Green roofs can reduce stormwater runoff via water-wise gardening techniques. Green roofs play a significant role in retrofitting the Low Impact Development (LID) practices in urban areas. A study presented at the Green Roofs for Healthy Cities Conference in June 2004, cited by the EPA, found water runoff was reduced by over 75% during rainstorms. Water is stored by the roof's substrate and then taken up by the plants, from which it is returned to the atmosphere through transpiration and evaporation.
Green roofs decrease the total amount of runoff and slow the rate of runoff from the roof. It has been found that they can retain up to 75% of rainwater, gradually releasing it back into the atmosphere via condensation and transpiration, while retaining pollutants in their soil. Many green roofs are installed to comply with local regulations and government fees, often regarding stormwater runoff management. In areas with combined sewer-stormwater systems, heavy storms can overload the wastewater system and cause it to flood, dumping raw sewage into the local waterways. Phosphorus and nitrogen are often among the environmentally harmful substances in such runoff, even though they stimulate the growth of plants and crops. Because they are limiting factors of plant growth, adding them to a waterway can trigger excessive biological activity and plant growth.
Ecological benefits
Green roofs create natural habitat as part of an urban wilderness. Even in high-rise urban settings as tall as 19 stories, it has been found that green roofs can attract beneficial insects, birds, bees and butterflies. A recent list of the bee species recorded from green roofs (worldwide) highlights both the diversity of species and the (expected) bias towards small ground-nesting species (Hofmann and Renner, 2017). Rooftop greenery complements wild areas by providing stepping stones for songbirds, migratory birds and other wildlife facing shortages of natural habitat. Bats have also been reported to be more active over green roofs due to the foraging opportunities these roofs provide. Research at the Javits Center green roof in New York, NY, has shown a correlation between higher numbers of certain insects on the roof, particularly moths, and an increased amount of bat foraging activity.
Green roofs also serve as a green wall, filtering pollutants and carbon dioxide out of the air, helping to lower rates of diseases such as asthma. They can also filter pollutants and heavy metals out of rainwater.
Carbon sequestration
An additional environmental benefit of a green roof is the ability to sequester carbon. Carbon is the main component of plant matter and is naturally absorbed by plant tissue. The carbon is stored in the plant tissue and the soil substrate through plant litter and root exudates. A study on green roofs in Michigan and Maryland found that the above-ground biomass and the below-ground biomass stored on average 168 g C m−2 and 107 g C m−2, respectively, with variations among the different plant species used. Substrate carbon content averaged 913 g C m−2, and after subtraction of the original carbon content the total sequestration was 378 g C m−2. Sequestration can be improved by changing plant species, increasing substrate depth, adjusting substrate composition, and altering management practices. In a study done in Michigan, above-ground sequestration ranged from 64 g C m−2 to 239 g C m−2 for S. acre and S. album. Increasing the substrate depth would also provide more volume for carbon storage and allow a more diverse range of plants with greater potential for carbon storage. Direct carbon sequestration of this kind can be measured and accounted for. Green roofs also indirectly reduce the CO2 given off by power plants through their ability to insulate buildings. Buildings in the US account for 38% of the country's total carbon dioxide emissions. A model supported by the U.S. Department of Energy found a 2 percent reduction in electricity consumption and a 9–11% reduction in natural gas consumption when green roofs are implemented.
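These figures imply a simple carbon balance. As a rough, illustrative reconstruction (the initial carbon content of the substrate is not stated here; the value of roughly 810 g C m−2 below is inferred from the other numbers rather than quoted from the study), the reported net sequestration is consistent with

$$C_{\text{net}} = C_{\text{above}} + C_{\text{below}} + \left(C_{\text{substrate, final}} - C_{\text{substrate, initial}}\right) \approx 168 + 107 + (913 - 810) = 378 \ \text{g C m}^{-2}.$$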
Other
Help to insulate a building for sound; the soil helps to block lower frequencies and the plants block higher frequencies
If installed correctly many living roofs can contribute to LEED points
Increase agricultural space
Green roofs not only retain rainwater, but also moderate the temperature of the water and act as natural filters for any of the water that happens to run off.
Costs and financial benefits
A properly designed and installed extensive green-roof system generally costs less than an intensive green roof. However, since most of the materials used to build the green roof can be salvaged, it is estimated that the cost of replacing a green roof is generally one third of the initial installation costs.
With the initial cost of installing a green roof in mind, there are many financial benefits that accompany green roofing.
Green roofing can extend the lifespan of a roof by over 200% by covering the waterproofing membrane with growing medium and vegetation; this shields the membrane from ultraviolet radiation and physical damage. Further, Penn State University's Green Roof Research Center expects the lifespan of a roof to increase by as much as three times after greening the roof.
It is estimated that the installation of a green roof could increase the real estate value of an average house by about 7%.
Reduction in energy use is an important property of green roofing. By improving the thermal performance of a roof, green roofing allows buildings to better retain their heat during the cooler winter months while reflecting and absorbing solar radiation during the hotter summer months, allowing buildings to remain cooler. A study conducted by Environment Canada found a 26% reduction in summer cooling needs and a 26% reduction in winter heat losses when a green roof is used. With respect to hotter summer weather, green roofing is able to reduce the solar heating of a building by reflecting 27% of solar radiation, absorbing 60% by the vegetation through photosynthesis and evapotranspiration, and absorbing the remaining 13% into the growing medium. Such mitigation of solar radiation has been found to reduce building temperatures by up to and reduce energy needs for air-conditioning by 25% to 80%. This reduction in energy required to cool a building in the summer is accompanied by a reduction in energy required to heat a building in the winter, thus reducing the energy requirements of the building year-round which allows the building temperature to be controlled at a lower cost.
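Taken together, the percentages cited above describe a complete budget for the solar radiation incident on the roof; as a simple check (using the figures quoted in the cited study, which are not universal constants),

$$\underbrace{27\%}_{\text{reflected}} \;+\; \underbrace{60\%}_{\text{absorbed and dissipated by vegetation}} \;+\; \underbrace{13\%}_{\text{absorbed by the growing medium}} \;=\; 100\% \ \text{of incident solar radiation}.$$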
Depending on the region in which a green roof is installed, incentives may be available in the form of stormwater tax reductions, grants, or rebates. These incentives are most likely to be found in areas where failing stormwater management infrastructure is in place, where the urban heat island effect has significantly increased the local air temperature, or where environmental contaminants in stormwater runoff are of great concern. An example of such an incentive is a one-year property tax credit, available in New York City since 2009, for property owners who green at least 50% of their roof area.
Disadvantages
The main disadvantage of green roofs is that the initial cost of installing a green roof can be double that of a normal roof. Depending on what kind of green roof it is, the maintenance costs could be higher, but some types of green roof have little or no ongoing cost. Some kinds of green roofs also place higher demands on the waterproofing system of the structure, both because water is retained on the roof and due to the possibility of roots penetrating the waterproof membrane. Another disadvantage is that the wildlife they attract may include pest insects which could easily infiltrate a residential building through open windows.
The additional mass of the soil substrate and retained water places a large strain on the structural support of a building. This makes it unlikely that intensive green roofs will become widely implemented, owing to the scarcity of buildings able to support such a large amount of added weight and the added cost of reinforcing buildings to do so. Some types of green roofs do have more demanding structural standards, especially in seismic regions of the world. Some existing buildings cannot be retrofitted with certain kinds of green roofs because the weight load of the substrate and vegetation exceeds the permitted static loading. The weight of a green roof caused the collapse of a large sports hall roof in Hong Kong in 2016; in the wake of the disaster, numerous other green roofs around the territory were removed.
Green roofs require significantly more maintenance and maintenance energy than a standard roof. Standard maintenance includes removing debris, controlling weeds, deadhead trimming, checking moisture levels, and fertilizing. The maintenance energy required depends on many variables, including climate, intensity of rainfall, type of building, type of vegetation, and external coatings; the most significant effect comes from scarce rainfall, which increases maintenance energy because of the watering required. During a 10-year roof maintenance cycle, a house with a green roof requires more retrofit embodied energy than a house with a white roof. The individual components of a green roof also have additional implications during the manufacturing process compared to a conventional roof: the embodied energy of the green roof components is equivalent to 6448 g C m−2, significantly greater than the 378 g C m−2 sequestered. Criteria for waste management practices when green roofs reach their end of life remain uncodified.
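For a rough sense of scale (assuming, as the figures quoted here appear to, that both values refer to the same roof area and study period), the embodied carbon equivalent of the components exceeds the measured sequestration by roughly a factor of seventeen:

$$\frac{6448 \ \text{g C m}^{-2}}{378 \ \text{g C m}^{-2}} \approx 17.$$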
Both sod roofs and LWA-based (Lightweight Aggregates) roofs have been found to have a negative impact on the quality of their resulting runoff.
Types
Green roofs can be categorized as intensive, semi-intensive, or extensive, depending on the depth of planting medium and the amount of maintenance they need. Extensive green roofs traditionally support of vegetation while intensive roofs support of vegetation. Traditional roof gardens, which require a reasonable depth of soil to grow large plants or conventional lawns, are considered intensive because they are labour-intensive, requiring irrigation, feeding, and other maintenance. Intensive roofs are more park-like with easy access and may include anything from kitchen herbs to shrubs and small trees.
Extensive green roofs, by contrast, are designed to be virtually self-sustaining and should require only a minimum of maintenance, perhaps a once-yearly weeding or an application of slow-release fertiliser to boost growth. Extensive roofs are usually only accessed for maintenance. They can be established on a very thin layer of soil (most use specially formulated composts): even a thin layer of rockwool laid directly onto a watertight roof can support a planting of Sedum species and mosses. Some green roof designs incorporate both intensive and extensive elements. To protect the roof, a waterproofing membrane is often used, which is manufactured to remain watertight in extreme conditions including constant dampness, ponding water, high and low alkaline conditions and exposure to plant roots, fungi and bacterial organisms.
Advances in green roof technology have led to the development of new systems that do not fit into the traditional classification of green roof types. Comprehensive green roofs bring the most advantageous qualities of extensive and intensive green roofs together. Comprehensive green roofs support plant varieties typically seen in intensive green roofs at the depth and weight of an extensive green roof system.
Another important distinction is between pitched green roofs and flat green roofs. Pitched sod roofs, a traditional feature of many Scandinavian buildings, tend to be of a simpler design than flat green roofs. This is because the pitch of the roof reduces the risk of water penetrating through the roof structure, allowing the use of fewer waterproofing and drainage layers.
History
In ancient times green roofs consisted of cave-like structures or sod roofs covered with earth and plants, commonly used for agriculture, dwelling, and ceremonial purposes. These early shelters provided protection from the elements, good insulation during the winter months, and a cool location in the summer. By modern standards, however, they were neither waterproof nor equipped with any system to keep out unwanted burrowing wildlife.
Modern green roofs, which are made of a system of manufactured layers deliberately placed over roofs to support growing medium and vegetation, are a relatively new phenomenon. However, green roofs or sod roofs in northern Scandinavia have been around for centuries. The modern trend started when green roofs were developed in Germany in the 1960s, and has since spread to many countries. Today, it is estimated that about 10% of all German roofs have been "greened".
A number of European countries have very active associations promoting green roofs, including Germany, Switzerland, the Netherlands, Norway, Italy, Austria, Hungary, Sweden, the UK, and Greece. Germany was the first country to start developing green roof systems and market them on a large scale. The City of Linz in Austria has been paying developers to install green roofs since 1983, and in Switzerland it has been a federal law since the late 1990s. In the UK, uptake has been slow, but a number of cities have developed policies to encourage their use, notably London and Sheffield.
Green roofs are also becoming increasingly popular in North America, although they are not as common as in some parts of Europe. Numerous North American cities offer tax incentives to developers who integrate green roofs in their buildings. Toronto and San Francisco legally mandate new buildings to include green roofs.
Rooftop water purification is also being implemented in green roofs. These forms of green roofs are actually treatment ponds built into the rooftops. They are built either from a simple substrate (as is being done in Dongtan) or with plant-based ponds; plants used include calamus, Menyanthes trifoliata, and Mentha aquatica.
Several studies have been carried out in Germany since the 1970s. Berlin is one of the most important centers of green roof research in Germany. Particularly in the last 10 years, much more research has begun. About ten green roof research centers exist in the US and activities exist in about 40 countries. In a recent study on the impacts of green infrastructure, in particular green roofs in the Greater Manchester area, researchers found that adding green roofs can help keep temperatures down, particularly in urban areas: "adding green roofs to all buildings can have a dramatic effect on maximum surface temperatures, keeping temperatures below the 1961–1990 current form case for all time periods and emissions scenarios. Roof greening makes the biggest difference…where the building proportion is high and the evaporative fraction is low. Thus, the largest difference was made in the town centers".
Brown roofs
Industrial brownfield sites can be valuable ecosystems, supporting rare species of plants, animals and invertebrates. Increasingly in demand for redevelopment, these habitats are under threat. "Brown roofs", also known as "biodiverse roofs", can partly mitigate this loss of habitat by covering the flat roofs of new developments with a layer of locally sourced material. Construction techniques for brown roofs are typically similar to those used to create flat green roofs, the main difference being the choice of growing medium (usually locally sourced rubble, gravel, soil, etc.) to meet a specific biodiversity objective. In Switzerland, it is common to use alluvial gravels from the foundations; in London, a mix of brick rubble and some concrete has been used.
The original idea was to allow the roofs to self-colonise with plants, but they are sometimes seeded to increase their biodiversity potential in the short term; such practices are derided by purists. The roofs are colonised by spiders and insects (many of which are becoming extremely rare in the UK as such sites are developed) and provide a feeding site for insectivorous birds. Laban, a centre for contemporary dance in London, has a brown roof specifically designed to encourage the nationally rare black redstart. A green roof, high above ground level on the Barclays Bank HQ in Canary Wharf, is claimed to be the highest in the UK and Europe "and probably in the world" to act as a nature reserve. Designed by combining the principles of green and brown roofs, it is already home to a range of rare invertebrates.
ASLA Award Green Roof Projects
2017 Award: Seeding Green Roofs for Greater Biodiversity and Lower Costs, Lincoln, NE, USA. Richard Sutton
2013 Award: Green Roof Innovation Testing Laboratory, Toronto, Ontario, Canada. John H. Daniels, Brooklyn Botanic Garden Visitors Center, Brooklyn. HMWhite, and NYC Parks Green Roof: A Living Laboratory for Innovative Green Roof Design, New York, NY. NYC Parks
2012 Award: Lafayette Greens: Urban Agriculture, Urban Fabric, Urban Sustainability, Detroit. Kenneth Weikal Landscape Architecture 200 Fifth Avenue, NYC. Landworks Studio, Inc.
2011 Award: Manassas Park Elementary School Landscape, Manassas Park, VA. Siteworks
2009 Award: California Academy of Sciences, San Francisco, CA. SWA Group, Changi Airport Terminal 3 Interior Landscape, Singapore. Tierra Design (S) Pte Ltd, Corporate Headquarters, San Francisco, CA. OLIN, Macallen Building, South Boston, MA. Landworks Studio, Inc., and Museo del Acero Horno3, Monterrey, Mexico. Surfacedesign Inc.+ Harari arquitectos
2008 Award: Gannett/USA Today Headquarters, McLean, Virginia. Michael Vergason Landscape Architects, Ltd.
2007 Award: Washington Mutual Center Roof Garden, Seattle, Washington. Phillips Farevaag Smallenberg
2002 Award: Chicago City Hall Green Roof, Chicago, Illinois. David Yocca
Examples by country
Australia
Green roofs have been increasing in popularity in Australia over the past 10 years. Some of the early examples include the Freshwater Place residential tower in Melbourne (2002) with its Level 10 rooftop Half Acre Garden, CH2 building housing the Melbourne City Council (2006) – Australia's first 6-star Green Star Design commercial office building as certified by the Green Building Council of Australia, and Condor Tower (2005) with a lawn on the 4th floor.
Since 2008, city councils and influential business groups in Australia have become active promoting the benefits of green roofs. "The Blueprint to Green Roof Melbourne" is one program being run by the Committee for Melbourne. In 2010, the largest Australian green roof project was announced. The Victorian Desalination Project will have a "living tapestry" of 98,000 Australian indigenous plants over a roof area spanning more than . The roof will form part of the desalination plant's sophisticated roof system, designed to blend the building into the landscape, and provide acoustic protection, corrosion resistance, thermal control, and reduced maintenance.
In June 2014, ecological artist Lloyd Godman, with structural engineer Stuart Jones and environmental scientist Grant Harris, collaborated to install an experiment using Tillandsia plants in extreme outdoor conditions at levels 92, 91, 65 and 56 of Eureka Tower in Melbourne, Australia. The selected air plants are extremely light and are able to grow with no soil or watering system; the plants have been checked at regular intervals since their installation and are still growing and flowering. One species, Tillandsia bergeri, has grown from a single shoot to several thriving colonies.
The project is now titled Tillandsia SWARM and has been expanded to include many other buildings across Australia, including Federation Square, National Gallery of Victoria and Essendon Airport. Godman has also experimented with Tillandsia plant screens that can be moved across skylights to create shade in summer and to allow in sun during winter. Temperature readings taken on a 40 °C day in summer revealed that the surface temperature on the roof had reached 84 °C, while the shadows cast by the plants had reduced the surface temperature on the roof to 51 °C.
Canada
The city of Toronto approved a by-law in May 2009 mandating green roofs on residential and industrial buildings. Green Roofs for Healthy Cities has criticized the new laws as not stringent enough, since they only apply to residential buildings that are a minimum of six stories high. By 31 January 2011, industrial buildings were required to make at least 10% of their roofs green. Toronto City Hall's podium roof was renovated to include a rooftop garden, the largest publicly accessible roof in the city; the green roof was opened to the public in June 2010. Many green roofs in Canada also use sustainable rainwater harvesting practices.
In 2008, the Vancouver Convention Centre installed a living roof of indigenous plants and grasses on its West building, making it the largest green roof in Canada.
The new Canadian War Museum in Ottawa, opened in 2005, also features a grass-covered roof.
During the renovation of the Hamilton City Hall in Hamilton, Ontario that spanned from 2007 to 2010, many efforts were taken to enhance the environmentally friendly nature of the structure, which included the addition of a grass-covered roof.
Simon Fraser University's Burnaby campus contains a substantial number of green roofs.
Canada's first LEED Platinum V4 Home in Wakefield QC, EcoHome's Edelweiss House, has a living Green Roof which is sloped at 12 degrees.
Costa Rica
Living green roofs have been built and grown at Saint Michael's Sustainable Community since 2012. Native plants, mostly flowers chosen for the local environment, maximum shade and mass, provide a colorful and functional living roof. The community has the largest number of green roofs in the country.
Egypt
In Egypt, soil-less agriculture is used to grow plants on the roofs of buildings. No soil is placed directly on the roof itself, thus eliminating the need for an insulating layer; instead, plants are grown on wooden tables. Vegetables and fruit are the most popular candidates, providing a fresh, healthy source of food that is free from pesticides.
A more advanced method being used experimentally in Egypt is aquaponics: farming fish alongside plants in a closed cycle. This allows the plants to benefit from the ammonia excreted by the fish, helping the plants to grow better, and at the same time eliminates the need to change the water for the fish, because the plants help to keep it clean by absorbing the ammonia. The fish also get some nutrients from the roots of the plants.
Finland
In Finland, green roofs are still scarce, although some experimental green roofs have been built in big cities. The capital, Helsinki, has published guidelines for encouraging the building of green roofs in the city. There is ongoing research on the topic, as conditions in southern Europe are very different from those in the north and knowledge acquired there cannot be applied directly to colder climates. The Fifth Dimension – Green Roofs and Walls in Urban Areas research programme aims to produce high-level scientific and broadly applicable knowledge on optimal green roof and wall solutions in Finland.
France
In France, an extensive, cable-supported green roof has been created on the International School in Lyon. Another huge green roof of roughly has been incorporated into the new museum L'Historial de la Vendée which opened in June 2006 at Les Lucs-sur-Boulogne.
Germany
Germany has long-held green roof traditions that began in the early industrialization period more than 100 years ago. In the 1970s, green roof technology was elevated to the next level: serious stormwater issues made cities think about innovative solutions, preferably with living plants. Modern green roof technology, using high-performance, lightweight materials, made it possible to grow hardy vegetation even on roofs that can hardly support any additional load. In the 1980s, modern green roof technology was common knowledge in Germany while it was practically unknown in any other country in the world. In Stuttgart, home to one of the most innovative municipal parks and recreation departments and to one of the world's oldest horticultural universities, modern green roof technology was perfected and implemented on a large scale. By the early 2000s, Germany had laws mandating that many metropolitan areas have green roofs.
With the first green roof industry boom in Germany, quality issues were recorded, and the FLL formed a committee focused on modern green roof technology. FLL stands for Forschungsgesellschaft Landschaftsentwicklung Landschaftsbau e.V. (German Landscape Research, Development and Construction Society), an independent non-profit organization. It was founded in 1975 by eight professional organizations for "the improvement of environmental conditions through the advancement and dissemination of plant research and its planned applications". The FLL green roof working group is only one of 40 committees, which together have published a long list of guidelines and labor instructions. Some of these guidelines are also available in English, including the German FLL-Guideline for the Planning, Execution and Upkeep of Green-Roof Sites. The results of the research and synthesis done by FLL members are constantly updated and promulgated using the same principles which govern the compilation of DIN standards, and are published as either guiding principles or labor instructions.
The current Green Roof Guideline was published in 2011.
Today most elements of the German FLL are part of standards and guidelines around the world (FM Global, ASTM, NRCA, SPRI etc.).
Fachvereinigung Bauwerksbegrünung (FBB) was founded in 1990 as the second green roof association in Germany, after the DDV (Deutscher Dachgaertner Verband), founded in 1985. The FBB was conceived as an open forum for manufacturers and planners, merchants and operators, born from the then-visionary idea of understanding the relationship between nature and buildings not as oppositional but as an opportunity. Both the green roofing and conventional roofing industries are equally represented.
The FBB has developed into an innovative lobbying group with a strong market presence, internationally known through its cooperation with other European associations. Today, approximately 100 member companies use the multifaceted services offered by the FBB, which provide a greater degree of market expertise and competitiveness ("Kompetenz im Markt", or "competence in the market").
Today, a substantial area of new green roofs is constructed each year. According to the latest studies, most of these are extensive; the rest are roof gardens. The cities with the most green roofs in Germany are Berlin and Stuttgart. Surveys of the status of regulation are conducted by the FBB. Nearly one third of all German cities have regulations to support green-roof and rainwater technology. Green-roof research institutions are located in several cities, including Hannover, Berlin, Geisenheim and Neubrandenburg.
Germany is the country with the most green roofs in the world, as well as the country with the most advanced knowledge of modern green roof technology. Green roofs in Germany are part of the 2–3 year apprenticeship system for landscaping professionals.
Greece
The Greek Ministry of Finance has installed a green roof on the Treasury in Constitution Square in Athens. The so-called "oikostegi" (from Greek oiko-, meaning building or ecological, and stegi, pronounced "staygee", meaning roof, abode or shelter) was inaugurated in September 2008. Studies of the thermodynamics of the roof in September 2008 concluded that the thermal performance of the building was significantly affected by the installation. In further studies, in August 2009, energy savings of 50% were observed for air conditioning in the floor directly below the installation. The ten-floor building has a total floor space of . The oikostegi covers , equalling 52% of the roof space and 8% of the total floor space. Despite this, energy savings totalling €5,630 per annum were recorded, which translates to a 9% saving in air conditioning and a 4% saving in heating bills for the whole building. An additional observation of the study was that the thermodynamic performance of the oikostegi had improved as biomass was added over the 12 months between the first and second studies, suggesting that further improvements will be observed as the biomass increases still further. The study also noted that, while measurements were being made by thermal cameras, a plethora of beneficial insects were observed on the roof, such as butterflies, honey bees and ladybirds, which was not the case before installation. Finally, the study suggested that both the microclimate and the biodiversity of Constitution Square in Athens had been improved by the oikostegi.
Iceland
Sod roofs are frequently found on traditional farmhouses and farm buildings in Iceland.
Malaysia
Bus stops in Kuala Lumpur were fitted with green roofs in 2019.
Poland
Several cities in Poland have implemented policies and incentives to encourage the installation of green roofs, including Warsaw, Krakow, and Wroclaw. These policies have helped to increase the adoption of green roofs in the country, particularly in urban areas, where they are seen as an important tool for mitigating the environmental impacts of urbanization and improving the quality of life for city residents. The University of Warsaw green roof is one of the most impressive and well-known examples of green roofs in Poland. It covers an area of approximately 10,000 square meters and includes over 30,000 plants from more than 70 different species.
Singapore
Singapore installed a green roof on a bus in 2019 as part of an experiment led by researchers at the National University of Singapore. Green roofs on bus stops in Singapore were found to reduce ambient temperatures by up to 2 °C.
Switzerland
Switzerland has one of Europe's oldest green roofs, created in 1914 at the Moos lake water-treatment plant, Wollishofen, Zürich. Its filter tanks have of flat concrete roofs. To keep the interior cool and prevent bacterial growth in the filtration beds, a drainage layer of gravel and a layer of soil was spread over the roofs, which had been waterproofed with asphalt. A meadow developed from seeds already present in the soil; it is now a haven for many plant species, some of which are now otherwise extinct in the district, most notably 6,000 Orchis morio (green-winged orchid). More recent Swiss examples can be found at Klinikum 1 and Klinikum 2, the Cantonal Hospitals of Basel, and the Sihlpost platform at Zürich's main railway station.
Sweden
What is claimed to be the world's first green roof botanical garden was set up in Augustenborg, Malmö in May 1999. The International Green Roof Institute (IGRI) opened to the public in April 2001 as a research station and educational facility. (It has since been renamed the Scandinavian Green Roof Institute (SGRI), in view of the increasing number of similar organisations around the world.) Green roofs are well-established in Malmö: the Augustenborg housing development near the SGRI botanical garden incorporates green roofs and extensive landscaping of streams, ponds, and soak-ways between the buildings to deal with storm water run-off.
The new Bo01 urban residential development (in the Västra Hamnen (Western Harbour) close to the foot of the Turning Torso office and apartment block, designed by Santiago Calatrava) is built on the site of old shipyards and industrial areas, and incorporates many green roofs.
In 2012, the shopping mall Emporia, with its roof garden, was opened. The roof garden is approximately the size of four soccer fields, making it one of the biggest green roof parks in Europe that is accessible to the public.
United Kingdom
In 2003 English Nature concluded that 'in the UK policy makers have largely ignored green roofs'. However, British examples can be found with increasing frequency. The Kensington Roof Gardens, built above the former Derry & Toms department store in Kensington, London, in 1938, are a notable early example. More recent examples can be found at the University of Nottingham Jubilee Campus, and in London at Sainsbury's Millennium Store in Greenwich, the Horniman Museum and at Canary Wharf. The Ethelred Estate, close to the River Thames in central London, is the British capital's largest roof-greening project to date. Toxteth in Liverpool is also a candidate for a major roof-greening project.
In the United Kingdom, intensive green roofs are sometimes used in built-up city areas where residents and workers often do not have access to gardens or local parks. Extensive green roofs are sometimes used to blend buildings into rural surroundings; for example, Rolls-Royce Motor Cars has one of the biggest green roofs in Europe (covering more than ) on its factory at Goodwood, West Sussex.
The University of Sheffield has created a Green Roof Centre of Excellence and conducted research, particularly in a UK context, into green roofs. Nigel Dunnett of Sheffield University published a UK-centric book about green roofing in 2004 (updated 2008).
Fort Dunlop has the largest green roof in the UK since its redevelopment between 2004 and 2006.
The UK also has one of the most innovative food preparation facilities in Europe, the Kanes salad factory in Evesham. It is topped with a wildflower roof featuring nearly 90 species of wildflower and natural grasses. The seed mix was prepared in consultation with leading ecologists to try to minimise the impact on the local environment. The pre-grown wildflower blanket sits on top of a standing seam roof and is combined with solar panels to create an eco-friendly finish to the entire factory. The development also won the 2013 National Federation of Roofing Contractors Sustainable Roof Award for Green Roofing.
United States
One of the largest expanses of extensive green roof is to be found in the US, at Ford Motor Company's River Rouge Plant, Dearborn, Michigan, where of assembly plant roofs are covered with sedum and other plants, designed by William McDonough; the $18 million assembly avoids the need for what would otherwise be $50 million worth of mechanical treatment facilities on site. Built over Millennium Park Garage, Chicago's Millennium Park is considered one of the largest intensive green roofs. Other well-known American examples include Chicago's City Hall and the former Gap headquarters, now the headquarters of YouTube, in San Bruno, CA. The U.S. military has two major green roofs in the Washington, D.C. area: the U.S. Coast Guard headquarters () and the Pentagon ().
An early green-roofed building (completed in 1971) is the Weyerhaeuser Corporate Headquarters building in Federal Way, Washington. Its 5-story office roof system comprises a series of stepped terraces covered in greenery. From the air, the building blends into the landscape.
The largest green roof in New York City was installed in midtown Manhattan atop the United States Postal Service's Morgan Processing and Distribution Center. Construction on the project began in September 2008, and was finished and dedicated in July 2009. Covered in native vegetation and having an expected lifetime of fifty years, this green roof will not only save the USPS approximately $30,000 a year in heating and cooling costs, but will also significantly reduce the amount of storm water contaminants entering the municipal water system.
In 2001, atop Chicago City Hall, the roof gardens were completed, serving as a pilot project to assess the impact green roofs would have on the heat island effect in urban areas, rainwater runoff, and the effectiveness of differing types of green roofs and plant species for Chicago's climate. Although the rooftop is not normally accessible to the public, it is visually accessible from 33 taller buildings in the area. The garden consists of 20,000 plants of more than 150 species, including shrubs, vines and two trees. The green roof design team was headed by the Chicago area firm Conservation Design Forum in conjunction with noted "green" architect William McDonough. With an abundance of flowering plants on the rooftop, beekeepers harvest approximately of honey each year from hives installed on the rooftop. Tours of the green roof are by special arrangement only. Chicago City Hall Green Roof won merit design award of the American Society of Landscape Architecture (ASLA) competition in 2002.
The of outdoor space on the seventh floor of Zeckendorf Towers, formerly an undistinguished rooftop filled with potted plants, make up the largest residential green roof in New York. The roof was transformed in 2010 as part of Mayor Michael Bloomberg's NYC Green Infrastructure campaign, and supposedly serves to capture some of the rain that falls on it rather than letting it run off and contribute to flooding in the adjacent Union Square subway station.
Some cost can also be attributed to maintenance. Extensive green roofs have low maintenance requirements, but they are generally not maintenance-free. German research has quantified the effort needed to remove unwanted seedlings at approximately 6 seconds/m²/year. Maintenance of green roofs often includes fertilization to increase flowering and succulent plant cover. If aesthetics are not an issue, fertilization and maintenance are generally not needed. Extensive green roofs should only be fertilized with controlled-release fertilizers in order to avoid pollution of the storm water; conventional fertilizers should never be used on extensive vegetated roofs. German studies have approximated the nutrient requirement of vegetated roofs at 5 g N/m². It is also important to use a substrate that does not contain too many available nutrients. The FLL guidelines specify the maximum allowable nutrient content of substrates.
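To put the German maintenance figures above in concrete terms, the sketch below applies them to a hypothetical roof; the roof area and the unit conversions are assumptions made purely for illustration.

```python
# Rough illustration of the maintenance figures quoted above, applied to a
# hypothetical 500 m^2 extensive green roof. The 6 s/m^2/year weeding figure
# and the ~5 g N/m^2 nutrient requirement come from the text; the roof area
# is an assumed example value.

ROOF_AREA_M2 = 500               # hypothetical roof size
WEEDING_SECONDS_PER_M2_YEAR = 6  # removal of unwanted seedlings (German research)
NITROGEN_G_PER_M2_YEAR = 5       # approximate annual nutrient requirement

weeding_hours_per_year = ROOF_AREA_M2 * WEEDING_SECONDS_PER_M2_YEAR / 3600
nitrogen_kg_per_year = ROOF_AREA_M2 * NITROGEN_G_PER_M2_YEAR / 1000

print(f"Weeding effort: ~{weeding_hours_per_year:.1f} hours/year")          # ~0.8 h
print(f"Controlled-release nitrogen: ~{nitrogen_kg_per_year:.1f} kg/year")  # ~2.5 kg
```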
One of the oldest American green roofs in existence is atop the Rockefeller Center in Manhattan, built in 1936. This roof was primarily an aesthetic undertaking for the enjoyment of the center's workers, and remains to this day, having been refurbished in 1986.
With the passage of Denver's Green Roof Initiative in the November 2017 elections, effective January 2018, new buildings or existing buildings meeting the initiative's thresholds are required to have rooftop gardens, optionally combined with solar photovoltaic panels.
Seattle is another city in which green roofs have been used on an increasing basis. This phenomenon is in large part due to efforts on behalf of the city to encourage green roofs through new and improved building codes. In 2006, the Seattle Green Factor program was approved. The program rewards the incorporation of landscaping in new building developments in an attempt to reduce stormwater runoff and associated pollution, stabilize temperatures, and create habitats for birds and insects. These changes were expanded in 2009 to recognize the specific stormwater benefits of green roofs, and to reward developers who used them accordingly.
By 2010, Seattle was home to approximately of green roofs. Despite initial hiccups in the city stemming from weeds, lack of irrigation during dry summer months, and a need for continuous replanting, the project has continued to succeed as understanding around the best soils and plants and the need for monitoring and upkeep has increased. A 2010 survey of the green roofs in Seattle acknowledged that while the initial costs of implementing a green roof may deter businesses or homeowners, it is likely that green roofs actually preserve the roofing material and cut costs in the long run. In light of the success in Seattle, other cities such as Portland, Chicago, and Washington, D.C. have all made efforts to develop their own Green Factor programs.
The Seattle City Hall has led the way by implementing a green roof project that has involved the planting of more than 22,000 pots of sedum, fescue, and grass. The City hopes that the project can reduce the annual stormwater runoff for the building by 50 to 75 percent, which will in turn reduce damage to local watershed areas that provide habitats for native species such as salmon. The historic Union Stables building has used green roofs alongside other efficiency-based changes to reduce stormwater runoff and decrease the building's energy use by 70 percent. The Park Place building in Seattle's downtown provides a leading example of the use of landscaping to recapture rainwater, with the hope of cutting back spending on utilities.
Washington, D.C.
Washington, D.C., started implementing incentives for green roofs within the city at the beginning of the 21st century. In 2003, the Chesapeake Bay Foundation introduced a "green roof demonstration project" in combination with the D.C. Water and Sewer Authority. This program issued grants to several pilot green roofs to assist with the cost of construction for the building owner. From this project the city began to understand how beneficial these roofs could be, and more programs were implemented over the years. In 2007, the RiverSmart Rewards program introduced a RiverSmart Rooftops Green Roof Rebate Program that provided a $3 per square foot subsidy to potential green roof projects within the District; this assisted 12 projects that year. A year later, the subsidy was raised to $5, incentivizing even more developers to use this program in their designs. There is also the possibility through the RiverSmart Rewards program for "residents and property owners to receive a significant discount on their water utility fees" if they install approved stormwater management features.
In 2016, a rebate of $10–$15 per square foot was introduced, "promoting the voluntary installation of green roofs for the purpose of reducing stormwater runoff and pollutants". Rebates of $10 per square foot were set for installations within the combined sewer system, and $15 per square foot for installations within the municipal storm sewer system. A notable aspect of this incentive is the lack of restriction on the type of building that qualifies: there is no size cap on properties, whether residential, commercial or institutional. In 2016 there was a total of 2.3 million square feet of green roofing within the District; as of 2020, there is 5.1 million square feet.
See also
Arcology
Blue roof
Covering (construction)
Ecovillage
Energy-efficient landscaping
Hanging Gardens of Babylon
Low-impact development
Rainwater harvesting
Ralph Hancock, designer, The Rockefeller Center Roof Gardens
Roof garden
Sod roof, traditional roof in Scandinavia
Sustainable city
Subtropical climate vegetated roof
References
Further reading
Snodgrass, E. and McIntyre, L., The Green Roof Manual: A Professional Guide to Design, Installation, and Maintenance Publisher: Timber Press (2010).
Dunnett, N. and Kingsbury, N., Planting Green Roofs and Living Walls Publisher: Timber Press (updated 2008).
Miller-Klein, Jan. Gardening for Butterflies, Bees and other beneficial insects has large section on green and brown roofs and brownfields, including how to make your own, with contributions from several UK practitioners.
Hilary, David. Creating My Green Roof: A guide to planning, installing, and maintaining a beautiful, energy-saving green roof. (2015).
Roland Appl, Reimer Meier, Wolfgang Ansel: Green Roofs – Bringing Nature Back to Town. (Proceedings) Publisher: International Green Roof Association IGRA, (2009)
Diversity of Fauna on Green Roofs
External links
ASLA Design Award 2009: CALIFORNIA ACADEMY OF SCIENCE
Roof gardens
Roofs
Sustainable architecture
Sustainable gardening
Sustainable building
Garden features
Types of garden
Landscape architecture
Urban agriculture
Hydrology and urban planning
Environmental engineering
Roofing materials
Climate change adaptation
| Green roof | [
"Chemistry",
"Technology",
"Engineering",
"Environmental_science"
] | 9,403 | [
"Structural engineering",
"Sustainable building",
"Hydrology",
"Sustainable architecture",
"Building engineering",
"Chemical engineering",
"Landscape architecture",
"Structural system",
"Construction",
"Civil engineering",
"Hydrology and urban planning",
"Environmental engineering",
"Environ... |
192,198 | https://en.wikipedia.org/wiki/Polio%20vaccine | Polio vaccines are vaccines used to prevent poliomyelitis (polio). Two types are used: an inactivated poliovirus given by injection (IPV) and a weakened poliovirus given by mouth (OPV). The World Health Organization (WHO) recommends all children be fully vaccinated against polio. The two vaccines have eliminated polio from most of the world, and reduced the number of cases reported each year from an estimated 350,000 in 1988 to 33 in 2018.
The inactivated polio vaccines are very safe. Mild redness or pain may occur at the site of injection. Oral polio vaccines cause about three cases of vaccine-associated paralytic poliomyelitis per million doses given. This compares with 5,000 cases per million who are paralysed following a polio infection. Both types of vaccine are generally safe to give during pregnancy and in those who have HIV/AIDS but are otherwise well. However, the emergence of circulating vaccine-derived poliovirus (cVDPV), a form of the vaccine virus that has reverted to causing poliomyelitis, has led to the development of novel oral polio vaccine type 2 (nOPV2) which aims to make the vaccine safer and thus stop further outbreaks of cVDPV.
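As a rough illustration of the two rates quoted above (about three VAPP cases per million OPV doses versus about 5,000 paralytic cases per million infections), the sketch below compares expected case counts for a hypothetical cohort; the cohort size and the one-dose simplification are illustrative assumptions, not epidemiological estimates.

```python
# Back-of-the-envelope comparison of the rates quoted above.
# The cohort size is an arbitrary assumption for illustration only.

VAPP_PER_MILLION_DOSES = 3               # vaccine-associated paralytic polio (OPV)
PARALYSIS_PER_MILLION_INFECTIONS = 5000  # paralysis following wild polio infection

cohort = 10_000_000  # hypothetical number of people, one OPV dose each

expected_vapp = cohort / 1_000_000 * VAPP_PER_MILLION_DOSES
expected_paralysis_if_all_infected = cohort / 1_000_000 * PARALYSIS_PER_MILLION_INFECTIONS

print(f"Expected VAPP cases: {expected_vapp:,.0f}")                                  # 30
print(f"Expected paralytic cases if all were infected: "
      f"{expected_paralysis_if_all_infected:,.0f}")                                  # 50,000
print(f"Ratio: ~{PARALYSIS_PER_MILLION_INFECTIONS / VAPP_PER_MILLION_DOSES:.0f}:1")  # ~1667:1
```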
The first successful demonstration of a polio vaccine was by Hilary Koprowski in 1950, with a live attenuated virus which people drank. The vaccine was not approved for use in the United States, but was used successfully elsewhere. The success of an inactivated (killed) polio vaccine, developed by Jonas Salk, was announced in 1955. Another attenuated live oral polio vaccine was developed by Albert Sabin and came into commercial use in 1961.
Polio vaccine is on the World Health Organization's List of Essential Medicines.
Medical uses
Interruption of person-to-person transmission of the virus by vaccination is important in global polio eradication, since no long-term carrier state exists for poliovirus in individuals with normal immune function, polio viruses have no non-primate reservoir in nature, and survival of the virus in the environment for an extended period appears to be remote. There are two types of vaccine: inactivated polio vaccine (IPV) and oral polio vaccine (OPV).
Inactivated
When the IPV (injection) is used, 90% or more of individuals develop protective antibodies to all three serotypes of polio virus after two doses of inactivated polio vaccine (IPV), and at least 99% are immune to poliovirus following three doses. The duration of immunity induced by IPV is not known with certainty, although a complete series is thought to protect for many years. IPV replaced the oral vaccine in many developed countries in the 1990s mainly due to the (small) risk of vaccine-derived polio in the oral vaccine.
Attenuated
Oral polio vaccines were easier to administer than IPV, as they eliminated the need for sterile syringes and therefore were more suitable for mass vaccination campaigns. OPV also provided longer-lasting immunity than the Salk vaccine, as it provides both humoral immunity and cell-mediated immunity.
One dose of trivalent OPV produces immunity to all three poliovirus serotypes in roughly 50% of recipients. Three doses of live-attenuated OPV produce protective antibodies to all three poliovirus types in more than 95% of recipients. As with other live-virus vaccines, immunity initiated by OPV is probably lifelong. OPV produces excellent immunity in the intestine, the primary site of wild poliovirus entry, which helps prevent infection with wild virus in areas where the virus is endemic. The oral administration does not require special medical equipment or extensive training. Attenuated poliovirus derived from the oral polio vaccine is excreted for a few days after vaccination, potentially infecting, and thus indirectly immunizing, unvaccinated individuals, thereby amplifying the effects of the doses delivered. Taken together, these advantages have made it the favored vaccine of many countries, and it has long been preferred by the global eradication initiative.
The primary disadvantage of OPV derives from its inherent nature. As an attenuated but active virus, it can induce vaccine-associated paralytic poliomyelitis (VAPP) in approximately one individual per 2.7 million doses administered. The live virus can circulate in under-vaccinated populations (termed either variant poliovirus or circulating vaccine-derived poliovirus, cVDPV) and over time can revert to a neurovirulent form causing paralytic polio. This genetic reversal of the pathogen to a virulent form takes a considerable time and does not affect the person who was originally vaccinated. With wild polio cases at record lows, 2017 was the first year in which more cases of cVDPV were recorded than of the wild poliovirus.
Until recent times, a trivalent OPV containing all three virus strains was used, and had nearly eradicated polio infection worldwide. With the complete eradication of wild poliovirus type 2, this was phased out in 2016 and replaced with a bivalent vaccine containing just types 1 and 3, supplemented with monovalent type 2 OPV in regions where cVDPV type 2 was known to circulate. The switch to the bivalent vaccine and the associated missing immunity against type 2 strains, among other factors, led to outbreaks of circulating vaccine-derived poliovirus type 2 (cVDPV2), which increased from 2 cases in 2016 to 1037 cases in 2020.
A novel OPV2 vaccine (nOPV2) which has been genetically modified to reduce the likelihood of disease-causing activating mutations was granted emergency licencing in 2021, and subsequently full licensure in December 2023. This has greater genetic stability than the traditional oral vaccine and is less likely to revert to a virulent form. Genetically stabilised vaccines targeting poliovirus types 1 and 3 are in development, with the intention that these will eventually completely replace the Sabin vaccines.
Schedule
In countries with endemic polio or where the risk of imported cases is high, the WHO recommends OPV vaccine at birth followed by a primary series of three OPV doses and at least one IPV dose starting at 6 weeks of age, with a minimum of 4 weeks between OPV doses. In countries with >90% immunization coverage and low risk of importation, the WHO recommends one or two IPV doses starting at 2 months of age followed by at least two OPV doses, with the doses separated by 4–8 weeks depending on the risk of exposure. In countries with the highest levels of coverage and the lowest risks of importation and transmission, the WHO recommends a primary series of three IPV injections, with a booster dose after an interval of six months or more if the first dose was administered before 2 months of age.
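The three WHO schedule tiers described above can also be summarised as structured data; the sketch below is an illustrative paraphrase, not an official WHO format, and the field names and structure are assumptions made for readability.

```python
# Simplified, illustrative encoding of the WHO polio schedules described above.
# Field names and grouping are assumptions, not an official WHO data format.

WHO_POLIO_SCHEDULES = {
    "endemic_or_high_import_risk": {
        "birth_dose": "OPV",
        "primary_series": ["OPV", "OPV", "OPV", "IPV"],
        "start": "6 weeks of age",
        "min_interval_between_OPV_doses": "4 weeks",
    },
    "coverage_over_90pct_low_import_risk": {
        "primary_series": ["IPV (1-2 doses)", "OPV (>=2 doses)"],
        "start": "2 months of age",
        "dose_interval": "4-8 weeks, depending on exposure risk",
    },
    "highest_coverage_lowest_risk": {
        "primary_series": ["IPV", "IPV", "IPV"],
        "booster": "after >=6 months, if the first dose was given before 2 months of age",
    },
}

for tier, schedule in WHO_POLIO_SCHEDULES.items():
    print(tier, "->", schedule["primary_series"])
```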
Side effects
The inactivated polio vaccines are very safe. Mild redness or pain may occur at the site of injection. They are generally safe to be given to pregnant women and those who have HIV/AIDS but are otherwise well.
Allergic reaction to the vaccine
Inactivated polio vaccine can cause an allergic reaction in a few people since the vaccine contains trace amounts of antibiotics, streptomycin, polymyxin B, and neomycin. It should not be given to anyone who has an allergic reaction to these medicines. Signs and symptoms of an allergic reaction, which usually appear within minutes or a few hours after receiving the injected vaccine, include breathing difficulties, weakness, hoarseness or wheezing, heart rate fluctuations, skin rash, and dizziness.
Vaccine-associated paralytic polio
A potential adverse effect of the Sabin OPV is caused by its known potential to recombine to a form that causes neurological infection and paralysis. The Sabin OPV results in vaccine-associated paralytic poliomyelitis (VAPP) in approximately one individual per 2.7 million doses administered, with symptoms identical to those of wild polio. Due to its improved genetic stability, the novel OPV (nOPV) has a reduced risk of this occurring.
Contamination concerns
In 1960, the rhesus monkey kidney cells used to prepare the poliovirus vaccines were determined to be infected with the simian virus-40 (SV40), which was also discovered in 1960 and is a naturally occurring virus that infects monkeys. In 1961, SV40 was found to cause tumors in rodents. More recently, the virus was found in certain forms of cancer in humans, for instance brain and bone tumors, pleural and peritoneal mesothelioma, and some types of non-Hodgkin lymphoma. However, SV40 has not been determined to cause these cancers.
SV40 was found to be present in stocks of the injected form of the IPV in use between 1955 and 1963. It is not found in the OPV form. Over 98 million Americans received one or more doses of polio vaccine between 1955 and 1963 when a proportion of vaccine was contaminated with SV40; an estimated 10–30 million Americans may have received a dose of vaccine contaminated with SV40. Later analysis suggested that vaccines produced by the former Soviet bloc countries until 1980, and used in the USSR, China, Japan, and several African countries, may have been contaminated, meaning hundreds of millions more may have been exposed to SV40.
In 1998, the National Cancer Institute undertook a large study, using cancer case information from the institute's SEER database. The published findings from the study revealed no increased incidence of cancer in persons who may have received vaccine containing SV40. Another large study in Sweden examined cancer rates of 700,000 individuals who had received potentially contaminated polio vaccine as late as 1957; the study again revealed no increased cancer incidence between persons who received polio vaccines containing SV40 and those who did not. The question of whether SV40 causes cancer in humans remains controversial, however, and the development of improved assays for detection of SV40 in human tissues will be needed to resolve the controversy.
During the race to develop an oral polio vaccine, several large-scale human trials were undertaken. By 1958, the National Institutes of Health had determined that OPV produced using the Sabin strains was the safest. Between 1957 and 1960, however, Hilary Koprowski continued to administer his vaccine around the world. In Africa, the vaccines were administered to roughly one million people in the Belgian territories (now the Democratic Republic of the Congo, Rwanda, and Burundi). The results of these human trials have been controversial, and unfounded accusations in the 1990s arose that the vaccine had created the conditions necessary for transmission of simian immunodeficiency virus from chimpanzees to humans, causing HIV/AIDS. These hypotheses, however, have been conclusively refuted. By 2004, cases of poliomyelitis in Africa had been reduced to just a small number of isolated regions in the western portion of the continent, with sporadic cases elsewhere. Recent local opposition to vaccination campaigns has evolved due to a lack of adequate information, often relating to fears that the vaccine might induce sterility. The disease has since resurged in Nigeria and in several other African nations, which epidemiologists believe is due to refusals by certain local populations to allow their children to receive the polio vaccine.
Manufacture
Inactivated
The Salk vaccine, IPV, is based on three wild, virulent reference strains, Mahoney (type 1 poliovirus), MEF-1 (type 2 poliovirus), and Saukett (type 3 poliovirus), grown in a type of monkey kidney tissue culture (Vero cell line), which are then inactivated with formalin. The injected Salk vaccine confers IgG-mediated immunity in the bloodstream, which prevents polio infection from progressing to viremia and protects the motor neurons, thus eliminating the risk of bulbar polio and post-polio syndrome.
In the United States, the vaccine is administered along with the tetanus, diphtheria, and acellular pertussis vaccines (DTaP) and a pediatric dose of hepatitis B vaccine. In the UK, IPV is combined with tetanus, diphtheria, pertussis, and Haemophilus influenzae type b vaccines.
Attenuated
OPV is an attenuated vaccine, produced by the passage of the virus through nonhuman cells at a subphysiological temperature, which produces spontaneous mutations in the viral genome. Oral polio vaccines were developed by several groups, one of which was led by Albert Sabin. Other groups, led by Hilary Koprowski and H.R. Cox, developed their own attenuated vaccine strains. In 1958, the National Institutes of Health created a special committee on live polio vaccines. The various vaccines were carefully evaluated for their ability to induce immunity to polio while retaining a low incidence of neuropathogenicity in monkeys. Large-scale clinical trials performed in the Soviet Union in the late 1950s to early 1960s by Mikhail Chumakov and his colleagues demonstrated the safety and high efficacy of the vaccine. Based on these results, the Sabin strains were chosen for worldwide distribution.
Fifty-seven nucleotide substitutions distinguish the attenuated Sabin 1 strain from its virulent parent (the Mahoney serotype), two nucleotide substitutions attenuate the Sabin 2 strain, and 10 substitutions are involved in attenuating the Sabin 3 strain. The primary attenuating factor common to all three Sabin vaccines is a mutation located in the virus's internal ribosome entry site, which alters stem-loop structures and reduces the ability of poliovirus to translate its RNA template within the host cell. The attenuated poliovirus in the Sabin vaccine replicates very efficiently in the gut, the primary site of infection and replication, but is unable to replicate efficiently within nervous system tissue.
In 1961, type 1 and 2 monovalent oral poliovirus vaccine (MOPV) was licensed, and in 1962, type 3 MOPV was licensed. In 1963, trivalent OPV (TOPV) was licensed, and became the vaccine of choice in the United States and most other countries of the world, largely replacing the inactivated polio vaccine. A second wave of mass immunizations led to a further dramatic decline in the number of polio cases. Between 1962 and 1965, about 100 million Americans (roughly 56% of the population at that time) received the Sabin vaccine. The result was a substantial reduction in the number of poliomyelitis cases, even from the much-reduced levels following the introduction of the Salk vaccine.
OPV is usually provided in vials containing 10–20 doses of vaccine. A single dose of oral polio vaccine (usually two drops) contains 1,000,000 infectious units of Sabin 1 (effective against PV1), 100,000 infectious units of the Sabin 2 strain, and 600,000 infectious units of Sabin 3. The vaccine contains small traces of antibiotics—neomycin and streptomycin—but does not contain preservatives.
History
In a generic sense, vaccination works by priming the immune system with an 'immunogen'. Stimulating immune response, by use of an infectious agent, is known as immunization. The development of immunity to polio efficiently blocks person-to-person transmission of wild poliovirus, thereby protecting both individual vaccine recipients and the wider community.
The development of two polio vaccines led to the first modern mass inoculations. The last cases of paralytic poliomyelitis caused by endemic transmission of wild virus in the United States occurred in 1979, with an outbreak among the Amish in several Midwest states.
1930s
In the 1930s, poliovirus was perceived as especially terrifying, as little was known of how the disease was transmitted or how it could be prevented. This virus was also notable for primarily impacting affluent children, making it a prime target for vaccine development, despite its relatively low mortality and morbidity. Despite this, the community of researchers in the field thus far had largely observed an informal moratorium on any vaccine development as it was perceived to present too high a risk for too little likelihood of success.
This shifted in the early 1930s when American groups took up the challenge: Maurice Brodie led a team from the public health laboratory of the city of New York and John A. Kolmer collaborated with the Research Institute of Cutaneous Medicine in Philadelphia. The rivalry between these two researchers lent itself to a race-like mentality which, combined with a lack of oversight of medical studies, was reflected in the methodology and outcomes of each of these early vaccine development ventures.
Kolmer's live vaccine
Kolmer began his vaccine development project in 1932 and ultimately focused on producing an attenuated or live virus vaccine. Inspired by the success of vaccines for rabies and yellow fever, he hoped to use a similar process to denature the polio virus. In order to go about attenuating his polio vaccine, he repeatedly passed the virus through monkeys. Using methods of production that were later described as "hair-raisingly amateurish, the therapeutic equivalent of bath-tub gin," Kolmer ground the spinal cords of his infected monkeys and soaked them in a salt solution. He then filtered the solution through mesh, treated it with ricinolate, and refrigerated the product for 14 days to ultimately create what would later be prominently critiqued as a "veritable witches brew".
In keeping with the norms of the time, Kolmer completed a relatively small animal trial with 42 monkeys before proceeding to self experimentation in 1934. He tested his vaccine upon himself, his two children, and his assistant. He gave his vaccine to just 23 more children before declaring it safe and sending it out to doctors and health departments for a larger test of efficacy. By April 1935, he was able to report having tested the vaccine on 100 children without ill effect. Kolmer's first formal presentation of results would not come about until November 1935 where he presented the results of 446 children and adults he had vaccinated with his attenuated vaccine. He also reported that together the Research Institute of Cutaneous Medicine and the Merrell Company of Cincinnati (the manufacturer who held the patent for his ricinoleating process) had distributed 12,000 doses of vaccine to some 700 physicians across the United States and Canada. Kolmer did not describe any monitoring of this experimental vaccination program nor did he provide these physicians with instructions in how to administer the vaccine or how to report side effects. Kolmer dedicated the bulk of his publications thereafter to explaining what he believed to be the cause of the 10+ reported cases of paralytic polio following vaccination, in many cases in towns where no polio outbreak had occurred. Six of these cases had been fatal. Kolmer had no control group but asserted that many more children would have gotten sick.
Brodie's inactivated vaccine
At nearly the same time as Kolmer's project, Maurice Brodie had joined immunologist William H. Park at the New York City Health Department where they worked together on poliovirus. With the aid of grant funding from the President's Birthday Ball Commission (a predecessor to what would become the March of Dimes), Brodie was able to pursue the development of an inactivated or "killed virus" vaccine. Brodie's process also began by grinding the spinal cords of infectious monkeys and then treating the cords with various germicides, ultimately finding a solution of formaldehyde to be the most effective. By 1 June 1934, Brodie was able to publish his first scholarly article describing his successful induction of immunity in three monkeys with inactivated poliovirus. Through continued study on an additional 26 monkeys, Brodie ultimately concluded that administration of live virus vaccine tended to result in humoral immunity while administration of killed virus vaccine tended to result in tissue immunity.
Soon after, following a similar protocol to Kolmer, Brodie proceeded with self-experimentation upon himself and his co-workers at the NYC Health Department laboratory. Brodie's progress was eagerly covered by the popular press as the public hoped for a successful vaccine to become available. Such reporting did not make mention of the 12 children in a New York City asylum who were subjected to early safety trials. As none of the subjects experienced ill effects, Park, described by contemporaries as "never one to let grass grow under his feet," declared the vaccine safe. When a severe polio outbreak overwhelmed Kern County, California, it became the first trial site for the new vaccine on very short notice. Between November 1934 and May 1935, over 1,500 doses of the vaccine were administered in Kern County. While initial results were very promising, insufficient staffing and poor protocol design left Brodie open to criticism when he published the California results in August 1935. Through private physicians, Brodie also conducted a broader field study, including 9,000 children who received the vaccine and 4,500 age- and location-matched controls who did not receive a vaccine. Again, the results were promising. Of those who received the vaccine, only a few went on to develop polio. Most had been exposed before vaccination and none had received the full series of vaccine doses being studied. Additionally, a polio epidemic in Raleigh, North Carolina, provided an opportunity for the U.S. Public Health Service to conduct a highly structured trial of the Brodie vaccine using funding from the Birthday Ball Commission.
Academic reception
While their work was ongoing, the larger community of bacteriologists began to raise concerns regarding the safety and efficacy of the new poliovirus vaccines. At this time there was very little oversight of medical studies and the ethical treatment of study participants largely relied upon moral pressure from peer academic scientists. Brodie's inactivated vaccines faced scrutiny from many who felt killed virus vaccines could not be efficacious. While researchers were able to replicate the tissue immunity he had produced in his animal trials, the prevailing wisdom was that humoral immunity was essential for an efficacious vaccine. Kolmer directly questioned the killed virus approach in scholarly journals. Kolmer's studies however had raised even more concern with increasing reports of children becoming paralysed following vaccination with his live virus vaccine and notably, with paralysis beginning at the arm rather than the foot in many cases. Both Kolmer and Brodie were called to present their research at the Annual Meeting of the American Public Health Association in Milwaukee WI in October 1935. Additionally, Thomas M. Rivers was asked to discuss each of the presented papers as a prominent critic of the vaccine development effort. This resulted in the APHA arranging a Symposium on Poliomyelitis to be delivered at the Annual Meeting of their Southern Branch the following month. It was during the discussion at this meeting that James Leake of the U.S. Public Health Service stood to immediately present clinical evidence that the Kolmer vaccine had caused several deaths and then allegedly accused Kolmer of being a murderer. As Rivers recalled in his oral history, "All hell broke loose, and it seemed as if everybody was trying to talk at the same time...Jimmy Leake used the strongest language that I have ever heard used at a scientific meeting." In response to the attacks from all sides, Brodie was reported to have stood up and stated, "It looks as though, according to Dr. Rivers, my vaccine is no good, and, according to Dr. Leake, Dr Kolmer's is dangerous." Kolmer simply responded by stating, "Gentlemen, this is one time I wish the floor would open up and swallow me." Ultimately, Kolmer's live vaccine was undoubtedly shown to be dangerous and had already been withdrawn in September 1935 before the Milwaukee meeting. While the consensus of the symposium was largely skeptical of the efficacy of Brodie's vaccine, its safety was not in question and the recommendation was for a much larger well-controlled trial. However, when three children became ill with paralytic polio following a dose of the vaccine, the directors of the Warm Springs Foundation in Georgia (acting as the primary funders for the project) requested it be withdrawn in December 1935. Following its withdrawal, the previously observed moratorium on human poliomyelitis vaccine development resumed and there would not be another attempt for nearly 20 years.
While Brodie had arguably made the most progress in the pursuit of a poliovirus vaccine, he suffered the most significant career repercussions due to his status as a less widely known researcher. Modern researchers recognize that Brodie may well have developed an effective polio vaccine, however, the basic science and technology of the time were insufficient to understand and utilize this breakthrough. Brodie's work using formalin-inactivated virus would later become the basis for the Salk vaccine, but he would not live to see this success. Brodie was fired from his position within three months of the symposium's publication. While he was able to find another laboratory position, he died of a heart attack only three years later at age 36. By contrast, Park, who was believed in the community to be reaching senility at this point in his older age, was able to retire from his position with honors before he died in 1939. Kolmer, already an established and well-respected researcher, returned to Temple University as a professor of medicine. Kolmer had a very productive career, receiving multiple awards, and publishing countless papers, articles, and textbooks up until his retirement in 1957.
1948
A breakthrough came in 1948 when a research group headed by John Enders at the Children's Hospital Boston successfully cultivated the poliovirus in human tissue in the laboratory. This group had recently successfully grown mumps in cell culture. In March 1948, Thomas H. Weller was attempting to grow varicella virus in embryonic lung tissue. He had inoculated the planned number of tubes when he noticed that there were a few unused tubes. He retrieved a sample of mouse brain infected with poliovirus and added it to the remaining test tubes, on the off chance that the virus might grow. The varicella cultures failed to grow, but the polio cultures were successful. This development greatly facilitated vaccine research and ultimately allowed for the development of vaccines against polio. Enders and his colleagues, Thomas H. Weller and Frederick C. Robbins, were recognized in 1954 for their efforts with a Nobel Prize in Physiology or Medicine. Other important advances that led to the development of polio vaccines were: the identification of three poliovirus serotypes (Poliovirus type 1 – PV1, or Mahoney; PV2, Lansing; and PV3, Leon); the finding that before paralysis, the virus must be present in the blood; and the demonstration that administration of antibodies in the form of gamma globulin protects against paralytic polio.
1950–1955
During the early 1950s, polio rates in the U.S. were above 25,000 annually; in 1952 and 1953, the U.S. experienced an outbreak of 58,000 and 35,000 polio cases, respectively, up from a typical number of some 20,000 a year, with deaths in those years numbering 3,200 and 1,400. Amid this U.S. polio epidemic, millions of dollars were invested in finding and marketing a polio vaccine by commercial interests, including Lederle Laboratories in New York under the direction of H. R. Cox. Also working at Lederle was Polish-born virologist and immunologist Hilary Koprowski of the Wistar Institute in Philadelphia, who tested the first successful polio vaccine, in 1950. His vaccine, however, being a live attenuated virus taken orally, was still in the research stage and would not be ready for use until five years after Jonas Salk's polio vaccine (a dead-virus injectable vaccine) had reached the market. Koprowski's attenuated vaccine was prepared by successive passages through the brains of Swiss albino mice. By the seventh passage, the vaccine strains could no longer infect nervous tissue or cause paralysis. After one to three further passages on rats, the vaccine was deemed safe for human use. On 27 February 1950, Koprowski's live, attenuated vaccine was tested for the first time on an 8-year-old boy living at Letchworth Village, an institution for physically and mentally disabled people located in New York. After the child had no side effects, Koprowski enlarged his experiment to include 19 other children.
Jonas Salk
The first effective polio vaccine was developed in 1952 by Jonas Salk and a team at the University of Pittsburgh that included Julius Youngner, Byron Bennett, L. James Lewis, and Lorraine Friedman, which required years of subsequent testing. Salk went on CBS radio to report a successful test on a small group of adults and children on 26 March 1953; two days later, the results were published in JAMA. Leone N. Farrell invented a key laboratory technique that enabled the mass production of the vaccine by a team she led in Toronto. Beginning 23 February 1954, the vaccine was tested at Arsenal Elementary School and the Watson Home for Children in Pittsburgh, Pennsylvania.
Salk's vaccine was then used in a test called the Francis Field Trial, led by Thomas Francis, the largest medical experiment in history at that time. The test began with about 4,000 children at Franklin Sherman Elementary School in McLean, Virginia, and eventually involved 1.8 million children, in 44 states from Maine to California. By the conclusion of the study, roughly 440,000 received one or more injections of the vaccine, about 210,000 children received a placebo, consisting of harmless culture media, and 1.2 million children received no vaccination and served as a control group, who would then be observed to see if any contracted polio.
The results of the field trial were announced on 12 April 1955 (the tenth anniversary of the death of President Franklin D. Roosevelt, whose paralytic illness was generally believed to have been caused by polio). The Salk vaccine had been 60–70% effective against PV1 (poliovirus type 1), over 90% effective against PV2 and PV3, and 94% effective against the development of bulbar polio. Soon after Salk's vaccine was licensed in 1955, children's vaccination campaigns were launched. In the U.S., following a mass immunization campaign promoted by the March of Dimes, the annual number of polio cases fell from 35,000 in 1953 to 5,600 by 1957. By 1961 only 161 cases were recorded in the United States.
A week before the announcement of the Francis Field Trial results in April 1955, Pierre Lépine at the Pasteur Institute in Paris had also announced an effective polio vaccine.
Safety incidents
In April 1955, soon after mass polio vaccination began in the US, the Surgeon General began to receive reports of patients who contracted paralytic polio about a week after being vaccinated with the Salk polio vaccine from the Cutter pharmaceutical company, with the paralysis starting in the limb the vaccine was injected into. The Cutter vaccine had been used in vaccinating 409,000 children in the western and midwestern United States.
Later investigations showed that the Cutter vaccine had caused 260 cases of polio, killing 11.
In response, the Surgeon General pulled all polio vaccines made by Cutter Laboratories from the market, but not before 260 cases of paralytic illness had occurred. Eli Lilly, Parke-Davis, Pitman-Moore, and Wyeth polio vaccines were also reported to have paralyzed numerous children. It was soon discovered that some lots of Salk polio vaccine made by Cutter, Wyeth, and the other labs had not been properly inactivated, allowing live poliovirus into more than 100,000 doses of vaccine. In May 1955, the National Institutes of Health and Public Health Services established a Technical Committee on Poliomyelitis Vaccine to test and review all polio vaccine lots and advise the Public Health Service as to which lots should be released for public use. These incidents reduced public confidence in the polio vaccine, leading to a drop in vaccination rates.
1961
At the same time that Salk was testing his vaccine, both Albert Sabin and Hilary Koprowski continued working on developing a vaccine using live virus. During a meeting in Stockholm to discuss polio vaccines in November 1955, Sabin presented results obtained on a group of 80 volunteers, while Koprowski read a paper detailing the findings of a trial enrolling 150 people. Sabin and Koprowski both eventually succeeded in developing vaccines. Because of the commitment to the Salk vaccine in America, Sabin and Koprowski both did their testing outside the United States, Sabin in Mexico and the Soviet Union, Koprowski in the Congo and Poland. In 1957, Sabin developed a trivalent vaccine containing attenuated strains of all three types of poliovirus. In 1959, ten million children in the Soviet Union received the Sabin oral vaccine. For this work, Sabin was given the medal of the Order of Friendship of Peoples, described as the Soviet Union's highest civilian honor. Sabin's oral vaccine using live virus came into commercial use in 1961.
Once Sabin's oral vaccine became widely available, it supplanted Salk's injected vaccine, which had been tarnished in the public's opinion by the Cutter incident of 1955, in which Salk vaccines improperly prepared by one company resulted in several children dying or becoming paralyzed.
1987
An enhanced-potency IPV was licensed in the United States in November 1987, and is currently the vaccine of choice there. The first dose of the polio vaccine is given shortly after birth, usually between 1 and 2 months of age, and a second dose is given at 4 months of age. The timing of the third dose depends on the vaccine formulation but should be given between 6 and 18 months of age. A booster vaccination is given at 4 to 6 years of age, for a total of four doses at or before school entry. In some countries, a fifth vaccination is given during adolescence. Routine vaccination of adults (18 years of age and older) in developed countries is neither necessary nor recommended because most adults are already immune and have a very small risk of exposure to wild poliovirus in their home countries. In 2002, a pentavalent (five-component) combination vaccine (called Pediarix) containing IPV was approved for use in the United States.
1988
A global effort to eradicate polio, led by the World Health Organization (WHO), UNICEF, and the Rotary Foundation, began in 1988, and has relied largely on the oral polio vaccine developed by Albert Sabin and Mikhail Chumakov (Sabin-Chumakov vaccine).
After 1990
Polio was eliminated in the Americas by 1994. The disease was officially eliminated in 36 Western Pacific countries, including China and Australia, in 2000. Europe was declared polio-free in 2002. Since January 2011, no cases of the disease have been reported in India, hence in February 2012, the country was taken off the WHO list of polio-endemic countries. In March 2014, India was declared a polio-free country.
Although poliovirus transmission has been interrupted in much of the world, transmission of wild poliovirus does continue and creates an ongoing risk for the importation of wild poliovirus into previously polio-free regions. If importations of poliovirus occur, outbreaks of poliomyelitis may develop, especially in areas with low vaccination coverage and poor sanitation. As a result, high levels of vaccination coverage must be maintained. In November 2013, the WHO announced a polio outbreak in Syria. In response, the Armenian government put out a notice asking Syrian Armenians under age 15 to get the polio vaccine. As of 2014, polio virus had spread to 10 countries, mainly in Africa, Asia, and the Middle East, with Pakistan, Syria, and Cameroon advising vaccinations to outbound travellers.
Polio vaccination programs have been resisted by some people in Pakistan, Afghanistan, and Nigeria - the three countries as of 2017 with remaining polio cases. Almost all Muslim religious and political leaders have endorsed the vaccine, but a fringe minority believes that the vaccines are secretly being used for the sterilisation of Muslims. The fact that the CIA organized a fake vaccination program in 2011 to help find Osama bin Laden is an additional cause of distrust. In 2015, the WHO announced a deal with the Taliban to encourage them to distribute the vaccine in areas they control. However, the Pakistani Taliban was not supportive. On 11 September 2016, two unidentified gunmen associated with the Pakistani Taliban, Jamaat-ul-Ahrar, shot Zakaullah Khan, a doctor who was administering polio vaccines in Pakistan. The leader of the Jamaat-ul-Ahrar claimed responsibility for the shooting and stated that the group would continue this type of attack. Such resistance to and skepticism of vaccinations has consequently slowed down the polio eradication process within the two remaining endemic countries.
Travel requirements
Travellers who wish to enter or leave certain countries must be vaccinated against polio, usually at most 12 months and at least 4 weeks before crossing the border, and be able to present a vaccination record/certificate at the border checks. Most requirements apply only to travel to or from so-called 'polio-endemic', 'polio-affected', 'polio-exporting', 'polio-transmission', or 'high-risk' countries. As of August 2020, Afghanistan and Pakistan are the only polio-endemic countries in the world (where wild polio has not yet been eradicated). Several countries have additional precautionary polio vaccination travel requirements, for example to and from 'key at-risk countries', which as of December 2020 include China, Indonesia, Mozambique, Myanmar, and Papua New Guinea.
Society and culture
Cost
The Global Alliance for Vaccines and Immunization supplies the inactivated vaccine to developing countries for as little as (about ) per dose in 10-dose vials.
Misconceptions
A misconception has been present in Pakistan that the polio vaccine contains haram ingredients and could cause impotence and infertility in male children, leading some parents not to have their children vaccinated. This belief is most common in the Khyber Pakhtunkhwa province and the FATA region. Attacks on polio vaccination teams have also occurred, thereby hampering international efforts to eradicate polio in Pakistan and globally.
References
Further reading
External links
History of Polio at the History of Vaccines website, a project of the College of Physicians of Philadelphia
PBS.org – 'People and Discoveries: Salk Produces Polio Vaccine 1952', Public Broadcasting Service (PBS)
Polio
1952 in biology
1955 introductions
American inventions
Inactivated vaccines
Live vaccines
Vaccines
World Health Organization essential medicines (vaccines)
| Polio vaccine | [
"Biology"
] | 8,075 | [
"Vaccination",
"Vaccines"
] |
192,266 | https://en.wikipedia.org/wiki/Trace%20class | In mathematics, specifically functional analysis, a trace-class operator is a linear operator for which a trace may be defined, such that the trace is a finite number independent of the choice of basis used to compute the trace. This trace of trace-class operators generalizes the trace of matrices studied in linear algebra. All trace-class operators are compact operators.
In quantum mechanics, quantum states are described by density matrices, which are certain trace class operators.
Trace-class operators are essentially the same as nuclear operators, though many authors reserve the term "trace-class operator" for the special case of nuclear operators on Hilbert spaces and use the term "nuclear operator" in more general topological vector spaces (such as Banach spaces).
Note that the trace operator studied in partial differential equations is an unrelated concept.
Definition
Let be a separable Hilbert space, an orthonormal basis and a positive bounded linear operator on . The trace of is denoted by and defined as
independent of the choice of orthonormal basis. A (not necessarily positive) bounded linear operator is called trace class if and only if
where denotes the positive-semidefinite Hermitian square root.
The trace-norm of a trace class operator is defined as
One can show that the trace-norm is a norm on the space of all trace class operators and that , with the trace-norm, becomes a Banach space.
When is finite-dimensional, every (positive) operator is trace class and this definition of trace of coincides with the definition of the trace of a matrix. If is complex, then is always self-adjoint (i.e. ) though the converse is not necessarily true.
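In the usual notation, with H a separable Hilbert space, {e_k} an orthonormal basis, A a positive bounded operator, and |T| the positive square root of T*T (the choice of symbols here is ours), the definitions above take the standard form:

```latex
% Standard form of the trace, the trace-class condition and the trace norm.
\[
  \operatorname{Tr}(A) \;=\; \sum_{k} \langle e_k, A e_k \rangle \;\in\; [0,\infty] ,
\]
\[
  T \text{ is trace class} \quad\Longleftrightarrow\quad
  \|T\|_1 := \operatorname{Tr}\!\big(|T|\big) \;<\; \infty ,
  \qquad |T| := \sqrt{T^{*}T} .
\]
```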
Equivalent formulations
Given a bounded linear operator , each of the following statements is equivalent to being in the trace class; the same conditions are written out in symbols after this list:
is finite for every orthonormal basis of .
is a nuclear operator
There exist two orthogonal sequences and in and positive real numbers in such that and
where are the singular values of (or, equivalently, the eigenvalues of ), with each value repeated as often as its multiplicity.
is a compact operator with
If is trace class then
is an integral operator.
is equal to the composition of two Hilbert-Schmidt operators.
is a Hilbert-Schmidt operator.
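In the usual notation (our assumption: s_n(T) for the singular values of T, α_i positive reals, and (u_i), (v_i) orthonormal sequences), the main characterisations listed above read:

```latex
\[
  \sum_{k} \big\langle e_k, |T|\, e_k \big\rangle \;<\; \infty
  \quad\text{for every (equivalently, some) orthonormal basis } \{e_k\} ,
\]
\[
  T \text{ is compact and } \sum_{n \ge 1} s_n(T) \;<\; \infty ,
\]
\[
  T x \;=\; \sum_{i} \alpha_i \,\langle x, v_i \rangle\, u_i
  \quad\text{with } \sum_{i} \alpha_i \;<\; \infty .
\]
```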
Examples
Spectral theorem
Let be a bounded self-adjoint operator on a Hilbert space. Then it is trace class if and only if it has a pure point spectrum with eigenvalues that are absolutely summable, as in the condition below.
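Written in standard notation (λ_n for the eigenvalues of the self-adjoint operator B; the symbols are our choice), the condition is:

```latex
\[
  B \text{ is trace class}
  \quad\Longleftrightarrow\quad
  \sum_{n \ge 1} \big| \lambda_n \big| \;<\; \infty .
\]
```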
Mercer's theorem
Mercer's theorem provides another example of a trace class operator. That is, suppose is a continuous symmetric positive-definite kernel on , defined as
then the associated Hilbert–Schmidt integral operator is trace class, and its trace can be computed from the diagonal of the kernel, as in the formula below.
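Assuming a continuous, symmetric, positive-definite kernel K on [a,b] × [a,b] with non-negative eigenvalues λ_j (the interval and the symbols are our assumptions), the standard statement is:

```latex
\[
  (T_K f)(x) \;=\; \int_a^b K(x,y)\, f(y)\, \mathrm{d}y ,
\]
\[
  \operatorname{Tr}(T_K) \;=\; \int_a^b K(x,x)\, \mathrm{d}x
  \;=\; \sum_{j \ge 1} \lambda_j \;<\; \infty .
\]
```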
Finite-rank operators
Every finite-rank operator is a trace-class operator. Furthermore, the space of all finite-rank operators is a dense subspace of (when endowed with the trace norm).
Given any define the operator by
Then is a continuous linear operator of rank 1 and is thus trace class;
moreover, for any bounded linear operator A on H (and into H), the trace of the composition with this rank-one operator can be written down explicitly, as in the sketch below.
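One standard way to write this rank-one construction (the notation T_{x,y} and an inner product linear in its first argument are our assumptions):

```latex
\[
  T_{x,y}(z) \;=\; \langle z, y \rangle\, x , \qquad z \in H ,
\]
\[
  \|T_{x,y}\|_1 \;=\; \|x\|\,\|y\| , \qquad
  \operatorname{Tr}(T_{x,y}) \;=\; \langle x, y \rangle ,
\]
\[
  \operatorname{Tr}\!\big(A\, T_{x,y}\big) \;=\; \langle A x, y \rangle
  \quad\text{for every bounded } A : H \to H .
\]
```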
Properties
If is a non-negative self-adjoint operator, then is trace-class if and only if Therefore, a self-adjoint operator is trace-class if and only if its positive part and negative part are both trace-class. (The positive and negative parts of a self-adjoint operator are obtained by the continuous functional calculus.)
The trace is a linear functional over the space of trace-class operators, that is,
The bilinear map is an inner product on the trace class; the corresponding norm is called the Hilbert–Schmidt norm. The completion of the trace-class operators in the Hilbert–Schmidt norm are called the Hilbert–Schmidt operators.
is a positive linear functional such that if is a trace class operator satisfying then
If is trace-class then so is and
If is bounded, and is trace-class, then and are also trace-class (i.e. the space of trace-class operators on H is an ideal in the algebra of bounded linear operators on H), and
Furthermore, under the same hypothesis, and
The last assertion also holds under the weaker hypothesis that A and T are Hilbert–Schmidt.
If and are two orthonormal bases of H and if T is trace class then
If A is trace-class, then one can define the Fredholm determinant of : where is the spectrum of The trace class condition on guarantees that the infinite product is finite: indeed,
It also implies that if and only if is invertible.
If is trace class then for any orthonormal basis of the sum of positive terms is finite.
If for some Hilbert-Schmidt operators and then for any normal vector holds.
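Several of the properties above can be summarised in standard notation (A bounded, T trace class, λ_n(T) the eigenvalues of T; the symbols are our assumptions):

```latex
\[
  |\operatorname{Tr}(T)| \;\le\; \|T\|_1 , \qquad \|T^{*}\|_1 \;=\; \|T\|_1 ,
\]
\[
  \|A T\|_1 \;\le\; \|A\|\,\|T\|_1 , \qquad
  \|T A\|_1 \;\le\; \|A\|\,\|T\|_1 , \qquad
  \operatorname{Tr}(A T) \;=\; \operatorname{Tr}(T A) ,
\]
\[
  \det(I + T) \;=\; \prod_{n \ge 1}\big(1 + \lambda_n(T)\big) ,
  \qquad
  \prod_{n \ge 1}\big(1 + |\lambda_n(T)|\big) \;\le\; e^{\|T\|_1} .
\]
```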
Lidskii's theorem
Let be a trace-class operator in a separable Hilbert space and let be the eigenvalues of Let us assume that are enumerated with algebraic multiplicities taken into account (that is, if the algebraic multiplicity of is then is repeated times in the list ). Lidskii's theorem (named after Victor Borisovich Lidskii) states that
Note that the series on the right converges absolutely due to Weyl's inequality
between the eigenvalues and the singular values of the compact operator
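In standard notation (our assumption: λ_n(T) the eigenvalues counted with algebraic multiplicity, s_n(T) the singular values), Lidskii's theorem and the Weyl inequality referred to above read:

```latex
\[
  \operatorname{Tr}(T) \;=\; \sum_{n \ge 1} \lambda_n(T) ,
\]
\[
  \sum_{n \ge 1} \big|\lambda_n(T)\big| \;\le\; \sum_{n \ge 1} s_n(T) \;=\; \|T\|_1 .
\]
```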
Relationship between common classes of operators
One can view certain classes of bounded operators as noncommutative analogues of classical sequence spaces, with trace-class operators as the noncommutative analogue of the sequence space ℓ¹ of absolutely summable sequences.
Indeed, it is possible to apply the spectral theorem to show that every normal trace-class operator on a separable Hilbert space can be realized in a certain way as an ℓ¹ sequence with respect to some choice of a pair of Hilbert bases. In the same vein, the bounded operators are noncommutative versions of ℓ∞, the compact operators that of c0 (the sequences convergent to 0), the Hilbert–Schmidt operators correspond to ℓ², and the finite-rank operators to c00 (the sequences that have only finitely many non-zero terms). To some extent, the relationships between these classes of operators are similar to the relationships between their commutative counterparts.
Recall that every compact operator on a Hilbert space takes the following canonical form: there exist orthonormal bases and and a sequence of non-negative numbers with such that
Making the above heuristic comments more precise, we have that is trace-class iff the series is convergent, is Hilbert–Schmidt iff is convergent, and is finite-rank iff the sequence has only finitely many nonzero terms. This allows one to relate these classes of operators. The following inclusions hold and are all proper when is infinite-dimensional:
The trace-class operators are given the trace norm The norm corresponding to the Hilbert–Schmidt inner product is
Also, the usual operator norm is By classical inequalities regarding sequences,
for appropriate
It is also clear that finite-rank operators are dense in both trace-class and Hilbert–Schmidt in their respective norms.
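The chain of norm inequalities alluded to above — operator norm, then Hilbert–Schmidt norm, then trace norm, mirroring the sup-norm ≤ 2-norm ≤ 1-norm pattern for the singular-value sequence — can be checked directly on a matrix (a sketch assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((8, 8))

s = np.linalg.svd(T, compute_uv=False)   # singular values of T

op_norm    = s.max()                     # usual operator norm
hs_norm    = np.sqrt((s ** 2).sum())     # Hilbert-Schmidt (Frobenius) norm
trace_norm = s.sum()                     # trace norm

print(op_norm <= hs_norm <= trace_norm)  # True
```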
Trace class as the dual of compact operators
The dual space of c0 is ℓ¹. Similarly, the dual of the compact operators is the space of trace-class operators. The argument, which we now sketch, is reminiscent of that for the corresponding sequence spaces. Given a continuous linear functional on the compact operators, we identify it with the operator defined by
where is the rank-one operator given by
This identification works because the finite-rank operators are norm-dense in In the event that is a positive operator, for any orthonormal basis one has
where is the identity operator:
But this means that is trace-class. An appeal to polar decomposition extends this to the general case, where need not be positive.
A limiting argument using finite-rank operators shows that Thus is isometrically isomorphic to
As the predual of bounded operators
Recall that the dual of ℓ¹ is ℓ∞. In the present context, the dual of the trace-class operators is the space of bounded operators. More precisely, the set of trace-class operators is a two-sided ideal in the bounded operators, so given any bounded operator we may define a continuous linear functional on the trace-class operators by taking the trace of the product. This correspondence between bounded linear operators and elements of the dual space of the trace-class operators is an isometric isomorphism. It follows that the bounded operators form the dual space of the trace-class operators; this can be used to define the weak-* topology on the bounded operators.
See also
Trace operator
References
Bibliography
Dixmier, J. (1969). Les Algebres d'Operateurs dans l'Espace Hilbertien. Gauthier-Villars.
Operator theory
Topological tensor products
Linear operators | Trace class | [
"Mathematics",
"Engineering"
] | 1,726 | [
"Functions and mappings",
"Tensors",
"Mathematical objects",
"Linear operators",
"Mathematical relations",
"Topological tensor products"
] |
192,294 | https://en.wikipedia.org/wiki/Screened%20Poisson%20equation | In physics, the screened Poisson equation is a Poisson equation, which arises in (for example) the Klein–Gordon equation, electric field screening in plasmas, and nonlocal granular fluidity in granular flow.
Statement of the equation
The equation is
where is the Laplace operator, λ is a constant that expresses the "screening", f is an arbitrary function of position (known as the "source function") and u is the function to be determined.
In the homogeneous case (f=0), the screened Poisson equation is the same as the time-independent Klein–Gordon equation. In the inhomogeneous case, the screened Poisson equation is very similar to the inhomogeneous Helmholtz equation, the only difference being the sign within the brackets.
Electrostatics
In electric-field screening, screened Poisson equation for the electric potential is usually written as (SI units)
where is the screening length, is the charge density produced by an external field in the absence of screening and is the vacuum permittivity. This equation can be derived in several screening models like Thomas–Fermi screening in solid-state physics and Debye screening in plasmas.
Solutions
Three dimensions
Without loss of generality, we will take λ to be non-negative. When λ is zero, the equation reduces to Poisson's equation. Therefore, when λ is very small, the solution approaches that of the unscreened Poisson equation, which, in three dimensions, is a superposition of 1/r functions weighted by the source function f:
On the other hand, when λ is extremely large, u approaches the value f/λ2, which goes to zero as λ goes to infinity. As we shall see, the solution for intermediate values of λ behaves as a superposition of screened (or damped) 1/r functions, with λ behaving as the strength of the screening.
The screened Poisson equation can be solved for general f using the method of Green's functions. The Green's function G is defined by
where δ3 is a delta function with unit mass concentrated at the origin of R3.
Assuming u and its derivatives vanish at large r, we may perform a continuous Fourier transform in spatial coordinates:
where the integral is taken over all space. It is then straightforward to show that
The Green's function in r is therefore given by the inverse Fourier transform,
This integral may be evaluated using spherical coordinates in k-space. The integration over the angular coordinates is straightforward, and the integral reduces to one over the radial wavenumber :
This may be evaluated using contour integration. The result is:
The solution to the full problem is then given by
As stated above, this is a superposition of screened 1/r functions, weighted by the source function f and with λ acting as the strength of the screening. The screened 1/r function is often encountered in physics as a screened Coulomb potential, also called a "Yukawa potential".
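A small numerical check (assuming NumPy, and assuming the screened 1/r function has the Yukawa form exp(−λr)/(4πr) named in the text): away from the origin this function satisfies the homogeneous screened Poisson equation, which for a radial function reads (1/r) d²(rG)/dr² − λ²G = 0.

```python
import numpy as np

lam = 1.7
r = np.linspace(0.5, 5.0, 2001)
h = r[1] - r[0]

G = np.exp(-lam * r) / (4 * np.pi * r)             # assumed Yukawa / screened 1/r form

rG = r * G
d2_rG = (rG[2:] - 2 * rG[1:-1] + rG[:-2]) / h**2   # second derivative of r*G
laplacian_G = d2_rG / r[1:-1]                      # radial Laplacian of G

residual = laplacian_G - lam**2 * G[1:-1]
print(np.max(np.abs(residual)))   # small: only the O(h^2) discretization error remains
```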
Two dimensions
In two dimensions:
In the case of a magnetized plasma, the screened Poisson equation is quasi-2D:
with and , with the magnetic field and is the (ion) Larmor radius.
The two-dimensional Fourier Transform of the associated Green's function is:
The 2D screened Poisson equation yields:
The Green's function is therefore given by the inverse Fourier transform:
This integral can be calculated using polar coordinates in k-space:
The integration over the angular coordinate gives a Bessel function, and the integral reduces to one over the radial wavenumber :
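If, as is standard for the two-dimensional screened (modified Helmholtz) problem, the resulting Green's function is taken to be K0(λr)/(2π), with K0 the modified Bessel function of the second kind — an assumption here, since the formula itself is not reproduced above — one can check numerically that it satisfies the homogeneous equation away from the origin (sketch assuming NumPy and SciPy):

```python
import numpy as np
from scipy.special import k0

lam = 0.8
r = np.linspace(0.5, 6.0, 4001)
h = r[1] - r[0]

G = k0(lam * r) / (2 * np.pi)          # assumed 2D Green's function

# Radial Laplacian in two dimensions: (1/r) d/dr ( r dG/dr ).
dG = np.gradient(G, h)
laplacian_G = np.gradient(r * dG, h) / r

residual = laplacian_G - lam**2 * G
print(np.max(np.abs(residual[5:-5])))  # small, limited only by finite differencing
```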
Connection to the Laplace distribution
The Green's functions in both 2D and 3D are identical to the probability density function of the multivariate Laplace distribution for two and three dimensions respectively.
Application in differential geometry
The homogeneous case, studied in the context of differential geometry, involving Einstein warped product manifolds, explores cases where the warped function satisfies the homogeneous version of the screened Poisson equation. Under specific conditions, the manifold size, Ricci curvature, and screening parameter are interconnected via a quadratic relationship.
See also
Yukawa interaction
References
Partial differential equations
Plasma physics equations
Electrostatics | Screened Poisson equation | [
"Physics"
] | 850 | [
"Equations of physics",
"Plasma physics equations"
] |
192,316 | https://en.wikipedia.org/wiki/Virtual%20particle | A virtual particle is a theoretical transient particle that exhibits some of the characteristics of an ordinary particle, while having its existence limited by the uncertainty principle, which allows the virtual particles to spontaneously emerge from vacuum at short time and space ranges. The concept of virtual particles arises in the perturbation theory of quantum field theory (QFT) where interactions between ordinary particles are described in terms of exchanges of virtual particles. A process involving virtual particles can be described by a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.
Virtual particles do not necessarily carry the same mass as the corresponding ordinary particle, although they always conserve energy and momentum. The closer its characteristics come to those of ordinary particles, the longer the virtual particle exists. They are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, forces—such as the electromagnetic repulsion or attraction between two charges—can be thought of as resulting from the exchange of virtual photons between the charges. Virtual photons are the exchange particles for the electromagnetic interaction.
The term is somewhat loose and vaguely defined, in that it refers to the view that the world is made up of "real particles". "Real particles" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are "temporary" in the sense that they appear in calculations of interactions, but never as asymptotic states or indices to the scattering matrix. The accuracy and use of virtual particles in calculations is firmly established, but as they cannot be detected in experiments, deciding how to precisely describe them is a topic of debate. Although widely used, they are by no means a necessary feature of QFT, but rather are mathematical conveniences — as demonstrated by lattice field theory, which avoids using the concept altogether.
Properties
The concept of virtual particles arises in the perturbation theory of quantum field theory, an approximation scheme in which interactions (in essence, forces) between actual particles are calculated in terms of exchanges of virtual particles. Such calculations are often performed using schematic representations known as Feynman diagrams, in which virtual particles appear as internal lines. By expressing the interaction in terms of the exchange of a virtual particle with four-momentum , where is given by the difference between the four-momenta of the particles entering and leaving the interaction vertex, both momentum and energy are conserved at the interaction vertices of the Feynman diagram.
A virtual particle does not precisely obey the energy–momentum relation . Its kinetic energy may not have the usual relationship to velocity. It can be negative. This is expressed by the phrase off mass shell. The probability amplitude for a virtual particle to exist tends to be canceled out by destructive interference over longer distances and times. As a consequence, a real photon is massless and thus has only two polarization states, whereas a virtual one, being effectively massive, has three polarization states.
Quantum tunnelling may be considered a manifestation of virtual particle exchanges. The range of forces carried by virtual particles is limited by the uncertainty principle, which regards energy and time as conjugate variables; thus, virtual particles of larger mass have more limited range.
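As a rough, hedged illustration of the last point, the range of a force mediated by a virtual particle of mass m is of the order of the reduced Compton wavelength ħ/(mc); the masses below are standard approximate values used only for the estimate.

```python
HBAR_C_MEV_FM = 197.327          # hbar * c in MeV * fm (approximate)

def exchange_range_fm(mass_mev: float) -> float:
    """Order-of-magnitude range, in femtometres, for a virtual particle of given mass."""
    return HBAR_C_MEV_FM / mass_mev

print(exchange_range_fm(139.6))   # pion exchange: ~1.4 fm, the nuclear-force scale
print(exchange_range_fm(80.4e3))  # W-boson exchange: ~0.0025 fm, the weak-force scale
```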
Written in the usual mathematical notations, in the equations of physics, there is no mark of the distinction between virtual and actual particles. The amplitudes of processes with a virtual particle interfere with the amplitudes of processes without it, whereas for an actual particle the cases of existence and non-existence cease to be coherent with each other and do not interfere any more. In the quantum field theory view, actual particles are viewed as being detectable excitations of underlying quantum fields. Virtual particles are also viewed as excitations of the underlying fields, but appear only as forces, not as detectable particles. They are "temporary" in the sense that they appear in some calculations, but are not detected as single particles. Thus, in mathematical terms, they never appear as indices to the scattering matrix, which is to say, they never appear as the observable inputs and outputs of the physical process being modelled.
There are two principal ways in which the notion of virtual particles appears in modern physics. They appear as intermediate terms in Feynman diagrams; that is, as terms in a perturbative calculation. They also appear as an infinite set of states to be summed or integrated over in the calculation of a semi-non-perturbative effect. In the latter case, it is sometimes said that virtual particles contribute to a mechanism that mediates the effect, or that the effect occurs through the virtual particles.
Manifestations
There are many observable physical phenomena that arise in interactions involving virtual particles. For bosonic particles that exhibit rest mass when they are free and actual, virtual interactions are characterized by the relatively short range of the force interaction produced by particle exchange. Confinement can lead to a short range, too. Examples of such short-range interactions are the strong and weak forces, and their associated field bosons.
For the gravitational and electromagnetic forces, the zero rest-mass of the associated boson particle permits long-range forces to be mediated by virtual particles. However, in the case of photons, power and information transfer by virtual particles is a relatively short-range phenomenon (existing only within a few wavelengths of the field-disturbance, which carries information or transferred power), as for example seen in the characteristically short range of inductive and capacitative effects in the near field zone of coils and antennas.
Some field interactions which may be seen in terms of virtual particles are:
The Coulomb force (static electric force) between electric charges. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space this exchange results in the inverse square law for electric force. Since the photon has no mass, the coulomb potential has an infinite range.
The magnetic field between magnetic dipoles. It is caused by the exchange of virtual photons. In symmetric 3-dimensional space, this exchange results in the inverse cube law for magnetic force. Since the photon has no mass, the magnetic potential has an infinite range. Even though the range is infinite, the time lapse allowed for a virtual photon's existence is not infinite.
Electromagnetic induction. This phenomenon transfers energy to and from a magnetic coil via a changing (electro)magnetic field.
The strong nuclear force between quarks is the result of interaction of virtual gluons. The residual of this force outside of quark triplets (neutron and proton) holds neutrons and protons together in nuclei, and is due to virtual mesons such as the pi meson and rho meson.
The weak nuclear force is the result of exchange by virtual W and Z bosons.
The spontaneous emission of a photon during the decay of an excited atom or excited nucleus; such a decay is prohibited by ordinary quantum mechanics and requires the quantization of the electromagnetic field for its explanation.
The Casimir effect, where the ground state of the quantized electromagnetic field causes attraction between a pair of electrically neutral metal plates.
The van der Waals force, which is partly due to the Casimir effect between two atoms.
Vacuum polarization, which involves pair production or the decay of the vacuum, which is the spontaneous production of particle-antiparticle pairs (such as electron-positron).
Lamb shift of positions of atomic levels.
The impedance of free space, which defines the ratio between the electric field strength and the magnetic field strength.
Much of the so-called near-field of radio antennas, where the magnetic and electric effects of the changing current in the antenna wire and the charge effects of the wire's capacitive charge may be (and usually are) important contributors to the total EM field close to the source, but both of which effects are dipole effects that decay with increasing distance from the antenna much more quickly than do the influence of "conventional" electromagnetic waves that are "far" from the source. These far-field waves, for which is (in the limit of long distance) equal to , are composed of actual photons. Actual and virtual photons are mixed near an antenna, with the virtual photons responsible only for the "extra" magnetic-inductive and transient electric-dipole effects, which cause any imbalance between and . As distance from the antenna grows, the near-field effects (as dipole fields) die out more quickly, and only the "radiative" effects that are due to actual photons remain as important effects. Although virtual effects extend to infinity, they drop off in field strength as rather than the field of EM waves composed of actual photons, which drop as .
Most of these have analogous effects in solid-state physics; indeed, one can often gain a better intuitive understanding by examining these cases. In semiconductors, the roles of electrons, positrons and photons in field theory are replaced by electrons in the conduction band, holes in the valence band, and phonons or vibrations of the crystal lattice. A virtual particle is in a virtual state where the probability amplitude is not conserved. Examples of macroscopic virtual phonons, photons, and electrons in the case of the tunneling process were presented by Günter Nimtz and Alfons A. Stahlhofen.
Feynman diagrams
The calculation of scattering amplitudes in theoretical particle physics requires the use of some rather large and complicated integrals over a large number of variables. These integrals do, however, have a regular structure, and may be represented as Feynman diagrams. The appeal of the Feynman diagrams is strong, as it allows for a simple visual presentation of what would otherwise be a rather arcane and abstract formula. In particular, part of the appeal is that the outgoing legs of a Feynman diagram can be associated with actual, on-shell particles. Thus, it is natural to associate the other lines in the diagram with particles as well, called the "virtual particles". In mathematical terms, they correspond to the propagators appearing in the diagram.
In the adjacent image, the solid lines correspond to actual particles (of momentum p1 and so on), while the dotted line corresponds to a virtual particle carrying momentum k. For example, if the solid lines were to correspond to electrons interacting by means of the electromagnetic interaction, the dotted line would correspond to the exchange of a virtual photon. In the case of interacting nucleons, the dotted line would be a virtual pion. In the case of quarks interacting by means of the strong force, the dotted line would be a virtual gluon, and so on.
Virtual particles may be mesons or vector bosons, as in the example above; they may also be fermions. However, in order to preserve quantum numbers, most simple diagrams involving fermion exchange are prohibited. The image to the right shows an allowed diagram, a one-loop diagram. The solid lines correspond to a fermion propagator, the wavy lines to bosons.
Vacuums
In formal terms, a particle is considered to be an eigenstate of the particle number operator a†a, where a is the particle annihilation operator and a† the particle creation operator (sometimes collectively called ladder operators). In many cases, the particle number operator does not commute with the Hamiltonian for the system. This implies the number of particles in an area of space is not a well-defined quantity but, like other quantum observables, is represented by a probability distribution. Since these particles are not certain to exist, they are called virtual particles or vacuum fluctuations of vacuum energy. In a certain sense, they can be understood to be a manifestation of the time-energy uncertainty principle in a vacuum.
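A toy, finite-dimensional illustration of the statement above (truncating the ladder operators to a handful of levels; the Hamiltonian and its parameters are invented purely for illustration): when the Hamiltonian contains creation and annihilation terms, it does not commute with the number operator, so the particle number is not a sharply defined quantity.

```python
import numpy as np

d = 8                                         # number of retained levels (truncation)
a = np.diag(np.sqrt(np.arange(1, d)), k=1)    # annihilation operator
adag = a.T                                    # creation operator
N = adag @ a                                  # number operator a^dagger a

omega, g = 1.0, 0.3                           # illustrative parameters only
H = omega * N + g * (a + adag)                # Hamiltonian with a creation/annihilation term

commutator = N @ H - H @ N
print(np.allclose(commutator, 0))             # False: N does not commute with H
```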
An important example of the "presence" of virtual particles in a vacuum is the Casimir effect. Here, the explanation of the effect requires that the total energy of all of the virtual particles in a vacuum can be added together. Thus, although the virtual particles themselves are not directly observable in the laboratory, they do leave an observable effect: Their zero-point energy results in forces acting on suitably arranged metal plates or dielectrics. On the other hand, the Casimir effect can be interpreted as the relativistic van der Waals force.
Pair production
Virtual particles are often popularly described as coming in pairs, a particle and antiparticle which can be of any kind. These pairs exist for an extremely short time, and then mutually annihilate, or in some cases, the pair may be boosted apart using external energy so that they avoid annihilation and become actual particles, as described below.
This may occur in one of two ways. In an accelerating frame of reference, the virtual particles may appear to be actual to the accelerating observer; this is known as the Unruh effect. In short, the vacuum of a stationary frame appears, to the accelerated observer, to be a warm gas of actual particles in thermodynamic equilibrium.
Another example is pair production in very strong electric fields, sometimes called vacuum decay. If, for example, a pair of atomic nuclei are merged to very briefly form a nucleus with a charge greater than about 140 (that is, larger than about the inverse of the fine-structure constant, which is a dimensionless quantity), the strength of the electric field will be such that it will be energetically favorable to create positron–electron pairs out of the vacuum or Dirac sea, with the electron attracted to the nucleus to annihilate the positive charge. This pair-creation amplitude was first calculated by Julian Schwinger in 1951.
Compared to actual particles
As a consequence of quantum mechanical uncertainty, any object or process that exists for a limited time or in a limited volume cannot have a precisely defined energy or momentum. For this reason, virtual particles – which exist only temporarily as they are exchanged between ordinary particles – do not typically obey the mass-shell relation; the longer a virtual particle exists, the more the energy and momentum approach the mass-shell relation.
The lifetime of real particles is typically vastly longer than the lifetime of the virtual particles. Electromagnetic radiation consists of real photons which may travel light years between the emitter and absorber, but (Coulombic) electrostatic attraction and repulsion is a relatively short-range force that is a consequence of the exchange of virtual photons.
See also
Anomalous photovoltaic effect
False vacuum
Force carrier
Quasiparticle
Static forces and virtual-particle exchange
Zero-energy universe
Vacuum Rabi oscillation
Quantum foam
Virtual black hole
Added mass
Footnotes
References
External links
Are virtual particles really constantly popping in and out of existence?– Gordon Kane, director of the Michigan Center for Theoretical Physics at the University of Michigan at Ann Arbor, proposes an answer at the Scientific American website.
Virtual Particles: What are they?
D Kaiser (2005) American Scientist 93 p. 156 popular article
Concepts in physics
Quantum field theory | Virtual particle | [
"Physics"
] | 3,070 | [
"Quantum field theory",
"Quantum mechanics",
"nan"
] |
192,595 | https://en.wikipedia.org/wiki/Methicillin-resistant%20Staphylococcus%20aureus | Methicillin-resistant Staphylococcus aureus (MRSA) is a group of gram-positive bacteria that are genetically distinct from other strains of Staphylococcus aureus. MRSA is responsible for several difficult-to-treat infections in humans. It caused more than 100,000 deaths worldwide attributable to antimicrobial resistance in 2019.
MRSA is any strain of S. aureus that has developed (through natural selection) or acquired (through horizontal gene transfer) a multiple drug resistance to beta-lactam antibiotics. Beta-lactam (β-lactam) antibiotics are a broad-spectrum group that include some penams (penicillin derivatives such as methicillin and oxacillin) and cephems such as the cephalosporins. Strains unable to resist these antibiotics are classified as methicillin-susceptible S. aureus, or MSSA.
MRSA infection is common in hospitals, prisons, and nursing homes, where people with open wounds, invasive devices such as catheters, and weakened immune systems are at greater risk of healthcare-associated infection. MRSA began as a hospital-acquired infection but has become community-acquired, as well as livestock-acquired. The terms HA-MRSA (healthcare-associated or hospital-acquired MRSA), CA-MRSA (community-associated MRSA), and LA-MRSA (livestock-associated MRSA) reflect this.
Signs and symptoms
In humans, Staphylococcus aureus is part of the normal microbiota present in the upper respiratory tract, and on skin and in the gut mucosa. However, along with similar bacterial species that can colonize and act symbiotically, they can cause disease if they begin to take over the tissues they have colonized or invade other tissues; the resultant infection has been called a "pathobiont".
After 72 hours, MRSA can take hold in human tissues and eventually become resistant to treatment. The initial presentation of MRSA is small red bumps that resemble pimples, spider bites, or boils; they may be accompanied by fever and, occasionally, rashes. Within a few days, the bumps become larger and more painful; they eventually open into deep, pus-filled boils. About 75 percent of CA-MRSA infections are localized to skin and soft tissue and usually can be treated effectively.
Risk factors
A select few of the populations at risk include:
People with indwelling implants, prostheses, drains, and catheters
People who are frequently in crowded places, especially with shared equipment and skin-to-skin contact
People with weak immune systems (HIV/AIDS, lupus, or cancer patients; transplant recipients; severe asthmatics; primary immune deficiencies, etc.)
Diabetics
Intravenous drug users
Regular contact with someone who has injected drugs in the past year
Users of quinolone antibiotics
Elderly people
School children sharing sports and other equipment
College students living in dormitories
People staying or working in a health-care facility for an extended period of time
People who spend time in coastal waters where MRSA is present, such as some beaches in Florida and the West Coast of the United States
People who spend time in confined spaces with other people, including occupants of homeless shelters, prison inmates, and military recruits in basic training
Veterinarians, livestock handlers, and pet owners
People who ingest unpasteurized milk
People who are immunocompromised and also colonized
People with chronic obstructive pulmonary disease
People who have had thoracic surgery
As many as 22% of people infected with MRSA do not have any discernable risk factors.
Hospitalized people
People who are hospitalized, including the elderly, are often immunocompromised and susceptible to infection of all kinds, including MRSA; an infection by MRSA is called healthcare-associated or hospital-acquired methicillin-resistant S. aureus (HA-MRSA).
Generally, those infected by MRSA stay infected for just under 10 days, if treated by a doctor, although effects may vary from person to person.
Both surgical and nonsurgical wounds can be infected with HA-MRSA. Surgical site infections occur on the skin surface, but can spread to internal organs and blood to cause sepsis. Transmission can occur between healthcare providers and patients because some providers may neglect to perform preventative hand-washing between examinations.
People in nursing homes are at risk for all the reasons above, further complicated by their generally weaker immune systems.
Prison inmates and military personnel
Prisons and military barracks can be crowded and confined, and poor hygiene conditions may proliferate, thus putting inhabitants at increased risk of contracting MRSA. Cases of MRSA in such populations were first reported in the United States and later in Canada. The earliest reports were made by the Centers for Disease Control and Prevention in US state prisons. In the news media, hundreds of reports of MRSA outbreaks in prisons appeared between 2000 and 2008. For example, in February 2008, the Tulsa County jail in Oklahoma started treating an average of 12 S. aureus cases per month.
Animals
Antibiotic use in livestock increases the risk that MRSA will develop among the livestock and other animals that may reside near them; strains MRSA ST398 and CC398 are transmissible to humans. Generally, animals are asymptomatic.
Domestic pets are susceptible to MRSA infection by transmission from their owners; conversely, MRSA-infected pets can also transmit MRSA to humans.
Athletes
Locker rooms, gyms, and related athletic facilities offer potential sites for MRSA contamination and infection. Athletes have been identified as a high-risk group. A study linked MRSA to the abrasions caused by artificial turf. Three studies by the Texas State Department of Health found the infection rate among football players was 16 times the national average. In October 2006, a high-school football player was temporarily paralyzed from MRSA-infected turf burns. His infection returned in January 2007 and required three surgeries to remove infected tissue, and three weeks of hospital stay.
In 2013, Lawrence Tynes, Carl Nicks, and Johnthan Banks of the Tampa Bay Buccaneers were diagnosed with MRSA. Tynes and Nicks apparently did not contract the infection from each other, but whether Banks contracted it from either individual is unknown. In 2015, Los Angeles Dodgers infielder Justin Turner was infected while the team visited the New York Mets. In October 2015, New York Giants tight end Daniel Fells was hospitalized with a serious MRSA infection.
Children
MRSA is becoming a critical problem in children; studies found 4.6% of patients in U.S. health-care facilities, (presumably) including hospital nurseries, were infected or colonized with MRSA. Children and adults who come in contact with day-care centers, playgrounds, locker rooms, camps, dormitories, classrooms and other school settings, and gyms and workout facilities are at higher risk of contracting MRSA. Parents should be especially cautious of children who participate in activities where sports equipment is shared, such as football helmets and uniforms.
Intravenous drug users
Drugs that are injected with needles have contributed to a rise in MRSA, with injection drug use (IDU) accounting for 24.1% of cases (1,839 individuals) in the Tennessee Hospital Discharge System. Unsanitary injection practices create an access point for MRSA to enter the bloodstream and begin infecting the host. Furthermore, given MRSA's high contagion rate, a common risk factor is close contact with someone who has injected drugs in the past year.
Mechanism
Antimicrobial resistance is genetically based; resistance is mediated by the acquisition of extrachromosomal genetic elements containing genes that confer resistance to certain antibiotics. Examples of such elements include plasmids, transposable genetic elements, and genomic islands, which can be transferred between bacteria through horizontal gene transfer. A defining characteristic of MRSA is its ability to thrive in the presence of penicillin-like antibiotics, which normally prevent bacterial growth by inhibiting synthesis of cell wall material. This is due to a resistance gene, mecA, which stops β-lactam antibiotics from inactivating the enzymes (transpeptidases) critical for cell wall synthesis.
SCCmec
Staphylococcal cassette chromosome mec (SCCmec) is a genomic island of unknown origin containing the antibiotic resistance gene mecA. SCCmec contains additional genes beyond mecA, including the cytolysin gene psm-mec, which may suppress virulence in HA-acquired MRSA strains. In addition, this locus encodes strain-dependent gene regulatory RNAs known as psm-mecRNA. SCCmec also contains ccrA and ccrB; both genes encode recombinases that mediate the site-specific integration and excision of the SCCmec element from the S. aureus chromosome. Currently, six unique SCCmec types ranging in size from 21 to 67 kb have been identified; they are designated types I–VI and are distinguished by variation in mec and ccr gene complexes. Owing to the size of the SCCmec element and the constraints of horizontal gene transfer, a minimum of five clones are thought to be responsible for the spread of MRSA infections, with clonal complex (CC) 8 most prevalent. SCCmec is thought to have originated in the closely related Staphylococcus sciuri species and transferred horizontally to S. aureus.
Different SCCmec genotypes confer different microbiological characteristics, such as different antimicrobial resistance rates. Different genotypes are also associated with different types of infections. Types I–III SCCmec are large elements that typically contain additional resistance genes and are characteristically isolated from HA-MRSA strains. Conversely, CA-MRSA is associated with types IV and V, which are smaller and lack resistance genes other than mecA.
These distinctions were thoroughly investigated by Collins et al. in 2001, and can be explained by the fitness differences associated with carriage of a large or small SCCmec plasmid. Carriage of large plasmids, such as SCCmecI–III, is costly to the bacteria, resulting in a compensatory decrease in virulence expression. MRSA is able to thrive in hospital settings with increased antibiotic resistance but decreased virulence – HA-MRSA targets immunocompromised, hospitalized hosts, thus a decrease in virulence is not maladaptive. In contrast, CA-MRSA tends to carry lower-fitness cost SCCmec elements to offset the increased virulence and toxicity expression required to infect healthy hosts.
mecA
mecA is a biomarker gene responsible for resistance to methicillin and other β-lactam antibiotics. After acquisition of mecA, the gene must be integrated and localized in the S. aureus chromosome. mecA encodes penicillin-binding protein 2a (PBP2a), which differs from other penicillin-binding proteins as its active site does not bind methicillin or other β-lactam antibiotics. As such, PBP2a can continue to catalyze the transpeptidation reaction required for peptidoglycan cross-linking, enabling cell wall synthesis even in the presence of antibiotics. As a consequence of the inability of PBP2a to interact with β-lactam moieties, acquisition of mecA confers resistance to all β-lactam antibiotics in addition to methicillin.
mecA is under the control of two regulatory genes, mecI and mecR1. MecI is usually bound to the mecA promoter and functions as a repressor. In the presence of a β-lactam antibiotic, MecR1 initiates a signal transduction cascade that leads to transcriptional activation of mecA. This is achieved by MecR1-mediated cleavage of MecI, which alleviates MecI repression. mecA is further controlled by two co-repressors, blaI and blaR1. blaI and blaR1 are homologous to mecI and mecR1, respectively, and normally function as regulators of blaZ, which is responsible for penicillin resistance. The DNA sequences bound by mecI and blaI are identical; therefore, blaI can also bind the mecA operator to repress transcription of mecA.
Arginine catabolic mobile element
The arginine catabolic mobile element (ACME) is a virulence factor present in many MRSA strains but not prevalent in MSSA. SpeG-positive ACME compensates for the polyamine hypersensitivity of S. aureus and facilitates stable skin colonization, wound infection, and person-to-person transmission.
Strains
Acquisition of SCCmec in methicillin-sensitive S. aureus (MSSA) gives rise to a number of genetically different MRSA lineages. These genetic variations within different MRSA strains possibly explain the variability in virulence and associated MRSA infections. The first MRSA strain, ST250 MRSA-1, originated from SCCmec and ST250-MSSA integration. Historically, major MRSA clones ST2470-MRSA-I, ST239-MRSA-III, ST5-MRSA-II, and ST5-MRSA-IV were responsible for causing hospital-acquired MRSA (HA-MRSA) infections. ST239-MRSA-III, known as the Brazilian clone, was highly transmissible compared to others and distributed in Argentina, Czech Republic, and Portugal.
In the UK, the most common strains of MRSA are EMRSA15 and EMRSA16. EMRSA16 has been found to be identical to the ST36:USA200 strain, which circulates in the United States, and to carry the SCCmec type II, enterotoxin A and toxic shock syndrome toxin 1 genes. Under the new international typing system, this strain is now called MRSA252. EMRSA 15 is also found to be one of the common MRSA strains in Asia. Other common strains include ST5:USA100 and EMRSA 1. These strains are genetic characteristics of HA-MRSA.
Community-acquired MRSA (CA-MRSA) strains emerged in the late 1990s to early 2000s, infecting healthy people who had not been in contact with healthcare facilities. Researchers suggest that CA-MRSA did not evolve from HA-MRSA. This is further supported by molecular typing of CA-MRSA strains and genome comparison between CA-MRSA and HA-MRSA, which indicate that novel MRSA strains arose from separate, independent integrations of SCCmec into MSSA. By the mid-2000s, CA-MRSA had been introduced into healthcare systems, and distinguishing CA-MRSA from HA-MRSA became difficult. Community-acquired MRSA is more easily treated and more virulent than hospital-acquired MRSA (HA-MRSA). The genetic mechanism for the enhanced virulence in CA-MRSA remains an active area of research. The Panton–Valentine leukocidin (PVL) genes are of particular interest because they are a unique feature of CA-MRSA.
In the United States, most cases of CA-MRSA are caused by a CC8 strain designated ST8:USA300, which carries SCCmec type IV, Panton–Valentine leukocidin, PSM-alpha and enterotoxins Q and K, and ST1:USA400. The ST8:USA300 strain results in skin infections, necrotizing fasciitis, and toxic shock syndrome, whereas the ST1:USA400 strain results in necrotizing pneumonia and pulmonary sepsis. Other community-acquired strains of MRSA are ST8:USA500 and ST59:USA1000. In many nations of the world, MRSA strains with different genetic background types have come to predominate among CA-MRSA strains; USA300 easily tops the list in the U.S. and is becoming more common in Canada after its first appearance there in 2004. For example, in Australia, ST93 strains are common, while in continental Europe ST80 strains, which carry SCCmec type IV, predominate. In Taiwan, ST59 strains, some of which are resistant to many non-beta-lactam antibiotics, have arisen as common causes of skin and soft tissue infections in the community. In a remote region of Alaska, unlike most of the continental U.S., USA300 was found only rarely in a study of MRSA strains from outbreaks in 1996 and 2000 as well as in surveillance from 2004 to 2006.
A MRSA strain, CC398, is found in intensively reared production animals (primarily pigs, but also cattle and poultry), where it can be transmitted to humans as LA-MRSA (livestock-associated MRSA).
Diagnosis
Diagnostic microbiology laboratories and reference laboratories are key for identifying outbreaks of MRSA. Normally, a bacterium must be cultured from blood, urine, sputum, or other body-fluid samples, and in sufficient quantities to perform confirmatory tests early-on. Still, because no quick and easy method exists to diagnose MRSA, initial treatment of the infection is often based upon "strong suspicion" and techniques by the treating physician; these include quantitative PCR procedures, which are employed in clinical laboratories for quickly detecting and identifying MRSA strains.
Another common laboratory test is a rapid latex agglutination test that detects the PBP2a protein. PBP2a is a variant penicillin-binding protein that imparts the ability of S. aureus to be resistant to oxacillin.
Microbiology
Like all S. aureus (also abbreviated SA at times), methicillin-resistant S. aureus is a gram-positive, spherical (coccus) bacterium about 1 micron in diameter. It does not form spores and it is not motile. It is frequently found in grape-like clusters or chains. Unlike methicillin-susceptible S. aureus (MSSA), MRSA is slow-growing on a variety of media and has been found to exist in mixed colonies of MSSA. The mecA gene, which confers resistance to a number of antibiotics, is always present in MRSA and usually absent in MSSA; however, in some instances, the mecA gene is present in MSSA but is not expressed. Polymerase chain reaction (PCR) testing is the most precise method for identifying MRSA strains. Specialized culture media have been developed to better differentiate between MSSA and MRSA and, in some cases, such media can be used to identify specific strains that are resistant to different antibiotics.
Other strains of S. aureus have emerged that are resistant to oxacillin, clindamycin, teicoplanin, and erythromycin. These resistant strains may or may not possess the mecA gene. S. aureus has also developed resistance to vancomycin (VRSA). One strain is only partially susceptible to vancomycin and is called vancomycin-intermediate S. aureus (VISA). GISA, a strain of resistant S. aureus, is glycopeptide-intermediate S. aureus and is less susceptible to vancomycin and teicoplanin. Resistance to antibiotics in S. aureus can be quantified by determining the amount of the antibiotic that must be used to inhibit growth. If S. aureus is inhibited at a concentration of vancomycin less than or equal to 4 μg/ml, it is said to be susceptible. If a concentration greater than 32 μg/ml is necessary to inhibit growth, it is said to be resistant.
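As a minimal illustration of the breakpoint logic described above — using only the two thresholds quoted in this paragraph, not current clinical breakpoint tables, which differ — a classification helper might look like the following sketch (the function name and the "intermediate" label are hypothetical):

```python
def classify_vancomycin_mic(mic_ug_per_ml: float) -> str:
    """Classify an S. aureus isolate from its vancomycin MIC, using only the
    thresholds quoted in the text: <= 4 ug/ml susceptible, > 32 ug/ml resistant;
    anything in between is reported here as intermediate (the VISA/GISA range)."""
    if mic_ug_per_ml <= 4:
        return "susceptible"
    if mic_ug_per_ml > 32:
        return "resistant"
    return "intermediate"

for mic in (2, 8, 64):
    print(mic, classify_vancomycin_mic(mic))
```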
Prevention
Screening
In health-care settings, isolating those with MRSA from those without the infection is one method to prevent transmission. Rapid culture and sensitivity testing and molecular testing identifies carriers and reduces infection rates. It is especially important to test patients in these settings since 2% of people are carriers of MRSA, even though in many of these cases the bacteria reside in the nostril and the patient will not present any symptoms.
MRSA can be identified by swabbing the nostrils and isolating the bacteria found there. Combined with extra sanitary measures for those in contact with infected people, swab screening people admitted to hospitals has been found to be effective in minimizing the spread of MRSA in hospitals in the United States, Denmark, Finland, and the Netherlands.
Handwashing
The Centers for Disease Control and Prevention offers suggestions for preventing the contraction and spread of MRSA infection which are applicable to those in community settings, including incarcerated populations, childcare center employees, and athletes. To prevent the spread of MRSA, the recommendations are to wash hands thoroughly and regularly using soap and water or an alcohol-based sanitizer. Additional recommendations are to keep wounds clean and covered, avoid contact with other people's wounds, avoid sharing personal items such as razors or towels, shower after exercising at athletic facilities, and shower before using swimming pools or whirlpools.
Isolation
Excluding medical facilities, current US guidance does not require workers with MRSA infections to be routinely excluded from the general workplace. The National Institutes of Health recommend that those with wound drainage that cannot be covered and contained with a clean, dry bandage and those who cannot maintain good hygiene practices be reassigned, and patients with wound drainage should also automatically be put on "Contact Precaution," regardless of whether or not they have a known infection. Workers with active infections are excluded from activities where skin-to-skin contact is likely to occur. To prevent the spread of staphylococci or MRSA in the workplace, employers are encouraged to make available adequate facilities that support good hygiene. In addition, surface and equipment sanitizing should conform to Environmental Protection Agency-registered disinfectants. In hospital settings, contact isolation can be stopped after one to three cultures come back negative. Before the patient is cleared from isolation, it is advised that there is dedicated patient-care or single-use equipment for that particular patient. If this is not possible, the equipment must be properly disinfected before it is used on another patient.
To prevent the spread of MRSA in the home, health departments recommend laundering materials that have come into contact with infected persons separately and with a dilute bleach solution; to reduce the bacterial load in one's nose and skin; and to clean and disinfect those things in the house that people regularly touch, such as sinks, tubs, kitchen counters, cell phones, light switches, doorknobs, phones, toilets, and computer keyboards.
Restricting antibiotic use
Glycopeptides, cephalosporins, and in particular, quinolones are associated with an increased risk of colonisation of MRSA. Reducing use of antibiotic classes that promote MRSA colonisation, especially fluoroquinolones, is recommended in current guidelines.
Public health considerations
Mathematical models describe one way in which a loss of infection control can occur after measures for screening and isolation seem to be effective for years, as happened in the UK. In the "search and destroy" strategy that was employed by all UK hospitals until the mid-1990s, all hospitalized people with MRSA were immediately isolated, and all staff were screened for MRSA and were prevented from working until they had completed a course of eradication therapy that was proven to work. Loss of control occurs because colonised people are discharged back into the community and then readmitted; when the number of colonised people in the community reaches a certain threshold, the "search and destroy" strategy is overwhelmed. One of the few countries not to have been overwhelmed by MRSA is the Netherlands: an important part of the success of the Dutch strategy may have been to attempt eradication of carriage upon discharge from hospital.
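A crude toy recursion — not the published models, with every number invented for illustration — can reproduce the qualitative behaviour described above: community prevalence grows slowly while colonised readmissions stay within isolation capacity, and control is lost once that threshold is exceeded.

```python
def mrsa_toy(years=60, imports=5.0, clearance=0.05,
             readmission=0.2, isolation_capacity=15.0, hospital_spread=1.0):
    """Year-by-year recursion for colonised people in the community (toy numbers)."""
    colonised = 10.0
    history = []
    for _ in range(years):
        admissions = readmission * colonised            # colonised readmissions this year
        overflow = max(0.0, admissions - isolation_capacity)
        colonised = (1 - clearance) * colonised + imports + hospital_spread * overflow
        history.append(round(colonised))
    return history

print(mrsa_toy())   # growth slows as it approaches a controlled level for roughly the
                    # first 25 steps, then accelerates once readmissions exceed capacity
```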
Decolonization
As of 2013, no randomized clinical trials had been conducted to understand how to treat nonsurgical wounds that had been colonized, but not infected, with MRSA, and insufficient studies had been conducted to understand how to treat surgical wounds that had been colonized with MRSA. As of 2013, whether strategies to eradicate MRSA colonization of people in nursing homes reduced infection rates was not known.
Care should be taken when trying to drain boils, as disruption of surrounding tissue can lead to larger infections, including infection of the blood stream. Mupirocin 2% ointment can be effective at reducing the size of lesions. A secondary covering of clothing is preferred. As shown in an animal study with diabetic mice, the topical application of a mixture of sugar (70%) and 3% povidone-iodine paste is an effective agent for the treatment of diabetic ulcers with MRSA infection.
Community settings
Maintaining the necessary cleanliness may be difficult for people if they do not have access to facilities such as public toilets with handwashing facilities. In the United Kingdom, the Workplace (Health, Safety and Welfare) Regulations 1992 require businesses to provide toilets for their employees, along with washing facilities including soap or other suitable means of cleaning. Guidance on how many toilets to provide and what sort of washing facilities should be provided alongside them is given in the Workplace (Health, Safety and Welfare) Approved Code of Practice and Guidance L24, available from Health and Safety Executive Books, but no legal obligations exist on local authorities in the United Kingdom to provide public toilets, and although in 2008, the House of Commons Communities and Local Government Committee called for a duty on local authorities to develop a public toilet strategy, this was rejected by the Government.
Agriculture
The World Health Organization advocates regulations on the use of antibiotics in animal feed to prevent the emergence of drug-resistant strains of MRSA. MRSA is established in animals and birds.
Treatment
Antibiotics
Treatment of MRSA infection is urgent, and delays can be fatal. The location and history of the infection determine the treatment. The route of administration of an antibiotic varies: antibiotics effective against MRSA can be given intravenously, orally, or by a combination of both, depending on the specific circumstances and patient characteristics. Concurrent treatment with vancomycin and beta-lactam agents may have a synergistic effect.
Both CA-MRSA and HA-MRSA are resistant to traditional anti-staphylococcal beta-lactam antibiotics, such as cephalexin. CA-MRSA has a greater spectrum of antimicrobial susceptibility to sulfa drugs (like co-trimoxazole (trimethoprim/sulfamethoxazole), tetracyclines (like doxycycline and minocycline) and clindamycin (for osteomyelitis). MRSA can be eradicated with a regimen of linezolid, though treatment protocols vary and serum levels of antibiotics vary widely from person to person and may affect outcomes.
Treatment of MRSA with linezolid has been successful in 87% of people, compared with approximately 49% for vancomycin, and linezolid is more effective than vancomycin in soft-tissue infections. Linezolid belongs to the newer oxazolidinone class of antibiotics, which has been shown to be effective against both CA-MRSA and HA-MRSA. The Infectious Diseases Society of America recommends vancomycin, linezolid, or clindamycin (if the strain is susceptible) for treating those with MRSA pneumonia.
Ceftaroline, a fifth-generation cephalosporin, is the first beta-lactam antibiotic approved in the US to treat MRSA infections in skin and soft tissue or community-acquired pneumonia.
Vancomycin and teicoplanin are glycopeptide antibiotics used to treat MRSA infections. Teicoplanin is a structural congener of vancomycin that has a similar activity spectrum but a longer half-life. Because the oral absorption of vancomycin and teicoplanin is very low, these agents can be administered intravenously to control systemic infections. Treatment of MRSA infection with vancomycin can be complicated, due to its inconvenient route of administration. Moreover, the efficacy of vancomycin against MRSA is inferior to that of anti-staphylococcal beta-lactam antibiotics against methicillin-susceptible S. aureus (MSSA).
Several newly discovered strains of MRSA show antibiotic resistance even to vancomycin and teicoplanin. Strains with intermediate (4–8 μg/ml) levels of resistance, termed glycopeptide-intermediate S. aureus (GISA) or vancomycin-intermediate S. aureus (VISA), began appearing in the late 1990s. The first identified case was in Japan in 1996, and strains have since been found in hospitals in England, France, and the US. The first documented strain with complete (>16 μg/ml) resistance to vancomycin, termed vancomycin-resistant S. aureus (VRSA), appeared in the United States in 2002. In 2011, a variant of vancomycin was tested that binds to the lactate variation and also binds well to the original target, thus reinstating potent antimicrobial activity. Linezolid, quinupristin/dalfopristin, daptomycin, ceftaroline, and tigecycline are used to treat more severe infections that do not respond to glycopeptides such as vancomycin. Current guidelines recommend daptomycin for VISA bloodstream infections and endocarditis.
Oxazolidinones such as linezolid became available in the 1990s and are comparable to vancomycin in effectiveness against MRSA. Linezolid resistance in S. aureus was reported in 2001, but infection rates have been at consistently low levels. In the United Kingdom and Ireland, no linezolid resistance was found in staphylococci collected from bacteremia cases between 2001 and 2006.
Skin and soft-tissue infections
In skin abscesses, the primary treatment recommended is removal of dead tissue, incision, and drainage. More information is needed to determine the effectiveness of specific antibiotics therapy in surgical site infections (SSIs). Examples of soft-tissue infections from MRSA include ulcers, impetigo, abscesses, and SSIs.
In surgical wounds, evidence is weak (high risk of bias) that linezolid may be better than vancomycin to eradicate MRSA SSIs.
MRSA colonization is also found in nonsurgical wounds such as traumatic wounds, burns, and chronic ulcers (i.e.: diabetic ulcer, pressure ulcer, arterial insufficiency ulcer, venous ulcer). No conclusive evidence has been found about the best antibiotic regimen to treat MRSA colonization.
Children
In skin infections and secondary infection sites, topical mupirocin is used successfully. For bacteremia and endocarditis, vancomycin or daptomycin is considered. For children with MRSA-infected bone or joints, treatment is individualized and long-term. Neonates can develop neonatal pustulosis as a result of topical infection with MRSA. Clindamycin is not approved for the treatment of MRSA infection, but it is still used in children for soft-tissue infections.
Endocarditis and bacteremia
Evaluation for the replacement of a prosthetic valve is considered. Appropriate antibiotic therapy may be administered for up to six weeks. Four to six weeks of antibiotic treatment is often recommended, and is dependent upon the extent of MRSA infection.
Respiratory infections
In hospitalized patients with CA-MRSA pneumonia, treatment begins before culture results are available. After antibiotic susceptibility testing has been performed, the infection may be treated with vancomycin or linezolid for up to 21 days. If the pneumonia is complicated by the accumulation of pus in the pleural cavity surrounding the lungs, drainage may be done along with antibiotic therapy. People with cystic fibrosis may develop respiratory complications related to MRSA infection. The incidence of MRSA in those with cystic fibrosis increased five-fold between 2000 and 2015. Most of these infections were HA-MRSA. MRSA accounts for 26% of lung infections in those with cystic fibrosis.
There is insufficient evidence to support the use of topical or systematic antibiotics for nasal or extra-nasal MRSA infection.
Bone and joint infections
Cleaning the wound of dead tissue and draining abscesses is the first action to treat the MRSA infection. Administration of antibiotics is not standardized and is adapted by a case-by-case basis. Antibiotic therapy can last up to 3 months and sometimes even longer.
Infected implants
MRSA infection can occur associated with implants and joint replacements. Recommendations on treatment are based upon the length of time the implant has been in place. In cases of a recent placement of a surgical implant or artificial joint, the device may be retained while antibiotic therapy continues. If the placement of the device has occurred over 3 weeks ago, the device may be removed. Antibiotic therapy is used in each instance sometimes long-term.
Central nervous system
MRSA can infect the central nervous system and form brain abscess, subdural empyema, and spinal epidural abscess. Excision and drainage can be done along with antibiotic treatment. Septic thrombosis of cavernous or dural venous sinus can sometimes be a complication.
Other infections
Treatment is not standardized for other instances of MRSA infection in a wide range of tissues. Treatment varies for MRSA infections related to: subperiosteal abscesses, necrotizing pneumonia, cellulitis, pyomyositis, necrotizing fasciitis, mediastinitis, myocardial, perinephric, hepatic, and splenic abscesses, septic thrombophlebitis, and severe ocular infections, including endophthalmitis. Pets can be reservoirs and pass on MRSA to people. In some cases, the infection can be symptomatic and the pet can develop a MRSA infection. Health departments recommend that the pet be taken to the veterinarian if MRSA infections keep occurring in the people who have contact with the pet.
Epidemiology
Worldwide, an estimated 2 billion people carry some form of S. aureus; of these, up to 53 million (2.7% of carriers) are thought to carry MRSA. S. aureus was identified as one of the six leading pathogens for deaths associated with resistance in 2019 and 100,000 deaths caused by MRSA were attributable to antimicrobial resistance.
HA-MRSA (healthcare associated)
In a US cohort study of 1,300 healthy children, 2.4% carried MRSA in their nose. Bacterial sepsis occurs with most (75%) of cases of invasive MRSA infection. In 2009, there were an estimated 463,017 hospitalizations due to MRSA, or a rate of 11.74 per 1,000 hospitalizations. Many of these infections are less serious, but the Centers for Disease Control and Prevention (CDC) estimate that there are 80,461 invasive MRSA infections and 11,285 deaths due to MRSA annually. In 2003, the cost for a hospitalization due to MRSA infection was US$92,363; a hospital stay for MSSA was $52,791.
Infection after surgery is relatively uncommon, but occurs in as many as 33% of specific types of surgeries; infections of surgical sites range from 1% to 33%. MRSA sepsis that occurs within 30 days following a surgical infection has a 15–38% mortality rate; MRSA sepsis that occurs within one year has a mortality rate of around 55%. There may be increased mortality associated with cardiac surgery: the rate is 12.9% in those infected with MRSA compared with 3% in those infected with other organisms. Patients whose surgical sites were infected with MRSA also had longer hospital stays than those whose were not.
Globally, MRSA infection rates are dynamic and vary year to year. According to the 2006 SENTRY Antimicrobial Surveillance Program report, the incidence of MRSA bloodstream infections was 35.9% in North America, 29% in Latin America, and 22.8% in Europe. The rate of all MRSA infections in Europe ranged from 50% in Portugal down to 0.8% in Sweden. Overall MRSA infection rates also varied in Latin America: Colombia and Venezuela combined had 3%, Mexico had 50%, Chile 38%, Brazil 29%, and Argentina 28%.
The Centers for Disease Control and Prevention (CDC) estimated that about 1.7 million nosocomial infections occurred in the United States in 2002, with 99,000 associated deaths. The estimated incidence is 4.5 nosocomial infections per 100 admissions, with direct costs (at 2004 prices) ranging from $10,500 (£5300, €8000 at 2006 rates) per case (for bloodstream, urinary tract, or respiratory infections in immunocompetent people) to $111,000 (£57,000, €85,000) per case for antibiotic-resistant infections in the bloodstream in people with transplants. With these numbers, conservative estimates of the total direct costs of nosocomial infections are above $17 billion. The reduction of such infections forms an important component of efforts to improve healthcare safety. (BMJ 2007) MRSA alone was associated with 8% of nosocomial infections reported to the CDC National Healthcare Safety Network from January 2006 to October 2007.
The British National Audit Office estimated that the incidence of nosocomial infections in Europe ranges from 4% to 10% of all hospital admissions. As of early 2005, the number of deaths in the United Kingdom attributed to MRSA has been estimated by various sources to lie in the area of 3,000 per year.
In the United States, an estimated 95 million people carry S. aureus in their noses; of these, 2.5 million (2.6% of carriers) carry MRSA. A population review conducted in three U.S. communities showed the annual incidence of CA-MRSA during 2001–2002 to be 18–25.7/100,000; most CA-MRSA isolates were associated with clinically relevant infections, and 23% of people required hospitalization.
CA-MRSA (community associated)
In a US cohort study of 1,300 healthy children, 2.4% carried MRSA in their noses. There are concerns that the presence of MRSA in the environment may allow resistance to be transferred to other bacteria through phages (viruses that infect bacteria). The source of MRSA could come from hospital waste, farm sewage, or other waste water. MRSA is also common in infections of dogs and cats and transmission to humans can occur, since pet owners hug and kiss their pets or let them sleep in their beds. While sharing of isolates can occur, infections in humans seem to originate from HA-MRSA rather than from pet-acquired CA-MRSA.
LA-MRSA (livestock associated)
In 2004, MRSA was first isolated on a Dutch pig farm, leading to further investigations of livestock-associated MRSA (LA-MRSA). LA-MRSA has since been observed in Korea, Brazil, Switzerland, Malaysia, India, Great Britain, Denmark, and China.
History
In 1961, the first known MRSA isolates were reported in a British study, and from 1961 to 1967, infrequent hospital outbreaks occurred in Western Europe and Australia, with methicillin then being licensed in England to treat resistant infections. Other reports of MRSA began to be described in the 1970s. Resistance to other antibiotics was documented in some strains of S. aureus. In 1996, vancomycin resistance was reported in Japan. In many countries, outbreaks of MRSA infection were reported to be transmitted between hospitals. By 1995, the proportion of hospital S. aureus infections attributable to MRSA had increased to 22%, and by 1997 it had reached 50%.
The first report of community-associated MRSA (CA-MRSA) occurred in 1981, and in 1982, a large outbreak of CA-MRSA occurred among intravenous drug users in Detroit, Michigan. Additional outbreaks of CA-MRSA were reported through the 1980s and 1990s, including outbreaks among Australian Aboriginal populations that had never been exposed to hospitals. In the mid-1990s, scattered reports of CA-MRSA outbreaks among US children were made. While HA-MRSA rates stabilized between 1998 and 2008, CA-MRSA rates continued to rise. A report released by the University of Chicago Children's Hospital comparing two periods (1993–1995 and 1995–1997) found a 25-fold increase in the rate of hospitalizations due to MRSA among children in the United States. In 1999, the University of Chicago reported the first deaths from invasive MRSA among otherwise healthy children in the United States. By 2004, the genome for various strains of MRSA were described.
The observed increased mortality among MRSA-infected people may arguably be the result of the increased underlying morbidity of these people. Several studies that have adjusted for underlying disease, however, including one by Blot and colleagues, still found MRSA bacteremia to have a higher attributable mortality than methicillin-susceptible S. aureus (MSSA) bacteremia.
A population-based study of the incidence of MRSA infections in San Francisco during 2004–05 demonstrated that nearly one in 300 residents had such an infection in the course of a year and that greater than 85% of these infections occurred outside of the healthcare setting. A 2004 study showed that people in the United States with S. aureus infection had, on average, three times the length of hospital stay (14.3 vs. 4.5 days), incurred three times the total cost ($48,824 vs. $14,141), and experienced five times the risk of in-hospital death (11.2% vs. 2.3%) compared with people without this infection. In a meta-analysis of 31 studies, Cosgrove et al. concluded that MRSA bacteremia is associated with increased mortality as compared with MSSA bacteremia (odds ratio = 1.93). In addition, Wyllie et al. report a death rate of 34% within 30 days among people infected with MRSA, a rate similar to the death rate of 27% seen among MSSA-infected people.
In the US, the CDC issued guidelines on October 19, 2006, citing the need for additional research, but declined to recommend routine screening for MRSA.
According to the CDC, the most recent estimates of the incidence of healthcare-associated infections that are attributable to MRSA in the United States indicate a decline in such infection rates. Incidence of MRSA central line-associated blood-stream infections as reported by hundreds of intensive care units decreased 50–70% from 2001 to 2007. A separate system tracking all hospital MRSA bloodstream infections found an overall 34% decrease between 2005 and 2008. In 2010, vancomycin was the drug of choice.
Across Europe, based mostly on data from 2013, seven countries (Iceland, Norway, Sweden, the Netherlands, Denmark, Finland, and Estonia, from lowest to highest) had low levels of hospital-acquired MRSA infections compared to the others, and among countries with higher levels, significant improvements had been made only in Bulgaria, Poland, and the British Isles.
A 1,000-year-old eye salve recipe found in the medieval Bald's Leechbook at the British Library, one of the earliest known medical textbooks, was found to have activity against MRSA in vitro and in skin wounds in mice.
In the media
MRSA is frequently a media topic, especially if well-known personalities have announced that they have or have had the infection. Word of outbreaks of infection appears regularly in newspapers and television news programs. A report on skin and soft-tissue infections in the Cook County jail in Chicago in 2004–05 demonstrated MRSA was the most common cause of these infections among those incarcerated there. Lawsuits filed against those who are accused of infecting others with MRSA are also popular stories in the media.
MRSA is the topic of radio programs, television shows, books, and movies.
Research
Various antibacterial chemical extracts from various species of the sweetgum tree (genus Liquidambar) have been investigated for their activity in inhibiting MRSA. Specifically, these are: cinnamic acid, cinnamyl cinnamate, ethyl cinnamate, benzyl cinnamate, styrene, vanillin, cinnamyl alcohol, 2-phenylpropyl alcohol, and 3-phenylpropyl cinnamate.
The delivery of inhaled antibiotics along with systemic administration to treat MRSA is being developed. This may improve the outcomes of those with cystic fibrosis and other respiratory infections. Phage therapy has been used for years against MRSA in eastern countries, and studies are ongoing in western countries. Host-directed therapeutics, including host kinase inhibitors, as well as antimicrobial peptides, are under study as adjunctive or alternative treatments for MRSA.
A 2015 Cochrane systematic review aimed to assess the effectiveness of wearing gloves, gowns, and masks to help stop the spread of MRSA in hospitals; however, no eligible studies were identified for inclusion. The review authors concluded that randomized controlled trials are needed to determine whether the use of gloves, gowns, and masks reduces the transmission of MRSA in hospitals.
See also
MRSA ST398
References
Further reading
The Centers for Disease Control and Prevention information, prevention, statistics, at risk groups, causes, educational resources, and environmental factors.
National Institute for Occupational Safety and Health information on the bacteria, exposure in the workplace, and reducing risks of being infected.
Antibiotic-resistant bacteria
Bacterial diseases
Healthcare-associated infections
Staphylococcaceae
Infection-related cutaneous conditions
Bacterium-related cutaneous conditions
Cat diseases
Pathovars | Methicillin-resistant Staphylococcus aureus | [
"Biology"
] | 9,678 | [
"Bacteria",
"Antibiotic-resistant bacteria"
] |
192,904 | https://en.wikipedia.org/wiki/Ultimate%20fate%20of%20the%20universe | The ultimate fate of the universe is a topic in physical cosmology, whose theoretical restrictions allow possible scenarios for the evolution and ultimate fate of the universe to be described and evaluated. Based on available observational evidence, deciding the fate and evolution of the universe has become a valid cosmological question, being beyond the mostly untestable constraints of mythological or theological beliefs. Several possible futures have been predicted by different scientific hypotheses, including that the universe might exist for a finite or an infinite duration, or towards explaining the manner and circumstances of its beginning.
Observations made by Edwin Hubble during the 1930s–1950s found that galaxies appeared to be moving away from each other, leading to the currently accepted Big Bang theory. This suggests that the universe began very dense about 13.787 billion years ago, and it has expanded and (on average) become less dense ever since. Confirmation of the Big Bang mostly depends on knowing the rate of expansion, average density of matter, and the physical properties of the mass–energy in the universe.
There is a strong consensus among cosmologists that the universe is "flat" (parallel lines stay parallel) and will continue to expand forever.
Factors that need to be considered in determining the universe's origin and ultimate fate include the average motions of galaxies, the shape and structure of the universe, and the amount of dark matter and dark energy that the universe contains.
Emerging scientific basis
Theory
The theoretical scientific exploration of the ultimate fate of the universe became possible with Albert Einstein's 1915 theory of general relativity. General relativity can be employed to describe the universe on the largest possible scale. There are several possible solutions to the equations of general relativity, and each solution implies a possible ultimate fate of the universe.
Alexander Friedmann proposed several solutions in 1922, as did Georges Lemaître in 1927. In some of these solutions, the universe has been expanding from an initial singularity which was, essentially, the Big Bang.
Observation
In 1929, Edwin Hubble published his conclusion, based on his observations of Cepheid variable stars in distant galaxies, that the universe was expanding. From then on, the beginning of the universe and its possible end have been the subjects of serious scientific investigation.
Big Bang and Steady State theories
In 1927, Georges Lemaître set out a theory that has since come to be called the Big Bang theory of the origin of the universe. In 1948, Fred Hoyle set out his opposing Steady State theory in which the universe continually expanded but remained statistically unchanged as new matter is constantly created. These two theories were active contenders until the 1965 discovery, by Arno Allan Penzias and Robert Woodrow Wilson, of the cosmic microwave background radiation, a fact that is a straightforward prediction of the Big Bang theory, and one that the original Steady State theory could not account for. As a result, the Big Bang theory quickly became the most widely held view of the origin of the universe.
Cosmological constant
Einstein and his contemporaries believed in a static universe. When Einstein found that his general relativity equations could easily be solved in such a way as to allow the universe to be expanding at the present and contracting in the far future, he added to those equations what he called a cosmological constant — essentially a constant energy density, unaffected by any expansion or contraction — whose role was to offset the effect of gravity on the universe as a whole in such a way that the universe would remain static. However, after Hubble announced his conclusion that the universe was expanding, Einstein would write that his cosmological constant was "the greatest blunder of my life."
Density parameter
An important parameter in fate of the universe theory is the density parameter, omega (Ω), defined as the average matter density of the universe divided by a critical value of that density. This selects one of three possible geometries depending on whether Ω is equal to, less than, or greater than 1. These are called, respectively, the flat, open and closed universes. These three adjectives refer to the overall geometry of the universe, and not to the local curving of spacetime caused by smaller clumps of mass (for example, galaxies and stars). If the primary content of the universe is inert matter, as in the dust models popular for much of the 20th century, there is a particular fate corresponding to each geometry. Hence cosmologists aimed to determine the fate of the universe by measuring Ω, or equivalently the rate at which the expansion was decelerating.
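To make the density parameter concrete, the short sketch below computes the critical density from a chosen Hubble constant and classifies the implied geometry from Ω. The Hubble constant of 70 km/s/Mpc and the example Ω values are illustrative assumptions only, not figures quoted by this article.

```python
import math

# Physical constants (SI units)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22   # one megaparsec in metres

def critical_density(h0_km_s_mpc):
    """Critical density rho_c = 3 H0^2 / (8 pi G), in kg/m^3."""
    h0_si = h0_km_s_mpc * 1000.0 / MPC_IN_M  # convert km/s/Mpc to 1/s
    return 3.0 * h0_si**2 / (8.0 * math.pi * G)

def classify_geometry(omega):
    """Return the spatial geometry implied by the density parameter."""
    if omega > 1.0:
        return "closed (spherical)"
    if omega < 1.0:
        return "open (hyperbolic)"
    return "flat (Euclidean)"

# Example with an assumed Hubble constant of 70 km/s/Mpc
rho_c = critical_density(70.0)
print(f"critical density ~ {rho_c:.2e} kg/m^3")
for omega in (0.3, 1.0, 1.7):
    print(f"Omega = {omega}: {classify_geometry(omega)} universe")
```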
Repulsive force
Starting in 1998, observations of supernovas in distant galaxies have been interpreted as consistent with a universe whose expansion is accelerating. Subsequent cosmological theorizing has been designed so as to allow for this possible acceleration, nearly always by invoking dark energy, which in its simplest form is just a positive cosmological constant. In general, dark energy is a catch-all term for any hypothesized field with negative pressure, usually with a density that changes as the universe expands. Some cosmologists are studying whether dark energy which varies in time (due to a portion of it being caused by a scalar field in the early universe) can solve the crisis in cosmology. Upcoming galaxy surveys from the Euclid, Nancy Grace Roman and James Webb space telescopes (and data from next-generation ground-based telescopes) are expected to further develop our understanding of dark energy (specifically whether it is best understood as a constant energy intrinsic to space, as a time varying quantum field or as something else entirely).
Role of the shape of the universe
The current scientific consensus of most cosmologists is that the ultimate fate of the universe depends on its overall shape, how much dark energy it contains, and on the equation of state which determines how the dark energy density responds to the expansion of the universe. Recent observations conclude that, from 7.5 billion years after the Big Bang onward, the expansion rate of the universe has probably been increasing, consistent with the Open Universe theory. However, measurements made by the Wilkinson Microwave Anisotropy Probe suggest that the universe is either flat or very close to flat.
Closed universe
If Ω > 1, the geometry of space is closed like the surface of a sphere. The sum of the angles of a triangle exceeds 180 degrees and there are no parallel lines; all lines eventually meet. The geometry of the universe is, at least on a very large scale, elliptic.
In a closed universe, gravity eventually stops the expansion of the universe, after which it starts to contract until all matter in the universe collapses to a point, a final singularity termed the "Big Crunch", the opposite of the Big Bang. If, however, the universe contains dark energy, then the resulting repulsive force may be sufficient to cause the expansion of the universe to continue forever, even if Ω > 1. This is the case in the currently accepted Lambda-CDM model, where dark energy is found through observations to account for roughly 68% of the total energy content of the universe. According to the Lambda-CDM model, the universe would need to have an average matter density roughly seventeen times greater than its measured value today in order for the effects of dark energy to be overcome and the universe to eventually collapse. This is in spite of the fact that, according to the Lambda-CDM model, any increase in matter density would result in Ω > 1.
Open universe
If Ω < 1, the geometry of space is open, i.e., negatively curved like the surface of a saddle. The angles of a triangle sum to less than 180 degrees, and lines that do not meet are never equidistant; they have a point of least distance and otherwise grow apart. The geometry of such a universe is hyperbolic.
Even without dark energy, a negatively curved universe expands forever, with gravity negligibly slowing the rate of expansion. With dark energy, the expansion not only continues but accelerates. The ultimate fate of an open universe with dark energy is either universal heat death or a "Big Rip" where the acceleration caused by dark energy eventually becomes so strong that it completely overwhelms the effects of the gravitational, electromagnetic and strong binding forces. Conversely, a negative cosmological constant, which would correspond to a negative energy density and positive pressure, would cause even an open universe to re-collapse to a big crunch.
Flat universe
If the average density of the universe exactly equals the critical density so that Ω = 1, then the geometry of the universe is flat: as in Euclidean geometry, the sum of the angles of a triangle is 180 degrees and parallel lines continuously maintain the same distance. Measurements from the Wilkinson Microwave Anisotropy Probe have confirmed the universe is flat within a 0.4% margin of error.
In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero. With dark energy, the expansion rate of the universe initially slows, due to the effects of gravity, but eventually increases, and the ultimate fate of the universe becomes the same as that of an open universe.
Theories about the end of the universe
The fate of the universe may be determined by its density. The preponderance of evidence to date, based on measurements of the rate of expansion and the mass density, favors a universe that will continue to expand indefinitely, resulting in the "Big Freeze" scenario below. However, observations are not conclusive, and alternative models are still possible.
Big Freeze or Heat Death
The heat death of the universe, also known as the Big Freeze (or Big Chill), is a scenario under which continued expansion results in a universe that asymptotically approaches absolute zero temperature. Under this scenario, the universe eventually reaches a state of maximum entropy in which everything is evenly distributed and there are no energy gradients—which are needed to sustain information processing, one form of which is life. This scenario has gained ground as the most likely fate.
In this scenario, stars are expected to form normally for 10^12 to 10^14 (1–100 trillion) years, but eventually the supply of gas needed for star formation will be exhausted. As existing stars run out of fuel and cease to shine, the universe will slowly and inexorably grow darker. Eventually black holes will dominate the universe, but they will disappear over time as they emit Hawking radiation. Over infinite time, there could be a spontaneous entropy decrease by the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem.
The heat death scenario is compatible with any of the three spatial models, but it requires that the universe reaches an eventual temperature minimum. Without dark energy, it could occur only under a flat or hyperbolic geometry. With a positive cosmological constant, it could also occur in a closed universe.
Big Rip
The current Hubble constant defines a rate of acceleration of the universe not large enough to destroy local structures like galaxies, which are held together by gravity, but large enough to increase the space between them. A steady increase in the Hubble constant to infinity would result in all material objects in the universe, starting with galaxies and eventually (in a finite time) all forms, no matter how small, disintegrating into unbound elementary particles, radiation and beyond. As the energy density, scale factor and expansion rate become infinite, the universe ends as what is effectively a singularity.
In the special case of phantom dark energy, which has supposed negative kinetic energy that would result in a higher rate of acceleration than other cosmological constants predict, a more sudden big rip could occur.
Big Crunch
The Big Crunch hypothesis is a symmetric view of the ultimate fate of the universe. Just as the theorized Big Bang started as a cosmological expansion, this theory assumes that the average density of the universe will be enough to stop its expansion and the universe will begin contracting. The result is unknown; a simple estimation would have all the matter and spacetime in the universe collapse into a dimensionless singularity back into how the universe started with the Big Bang, but at these scales unknown quantum effects need to be considered (see Quantum gravity). Recent evidence suggests that this scenario is unlikely but has not been ruled out, as measurements have been available only over a relatively short period of time and could reverse in the future.
This scenario allows the Big Bang to occur immediately after the Big Crunch of a preceding universe. If this happens repeatedly, it creates a cyclic model, which is also known as an oscillatory universe. The universe could then consist of an infinite sequence of finite universes, with each finite universe ending with a Big Crunch that is also the Big Bang of the next universe. A problem with the cyclic universe is that it does not reconcile with the second law of thermodynamics, as entropy would build up from oscillation to oscillation and cause the eventual heat death of the universe. Current evidence also indicates the universe is not closed. This has caused cosmologists to abandon the oscillating universe model. A somewhat similar idea is embraced by the cyclic model, but this idea evades heat death because of an expansion of the branes that dilutes entropy accumulated in the previous cycle.
Big Bounce
The Big Bounce is a theorized scientific model related to the beginning of the known universe. It derives from the oscillatory universe or cyclic repetition interpretation of the Big Bang where the first cosmological event was the result of the collapse of a previous universe.
According to one version of the Big Bang theory of cosmology, in the beginning the universe was infinitely dense. Such a description seems to be at odds with other more widely accepted theories, especially quantum mechanics and its uncertainty principle. Therefore, quantum mechanics has given rise to an alternative version of the Big Bang theory, specifically that the universe tunneled into existence and had a finite density consistent with quantum mechanics, before evolving in a manner governed by classical physics. Also, if the universe is closed, this theory would predict that once this universe collapses it will spawn another universe in an event similar to the Big Bang after a universal singularity is reached or a repulsive quantum force causes re-expansion.
In simple terms, this theory states that the universe will continuously repeat the cycle of a Big Bang, followed by a Big Crunch.
Cosmic uncertainty
Each possibility described so far is based on a simple form for the dark energy equation of state. However, as the name is meant to imply, little is now known about the physics of dark energy. If the theory of inflation is true, the universe went through an episode dominated by a different form of dark energy in the first moments of the Big Bang, but inflation ended, indicating an equation of state more complex than those assumed for present-day dark energy. It is possible that the dark energy equation of state could change again, resulting in an event that would have consequences which are difficult to predict or parameterize. As the nature of dark energy and dark matter remain enigmatic, even hypothetical, the possibilities surrounding their coming role in the universe are unknown.
Other possible fates of the universe
There are also some possible events, such as the Big Slurp, which would seriously harm the universe, although the universe as a whole would not be completely destroyed as a result.
Big Slurp
This theory posits that the universe currently exists in a false vacuum and that it could become a true vacuum at any moment.
In order to best understand the false vacuum collapse theory, one must first understand the Higgs field which permeates the universe. Much like an electromagnetic field, it varies in strength based upon its potential. A true vacuum exists so long as the universe exists in its lowest energy state, in which case the false vacuum theory is irrelevant. However, if the vacuum is not in its lowest energy state (a false vacuum), it could tunnel into a lower-energy state. This is called vacuum decay. This has the potential to fundamentally alter the universe: in some scenarios, even the various physical constants could have different values, severely affecting the foundations of matter, energy, and spacetime. It is also possible that all structures will be destroyed instantaneously, without any forewarning.
However, only a portion of the universe would be destroyed by the Big Slurp while most of the universe would still be unaffected because galaxies located further than 4,200 megaparsecs (13 billion light-years) away from each other are moving away from each other faster than the speed of light while the Big Slurp itself cannot expand faster than the speed of light. To place this in context, the size of the observable universe is currently about 46 billion light years in all directions from earth. The universe is thought to be that size or larger.
Observational constraints on theories
Choosing among these rival scenarios is done by 'weighing' the universe, for example, measuring the relative contributions of matter, radiation, dark matter, and dark energy to the critical density. More concretely, competing scenarios are evaluated against data on galaxy clustering and distant supernovas, and on the anisotropies in the cosmic microwave background.
See also
Alan Guth
Andrei Linde
Anthropic principle
Arrow of time
Cosmological horizon
Cyclic model
Freeman Dyson
General relativity
John D. Barrow
Kardashev scale
Multiverse
Shape of the universe
Timeline of the far future
Zero-energy universe
References
Further reading
External links
Baez, J., 2004, "The End of the Universe".
Hjalmarsdotter, Linnea, 2005, "Cosmological parameters."
A Brief History of the End of Everything, a BBC Radio 4 series.
Cosmology at Caltech.
Jamal Nazrul Islam (1983): The Ultimate Fate of the Universe. Cambridge University Press, Cambridge, England. . (Digital print version published in 2009).
Physical cosmology | Ultimate fate of the universe | [
"Physics",
"Astronomy"
] | 3,626 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
192,989 | https://en.wikipedia.org/wiki/Big%20Rip | In physical cosmology, the Big Rip is a hypothetical cosmological model concerning the ultimate fate of the universe, in which the matter of the universe, from stars and galaxies to atoms and subatomic particles, and even spacetime itself, is progressively torn apart by the expansion of the universe at a certain time in the future, until distances between particles will infinitely increase.
According to the standard model of cosmology, the scale factor of the universe is accelerating, and, in the future era of cosmological constant dominance, will increase exponentially. However, this expansion is similar for every moment of time (hence the exponential law – the expansion of a local volume is the same number of times over the same time interval), and is characterized by an unchanging, small Hubble constant, effectively ignored by any bound material structures. By contrast, in the Big Rip scenario the Hubble constant increases to infinity in a finite time. According to recent studies, the universe is currently set for a constant expansion and heat death, because the equation of state parameter w = −1.
The possibility of a sudden rip singularity arises only for hypothetical matter (phantom energy) with implausible physical properties.
Overview
The truth of the hypothesis relies on the type of dark energy present in our universe. The type that could prove this hypothesis is a constantly increasing form of dark energy, known as phantom energy. If the dark energy in the universe increases without limit, it could overcome all forces that hold the universe together. The key value is the equation of state parameter w, the ratio between the dark energy pressure and its energy density. If −1 < w < 0, the expansion of the universe tends to accelerate, but the dark energy tends to dissipate over time, and the Big Rip does not happen. Phantom energy has w < −1, which means that its density increases as the universe expands.
A universe dominated by phantom energy is an accelerating universe, expanding at an ever-increasing rate. However, this implies that the size of the observable universe and the cosmological event horizon is continually shrinking – the distance at which objects can influence an observer becomes ever closer, and the distance over which interactions can propagate becomes ever shorter. When the size of the horizon becomes smaller than any particular structure, no interaction by any of the fundamental forces can occur between the most remote parts of the structure, and the structure is "ripped apart". The progression of time itself will stop. The model implies that after a finite time there will be a final singularity, called the "Big Rip", in which the observable universe eventually reaches zero size and all distances diverge to infinite values.
The authors of this hypothesis, led by Robert R. Caldwell of Dartmouth College, calculate the time from the present to the Big Rip to be
t_rip − t0 ≈ 2 / (3 |1 + w| H0 √(1 − Ωm)),
where w is defined above, H0 is Hubble's constant and Ωm is the present value of the density of all the matter in the universe.
Observations of galaxy cluster speeds by the Chandra X-ray Observatory seem to suggest the value of w is between approximately −0.907 and −1.075, meaning the Big Rip cannot be definitively ruled out. Based on the above equation, if the observation determines that the value of w is less than −1, but greater than or equal to −1.075, the Big Rip would occur approximately 152 billion years into the future at the earliest. More recent data from Planck mission indicates the value of w to be −1.028 (±0.031), pushing the earliest possible time of Big Rip to be approximately 200 billion years into the future.
Authors' example
In their paper, the authors consider a hypothetical example with w = −1.5, H0 = 70 km/s/Mpc, and Ωm = 0.3, in which case the Big Rip would happen approximately 22 billion years from the present. In this scenario, galaxies would first be separated from each other about 200 million years before the Big Rip. About 60 million years before the Big Rip, galaxies would begin to disintegrate as gravity becomes too weak to hold them together. Planetary systems like the Solar System would become gravitationally unbound about three months before the Big Rip, and planets would fly off into the rapidly expanding universe. In the last minutes, stars and planets would be torn apart, and the now-dispersed atoms would be destroyed about 10^−19 seconds before the end (the atoms will first be ionized as electrons fly off, followed by the dissociation of the atomic nuclei). At the time the Big Rip occurs, even spacetime itself would be ripped apart and the scale factor would be infinite.
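A minimal sketch of this calculation is given below; it evaluates the time-to-rip expression quoted above for the authors' illustrative parameters (w = −1.5, H0 = 70 km/s/Mpc, Ωm = 0.3) and should return a value close to the 22 billion years mentioned. The unit-conversion constants are standard approximate values assumed here.

```python
import math

KM_PER_MPC = 3.0857e19       # kilometres in one megaparsec
SECONDS_PER_GYR = 3.156e16   # seconds in one billion years

def time_to_big_rip_gyr(w, h0_km_s_mpc, omega_m):
    """t_rip - t0 ~ 2 / (3 |1 + w| H0 sqrt(1 - Omega_m)), returned in Gyr."""
    if w >= -1.0:
        raise ValueError("A Big Rip requires phantom energy with w < -1")
    h0_per_s = h0_km_s_mpc / KM_PER_MPC   # H0 converted to 1/s
    t_seconds = 2.0 / (3.0 * abs(1.0 + w) * h0_per_s * math.sqrt(1.0 - omega_m))
    return t_seconds / SECONDS_PER_GYR

# Authors' illustrative example: w = -1.5, H0 = 70 km/s/Mpc, Omega_m = 0.3
print(f"{time_to_big_rip_gyr(-1.5, 70.0, 0.3):.1f} Gyr")   # roughly 22 Gyr
```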
Observed universe
Evidence indicates w to be very close to −1 in our universe, which makes w the dominating term in the equation. The closer that w is to −1, the closer the denominator is to zero and the further the Big Rip is in the future. If w were exactly equal to −1, the Big Rip could not happen, regardless of the values of H0 or Ωm.
According to the latest cosmological data available, the uncertainties are still too large to discriminate among the three cases w < −1, w = −1, and w > −1.
Moreover, it is nearly impossible to measure w to be exactly −1 due to statistical fluctuations. This means that the measured value of w can be arbitrarily close to −1 but not exactly at −1; hence, the earliest possible date of the Big Rip can be pushed back further with more accurate measurements, but the Big Rip itself is very difficult to completely rule out.
See also
"Last Contact" – A short story describing what Big Rip would be like from an everyday perspective
References
External links
Dark energy
2003 introductions
2003 in science
Ultimate fate of the universe
Physical cosmology | Big Rip | [
"Physics",
"Astronomy"
] | 1,198 | [
"Unsolved problems in astronomy",
"Astronomical sub-disciplines",
"Physical quantities",
"Concepts in astronomy",
"Theoretical physics",
"Unsolved problems in physics",
"Astrophysics",
"Energy (physics)",
"Dark energy",
"Wikipedia categories named after physical quantities",
"Physical cosmology"... |
11,253,084 | https://en.wikipedia.org/wiki/Atomic%20form%20factor | In physics, the atomic form factor, or atomic scattering factor, is a measure of the scattering amplitude of a wave by an isolated atom. The atomic form factor depends on the type of scattering, which in turn depends on the nature of the incident radiation, typically X-ray, electron or neutron. The common feature of all form factors is that they involve a Fourier transform of a spatial density distribution of the scattering object from real space to momentum space (also known as reciprocal space). For an object with spatial density distribution, ρ(r), the form factor, f(Q), is defined as
f(Q) = ∫ ρ(r) e^(iQ·r) d³r,
where ρ(r) is the spatial density of the scatterer about its center of mass (r = 0), and Q is the momentum transfer. As a result of the nature of the Fourier transform, the broader the distribution of the scatterer ρ(r) in real space, the narrower the distribution of f(Q) in Q; i.e., the faster the decay of the form factor.
For crystals, atomic form factors are used to calculate the structure factor for a given Bragg peak of a crystal.
X-ray form factors
X-rays are scattered by the electron cloud of the atom and hence the scattering amplitude of X-rays increases with the atomic number, Z, of the atoms in a sample. As a result, X-rays are not very sensitive to light atoms, such as hydrogen and helium, and there is very little contrast between elements adjacent to each other in the periodic table. For X-ray scattering, the relevant distribution in the above equation is the electron charge density about the nucleus, and the form factor is the Fourier transform of this quantity. The assumption of a spherical distribution is usually good enough for X-ray crystallography.
In general the X-ray form factor is complex but the imaginary components only become large near an absorption edge. Anomalous X-ray scattering makes use of the variation of the form factor close to an absorption edge to vary the scattering power of specific atoms in the sample by changing the energy of the incident x-rays hence enabling the extraction of more detailed structural information.
Atomic form factor patterns are often represented as a function of the magnitude of the scattering vector q. Herein k = 2π/λ is the wavenumber and θ is the scattering angle between the incident x-ray beam and the detector measuring the scattered intensity, so that q = 2k sin(θ/2), while λ is the wavelength of the X-rays. One interpretation of the scattering vector is that it is the resolution or yardstick with which the sample is observed. Over the tabulated range of scattering vectors (in Å−1), the atomic form factor is well approximated by a sum of Gaussians of the form
f(q) = Σi ai exp(−bi (q/4π)²) + c,
where the values of ai, bi, and c are tabulated here.
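As an illustration of how such a parameterization is evaluated, the sketch below implements the Gaussian sum for a set of placeholder coefficients. The a_i, b_i, and c values shown are not the tabulated values for any real element and would need to be replaced with published coefficients.

```python
import math

def form_factor(q, a, b, c):
    """Evaluate f(q) = sum_i a_i * exp(-b_i * (q / (4*pi))**2) + c.

    q : magnitude of the scattering vector (1/angstrom)
    a, b : sequences of Gaussian coefficients (b_i in angstrom^2)
    c : constant term
    """
    s2 = (q / (4.0 * math.pi)) ** 2
    return sum(ai * math.exp(-bi * s2) for ai, bi in zip(a, b)) + c

# Placeholder coefficients purely for illustration (not a real element's values)
a_demo = [2.0, 1.5, 1.0, 0.5]
b_demo = [10.0, 5.0, 1.0, 0.2]
c_demo = 0.3

for q in (0.0, 1.0, 5.0):
    print(f"q = {q:4.1f} 1/A  ->  f(q) = {form_factor(q, a_demo, b_demo, c_demo):.3f}")
```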
Electron form factor
The relevant distribution here is the potential distribution of the atom, and the electron form factor is the Fourier transform of this. The electron form factors are normally calculated from X-ray form factors using the Mott–Bethe formula. This formula takes into account both elastic electron-cloud scattering and elastic nuclear scattering.
Neutron form factor
There are two distinct scattering interactions of neutrons with atoms. Both are used in the investigation of the structure and dynamics of condensed matter: they are termed nuclear (sometimes also termed chemical) and magnetic scattering.
Nuclear scattering
Nuclear scattering of the free neutron by the nucleus is mediated by the strong nuclear force. The wavelength of thermal (several ångströms) and cold neutrons (up to tens of ångströms) typically used for such investigations is 4–5 orders of magnitude larger than the dimension of the nucleus (femtometres). The free neutrons in a beam travel in a plane wave; for those that undergo nuclear scattering from a nucleus, the nucleus acts as a secondary point source, and radiates scattered neutrons as a spherical wave. (Although a quantum phenomenon, this can be visualized in simple classical terms by the Huygens–Fresnel principle.) In this case the relevant distribution is the spatial density distribution of the nucleus, which is an infinitesimal point (delta function) with respect to the neutron wavelength. The delta function forms part of the Fermi pseudopotential, by which the free neutron and the nuclei interact. The Fourier transform of a delta function is unity; therefore, it is commonly said that neutrons "do not have a form factor;" i.e., the scattered amplitude is independent of the momentum transfer Q.
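One way to see this limit numerically is sketched below, assuming an isotropic Gaussian density of width σ, whose form factor is exp(−(Qσ)²/2); as σ shrinks toward a point (approaching a delta function), f(Q) approaches 1 at every Q. The particular σ and Q values are arbitrary choices for illustration.

```python
import math

def gaussian_form_factor(q, sigma):
    """Form factor of a normalized isotropic Gaussian density of width sigma: exp(-(q*sigma)^2 / 2)."""
    return math.exp(-0.5 * (q * sigma) ** 2)

# As the scatterer shrinks toward a point (delta function), f(Q) -> 1 for all Q
for sigma in (1.0, 0.1, 0.001):
    values = [gaussian_form_factor(q, sigma) for q in (0.5, 1.0, 5.0)]
    print(f"sigma = {sigma:6.3f}:", " ".join(f"{v:.4f}" for v in values))
```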
Since the interaction is nuclear, each isotope has a different scattering amplitude. This Fourier transform is scaled by the amplitude of the spherical wave, which has dimensions of length. Hence, the amplitude of scattering that characterizes the interaction of a neutron with a given isotope is termed the scattering length, b. Neutron scattering lengths vary erratically between neighbouring elements in the periodic table and between isotopes of the same element. They may only be determined experimentally, since the theory of nuclear forces is not adequate to calculate or predict b from other properties of the nucleus.
Magnetic scattering
Although neutral, neutrons also have a nuclear spin. They are a composite fermion and hence have an associated magnetic moment. In neutron scattering from condensed matter, magnetic scattering refers to the interaction of this moment with the magnetic moments arising from unpaired electrons in the outer orbitals of certain atoms. It is the spatial distribution of these unpaired electrons about the nucleus that is the relevant distribution for magnetic scattering.
Since these orbitals are typically of a comparable size to the wavelength of the free neutrons, the resulting form factor resembles that of the X-ray form factor. However, this neutron-magnetic scattering is only from the outer electrons, rather than being heavily weighted by the core electrons, which is the case for X-ray scattering. Hence, in strong contrast to the case for nuclear scattering, the scattering object for magnetic scattering is far from a point source; it is still more diffuse than the effective size of the source for X-ray scattering, and the resulting Fourier transform (the magnetic form factor) decays more rapidly than the X-ray form factor. Also, in contrast to nuclear scattering, the magnetic form factor is not isotope dependent, but is dependent on the oxidation state of the atom.
References
Atomic physics | Atomic form factor | [
"Physics",
"Chemistry"
] | 1,227 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
17,769,499 | https://en.wikipedia.org/wiki/Prostate%20brachytherapy | Brachytherapy is a type of radiotherapy, or radiation treatment, offered to certain cancer patients. There are two types of brachytherapy – high dose-rate (HDR) and low dose-rate (LDR). LDR brachytherapy is the one most commonly used to treat prostate cancer. It may be referred to as 'seed implantation' or it may be called 'pinhole surgery'.
In LDR brachytherapy, tiny radioactive particles the size of a grain of rice (Figure 1) are implanted directly into, or very close to, the tumour. These particles are known as 'seeds', and they can be inserted linked together as strands, or individually. The seeds deliver high doses of radiation to the tumour without affecting the normal healthy tissues around it. The procedure is less damaging than conventional radiation therapy, where the radioactive beam is delivered from outside the body and must pass through other tissues before reaching the tumour.
In addition to seeds, a newer polymer-encapsulated LDR source is available. The source features 103Pd along the full length of the device which is contained using low-Z polymers. The polymer construction and linear radioactive distribution of this source creates a very homogenous dose distribution.
LDR prostate brachytherapy (seed or line source implantation) is a proven treatment for low to high risk localized prostate cancer (when the cancer is contained within the prostate). Under a general anaesthetic, the radioactive seeds are injected through fine needles directly into the prostate, so that the radiotherapy can destroy the cancer cells. The seeds are permanently implanted. They remain in place but gradually become inactive as the radioactivity decays naturally and safely over time. Unlike traditional surgery, LDR brachytherapy requires no incisions and is normally carried out as an outpatient (day case) procedure. Sometimes a single overnight stay in hospital is required. Patients usually recover quickly from LDR brachytherapy. Most men can return to work or normal daily activities within a few days. LDR brachytherapy has fewer side-effects with less risk of incontinence or impotence than other treatment options. It is a popular alternative to major surgery (conventional radical prostatectomy or laparoscopic (keyhole surgery) radical prostatectomy).
Isotopes used include iodine-125 (half-life 59.4 days), palladium-103 (half-life 17 days), and cesium-131 (half-life 9.7 days).
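Because the implanted sources decay exponentially, the fraction of the initial activity remaining after a given time follows directly from the half-life, A/A0 = 2^(−t/t_half). The sketch below applies this to the three isotopes listed above; the 60-day example duration is an arbitrary choice for illustration.

```python
def remaining_fraction(days_elapsed, half_life_days):
    """Fraction of the initial activity left after days_elapsed: A/A0 = 2**(-t / t_half)."""
    return 2.0 ** (-days_elapsed / half_life_days)

half_lives = {"iodine-125": 59.4, "palladium-103": 17.0, "cesium-131": 9.7}

# Activity remaining about two months (60 days) after implantation
for isotope, t_half in half_lives.items():
    frac = remaining_fraction(60.0, t_half)
    print(f"{isotope:14s}: {frac * 100:5.1f}% of initial activity remains")
```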
Procedure
When LDR prostate brachytherapy (seed or polymer source implantation) is carried out, an ultrasound probe is inserted into the rectum (back passage), and images from this probe are used to assess the size and shape of the prostate gland. This is done so that the doctor can identify how to best deliver the right radiation dose for each patient. Then the seeds are inserted in the exact locations identified at the beginning of the procedure. This usually takes 1–2 hours. No surgical incision is required; instead, the radioactive seeds are inserted into the prostate gland using needles which pass through the skin between the scrotum and the rectum (the perineum) and an ultrasound probe is used to accurately guide them to their final position. The needles are put into the target positions and between 70 and 150 seeds are placed into the prostate. The needles are then removed. A grid-like template is used to guide the needles into the perineal area; co-ordinates or 'map references' on this grid or template are used to pinpoint the exact positions in the prostate where the seeds are to be placed. Figure 3 shows how the seeds are positioned to target the tumour. The doctor uses ultrasound and X-ray pictures to make sure the seeds are in the right place. A special computer software program is used to make sure the prostate gland is completely covered by just the right dose of radiation (see Figure 4) to ensure that all cancer cells present in the prostate have been completely treated.
Once in place, the seeds or sources slowly begin to release their radiation. While the sources are active, the patient must observe some basic precautions. Travel and contact with adults are fine; however, for the first two months following seed implantation, small children and pregnant women should not be in direct contact with the patient for prolonged periods – for example children should not sit on the patient's knee for any length of time. Sexual intercourse can start again within a few weeks. Very occasionally a seed can be expelled in the semen on ejaculation; if this does happen, it will usually occur in the first few ejaculations, so it is advisable to use a condom for the first two or three occasions of intercourse following LDR brachytherapy.
Patients can usually return to normal activities and work within a few days. They should expect to be seen for follow-up after four to six weeks, and then every three months for a year, six-monthly up to five years, then annually.
Indications
LDR prostate brachytherapy (seed or polymer source implantation) is recommended as a treatment for patients whose cancer is at an early stage (cancer stages T1 to T2), and which has not spread beyond the prostate (localised disease). Doctors use a combination of factors such as cancer stage and grade, PSA level, Gleason score and urine flow (bladder emptying) tests to help them decide if a patient is suitable for LDR brachytherapy. Patients should ask their doctors about the results of these different tests and how they influence the type of treatment they may be offered. LDR brachytherapy in combination with external beam radiotherapy may also be recommended for patients with later-stage cancer and higher PSA level and Gleason score.
Risks and benefits
Since its introduction in the mid-1980s, prostate brachytherapy has become a well-established treatment option for patients with early, localised disease. In the US, over 50,000 eligible prostate cancer patients a year are treated using this method. Brachytherapy is now in widespread use across the world. In the UK, prostate brachytherapy is provided at a majority of cancer centres and thousands of patients have been treated.
Clinical benefits
LDR prostate brachytherapy on its own has been shown to be highly effective for the treatment of early prostate cancer. The rate of survival with no increase in average PSA levels after LDR brachytherapy is similar to that achieved with external beam radiotherapy and radical prostatectomy. However LDR brachytherapy has a lower risk of some of the complications associated with other treatment options.
Side effects
LDR prostate brachytherapy (seed or polymer source implantation) is a very effective treatment for low to high risk localized cancer, with patients rapidly returning to normal activities. Although patients may experience urinary problems for the first six months or so after their implant, these usually settle down and lasting problems are rare, only occurring in about 1–2% of patients. The complications include:
urinary incontinence, mainly stress incontinence or urge incontinence, difficulty with urination, and urinary retention. According to a review published in 2002, in the long term, significant obstructive symptoms or persistent urinary retention requiring transurethral resection of the prostate (TURP) occurred in 0–8.7% of patients. Urinary incontinence was found in up to 19% of patients treated by implant who had not had a previous TURP; however, the percentage was much higher in those who had (up to 86%). The stress incontinence can be regarded as a result of direct radiation damage to the external urethral sphincter. Treatment may include lifestyle changes, bladder training, and the use of incontinence pads. Surgical treatment in those who fail initial therapy can include the use of a urethral sling or an artificial urinary sphincter.
bowel problems. Some patients (less than 10%) report an increase in bowel problems (diarrhea or urgency of the bowels), but again this usually settles down without further treatment. Radiation proctitis can be found in 0.5–21.4% of patients who received prostate brachytherapy due to the proximity of the prostate and the large bowel, with significant injury (fistula) occurring in 1–2.4% of patients.
erectile dysfunction (difficulty getting and/or keeping an erection; impotence). The problem affects 25 to 50% of men who receive prostate brachytherapy, which is less than that observed in men receiving standard external beam radiation. Within three years, few men see significant improvement in potency, and occasionally the problem may worsen. Treatment options include the use of medications (such as sildenafil and tadalafil, or intracavernous ones), vacuum constriction devices, or penile implants.
In a 2006 study looking at patients' quality of life, LDR brachytherapy compared favourably with other treatment options. Table 1 summarises the more common side effects related with each form of treatment and how these may affect patient recovery.
References
External links
Prostate UK (UK)
Radiation therapy procedures
Medical physics
Male genital procedures
Prostatic procedures
Prostate cancer | Prostate brachytherapy | [
"Physics"
] | 1,929 | [
"Applied and interdisciplinary physics",
"Medical physics"
] |
17,774,634 | https://en.wikipedia.org/wiki/Selective%20area%20epitaxy | Selective area epitaxy is the local growth of an epitaxial layer through a patterned amorphous dielectric mask (typically SiO2 or Si3N4) deposited on a semiconductor substrate. Semiconductor growth conditions are selected to ensure epitaxial growth on the exposed substrate, but not on the dielectric mask. SAE can be executed with various epitaxial growth methods such as molecular beam epitaxy (MBE), metalorganic vapour phase epitaxy (MOVPE) and chemical beam epitaxy (CBE). By SAE, semiconductor nanostructures such as quantum dots and nanowires can be grown at their designed locations.
Concepts
Mask
The mask used in SAE is usually an amorphous dielectric such as SiO2 or Si3N4, deposited on the semiconductor substrate. The patterns (holes) in the mask are fabricated using standard microfabrication techniques, lithography and etching. A variety of lithography and etching techniques can be applied to SAE mask fabrication. Suitable techniques depend on the pattern feature size and the materials used. Electron beam lithography is widely used due to its nanometer resolution. The mask should withstand the high-temperature growth conditions of semiconductors in order to limit the growth to the patterned holes in the mask.
Selectivity
Selectivity in SAE expresses the degree to which growth is suppressed on the mask. The selectivity of the growth originates from the property that atoms do not favor sticking to the mask, i.e. they have a low sticking coefficient. The sticking coefficient can be reduced by the choice of mask material, by lowering the material flow, and by raising the growth temperature. High selectivity, i.e. no growth on the mask, is desired.
Growth mechanism
The epitaxial growth mechanism in SAE can be divided into two parts: growth before the mask level and growth after the mask level.
Growth before mask level
Before reaching the mask level, the growth is confined to the holes in the mask. The growth extends the crystal of the substrate, following the pattern of the mask, so the grown semiconductor takes on the shape of the pattern. This is exploited in template-assisted selective area epitaxy (TASE), where deep patterns in the mask are used as a template for the whole semiconductor structure and the growth is stopped before the mask level.
Growth after the mask level
After the mask level, the growth can proceed in any direction, because the mask no longer limits the growth direction. The growth continues in the direction which is energetically favorable for the crystal to expand under the existing growth conditions. This is referred to as faceted growth, because it is favorable for the crystal to form facets. Therefore, clear crystalline facets are seen in SAE-grown semiconductor structures. The growth direction, or more precisely the growth rates of the different crystal facets, can be tuned. Growth temperature, V/III ratio, and the orientation and shape of the pattern are properties that affect the growth rates of facets. By adjusting these properties, the structure of the grown semiconductor can be engineered. SAE-grown nanowires and epitaxial lateral overgrowth (ELO) structures are examples of structures that are engineered through SAE growth conditions. In nanowire growth, the growth rate of the lateral facets is suppressed and the structure grows only in the vertical direction. In ELO, the growth is initiated in the mask openings, and after the mask level the growth proceeds laterally on the mask, eventually joining the grown semiconductor structures together. The main principle in ELO is to reduce the defects caused by the lattice mismatch between the substrate and the grown semiconductor.
Factors that affect SAE
Temperature of growth
V/III ratio
Choice of mask material
Orientation of window
Mask to window ratio
Quality of mask
Shape of the pattern
Techniques
SAE can be achieved in various epitaxial growth techniques, which are listed below.
Metalorganic vapour-phase epitaxy
Molecular beam epitaxy
Chemical beam epitaxy
Liquid phase epitaxy
Applications
Nanowires
Quantum dots
III/V-Silicon integration
Topological quantum computer
References
Chemical vapor deposition
Semiconductor device fabrication | Selective area epitaxy | [
"Chemistry",
"Materials_science"
] | 842 | [
"Chemical vapor deposition",
"Semiconductor device fabrication",
"Microtechnology"
] |
17,776,831 | https://en.wikipedia.org/wiki/Don%20VandenBerg | Dr. Don VandenBerg is Professor Emeritus of astronomy (Ph.D. Australian National University) at the department of physics and astronomy at the University of Victoria, British Columbia, Canada. He is internationally acclaimed for his work on modelling stars of different size and composition.
Using basic input physics, such as nuclear reaction rates and opacities, VandenBerg uses computer models to help understand the structure and evolution of stars. These models, which are tightly constrained by observations, provide insight into stellar populations and will ultimately be used to synthesize the stellar populations of distant galaxies.
VandenBerg has the most-cited research papers of any astronomer in Canada. The stellar isochrones resulting from his models are widely used throughout the world.
References
Notes
Bibliography
Maclean's Magazine
Science Magazine article
ISI Highly Cited Researchers
Don VandenBerg official site
20th-century Canadian astronomers
Academic staff of the University of Victoria
Living people
Year of birth missing (living people)
21st-century Canadian astronomers | Don VandenBerg | [
"Astronomy"
] | 199 | [
"Astronomers",
"Astronomer stubs",
"Astronomy stubs"
] |
17,780,181 | https://en.wikipedia.org/wiki/Organ-limited%20amyloidosis | Organ-limited amyloidosis is a category of amyloidosis where the distribution can be associated primarily with a single organ. It is contrasted to systemic amyloidosis, and it can be caused by several different types of amyloid.
In almost all of the organ-specific pathologies, there is debate as to whether the amyloid plaques are the causal agent of the disease or instead a downstream consequence of a common idiopathic agent. The associated proteins are indicated in parentheses.
Neurological amyloid
Alzheimer's disease (Aβ 39-43)
Parkinson's disease (alpha-synuclein)
Huntington's disease (huntingtin protein)
Transmissible spongiform encephalopathies caused by prion protein (PrP) were sometimes classed as amyloidoses, as one of the four pathological features in diseased tissue is the presence of amyloid plaques. These diseases include:
Creutzfeldt–Jakob disease (PrP in cerebrum)
Kuru (diffuse PrP deposits in brain)
Fatal familial insomnia (PrP in thalamus)
Bovine spongiform encephalopathy (PrP in cerebrum of cows)
Cardiovascular amyloid
Cardiac amyloidosis
Senile cardiac amyloidosis, which may cause heart failure
Other
Amylin deposition can occur in the pancreas in some cases of type 2 diabetes mellitus
Cerebral amyloid angiopathy
References
External links
Amyloidosis
Histopathology
Structural proteins | Organ-limited amyloidosis | [
"Chemistry"
] | 305 | [
"Histopathology",
"Microscopy"
] |
17,780,458 | https://en.wikipedia.org/wiki/Matter%20power%20spectrum | The matter power spectrum describes the density contrast of the universe (the difference between the local density and the mean density) as a function of scale. It is the Fourier transform of the matter correlation function. On large scales, gravity competes with cosmic expansion, and structures grow according to linear theory. In this regime, the density contrast field is Gaussian, Fourier modes evolve independently, and the power spectrum is sufficient to completely describe the density field. On small scales, gravitational collapse is non-linear, and can only be computed accurately using N-body simulations. Higher-order statistics are necessary to describe the full field at small scales.
Definition
Let represent the matter overdensity, a dimensionless quantity defined as:
where is the average matter density over all space.
The power spectrum is most commonly understood as the Fourier transform of the autocorrelation function, , mathematically defined as:
for .
This then determines the easily derived relationship to the power spectrum, , that is
Equivalently, letting denote the Fourier transform of the overdensity , the power spectrum is given by the following average over Fourier space:
(note that is not an overdensity but the Dirac delta function).
Since has dimensions of (length)3, the power spectrum is also sometimes given in terms of the dimensionless function:
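The formulas elided in this section presumably take the standard forms; a hedged sketch in conventional notation, with the overdensity written as \delta, the correlation function as \xi, the power spectrum as P, and \delta_D the Dirac delta:
\delta(\mathbf{x}) = \frac{\rho(\mathbf{x}) - \bar{\rho}}{\bar{\rho}}, \qquad
\xi(\mathbf{r}) = \langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}) \rangle, \qquad
P(k) = \int \xi(\mathbf{r})\, e^{-i\mathbf{k}\cdot\mathbf{r}}\, \mathrm{d}^3 r,
\langle \tilde{\delta}(\mathbf{k})\,\tilde{\delta}^{*}(\mathbf{k}') \rangle = (2\pi)^3\, P(k)\, \delta_{D}(\mathbf{k}-\mathbf{k}'), \qquad
\Delta^2(k) = \frac{k^3}{2\pi^2}\, P(k).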
Development according to gravitational expansion
If the autocorrelation function describes the probability of a galaxy at a distance from another galaxy, the matter power spectrum decomposes this probability into characteristic lengths, , and its amplitude describes the degree to which each characteristic length contributes to the total over-probability.
The overall shape of the matter power spectrum is best understood in terms of the linear perturbation theory analysis of the growth of structure, which predicts to first order that the power spectrum grows according to:
Where is the linear growth factor in the density, that is to first order , and is commonly referred to as the primordial matter power spectrum. Determining the primordial is a question that relates to the physics of inflation.
The simplest is the Harrison–Zeldovich spectrum (named after Edward R. Harrison and Yakov Zeldovich), which characterizes according to a power law, . More advanced primordial spectra include the use of a transfer function which mediates the transition from the universe being radiation dominated to being matter dominated.
The broad shape of the matter power spectrum is determined by the growth of large-scale structure, with the turnover (the point where the spectrum goes from increasing with k to decreasing with k) at , corresponding to (where h is the dimensionless Hubble constant). The co-moving wavenumber corresponding to the maximum power in the mass power spectrum is determined by the size of the cosmic particle horizon at the time of matter-radiation equality, and therefore depends on the mean density of matter and to a lesser extent on the number of neutrino families (), , for . The at smaller k (equivalently, larger scales) corresponds to scales which were larger than the particle horizon at the time of the transition from the regime of radiation dominance to that of matter dominance.
At linear order in perturbations , the power spectrum's broad shape follows
where is the scalar spectral index.
References
Theuns, Physical Cosmology
Michael L. Norman, Simulating Galaxy Clusters
Physical cosmology | Matter power spectrum | [
"Physics",
"Astronomy"
] | 678 | [
"Astronomical sub-disciplines",
"Theoretical physics",
"Physical cosmology",
"Astrophysics"
] |
17,782,532 | https://en.wikipedia.org/wiki/Susceptibility%20weighted%20imaging | Susceptibility weighted imaging (SWI), originally called BOLD venographic imaging, is an MRI sequence that is exquisitely sensitive to venous blood, hemorrhage and iron storage. SWI uses a fully flow compensated, long echo, gradient recalled echo (GRE) pulse sequence to acquire images. This method exploits the susceptibility differences between tissues and uses the phase image to detect these differences. The magnitude and phase data are combined to produce an enhanced contrast magnitude image. The imaging of venous blood with SWI is a blood-oxygen-level dependent (BOLD) technique which is why it was (and is sometimes still) referred to as BOLD venography. Due to its sensitivity to venous blood SWI is commonly used in traumatic brain injuries (TBI) and for high resolution brain venographies but has many other clinical applications. SWI is offered as a clinical package by Philips and Siemens but can be run on any manufacturer's machine at field strengths of 1.0 T, 1.5 T, 3.0 T and higher.
Acquisition and image processing
SWI uses a fully velocity compensated, RF spoiled, high-resolution, 3D gradient recalled echo (GRE) scan. Both the magnitude and phase images are saved, and the phase image is high pass (HP) filtered to remove unwanted artifacts. The magnitude image is then combined with the phase image to create an enhanced contrast magnitude image referred to as the susceptibility weighted (SW) image. It is also common to create minimum intensity projections (mIP) over 8 to 10 mm to better visualize vein connectivity. In this way four sets of images are generated, the original magnitude, HP filtered phase, susceptibility weighted, and mIPs over the susceptibility weighted images.
Phase filtering
The values in the phase images are constrained to the range -π to π, so if a value goes above π it wraps around to -π. Inhomogeneities in the magnetic field cause low-frequency background gradients, which make the phase values increase slowly across the image, creating phase wrapping that obscures the image. This type of artifact can be removed by phase unwrapping or by high-pass filtering the original complex data to remove the low-frequency variations in the phase image.
Susceptibility weighted image creation
The susceptibility weighted image is created by combining the magnitude and filtered phase images. A mask is created from the phase image by mapping all values above 0 radians to be 1 and linearly mapping values from -π to 0 radians to range from 0 to 1, respectively. Alternatively, a power function (typically 4th degree) can be used instead of a linear mapping from -π to 0 to increase the effect of the mask. The magnitude image is then multiplied by this mask. In this way phase values above 0 radians have no effect and phase values below 0 radians darken the magnitude image. This increases the contrast in the magnitude image for objects with low phase values such as veins, iron, and hemorrhage.
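A minimal sketch of this mask-and-multiply step, assuming NumPy arrays named magnitude and hp_phase for the magnitude and high-pass-filtered phase images (an illustration of the idea, not vendor reconstruction code):
import numpy as np

def swi_combine(magnitude, hp_phase, power=4):
    # hp_phase is assumed to be in radians, in the range [-pi, pi].
    # Phase values >= 0 map to 1 (no attenuation); values in [-pi, 0)
    # map linearly onto [0, 1), so negative-phase voxels (veins, iron,
    # hemorrhage) darken the magnitude image.
    mask = np.where(hp_phase >= 0, 1.0, 1.0 + hp_phase / np.pi)
    # Raising the mask to a power (often 4) is the stronger variant
    # mentioned above; power=1 reduces to the plain linear mask.
    return magnitude * mask**power
Minimum intensity projections over 8 to 10 mm slabs would then be computed from the resulting susceptibility-weighted volume.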
Clinical applications
SWI is most commonly used to detect small amounts of hemorrhage or calcium. Clinical applications are under research in different fields of medicine.
Traumatic brain injury (TBI)
The detection of micro-hemorrhages, shearing, and diffuse axonal injury (DAI) in trauma patients is often difficult, as the injuries tend to be relatively small and can easily be missed by low-resolution scans. SWI is usually run at relatively high resolution (1 mm3) and is extremely sensitive to bleeding at the gray matter/white matter boundaries, making it possible to see very small lesions and increasing the ability to detect more subtle injuries.
Stroke and hemorrhage
Diffusion weighted imaging offers a powerful means to detect acute stroke. Although it is well known that gradient echo imaging can detect hemorrhage, it is best detected with SWI. In the example shown here, the gradient echo image shows the region of likely cytotoxic edema whereas the SW image shows the likely localization of the stroke and the vascular territory affected (data acquired at 1.5 T).
The bright region in the gradient echo weighted image shows the area affected in this acute stroke example. The arrows in the SWI image may show the tissue at risk that has been affected by the stroke (A, B, C) and the location of the stroke itself (D). The reason that we are able to see the affected vascular territory could be because there is a reduced level of oxygen saturation in this tissue, suggesting that the flow to this region of the brain could be reduced post stroke. Another possible explanation is that there is an increase in local venous blood volume. In either case, this image suggests that the tissue associated with this vascular territory could be tissue at risk. Future stroke research will involve comparisons of perfusion weighted imaging and SWI to learn more about local flow and oxygen saturation.
Sturge–Weber disease
An SWI venogram of a neonate with Sturge–Weber syndrome who did not display neurological symptoms is shown to the right. The initial conventional MR imaging methods did not demonstrate any abnormality. The abnormal venous vasculature in the left occipital lobe extending between the posterior horn of the ventricle and the cortical surface is clearly visible in the venogram. Due to the high resolution even collaterals can be resolved.
Tumors
Part of the characterization of tumors lies in understanding the angiographic behavior of lesions both from the perspective of angiogenesis and micro-hemorrhages. Aggressive tumors tend to have rapidly growing vasculature and many micro-hemorrhages. Hence, the ability to detect these changes in the tumor could lead to a better determination of the tumor status. The enhanced sensitivity of SWI to venous blood and blood products due to their differences in susceptibility compared to normal tissue leads to better contrast in detecting tumor boundaries and tumor hemorrhage.
Multiple sclerosis
Multiple sclerosis (MS) is usually studied with FLAIR and contrast enhanced T1 imaging. SWI adds to this by revealing the venous connectivity in some lesions and presents evidence of iron in some lesions. This key new information may help understand the physiology of MS.
The magnetic resonance frequency measured with an SWI scan was shown to be sensitive to MS lesion formation. The frequency increases months before a new lesion appears on a contrast enhanced scan. At the time of contrast enhancement the frequency increases rapidly and remains elevated for at least six months.
Vascular dementia and cerebral amyloid angiopathy (CAA)
Gradient recalled echo (GRE) imaging is the conventional way to detect hemorrhage in CAA, however SWI is a much more sensitive technique that can reveal many micro-hemorrhages that are missed on GRE images. A conventional gradient echo T2*-weighted image (left, TE=20 ms) shows some low-signal foci associated with CAA. On the other hand, an SWI image (center, with a resolution of 0.5 mm x 0.5 mm x 2.0 mm, projected over 8mm) shows many more associated low-signal foci. Phase images were used to enhance the effect of the local hemosiderin build-up. An example phase image (right) with yet higher resolution of 0.25 mm x 0.25 mm x 2.0 mm shows a clear ability to localize multiple CAA-associated foci.
Pneumocephalus
Recent studies suggest that SWI might be suitable for monitoring neurosurgical patients recovering from pneumocephalus, as air can be easily detected with SWI.
High field SWI
SWI is uniquely suited to take advantage of higher field systems, as the contrast in the phase image is linearly proportional to echo time (TE) and field strength. Higher fields thus allow shorter echo times without a loss of contrast which can reduce scan time and motion related artifacts. The high signal-to-noise available at higher fields also increases scan quality and allows for higher resolution scans.
See also
Magnetic resonance angiography
Quantitative susceptibility mapping
Footnotes
References
External links
SWI information brochures, including SWI software
MRI-CCSVI Pilot Study with MRA and SWI
NICE MRI
MRI institute for biomedical research
Magnetic resonance imaging
Neuroimaging | Susceptibility weighted imaging | [
"Chemistry"
] | 1,725 | [
"Nuclear magnetic resonance",
"Magnetic resonance imaging"
] |
17,782,683 | https://en.wikipedia.org/wiki/Vibrations%20of%20a%20circular%20membrane | A two-dimensional elastic membrane under tension can support transverse vibrations. The properties of an idealized drumhead can be modeled by the vibrations of a circular membrane of uniform thickness, attached to a rigid frame. Due to the phenomenon of resonance, at certain vibration frequencies, its resonant frequencies, the membrane can store vibrational energy, the surface moving in a characteristic pattern of standing waves. This is called a normal mode. A membrane has an infinite number of these normal modes, starting with a lowest frequency one called the fundamental frequency.
There exist infinitely many ways in which a membrane can vibrate, each depending on the shape of the membrane at some initial time, and the transverse velocity of each point on the membrane at that time. The vibrations of the membrane are given by the solutions of the two-dimensional wave equation with Dirichlet boundary conditions which represent the constraint of the frame. It can be shown that any arbitrarily complex vibration of the membrane can be decomposed into a possibly infinite series of the membrane's normal modes. This is analogous to the decomposition of a time signal into a Fourier series.
The study of vibrations on drums led mathematicians to pose a famous mathematical problem on whether the shape of a drum can be heard, with an answer (it cannot) being given in 1992 in the two-dimensional setting.
Practical significance
Analyzing the vibrating drum head problem explains percussion instruments such as drums and timpani. However, there is also a biological application in the working of the eardrum. From an educational point of view the modes of a two-dimensional object are a convenient way to visually demonstrate the meaning of modes, nodes, antinodes and even quantum numbers. These concepts are important to the understanding of the structure of the atom.
The problem
Consider an open disk of radius centered at the origin, which will represent the "still" drum head shape. At any time the height of the drum head shape at a point in measured from the "still" drum head shape will be denoted by which can take both positive and negative values. Let denote the boundary of that is, the circle of radius centered at the origin, which represents the rigid frame to which the drum head is attached.
The mathematical equation that governs the vibration of the drum head is the wave equation with zero boundary conditions,
Due to the circular geometry of , it will be convenient to use polar coordinates Then, the above equations are written as
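(The formula elided here is presumably the standard wave equation in polar coordinates with a homogeneous Dirichlet condition, writing u for the displacement, a for the radius, and c for the wave speed:)
\frac{\partial^{2} u}{\partial t^{2}} = c^{2}\left(\frac{\partial^{2} u}{\partial r^{2}} + \frac{1}{r}\frac{\partial u}{\partial r} + \frac{1}{r^{2}}\frac{\partial^{2} u}{\partial \theta^{2}}\right) \quad \text{for } 0 \le r < a, \qquad u(a, \theta, t) = 0.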
Here, is a positive constant, which gives the speed at which transverse vibration waves propagate in the membrane. In terms of the physical parameters, the wave speed, c, is given by
where is the radial membrane resultant at the membrane boundary, is the membrane thickness, and is the membrane density. If the membrane has uniform tension, the uniform tension force at a given radius may be written
where is the membrane resultant in the azimuthal direction.
The axisymmetric case
We will first study the possible modes of vibration of a circular drum head that are axisymmetric. Then, the function does not depend on the angle and the wave equation simplifies to
We will look for solutions in separated variables, Substituting this in the equation above and dividing both sides by yields
The left-hand side of this equality does not depend on and the right-hand side does not depend on it follows that both sides must be equal to some constant We get separate equations for and :
The equation for has solutions which exponentially grow or decay for are linear or constant for and are periodic for . Physically it is expected that a solution to the problem of a vibrating drum head will be oscillatory in time, and this leaves only the third case, so we choose for convenience. Then, is a linear combination of sine and cosine functions,
Turning to the equation for with the observation that all solutions of this second-order differential equation are a linear combination of Bessel functions of order 0, since this is a special case of Bessel's differential equation:
The Bessel function is unbounded for which results in an unphysical solution to the vibrating drum head problem, so the constant must be null. We will also assume as otherwise this constant can be absorbed later into the constants and coming from It follows that
The requirement that height be zero on the boundary of the drum head results in the condition
The Bessel function has an infinite number of positive roots,
We get that for so
Therefore, the axisymmetric solutions of the vibrating drum head problem that can be represented in separated variables are
where
The general case
The general case, when can also depend on the angle is treated similarly. We assume a solution in separated variables,
Substituting this into the wave equation and separating the variables, gives
where is a constant. As before, from the equation for it follows that with and
From the equation
we obtain, by multiplying both sides by and separating variables, that
and
for some constant Since is periodic, with period being an angular variable, it follows that
where and and are some constants. This also implies
Going back to the equation for its solution is a linear combination of Bessel functions and With a similar argument as in the previous section, we arrive at
where with the -th positive root of
We showed that all solutions in separated variables of the vibrating drum head problem are of the form
for
Animations of several vibration modes
A number of modes are shown below together with their quantum numbers. The analogous wave functions of the hydrogen atom are also indicated as well as the associated angular frequencies . The values of are the roots of the Bessel function . This is deduced from the boundary condition which yields .
More values of can easily be computed using the following Python code with the scipy library:
from scipy import special as sc
m = 0 # order of the Bessel function (i.e. angular mode for the circular membrane)
nz = 3 # desired number of roots
alpha_mn = sc.jn_zeros(m, nz) # outputs nz zeros of Jm
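The corresponding angular frequencies then follow from these roots. A minimal continuation (the wave speed c and radius a below are assumed illustrative values, not taken from the text):
import numpy as np
from scipy import special as sc

c = 100.0                      # assumed wave speed in m/s (illustrative)
a = 0.25                       # assumed membrane radius in m (illustrative)
alpha_0n = sc.jn_zeros(0, 3)   # first three positive zeros of J0
omega_0n = c * alpha_0n / a    # angular frequencies of the axisymmetric modes
print(omega_0n / (2 * np.pi))  # corresponding frequencies in Hz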
See also
Vibrating string, the one-dimensional case
Chladni patterns, an early description of a related phenomenon, in particular with musical instruments; see also cymatics
Hearing the shape of a drum, characterising the modes with respect to the shape of the membrane
Atomic orbital, a related quantum-mechanical and three-dimensional problem
References
Partial differential equations
Mechanical vibrations
Drumming | Vibrations of a circular membrane | [
"Physics",
"Engineering"
] | 1,291 | [
"Structural engineering",
"Mechanics",
"Mechanical vibrations"
] |
17,785,091 | https://en.wikipedia.org/wiki/Light-cone%20coordinates | In physics, particularly special relativity, light-cone coordinates, introduced by Paul Dirac and also known as Dirac coordinates, are a special coordinate system where two coordinate axes combine both space and time, while all the others are spatial.
Motivation
A spacetime plane may be associated with the plane of split-complex numbers which is acted upon by elements of the unit hyperbola to effect Lorentz boosts. This number plane has axes corresponding to time and space. An alternative basis is the diagonal basis which corresponds to light-cone coordinates.
Light-cone coordinates in special relativity
In a light-cone coordinate system, two of the coordinates are null vectors and all the other coordinates are spatial. The former can be denoted and and the latter .
Assume we are working with a (d,1) Lorentzian signature.
Instead of the standard coordinate system (using Einstein notation)
,
with we have
with , and .
Both and can act as "time" coordinates.
One nice thing about light cone coordinates is that the causal structure is partially included into the coordinate system itself.
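In one common convention (a hedged sketch; normalizations differ between authors), the light-cone coordinates and the resulting metric are
x^{\pm} = \frac{1}{\sqrt{2}}\left(x^{0} \pm x^{d}\right), \qquad
ds^{2} = -2\, dx^{+}\, dx^{-} + \sum_{i=1}^{d-1} \left(dx^{i}\right)^{2},
and a boost of rapidity \varphi in the (x^{0}, x^{d}) plane then acts as the squeeze x^{+} \to e^{\varphi} x^{+}, \; x^{-} \to e^{-\varphi} x^{-}, leaving the transverse x^{i} unchanged.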
A boost in the plane shows up as the squeeze mapping , , . A rotation in the -plane only affects .
The parabolic transformations show up as , , . Another set of parabolic transformations show up as , and .
Light cone coordinates can also be generalized to curved spacetime in general relativity. Sometimes calculations simplify using light cone coordinates. See Newman–Penrose formalism.
Light cone coordinates are sometimes used to describe relativistic collisions, especially if the relative velocity is very close to the speed of light. They are also used in the light cone gauge of string theory.
Light-cone coordinates in string theory
A closed string is a generalization of a particle. The spatial coordinate of a point on the string is conveniently described by a parameter which runs from to . Time is appropriately described by a parameter . Associating each point on the string in a D-dimensional spacetime with coordinates and transverse coordinates , these coordinates play the role of fields in a dimensional field theory. Clearly, for such a theory more is required. It is convenient to employ instead of and , light-cone coordinates given by
so that the metric is given by
(summation over understood).
There is some gauge freedom. First, we can set and treat this degree of freedom as the time variable. A reparameterization invariance under can be imposed with a constraint which we obtain from the metric, i.e.
Thus is not an independent degree of freedom anymore. Now can be identified as the corresponding Noether charge. Consider . Then with the use of the Euler-Lagrange equations for and one obtains
Equating this to
where is the Noether charge, we obtain:
This result agrees with a result cited in the literature.
Free particle motion in light-cone coordinates
For a free particle of mass the action is
In light-cone coordinates becomes with as time variable:
The canonical momenta are
The Hamiltonian is ():
and the nonrelativistic Hamilton equations imply:
One can now extend this to a free string.
See also
Newman–Penrose formalism
References
Theory of relativity | Light-cone coordinates | [
"Physics"
] | 643 | [
"Theory of relativity"
] |
14,908,758 | https://en.wikipedia.org/wiki/Limb%20perfusion | Limb perfusion is a medical technique that is used to deliver drugs locally directly to a site of interest. It is commonly used in human medicine for administration of anticancer drugs directly to an arm or leg. It is also used in veterinary medicine to deliver drugs to a site of infection or injury, as well as for the treatment of cancer in dogs. In both cases, a tourniquet is used to reduce blood flow out of the area that is being treated.
Use in human medicine
Isolated limb perfusion was first introduced into the clinic by American surgeons from New Orleans in the mid-1950s. The main purpose of the isolated limb perfusion technique is to deliver a very high dose of chemotherapy, at elevated temperature, to tumour sites without causing overwhelming systemic damage. (Unfortunately, while these approaches can be useful against solitary or limited metastases, they are - by definition - not systemic and therefore do not treat distributed metastases or micrometastases). The flow of blood to and from the limb is temporarily stopped with a tourniquet, and anticancer drugs are put directly into the blood of the limb. This allows the person to receive a high dose of drugs in the area where the cancer occurred. The temperature is also increased to 42C causing an increased uptake of the drug by the tumor. The combination of high drug dose and high temperature is toxic systemically, thus the isolation of the limb. Blood flow through the limb is typically achieved using an extracorporeal circuit consisting of cannulae, tubing, peristaltic roller pump, heat exchanger, and pressure monitoring/safety devices. Care must be used in handling the drugs and waste material as they are extremely toxic. Among other types of cancer, isolated limb perfusion has been used to treat in transit metastatic melanoma.
In the early 1990s an alternative technique was developed at the Royal Prince Alfred Hospital in Sydney, Australia: isolated limb infusion. This technique is less complex and uses a minimal invasive percutaneous approach to circulatorily isolate a limb.
Use in veterinary medicine
Limb perfusion is also used in veterinary medicine, where is it usually referred to as regional limb perfusion (RLP). It is most commonly used in large animals, such as horses, cows, small ruminants, and camelids. These species often require large, cost-prohibitive doses of medications to treat systemically. Regional limb perfusion allows drug dose to be reduced while maintaining therapeutic concentrations at the site of interest, thereby reducing the cost of treatment, localizing application, decreasing systemic side effects, and improving efficacy.
Method
The procedure is performed on the standing, sedated horse; sedation is required because movement can force blood past the tourniquet and reduce the drug concentration in the limb below the tourniquet. The area of needle insertion is clipped and scrubbed. A wide tourniquet is placed above the site of interest, and a needle is inserted into a superficial vein of the limb below the tourniquet. The medication is delivered and the tourniquet is removed after 20–30 minutes. Because of the size of the limbs, RLP is not possible above the elbow or stifle of the horse, owing to inadequate compression of the underlying blood vessels.
Medications used
Limb perfusion is commonly used for antibiotic administration in cases of localized infection, such as lacerations, cellulitis, infection of a synovial structure (joint, tendon sheath, bursa), or osteomyelitis. RLP has been shown to produce antibiotic concentrations 25-50 times the minimum inhibitory concentration in septic joints. Antibiotic selection is important. Antibiotics must be approved for intravenous use, and are ideally chosen based on culture and susceptibility results. Concentration-dependent antibiotics, such as gentamicin and amikacin, are best suited for RLP because they have higher efficacy at higher concentrations, while time-dependent antibiotics such as penicillin and ceftiofur may be used, but have a shorter duration. However, expense is usually less of a limiting factor because a smaller amount may be used relative to systemic administration.
Limb perfusion of carbapenem antibiotics, such as imipenem and meropenem, has been studied in horses. However, a retrospective study comparing horses that received meropenem via RLP for orthopedic sepsis with a group of horses that received gentamicin via RLP for the same condition found no differences in outcome. This suggests that initial RLP treatments should use less critically important antimicrobials, such as gentamicin, instead of critically important antimicrobials such as meropenem.
In the case of lameness in horses, local use of regenerative therapies, such as stem cells, or bisphosphonates such as tiludronic acid are also given by RLP.
In dogs, RLP is also used to deliver chemotherapeutic agents.
Adverse effects
Side effects of RLP are relatively rare when performed correctly. Partial thrombosis of a vein can occur, especially with repeated use of a vein, but complete thrombosis is rare. There may also be localized tissue irritation. Topical application of an anti-inflammatory, such as DMSO or Diclofenac sodium may be used.
References
External links
Limb perfusion entry in the public domain NCI Dictionary of Cancer Terms
Routes of administration
Equine injury and lameness
Horse health
Veterinary procedures | Limb perfusion | [
"Chemistry"
] | 1,135 | [
"Pharmacology",
"Routes of administration"
] |
14,909,006 | https://en.wikipedia.org/wiki/Magnetic%20structure | The term magnetic structure of a material pertains to the ordered arrangement of magnetic spins, typically within an ordered crystallographic lattice. Its study is a branch of solid-state physics.
Magnetic structures
Most solid materials are non-magnetic, that is, they do not display a magnetic structure. Due to the Pauli exclusion principle, each state is occupied by electrons of opposing spins, so that the charge density is compensated everywhere and the spin degree of freedom is trivial. Still, such materials typically do show a weak magnetic behaviour, e.g. due to diamagnetism or Pauli paramagnetism.
The more interesting case is when the material's electrons spontaneously break the above-mentioned symmetry. For ferromagnetism in the ground state, there is a common spin quantization axis and a global excess of electrons of a given spin quantum number: more electrons point in one direction than in the other, giving a macroscopic magnetization (typically, the majority electrons are chosen to point up). In the simplest (collinear) cases of antiferromagnetism, there is still a common quantization axis, but the electronic spins point alternatingly up and down, leading again to cancellation of the macroscopic magnetization. However, specifically in the case of frustration of the interactions, the resulting structures can become much more complicated, with inherently three-dimensional orientations of the local spins. Finally, ferrimagnetism, as prototypically displayed by magnetite, is in some sense an intermediate case: here the magnetization is globally uncompensated as in ferromagnetism, but the local magnetization points in different directions.
The above discussion pertains to the ground state structure. Of course, finite temperatures lead to excitations of the spin configuration. Here two extreme points of view can be contrasted: in the Stoner picture of magnetism (also called itinerant magnetism), the electronic states are delocalized, and their mean-field interaction leads to the symmetry breaking. In this view, with increasing temperature the local magnetization would thus decrease homogeneously, as single delocalized electrons are moved from the up- to the down-channel. On the other hand, in the local-moment case the electronic states are localized to specific atoms, giving atomic spins, which interact only over a short range and typically are analyzed with the Heisenberg model. Here, finite temperatures lead to a deviation of the atomic spins' orientations from the ideal configuration, thus for a ferromagnet also decreasing the macroscopic magnetization.
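As a toy illustration of the local-moment picture, a minimal sketch of the classical nearest-neighbor Heisenberg energy on a short open chain (the coupling J and chain length are assumed for illustration, not a model of any particular material):
import numpy as np

def heisenberg_energy(spins, J=1.0):
    # Classical nearest-neighbor Heisenberg energy E = -J * sum_i S_i . S_(i+1)
    # for a chain of unit-length spin vectors with open boundaries.
    spins = np.asarray(spins, dtype=float)
    return -J * np.sum(np.einsum('ij,ij->i', spins[:-1], spins[1:]))

up = np.tile([0.0, 0.0, 1.0], (6, 1))   # ferromagnetic configuration
afm = up.copy()
afm[1::2] *= -1                         # alternating up/down configuration
print(heisenberg_energy(up), heisenberg_energy(afm))  # -5.0 (favored for J > 0) vs +5.0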
For localized magnetism, many magnetic structures can be described by magnetic space groups, which give a precise accounting for all possible symmetry groups of up/down configurations in a three-dimensional crystal. However, this formalism is unable to account for some more complex magnetic structures, such as those found in helimagnetism.
Techniques to study them
Such ordering can be studied by observing the magnetic susceptibility as a function of temperature and/or the size of the applied magnetic field, but a truly three-dimensional picture of the arrangement of the spins is best obtained by means of neutron diffraction. Neutrons are primarily scattered by the nuclei of the atoms in the structure. At a temperature above the ordering point of the magnetic moments, where the material behaves as a paramagnetic one, neutron diffraction will therefore give a picture of the crystallographic structure only. Below the ordering point, e.g. the Néel temperature of an antiferromagnet or the Curie-point of a ferromagnet the neutrons will also experience scattering from the magnetic moments because they themselves possess spin. The intensities of the Bragg reflections will therefore change. In fact in some cases entirely new Bragg-reflections will occur if the unit cell of the ordering is larger than that of the crystallographic structure. This is a form of superstructure formation. Thus the symmetry of the total structure may well differ from the crystallographic substructure. It needs to be described by one of the 1651 magnetic (Shubnikov) groups rather than one of the non-magnetic space groups.
Although ordinary X-ray diffraction is 'blind' to the arrangement of the spins, it has become possible to use a special form of X-ray diffraction to study magnetic structure. If a wavelength is selected that is close to an absorption edge of one of elements contained in the materials the scattering becomes anomalous and this component to the scattering is (somewhat) sensitive to the non-spherical shape of the outer electrons of an atom with an unpaired spin. This means that this type of anomalous X-ray diffraction does contain information of the desired type.
More recently, table-top techniques are being developed which allow magnetic structures to be studied without recourse to neutron or synchrotron sources.
Magnetic structure of the chemical elements
Only three elements are ferromagnetic at room temperature and pressure: iron, cobalt, and nickel. This is because their Curie temperature, Tc, is higher than room temperature (Tc > 298K). Gadolinium has a spontaneous magnetization just below room temperature (293 K) and is sometimes counted as the fourth ferromagnetic element. There has been some suggestion that Gadolinium has helimagnetic ordering, but others defend the longstanding view that Gadolinium is a conventional ferromagnet.
The elements Dysprosium and Erbium each have two magnetic transitions. They are paramagnetic at room temperature, but become helimagnetic below their respective Néel temperatures, and then become ferromagnetic below their Curie temperatures. The elements Holmium, Terbium, and Thulium display even more complicated magnetic structures.
There is also antiferromagnetic ordering, which becomes disordered above the Néel temperature. Chromium is somewhat like a simple antiferromagnet, but also has an incommensurate spin density wave modulation on top of the simple up-down spin alternation. Manganese (in the α-Mn form) has a 29-atom unit cell, leading to a complex but commensurate antiferromagnetic arrangement at low temperatures (magnetic space group P2'm'). Unlike most elements, which are magnetic due to their electrons, the magnetic ordering of copper and silver is dominated by the much weaker nuclear magnetic moment (compare Bohr magneton and nuclear magneton), leading to transition temperatures near absolute zero.
Those elements which become superconductors exhibit superdiamagnetism below a critical temperature.
References
Magnetic ordering
Solid-state chemistry | Magnetic structure | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,379 | [
"Electric and magnetic fields in matter",
"Materials science",
"Magnetic ordering",
"Condensed matter physics",
"nan",
"Solid-state chemistry"
] |
14,915,121 | https://en.wikipedia.org/wiki/Disodium%20malonate | Disodium malonate is a sodium salt of malonic acid with the chemical formula CH2(COONa)2. It is a white crystal soluble in water but not in alcohols, esters or benzene. It can be prepared from the reaction of sodium hydroxide and malonic acid:
CH2(COOH)2 + 2 NaOH → CH2(COONa)2 + 2 H2O
Malonates
Organic sodium salts | Disodium malonate | [
"Chemistry"
] | 93 | [
"Salts",
"Organic compounds",
"Organic sodium salts",
"Organic compound stubs",
"Organic chemistry stubs"
] |
69,937 | https://en.wikipedia.org/wiki/Nuclear%20pulse%20propulsion | Nuclear pulse propulsion or external pulsed plasma propulsion is a hypothetical method of spacecraft propulsion that uses nuclear explosions for thrust. It originated as Project Orion with support from DARPA, after a suggestion by Stanislaw Ulam in 1947. Newer designs using inertial confinement fusion have been the baseline for most later designs, including Project Daedalus and Project Longshot.
History
Los Alamos
Calculations for a potential use of this technology were made at the laboratory from the late 1940s to the mid-1950s.
Project Orion
Project Orion was the first serious attempt to design a nuclear pulse rocket. A design was formed at General Atomics during the late 1950s and early 1960s, with the idea of reacting small directional nuclear explosives utilizing a variant of the Teller–Ulam two-stage bomb design against a large steel pusher plate attached to the spacecraft with shock absorbers. Efficient directional explosives maximized the momentum transfer, leading to specific impulses in the range of seconds, or about thirteen times that of the Space Shuttle main engine. With refinements a theoretical maximum specific impulse of (1 MN·s/kg) might be possible. Thrusts were in the millions of tons, allowing spacecraft larger than 8 tons to be built with 1958 materials.
The reference design was to be constructed of steel using submarine-style construction with a crew of more than 200 and a vehicle takeoff weight of several thousand tons. This single-stage reference design would reach Mars and return in four weeks from the Earth's surface (compared to 12 months for NASA's current chemically powered reference mission). The same craft could visit Saturn's moons in a seven-month mission (compared to chemically powered missions of about nine years). Notable engineering problems that occurred were related to crew shielding and pusher-plate lifetime.
Although the system appeared to be workable, the project was shut down in 1965, primarily because the Partial Test Ban Treaty made it illegal; in fact, before the treaty, the US and Soviet Union had already separately detonated a combined number of at least nine nuclear bombs, including thermonuclear, in space, i.e., at altitudes of over 100 km (see high-altitude nuclear explosions). Ethical issues complicated the launch of such a vehicle within the Earth's magnetosphere: calculations using the (disputed) linear no-threshold model of radiation damage showed that the fallout from each takeoff would cause the death of approximately 1 to 10 individuals. In a threshold model, such extremely low levels of thinly distributed radiation would have no associated ill-effects, while under hormesis models, such tiny doses would be negligibly beneficial. The use of less efficient clean nuclear bombs for achieving orbit and then more efficient, higher yield dirtier bombs for travel would significantly reduce the amount of fallout caused from an Earth-based launch.
One useful mission would be to deflect an asteroid or comet on collision course with the Earth, depicted dramatically in the 1998 film Deep Impact. The high performance would permit even a late launch to succeed, and the vehicle could effectively transfer a large amount of kinetic energy to the asteroid by simple impact. The prospect of an imminent asteroid impact would obviate concerns over the few predicted deaths from fallout. An automated mission would remove the challenge of designing a shock absorber that would protect the crew.
Orion is one of very few interstellar space drives that could theoretically be constructed with available technology, as discussed in a 1968 paper, "Interstellar Transport" by Freeman Dyson.
Project Daedalus
Project Daedalus was a study conducted between 1973 and 1978 by the British Interplanetary Society (BIS) to design an interstellar uncrewed spacecraft that could reach a nearby star within about 50 years. A dozen scientists and engineers led by Alan Bond worked on the project. At the time fusion research appeared to be making great strides, and in particular, inertial confinement fusion (ICF) appeared to be adaptable as a rocket engine.
ICF uses small pellets of fusion fuel, typically lithium deuteride (6Li2H) with a small deuterium/tritium trigger at the center. The pellets are thrown into a reaction chamber where they are hit on all sides by lasers or another form of beamed energy. The heat generated by the beams explosively compresses the pellet to the point where fusion takes place. The result is a hot plasma, and a very small "explosion" compared to the minimum size bomb that would be required to instead create the necessary amount of fission.
For Daedalus, this process was to be run within a large electromagnet that formed the rocket engine. After the reaction, ignited by electron beams, the magnet funnelled the hot gas to the rear for thrust. Some of the energy was diverted to run the ship's systems and engine. In order to make the system safe and energy efficient, Daedalus was to be powered by a helium-3 fuel collected from Jupiter.
Medusa
The Medusa design has more in common with solar sails than with conventional rockets. It was envisioned by Johndale Solem in the 1990s and published in the Journal of the British Interplanetary Society (JBIS).
A Medusa spacecraft would deploy a large sail ahead of it, attached by independent cables, and then launch nuclear explosives forward to detonate between itself and its sail. The sail would be accelerated by the plasma and photonic impulse, running out the tethers as when a fish flees a fisher, generating electricity at the "reel". The spacecraft would use some of the generated electricity to reel itself up toward the sail, constantly smoothly accelerating as it goes.
In the original design, multiple tethers connected to multiple motor generators. The advantage over the single tether is to increase the distance between the explosion and the tethers, thus reducing damage to the tethers.
For heavy payloads, performance could be improved by taking advantage of lunar materials, for example, wrapping the explosive with lunar rock or water, stored previously at a stable Lagrange point.
Medusa performs better than the classical Orion design because its sail intercepts more of the explosive impulse, its shock-absorber stroke is much longer, and its major structures are in tension and hence can be quite lightweight. Medusa-type ships would be capable of a specific impulse of (500 to 1000 kN·s/kg).
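To put such specific-impulse figures in perspective, a back-of-the-envelope sketch using the Tsiolkovsky rocket equation (the mass ratio of 4 is an assumed illustrative value, not from the Medusa studies); 500 to 1000 kN·s/kg corresponds to an effective exhaust velocity of 500 to 1000 km/s:
import math

def delta_v(v_exhaust, mass_ratio):
    # Tsiolkovsky rocket equation: delta-v = v_e * ln(m_initial / m_final)
    return v_exhaust * math.log(mass_ratio)

for v_e in (500e3, 1000e3):  # effective exhaust velocity in m/s
    print(f"v_e = {v_e / 1e3:.0f} km/s -> delta-v = {delta_v(v_e, 4) / 1e3:.0f} km/s")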
Medusa became widely known to the public in the BBC documentary film To Mars By A-Bomb: The Secret History of Project Orion. A short film shows an artist's conception of how the Medusa spacecraft works "by throwing bombs into a sail that's ahead of it".
Project Longshot
Project Longshot was a NASA-sponsored research project carried out in conjunction with the US Naval Academy in the late 1980s. Longshot was in some ways a development of the basic Daedalus concept, in that it used magnetically funneled ICF. The key difference was that they felt that the reaction could not power both the rocket and the other systems, and instead included a 300 kW conventional nuclear reactor for running the ship. The added weight of the reactor reduced performance somewhat, but even using LiD fuel it would be able to reach neighboring star Alpha Centauri in 100 years (approx. velocity of 13,411 km/s, at a distance of 4.5 light years, equivalent to 4.5% of light speed).
Antimatter-catalyzed nuclear reaction
In the mid-1990s, research at Pennsylvania State University led to the concept of using antimatter to catalyze nuclear reactions. Antiprotons would react inside the nucleus of uranium, releasing energy that breaks the nucleus apart as in conventional nuclear reactions. Even a small number of such reactions can start the chain reaction that would otherwise require a much larger volume of fuel to sustain. Whereas the "normal" critical mass for plutonium is about 11.8 kilograms (for a sphere at standard density), with antimatter catalyzed reactions this could be well under one gram.
Several rocket designs using this reaction were proposed, some which would use all-fission reactions for interplanetary missions, and others using fission-fusion (effectively a very small version of Orion's bombs) for interstellar missions.
Magneto-inertial fusion
NASA funded MSNW LLC and the University of Washington in 2011 to study and develop a fusion rocket through the NASA Innovative Advanced Concepts NIAC Program.
The rocket uses a form of magneto-inertial fusion to produce a direct thrust fusion rocket. Magnetic fields cause large metal rings to collapse around the deuterium-tritium plasma, triggering fusion. The energy heats and ionizes the shell of metal formed by the crushed rings. The hot ionized metal is shot out of a magnetic rocket nozzle at a high speed (up to 30 km/s). Repeating this process roughly every minute would accelerate or decelerate the spacecraft. The fusion reaction is not self-sustaining and requires electrical energy to explode each pulse. With electrical requirements estimated to be between 100 kW to 1,000 kW (300 kW average), designs incorporate solar panels to produce the required energy.
Foil Liner Compression creates fusion at the proper energy scale. The proof of concept experiment in Redmond, Washington, was to use aluminum liners for compression. However, the ultimate design was to use lithium liners.
Performance characteristics are dependent on the fusion energy gain factor achieved by the reactor. Gains were expected to be between 20 and 200, with an estimated average of 40. Higher gains produce higher exhaust velocity, higher specific impulse and lower electrical power requirements. The table below summarizes different performance characteristics for a theoretical 90-day Mars transfer at gains of 20, 40, and 200.
By April 2013, MSNW had demonstrated subcomponents of the systems: heating deuterium plasma up to fusion temperatures and concentrating the magnetic fields needed to create fusion. They planned to put the two technologies together for a test before the end of 2013.
Pulsed fission-fusion propulsion
Pulsed fission-fusion (PuFF) propulsion is reliant on principles similar to magneto-inertial fusion. It aims to solve the problem of the extreme stress induced on containment by an Orion-like motor by ejecting the plasma obtained from small fuel pellets that undergo autocatalytic fission and fusion reactions initiated by a Z-pinch. It is a theoretical propulsion system researched through the NIAC Program by the University of Alabama in Huntsville. It is in essence a fusion rocket that uses a Z-pinch configuration, but coupled with a fission reaction to boost the fusion process.
A PuFF fuel pellet, around 1 cm in diameter, consists of two components: a deuterium-tritium (D-T) cylinder of plasma, called the target, which undergoes fusion, and a surrounding U-235 sheath, enveloped by a lithium liner, that undergoes fission. Liquid lithium, serving as a moderator, fills the space between the D-T cylinder and the uranium sheath. Current is run through the liquid lithium, generating a Lorentz force that compresses the D-T plasma by a factor of 10 in what is known as a Z-pinch. The compressed plasma reaches criticality and undergoes fusion reactions. However, the fusion energy gain (Q) of these reactions is far below breakeven (Q < 1), meaning that the reaction consumes more energy than it produces.
In a PuFF design, the fast neutrons released by the initial fusion reaction induce fission in the U-235 sheath. The resultant heat causes the sheath to expand, increasing its implosion velocity onto the D-T core and compressing it further, releasing more fast neutrons. Those again amplify the fission rate in the sheath, rendering the process autocatalytic. It is hoped that this results in a complete burn up of both the fission and fusion fuels, making PuFF more efficient than other nuclear pulse concepts. Much like in a magneto-inertial fusion rocket, the performance of the engine is dependent on the degree to which the fusion gain of the D-T target is increased.
One "pulse" consist of the injection of a fuel pellet into the combustion chamber, its consumption through a series of fission-fusion reactions, and finally the ejection of the released plasma through a magnetic nozzle, thus generating thrust. A single pulse is expected to take only a fraction of a second to complete.
See also
References
External links
G.R. Schmidt, J.A. Bunornetti and P.J. Morton, Nuclear Pulse Propulsion – Orion and Beyond, NASA technical report AlAA 2000-3856, 2000
J. C. Nance, "Nuclear Pulse Propulsion," IEEE Trans. on Nuclear Science 12, 177 (1965) [Reprinted as Ann. N.Y. Acad. Sci. 140, 396 (1966)].
"Nuclear Pulse Space Vehicle Study, Vol III," Report on NASA Contract NAS 8-11053, General Atomics, GA-5009, 19 Sep 64.
F. Dyson, "Death of a Project," Science 149, 141 (1965).
W. H. Robbins and H. B. Finger, H.B., "An Historical Perspective of the NERVA Nuclear Rocket Engine Technology Program", NASA Contractor Report 187154, AIAA-91-3451, July 1991.
Nuclear spacecraft propulsion
Plasma technology and applications | Nuclear pulse propulsion | [
"Physics"
] | 2,740 | [
"Plasma technology and applications",
"Plasma physics"
] |
70,085 | https://en.wikipedia.org/wiki/J.%20J.%20Thomson | Sir Joseph John Thomson (18 December 1856 – 30 August 1940) was an English physicist who received the Nobel Prize in Physics in 1906 for his discovery of the electron, the first subatomic particle to be found.
In 1897, Thomson showed that cathode rays were composed of previously unknown negatively charged particles (now called electrons), which he calculated must have bodies much smaller than atoms and a very large charge-to-mass ratio. Thomson is also credited with finding the first evidence for isotopes of a stable (non-radioactive) element in 1913, as part of his exploration into the composition of canal rays (positive ions). His experiments to determine the nature of positively charged particles, with Francis William Aston, were the first use of mass spectrometry and led to the development of the mass spectrograph.
Thomson was awarded the 1906 Nobel Prize in Physics for his work on the conduction of electricity in gases. Thomson was also a teacher, and seven of his students went on to win Nobel Prizes: Ernest Rutherford (Chemistry 1908), Lawrence Bragg (Physics 1915), Charles Barkla (Physics 1917), Francis Aston (Chemistry 1922), Charles Thomson Rees Wilson (Physics 1927), Owen Richardson (Physics 1928) and Edward Victor Appleton (Physics 1947). Only Arnold Sommerfeld's record of mentorship offers a comparable list of high-achieving students.
Education and personal life
Joseph John Thomson was born on 18 December 1856 in Cheetham Hill, Manchester, Lancashire, England. His mother, Emma Swindells, came from a local textile family. His father, Joseph James Thomson, ran an antiquarian bookshop founded by Thomson's great-grandfather. He had a brother, Frederick Vernon Thomson, who was two years younger than he was. J. J. Thomson was a reserved yet devout Anglican.
His early education was in small private schools where he demonstrated outstanding talent and interest in science. In 1870, he was admitted to Owens College in Manchester (now University of Manchester) at the unusually young age of 14 and came under the influence of Balfour Stewart, Professor of Physics, who initiated Thomson into physical research. Thomson began experimenting with contact electrification and soon published his first scientific paper. His parents planned to enroll him as an apprentice engineer to Sharp, Stewart & Co, a locomotive manufacturer, but these plans were cut short when his father died in 1873.
He moved on to Trinity College, Cambridge, in 1876. In 1880, he obtained his Bachelor of Arts degree in mathematics (Second Wrangler in the Tripos and 2nd Smith's Prize). He applied for and became a fellow of Trinity College in 1881. He received his Master of Arts degree (with Adams Prize) in 1883.
Family
In 1890, Thomson married Rose Elisabeth Paget at the church of St. Mary the Less. Rose, who was the daughter of Sir George Edward Paget, a physician and then Regius Professor of Physic at Cambridge, was interested in physics. Beginning in 1882, women could attend demonstrations and lectures at the University of Cambridge. Rose attended demonstrations and lectures, among them Thomson's, leading to their relationship.
They had two children: George Paget Thomson, who was also awarded a Nobel Prize for his work on the wave properties of the electron, and Joan Paget Thomson (later Charnock), who became an author, writing children's books, non-fiction and biographies.
Career and research
Overview
On 22 December 1884, Thomson was appointed Cavendish Professor of Physics at the University of Cambridge. The appointment caused considerable surprise, given that candidates such as Osborne Reynolds or Richard Glazebrook were older and more experienced in laboratory work. Thomson was known for his work as a mathematician, where he was recognised as an exceptional talent.
He was awarded a Nobel Prize in 1906, "in recognition of the great merits of his theoretical and experimental investigations on the conduction of electricity by gases." He was knighted in 1908 and appointed to the Order of Merit in 1912. In 1914, he gave the Romanes Lecture in Oxford on "The atomic theory". In 1918, he became Master of Trinity College, Cambridge, where he remained until his death. He died on 30 August 1940; his ashes rest in Westminster Abbey, near the graves of Sir Isaac Newton and his former student Ernest Rutherford.
Rutherford succeeded him as Cavendish Professor of Physics. Six of Thomson's research assistants and junior colleagues (Charles Glover Barkla, Niels Bohr, Max Born, William Henry Bragg, Owen Willans Richardson and Charles Thomson Rees Wilson) won Nobel Prizes in physics, and two (Francis William Aston and Ernest Rutherford) won Nobel prizes in chemistry. Thomson's son (George Paget Thomson) also won the 1937 Nobel Prize in physics for proving the wave-like properties of electrons.
Early work
Thomson's prize-winning master's work, Treatise on the motion of vortex rings, shows his early interest in atomic structure. In it, Thomson mathematically described the motions of William Thomson's vortex theory of atoms.
Thomson published a number of papers addressing both mathematical and experimental issues of electromagnetism. He examined the electromagnetic theory of light of James Clerk Maxwell, introduced the concept of electromagnetic mass of a charged particle, and demonstrated that a moving charged body would apparently increase in mass.
Much of his work in mathematical modelling of chemical processes can be thought of as early computational chemistry. In further work, published in book form as Applications of dynamics to physics and chemistry (1888), Thomson addressed the transformation of energy in mathematical and theoretical terms, suggesting that all energy might be kinetic. His next book, Notes on recent researches in electricity and magnetism (1893), built upon Maxwell's Treatise upon electricity and magnetism, and was sometimes referred to as "the third volume of Maxwell". In it, Thomson emphasized physical methods and experimentation and included extensive figures and diagrams of apparatus, including a number for the passage of electricity through gases. His third book, Elements of the mathematical theory of electricity and magnetism (1895) was a readable introduction to a wide variety of subjects, and achieved considerable popularity as a textbook.
A series of four lectures, given by Thomson on a visit to Princeton University in 1896, were subsequently published as Discharge of electricity through gases (1897). Thomson also presented a series of six lectures at Yale University in 1904.
Discovery of the electron
Several scientists, such as William Prout and Norman Lockyer, had suggested that atoms were built up from a more fundamental unit, but they envisioned this unit to be the size of the smallest atom, hydrogen. Thomson in 1897 was the first to suggest that one of the fundamental units of the atom was more than 1,000 times smaller than an atom, suggesting the subatomic particle now known as the electron. Thomson discovered this through his explorations on the properties of cathode rays. Thomson made his suggestion on 30 April 1897 following his discovery that cathode rays (at the time known as Lenard rays) could travel much further through air than expected for an atom-sized particle. He estimated the mass of cathode rays by measuring the heat generated when the rays hit a thermal junction and comparing this with the magnetic deflection of the rays. His experiments suggested not only that cathode rays were over 1,000 times lighter than the hydrogen atom, but also that their mass was the same in whichever type of atom they came from. He concluded that the rays were composed of very light, negatively charged particles which were a universal building block of atoms. He called the particles "corpuscles", but later scientists preferred the name electron which had been suggested by George Johnstone Stoney in 1891, prior to Thomson's actual discovery.
In April 1897, Thomson had only early indications that the cathode rays could be deflected electrically (previous investigators such as Heinrich Hertz had thought they could not be). A month after Thomson's announcement of the corpuscle, he found that he could reliably deflect the rays by an electric field if he evacuated the discharge tube to a very low pressure. By comparing the deflection of a beam of cathode rays by electric and magnetic fields he obtained more robust measurements of the mass-to-charge ratio that confirmed his previous estimates. This became the classic means of measuring the charge-to-mass ratio of the electron. Later in 1899 he measured the charge of the electron to be of .
Thomson believed that the corpuscles emerged from the atoms of the trace gas inside his cathode-ray tubes. He thus concluded that atoms were divisible, and that the corpuscles were their building blocks. In 1904, Thomson suggested a model of the atom, hypothesizing that it was a sphere of positive matter within which electrostatic forces determined the positioning of the corpuscles. To explain the overall neutral charge of the atom, he proposed that the corpuscles were distributed in a uniform sea of positive charge. In this "plum pudding model", the electrons were seen as embedded in the positive charge like raisins in a plum pudding (although in Thomson's model they were not stationary, but orbiting rapidly).
Thomson made the discovery around the same time that Walter Kaufmann and Emil Wiechert discovered the correct mass to charge ratio of these cathode rays (electrons).
The name "electron" was adopted for these particles by the scientific community, mainly due to the advocation by George Francis FitzGerald, Joseph Larmor, and Hendrik Lorentz. The term was originally coined by George Johnstone Stoney in 1891 as a tentative name for the basic unit of electrical charge (which had then yet to be discovered). For some years Thomson resisted using the word "electron" because he didn't like how some physicists talked of a "positive electron" that was supposed to be the elementary unit of positive charge just as the "negative electron" is the elementary unit of negative charge. Thomson preferred to stick with the word "corpuscle" which he strictly defined as negatively charged. He relented by 1914, using the word "electron" in his book The Atomic Theory. In 1920, Rutherford and his fellows agreed to call the nucleus of the hydrogen ion "proton", establishing a distinct name for the smallest known positively-charged particle of matter (that can exist independently anyway).
Isotopes and mass spectrometry
In 1912, as part of his exploration into the composition of the streams of positively charged particles then known as canal rays, Thomson and his research assistant F. W. Aston channelled a stream of neon ions through a magnetic and an electric field and measured its deflection by placing a photographic plate in its path. They observed two patches of light on the photographic plate, which suggested two different parabolas of deflection, and concluded that neon is composed of atoms of two different atomic masses (neon-20 and neon-22), that is to say of two isotopes. This was the first evidence for isotopes of a stable element; Frederick Soddy had previously proposed the existence of isotopes to explain the decay of certain radioactive elements.
Thomson's separation of neon isotopes by their mass was the first example of mass spectrometry, which was subsequently improved and developed into a general method by F. W. Aston and by A. J. Dempster.
Experiments with cathode rays
Earlier, physicists debated whether cathode rays were immaterial like light ("some process in the aether") or were "in fact wholly material, and ... mark the paths of particles of matter charged with negative electricity", quoting Thomson. The aetherial hypothesis was vague, but the particle hypothesis was definite enough for Thomson to test.
Magnetic deflection
Thomson first investigated the magnetic deflection of cathode rays. Cathode rays were produced in the side tube on the left of the apparatus and passed through the anode into the main bell jar, where they were deflected by a magnet. Thomson detected their path by the fluorescence on a squared screen in the jar. He found that whatever the material of the anode and the gas in the jar, the deflection of the rays was the same, suggesting that the rays were of the same form whatever their origin.
Electrical charge
While supporters of the aetherial theory accepted the possibility that negatively charged particles are produced in Crookes tubes, they believed that they are a mere by-product and that the cathode rays themselves are immaterial. Thomson set out to investigate whether or not he could actually separate the charge from the rays.
Thomson constructed a Crookes tube with an electrometer set to one side, out of the direct path of the cathode rays. Thomson could trace the path of the ray by observing the phosphorescent patch it created where it hit the surface of the tube. Thomson observed that the electrometer registered a charge only when he deflected the cathode ray to it with a magnet. He concluded that the negative charge and the rays were one and the same.
Electrical deflection
In May–June 1897, Thomson investigated whether or not the rays could be deflected by an electric field. Previous experimenters had failed to observe this, but Thomson believed their experiments were flawed because their tubes contained too much gas.
Thomson constructed a Crookes tube with a better vacuum. At the start of the tube was the cathode from which the rays projected. The rays were sharpened to a beam by two metal slits – the first of these slits doubled as the anode, the second was connected to the earth. The beam then passed between two parallel aluminium plates, which produced an electric field between them when they were connected to a battery. The end of the tube was a large sphere where the beam would impact on the glass, creating a glowing patch. Thomson pasted a scale to the surface of this sphere to measure the deflection of the beam. Any electron beam would collide with some residual gas atoms within the Crookes tube, thereby ionizing them and producing electrons and ions in the tube (space charge); in previous experiments this space charge electrically screened the externally applied electric field. However, in Thomson's Crookes tube the density of residual atoms was so low that the space charge from the electrons and ions was insufficient to electrically screen the externally applied electric field, which permitted Thomson to successfully observe electrical deflection.
When the upper plate was connected to the negative pole of the battery and the lower plate to the positive pole, the glowing patch moved downwards, and when the polarity was reversed, the patch moved upwards.
Measurement of mass-to-charge ratio
In his classic experiment, Thomson measured the mass-to-charge ratio of the cathode rays by measuring how much they were deflected by a magnetic field and comparing this with the electric deflection. He used the same apparatus as in his previous experiment, but placed the discharge tube between the poles of a large electromagnet. He found that the mass-to-charge ratio was over a thousand times lower than that of a hydrogen ion (H+), suggesting either that the particles were very light and/or very highly charged. Significantly, the rays from every cathode yielded the same mass-to-charge ratio. This is in contrast to anode rays (now known to arise from positive ions emitted by the anode), where the mass-to-charge ratio varies from anode-to-anode. Thomson himself remained critical of what his work established, in his Nobel Prize acceptance speech referring to "corpuscles" rather than "electrons".
Thomson's calculations can be summarised as follows (in his original notation, using F instead of E for the electric field and H instead of B for the magnetic field):
The electric deflection is given by Θ = Fel/mv², where Θ is the angular electric deflection, F is the applied electric intensity, e is the charge of the cathode ray particles, l is the length of the electric plates, m is the mass of the cathode ray particles and v is the velocity of the cathode ray particles. The magnetic deflection is given by φ = Hel/mv, where φ is the angular magnetic deflection and H is the applied magnetic field intensity.
The magnetic field was varied until the magnetic and electric deflections were the same, when Θ = φ and hence v = F/H. This can be simplified to give m/e = H²l/FΘ. The electric deflection was measured separately to give Θ, and H, F and l were known, so m/e could be calculated.
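To make the algebra concrete, the balance of the two deflections can be worked through numerically. The sketch below uses made-up illustrative values (not Thomson's data) and the notation of the preceding paragraph, with F the electric and H the magnetic field strength:

```python
# Thomson's deflection balance with hypothetical numbers (SI-style units).
F = 1.5e4       # electric field strength, V/m (illustrative)
H = 5.5e-4      # magnetic field strength, T (illustrative)
l = 0.05        # length of the deflecting plates, m (illustrative)
theta = 0.08    # measured angular deflection, rad (illustrative)

v = F / H                           # equal deflections imply v = F/H
m_over_e = H**2 * l / (F * theta)   # from theta = F*e*l/(m*v^2)

print(f"velocity        v   = {v:.3e} m/s")
print(f"mass-to-charge  m/e = {m_over_e:.3e} kg/C")
```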
Conclusions
As to the source of these particles, Thomson believed they emerged from the molecules of gas in the vicinity of the cathode.
Thomson imagined the atom as being made up of these corpuscles orbiting in a sea of positive charge; this was his plum pudding model. This model was later proved incorrect when his student Ernest Rutherford showed that the positive charge is concentrated in the nucleus of the atom.
Other work
In 1905, Thomson discovered the natural radioactivity of potassium.
In 1906, Thomson demonstrated that hydrogen had only a single electron per atom. Previous theories allowed various numbers of electrons.
Awards and honours
During his life
Thomson was elected a Fellow of the Royal Society (FRS) and appointed to the Cavendish Professorship of Experimental Physics at the Cavendish Laboratory, University of Cambridge in 1884. Thomson won numerous awards and honours during his career including:
Adams Prize (1882)
Royal Medal (1894)
Hughes Medal (1902)
Hodgkins Medal (1902)
Nobel Prize for Physics (1906)
Elliott Cresson Medal (1910)
Copley Medal (1914)
Franklin Medal (1922)
Thomson was elected a fellow of the Royal Society on 12 June 1884 and served as President of the Royal Society from 1915 to 1920.
Thomson was elected an International Honorary Member of the American Academy of Arts and Sciences in 1902, and International Member of the American Philosophical Society in 1903, and the United States National Academy of Sciences in 1903.
In November 1927, Thomson opened the Thomson building, named in his honour, in the Leys School, Cambridge.
Posthumous
In 1991, the thomson (symbol: Th) was proposed as a unit to measure mass-to-charge ratio in mass spectrometry in his honour.
J J Thomson Avenue, on the University of Cambridge's West Cambridge site, is named after Thomson.
The Thomson Medal Award, sponsored by the International Mass Spectrometry Foundation, is named after Thomson.
The Institute of Physics Joseph Thomson Medal and Prize is named after Thomson.
Thomson Crescent in Deep River, Ontario, connects with Rutherford Ave.
See also
List of presidents of the Royal Society
References
Bibliography
1883. A Treatise on the Motion of Vortex Rings: An essay to which the Adams Prize was adjudged in 1882, in the University of Cambridge. London: Macmillan and Co., pp. 146. Recent reprint: .
1888. Applications of Dynamics to Physics and Chemistry. London: Macmillan and Co., pp. 326. Recent reprint: .
1893. Notes on recent researches in electricity and magnetism: intended as a sequel to Professor Clerk-Maxwell's 'Treatise on Electricity and Magnetism. Oxford University Press, pp. xvi & 578. 1991, Cornell University Monograph: .
Thomson, Joseph John (1904). Electricity and matter (in English). Oxford : Clarendon Press.
1921 (1895). Elements of the Mathematical Theory of Electricity And Magnetism. London: Macmillan and Co. Scan of 1895 edition.
A Text book of Physics in Five Volumes, co-authored with J.H. Poynting: (1) Properties of Matter, (2) Sound, (3) Heat, (4) Light, and (5) Electricity and Magnetism. Dated 1901 and later, and with revised later editions.
J.J. Thomson (1897) "Cathode Rays", The Electrician 39, 104, also published in Proceedings of the Royal Institution 30 April 1897, 1–14 – first announcement of the "corpuscle" (before the classic mass and charge experiment)
J.J. Thomson (1897), Cathode rays, Philosophical Magazine, 44, 293 – the classic measurement of the electron mass and charge
J.J. Thomson (1904), "On the Structure of the Atom: an Investigation of the Stability and Periods of Oscillation of a number of Corpuscles arranged at equal intervals around the Circumference of a Circle; with Application of the Results to the Theory of Atomic Structure," Philosophical Magazine Series 6, Volume 7, Number 39, pp. 237–265. This paper presents the classical "plum pudding model" from which the Thomson Problem is posed.
J.J. Thomson (1912), "Further experiments on positive rays" Philosophical Magazine, 24, 209–253 – first announcement of the two neon parabolae
J.J. Thomson (1913), Rays of positive electricity, Proceedings of the Royal Society, A 89, 1–20 – discovery of neon isotopes
J.J. Thomson (1923), The Electron in Chemistry: Being Five Lectures Delivered at the Franklin Institute, Philadelphia.
Thomson, Sir J. J. (1936), Recollections and Reflections, London: G. Bell & Sons, Ltd. Republished as digital edition, Cambridge: University Press, 2011 (Cambridge Library Collection series).
Thomson, George Paget. (1964) J.J. Thomson: Discoverer of the Electron. Great Britain: Thomas Nelson & Sons, Ltd.
Davis, Eward Arthur & Falconer, Isobel (1997), J.J. Thomson and the Discovery of the Electron.
Falconer, Isobel (1988) "J.J. Thomson's Work on Positive Rays, 1906–1914" Historical Studies in the Physical and Biological Sciences 18(2) 265–310
Falconer, Isobel (2001) "Corpuscles to Electrons" in J Buchwald and A Warwick (eds) Histories of the Electron, Cambridge, Mass: MIT Press, pp. 77–100.
External links
The Discovery of the Electron
with the Nobel Lecture, 11 December 1906 Carriers of Negative Electricity
Annotated bibliography for Joseph J. Thomson from the Alsos Digital Library for Nuclear Issues
Essay on Thomson life and religious views
The Cathode Ray Tube site
Thomson's discovery of the isotopes of Neon
Photos of some of Thomson's remaining apparatus at the Cavendish Laboratory Museum
A short film of Thomson lecturing on electrical engineering and the discovery of the electron (1934)
A history of the electron: JJ and GP Thomson published by the University of the Basque Country (2013)
1856 births
1940 deaths
20th-century British physicists
Alumni of Trinity College, Cambridge
Burials at Westminster Abbey
English Anglicans
20th-century British mathematicians
British Nobel laureates
Experimental physicists
Fellows of the Royal Society
Foreign associates of the National Academy of Sciences
Masters of Trinity College, Cambridge
Members of the Order of Merit
Nobel laureates in Physics
People from Cheetham Hill
Presidents of the Royal Society
Recipients of the Copley Medal
Royal Medal winners
Knights Bachelor
Second Wranglers
Alumni of the Victoria University of Manchester
Presidents of the British Science Association
Presidents of the Institute of Physics
Presidents of the Physical Society
Mass spectrometrists
Recipients of the Dalton Medal
Cavendish Professors of Physics
Recipients of Franklin Medal
Members of the American Philosophical Society
Presidents of the Cambridge Philosophical Society | J. J. Thomson | [
"Physics",
"Chemistry"
] | 4,749 | [
"Biochemists",
"Mass spectrometry",
"Spectrum (physical sciences)",
"Mass spectrometrists"
] |
70,117 | https://en.wikipedia.org/wiki/Floodplain | A floodplain or flood plain or bottomlands is an area of land adjacent to a river. Floodplains stretch from the banks of a river channel to the base of the enclosing valley, and experience flooding during periods of high discharge. The soils usually consist of clays, silts, sands, and gravels deposited during floods.
Because of regular flooding, floodplains frequently have high soil fertility since nutrients are deposited with the flood waters. This can encourage farming; some important agricultural regions, such as the Nile and Mississippi river basins, heavily exploit floodplains. Agricultural and urban regions have developed near or on floodplains to take advantage of the rich soil and freshwater. However, the risk of inundation has led to increasing efforts to control flooding.
Formation
Most floodplains are formed by deposition on the inside of river meanders and by overbank flow.
Wherever the river meanders, the flowing water erodes the river bank on the outside of the meander. At the same time, sediments are simultaneously deposited in a bar on the inside of the meander. This is described as lateral accretion since the deposition builds the point bar laterally into the river channel. Erosion on the outside of the meander usually closely balances deposition on the inside so that the channel shifts in the direction of the meander without changing significantly in width. The point bar is built up to a level very close to that of the river banks. Significant net erosion of sediments occurs only when the meander cuts into higher ground. The overall effect is that, as the river meanders, it creates a level flood plain composed mostly of point bar deposits. The rate at which the channel shifts varies greatly, with reported rates ranging from too slow to measure to as much as per year for the Kosi River of India.
Overbank flow takes place when the river is flooded with more water than can be accommodated by the river channel. Flow over the banks of the river deposits a thin veneer of sediments that is coarsest and thickest close to the channel. This is described as vertical accretion, since the deposits build upwards. In undisturbed river systems, overbank flow is frequent, typically occurring every one to two years, regardless of climate or topography. Measurements during a three-day flood of the Meuse and Rhine Rivers in 1993 found average sedimentation rates in the floodplain of between 0.57 and 1.0 kg/m². Higher rates were found on the levees (4 kg/m² or more) and in low-lying areas (1.6 kg/m²).
Sedimentation from the overbank flow is concentrated on natural levees, crevasse splays, and in wetlands and shallow lakes of flood basins. Natural levees are ridges along river banks that form from rapid deposition from the overbank flow. Most of the suspended sand is deposited on the levees, leaving the silt and clay sediments to be deposited as floodplain mud further from the river. Levees are typically built up enough to be relatively well-drained compared with nearby wetlands, and levees in non-arid climates are often heavily vegetated.
Crevasses are formed by breakout events from the main river channel. The river bank fails, and floodwaters scour a channel. Sediments from the crevasse spread out as delta-shaped deposits with numerous distributary channels. Crevasse formation is most common in sections of rivers where the river bed is accumulating sediments (aggrading).
Repeated flooding eventually builds up an alluvial ridge, whose natural levees and abandoned meander loops may stand well above most of the floodplain. The alluvial ridge is topped by a channel belt formed by successive generations of channel migration and meander cutoff. At much longer intervals, the river may abandon the channel belt and build a new one at another position on the floodplain. This process is called avulsion and occurs at intervals of 10–1000 years. Historical avulsions leading to catastrophic flooding include the 1855 Yellow River flood and the 2008 Kosi River flood.
Floodplains can form around rivers of any kind or size. Even relatively straight stretches of river are capable of producing floodplains. Mid-channel bars in braided rivers migrate downstream through processes resembling those in point bars of meandering rivers and can build up a floodplain.
The quantity of sediments in a floodplain greatly exceeds the river load of sediments. Thus, floodplains are an important storage site for sediments during their transport from where they are generated to their ultimate depositional environment.
When the rate at which the river is cutting downwards becomes great enough that overbank flows become infrequent, the river is said to have abandoned its floodplain. Portions of the abandoned floodplain may be preserved as fluvial terraces.
Ecology
Floodplains support diverse and productive ecosystems. They are characterized by considerable variability in space and time, which in turn produces some of the most species-rich of ecosystems. From the ecological perspective, the most distinctive aspect of floodplains is the flood pulse associated with annual floods, and so the floodplain ecosystem is defined as the part of the river valley that is regularly flooded and dried.
Floods bring in detrital material rich in nutrients and release nutrients from dry soil as it is flooded. The decomposition of terrestrial plants submerged by the floodwaters adds to the nutrient supply. The flooded littoral zone of the river (the zone closest to the river bank) provides an ideal environment for many aquatic species, so the spawning season for fish often coincides with the onset of flooding. Fish must grow quickly during the flood to survive the subsequent drop in water level. As the floodwaters recede, the littoral experiences blooms of microorganisms, while the banks of the river dry out and terrestrial plants germinate to stabilize the bank.
The biota of floodplains has high annual growth and mortality rates, which is advantageous for the rapid colonization of large areas of the floodplain. This allows them to take advantage of shifting floodplain geometry. For example, floodplain trees are fast-growing and tolerant of root disturbance. Opportunists (such as birds) are attracted to the rich food supply provided by the flood pulse.
Floodplain ecosystems have distinct biozones. In Europe, as one moves away from the river, the successive plant communities are bank vegetation (usually annuals); sedge and reeds; willow shrubs; willow-poplar forest; oak-ash forest; and broadleaf forest. Human disturbance creates wet meadows that replace much of the original ecosystem. The biozones reflect a soil moisture and oxygen gradient that in turn corresponds to a flooding frequency gradient. The primeval floodplain forests of Europe were dominated by oak (60%) elm (20%) and hornbeam (13%), but human disturbance has shifted the makeup towards ash (49%) with maple increasing to 14% and oak decreasing to 25%.
Semiarid floodplains have a much lower species diversity. Species are adapted to alternating drought and flood. Extreme drying can destroy the ability of the floodplain ecosystem to shift to a healthy wet phase when flooded.
Floodplain forests constituted 1% of the landscape of Europe in the 1800s. Much of this has been cleared by human activity, though floodplain forests have been impacted less than other kinds of forests. This makes them important refugia for biodiversity. Human destruction of floodplain ecosystems is largely a result of flood control, hydroelectric development (such as reservoirs), and conversion of floodplains to agriculture use. Transportation and waste disposal also have detrimental effects. The result is the fragmentation of these ecosystems, resulting in loss of populations and diversity and endangering the remaining fragments of the ecosystem. Flood control creates a sharper boundary between water and land than in undisturbed floodplains, reducing physical diversity. Floodplain forests protect waterways from erosion and pollution and reduce the impact of floodwaters.
The disturbance by humans of temperate floodplain ecosystems frustrates attempts to understand their natural behavior. Tropical rivers are less impacted by humans and provide models for temperate floodplain ecosystems, which are thought to share many of their ecological attributes.
Flood control
Excluding famines and epidemics, some of the worst natural disasters in history (measured by fatalities) have been river floods, particularly in the Yellow River in China – see list of deadliest floods. The worst of these, and the worst natural disaster (excluding famine and epidemics), was the 1931 China floods, estimated to have killed millions. This had been preceded by the 1887 Yellow River flood, which killed around one million people and is the second-worst natural disaster in history.
The extent of floodplain inundation depends partly on flood magnitude, defined by the return period.
In the United States, the Federal Emergency Management Agency (FEMA) manages the National Flood Insurance Program (NFIP). The NFIP offers insurance to properties located within a flood-prone area, as defined by the Flood Insurance Rate Map (FIRM), which depicts various flood risks for a community. The FIRM typically focuses on the delineation of the 100-year flood inundation area, also known within the NFIP as the Special Flood Hazard Area.
Where a detailed study of a waterway has been done, the 100-year floodplain will also include the floodway, the critical portion of the floodplain which includes the stream channel and any adjacent areas that must be kept free of encroachments that might block flood flows or restrict storage of flood waters. Another commonly encountered term is the Special Flood Hazard Area, which is any area subject to inundation by a 100-year flood. A problem is that any alteration of the watershed upstream of the point in question can potentially affect the ability of the watershed to handle water, and thus potentially affects the levels of the periodic floods. A large shopping center and parking lot, for example, may raise the levels of 5-year, 100-year, and other floods, but the maps are rarely adjusted and are frequently rendered obsolete by subsequent development.
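Because a 100-year flood is, by definition, a flood with a 1% chance of being equalled or exceeded in any given year, the cumulative risk over a longer horizon follows from elementary probability. A small sketch, assuming statistically independent years (a simplification):

```python
def exceedance_probability(return_period_years: float, horizon_years: int) -> float:
    """Probability of at least one flood of the given return period
    occurring within the horizon, assuming independent years."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# A "100-year" flood has about a 26% chance of occurring at least once in 30 years.
print(f"{exceedance_probability(100, 30):.1%}")   # 26.0%
print(f"{exceedance_probability(100, 100):.1%}")  # 63.4%
```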
In order for a flood-prone property to qualify for government-subsidized insurance, a local community must adopt an ordinance that protects the floodway and requires that new residential structures built in Special Flood Hazard Areas be elevated to at least the level of the 100-year flood. Commercial structures can be elevated or floodproofed to or above this level. In some areas without detailed study information, structures may be required to be elevated to at least two feet above the surrounding grade. Many State and local governments have, in addition, adopted floodplain construction regulations which are more restrictive than those mandated by the NFIP. The US government also sponsors flood hazard mitigation efforts to reduce flood impacts. California's Hazard Mitigation Program is one funding source for mitigation projects. A number of whole towns such as English, Indiana, have been completely relocated to remove them from the floodplain. Other smaller-scale mitigation efforts include acquiring and demolishing flood-prone buildings or flood-proofing them.
In some floodplains, such as the Inner Niger Delta of Mali, annual flooding events are a natural part of the local ecology and rural economy, allowing for the raising of crops through recessional agriculture. However, in Bangladesh, which occupies the Ganges Delta, the advantages provided by the richness of the alluvial soil of the floodplain are severely offset by frequent floods brought on by cyclones and annual monsoon rains. These extreme weather events cause severe economic disruption and loss of human life in the densely-populated region.
Floodplain soils
Oxygen in floodplain soils
Floodplain soil composition is unique and varies widely based on microtopography. Floodplain forests have high topographic heterogeneity which creates variation in localized hydrologic conditions. Soil moisture within the upper 30 cm of the soil profile also varies widely based on microtopography, which affects oxygen availability. Floodplain soil stays aerated for long periods in between flooding events, but during flooding, saturated soil can become oxygen-depleted if it stands stagnant for long enough. More soil oxygen is available at higher elevations farther from the river. Floodplain forests generally experience alternating periods of aerobic and anaerobic soil microbe activity, affecting fine root development and desiccation.
Phosphorus cycling in floodplain soils
Floodplains have high buffering capacity for phosphorus to prevent nutrient loss to river outputs. Phosphorus nutrient loading is a problem in freshwater systems. Much of the phosphorus in freshwater systems comes from municipal wastewater treatment plants and agricultural runoff. Stream connectivity controls whether phosphorus cycling is mediated by floodplain sediments or by external processes. Under conditions of stream connectivity, phosphorus is better able to be cycled, and sediments and nutrients are more readily retained. Water in freshwater streams ends up in either short-term storage in plants or algae or long-term in sediments. Wet/dry cycling within the floodplain greatly impacts phosphorus availability because it alters water level, redox state, pH, and physical properties of minerals. Dry soils that were previously inundated have reduced availability of phosphorus and increased affinity for obtaining phosphorus. Human floodplain alterations also impact the phosphorus cycle. Particulate phosphorus and soluble reactive phosphorus (SRP) can contribute to algal blooms and toxicity in waterways when the nitrogen-to-phosphorus ratios are altered farther upstream. In areas where the phosphorus load is primarily particulate phosphorus, like the Mississippi River, the most effective ways of removing phosphorus upstream are sedimentation, soil accretion, and burial. In basins where SRP is the primary form of phosphorus, biological uptake in floodplain forests is the best way of removing nutrients. Phosphorus can transform between SRP and particulate phosphorus depending on ambient conditions or processes like decomposition, biological uptake, redoximorphic release, and sedimentation and accretion. In either phosphorus form, floodplain forests are beneficial as phosphorus sinks, and the human-caused disconnect between floodplains and rivers exacerbates the phosphorus overload.
Environmental pollutants in floodplain soils
Floodplain soils tend to be high in eco-pollutants, especially persistent organic pollutant (POP) deposition. Proper understanding of the distribution of soil contaminants is complex because of high variation in microtopography and soil texture within floodplains.
See also
as a good example of a floodway.
References
Sources
Powell, W. Gabe. 2009. Identifying Land Use/Land Cover (LULC) Using National Agriculture Imagery Program (NAIP) Data as a Hydrologic Model Input for Local Flood Plain Management. Applied Research Project, Texas State University. http://ecommons.txstate.edu/arp/296/
External links
Flood control
Fluvial landforms
Hydrology
Wetlands | Floodplain | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 3,065 | [
"Flood control",
"Hydrology",
"Wetlands",
"Environmental engineering"
] |
70,651 | https://en.wikipedia.org/wiki/Van%20der%20Waals%20radius | The van der Waals radius, r, of an atom is the radius of an imaginary hard sphere representing the distance of closest approach for another atom.
It is named after Johannes Diderik van der Waals, winner of the 1910 Nobel Prize in Physics, as he was the first to recognise that atoms were not simply points and to demonstrate the physical consequences of their size through the van der Waals equation of state.
van der Waals volume
The van der Waals volume, V, also called the atomic volume or molecular volume, is the atomic property most directly related to the van der Waals radius. It is the volume "occupied" by an individual atom (or molecule). The van der Waals volume may be calculated if the van der Waals radii (and, for molecules, the inter-atomic distances, and angles) are known. For a single atom, it is the volume of a sphere whose radius is the van der Waals radius of the atom:
For a molecule, it is the volume enclosed by the van der Waals surface.
The van der Waals volume of a molecule is always smaller than the sum of the van der Waals volumes of the constituent atoms: the atoms can be said to "overlap" when they form chemical bonds.
The van der Waals volume of an atom or molecule may also be determined by experimental measurements on gases, notably from the van der Waals constant b, the polarizability α, or the molar refractivity A.
In all three cases, measurements are made on macroscopic samples and it is normal to express the results as molar quantities.
To find the van der Waals volume of a single atom or molecule, it is necessary to divide by the Avogadro constant N.
The molar van der Waals volume should not be confused with the molar volume of the substance.
In general, at normal laboratory temperatures and pressures, the atoms or molecules of gas only occupy about of the volume of the gas, the rest is empty space.
Hence the molar van der Waals volume, which only counts the volume occupied by the atoms or molecules, is usually about times smaller than the molar volume for a gas at standard temperature and pressure.
Table of van der Waals radii
{| class="mw-collapsible " border="0" cellpadding="0" cellspacing="1" style="text-align:center; background:; border:1px solid ; width:100%; max-width:1300px; margin:0 auto; padding:2px;"
! colspan=20 style="background:; padding:2px 4px;" | Van der Waals radius of the elements in the periodic table
|- style="background:"
! width="1.0%" | Group →
! width="5.4%" | 1
! width="5.4%" | 2
! width="1.8%" |
! width="5.4%" | 3
! width="5.4%" | 4
! width="5.4%" | 5
! width="5.4%" | 6
! width="5.4%" | 7
! width="5.4%" | 8
! width="5.4%" | 9
! width="5.4%" | 10
! width="5.4%" | 11
! width="5.4%" | 12
! width="5.4%" | 13
! width="5.4%" | 14
! width="5.4%" | 15
! width="5.4%" | 16
! width="5.4%" | 17
! width="5.4%" | 18
|-
! ↓ Period
| colspan="20"|
|-
! 1
|
| colspan="17"|
|
|-
! 2
|
|
| colspan="11"|
|
|
|
|
|
|
|-
! 3
|
|
| colspan="11"|
|
|
|
|
|
|
|-
! 4
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
! 5
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
! 6
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
! 7
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
| colspan="22"|
|-
| colspan="4"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
| colspan="4"
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|-
| colspan=19 style="text-align:left;" |Legend
|-
| colspan=19 style="text-align:left;" |Values for the van der Waals radii are in picometers (pm or )
|-
| colspan=19 style="text-align:left;" |The shade of the box ranges from red to yellow as the radius increases; Gray indicate a lack of data.
|-
| colspan=19 style="text-align:left;" |Unless indicated otherwise, the data is from Mathematica'''s ElementData function from Wolfram Research, Inc.
|-
| colspan=19 |
|}
Methods of determination
Van der Waals radii may be determined from the mechanical properties of gases (the original method), from the critical point, from measurements of atomic spacing between pairs of unbonded atoms in crystals or from measurements of electrical or optical properties (the polarizability and the molar refractivity).
These various methods give values for the van der Waals radius which are similar (1–2 Å, 100–200 pm) but not identical.
Tabulated values of van der Waals radii are obtained by taking a weighted mean of a number of different experimental values, and, for this reason, different tables will often have different values for the van der Waals radius of the same atom.
Indeed, there is no reason to assume that the van der Waals radius is a fixed property of the atom in all circumstances: rather, it tends to vary with the particular chemical environment of the atom in any given case.
Van der Waals equation of state
The van der Waals equation of state is the simplest and best-known modification of the ideal gas law to account for the behaviour of real gases:
(p + a(n/V)²)(V − nb) = nRT, where p is pressure, n is the number of moles of the gas in question, a and b depend on the particular gas, V is the volume, R is the specific gas constant on a unit mole basis and T is the absolute temperature; a is a correction for intermolecular forces and b corrects for finite atomic or molecular sizes; the value of b equals the van der Waals volume per mole of the gas.
Their values vary from gas to gas.
The van der Waals equation also has a microscopic interpretation: molecules interact with one another.
The interaction is strongly repulsive at a very short distance, becomes mildly attractive at the intermediate range, and vanishes at a long distance.
The ideal gas law must be corrected when attractive and repulsive forces are considered.
For example, the mutual repulsion between molecules has the effect of excluding neighbors from a certain amount of space around each molecule.
Thus, a fraction of the total space becomes unavailable to each molecule as it executes random motion.
In the equation of state, this volume of exclusion () should be subtracted from the volume of the container (), thus: ().
The other term that is introduced in the van der Waals equation, , describes a weak attractive force among molecules (known as the van der Waals force), which increases when increases or decreases and molecules become more crowded together.
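A minimal sketch of how the two corrections enter, with the equation solved for pressure; the constants a and b below are illustrative placeholders of roughly the right magnitude for a common gas, not tabulated values:

```python
R = 8.314  # gas constant, J/(mol*K)

def pressure_van_der_waals(n, V, T, a, b):
    """van der Waals equation solved for pressure: p = nRT/(V - nb) - a(n/V)^2."""
    return n * R * T / (V - n * b) - a * (n / V) ** 2

def pressure_ideal(n, V, T):
    return n * R * T / V

a = 0.14     # Pa*m^6/mol^2 (illustrative placeholder)
b = 3.9e-5   # m^3/mol (illustrative placeholder)
n, V, T = 1.0, 1.0e-3, 300.0   # 1 mol in 1 litre at 300 K

print(pressure_ideal(n, V, T))                 # ~2.49e6 Pa
print(pressure_van_der_waals(n, V, T, a, b))   # shifted by excluded volume and attraction
```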
The van der Waals constant b volume can be used to calculate the van der Waals volume of an atom or molecule with experimental data derived from measurements on gases.
For helium, b = 23.7 cm³/mol. Helium is a monatomic gas, and each mole of helium contains 6.022 × 10²³ atoms (the Avogadro constant, N):
Therefore, the van der Waals volume of a single atom V = b/N = 39.36 ų, which corresponds to r = 2.11 Å (≈ 200 picometers).
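A minimal sketch of that conversion, using the b value quoted above and inverting the sphere-volume formula for the radius (the same inversion recurs in the refractivity and polarizability estimates below):

```python
import math

N_A = 6.022e23   # Avogadro constant, 1/mol
b = 23.7         # van der Waals constant for helium, cm^3/mol (value from the text)

V_w = b / N_A * 1e24                      # volume per atom; 1 cm^3 = 1e24 cubic angstroms
r = (3 * V_w / (4 * math.pi)) ** (1 / 3)  # invert V = (4/3)*pi*r^3

print(f"V_w = {V_w:.2f} A^3")   # ~39.4 cubic angstroms
print(f"r   = {r:.2f} A")       # ~2.11 angstroms
```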
This method may be extended to diatomic gases by approximating the molecule as a rod with rounded ends where the diameter is 2r and the internuclear distance is d.
The algebra is more complicated, but the relation b/N = (4/3)πr³ + πr²d
can be solved by the normal methods for cubic functions.
Crystallographic measurements
The molecules in a molecular crystal are held together by van der Waals forces rather than chemical bonds.
In principle, the closest that two atoms belonging to different molecules can approach one another is given by the sum of their van der Waals radii.
By examining a large number of structures of molecular crystals, it is possible to find a minimum radius for each type of atom such that other non-bonded atoms do not encroach any closer.
This approach was first used by Linus Pauling in his seminal work The Nature of the Chemical Bond.
Arnold Bondi also conducted a study of this type, published in 1964, although he also considered other methods of determining the van der Waals radius in coming to his final estimates.
Some of Bondi's figures are given in the table at the top of this article, and they remain the most widely used "consensus" values for the van der Waals radii of the elements.
Scott Rowland and Robin Taylor re-examined these 1964 figures in the light of more recent crystallographic data: on the whole, the agreement was very good, although they recommend a value of 1.09 Å for the van der Waals radius of hydrogen as opposed to Bondi's 1.20 Å. A more recent analysis of the Cambridge Structural Database, carried out by Santiago Alvarez, provided a new set of values for 93 naturally occurring elements.
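The contact-distance idea can be mimicked in a few lines. Given atomic coordinates grouped by molecule (a toy data set below, not real crystallographic data), the van der Waals radius of an element is estimated as half the shortest contact between two atoms of that element belonging to different molecules:

```python
import itertools
import math

# Toy data: (element, (x, y, z) in angstroms), grouped by molecule.
molecules = [
    [("C", (0.0, 0.0, 0.0)), ("H", (1.1, 0.0, 0.0))],
    [("C", (3.6, 0.0, 0.0)), ("H", (2.5, 0.0, 0.0))],
]

def closest_intermolecular_contact(element):
    """Shortest distance between two atoms of `element` in different molecules."""
    best = math.inf
    for mol_a, mol_b in itertools.combinations(molecules, 2):
        for (el_a, pos_a), (el_b, pos_b) in itertools.product(mol_a, mol_b):
            if el_a == element and el_b == element:
                best = min(best, math.dist(pos_a, pos_b))
    return best

# Half the closest C...C contact estimates the van der Waals radius of carbon.
print(closest_intermolecular_contact("C") / 2)   # 1.8 with this toy data
```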
A simple example of the use of crystallographic data (here neutron diffraction) is to consider the case of solid helium, where the atoms are held together only by van der Waals forces (rather than by covalent or metallic bonds) and so the distance between the nuclei can be considered to be equal to twice the van der Waals radius.
The density of solid helium at 1.1 K and 66 atm is , corresponding to a molar volume V = .
The van der Waals volume is given by
where the factor of π/√18 arises from the packing of spheres: V = 23.0 ų, corresponding to a van der Waals radius r = 1.76 Å.
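The same arithmetic in code; the molar volume used here is an illustrative figure chosen to be consistent with the numbers quoted above, not an independently sourced measurement:

```python
import math

N_A = 6.022e23
packing_fraction = math.pi / math.sqrt(18)   # ~0.7405 for close-packed spheres

V_molar = 18.7   # cm^3/mol, illustrative value consistent with the result above

V_w = packing_fraction * V_molar / N_A * 1e24   # per-atom volume in cubic angstroms
r = (3 * V_w / (4 * math.pi)) ** (1 / 3)

print(f"V_w = {V_w:.1f} A^3")   # ~23.0
print(f"r   = {r:.2f} A")       # ~1.76
```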
Molar refractivity
The molar refractivity of a gas is related to its refractive index by the Lorentz–Lorenz equation:
The refractive index of helium n = at 0 °C and 101.325 kPa, which corresponds to a molar refractivity A = .
Dividing by the Avogadro constant gives V = 0.8685 ų, corresponding to r = 0.59 Å.
Polarizability
The polarizability α of a gas is related to its electric susceptibility χ by the relation
and the electric susceptibility may be calculated from tabulated values of the relative permittivity ε using the relation χ = ε − 1.
The electric susceptibility of helium χ = at 0 °C and 101.325 kPa, which corresponds to a polarizability α = .
The polarizability is related the van der Waals volume by the relation
so the van der Waals volume of helium V = 0.2073 ų by this method, corresponding to r = 0.37 Å.
When the atomic polarizability is quoted in units of volume such as ų, as is often the case, it is equal to the van der Waals volume.
However, the term "atomic polarizability" is preferred as polarizability is a precisely defined (and measurable) physical quantity, whereas "van der Waals volume" can have any number of definitions depending on the method of measurement.
See also
Atomic radii of the elements (data page)
van der Waals force
van der Waals molecule
van der Waals strain
van der Waals surface
References
Further reading
External links
van der Waals Radius of the elements at PeriodicTable.com
van der Waals Radius – Periodicity at WebElements.com
Chemical properties
Intermolecular forces
Radius
Atomic radius | Van der Waals radius | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,646 | [
"Molecular physics",
"Materials science",
"Intermolecular forces",
"Atomic radius",
"nan",
"Atoms",
"Matter"
] |
70,662 | https://en.wikipedia.org/wiki/Chaffing%20and%20winnowing | Chaffing and winnowing is a cryptographic technique to achieve confidentiality without using encryption when sending data over an insecure channel. The name is derived from agriculture: after grain has been harvested and threshed, it remains mixed together with inedible fibrous chaff. The chaff and grain are then separated by winnowing, and the chaff is discarded. The cryptographic technique was conceived by Ron Rivest and published in an on-line article on 18 March 1998. Although it bears similarities to both traditional encryption and steganography, it cannot be classified under either category.
This technique allows the sender to deny responsibility for encrypting their message. When using chaffing and winnowing, the sender transmits the message unencrypted, in clear text. Although the sender and the receiver share a secret key, they use it only for authentication. However, a third party can make their communication confidential by simultaneously sending specially crafted messages through the same channel.
How it works
The sender (Alice) wants to send a message to the receiver (Bob). In the simplest setup, Alice enumerates the symbols in her message and sends out each in a separate packet. If the symbols are complex enough, such as natural language text, an attacker may be able to distinguish the real symbols from poorly faked chaff symbols, posing a similar problem as steganography in needing to generate highly realistic fakes; to avoid this, the symbols can be reduced to just single 0/1 bits, and realistic fakes can then be simply randomly generated 50:50 and are indistinguishable from real symbols. In general the method requires each symbol to arrive in-order and to be authenticated by the receiver. When implemented over networks that may change the order of packets, the sender places the symbol's serial number in the packet, the symbol itself (both unencrypted), and a message authentication code (MAC). Many MACs use a secret key Alice shares with Bob, but it is sufficient that the receiver has a method to authenticate the packets.
Rivest notes that an interesting property of chaffing and winnowing is that third parties (such as an ISP) can opportunistically add it to communications without needing permission or coordination with the sender or recipient. A third party (dubbed "Charles") who transmits Alice's packets to Bob interleaves them with bogus packets (called "chaff") carrying matching serial numbers, arbitrary symbols, and a random number in place of the MAC. Charles does not need to know the key to do that (real MACs are large enough that it is extremely unlikely to generate a valid one by chance, unlike in the example). Bob uses the MAC to find the authentic messages and drops the "chaff" messages. This process is called "winnowing".
An eavesdropper located between Alice and Charles can easily read Alice's message. But an eavesdropper between Charles and Bob would have to tell which packets are bogus and which are real (i.e. to winnow, or "separate the wheat from the chaff"). That is infeasible if the MAC used is secure and Charles does not leak any information on packet authenticity (e.g. via timing).
If a fourth party joins the example (named Darth) who wants to send counterfeit messages to impersonate Alice, it would require Alice to disclose her secret key. If Darth cannot force Alice to disclose an authentication key (the knowledge of which would enable him to forge messages from Alice), then her messages will remain confidential. Charles, on the other hand, is no target of Darth's at all, since Charles does not even possess any secret keys that could be disclosed.
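A compact sketch of the whole scheme with single-bit symbols; HMAC-SHA256, the 32-byte key, and the packet layout are illustrative implementation choices, not part of Rivest's specification:

```python
import hashlib
import hmac
import os
import random

KEY = os.urandom(32)   # authentication key shared by Alice and Bob only

def mac(serial: int, bit: int, key: bytes = KEY) -> bytes:
    return hmac.new(key, f"{serial}:{bit}".encode(), hashlib.sha256).digest()

def send(bits):
    """Alice: emit one authenticated (serial, bit, MAC) packet per bit."""
    return [(i, b, mac(i, b)) for i, b in enumerate(bits)]

def chaff(packets):
    """Charles: add, for each packet, a bogus one with the opposite bit and a random 'MAC'."""
    bogus = [(i, 1 - b, os.urandom(32)) for i, b, _ in packets]
    mixed = packets + bogus
    random.shuffle(mixed)
    return mixed

def winnow(packets):
    """Bob: keep only packets whose MAC verifies, then reassemble by serial number."""
    good = {i: b for i, b, tag in packets if hmac.compare_digest(tag, mac(i, b))}
    return [good[i] for i in sorted(good)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert winnow(chaff(send(message))) == message
```

An eavesdropper between Charles and Bob sees, for every serial number, both a 0 packet and a 1 packet and cannot tell which MAC is genuine without the key.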
Variations
The simple variant of the chaffing and winnowing technique described above adds many bits of overhead per bit of original message. To make the transmission more efficient, Alice can process her message with an all-or-nothing transform and then send it out in much larger chunks. The chaff packets will have to be modified accordingly. Because the original message can be reconstructed only by knowing all of its chunks, Charles needs to send only enough chaff packets to make finding the correct combination of packets computationally infeasible.
Chaffing and winnowing lends itself especially well to use in packet-switched network environments such as the Internet, where each message (whose payload is typically small) is sent in a separate network packet. In another variant of the technique, Charles carefully interleaves packets coming from multiple senders. That eliminates the need for Charles to generate and inject bogus packets in the communication. However, the text of Alice's message cannot be well protected from other parties who are communicating via Charles at the same time. This variant also helps protect against information leakage and traffic analysis.
Implications for law enforcement
Ron Rivest suggests that laws related to cryptography, including export controls, would not apply to chaffing and winnowing because it does not employ any encryption at all.
The author of the paper proposes that the security implications of handing everyone's authentication keys to the government for law-enforcement purposes would be far too risky, since possession of the key would enable someone to masquerade and communicate as another entity, such as an airline controller. Furthermore, Ron Rivest contemplates the possibility of rogue law enforcement officials framing up innocent parties by introducing the chaff into their communications, concluding that drafting a law restricting chaffing and winnowing would be far too difficult.
Trivia
The term winnowing was suggested by Ronald Rivest's father. Before the publication of Rivest's paper in 1998 other people brought to his attention a 1965 novel, Rex Stout's The Doorbell Rang, which describes the same concept and was thus included in the paper's references.
See also
References
Cryptography | Chaffing and winnowing | [
"Mathematics",
"Engineering"
] | 1,244 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
70,671 | https://en.wikipedia.org/wiki/Stress%E2%80%93energy%20tensor | The stress–energy tensor, sometimes called the stress–energy–momentum tensor or the energy–momentum tensor, is a tensor physical quantity that describes the density and flux of energy and momentum in spacetime, generalizing the stress tensor of Newtonian physics. It is an attribute of matter, radiation, and non-gravitational force fields. This density and flux of energy and momentum are the sources of the gravitational field in the Einstein field equations of general relativity, just as mass density is the source of such a field in Newtonian gravity.
Definition
The stress–energy tensor involves the use of superscripted variables (not exponents; see Tensor index notation and Einstein summation notation). If Cartesian coordinates in SI units are used, then the components of the position four-vector x are given by: (x⁰, x¹, x², x³). In traditional Cartesian coordinates these are instead customarily written (t, x, y, z), where t is coordinate time, and x, y, and z are coordinate distances.
The stress–energy tensor is defined as the tensor T^{μν} of order two that gives the flux of the μth component of the momentum vector across a surface with constant x^ν coordinate. In the theory of relativity, this momentum vector is taken as the four-momentum. In general relativity, the stress–energy tensor is symmetric: T^{μν} = T^{νμ}.
In some alternative theories like Einstein–Cartan theory, the stress–energy tensor may not be perfectly symmetric because of a nonzero spin tensor, which geometrically corresponds to a nonzero torsion tensor.
Components
Because the stress–energy tensor is of order 2, its components can be displayed in matrix form:
where the indices μ and ν take on the values 0, 1, 2, 3.
In the following, and range from 1 through 3:
In solid state physics and fluid mechanics, the stress tensor is defined to be the spatial components of the stress–energy tensor in the proper frame of reference. In other words, the stress–energy tensor in engineering differs from the relativistic stress–energy tensor by a momentum-convective term.
Covariant and mixed forms
Most of this article works with the contravariant form, T^{μν}, of the stress–energy tensor. However, it is often convenient to work with the covariant form, T_{μν} = T^{αβ} g_{αμ} g_{βν},
or the mixed form, T^μ_ν = T^{μα} g_{αν}.
This article uses the spacelike sign convention for the metric signature.
Conservation law
In special relativity
The stress–energy tensor is the conserved Noether current associated with spacetime translations.
The divergence of the non-gravitational stress–energy is zero. In other words, non-gravitational energy and momentum are conserved: ∇_ν T^{μν} = 0.
When gravity is negligible and using a Cartesian coordinate system for spacetime, this may be expressed in terms of partial derivatives as ∂_ν T^{μν} = 0.
The integral form of the non-covariant formulation is
where is any compact four-dimensional region of spacetime; is its boundary, a three-dimensional hypersurface; and is an element of the boundary regarded as the outward pointing normal.
In flat spacetime and using Cartesian coordinates, if one combines this with the symmetry of the stress–energy tensor, one can show that angular momentum is also conserved:
In general relativity
When gravity is non-negligible or when using arbitrary coordinate systems, the divergence of the stress–energy still vanishes. But in this case, a coordinate-free definition of the divergence is used which incorporates the covariant derivative: 0 = T^{μν}{}_{;ν} = ∇_ν T^{μν} = ∂_ν T^{μν} + Γ^μ_{σν} T^{σν} + Γ^ν_{σν} T^{μσ},
where Γ^μ_{σν} is the Christoffel symbol, which is the gravitational force field.
Consequently, if is any Killing vector field, then the conservation law associated with the symmetry generated by the Killing vector field may be expressed as
The integral form of this is
In special relativity
In special relativity, the stress–energy tensor contains information about the energy and momentum densities of a given system, in addition to the momentum and energy flux densities.
Given a Lagrangian density that is a function of a set of fields and their derivatives, but explicitly not of any of the spacetime coordinates, we can construct the canonical stress–energy tensor by looking at the total derivative with respect to one of the generalized coordinates of the system. So, with our condition
By using the chain rule, we then have
Written in useful shorthand,
Then, we can use the Euler–Lagrange Equation:
And then use the fact that partial derivatives commute so that we now have
We can recognize the right hand side as a product rule. Writing it as the derivative of a product of functions tells us that
Now, in flat space, one can write . Doing this and moving it to the other side of the equation tells us that
And upon regrouping terms,
This is to say that the divergence of the tensor in the brackets is 0. Indeed, with this, we define the stress–energy tensor:
By construction it has the property that
Note that this divergenceless property of this tensor is equivalent to four continuity equations. That is, fields have at least four sets of quantities that obey the continuity equation. As an example, it can be seen that is the energy density of the system and that it is thus possible to obtain the Hamiltonian density from the stress–energy tensor.
Indeed, since this is the case, observing that , we then have
We can then conclude that the terms of represent the energy flux density of the system.
Trace
The trace of the stress–energy tensor is defined to be , so
Since ,
In general relativity
In general relativity, the symmetric stress–energy tensor acts as the source of spacetime curvature, and is the current density associated with gauge transformations of gravity which are general curvilinear coordinate transformations. (If there is torsion, then the tensor is no longer symmetric. This corresponds to the case with a nonzero spin tensor in Einstein–Cartan gravity theory.)
In general relativity, the partial derivatives used in special relativity are replaced by covariant derivatives. What this means is that the continuity equation no longer implies that the non-gravitational energy and momentum expressed by the tensor are absolutely conserved, i.e. the gravitational field can do work on matter and vice versa. In the classical limit of Newtonian gravity, this has a simple interpretation: kinetic energy is being exchanged with gravitational potential energy, which is not included in the tensor, and momentum is being transferred through the field to other bodies. In general relativity the Landau–Lifshitz pseudotensor is a unique way to define the gravitational field energy and momentum densities. Any such stress–energy pseudotensor can be made to vanish locally by a coordinate transformation.
In curved spacetime, the spacelike integral now depends on the spacelike slice, in general. There is in fact no way to define a global energy–momentum vector in a general curved spacetime.
Einstein field equations
In general relativity, the stress–energy tensor is studied in the context of the Einstein field equations, which are often written as G_{μν} + Λ g_{μν} = κ T_{μν},
where G_{μν} = R_{μν} − ½R g_{μν} is the Einstein tensor, R_{μν} is the Ricci tensor, R is the scalar curvature, g_{μν} is the metric tensor, Λ is the cosmological constant (negligible at the scale of a galaxy or smaller), and κ = 8πG/c⁴ is the Einstein gravitational constant.
Stress–energy in special situations
Isolated particle
In special relativity, the stress–energy of a non-interacting particle with rest mass and trajectory is:
where is the velocity vector (which should not be confused with four-velocity, since it is missing a )
is the Dirac delta function and is the energy of the particle.
Written in language of classical physics, the stress–energy tensor would be (relativistic mass, momentum, the dyadic product of momentum and velocity)
Stress–energy of a fluid in equilibrium
For a perfect fluid in thermodynamic equilibrium, the stress–energy tensor takes on a particularly simple form. Using the metric signature $(-,+,+,+)$,
$$T^{\mu\nu} = \left(\rho + \frac{p}{c^2}\right) u^{\mu} u^{\nu} + p\, g^{\mu\nu},$$
where $\rho$ is the mass–energy density (kilograms per cubic meter), $p$ is the hydrostatic pressure (pascals), $u^{\mu}$ is the fluid's four-velocity, and $g^{\mu\nu}$ is the matrix inverse of the metric tensor. Therefore, the trace is given by
$$T^{\mu}{}_{\mu} = g_{\mu\nu} T^{\mu\nu} = 3p - \rho c^{2}.$$
The four-velocity satisfies
$$u^{\mu} u^{\nu} g_{\mu\nu} = -c^{2}.$$
In an inertial frame of reference comoving with the fluid, better known as the fluid's proper frame of reference, the four-velocity is
$$u^{\mu} = (c, 0, 0, 0),$$
the matrix inverse of the metric tensor is simply
$$\eta^{\mu\nu} = \operatorname{diag}(-1, 1, 1, 1),$$
and the stress–energy tensor is a diagonal matrix
$$T^{\mu\nu} = \operatorname{diag}\!\left(\rho c^{2},\; p,\; p,\; p\right).$$
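A short numerical sanity check of these comoving-frame expressions; the density and pressure values, and the $(-,+,+,+)$ signature with $x^0 = ct$, are illustrative assumptions rather than data from the article:

# Minimal check of the perfect-fluid form quoted above in the comoving frame.
import numpy as np

c = 299_792_458.0          # speed of light, m/s
rho = 1.0e3                # mass-energy density, kg/m^3 (illustrative)
p = 2.0e5                  # hydrostatic pressure, Pa (illustrative)

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # Minkowski metric (covariant components)
eta_inv = np.linalg.inv(eta)           # contravariant metric

u = np.array([c, 0.0, 0.0, 0.0])       # four-velocity in the comoving frame
assert np.isclose(u @ eta @ u, -c**2)  # normalization u^mu u^nu g_{mu nu} = -c^2

# T^{mu nu} = (rho + p/c^2) u^mu u^nu + p g^{mu nu}
T = (rho + p / c**2) * np.outer(u, u) + p * eta_inv
print(np.allclose(np.diag(T), [rho * c**2, p, p, p]))   # diag(rho c^2, p, p, p)

# Trace g_{mu nu} T^{mu nu} = 3p - rho c^2, a coordinate-independent scalar
trace = np.einsum("mn,mn->", eta, T)
print(np.isclose(trace, 3 * p - rho * c**2))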
Electromagnetic stress–energy tensor
The Hilbert stress–energy tensor of a source-free electromagnetic field is
$$T^{\mu\nu} = \frac{1}{\mu_0}\left( F^{\mu\alpha} g_{\alpha\beta} F^{\nu\beta} - \frac{1}{4} g^{\mu\nu} F_{\delta\gamma} F^{\delta\gamma} \right),$$
where $F^{\mu\nu}$ is the electromagnetic field tensor.
Scalar field
The stress–energy tensor for a complex scalar field that satisfies the Klein–Gordon equation is
and when the metric is flat (Minkowski in Cartesian coordinates) its components work out to be:
Variant definitions of stress–energy
There are a number of inequivalent definitions of non-gravitational stress–energy:
Hilbert stress–energy tensor
The Hilbert stress–energy tensor is defined as the functional derivative
$$T_{\mu\nu} = \frac{-2}{\sqrt{-g}}\,\frac{\delta S_{\mathrm{matter}}}{\delta g^{\mu\nu}} = \frac{-2}{\sqrt{-g}}\,\frac{\delta\!\left(\sqrt{-g}\,\mathcal{L}_{\mathrm{matter}}\right)}{\delta g^{\mu\nu}},$$
where $S_{\mathrm{matter}}$ is the nongravitational part of the action, $\mathcal{L}_{\mathrm{matter}}$ is the nongravitational part of the Lagrangian density, and the Euler–Lagrange equation has been used. This is symmetric and gauge-invariant. See Einstein–Hilbert action for more information.
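As a worked illustration, applying this definition to the source-free electromagnetic field reproduces the tensor given in the previous section; the free-field Lagrangian and the two variational identities below are standard results quoted here for convenience, not equations taken from this article:

% Sketch: Hilbert stress-energy tensor of the free electromagnetic field (assumed conventions).
$$\mathcal{L}_{\mathrm{matter}} = -\frac{1}{4\mu_0}\, F_{\alpha\beta} F^{\alpha\beta},
\qquad
\delta\sqrt{-g} = -\tfrac{1}{2}\sqrt{-g}\, g_{\mu\nu}\,\delta g^{\mu\nu},
\qquad
\delta\!\left(F_{\alpha\beta}F^{\alpha\beta}\right) = 2\, F_{\mu\alpha} F_{\nu}{}^{\alpha}\,\delta g^{\mu\nu},$$
so that
$$T_{\mu\nu} = \frac{-2}{\sqrt{-g}}\,\frac{\delta\!\left(\sqrt{-g}\,\mathcal{L}_{\mathrm{matter}}\right)}{\delta g^{\mu\nu}}
= -2\,\frac{\partial \mathcal{L}_{\mathrm{matter}}}{\partial g^{\mu\nu}} + g_{\mu\nu}\,\mathcal{L}_{\mathrm{matter}}
= \frac{1}{\mu_0}\left( F_{\mu\alpha} F_{\nu}{}^{\alpha} - \frac{1}{4}\, g_{\mu\nu}\, F_{\alpha\beta} F^{\alpha\beta} \right).$$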
Canonical stress–energy tensor
Noether's theorem implies that there is a conserved current associated with translations through space and time; for details see the section above on the stress–energy tensor in special relativity. This is called the canonical stress–energy tensor. Generally, this is not symmetric and if we have some gauge theory, it may not be gauge invariant because space-dependent gauge transformations do not commute with spatial translations.
In general relativity, the translations are with respect to the coordinate system and as such, do not transform covariantly. See the section below on the gravitational stress–energy pseudotensor.
Belinfante–Rosenfeld stress–energy tensor
In the presence of spin or other intrinsic angular momentum, the canonical Noether stress–energy tensor fails to be symmetric. The Belinfante–Rosenfeld stress–energy tensor is constructed from the canonical stress–energy tensor and the spin current in such a way as to be symmetric and still conserved. In general relativity, this modified tensor agrees with the Hilbert stress–energy tensor.
Gravitational stress–energy
By the equivalence principle, gravitational stress–energy will always vanish locally at any chosen point in some chosen frame; therefore, gravitational stress–energy cannot be expressed as a non-zero tensor, and instead we have to use a pseudotensor.
In general relativity, there are many possible distinct definitions of the gravitational stress–energy–momentum pseudotensor. These include the Einstein pseudotensor and the Landau–Lifshitz pseudotensor. The Landau–Lifshitz pseudotensor can be reduced to zero at any event in spacetime by choosing an appropriate coordinate system.
See also
Electromagnetic stress–energy tensor
Energy condition
Energy density of electric and magnetic fields
Maxwell stress tensor
Poynting vector
Ricci calculus
Segre classification
Notes
References
Further reading
External links
Lecture, Stephan Waner
Caltech Tutorial on Relativity — A simple discussion of the relation between the stress–energy tensor of general relativity and the metric
Tensor physical quantities
Density | Stress–energy tensor | [
"Physics",
"Mathematics",
"Engineering"
] | 2,246 | [
"Tensors",
"Physical quantities",
"Quantity",
"Mass",
"Tensor physical quantities",
"Density",
"Wikipedia categories named after physical quantities",
"Matter"
] |
70,705 | https://en.wikipedia.org/wiki/Aphrodisiac | An aphrodisiac is a substance alleged to increase libido, sexual desire, sexual attraction, sexual pleasure, or sexual behavior. These substances range from a variety of plants, spices, and foods to synthetic chemicals. Natural aphrodisiacs, such as cannabis or cocaine, are classified into plant-based and non-plant-based substances. Synthetic aphrodisiacs include MDMA and methamphetamine. Aphrodisiacs can be classified by their type of effects (psychological or physiological). Aphrodisiacs that contain hallucinogenic properties, such as bufotenin, have psychological effects that can increase sexual desire and sexual pleasure. Aphrodisiacs that have smooth muscle relaxing properties, such as yohimbine, have physiological effects that can affect hormone concentrations and increase blood flow.
Aphrodisiac effects may be due to the placebo effect. Substances that inhibit effects that aphrodisiacs aim to enhance are called anaphrodisiacs, which have the opposite effects on libido.
Both males and females can potentially benefit from the use of aphrodisiacs, but they are more focused on males, as their properties tend to increase testosterone concentrations rather than estrogen concentrations. This is in part due to the historical context of aphrodisiacs, which focused solely on males. Only recently has attention been paid to understanding how aphrodisiacs can aid female sexual function. In addition, cultural influences on appropriate sexual behavior of males and females also play a part in the research gap.
History
The word comes from the Greek ἀφροδισιακόν, aphrodisiakon, i.e. "sexual, aphrodisiac", from aphrodisios, i.e. "pertaining to Aphrodite", the Greek goddess of love. Throughout human history, food, drinks, and behaviors have had a reputation for making sex more attainable and/or pleasurable. However, from a historical and scientific standpoint, the alleged results may have been mainly due to mere belief on the part of their users that they would be effective (a placebo effect). Likewise, many medicines are reported to affect libido in inconsistent or idiopathic ways: enhancing or diminishing overall sexual desire, depending on the circumstances. For example, bupropion (Wellbutrin) is known as an antidepressant that can counteract other co-prescribed antidepressants with libido-diminishing effects. However, because bupropion increases libido only when it is already impaired by related medications, it is not generally classed as an aphrodisiac.
Ancient civilizations like Chinese, Indian, Egyptian, Roman, and Greek cultures believed that certain substances could provide the key to improving sexual desire, sexual pleasure, and/or sexual behavior. This was important, because some men suffered from erectile dysfunction and could not reproduce. Men who could not impregnate their wives and father large families were seen as failures, whereas those who could were respected. Hence, a stimulant was needed. Others who did not suffer from this also desired performance enhancers. Regardless of their usage, these substances gained popularity and began to be documented, with the information passed down through generations. There are Hindu poems dating back to around 2000 to 1000 BCE that speak of performance enhancers, ingredients, and usage tips. Chinese texts date back to 2697 to 2595 BC. Roman and Chinese cultures documented their belief in aphrodisiac qualities in animal genitalia, while Egyptians wrote tips for treating erectile dysfunction. In post-classical West Africa, a volume titled Advising Men on Sexual Engagement with Their Women from the Timbuktu Manuscripts acted as a guide on aphrodisiacs and infertility remedies. It offered advice to men on "winning back" their wives. According to Hammer, "At a time when women’s sexuality was barely acknowledged in the West, the manuscript, a kind of Baedeker to orgasm, offered tips for maximizing sexual pleasure on both sides."
Ambergris, Bufo toad, yohimbine, horny goat weed, ginseng, alcohol, and certain foods are recorded throughout these texts as possessing aphrodisiac qualities. While many plants, extracts, or manufactured hormones have been proposed as aphrodisiacs, there is little high-quality clinical evidence of their efficacy or long-term safety.
There has been increasing attention in recent years surrounding the use of aphrodisiac drugs. In 2020, Brian Earp and Julian Savulescu published a philosophy book entitled Love Drugs: The Chemical Future of Relationships (UK title Love Is the Drug: The Chemical Future of Our Relationships). They argued that certain forms of medication can be ethically consumed as a "helpful complement" in relationships, both to fall in love and to fall out of it.
Types
Ambergris
Ambergris is found in the gut of sperm whales. It is commonly used in Arab cultures as relief medication for headaches or as a performance enhancer. The derived chemical Ambrein increases testosterone concentrations, triggering sexual desire and sexual behavior, but in animal studies only. Further research is needed to know the effects in humans.
Bufotenin
Bufotenin is found in the skin and glands of Bufo toads. It is commonly used in the Caribbean and in China. In the Caribbean, it is used as an aphrodisiac called 'Love Stone'. In China, it is used as a heart medication called Chan su. Research shows that the toad skin secretion containing this compound can reduce a toad’s heart rate, but its effect on humans is unknown.
Yohimbine
Yohimbine is a substance found in the bark of yohimbe trees in West Africa. It was traditionally used in West African cultures, in which the bark would be boiled and the resulting water drunk until it increased sexual desire. It has been approved by the Food and Drug Administration and can be prescribed for sexual dysfunction in the USA and Canada. It is also found in over-the-counter health products. Yohimbine is an indole alkaloid and an adrenoceptor antagonist. It affects the central nervous system, the autonomic nervous system, and the penile tissue and vascular smooth muscle cells involved in penile erection, and it is also used to treat physiologically impaired and psychogenic erectile dysfunction, preferably in combination with other treatments. Known adverse effects include nausea, anxiety, irregular heartbeats, and restlessness.
Horny goat weed
Horny goat weed (Epimedii herba) is used in Chinese folk medicine. It was thought to be useful for treating medical conditions and improving sexual desire, sexual pleasure, and/or sexual behavior. Horny goat weed contains icariin, a flavanol glycoside. Its exotic name comes from the tendency of goats in the region to seek out this weed. Once farmers saw its effects on the goat population they began to use it to increase the number of workers on their farms.
Alcohol
Alcohol has been associated as an aphrodisiac, owing to its effect as a central nervous system depressant. Depressants can increase sexual desire and sexual behavior through disinhibition. Alcohol affects people both physiologically and psychologically, and it is therefore difficult to determine exactly how people experience its aphrodisiac effects (aphrodisiac qualities or the expectancy effect). Alcohol taken in moderate quantities can elicit a positive increase in sexual desire, whereas larger quantities are associated with difficulties in reaching sexual pleasure. As the porter in Shakespeare's Macbeth observes, "it provokes the desire, but it takes away the performance". Chronic alcohol consumption is related to sexual dysfunction.
Cannabis
Marijuana reports are mixed. Half of users claim an increase in sexual desire and sexual pleasure while the other half report no effect. Consumption, individual sensitivity, and possibly marijuana strain, are factors that affect outcomes.
Food
Many cultures have turned to foods as sources of increasing sexual desire; however, significant research is lacking in the study of the aphrodisiac qualities of foods. Most claims can be linked to the placebo effect. Misconceptions revolve around the visual appearance of these foods in relation to male and female genitalia (carrots, bananas, oysters, and the like). Other beliefs arise from the thought of consuming animal genitalia and absorbing their properties (e.g. cow cod soup in Jamaica and balut in the Philippines). Korean bug is a popular aphrodisiac in China, Korea, and Southeast Asia, either eaten alive or in gelatin form. The caterpillar fungus (Ophiocordyceps sinensis) is used as an aphrodisiac in China. The story of Aphrodite, who was born from the sea, is another reason why individuals believe seafood is another source of aphrodisiacs. Foods that contain volatile oils have gained little recognition in their ability to improve sexual desire, sexual pleasure, and/or sexual behavior, because they are irritants when released through the urinary tract. Chocolate has been reported to increase sexual desire in women who consume it over those who do not. Cloves and sage have been reported to demonstrate aphrodisiac qualities, but their effects have not been specified. Tropical fruits, such as Borojó and Chontaduro, are considered to be energizers in general and sexual energizers in particular.
Ginseng
Ginseng is the root of any member of the genus Panax. Ginseng's active ingredients are ginsenosides and saponin glycosides. There are three different ways of processing ginseng. Fresh ginseng is cut at four years of growth, white ginseng is cut at four to six years of growth, and red ginseng is cut, dried, and steamed at six years of growth. Red ginseng has been reported to be the most effective aphrodisiac of the three. Known adverse effects include mild gastrointestinal upsets.
Maca is a Peruvian plant sometimes called "Peruvian ginseng", although it is not related to Panax. It has been used as a tonic to improve sexual performance.
Synthetic aphrodisiacs
Popular party substances have been reported by users to have aphrodisiac properties because of their enhancing effects on sexual pleasure. Ecstasy users have reported an increase in sexual desire and sexual pleasure; however, there have been reports of delayed orgasm in both sexes and erectile difficulties in men. Poppers, inhaled alkyl nitrites, have been linked to increased sexual pleasure. Known adverse effects are headaches, nausea, and temporary erectile difficulties.
Phenethylamines
Amphetamine, methylphenidate, and methamphetamine are phenethylamine derivatives, which increase libido and cause frequent or prolonged erections as potential adverse effects, particularly in supratherapeutic doses, when sexual hyperexcitability and hypersexuality can occur; however, in some individuals who use these drugs, libido is reduced.
2C-B was sold commercially in 5 mg pills as a purported aphrodisiac under the trade name "Erox", which was manufactured by the German pharmaceutical company Drittewelle.
Testosterone
Libido in males is linked to concentrations of sex hormones, particularly testosterone. When there is reduced sex drive in individuals with relatively low concentrations of testosterone, particularly in postmenopausal women or men over the age of 60, dietary supplements that are purported to increase serum testosterone concentrations have been used, with the intention of increasing libido, although with limited benefits. Long-term therapy with synthetic oral testosterone is associated with increased risks of cardiovascular diseases.
Risks
Solid evidence is hard to obtain, as these substances come from many different environments cross-culturally and therefore give variable results, because of variations in growth and extraction. The same is also true for synthetic substances, because variations in consumption and individual sensitivity can affect outcomes. Folk medicine and self-prescribed methods can be potentially harmful, as their adverse effects are not fully known and therefore cannot be made known to people researching this topic on the internet.
In popular culture
The invention of an aphrodisiac is the basis of a number of films including Perfume: The Story of a Murderer, Spanish Fly, She'll Follow You Anywhere, Love Potion No. 9, and A Serbian Film.
The first segment of Woody Allen's movie Everything You Always Wanted to Know About Sex* (*But Were Afraid to Ask) is called "Do Aphrodisiacs Work?", and casts Allen as a court jester trying to seduce the queen.
The "Despair Arc" of Danganronpa 3: The End of Hope's Peak High School features a class being dosed with aphrodisiacs.
In episode 2 of the anime The Apothecary Diaries, Maomao makes aphrodisiacs and three of the ladies-in-waiting eat them, unaware that they are aphrodisiacs. In the film Sexually Bewitched, a witch creates consumables that bring out and enhance the lust of whoever eats them, resulting in hijinks as the magic liberates the repressed libidos of those who eat her creations.
See also
Pheromone
Anaphrodisiac
Date rape drug
Empathogen–entactogen
Food and sexuality
Fork Me, Spoon Me, 2006 book
Hypersexuality
Hypoactive sexual desire disorder
Love potion
Phytoestrogen
Phytoandrogen
Vyleesi
Citations
General and cited references
Gabriele Froböse, Rolf Froböse, Michael Gross (Translator): Lust and Love: Is It More than Chemistry? Royal Society of Chemistry, 2006; .
Michael Scott: Pillow Talk: A Comprehensive Guide to Erotic Hypnosis and Relyfe Programming. Blue Deck Press, 2011; .
External links
Aphrodisiacs and Anti-aphrodisiacs: Three Essays on the Powers of Reproduction by John Davenport.
Drug classes defined by psychological effects
Sex and drugs | Aphrodisiac | [
"Chemistry"
] | 2,961 | [
"Pharmacology",
"Sex and drugs"
] |
71,020 | https://en.wikipedia.org/wiki/Ultraviolet%E2%80%93visible%20spectroscopy | Ultraviolet–visible spectrophotometry (UV–Vis or UV-VIS) refers to absorption spectroscopy or reflectance spectroscopy in part of the ultraviolet and the full, adjacent visible regions of the electromagnetic spectrum. Being relatively inexpensive and easily implemented, this methodology is widely used in diverse applied and fundamental applications. The only requirement is that the sample absorb in the UV-Vis region, i.e. be a chromophore. Absorption spectroscopy is complementary to fluorescence spectroscopy. Parameters of interest, besides the wavelength of measurement, are absorbance (A) or transmittance (%T) or reflectance (%R), and its change with time.
A UV-Vis spectrophotometer is an analytical instrument that measures the amount of ultraviolet (UV) and visible light that is absorbed by a sample. It is a widely used technique in chemistry, biochemistry, and other fields, to identify and quantify compounds in a variety of samples.
UV-Vis spectrophotometers work by passing a beam of light through the sample and measuring the amount of light that is absorbed at each wavelength. The amount of light absorbed is proportional to the concentration of the absorbing compound in the sample.
Optical transitions
Most molecules and ions absorb energy in the ultraviolet or visible range, i.e., they are chromophores. The absorbed photon excites an electron in the chromophore to higher energy molecular orbitals, giving rise to an excited state. For organic chromophores, four possible types of transitions are assumed: π–π*, n–π*, σ–σ*, and n–σ*. Transition metal complexes are often colored (i.e., absorb visible light) owing to the presence of multiple electronic states associated with incompletely filled d orbitals.
Applications
UV-Vis can be used to monitor structural changes in DNA.
UV-Vis spectroscopy is routinely used in analytical chemistry for the quantitative determination of diverse analytes in a sample, such as transition metal ions, highly conjugated organic compounds, and biological macromolecules. Spectroscopic analysis is commonly carried out in solution, but solids and gases may also be studied.
Organic compounds, especially those with a high degree of conjugation, also absorb light in the UV or visible regions of the electromagnetic spectrum. The solvents for these determinations are often water for water-soluble compounds, or ethanol for organic-soluble compounds. (Organic solvents may have significant UV absorption; not all solvents are suitable for use in UV spectroscopy. Ethanol absorbs very weakly at most wavelengths.) Solvent polarity and pH can affect the absorption spectrum of an organic compound. Tyrosine, for example, increases in absorption maxima and molar extinction coefficient when pH increases from 6 to 13 or when solvent polarity decreases.
While charge transfer complexes also give rise to colours, the colours are often too intense to be used for quantitative measurement.
The Beer–Lambert law states that the absorbance of a solution is directly proportional to the concentration of the absorbing species in the solution and the path length. Thus, for a fixed path length, UV-Vis spectroscopy can be used to determine the concentration of the absorber in a solution. It is necessary to know how quickly the absorbance changes with concentration. This can be taken from references (tables of molar extinction coefficients), or more accurately, determined from a calibration curve.
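A minimal sketch of that calibration-curve workflow; the standard concentrations and absorbance readings below are made-up illustrative numbers, not data from any referenced measurement:

# Fit a linear calibration curve (A vs. c) to standards, then invert it for an unknown.
import numpy as np

conc_std = np.array([0.00, 0.10, 0.20, 0.40, 0.80])   # mol/L, known standards
abs_std = np.array([0.00, 0.11, 0.20, 0.41, 0.79])    # measured absorbance, AU

slope, intercept = np.polyfit(conc_std, abs_std, 1)   # A ~ slope*c + intercept
a_unknown = 0.33                                      # absorbance of the unknown
c_unknown = (a_unknown - intercept) / slope
print(f"estimated concentration: {c_unknown:.3f} mol/L")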
A UV-Vis spectrophotometer may be used as a detector for HPLC. The presence of an analyte gives a response assumed to be proportional to the concentration. For accurate results, the instrument's response to the analyte in the unknown should be compared with the response to a standard; this is very similar to the use of calibration curves. The response (e.g., peak height) for a particular concentration is known as the response factor.
The wavelengths of absorption peaks can be correlated with the types of bonds in a given molecule and are valuable in determining the functional groups within a molecule. The Woodward–Fieser rules, for instance, are a set of empirical observations used to predict λmax, the wavelength of the most intense UV-Vis absorption, for conjugated organic compounds such as dienes and ketones. The spectrum alone is not, however, a specific test for any given sample. The nature of the solvent, the pH of the solution, temperature, high electrolyte concentrations, and the presence of interfering substances can influence the absorption spectrum. Experimental variations such as the slit width (effective bandwidth) of the spectrophotometer will also alter the spectrum. To apply UV-Vis spectroscopy to analysis, these variables must be controlled or accounted for in order to identify the substances present.
The method is most often used in a quantitative way to determine concentrations of an absorbing species in solution, using the Beer–Lambert law:
$$A = \log_{10}\!\left(\frac{I_0}{I}\right) = \varepsilon\, c\, L,$$
where A is the measured absorbance (formally dimensionless but generally reported in absorbance units (AU)), $I_0$ is the intensity of the incident light at a given wavelength, $I$ is the transmitted intensity, L the path length through the sample, and c the concentration of the absorbing species. For each species and wavelength, ε is a constant known as the molar absorptivity or extinction coefficient. This constant is a fundamental molecular property in a given solvent, at a particular temperature and pressure, and has units of L mol⁻¹ cm⁻¹ (equivalently M⁻¹ cm⁻¹) when c is expressed in mol/L and L in cm.
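A small sketch applying that relationship directly; the extinction coefficient, path length, and intensities are placeholder values chosen for illustration:

# Direct use of the Beer-Lambert law: A = log10(I0/I) = eps * c * L.
import math

def absorbance(i_incident: float, i_transmitted: float) -> float:
    """A = log10(I0 / I)."""
    return math.log10(i_incident / i_transmitted)

def concentration(a: float, eps: float, path_cm: float) -> float:
    """c = A / (eps * L), with eps in L mol^-1 cm^-1 and L in cm."""
    return a / (eps * path_cm)

a = absorbance(100.0, 31.6)                              # about 0.5 AU
print(f"c = {concentration(a, eps=5000.0, path_cm=1.0):.2e} mol/L")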
The absorbance and extinction ε are sometimes defined in terms of the natural logarithm instead of the base-10 logarithm.
The Beer–Lambert law is useful for characterizing many compounds but does not hold as a universal relationship for the concentration and absorption of all substances. A 2nd order polynomial relationship between absorption and concentration is sometimes encountered for very large, complex molecules such as organic dyes (xylenol orange or neutral red, for example).
UV–Vis spectroscopy is also used in the semiconductor industry to measure the thickness and optical properties of thin films on a wafer. UV–Vis spectrometers are used to measure the reflectance of light, and can be analyzed via the Forouhi–Bloomer dispersion equations to determine the index of refraction ($n$) and the extinction coefficient ($k$) of a given film across the measured spectral range.
Practical considerations
The Beer–Lambert law has implicit assumptions that must be met experimentally for it to apply; otherwise there is a possibility of deviations from the law. For instance, the chemical makeup and physical environment of the sample can alter its extinction coefficient. The chemical and physical conditions of a test sample therefore must match reference measurements for conclusions to be valid. Worldwide, pharmacopoeias such as the American (USP) and European (Ph. Eur.) pharmacopeias demand that spectrophotometers perform according to strict regulatory requirements encompassing factors such as stray light and wavelength accuracy.
Spectral bandwidth
Spectral bandwidth of a spectrophotometer is the range of wavelengths that the instrument transmits through a sample at a given time. It is determined by the light source, the monochromator, its physical slit-width and optical dispersion and the detector of the spectrophotometer. The spectral bandwidth affects the resolution and accuracy of the measurement. A narrower spectral bandwidth provides higher resolution and accuracy, but also requires more time and energy to scan the entire spectrum. A wider spectral bandwidth allows for faster and easier scanning, but may result in lower resolution and accuracy, especially for samples with overlapping absorption peaks. Therefore, choosing an appropriate spectral bandwidth is important for obtaining reliable and precise results.
It is important to have a monochromatic source of radiation for the light incident on the sample cell in order to enhance the linearity of the response. The closer the transmitted band is to truly monochromatic light, the more linear the response will be. The spectral bandwidth is measured as the width of the range of wavelengths transmitted at half the maximum intensity of the light leaving the monochromator.
The best spectral bandwidth achievable is a specification of the UV spectrophotometer, and it characterizes how monochromatic the incident light can be. If this bandwidth is comparable to (or more than) the width of the absorption peak of the sample component, then the measured extinction coefficient will not be accurate. In reference measurements, the instrument bandwidth (bandwidth of the incident light) is kept below the width of the spectral peaks. When a test material is being measured, the bandwidth of the incident light should also be sufficiently narrow. Reducing the spectral bandwidth reduces the energy passed to the detector and will, therefore, require a longer measurement time to achieve the same signal to noise ratio.
Wavelength error
The extinction coefficient of an analyte in solution changes gradually with wavelength. A peak (a wavelength where the absorbance reaches a maximum) in the absorbance curve vs wavelength, i.e. the UV-VIS spectrum, is where the rate of change of absorbance with wavelength is the lowest. Therefore, quantitative measurements of a solute are usually conducted, using a wavelength around the absorbance peak, to minimize inaccuracies produced by errors in wavelength, due to the change of extinction coefficient with wavelength.
Stray light
Stray light in a UV spectrophotometer is any light that reaches its detector that is not of the wavelength selected by the monochromator. This can be caused, for instance, by scattering of light within the instrument, or by reflections from optical surfaces.
Stray light can cause significant errors in absorbance measurements, especially at high absorbances, because the stray light will be added to the signal detected by the detector, even though it is not part of the actually selected wavelength. The result is that the measured and reported absorbance will be lower than the actual absorbance of the sample.
The stray light is an important factor, as it determines the purity of the light used for the analysis. The most important factor affecting it is the stray light level of the monochromator.
Typically a detector used in a UV-VIS spectrophotometer is broadband; it responds to all the light that reaches it. If a significant amount of the light passed through the sample contains wavelengths that have much lower extinction coefficients than the nominal one, the instrument will report an incorrectly low absorbance. Any instrument will reach a point where an increase in sample concentration will not result in an increase in the reported absorbance, because the detector is simply responding to the stray light. In practice the concentration of the sample or the optical path length must be adjusted to place the unknown absorbance within a range that is valid for the instrument. Sometimes an empirical calibration function is developed, using known concentrations of the sample, to allow measurements into the region where the instrument is becoming non-linear.
As a rough guide, an instrument with a single monochromator would typically have a stray light level corresponding to about 3 Absorbance Units (AU), which would make measurements above about 2 AU problematic. A more complex instrument with a double monochromator would have a stray light level corresponding to about 6 AU, which would therefore allow measuring a much wider absorbance range.
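A rough numerical illustration of that saturation effect; the stray-light fraction and intensity scale below are assumptions chosen to mimic the roughly 3 AU single-monochromator case, not manufacturer data:

# Apparent absorbance when a fixed stray-light level reaches the detector.
import numpy as np

I0 = 1.0                       # incident intensity (arbitrary units)
stray = 1e-3 * I0              # stray light ~0.1% of I0, i.e. roughly a 3 AU floor
true_A = np.linspace(0.0, 5.0, 11)

I = I0 * 10.0 ** (-true_A)     # ideal transmitted intensity
measured_A = -np.log10((I + stray) / (I0 + stray))

for t, m in zip(true_A, measured_A):
    print(f"true {t:4.1f} AU -> measured {m:5.2f} AU")
# The measured values level off near -log10(stray/I0), i.e. about 3 AU here.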
Deviations from the Beer–Lambert law
At sufficiently high concentrations, the absorption bands will saturate and show absorption flattening. The absorption peak appears to flatten because close to 100% of the light is already being absorbed. The concentration at which this occurs depends on the particular compound being measured. One test that can be used to test for this effect is to vary the path length of the measurement. In the Beer–Lambert law, varying concentration and path length has an equivalent effect—diluting a solution by a factor of 10 has the same effect as shortening the path length by a factor of 10. If cells of different path lengths are available, testing if this relationship holds true is one way to judge if absorption flattening is occurring.
Solutions that are not homogeneous can show deviations from the Beer–Lambert law because of the phenomenon of absorption flattening. This can happen, for instance, where the absorbing substance is located within suspended particles. The deviations will be most noticeable under conditions of low concentration and high absorbance. The last reference describes a way to correct for this deviation.
Some solutions, like copper(II) chloride in water, change visually at a certain concentration because of changed conditions around the coloured ion (the divalent copper ion). For copper(II) chloride it means a shift from blue to green, which would mean that monochromatic measurements would deviate from the Beer–Lambert law.
Measurement uncertainty sources
The above factors contribute to the measurement uncertainty of the results obtained with UV-Vis spectrophotometry. If UV-Vis spectrophotometry is used in quantitative chemical analysis then the results are additionally affected by uncertainty sources arising from the nature of the compounds and/or solutions that are measured. These include spectral interferences caused by absorption band overlap, fading of the color of the absorbing species (caused by decomposition or reaction) and possible composition mismatch between the sample and the calibration solution.
Ultraviolet–visible spectrophotometer
The instrument used in ultraviolet–visible spectroscopy is called a UV-Vis spectrophotometer. It measures the intensity of light after passing through a sample ($I$), and compares it to the intensity of light before it passes through the sample ($I_0$). The ratio $I/I_0$ is called the transmittance, and is usually expressed as a percentage (%T). The absorbance, $A$, is based on the transmittance:
$$A = -\log_{10}\!\left(\frac{\%T}{100\%}\right).$$
The UV–visible spectrophotometer can also be configured to measure reflectance. In this case, the spectrophotometer measures the intensity of light reflected from a sample ($I$), and compares it to the intensity of light reflected from a reference material ($I_0$) (such as a white tile). The ratio $I/I_0$ is called the reflectance, and is usually expressed as a percentage (%R).
The basic parts of a spectrophotometer are a light source, a holder for the sample, a diffraction grating or a prism as a monochromator to separate the different wavelengths of light, and a detector. The radiation source is often a tungsten filament (300–2500 nm), a deuterium arc lamp, which is continuous over the ultraviolet region (190–400 nm), a xenon arc lamp, which is continuous from 160 to 2,000 nm; or more recently, light emitting diodes (LED) for the visible wavelengths. The detector is typically a photomultiplier tube, a photodiode, a photodiode array or a charge-coupled device (CCD). Single photodiode detectors and photomultiplier tubes are used with scanning monochromators, which filter the light so that only light of a single wavelength reaches the detector at one time. The scanning monochromator moves the diffraction grating to "step-through" each wavelength so that its intensity may be measured as a function of wavelength. Fixed monochromators are used with CCDs and photodiode arrays. As both of these devices consist of many detectors grouped into one or two dimensional arrays, they are able to collect light of different wavelengths on different pixels or groups of pixels simultaneously.
A spectrophotometer can be either single beam or double beam. In a single beam instrument (such as the Spectronic 20), all of the light passes through the sample cell. $I_0$ must be measured by removing the sample. This was the earliest design and is still in common use in both teaching and industrial labs.
In a double-beam instrument, the light is split into two beams before it reaches the sample. One beam is used as the reference; the other beam passes through the sample. The reference beam intensity is taken as 100% Transmission (or 0 Absorbance), and the measurement displayed is the ratio of the two beam intensities. Some double-beam instruments have two detectors (photodiodes), and the sample and reference beam are measured at the same time. In other instruments, the two beams pass through a beam chopper, which blocks one beam at a time. The detector alternates between measuring the sample beam and the reference beam in synchronism with the chopper. There may also be one or more dark intervals in the chopper cycle. In this case, the measured beam intensities may be corrected by subtracting the intensity measured in the dark interval before the ratio is taken.
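A tiny sketch of that dark-interval correction for a chopped double-beam instrument; the raw counts are invented placeholder values, not readings from any real detector:

# Dark-corrected double-beam ratio: subtract the dark reading from both beams
# before taking the ratio; numbers are illustrative detector counts.
sample_counts = 4200.0
reference_counts = 9800.0
dark_counts = 150.0

transmittance = (sample_counts - dark_counts) / (reference_counts - dark_counts)
print(f"%T = {100.0 * transmittance:.1f}")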
In a single-beam instrument, the cuvette containing only a solvent has to be measured first. Mettler Toledo developed a single beam array spectrophotometer that allows fast and accurate measurements over the UV-Vis range. The light source consists of a Xenon flash lamp for the ultraviolet (UV) as well as for the visible (VIS) and near-infrared wavelength regions, covering a spectral range from 190 up to 1100 nm. The lamp flashes are focused on a glass fiber which directs the beam of light onto a cuvette containing the sample solution. The beam passes through the sample and specific wavelengths are absorbed by the sample components. The remaining light is collected after the cuvette by a glass fiber and guided into a spectrograph. The spectrograph consists of a diffraction grating that separates the light into its component wavelengths and a CCD sensor that records the data. The whole spectrum is thus simultaneously measured, allowing for fast recording.
Samples for UV-Vis spectrophotometry are most often liquids, although the absorbance of gases and even of solids can also be measured. Samples are typically placed in a transparent cell, known as a cuvette. Cuvettes are typically rectangular in shape, commonly with an internal width of 1 cm. (This width becomes the path length, $L$, in the Beer–Lambert law.) Test tubes can also be used as cuvettes in some instruments. The type of sample container used must allow radiation to pass over the spectral region of interest. The most widely applicable cuvettes are made of high quality fused silica or quartz glass because these are transparent throughout the UV, visible and near infrared regions. Glass and plastic cuvettes are also common, although glass and most plastics absorb in the UV, which limits their usefulness to visible wavelengths.
Specialized instruments have also been made. These include attaching spectrophotometers to telescopes to measure the spectra of astronomical features. UV–visible microspectrophotometers consist of a UV–visible microscope integrated with a UV–visible spectrophotometer.
A complete spectrum of the absorption at all wavelengths of interest can often be produced directly by a more sophisticated spectrophotometer. In simpler instruments the absorption is determined one wavelength at a time and then compiled into a spectrum by the operator. By removing the concentration dependence, the extinction coefficient (ε) can be determined as a function of wavelength.
Microspectrophotometry
UV–visible spectroscopy of microscopic samples is done by integrating an optical microscope with UV–visible optics, white light sources, a monochromator, and a sensitive detector such as a charge-coupled device (CCD) or photomultiplier tube (PMT). As only a single optical path is available, these are single beam instruments. Modern instruments are capable of measuring UV–visible spectra in both reflectance and transmission of micron-scale sampling areas. The advantage of using such instruments is that they are able to measure microscopic samples but are also able to measure the spectra of larger samples with high spatial resolution. As such, they are used in the forensic laboratory to analyze the dyes and pigments in individual textile fibers, microscopic paint chips and the color of glass fragments. They are also used in materials science and biological research and for determining the energy content of coal and petroleum source rock by measuring the vitrinite reflectance. Microspectrophotometers are used in the semiconductor and micro-optics industries for monitoring the thickness of thin films after they have been deposited. In the semiconductor industry, they are used because the critical dimensions of circuitry are microscopic. A typical test of a semiconductor wafer would entail the acquisition of spectra from many points on a patterned or unpatterned wafer. The thickness of the deposited films may be calculated from the interference pattern of the spectra. In addition, ultraviolet–visible spectrophotometry can be used to determine the thickness of thin films, along with their refractive index and extinction coefficient. A map of the film thickness across the entire wafer can then be generated and used for quality control purposes.
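A back-of-the-envelope sketch of that interference-based thickness estimate; the refractive index, the two wavelengths, and the normal-incidence, dispersion-free assumptions are illustrative choices, not values from the article:

# Film thickness from two adjacent reflectance maxima at normal incidence,
# assuming 2*n*d = m*lambda and negligible dispersion; numbers are illustrative.
def film_thickness_nm(lambda_long_nm: float, lambda_short_nm: float, n: float) -> float:
    """Adjacent maxima give d = 1 / (2*n*(1/lambda_short - 1/lambda_long))."""
    return 1.0 / (2.0 * n * (1.0 / lambda_short_nm - 1.0 / lambda_long_nm))

# e.g. an SiO2-like film (n ~ 1.46) with neighbouring maxima at 632 nm and 541 nm
print(f"{film_thickness_nm(632.0, 541.0, 1.46):.0f} nm")   # roughly 1.3 micrometres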
Additional applications
UV-Vis can be applied to characterize the rate of a chemical reaction. Illustrative is the conversion of the yellow-orange and blue isomers of mercury dithizonate. This method of analysis relies on the fact that absorbance is linearly proportional to concentration. The same approach allows determination of equilibria between chromophores.
From the spectrum of burning gases, it is possible to determine a chemical composition of a fuel, temperature of gases, and air-fuel ratio.
See also
Applied spectroscopy
Benesi–Hildebrand method
Color – Vis spectroscopy with the human eye
Charge modulation spectroscopy
DU spectrophotometer – first UV–Vis instrument
Fourier-transform spectroscopy
Infrared spectroscopy and Raman spectroscopy are other common spectroscopic techniques, usually used to obtain information about the structure of compounds or to identify compounds. Both are forms of vibrational spectroscopy.
Isosbestic point – a wavelength where absorption does not change as the reaction proceeds. Important in kinetics measurements as a control.
Near-infrared spectroscopy
Rotational spectroscopy
Slope spectroscopy
Ultraviolet–visible spectroscopy of stereoisomers
Vibrational spectroscopy
References
Absorption spectroscopy
Scientific techniques | Ultraviolet–visible spectroscopy | [
"Physics",
"Chemistry"
] | 4,443 | [
"Spectroscopy",
"Spectrum (physical sciences)",
"Absorption spectroscopy"
] |
71,119 | https://en.wikipedia.org/wiki/Wigner%27s%20friend | Wigner's friend is a thought experiment in theoretical quantum physics, first published by the Hungarian-American physicist Eugene Wigner in 1961, and further developed by David Deutsch in 1985. The scenario involves an indirect observation of a quantum measurement: An observer observes another observer who performs a quantum measurement on a physical system. The two observers then formulate a statement about the physical system's state after the measurement according to the laws of quantum theory. In the Copenhagen interpretation, the resulting statements of the two observers contradict each other. This reflects a seeming incompatibility of two laws in the Copenhagen interpretation: the deterministic and continuous time evolution of the state of a closed system and the nondeterministic, discontinuous collapse of the state of a system upon measurement. Wigner's friend is therefore directly linked to the measurement problem in quantum mechanics with its famous Schrödinger's cat paradox.
Generalizations and extensions of Wigner's friend have been proposed. Two such scenarios involving multiple friends have been implemented in a laboratory, using photons to stand in for the friends.
Original paradox
Wigner introduced the thought experiment in a 1961 article "Remarks on the Mind-Body Question". He begins by noting that most physicists in the then-recent past had been thoroughgoing materialists who would insist that "mind" or "soul" are illusory, and that nature is fundamentally deterministic. He argues that quantum physics has changed this situation:
All that quantum mechanics purports to provide are probability connections between subsequent impressions (also called "apperceptions") of the consciousness, and even though the dividing line between the observer, whose consciousness is being affected, and the observed physical object can be shifted towards the one or the other to a considerable degree, it cannot be eliminated.
Nature of the wave function
Going into more detail, Wigner says:
Given any object, all the possible knowledge concerning that object can be given as its wave function. This is a mathematical concept the exact nature of which need not concern us here—it is composed of a (countable) infinity of numbers. If one knows these numbers, one can foresee the behavior of the object as far as it can be foreseen. More precisely, the wave function permits one to foretell with what probabilities the object will make one or another impression on us if we let it interact with us either directly, or indirectly. [...] In fact, the wave function is only a suitable language for describing the body of knowledge—gained by observations—which is relevant for predicting the future behaviour of the system. For this reason, the interactions which may create one or another sensation in us are also called observations, or measurements. One realises that all the information which the laws of physics provide consists of probability connections between subsequent impressions that a system makes on one if one interacts with it repeatedly, i.e., if one makes repeated measurements on it. The wave function is a convenient summary of that part of the past impressions which remains relevant for the probabilities of receiving the different possible impressions when interacting with the system at later times.
The wave function of an object "exists" (Wigner's quotation marks) because observers can share it:
The information given by the wave function is communicable. If someone else somehow determines the wave function of a system, he can tell me about it and, according to the theory, the probabilities for the possible different impressions (or "sensations") will be equally large, no matter whether he or I interact with the system in a given fashion.
Observing a system causes its wave functions to change indeterministically, because "the entering of an impression into our consciousness" implies a revision of "the probabilities for different impressions which we expect to receive in the future".
The observer observed
Wigner presents two arguments for the thesis that the mind influences the body, i.e., that a human body can "deviate from the laws of physics" as deduced from experimenting upon inanimate objects. The argument that he personally finds less persuasive is the one that has become known as "Wigner's friend". In this thought experiment, Wigner posits that his friend is in a laboratory, and Wigner lets the friend perform a quantum measurement on a physical system (this could be a spin system). This system is assumed to be in a superposition of two distinct states, say, state 0 and state 1 (or $|0\rangle$ and $|1\rangle$ in Dirac notation). When Wigner's friend measures the system in the {0,1}-basis, according to quantum mechanics, they will get one of the two possible outcomes (0 or 1) and the system will collapse into the corresponding state.
Now Wigner himself models the scenario from outside the laboratory, knowing that inside, his friend will at some point perform the 0/1-measurement on the physical system. According to the linearity of the quantum mechanical equations, Wigner will assign a superposition state to the whole laboratory (i.e. the joint system of the physical system together with the friend): The superposition state of the lab is then a linear combination of "system is in state 0 — friend has measured 0" and "system is in state 1 — friend has measured 1".
Let Wigner now ask his friend for the result of the measurement. Whichever answer the friend gives (0 or 1), Wigner would then assign the state "system is in state 0 — friend has measured 0" or "system is in state 1 — friend has measured 1" to the laboratory. Therefore, it is only at the time when he learns about his friend's result that the superposition state of the laboratory collapses.
However, unless Wigner is considered in a "privileged position as ultimate observer", the friend's point of view must be regarded as equally valid, and this is where an apparent paradox comes into play: From the point of view of the friend, the measurement result was determined long before Wigner had asked about it, and the state of the physical system has already collapsed. When exactly did the collapse occur? Was it when the friend had finished their measurement, or when the information of its result entered Wigner's consciousness? As Wigner says, he could ask his friend, "What did you feel about the [measurement result] before I asked you?" The question of what result the friend has seen is surely "already decided in his mind", Wigner writes, which implies that the friend–system joint state must already be one of the collapsed options, not a superposition of them. Wigner concludes that the linear time evolution of quantum states according to the Schrödinger equation cannot apply when the physical entity involved is a conscious being.
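A toy numerical sketch of the two competing state assignments; representing the friend's memory by a single qubit, starting the record in a fixed state, and choosing equal amplitudes are simplifying assumptions made here for illustration only:

# Two-qubit toy model: qubit 1 is the measured system, qubit 2 stands in for
# the friend's record; labels and amplitudes are illustrative choices.
import numpy as np

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# System qubit prepared in an equal superposition; friend's record starts in |0>
psi_system = (ket0 + ket1) / np.sqrt(2)
initial = np.kron(psi_system, ket0)

# Wigner models the friend's measurement as a unitary (CNOT-like) interaction
# that copies the system value into the record, producing an entangled state.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
wigner_state = cnot @ initial            # (|0,0> + |1,1>) / sqrt(2)

# The friend, having obtained (say) outcome 1, assigns a collapsed product state.
friend_state = np.kron(ket1, ket1)       # "system is 1" (x) "friend has measured 1"

# Both assignments predict perfect system-record correlation, yet they differ.
overlap = abs(friend_state @ wigner_state) ** 2
print(f"squared overlap of the two assignments: {overlap:.2f}")   # 0.50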
Wigner presents his second argument, which he finds more persuasive, much more briefly:
The second argument to support the existence of an influence of the consciousness on the physical world is based on the observation that we do not know of any phenomenon in which one subject is influenced by another without exerting an influence thereupon. This appears convincing to this writer.
As a reductio ad absurdum
According to physicist Leslie Ballentine, by 1987 Wigner had decided that consciousness does not cause a physical collapse of the wavefunction, although he still believed that the chain of inferences leading up to that conclusion was correct. As Ballentine recounts, Wigner regarded his 1961 argument as a reductio ad absurdum, indicating that the postulates of quantum mechanics need to be revised in some way.
Responses in different interpretations of quantum mechanics
Many-worlds interpretations
The various versions of the many worlds interpretation avoid the need to postulate that consciousness causes collapse – indeed, that collapse occurs at all.
Hugh Everett III's doctoral thesis "'Relative state' formulation of quantum mechanics" serves as the foundation for today's many versions of many-worlds interpretations. In the introductory part of his work, Everett discusses the "amusing, but extremely hypothetical drama" of the Wigner's friend paradox. There is evidence of a drawing of the scenario in an early draft of Everett's thesis. It was therefore Everett who provided the first written discussion of the problem, four or five years before it was discussed in "Remarks on the mind-body question" by Wigner, from whom it received its name and fame thereafter. However, since Everett was a student of Wigner's, it is clear that they must have discussed the scenario together at some point.
In contrast to his teacher Wigner, who held the consciousness of an observer to be responsible for a collapse, Everett understands the Wigner's friend scenario in a different way: insisting that quantum state assignments should be objective and nonperspectival, Everett derives a straightforward logical contradiction when letting Wigner and the friend reason about the laboratory's state, that is, the friend together with the measured spin system. For Everett, the Wigner's friend scenario thus exhibits an incompatibility between the collapse postulate for describing measurements and the deterministic evolution of closed systems. In the context of his new theory, Everett claims to solve the Wigner's friend paradox by allowing only a continuous unitary time evolution of the wave function of the universe. However, there is no evidence of any written argument of Everett's on the topic.
In many-worlds interpretations, measurements are modelled as interactions between subsystems of the universe and manifest themselves as a branching of the universal state. The different branches account for the different possible measurement outcomes and are seen to exist as subjective experiences of the corresponding observers. In this view, the friend's measurement of the spin results in a branching of the world into two parallel worlds, one in which the friend has measured the spin to be 1, and another in which the friend has received the measurement outcome 0. If Wigner then measures the combined system of friend and spin at a later time, the world again splits into two parallel parts.
Objective-collapse theories
According to objective-collapse theories, wave-function collapse occurs when a superposed system reaches a certain objective threshold of size or complexity. Objective-collapse proponents would expect a system as macroscopic as a cat to have collapsed before the box was opened, so the question of observation-of-observers does not arise for them. If the measured system were much simpler (such as a single spin state), then once the observation was made, the system would be expected to collapse, since the larger system of the scientist, equipment, and room would be considered far too complex to become entangled in the superposition.
Relational quantum mechanics
Relational quantum mechanics (RQM) was developed in 1996 by Carlo Rovelli and is one of the more recent interpretations of quantum mechanics. In RQM, any physical system can play the role of an observing system, to which any other system may display "facts" about physical variables. This inherent relativity of facts in RQM provides a straightforward "solution" to the seemingly paradoxical situation in Wigner's friend scenario: The state that the friend assigns to the spin is a state relative to himself as friend, whereas the state that Wigner assigns to the combined system of friend and spin is a state relative to himself as Wigner. By construction of the theory, these two descriptions do not have to match, because both are correct assignments of states relative to their respective system.
If the physical variable of the spin system S that is measured is denoted by z, where z takes the possible outcome values 0 or 1, the above Wigner's friend situation is modelled in the RQM context as follows: the friend F models the situation as the before-after transition
$$\alpha\,|0\rangle + \beta\,|1\rangle \;\longrightarrow\; |1\rangle$$
of the state of S relative to him (here it was assumed that F received the outcome z = 1 in his measurement of S).
In RQM language, the fact z = 1 for the spin of S actualized itself relative to F during the interaction of the two systems.
A different way to model the same situation is again an outside (Wigner's) perspective. From that viewpoint, a measurement by one system (F) of another (S) results in a correlation of the two systems. The state displaying such a correlation is equally valid for modelling the measurement process. However, the system with respect to which this correlated state is valid changes. Assuming that Wigner (W) has the information that the physical variable z of S is being measured by F, but not knowing what F received as the result, W must model the situation as
$$|\text{init}\rangle_F \otimes \big(\alpha\,|0\rangle + \beta\,|1\rangle\big)_S \;\longrightarrow\; \alpha\,|F_0\rangle \otimes |0\rangle + \beta\,|F_1\rangle \otimes |1\rangle,$$
where $|\text{init}\rangle_F$ is considered the state of F before the measurement, and $|F_1\rangle$ and $|F_0\rangle$ are the states corresponding to F's state when he has measured 1 or 0, respectively. This model depicts the situation as relative to W, so the assigned states are relative states with respect to the Wigner system. In contrast, there is no value of the z outcome that actualizes with respect to W, as he is not involved in the measurement.
In this sense, two accounts of the same situation (the process of the measurement of the physical variable z on the system S by F) are accepted within RQM to exist side by side. Only once a reference system has been decided upon can a statement about the "correct" account of the situation be made.
QBism and Bayesian interpretations
In the interpretation known as QBism, advocated by N. David Mermin among others, the Wigner's-friend situation does not lead to a paradox, because there is never a uniquely correct wavefunction for any system. Instead, a wavefunction is a statement of personalist Bayesian probabilities, and moreover, the probabilities that wavefunctions encode are probabilities for experiences that are also personal to the agent who experiences them. Jaynes expresses this as follows: “There is a paradox only if we suppose that a density matrix (i.e. a probability distribution) is something ‘physically real’ and ‘absolute’. But now the dilemma disappears when we recognize the ‘relativity principle’ for probabilities. A density matrix (or, in classical physics, a probability distribution over coordinates and momenta) represents, not a physical situation, but only a certain state of knowledge about a range of possible physical situations”. And as von Baeyer puts it, “Wavefunctions are not tethered to electrons and carried along like haloes hovering over the heads of saints—they are assigned by an agent and depend on the total information available to the agent.” Consequently, there is nothing wrong in principle with Wigner and his friend assigning different wavefunctions to the same system. A similar position is taken by Brukner, who uses an elaboration of the Wigner's-friend scenario to argue for it.
De Broglie–Bohm theory
The De Broglie-Bohm theory, also known as Bohmian mechanics or pilot wave theory, postulates, in addition to the wave function, an actual configuration of particles that exists even when unobserved. This particle configuration evolves in time according to a deterministic law, with the wave function guiding the motion of the particles. The particle configuration determines the actual measurement outcome —e.g., whether Schrödinger's cat is dead or alive or whether Wigner's friend has measured 0 or 1— even if the wave function is a superposition. Indeed, according to the De Broglie-Bohm theory, the wave function never collapses on the fundamental level. There is, however, a concept of effective collapse, based on the fact that, in many situations, "empty branches" of the wave function, which do not guide the actual particle configuration, can be ignored for all practical purposes.
The De Broglie-Bohm theory does not assign any special status to conscious observers. In the Wigner's-friend situation, the first measurement would lead to an effective collapse. But even if Wigner describes the state of his friend as a superposition, there is no contradiction with this friend having observed a definite measurement outcome as described by the particle configuration. Thus, according to the De Broglie-Bohm theory, there is no paradox because the wave function alone is not a complete description of the physical state.
An extension of the Wigner's friend experiment
In 2016, Frauchiger and Renner used an elaboration of the Wigner's-friend scenario to argue that quantum theory cannot be used to model physical systems that are themselves agents who use quantum theory. They provide an information-theoretic analysis of two specifically connected pairs of "Wigner's friend" experiments, where the human observers are modelled within quantum theory. By then letting the four different agents reason about each other's measurement results (using the laws of quantum mechanics), contradictory statements are derived.
The resulting theorem highlights an incompatibility of a number of assumptions that are usually taken for granted when modelling measurements in quantum mechanics.
In the title of their published version of September 2018, the authors' interpretation of their result is apparent: Quantum theory as given by the textbook and used in the numerous laboratory experiments to date "cannot consistently describe the use of itself" in any given (hypothetical) scenario. The implications of the result are currently subject to many debates among physicists of both theoretical and experimental quantum mechanics. In particular, the various proponents of the different interpretations of quantum mechanics have challenged the validity of the Frauchiger–Renner argument.
The experiment was designed using a combination of arguments by Wigner (Wigner's friend), Deutsch and Hardy (see Hardy's paradox). The setup involves a number of macroscopic agents (observers) performing predefined quantum measurements in a given time order. Those agents are assumed to all be aware of the whole experiment and to be able to use quantum theory to make statements about other people's measurement results. The design of the thought experiment is such that the different agents' observations, along with their logical conclusions drawn from a quantum-theoretical analysis, yield inconsistent statements.
The scenario corresponds roughly to two parallel pairs of "Wigners" and friends: one Wigner paired with one friend, and a second Wigner paired with a second friend. The friends each measure a specific spin system, and each Wigner measures "his" friend's laboratory (which includes the friend). The individual agents make logical conclusions that are based on their measurement result, aiming at predictions about other agents' measurements within the protocol. Frauchiger and Renner argue that an inconsistency occurs if three assumptions are taken to be simultaneously valid. Roughly speaking, those assumptions are
(Q): Quantum theory is correct.
(C): Agents' predictions are information-theoretically consistent.
(S): A measurement yields only one single outcome.
More precisely, assumption (Q) involves the probability predictions within quantum theory given by the Born rule. This means that an agent is allowed to trust this rule being correct in assigning probabilities to other outcomes conditioned on his own measurement result. It is, however, sufficient for the extended Wigner's friend experiment to assume the validity of the Born rule for probability-1 cases, i.e., if the prediction can be made with certainty.
Assumption (C) invokes a consistency among different agents' statements in the following manner: The statement "I know (by the theory) that they know (by the same theory) that x" is equivalent to "I know that x".
Assumption (S) specifies that once an agent has arrived at a probability-1 assignment of a certain outcome for a given measurement, they could never agree to a different outcome for the same measurement.
Assumptions (Q) and (S) are used by the agents when reasoning about measurement outcomes of other agents, and assumption (C) comes in when an agent combines other agents' statements with their own. The result is contradictory, and therefore, assumptions (Q), (C) and (S) cannot all be valid, hence the no-go theorem.
Reflection
The meaning and implications of the Frauchiger–Renner thought experiment are highly debated. A number of assumptions taken in the argument are very foundational in content and therefore cannot be given up easily. However, the question remains whether there are "hidden" assumptions that do not explicitly appear in the argument. The authors themselves conclude that "quantum theory cannot be extrapolated to complex systems, at least not in a straightforward manner". On the other hand, one presentation of the experiment as a quantum circuit models the agents as single qubits and their reasoning as simple conditional operations.
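As a rough illustration of the quantum-circuit viewpoint mentioned above, the following sketch (an assumption of this edit, not the circuit used by Frauchiger and Renner) models a friend's observation as a CNOT-like conditional operation in plain NumPy: from the outside, Wigner's description of friend-plus-system is an entangled superposition, while the friend's own record has definite Born-rule probabilities.

```python
import numpy as np

# Basis ordering: |system, friend>, i.e. index = 2*s + f
ket = lambda s, f: np.eye(4)[2 * s + f]

# System starts in (|0> + |1>)/sqrt(2); the friend's memory starts in |0>.
psi = (ket(0, 0) + ket(1, 0)) / np.sqrt(2)

# The friend's measurement is modelled as a CNOT: the friend's memory qubit
# is flipped if and only if the system qubit is 1.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

psi_after = CNOT @ psi   # Wigner's description: (|00> + |11>)/sqrt(2)
print("Wigner's state vector:", np.round(psi_after, 3))

# The friend's record, by contrast, is definite in each run, with Born-rule weights:
p_friend_0 = abs(psi_after[0])**2 + abs(psi_after[2])**2
p_friend_1 = abs(psi_after[1])**2 + abs(psi_after[3])**2
print("Friend records 0 with p =", p_friend_0, "and 1 with p =", p_friend_1)
```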
QBism, relational quantum mechanics and the De Broglie–Bohm theory have been argued to avoid the contradiction suggested by the extended Wigner's-friend scenario of Frauchiger and Renner.
In fiction
Stephen Baxter's novel Timelike Infinity (1992) discusses a variation of Wigner's friend thought experiment through a refugee group of humans self-named "The Friends of Wigner". They believe that an ultimate observer at the end of time may collapse all possible entangled wave-functions generated since the beginning of the universe, hence choosing a reality without oppression.
See also
Von Neumann–Wigner interpretation
Quantum suicide and immortality
References
Quantum measurement
Thought experiments in quantum mechanics
Physical paradoxes | Wigner's friend | [
"Physics"
] | 4,320 | [
"Quantum measurement",
"Quantum mechanics",
"Thought experiments in quantum mechanics"
] |
71,986 | https://en.wikipedia.org/wiki/Stark%20spectroscopy | Stark spectroscopy (sometimes known as electroabsorption/emission spectroscopy) is a form of spectroscopy based on the Stark effect. In brief, this technique makes use of the Stark effect (or electrochromism) either to reveal information about the physicochemical or physical properties of a sample using a well-characterized electric field or to reveal information about an electric field using a reference sample with a well-characterized Stark effect.
The use of the term "Stark effect" differs between the disciplines of chemistry and physics. Physicists tend to use the more classical definition of the term (see Stark effect), while chemists usually use the term to refer to what is technically electrochromism. In the former case, the applied electric field splits the atomic energy levels and is the electric field analog of the Zeeman effect. However, in the latter case, the applied electric field changes the molar absorption coefficient of the sample, which can be measured using traditional absorption or emission spectroscopic methods. This effect is known as electrochromism.
See also
Stark effect
Plasma diagnostics
References
Spectroscopy | Stark spectroscopy | [
"Physics",
"Chemistry"
] | 222 | [
"Instrumental analysis",
"Molecular physics",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
72,536 | https://en.wikipedia.org/wiki/Thermal%20conduction | Thermal conduction is the diffusion of thermal energy (heat) within one material or between materials in contact. The higher temperature object has molecules with more kinetic energy; collisions between molecules distribute this kinetic energy until an object has the same kinetic energy throughout. Thermal conductivity, frequently represented by k, is a property that relates the rate of heat loss per unit area of a material to its rate of change of temperature. Essentially, it is a value that accounts for any property of the material that could change the way it conducts heat. Heat spontaneously flows along a temperature gradient (i.e. from a hotter body to a colder body). For example, heat is conducted from the hotplate of an electric stove to the bottom of a saucepan in contact with it. In the absence of an opposing external driving energy source, within a body or between bodies, temperature differences decay over time, and thermal equilibrium is approached, temperature becoming more uniform.
Every process involving heat transfer takes place by only three methods:
Conduction is heat transfer through stationary matter by physical contact. (The matter is stationary on a macroscopic scale—we know there is thermal motion of the atoms and molecules at any temperature above absolute zero.) Heat transferred between the electric burner of a stove and the bottom of a pan is transferred by conduction.
Convection is the heat transfer by the macroscopic movement of a fluid. This type of transfer takes place in a forced-air furnace and in weather systems, for example.
Heat transfer by radiation occurs when microwaves, infrared radiation, visible light, or another form of electromagnetic radiation is emitted or absorbed. An obvious example is the warming of the Earth by the Sun. A less obvious example is thermal radiation from the human body.
Overview
A region with greater thermal energy (heat) corresponds with greater molecular agitation. Thus when a hot object touches a cooler surface, the highly agitated molecules from the hot object bump the calm molecules of the cooler surface, transferring the microscopic kinetic energy and causing the colder part or object to heat up. Mathematically, thermal conduction works just like diffusion: the rate of heat transfer grows with the temperature difference and with the cross-sectional area, and falls as the distance the heat must travel increases. In terms of the quantities defined below, the conduction rate is Q/t = k A ΔT / Δx (a numerical sketch follows the list of symbols).
Where:
Q/t is the thermal conduction or power (the heat transferred per unit time over some distance between the two temperatures),
k is the thermal conductivity of the material,
A is the cross-sectional area of the object,
ΔT is the difference in temperature from one side to the other,
Δx is the distance over which the heat is transferred.
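A minimal numerical sketch of this relation follows; the material and geometry values (a single glass pane) are assumed for illustration and are not taken from the article.

```python
# Heat conducted through a flat slab: Q/t = k * A * dT / dx
k = 0.8       # thermal conductivity of window glass, W/(m*K) (assumed value)
A = 1.5       # cross-sectional area, m^2 (assumed)
dT = 20.0     # temperature difference across the slab, K (assumed)
dx = 0.004    # slab thickness, m (assumed)

P = k * A * dT / dx
print(f"Heat flow rate: {P:.0f} W")   # -> 6000 W for these values
```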
Conduction is the main mode of heat transfer for solid materials because the strong inter-molecular forces allow the vibrations of particles to be easily transmitted, in comparison to liquids and gases. Liquids have weaker inter-molecular forces and more space between the particles, which makes the vibrations of particles harder to transmit. Gases have even more space, and therefore infrequent particle collisions. This makes liquids and gases poor conductors of heat.
Thermal contact conductance is the study of heat conduction between solid bodies in contact. A temperature drop is often observed at the interface between the two surfaces in contact. This phenomenon is said to be a result of a thermal contact resistance existing between the contacting surfaces. Interfacial thermal resistance is a measure of an interface's resistance to thermal flow. This thermal resistance differs from contact resistance, as it exists even at atomically perfect interfaces. Understanding the thermal resistance at the interface between two materials is of primary significance in the study of its thermal properties. Interfaces often contribute significantly to the observed properties of the materials.
The inter-molecular transfer of energy could be primarily by elastic impact, as in fluids, or by free-electron diffusion, as in metals, or phonon vibration, as in insulators. In insulators, the heat flux is carried almost entirely by phonon vibrations.
Metals (e.g., copper, platinum, gold, etc.) are usually good conductors of thermal energy. This is due to the way that metals bond chemically: metallic bonds (as opposed to covalent or ionic bonds) have free-moving electrons that transfer thermal energy rapidly through the metal. The electron fluid of a conductive metallic solid conducts most of the heat flux through the solid. Phonon flux is still present but carries less of the energy. Electrons also conduct electric current through conductive solids, and the thermal and electrical conductivities of most metals have about the same ratio. A good electrical conductor, such as copper, also conducts heat well. Thermoelectricity is caused by the interaction of heat flux and electric current. Heat conduction within a solid is directly analogous to diffusion of particles within a fluid, in the situation where there are no fluid currents.
In gases, heat transfer occurs through collisions of gas molecules with one another. In the absence of convection, which relates to a moving fluid or gas phase, thermal conduction through a gas phase is highly dependent on the composition and pressure of this phase, and in particular, the mean free path of gas molecules relative to the size of the gas gap, as given by the Knudsen number .
To quantify the ease with which a particular medium conducts, engineers employ the thermal conductivity, also known as the conductivity constant or conduction coefficient, k. In thermal conductivity, k is defined as "the quantity of heat, Q, transmitted in time (t) through a thickness (L), in a direction normal to a surface of area (A), due to a temperature difference (ΔT) [...]". Thermal conductivity is a material property that is primarily dependent on the medium's phase, temperature, density, and molecular bonding. Thermal effusivity is a quantity derived from conductivity, which is a measure of its ability to exchange thermal energy with its surroundings.
Steady-state conduction
Steady-state conduction is the form of conduction that happens when the temperature difference(s) driving the conduction are constant, so that (after an equilibration time), the spatial distribution of temperatures (temperature field) in the conducting object does not change any further. Thus, all partial derivatives of temperature with respect to space may either be zero or have nonzero values, but all derivatives of temperature at any point with respect to time are uniformly zero. In steady-state conduction, the amount of heat entering any region of an object is equal to the amount of heat coming out (if this were not so, the temperature would be rising or falling, as thermal energy was tapped or trapped in a region).
For example, a bar may be cold at one end and hot at the other, but after a state of steady-state conduction is reached, the spatial gradient of temperatures along the bar does not change any further, as time proceeds. Instead, the temperature remains constant at any given cross-section of the rod normal to the direction of heat transfer, and this temperature varies linearly in space in the case where there is no heat generation in the rod.
In steady-state conduction, all the laws of direct current electrical conduction can be applied to "heat currents". In such cases, it is possible to take "thermal resistances" as the analog to electrical resistances. In such cases, temperature plays the role of voltage, and heat transferred per unit time (heat power) is the analog of electric current. Steady-state systems can be modeled by networks of such thermal resistances in series and parallel, in exact analogy to electrical networks of resistors. See purely resistive thermal circuits for an example of such a network.
Transient conduction
During any period in which the temperature changes in time at any place within an object, the mode of thermal energy flow is termed transient conduction. Another term is "non-steady-state" conduction, referring to the time-dependence of temperature fields in an object. Non-steady-state situations appear after an imposed change in temperature at a boundary of an object. They may also occur with temperature changes inside an object, as a result of a new source or sink of heat suddenly introduced within an object, causing temperatures near the source or sink to change in time.
When a new perturbation of temperature of this type happens, temperatures within the system change in time toward a new equilibrium with the new conditions, provided that these do not change. After equilibrium, heat flow into the system once again equals the heat flow out, and temperatures at each point inside the system no longer change. Once this happens, transient conduction is ended, although steady-state conduction may continue if heat flow continues.
If changes in external temperatures or internal heat generation changes are too rapid for the equilibrium of temperatures in space to take place, then the system never reaches a state of unchanging temperature distribution in time, and the system remains in a transient state.
An example of a new source of heat "turning on" within an object, causing transient conduction, is an engine starting in an automobile. In this case, the transient thermal conduction phase for the entire machine is over, and the steady-state phase appears, as soon as the engine reaches steady-state operating temperature. In this state of steady-state equilibrium, temperatures vary greatly from the engine cylinders to other parts of the automobile, but at no point in space within the automobile does temperature increase or decrease. After establishing this state, the transient conduction phase of heat transfer is over.
New external conditions also cause this process: for example, the copper bar in the example steady-state conduction experiences transient conduction as soon as one end is subjected to a different temperature from the other. Over time, the field of temperatures inside the bar reaches a new steady-state, in which a constant temperature gradient along the bar is finally set up, and this gradient then stays constant in time. Typically, such a new steady-state gradient is approached exponentially with time after a new temperature-or-heat source or sink, has been introduced. When a "transient conduction" phase is over, heat flow may continue at high power, so long as temperatures do not change.
An example of transient conduction that does not end with steady-state conduction, but rather no conduction, occurs when a hot copper ball is dropped into oil at a low temperature. Here, the temperature field within the object begins to change as a function of time, as the heat is removed from the metal, and the interest lies in analyzing this spatial change of temperature within the object over time until all gradients disappear entirely (the ball has reached the same temperature as the oil). Mathematically, this condition is also approached exponentially; in theory, it takes infinite time, but in practice, it is over, for all intents and purposes, in a much shorter period. At the end of this process with no heat sink but the internal parts of the ball (which are finite), there is no steady-state heat conduction to reach. Such a state never occurs in this situation, but rather the end of the process is when there is no heat conduction at all.
The analysis of non-steady-state conduction systems is more complex than that of steady-state systems. If the conducting body has a simple shape, then exact analytical mathematical expressions and solutions may be possible (see heat equation for the analytical approach). However, most often, because of complicated shapes with varying thermal conductivities within the shape (i.e., most complex objects, mechanisms or machines in engineering), the application of approximate theories and/or numerical analysis by computer is required. One popular graphical method involves the use of Heisler Charts.
Occasionally, transient conduction problems may be considerably simplified if regions of the object being heated or cooled can be identified, for which thermal conductivity is very much greater than that for heat paths leading into the region. In this case, the region with high conductivity can often be treated in the lumped capacitance model, as a "lump" of material with a simple thermal capacitance consisting of its aggregate heat capacity. Such regions warm or cool, but show no significant temperature variation across their extent, during the process (as compared to the rest of the system). This is due to their far higher conductance. During transient conduction, therefore, the temperature across their conductive regions changes uniformly in space, and as a simple exponential in time. An example of such systems is those that follow Newton's law of cooling during transient cooling (or the reverse during heating). The equivalent thermal circuit consists of a simple capacitor in series with a resistor. In such cases, the remainder of the system with a high thermal resistance (comparatively low conductivity) plays the role of the resistor in the circuit.
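A short sketch of the lumped-capacitance behaviour described above, using assumed values for a small copper sphere cooling in air; the numbers are illustrative and not taken from the article.

```python
import math

# Lumped-capacitance cooling: T(t) = T_inf + (T0 - T_inf) * exp(-t / tau),
# with time constant tau = (rho * V * c) / (h * A), valid when Bi << 0.1.
rho, c, k = 8960.0, 385.0, 400.0   # copper density kg/m^3, heat capacity J/(kg*K), conductivity W/(m*K) (assumed)
r = 0.01                           # sphere radius, m (assumed)
h = 50.0                           # convective heat transfer coefficient, W/(m^2*K) (assumed)
T0, T_inf = 400.0, 300.0           # initial and ambient temperatures, K (assumed)

V = 4 / 3 * math.pi * r**3
A = 4 * math.pi * r**2
Lc = V / A                         # characteristic length, r/3 for a sphere
Bi = h * Lc / k                    # Biot number; far below 0.1 here, so the lumped model applies
tau = rho * V * c / (h * A)        # thermal "RC" time constant, s

for t in (0, 60, 300, 600):
    T = T_inf + (T0 - T_inf) * math.exp(-t / tau)
    print(f"t = {t:4d} s  Bi = {Bi:.4f}  T = {T:.1f} K")
```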
Relativistic conduction
The theory of relativistic heat conduction is a model that is compatible with the theory of special relativity. For most of the last century, it was recognized that the Fourier equation is in contradiction with the theory of relativity because it admits an infinite speed of propagation of heat signals. For example, according to the Fourier equation, a pulse of heat at the origin would be felt at infinity instantaneously. The speed of information propagation is faster than the speed of light in vacuum, which is physically inadmissible within the framework of relativity.
Quantum conduction
Second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Heat takes the place of pressure in normal sound waves. This leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of heat is similar to the propagation of sound in air.
Fourier's law
The law of heat conduction, also known as Fourier's law (compare Fourier's heat equation), states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and to the area, at right angles to that gradient, through which the heat flows. We can state this law in two equivalent forms: the integral form, in which we look at the amount of energy flowing into or out of a body as a whole, and the differential form, in which we look at the flow rates or fluxes of energy locally.
Newton's law of cooling is a discrete analogue of Fourier's law, while Ohm's law is the electrical analogue of Fourier's law and Fick's laws of diffusion is its chemical analogue.
Differential form
The differential form of Fourier's law of thermal conduction shows that the local heat flux density q is equal to the product of the thermal conductivity k and the negative local temperature gradient −∇T, i.e. q = −k∇T. The heat flux density is the amount of energy that flows through a unit area per unit time.
where (including the SI units)
q is the local heat flux density, W/m2,
k is the material's conductivity, W/(m·K),
∇T is the temperature gradient, K/m.
The thermal conductivity is often treated as a constant, though this is not always true. While the thermal conductivity of a material generally varies with temperature, the variation can be small over a significant range of temperatures for some common materials. In anisotropic materials, the thermal conductivity typically varies with orientation; in this case k is represented by a second-order tensor. In non-uniform materials, k varies with spatial location.
For many simple applications, Fourier's law is used in its one-dimensional form, for example, in the x direction: q_x = −k dT/dx.
In an isotropic medium, Fourier's law leads to the heat equation ∂T/∂t = α∇²T (with α = k/(ρc) the thermal diffusivity), with a fundamental solution famously known as the heat kernel.
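A minimal sketch of the heat equation ∂T/∂t = α ∂²T/∂x² solved by an explicit finite-difference scheme; the diffusivity, rod length, and boundary temperatures are assumed for illustration and are not from the article.

```python
import numpy as np

# Explicit finite-difference solution of dT/dt = alpha * d2T/dx2 on a rod
# with fixed-temperature ends (a simple, conditionally stable scheme).
alpha = 1e-4                       # thermal diffusivity, m^2/s (assumed)
L, n = 1.0, 51                     # rod length (m) and number of grid points (assumed)
dx = L / (n - 1)
dt = 0.4 * dx**2 / alpha           # kept below the stability limit 0.5*dx^2/alpha

T = np.full(n, 300.0)              # initial temperature, K
T[0], T[-1] = 400.0, 300.0         # boundary conditions: hot left end, cold right end

for _ in range(2000):
    # update interior points; boundaries stay fixed
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(T[::10])   # coarse view of the temperature profile along the rod
```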
Integral form
By integrating the differential form over the material's total surface S, we arrive at the integral form of Fourier's law, which equates the thermal power crossing the surface to the surface integral of −k∇T · dS,
where (including the SI units):
P = ∂Q/∂t is the thermal power transferred by conduction (in W), the time derivative of the transferred heat Q (in J),
dS is an oriented surface area element (in m2).
The above differential equation, when integrated for a homogeneous material of 1-D geometry between two endpoints at constant temperature, gives the heat flow rate as Q/Δt = k A ΔT / Δx,
where
Δt is the time interval during which the amount of heat Q flows through a cross-section of the material,
A is the cross-sectional surface area,
ΔT is the temperature difference between the ends,
Δx is the distance between the ends.
One can define the (macroscopic) thermal resistance of the 1-D homogeneous material as R = Δx / (k A).
This gives a simple 1-D steady heat conduction equation, Q/Δt = ΔT / R, which is analogous to Ohm's law for a simple electric resistance.
This law forms the basis for the derivation of the heat equation.
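A brief sketch of the thermal-resistance form, with assumed values for a brick wall (illustrative only, not from the article):

```python
# 1-D steady conduction written like Ohm's law: Q/t = dT / R, with R = dx / (k * A)
k = 0.72      # thermal conductivity of brick, W/(m*K) (assumed)
A = 10.0      # wall area, m^2 (assumed)
dx = 0.20     # wall thickness, m (assumed)
dT = 15.0     # temperature difference across the wall, K (assumed)

R = dx / (k * A)       # thermal resistance, K/W
Q_per_t = dT / R       # heat flow rate, W
print(f"R = {R:.4f} K/W, Q/t = {Q_per_t:.0f} W")
```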
Conductance
Writing U = k / Δx,
where U is the conductance, in W/(m2 K),
Fourier's law can also be stated as Q/Δt = U A ΔT.
The reciprocal of conductance is resistance, given by R = 1/U = Δx / k.
Resistance is additive when several conducting layers lie between the hot and cool regions, because A and Q are the same for all layers. In a multilayer partition, the total conductance is related to the conductance of its layers by 1/U = 1/U1 + 1/U2 + ... + 1/Un, i.e. the layer resistances 1/Ui simply add. So, when dealing with a multilayer partition, the heat flow is usually computed from this combined conductance, Q/Δt = U A ΔT.
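A small sketch of combining layer resistances in series for a multilayer wall; the material thicknesses and conductivities are assumed for illustration.

```python
# Series combination of layer resistances: R_total = sum(dx_i / (k_i * A)),
# equivalently 1/U_total = sum(1/U_i) for the per-layer conductances.
A = 10.0                        # wall area, m^2 (assumed)
layers = [                      # (thickness m, conductivity W/(m*K)) - assumed values
    (0.013, 0.17),              # plasterboard
    (0.10,  0.04),              # insulation
    (0.10,  0.77),              # brick
]

R_total = sum(dx / (k * A) for dx, k in layers)
dT = 20.0                       # indoor/outdoor temperature difference, K (assumed)
print(f"Total resistance: {R_total:.3f} K/W, heat flow: {dT / R_total:.1f} W")
```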
For heat conduction from one fluid to another through a barrier, it is sometimes important to consider the conductance of the thin film of fluid that remains stationary next to the barrier. This thin film of fluid is difficult to quantify because its characteristics depend upon complex conditions of turbulence and viscosity—but when dealing with thin high-conductance barriers it can sometimes be quite significant.
Intensive-property representation
The previous conductance equations, written in terms of extensive properties, can be reformulated in terms of intensive properties. Ideally, the formulae for conductance should produce a quantity with dimensions independent of distance, like Ohm's law for electrical resistance, R = V/I, and conductance, G = I/V.
From the electrical formula R = ρx/A, where ρ is resistivity, x is length, and A is cross-sectional area, we have G = kA/x, where G is conductance, k is conductivity, x is length, and A is cross-sectional area.
For heat, U = kA/Δx,
where U is the conductance.
Fourier's law can also be stated as Q/Δt = U ΔT,
analogous to Ohm's law I = V/R, or I = V G.
The reciprocal of conductance is resistance, R, given by R = ΔT/(Q/Δt),
analogous to Ohm's law R = V/I.
The rules for combining resistances and conductances (in series and parallel) are the same for both heat flow and electric current.
Cylindrical shells
Conduction through cylindrical shells (e.g. pipes) can be calculated from the internal radius, r1, the external radius, r2, the length, ℓ, and the temperature difference between the inner and outer wall, ΔT = T2 − T1.
The surface area of the cylinder at radius r is A = 2πrℓ.
When Fourier's equation is applied and integrated across the wall from r1 to r2, the rate of heat transfer is
Q/t = 2πkℓ ΔT / ln(r2/r1),
and the thermal resistance is
R = ln(r2/r1) / (2πkℓ).
The rate can equivalently be written in terms of r_m = (r2 − r1)/ln(r2/r1); it is important to note that this is the log-mean radius.
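A short numerical sketch for a pipe wall, with assumed dimensions and material (illustrative only):

```python
import math

# Radial conduction through a cylindrical shell:
# Q/t = 2*pi*k*length*dT / ln(r2/r1), thermal resistance R = ln(r2/r1) / (2*pi*k*length)
k = 45.0                  # steel pipe conductivity, W/(m*K) (assumed)
r1, r2 = 0.025, 0.030     # inner and outer radii, m (assumed)
length = 2.0              # pipe length, m (assumed)
dT = 60.0                 # temperature difference across the wall, K (assumed)

R = math.log(r2 / r1) / (2 * math.pi * k * length)
Q_per_t = dT / R
r_m = (r2 - r1) / math.log(r2 / r1)   # log-mean radius
print(f"R = {R:.5f} K/W, Q/t = {Q_per_t:.0f} W, log-mean radius = {r_m * 1000:.1f} mm")
```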
Spherical
The conduction through a spherical shell with internal radius, r1, and external radius, r2, can be calculated in a similar manner as for a cylindrical shell.
The surface area of the sphere at radius r is A = 4πr².
Solving in a similar manner as for a cylindrical shell (see above) produces the rate of heat transfer Q/t = 4πk r1 r2 ΔT / (r2 − r1).
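And a matching sketch for a spherical shell, again with assumed values:

```python
import math

# Conduction through a spherical shell: Q/t = 4*pi*k*r1*r2*dT / (r2 - r1)
k = 0.04            # insulation conductivity, W/(m*K) (assumed)
r1, r2 = 0.5, 0.6   # inner and outer radii, m (assumed)
dT = 40.0           # temperature difference, K (assumed)

Q_per_t = 4 * math.pi * k * r1 * r2 * dT / (r2 - r1)
print(f"Q/t = {Q_per_t:.1f} W")
```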
Transient thermal conduction
Interface heat transfer
The heat transfer at an interface is considered a transient heat flow. To analyze this problem, the Biot number is important to understand how the system behaves. The Biot number is determined by Bi = hL/k, where L is a characteristic length of the body.
The heat transfer coefficient h is introduced in this formula and is measured in W/(m2·K). If the system has a Biot number of less than 0.1, the material behaves according to Newtonian cooling, i.e. with negligible temperature gradient within the body. If the Biot number is greater than 0.1, there is a noticeable temperature gradient within the material, and a series solution is required to describe the temperature profile. For the Newtonian case, the cooling equation is ρVc dT/dt = −hA(T − T∞), where ρ is the density, c the specific heat, V the volume, A the surface area, and T∞ the surrounding temperature.
This leads to the dimensionless form of the temperature profile as a function of time, (T − T∞)/(T0 − T∞) = exp(−hAt/(ρVc)), where T0 is the initial temperature.
This equation shows that the temperature decreases exponentially over time, with the rate governed by the properties of the material and the heat transfer coefficient.
The heat transfer coefficient, h, is measured in W/(m2·K), and represents the transfer of heat at an interface between two materials. This value is different at every interface and is an important concept in understanding heat flow at an interface.
The series solution can be analyzed with a nomogram. A nomogram has a relative temperature as the coordinate and the Fourier number, which is calculated by Fo = αt/L², where α is the thermal diffusivity and L the characteristic length.
The Biot number increases as the Fourier number decreases. There are five steps to determine a temperature profile in terms of time (a numerical sketch of the first steps follows the list):
Calculate the Biot number
Determine which relative depth matters, either x or L.
Convert time to the Fourier number.
Convert to relative temperature with the boundary conditions.
Read off the relative temperature by tracing the curve for the specified Biot number on the nomogram to the point given by the computed Fourier number.
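The first steps above can be sketched numerically as follows; the material properties, geometry, and heat transfer coefficient are assumed purely for illustration.

```python
import math

# Biot and Fourier numbers for a slab suddenly exposed to a fluid.
k, rho, c = 0.6, 1200.0, 1500.0   # conductivity W/(m*K), density kg/m^3, heat capacity J/(kg*K) (assumed)
h = 25.0                          # heat transfer coefficient at the interface, W/(m^2*K) (assumed)
L = 0.01                          # characteristic length (half-thickness of the slab), m (assumed)
t = 120.0                         # elapsed time, s (assumed)

alpha = k / (rho * c)             # thermal diffusivity, m^2/s
Bi = h * L / k                    # Biot number
Fo = alpha * t / L**2             # Fourier number (dimensionless time)

print(f"Bi = {Bi:.3f}, Fo = {Fo:.2f}")
if Bi < 0.1:
    # Newtonian (lumped) cooling: dimensionless temperature decays as exp(-Bi*Fo)
    theta = math.exp(-Bi * Fo)
    print(f"Lumped model applies; (T - T_inf)/(T0 - T_inf) = {theta:.3f}")
else:
    print("Bi > 0.1: a series solution (or nomogram) is needed for the profile")
```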
Applications
Splat cooling
Splat cooling is a method for quenching small droplets of molten materials by rapid contact with a cold surface. The particles undergo a characteristic cooling process in which the initial heat is concentrated at the contact point and spreads outward, with the far field held at the ambient temperature as the boundary condition. Splat cooling rapidly ends in a steady state temperature, and the temperature profile, with respect to the position and time of this type of cooling, is similar in form to the Gaussian diffusion equation.
Splat cooling is a fundamental concept that has been adapted for practical use in the form of thermal spraying. The thermal diffusivity coefficient, represented as α, can be written as α = k/(ρc_p), where ρ is the density and c_p the specific heat capacity. This varies according to the material.
Metal quenching
Metal quenching is a transient heat transfer process in terms of the time temperature transformation (TTT). It is possible to manipulate the cooling process to adjust the phase of a suitable material. For example, appropriate quenching of steel can convert a desirable proportion of its content of austenite to martensite, creating a very hard and strong product. To achieve this, it is necessary to quench at the "nose" (or eutectic) of the TTT diagram. Since materials differ in their Biot numbers, the time it takes for the material to quench, or the Fourier number, varies in practice. In steel, the quenching temperature range is generally from 600 °C to 200 °C. To control the quenching time and to select suitable quenching media, it is necessary to determine the Fourier number from the desired quenching time, the relative temperature drop, and the relevant Biot number. Usually, the correct figures are read from a standard nomogram. By calculating the heat transfer coefficient from this Biot number, one can find a liquid medium suitable for the application.
Zeroth law of thermodynamics
One statement of the so-called zeroth law of thermodynamics is directly focused on the idea of conduction of heat. Bailyn (1994) writes that "the zeroth law may be stated: All diathermal walls are equivalent".
A diathermal wall is a physical connection between two bodies that allows the passage of heat between them. Bailyn is referring to diathermal walls that exclusively connect two bodies, especially conductive walls.
This statement of the "zeroth law" belongs to an idealized theoretical discourse, and actual physical walls may have peculiarities that do not conform to its generality.
For example, the material of the wall must not undergo a phase transition, such as evaporation or fusion, at the temperature at which it must conduct heat. But when only thermal equilibrium is considered and time is not urgent, so that the conductivity of the material does not matter too much, one suitable heat conductor is as good as another. Conversely, another aspect of the zeroth law is that, subject again to suitable restrictions, a given diathermal wall is indifferent to the nature of the heat bath to which it is connected. For example, the glass bulb of a thermometer acts as a diathermal wall whether exposed to a gas or a liquid, provided that they do not corrode or melt it.
These differences are among the defining characteristics of heat transfer. In a sense, they are symmetries of heat transfer.
Instruments
Thermal conductivity analyzer
The thermal conduction property of any gas under standard conditions of pressure and temperature is a fixed quantity. This property of a known reference gas or known reference gas mixtures can, therefore, be used for certain sensory applications, such as the thermal conductivity analyzer.
The working of this instrument is in principle based on a Wheatstone bridge containing four filaments whose resistances are matched. Whenever a certain gas is passed over such a network of filaments, their resistance changes due to the altered thermal conductivity of the filaments, thereby changing the net voltage output from the Wheatstone bridge. This voltage output will be correlated with the database to identify the gas sample.
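A rough sketch of the bridge imbalance such an instrument reads out; the excitation voltage, resistor values, and the resistance shift are all assumed, purely for illustration.

```python
# Wheatstone bridge output voltage for a resistance shift in one filament.
# Convention used here: V_out = V_in * (R4/(R3 + R4) - R2/(R1 + R2)).
V_in = 5.0                       # bridge excitation voltage, V (assumed)
R1 = R2 = R3 = 100.0             # matched filament resistances, ohm (assumed)
for dR in (0.0, 0.5, 1.0, 2.0):  # assumed shift of the sensing filament's resistance, ohm
    R4 = 100.0 + dR
    V_out = V_in * (R4 / (R3 + R4) - R2 / (R1 + R2))
    print(f"dR = {dR:.1f} ohm -> V_out = {V_out * 1000:.1f} mV")
```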
Gas sensor
The principle of thermal conductivity of gases can also be used to measure the concentration of a gas in a binary mixture of gases.
Working: if the same gas is present around all the Wheatstone bridge filaments, then the same temperature is maintained in all the filaments and hence the same resistances are also maintained, resulting in a balanced Wheatstone bridge. However, if a dissimilar gas sample (or gas mixture) is passed over one set of two filaments and the reference gas over the other set of two filaments, then the Wheatstone bridge becomes unbalanced. The resulting net voltage output of the circuit will be correlated with the database to identify the constituents of the sample gas.
Using this technique, many unknown gas samples can be identified by comparing their thermal conductivity with that of a reference gas of known thermal conductivity. The most commonly used reference gas is nitrogen, as the thermal conductivities of most common gases (except hydrogen and helium) are similar to that of nitrogen.
See also
List of thermal conductivities
Electrical conduction
Convection diffusion equation
R-value (insulation)
Heat pipe
Fick's law of diffusion
Relativistic heat conduction
Churchill–Bernstein equation
Fourier number
Biot number
False diffusion
General equation of heat transfer
References
Further reading
H. S. Carslaw and J. C. Jaeger. Conduction of heat in solids. Oxford University Press, USA. 1959. ISBN 978-0198533030.
F. Dehghani, CHNG2801. Conservation and Transport Processes: Course Notes. University of Sydney, Sydney. 2007.
Amimul Ahsan. Convection and conduction heat transfer. Intech. 2011. ISBN 9789533075822.
Sadik Kakac, Y Yener. Heat Conduction. Taylor and Francis. 2012. ISBN 9781466507845.
Jan Taler, Piotr Duda. Solving Direct and Inverse Heat Conduction Problems. Springer-Verlag Berlin Heidelberg 2005. ISBN 978-3-540-33470-5.
Liqiu Wang, Xuesheng Zhou, Xiaohao Wei. Heat Conduction: Mathematical Models and Analytical Solutions. Springer 2008. ISBN 978-3-540-74028-5.
Beck, James V.; Cole, Kevin D.; Haji-Sheikh, A.; Litkouhi, Bahman. Heat Conduction Using Green's Functions. CRC Press. 2010. ISBN 9781439895214.
S. G. Bruch. The kind of motion we call heat. Elsevier Science Publisher. 1976. ISBN 0-444-87008-3.
M. Necati Ozisik. Heat Conduction. Wiley-Interscience. 1993. ISBN 9780471532569.
W. Kelly, Understanding Heat Conduction. Nova Science Publischer. 2010. ISBN 978-1-53619-182-0.
Latif M. Jiji, Amir H. Danesh-Yazdi. Heat Conduction. Fourth Edition. Springer. 2024. .
John H Lienhard IV and John H Lienhard V. A Heat Transfer Textbook. Fifth Edition. Dover Pub., Mineola, N.Y. 2019. ISBN 978-0486837352 .
External links
Heat conduction – Thermal-FluidsPedia
Newton's Law of Cooling by Jeff Bryant based on a program by Stephen Wolfram, Wolfram Demonstrations Project.
Heat conduction
Heat transfer
Physical quantities
Transport phenomena | Thermal conduction | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 5,864 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Thermodynamics",
"Heat conduction",
"Physical properties"
] |
72,576 | https://en.wikipedia.org/wiki/Axial%20precession | In astronomy, axial precession is a gravity-induced, slow, and continuous change in the orientation of an astronomical body's rotational axis. In the absence of precession, the astronomical body's orbit would show axial parallelism. In particular, axial precession can refer to the gradual shift in the orientation of Earth's axis of rotation in a cycle of approximately 26,000 years. This is similar to the precession of a spinning top, with the axis tracing out a pair of cones joined at their apices. The term "precession" typically refers only to this largest part of the motion; other changes in the alignment of Earth's axis—nutation and polar motion—are much smaller in magnitude.
Earth's precession was historically called the precession of the equinoxes, because the equinoxes moved westward along the ecliptic relative to the fixed stars, opposite to the yearly motion of the Sun along the ecliptic. Historically,
the discovery of the precession of the equinoxes is usually attributed in the West to the 2nd-century-BC astronomer Hipparchus. With improvements in the ability to calculate the gravitational force between planets during the first half of the nineteenth century, it was recognized that the ecliptic itself moved slightly, which was named planetary precession, as early as 1863, while the dominant component was named lunisolar precession. Their combination was named general precession, instead of precession of the equinoxes.
Lunisolar precession is caused by the gravitational forces of the Moon and Sun on Earth's equatorial bulge, causing Earth's axis to move with respect to inertial space. Planetary precession (an advance) is due to the small angle between the gravitational force of the other planets on Earth and its orbital plane (the ecliptic), causing the plane of the ecliptic to shift slightly relative to inertial space. Lunisolar precession is about 500 times greater than planetary precession. In addition to the Moon and Sun, the other planets also cause a small movement of Earth's axis in inertial space, making the contrast in the terms lunisolar versus planetary misleading, so in 2006 the International Astronomical Union recommended that the dominant component be renamed the precession of the equator, and the minor component be renamed precession of the ecliptic, but their combination is still named general precession. Many references to the old terms exist in publications predating the change.
Nomenclature
The term "Precession" is derived from the Latin praecedere ("to precede, to come before or earlier"). The stars viewed from Earth are seen to proceed from east to west daily (at about 15 degrees per hour), due to the Earth's diurnal motion, and yearly (at about 1 degree per day), due to the Earth's revolution around the Sun. At the same time the stars can be observed to anticipate slightly such motion, at the rate of approximately 50 arc seconds per year (1 degree per 72 years), a phenomenon known as the "precession of the equinoxes".
In describing this motion astronomers generally have shortened the term to simply "precession". In describing the cause of the motion physicists have also used the term "precession", which has led to some confusion between the observable phenomenon and its cause, which matters because in astronomy, some precessions are real and others are apparent. This issue is further obfuscated by the fact that many astronomers are physicists or astrophysicists.
The term "precession" used in astronomy generally describes the observable precession of the equinox (the stars moving retrograde across the sky), whereas the term "precession" as used in physics, generally describes a mechanical process.
Effects
The precession of the Earth's axis has a number of observable effects. First, the positions of the south and north celestial poles appear to move in circles against the space-fixed backdrop of stars, completing one circuit in approximately 26,000 years. Thus, while today the star Polaris lies approximately at the north celestial pole, this will change over time, and other stars will become the "north star". In approximately 3,200 years, the star Gamma Cephei in the Cepheus constellation will succeed Polaris for this position. The south celestial pole currently lacks a bright star to mark its position, but over time precession also will cause bright stars to become South Stars. As the celestial poles shift, there is a corresponding gradual shift in the apparent orientation of the whole star field, as viewed from a particular position on Earth.
Secondly, the position of the Earth in its orbit around the Sun at the solstices, equinoxes, or other time defined relative to the seasons, slowly changes. For example, suppose that the Earth's orbital position is marked at the summer solstice, when the Earth's axial tilt is pointing directly toward the Sun. One full orbit later, when the Sun has returned to the same apparent position relative to the background stars, the Earth's axial tilt is not now directly toward the Sun: because of the effects of precession, it is a little way "beyond" this. In other words, the solstice occurred a little earlier in the orbit. Thus, the tropical year, measuring the cycle of seasons (for example, the time from solstice to solstice, or equinox to equinox), is about 20 minutes shorter than the sidereal year, which is measured by the Sun's apparent position relative to the stars. After about 26 000 years the difference amounts to a full year, so the positions of the seasons relative to the orbit are "back where they started". (Other effects also slowly change the shape and orientation of the Earth's orbit, and these, in combination with precession, create various cycles of differing periods; see also Milankovitch cycles. The magnitude of the Earth's tilt, as opposed to merely its orientation, also changes slowly over time, but this effect is not attributed directly to precession.)
For identical reasons, the apparent position of the Sun relative to the backdrop of the stars at some seasonally fixed time slowly regresses a full 360° through all twelve traditional constellations of the zodiac, at the rate of about 50.3 seconds of arc per year, or 1 degree every 71.6 years.
At present, the rate of precession corresponds to a period of 25,772 years, so the tropical year is shorter than the sidereal year by about 1,224.5 seconds (roughly 20 minutes).
The rate itself varies somewhat with time (see Values below), so one cannot say that in exactly 25,772 years the Earth's axis will be back to where it is now.
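The figures quoted above fit together as a short back-of-the-envelope check; the sidereal-year length used below is a standard value assumed here, not stated in the article.

```python
# Relation between the precession period and the tropical/sidereal year difference.
sidereal_year_days = 365.25636          # standard value (assumed)
seconds_per_day = 86400.0
precession_period_years = 25772.0

rate_arcsec_per_year = 360.0 * 3600.0 / precession_period_years
years_per_degree = precession_period_years / 360.0
year_difference_s = sidereal_year_days * seconds_per_day / precession_period_years

print(f"precession rate ~ {rate_arcsec_per_year:.1f} arcsec/yr")    # ~50.3
print(f"~1 degree every {years_per_degree:.1f} years")              # ~71.6
print(f"tropical year shorter by ~{year_difference_s:.1f} s")       # ~1224.5
```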
For further details, see Changing pole stars and Polar shift and equinoxes shift, below.
History
Hellenistic world
Hipparchus
The discovery of precession usually is attributed to Hipparchus (190–120 BC) of Rhodes or Nicaea, a Greek astronomer. According to Ptolemy's Almagest, Hipparchus measured the longitude of Spica and other bright stars. Comparing his measurements with data from his predecessors, Timocharis (320–260 BC) and Aristillus (~280 BC), he concluded that Spica had moved 2° relative to the autumnal equinox. He also compared the lengths of the tropical year (the time it takes the Sun to return to an equinox) and the sidereal year (the time it takes the Sun to return to a fixed star), and found a slight discrepancy. Hipparchus concluded that the equinoxes were moving ("precessing") through the zodiac, and that the rate of precession was not less than 1° in a century, in other words, completing a full cycle in no more than 36,000 years.
Virtually all of the writings of Hipparchus are lost, including his work on precession. They are mentioned by Ptolemy, who explains precession as the rotation of the celestial sphere around a motionless Earth. It is reasonable to presume that Hipparchus, similarly to Ptolemy, thought of precession in geocentric terms as a motion of the heavens, rather than of the Earth.
Ptolemy
The first astronomer known to have continued Hipparchus's work on precession is Ptolemy in the second century AD. Ptolemy measured the longitudes of Regulus, Spica, and other bright stars with a variation of Hipparchus's lunar method that did not require eclipses. Before sunset, he measured the longitudinal arc separating the Moon from the Sun. Then, after sunset, he measured the arc from the Moon to the star. He used Hipparchus's model to calculate the Sun's longitude, and made corrections for the Moon's motion and its parallax. Ptolemy compared his own observations with those made by Hipparchus, Menelaus of Alexandria, Timocharis, and Agrippa. He found that between Hipparchus's time and his own (about 265 years), the stars had moved 2°40', or 1° in 100 years (36" per year; the rate accepted today is about 50" per year or 1° in 72 years). It is possible, however, that Ptolemy simply trusted Hipparchus' figure instead of making his own measurements. He also confirmed that precession affected all fixed stars, not just those near the ecliptic, and his cycle had the same period of 36,000 years as that of Hipparchus.
Other authors
Most ancient authors did not mention precession and, perhaps, did not know of it. For instance, Proclus rejected precession, while Theon of Alexandria, a commentator on Ptolemy in the fourth century, accepted Ptolemy's explanation. Theon also reports an alternate theory:
"According to certain opinions ancient astrologers believe that from a certain epoch the solstitial signs have a motion of 8° in the order of the signs, after which they go back the same amount. ..." (Dreyer 1958, p. 204)
Instead of proceeding through the entire sequence of the zodiac, the equinoxes "trepidated" back and forth over an arc of 8°. The theory of trepidation is presented by Theon as an alternative to precession.
Alternative discovery theories
Babylonians
Various assertions have been made that other cultures discovered precession independently of Hipparchus. According to Al-Battani, the Chaldean astronomers had distinguished the tropical and sidereal year so that by approximately 330 BC, they would have been in a position to describe precession, if inaccurately, but such claims generally are regarded as unsupported.
Maya
Archaeologist Susan Milbrath has speculated that the Mesoamerican Long Count calendar of "30,000 years involving the Pleiades...may have been an effort to calculate the precession of the equinox." This view is held by few other professional scholars of Maya civilization.
Ancient Egyptians
Similarly, it is claimed the precession of the equinoxes was known in Ancient Egypt, prior to the time of Hipparchus (the Ptolemaic period). These claims remain controversial. Ancient Egyptians kept accurate calendars and recorded dates on temple walls, so it would be a simple matter for them to plot the "rough" precession rate.
The Dendera Zodiac, a star-map inside the Hathor temple at Dendera, allegedly records the precession of the equinoxes. In any case, if the ancient Egyptians knew of precession, their knowledge is not recorded as such in any of their surviving astronomical texts.
Michael Rice, a popular writer on Ancient Egypt, has written that Ancient Egyptians must have observed the precession, and suggested that this awareness had profound effects on their culture. Rice noted that Egyptians re-oriented temples in response to precession of associated stars.
India
Before 1200, India had two theories of trepidation, one with a rate and another without a rate, and several related models of precession. Each had minor changes or corrections by various commentators. The dominant of the three was the trepidation described by the most respected Indian astronomical treatise, the Surya Siddhanta (3:9–12), composed but revised during the next few centuries. It used a sidereal epoch, or ayanamsa, that is still used by all Indian calendars, varying over the ecliptic longitude of 19°11′ to 23°51′, depending on the group consulted. This epoch causes the roughly 30 Indian calendar years to begin 23–28 days after the modern March equinox. The March equinox of the Surya Siddhanta librated 27° in both directions from the sidereal epoch. Thus the equinox moved 54° in one direction and then back 54° in the other direction. This cycle took 7200 years to complete at a rate of 54″/year. The equinox coincided with the epoch at the beginning of the Kali Yuga in −3101 and again 3,600 years later in 499. The direction changed from prograde to retrograde midway between these years at −1301 when it reached its maximum deviation of 27°, and would have remained retrograde, the same direction as modern precession, for 3600 years until 2299.
Another trepidation was described by Varāhamihira (). His trepidation consisted of an arc of 46°40′ in one direction and a return to the starting point. Half of this arc, 23°20′, was identified with the Sun's maximum declination on either side of the equator at the solstices. But no period was specified, thus no annual rate can be ascertained.
Several authors have described precession to be near 200,000 revolutions in a Kalpa of 4,320,000,000 years, which would be a rate of 60″/year. They probably deviated from an even 200,000 revolutions to make the accumulated precession zero near 500. Visnucandra () mentions 189,411 revolutions in a Kalpa or 56.8″/year. Bhaskara I () mentions [1]94,110 revolutions in a Kalpa or 58.2″/year. Bhāskara II () mentions 199,699 revolutions in a Kalpa or 59.9″/year.
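These rates follow directly from the stated revolution counts, as a quick check shows; the computation is an illustration, not part of the source, and Bhaskara I's figure is taken as 194,110 revolutions, following the bracketed emendation above.

```python
# arcseconds per year = revolutions * 360 * 3600 / years in a Kalpa
kalpa_years = 4_320_000_000
for name, revolutions in [("even 200,000", 200_000),
                          ("Visnucandra", 189_411),
                          ("Bhaskara I", 194_110),
                          ("Bhaskara II", 199_699)]:
    rate = revolutions * 360 * 3600 / kalpa_years
    print(f"{name}: {rate:.1f} arcsec/year")
```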
Chinese astronomy
Yu Xi (fourth century AD) was the first Chinese astronomer to mention precession. He estimated the rate of precession as 1° in 50 years.
Middle Ages and Renaissance
In medieval Islamic astronomy, precession was known based on Ptolemy's Almagest, and by observations that refined the value.
Al-Battani, in his work Zij Al-Sabi, mentions Hipparchus's calculation of precession and Ptolemy's value of 1 degree per 100 solar years, and says that he himself measured precession and found it to be one degree per 66 solar years.
Subsequently, Al-Sufi, in his Book of Fixed Stars, mentions the same value, namely that Ptolemy's figure for precession is 1 degree per 100 solar years. He then quotes a different value from Zij Al Mumtahan, which was compiled during Al-Ma'mun's reign, of 1 degree for every 66 solar years. He also quotes the aforementioned Zij Al-Sabi of Al-Battani as adjusting coordinates for stars by 11 degrees and 10 minutes of arc to account for the difference between Al-Battani's time and Ptolemy's.
Later, the Zij-i Ilkhani, compiled at the Maragheh observatory, sets the precession of the equinoxes at 51 arc seconds per annum, which is very close to the modern value of 50.2 arc seconds.
In the Middle Ages, Islamic and Latin Christian astronomers treated "trepidation" as a motion of the fixed stars to be added to precession. This theory is commonly attributed to the Arab astronomer Thabit ibn Qurra, but the attribution has been contested in modern times. Nicolaus Copernicus published a different account of trepidation in De revolutionibus orbium coelestium (1543). This work makes the first definite reference to precession as the result of a motion of the Earth's axis. Copernicus characterized precession as the third motion of the Earth.
Modern period
Over a century later, Isaac Newton in Philosophiae Naturalis Principia Mathematica (1687) explained precession as a consequence of gravitation. However, Newton's original precession equations did not work, and were revised considerably by Jean le Rond d'Alembert and subsequent scientists.
Hipparchus's discovery
Hipparchus gave an account of his discovery in On the Displacement of the Solsticial and Equinoctial Points (described in Almagest III.1 and VII.2). He measured the ecliptic longitude of the star Spica during lunar eclipses and found that it was about 6° west of the autumnal equinox. By comparing his own measurements with those of Timocharis of Alexandria (a contemporary of Euclid, who worked with Aristillus early in the 3rd century BC), he found that Spica's longitude had decreased by about 2° in the meantime (exact years are not mentioned in Almagest). Also in VII.2, Ptolemy gives more precise observations of two stars, including Spica, and concludes that in each case a 2° 40' change occurred between 128 BC and AD 139. Hence, 1° per century or one full cycle in 36,000 years, that is, the precessional period of Hipparchus as reported by Ptolemy; cf. page 328 in Toomer's translation of Almagest, 1998 edition. He also noticed this motion in other stars. He speculated that only the stars near the zodiac shifted over time. Ptolemy called this his "first hypothesis" (Almagest VII.1), but did not report any later hypothesis Hipparchus might have devised. Hipparchus apparently limited his speculations, because he had only a few older observations, which were not very reliable.
Because the equinoctial points are not marked in the sky, Hipparchus needed the Moon as a reference point; he used a lunar eclipse to measure the position of a star. Hipparchus already had developed a way to calculate the longitude of the Sun at any moment. A lunar eclipse happens during Full moon, when the Moon is at opposition, precisely 180° from the Sun. Hipparchus is thought to have measured the longitudinal arc separating Spica from the Moon. To this value, he added the calculated longitude of the Sun, plus 180° for the longitude of the Moon. He did the same procedure with Timocharis' data. Observations such as these eclipses, incidentally, are the main source of data about when Hipparchus worked, since other biographical information about him is minimal. The lunar eclipses he observed, for instance, took place on 21 April 146 BC, and 21 March 135 BC.
Hipparchus also studied precession in On the Length of the Year. Two kinds of year are relevant to understanding his work. The tropical year is the length of time that the Sun, as viewed from the Earth, takes to return to the same position along the ecliptic (its path among the stars on the celestial sphere). The sidereal year is the length of time that the Sun takes to return to the same position with respect to the stars of the celestial sphere. Precession causes the stars to change their longitude slightly each year, so the sidereal year is longer than the tropical year. Using observations of the equinoxes and solstices, Hipparchus found that the length of the tropical year was 365+1/4−1/300 days, or 365.24667 days (Evans 1998, p. 209). Comparing this with the length of the sidereal year, he calculated that the rate of precession was not less than 1° in a century. From this information, it is possible to calculate that his value for the sidereal year was 365+1/4+1/144 days. By giving a minimum rate, he may have been allowing for errors in observation.
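Hipparchus's two year lengths imply his precession figure directly, as a quick check shows; the computation below is an illustration, not from the source.

```python
from fractions import Fraction

tropical = 365 + Fraction(1, 4) - Fraction(1, 300)   # ~365.24667 days
sidereal = 365 + Fraction(1, 4) + Fraction(1, 144)   # ~365.25694 days

drift_days_per_year = sidereal - tropical             # how far the equinox slips each year
drift_deg_per_year = float(drift_days_per_year / sidereal * 360)

print(f"drift ~ {drift_deg_per_year:.4f} deg/yr")              # ~0.0101 deg/yr
print(f"i.e. ~1 degree per {1 / drift_deg_per_year:.0f} years")  # ~99 years, about 1 degree per century
print(f"full cycle ~ {360 / drift_deg_per_year:.0f} years")      # ~35,500 years, under 36,000
```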
To approximate his tropical year, Hipparchus created his own lunisolar calendar by modifying those of Meton and Callippus in On Intercalary Months and Days (now lost), as described by Ptolemy in the Almagest III.1. The Babylonian calendar used a cycle of 235 lunar months in 19 years since 499 BC (with only three exceptions before 380 BC), but it did not use a specified number of days. The Metonic cycle (432 BC) assigned 6,940 days to these 19 years producing an average year of 365+1/4+1/76 or 365.26316 days. The Callippic cycle (330 BC) dropped one day from four Metonic cycles (76 years) for an average year of 365+1/4 or 365.25 days. Hipparchus dropped one more day from four Callippic cycles (304 years), creating the Hipparchic cycle with an average year of 365+1/4−1/304 or 365.24671 days, which was close to his tropical year of 365+1/4−1/300 or 365.24667 days.
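The average year lengths of these calendar cycles follow from the stated day counts, as this quick check (illustrative, not from the source) confirms:

```python
metonic_days, metonic_years = 6940, 19
callippic_days, callippic_years = 4 * metonic_days - 1, 4 * metonic_years        # 27,759 days in 76 years
hipparchic_days, hipparchic_years = 4 * callippic_days - 1, 4 * callippic_years  # 111,035 days in 304 years

for name, d, y in [("Metonic", metonic_days, metonic_years),
                   ("Callippic", callippic_days, callippic_years),
                   ("Hipparchic", hipparchic_days, hipparchic_years)]:
    print(f"{name}: {d / y:.5f} days per year")   # 365.26316, 365.25000, 365.24671
```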
Hipparchus's mathematical signatures are found in the Antikythera Mechanism, an ancient astronomical computer of the second century BC. The mechanism is based on a solar year, the Metonic Cycle, which is the period the Moon reappears in the same place in the sky with the same phase (full Moon appears at the same position in the sky approximately in 19 years), the Callipic cycle (which is four Metonic cycles and more accurate), the Saros cycle, and the Exeligmos cycles (three Saros cycles for the accurate eclipse prediction). Study of the Antikythera Mechanism showed that the ancients used very accurate calendars based on all the aspects of solar and lunar motion in the sky. In fact, the Lunar Mechanism which is part of the Antikythera Mechanism depicts the motion of the Moon and its phase, for a given time, using a train of four gears with a pin and slot device which gives a variable lunar velocity that is very close to Kepler's second law. That is, it takes into account the fast motion of the Moon at perigee and slower motion at apogee.
Changing pole stars
A consequence of the precession is a changing pole star. Currently Polaris is extremely well suited to mark the position of the north celestial pole, as Polaris is a moderately bright star with a visual magnitude of 2.1 (variable), and is located about one degree from the pole, with no stars of similar brightness too close.
The previous pole star was Kochab (Beta Ursae Minoris, β UMi, β Ursae Minoris), the brightest star in the bowl of the "Little Dipper", located 16 degrees from Polaris. It held that role from 1500 BC to AD 500. It was not quite as accurate in its day as Polaris is today. Today, Kochab and its neighbor Pherkad are referred to as the "Guardians of the Pole" (meaning Polaris).
On the other hand, Thuban in the constellation Draco, which was the pole star in 3000 BC, is much less conspicuous at magnitude 3.67 (one-fifth as bright as Polaris); today it is invisible in light-polluted urban skies.
When Polaris becomes the north star again around 27,800, it will then be farther away from the pole than it is now due to its proper motion, while in 23,600 BC it came closer to the pole.
It is more difficult to find the south celestial pole in the sky at this moment, as that area is a particularly bland portion of the sky. The nominal south pole star is Sigma Octantis, which with magnitude 5.5 is barely visible to the naked eye even under ideal conditions. That will change from the 80th to the 90th centuries, however, when the south celestial pole travels through the False Cross.
A star map shows the same situation: the south celestial pole is moving toward the constellation of the Southern Cross. For the last 2,000 years or so, the Southern Cross has pointed toward the south celestial pole. As a consequence, the constellation is difficult to view from subtropical northern latitudes, unlike in the time of the ancient Greeks. It can now be seen from as far north as Miami (about 25° N), but only during winter and early spring.
Polar shift and equinoxes shift
The images at right attempt to explain the relation between the precession of the Earth's axis and the shift in the equinoxes. These images show the position of the Earth's axis on the celestial sphere, a fictitious sphere which places the stars according to their position as seen from Earth, regardless of their actual distance. The first image shows the celestial sphere from the outside, with the constellations in mirror image. The second image shows the perspective of a near-Earth position as seen through a very wide angle lens (from which the apparent distortion arises).
The rotation axis of the Earth describes, over a period of 25,700 years, a small circle among the stars near the top of the diagram, centered on the ecliptic north pole and with an angular radius of about 23.4°, an angle known as the obliquity of the ecliptic. The direction of precession is opposite to the daily rotation of the Earth on its axis. One of the two axes shown was the Earth's rotation axis 5,000 years ago, when it pointed to the star Thuban; the yellow axis, pointing to Polaris, marks the axis now.
The equinoxes occur where the celestial equator intersects the ecliptic (red line), that is, where the Earth's axis is perpendicular to the line connecting the centers of the Sun and Earth. The term "equinox" here refers to a point on the celestial sphere so defined, rather than the moment in time when the Sun is overhead at the Equator (though the two meanings are related). When the axis precesses from one orientation to another, the equatorial plane of the Earth (indicated by the circular grid around the equator) moves with it. The celestial equator is just the Earth's equator projected onto the celestial sphere, so it moves as the Earth's equatorial plane moves, and the intersection with the ecliptic moves with it. The positions of the poles and equator on Earth do not change; only the orientation of the Earth against the fixed stars does.
As seen from the grid aligned with the Earth's axis of 5,000 years ago, the March equinox was then close to the star Aldebaran in Taurus. Now, as seen from the yellow grid, it has shifted to somewhere in the constellation of Pisces.
Still pictures like these are only first approximations, as they do not take into account the variable speed of the precession, the variable obliquity of the ecliptic, the planetary precession (which is a slow rotation of the ecliptic plane itself, presently around an axis located on the plane, with longitude 174.8764°) and the proper motions of the stars.
The precessional era associated with each zodiacal constellation is often known as a "Great Month"; since a full precession cycle takes roughly 25,700 years, each Great Month lasts on the order of 2,000 years.
Cause
The precession of the equinoxes is caused by the gravitational forces of the Sun and the Moon, and to a lesser extent other bodies, on the Earth. It was first explained by Isaac Newton.
Axial precession is similar to the precession of a spinning top. In both cases, the applied force is due to gravity. For a spinning top, this force tends to be almost parallel to the rotation axis initially, and the angle between them increases as the top slows down; for a gyroscope on a stand it can approach 90 degrees. For the Earth, however, the applied forces of the Sun and the Moon are closer to perpendicular to the axis of rotation.
The Earth is not a perfect sphere but an oblate spheroid, with an equatorial diameter about 43 kilometers larger than its polar diameter. Because of the Earth's axial tilt, during most of the year the half of this bulge that is closest to the Sun is off-center, either to the north or to the south, and the far half is off-center on the opposite side. The gravitational pull on the closer half is stronger, since gravity decreases with the square of distance, so this creates a small torque on the Earth as the Sun pulls harder on one side of the Earth than the other. The axis of this torque is roughly perpendicular to the axis of the Earth's rotation so the axis of rotation precesses. If the Earth were a perfect sphere, there would be no precession.
This average torque is perpendicular to the direction in which the rotation axis is tilted away from the ecliptic pole, so that it does not change the axial tilt itself. The magnitude of the torque from the Sun (or the Moon) varies with the angle between the Earth's spin axis direction and that of the gravitational attraction, approaching zero when they are perpendicular. For example, this happens at the equinoxes in the case of the interaction with the Sun: the near and far sides of the bulge are then aligned with the gravitational attraction, so there is no torque due to the difference in gravitational attraction.
Although the above explanation involved the Sun, the same explanation holds true for any object moving around the Earth, along or close to the ecliptic, notably, the Moon. The combined action of the Sun and the Moon is called the lunisolar precession. In addition to the steady progressive motion (resulting in a full circle in about 25,700 years) the Sun and Moon also cause small periodic variations, due to their changing positions. These oscillations, in both precessional speed and axial tilt, are known as the nutation. The most important term has a period of 18.6 years and an amplitude of 9.2 arcseconds.
In addition to lunisolar precession, the actions of the other planets of the Solar System cause the whole ecliptic to rotate slowly around an axis which has an ecliptic longitude of about 174° measured on the instantaneous ecliptic. This so-called planetary precession shift amounts to a rotation of the ecliptic plane of 0.47 seconds of arc per year (more than a hundred times smaller than lunisolar precession). The sum of the two precessions is known as the general precession.
Equations
The tidal force on Earth due to a perturbing body (the Sun, the Moon or a planet) is described by Newton's law of universal gravitation: the gravitational force of the perturbing body on the side of Earth nearest to it is greater than the force on the far side by an amount that is inversely proportional to the cube of the distance to the perturbing body. If the gravitational force of the perturbing body acting on the mass of the Earth as a point mass at the center of Earth (which provides the centripetal force causing the orbital motion) is subtracted from the gravitational force of the perturbing body everywhere on the surface of Earth, what remains may be regarded as the tidal force. This gives the paradoxical notion of a force acting away from the perturbing body, but in reality it is simply a lesser force toward that body, due to the gradient in the gravitational field. For precession, this tidal force can be grouped into two forces which act only on the equatorial bulge outside of a mean spherical radius. This couple can be decomposed into two pairs of components, one pair parallel to Earth's equatorial plane, directed toward and away from the perturbing body, which cancel each other out, and another pair parallel to Earth's rotational axis, both directed toward the ecliptic plane. The latter pair of forces creates the following torque vector on Earth's equatorial bulge:

$$\vec{T} = \frac{3\,GM}{r^{3}}\,(C - A)\,\sin\delta\,\cos\delta \begin{pmatrix} \sin\alpha \\ -\cos\alpha \\ 0 \end{pmatrix}$$
where
GM, standard gravitational parameter of the perturbing body
r, geocentric distance to the perturbing body
C, moment of inertia around Earth's axis of rotation
A, moment of inertia around any equatorial diameter of Earth
C − A, moment of inertia of Earth's equatorial bulge (C > A)
δ, declination of the perturbing body (north or south of equator)
α, right ascension of the perturbing body (east from March equinox).
The three unit vectors of the torque at the center of the Earth (top to bottom) are x on a line within the ecliptic plane (the intersection of Earth's equatorial plane with the ecliptic plane) directed toward the March equinox, y on a line in the ecliptic plane directed toward the summer solstice (90° east of x), and z on a line directed toward the north pole of the ecliptic.
The value of the three sinusoidal terms in the direction of x for the Sun is a sine squared waveform varying from zero at the equinoxes (0°, 180°) to 0.36495 at the solstices (90°, 270°). The value in the direction of y for the Sun is a sine wave varying from zero at the four equinoxes and solstices to ±0.19364 (slightly more than half of the sine squared peak) halfway between each equinox and solstice with peaks slightly skewed toward the equinoxes (43.37°(−), 136.63°(+), 223.37°(−), 316.63°(+)). Both solar waveforms have about the same peak-to-peak amplitude and the same period, half of a revolution or half of a year. The value in the direction of z is zero.
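The solar x-direction waveform described above can be checked numerically. The sketch below assumes the standard relations for a body on the ecliptic (sin δ = sin ε sin λ and cos δ sin α = cos ε sin λ, with λ the ecliptic longitude), under which the factor sin δ cos δ sin α reduces to sin ε cos ε sin²λ:

```python
import numpy as np

eps = np.radians(23.43929)                         # obliquity of the ecliptic (assumed J2000 value)
lam = np.radians(np.linspace(0.0, 360.0, 100001))  # ecliptic longitude over one revolution

# x-direction factor of the torque: sin(δ)·cos(δ)·sin(α) = sin(ε)·cos(ε)·sin²(λ)
x_factor = np.sin(eps) * np.cos(eps) * np.sin(lam) ** 2

print(x_factor.max())   # ≈ 0.36495, reached at the solstices (λ = 90°, 270°)
print(x_factor.mean())  # ≈ 0.18248 = ½·sin ε·cos ε, the average used in the next step
```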
The average torque of the sine wave in the direction of y is zero for the Sun or Moon, so this component of the torque does not affect precession. The average torque of the sine squared waveform in the direction of x for the Sun or Moon is:

$$\overline{T}_x = \frac{3}{2}\,\frac{GM}{a^{3}\left(1 - e^{2}\right)^{3/2}}\,(C - A)\,\sin\varepsilon\,\cos\varepsilon$$
where
, semimajor axis of Earth's (Sun's) orbit or Moon's orbit
e, eccentricity of Earth's (Sun's) orbit or Moon's orbit
and 1/2 accounts for the average of the sine squared waveform, $a^{3}\left(1 - e^{2}\right)^{3/2}$ accounts for the average distance cubed of the Sun or Moon from Earth over the entire elliptical orbit, and ε (the angle between the equatorial plane and the ecliptic plane) is the maximum value of δ for the Sun and the average maximum value for the Moon over an entire 18.6-year cycle.
Precession is:

$$\frac{d\psi}{dt} = \frac{\overline{T}_x}{C\,\omega\,\sin\varepsilon}$$

where ω is Earth's angular velocity and Cω is Earth's angular momentum. Thus the first-order component of precession due to the Sun is:

$$\frac{d\psi_S}{dt} = \frac{3}{2}\left[\frac{GM}{a^{3}\left(1 - e^{2}\right)^{3/2}}\right]_S \left[\frac{C - A}{C}\,\frac{\cos\varepsilon}{\omega}\right]_E$$
whereas that due to the Moon is:

$$\frac{d\psi_L}{dt} = \frac{3}{2}\left[\frac{GM\left(1 - \tfrac{3}{2}\sin^{2} i\right)}{a^{3}\left(1 - e^{2}\right)^{3/2}}\right]_L \left[\frac{C - A}{C}\,\frac{\cos\varepsilon}{\omega}\right]_E$$
where i is the angle between the plane of the Moon's orbit and the ecliptic plane. In these two equations, the Sun's parameters are within the square brackets labeled S, the Moon's parameters are within the square brackets labeled L, and the Earth's parameters are within the square brackets labeled E. The term $\left(1 - \tfrac{3}{2}\sin^{2} i\right)$ accounts for the inclination of the Moon's orbit relative to the ecliptic. The term $(C - A)/C$ is Earth's dynamical ellipticity or flattening, which is adjusted to the observed precession because Earth's internal structure is not known with sufficient detail. If Earth were homogeneous the term would equal its third eccentricity squared,
where a is the equatorial radius and c is the polar radius, so that the third eccentricity squared is $e''^{2} = (a^{2} - c^{2})/(a^{2} + c^{2})$.
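For illustration, the third eccentricity squared can be evaluated from approximate present-day radii; the specific values below are assumed round figures, not taken from the text:

```python
a = 6378.137   # equatorial radius in km (assumed approximate value)
c = 6356.752   # polar radius in km (assumed approximate value)

e3_squared = (a**2 - c**2) / (a**2 + c**2)
print(e3_squared)   # ≈ 0.00336 for a homogeneous Earth; the value actually used is
                    # the dynamical ellipticity adjusted to the observed precession
```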
Applicable parameters for J2000.0, rounded to seven significant digits (excluding leading 1), yield:
dψS/dt = 2.450183 × 10⁻¹² rad/s
dψL/dt = 5.334529 × 10⁻¹² rad/s
both of which must be converted to ″/a (arcseconds per annum, i.e. per Julian year) by multiplying by the number of arcseconds in 2π radians (1.296 × 10⁶″/2π) and by the number of seconds in one annum (3.15576 × 10⁷ s/a):
dψS/dt = 15.948788″/a vs 15.948870″/a from Williams
dψL/dt = 34.723638″/a vs 34.457698″/a from Williams.
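A minimal numerical sketch of the two expressions is given below. The parameter values are approximate J2000-era figures chosen here for illustration (they are not the article's seven-significant-digit list), so the results come out close to, but not exactly equal to, the numbers quoted above:

```python
import math

GM_sun  = 1.32712440e20   # m^3/s^2, heliocentric gravitational parameter (assumed)
GM_moon = 4.9028000e12    # m^3/s^2, lunar gravitational parameter (assumed)
a_sun   = 1.4959802e11    # m, semimajor axis of Earth's orbit (assumed)
a_moon  = 3.8339800e8     # m, mean Earth-Moon distance (assumed)
e_sun   = 0.016709        # eccentricity of Earth's orbit (assumed)
e_moon  = 0.054900        # eccentricity of the Moon's orbit (assumed)
i_moon  = math.radians(5.1454)    # inclination of the Moon's orbit to the ecliptic (assumed)
eps     = math.radians(23.43928)  # obliquity of the ecliptic (assumed)
H       = 0.003273763     # dynamical ellipticity (C - A)/C (assumed)
omega   = 7.292115e-5     # Earth's rotation rate in rad/s (assumed)

earth_factor = H * math.cos(eps) / omega   # the bracketed Earth term in both equations

dpsi_sun = 1.5 * GM_sun / (a_sun**3 * (1 - e_sun**2)**1.5) * earth_factor
dpsi_moon = (1.5 * GM_moon * (1 - 1.5 * math.sin(i_moon)**2)
             / (a_moon**3 * (1 - e_moon**2)**1.5) * earth_factor)

# Convert rad/s to arcseconds per Julian year: 1,296,000" per 2π rad, 3.15576e7 s/a.
to_arcsec_per_annum = (1.296e6 / (2 * math.pi)) * 3.15576e7

print(dpsi_sun, dpsi_sun * to_arcsec_per_annum)    # ≈ 2.45e-12 rad/s, ≈ 15.95 "/a
print(dpsi_moon, dpsi_moon * to_arcsec_per_annum)  # ≈ 5.33e-12 rad/s, ≈ 34.7  "/a
```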
The solar equation is a good representation of precession due to the Sun because Earth's orbit is close to an ellipse, being only slightly perturbed by the other planets. The lunar equation is not as good a representation of precession due to the Moon because the Moon's orbit is greatly distorted by the Sun and neither the radius nor the eccentricity is constant over the year.
Values
Simon Newcomb's calculation at the end of the 19th century for general precession (p) in longitude gave a value of 5,025.64 arcseconds per tropical century, and was the generally accepted value until artificial satellites delivered more accurate observations and electronic computers allowed more elaborate models to be calculated. Jay Henry Lieske developed an updated theory in 1976, where p equals 5,029.0966 arcseconds (or 1.3969713 degrees) per Julian century. Modern techniques such as VLBI and LLR allowed further refinements, and the International Astronomical Union adopted a new constant value in 2000, and new computation methods and polynomial expressions in 2003 and 2006; the accumulated precession is:
pA = 5,028.796195 T + 1.1054348 T² + higher-order terms, in arcseconds, with T the time in Julian centuries (that is, 36,525 days) since the epoch of 2000.
The rate of precession is the derivative of that:
p = 5,028.796195 + 2.2108696 T + higher-order terms.
The constant term of this speed (5,028.796195 arcseconds per century in the equation above) corresponds to one full precession circle in 25,771.57534 years (one full circle of 360 degrees divided by 50.28796195 arcseconds per year), although some other sources put the value at 25,771.4 years, leaving a small uncertainty.
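The correspondence between the constant term and the cycle length is a one-line computation:

```python
rate = 5028.796195 / 100        # constant term, converted to arcseconds per year
print(360 * 3600 / rate)        # ≈ 25,771.575 years for one full precession circle
```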
The precession rate is not a constant, but is (at the moment) slowly increasing over time, as indicated by the linear (and higher-order) terms in T. In any case it must be stressed that this formula is only valid over a limited time period. It is a polynomial expression centered on the J2000 datum, empirically fitted to observational data, not derived from a deterministic model of the Solar System. Clearly, if T becomes large enough (far in the future or far in the past), the T² term will dominate and p will go to very large values. In reality, more elaborate calculations on the numerical model of the Solar System show that the precessional rate has a period of about 41,000 years, the same as the obliquity of the ecliptic. That is,
p = A + BT + CT² + …
is an approximation of
p = a + b sin (2πT/P), where P is the 41,000-year period.
Theoretical models may calculate the constants (coefficients) corresponding to the higher powers of T, but since it is impossible for a finite polynomial to match a periodic function over its whole domain, the difference between any such approximation and the true periodic behavior grows without bound as T increases. Sufficient accuracy can be obtained over a limited time span by fitting a high enough order polynomial to observation data, rather than a necessarily imperfect dynamic numerical model. For present flight trajectory calculations of artificial satellites and spacecraft, the polynomial method gives better accuracy. In that respect, the International Astronomical Union has chosen the best-developed available theory. For up to a few centuries into the past and future, none of the formulas used diverge very much. For up to a few thousand years in the past and the future, most agree to some accuracy. For eras farther out, discrepancies become too large – the exact rate and period of precession may not be computed using these polynomials even for a single whole precession period.
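A small sketch illustrates the point: the fitted rate polynomial (higher-order terms omitted here) is well behaved near T = 0 but keeps growing for large |T|, whereas the true rate oscillates with a period of roughly 41,000 years:

```python
def p_rate(T):
    """Precession rate in arcseconds per Julian century; T in Julian centuries
    from J2000.0. Higher-order terms of the IAU expression are omitted here."""
    return 5028.796195 + 2.2108696 * T

for T in (0, 1, 10, 100, 410):   # J2000, 2100, 3000, 12000, and ~one 41,000-year span
    print(T, p_rate(T))          # grows without bound instead of oscillating
```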
The precession of Earth's axis is a very slow effect, but at the level of accuracy at which astronomers work, it does need to be taken into account on a daily basis. Although the precession and the tilt of Earth's axis (the obliquity of the ecliptic) are calculated from the same theory and are thus related one to the other, the two movements act independently of each other, moving in opposite directions.
The precession rate exhibits a secular decrease due to tidal dissipation, from 59″/a to 45″/a (a = annum = Julian year) over the 500-million-year period centered on the present. After short-term fluctuations (tens of thousands of years) are averaged out, the long-term trend can be approximated by the following polynomials, for negative and positive time from the present, in ″/a, where T is in billions of Julian years (Ga):
p = 50.475838 − 26.368583 T + 21.890862 T²
p = 50.475838 − 27.000654 T + 15.603265 T²
This gives an average cycle length now of 25,676 years.
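As a minimal consistency check, both long-term fits give the same rate at T = 0, which corresponds to the average cycle length quoted above:

```python
def p1(T): return 50.475838 - 26.368583 * T + 21.890862 * T**2
def p2(T): return 50.475838 - 27.000654 * T + 15.603265 * T**2

rate_now = p1(0.0)                        # both fits give 50.475838 "/a at present
assert abs(rate_now - p2(0.0)) < 1e-12
print(360 * 3600 / rate_now)              # ≈ 25,676 years
```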
Precession will exceed p by the small amount of +0.135052″/a over an extended interval in the future; the jump to this excess will occur over a comparatively short span, beginning now, because the secular decrease in precession is starting to cross a resonance in Earth's orbit caused by the other planets.
According to W. R. Ward, in about 1,500 million years, when the distance of the Moon, which is continuously increasing because of tidal effects, has grown from the current 60.3 to approximately 66.5 Earth radii, resonances from planetary effects will first lengthen the precession period to 49,000 years, and then, when the Moon reaches 68 Earth radii in about 2,000 million years, to 69,000 years. This will be associated with wild swings in the obliquity of the ecliptic as well. Ward, however, used the abnormally large modern value for tidal dissipation. Using the 620-million-year average provided by tidal rhythmites, which is about half the modern value, these resonances will not be reached until about 3,000 and 4,000 million years from now, respectively. However, due to the gradually increasing luminosity of the Sun, the oceans of the Earth will have vaporized before that time (about 2,100 million years from now).
See also
Astronomical nutation
Axial tilt
Euler angles
Longitude of vernal equinox
Milankovitch cycles
Polar motion
Sidereal year
Apsidal precession
References
Bibliography
Dreyer, J. L. E. A History of Astronomy from Thales to Kepler. 2nd ed. New York: Dover, 1953.
Evans, James. The History and Practice of Ancient Astronomy. New York: Oxford University Press, 1998.
Explanatory supplement to the Astronomical ephemeris and the American ephemeris and nautical almanac
Precession and the Obliquity of the Ecliptic has a comparison of values predicted by different theories
Pannekoek, A. A History of Astronomy. New York: Dover, 1961.
Parker, Richard A. "Egyptian Astronomy, Astrology, and Calendrical Reckoning." Dictionary of Scientific Biography 15:706–727.
Rice, Michael (1997), Egypt's Legacy: The archetypes of Western civilization, 3000–30 BC, London and New York.
Tompkins, Peter. Secrets of the Great Pyramid. With an appendix by Livio Catullo Stecchini. New York: Harper Colophon Books, 1971.
Toomer, G. J. "Hipparchus." Dictionary of Scientific Biography. Vol. 15:207–224. New York: Charles Scribner's Sons, 1978.
Toomer, G. J. Ptolemy's Almagest. London: Duckworth, 1984.
Ulansey, David. The Origins of the Mithraic Mysteries: Cosmology and Salvation in the Ancient World. New York: Oxford University Press, 1989.
External links
D'Alembert and Euler's Debate on the Solution of the Precession of the Equinoxes
Forced precession and nutation of Earth
Precession
Technical factors of astrology
Celestial mechanics
Equinoxes
"Physics",
"Astronomy"
] | 9,481 | [
"Time in astronomy",
"Physical quantities",
"Classical mechanics",
"Astrophysics",
"Precession",
"Equinoxes",
"Celestial mechanics",
"Wikipedia categories named after physical quantities"
] |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.