id int64 (39–79M) | url string (length 32–168) | text string (length 7–145k) | source string (length 2–105) | categories list (length 1–6) | token_count int64 (3–32.2k) | subcategories list (length 0–27) |
|---|---|---|---|---|---|---|
1,393,135 | https://en.wikipedia.org/wiki/Illusory%20motion | The term illusory motion, or motion illusion or apparent motion, refers to any optical illusion in which a static image appears to be moving due to the cognitive effects of interacting color contrasts, object shapes, and position. The stroboscopic animation effect is the most common type of illusory motion and is perceived when images are displayed in fast succession, as occurs in movies. The concept of illusory motion was allegedly first described by Aristotle.
Types of illusory motion
Induced movement works by moving the background around a fixed object. Films such as Airplane! and Top Secret! use a fixed prop and move the background props to give the effect of induced motion.
Motion aftereffect occurs when one views moving stimuli for an extended period of time and then focuses on a stationary object. The object will appear to move in the opposite direction of the moving stimuli.
Mechanics of illusory motion perception
Illusory motion is perceived as movement in a number of ways. The first can manifest through the retinal image where the motion flows across the retinal mosaic. The perceived motion can also manifest by the eyes changing position. In either case, an aftereffect may occur. Peripheral drift illusion is another variety of perceived movement in the eye.
Using fMRI, Roger B. H. Tootell et al. were able to identify the area of the brain that is active when experiencing illusory motion. Tootell and his colleagues had participants view a set of concentric rings that appeared to move inward and outward. Participants experienced a motion aftereffect after viewing the moving stimuli for 40 seconds, and showed increased activity in the MT area of the brain.
Occurrences
Illusory motion can occur in different circumstances. In stroboscopic images, a series of static images is viewed in sequence at a high enough rate that the static images appear to blend into continuous motion; a motion picture is an example. In optical art (or Op art), artists use simple black and white patterns that create vivid illusions of motion, known as optical flow.
Stroboscopic effects
Stroboscopic effects are caused by aliasing that occurs when continuous rotational or other cyclic motion is represented by a series of short or instantaneous samples (as opposed to a continuous view) at a sampling rate close to the period of the motion. Rotating objects can appear counter-rotating, stationary, or rotating under a strobe light.
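This aliasing can be expressed quantitatively: for an object with a single marked point, the perceived rotation rate is the true rate folded into a band of plus or minus half the flash frequency. The following is a minimal illustrative sketch; the function name and the example frequencies are assumptions, not taken from the source.

```python
def apparent_rotation(true_rev_per_s, strobe_hz):
    """Fold a true rotation rate into the +/- strobe_hz/2 band (aliasing).

    Models a wheel with a single marked point sampled by flashes at
    strobe_hz; a negative result means perceived counter-rotation.
    """
    return (true_rev_per_s + strobe_hz / 2) % strobe_hz - strobe_hz / 2

# A wheel spinning at 24.5 rev/s under a 25 Hz strobe appears to creep
# backwards at 0.5 rev/s; at exactly 25 rev/s it appears stationary.
print(apparent_rotation(24.5, 25.0))  # -0.5
print(apparent_rotation(25.0, 25.0))  #  0.0
```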
Simon Stampfer, who coined the term in his 1833 patent application for his stroboscopische Scheiben (better known as the "phenakistiscope"), explained how the illusion of motion occurs when during unnoticed regular and very short interruptions of light, one figure gets replaced by a similar figure in a slightly different position.
Beta movement and the phi phenomenon are examples of apparent motion that can be induced with stroboscopic alternation between stimuli at different spots in close proximity of each other. Beta movement occurs with relatively big differences in position or shape between images at relatively low stroboscopic frequencies, and seems to rely more on cerebral interpretation than on lower neural processing. The (pure) phi phenomenon occurs at very high stroboscopic frequencies and induces a ghost-like "objectless" motion between or around the alternating figures. Both have erroneously been regarded as explanations for the illusion of motion in film.
The apparent counter-rotation of wheels can also occur in constant daylight. It has been assumed that the eye views the world in a series of still images, and therefore the counter-rotation would be a result of physiological under-sampling. However, a simple demonstration disproves this idea: when an apparent counter-rotation (that of a rotating drum) is viewed simultaneously with its mirror image, subjective reports reveal that the counter-rotation appears in only one of the two images at a time. Perceptual rivalry has been suggested as a more likely cause of the effect.
Optical art
Apparent motion in optical art has been suggested to be caused by the difference in neural signals between black and white parts of an image. While white parts may produce an "on-off" signal, the black parts produce an "off-on" signal. This means for a black part and a white part presented simultaneously, the "on" part of the signal is separated in time, possibly resulting in the stimulation of motion detectors.
Another explanation is that afterimages from the retina cause a moiré that is hard to identify.
Gallery
In popular culture
American neo-psychedelia outfit Animal Collective used an illusory motion pattern on the cover of their award-winning 2009 album Merriweather Post Pavilion.
The Rotating Snakes illusion by Akiyoshi Kitaoka is one of the most popularly known illusory motions.
See also
Akiyoshi Kitaoka
Illusions of self-motion
References
External links
These Patterns Move, But it’s an Illusion by Smithsonian Research Lab
Akiyoshi's illusion pages by Professor Akiyoshi Kitaoka, Ritsumeikan University, Osaka, Japan
Optical illusions | Illusory motion | [
"Physics"
] | 1,039 | [
"Optical phenomena",
"Physical phenomena",
"Optical illusions"
] |
1,393,154 | https://en.wikipedia.org/wiki/Cost%E2%80%93utility%20analysis | Cost–utility analysis (CUA) is a form of economic analysis used to guide procurement decisions.
The most common and well-known application of this analysis is in pharmacoeconomics, especially health technology assessment (HTA).
In health economics
In health economics, the purpose of CUA is to estimate the ratio between the cost of a health-related intervention and the benefit it produces in terms of the number of years lived in full health by the beneficiaries. Hence it can be considered a special case of cost-effectiveness analysis, and the two terms are often used interchangeably.
Cost is measured in monetary units. Benefit needs to be expressed in a way that allows health states that are considered less preferable to full health to be given quantitative values. However, unlike cost–benefit analysis, the benefits do not have to be expressed in monetary terms. In HTAs it is usually expressed in quality-adjusted life years (QALYs).
If, for example, intervention A allows a patient to live three additional years compared with no intervention, but only with a quality of life weight of 0.6, then the intervention confers 3 × 0.6 = 1.8 QALYs to the patient. (Note that the quality of life weight is determined on a scale of 0–1, with 0 being the lowest health possible and 1 being perfect health.) If intervention B confers two extra years of life at a quality of life weight of 0.75, then it confers an additional 1.5 QALYs to the patient. The net benefit of intervention A over intervention B is therefore 1.8 – 1.5 = 0.3 QALYs.
The incremental cost-effectiveness ratio (ICER) is the ratio between the difference in costs and the difference in benefits of two interventions. The ICER may be stated as (C1 – C0)/(E1 – E0) in a simple example where C0 and E0 represent the cost and gain, respectively, from taking no health intervention action. C1 and E1 would represent the cost and gain, respectively of taking a specific action. So, an example in which the costs and gains, respectively, are $140,000 and 3.5 QALYs, would yield a value of $40,000 per QALY. These values are often used by policy makers and hospital administrators to determine relative priorities when determining treatments for disease conditions. It is important to note that CUA measures relative patient or general population utility of a treatment or pharmacoeconomic intervention. Its results give no absolute indicator of the value of a certain treatment.
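As a minimal sketch, the QALY and ICER arithmetic described above can be written out as follows; the numbers simply reproduce the worked examples in the text, and the function names are illustrative.

```python
def qalys(extra_years, utility_weight):
    """QALYs gained = additional life-years x quality-of-life weight (0-1)."""
    return extra_years * utility_weight

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost-effectiveness ratio: (C1 - C0) / (E1 - E0)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

qaly_a = qalys(3, 0.6)    # intervention A: 1.8 QALYs
qaly_b = qalys(2, 0.75)   # intervention B: 1.5 QALYs
print(round(qaly_a - qaly_b, 2))   # 0.3 QALYs, net benefit of A over B

# Against no intervention: $140,000 extra cost for 3.5 extra QALYs
print(icer(140_000, 0, 3.5, 0))    # 40000.0 dollars per QALY
```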
The National Institute for Health and Care Excellence (NICE) in the UK has been using QALYs to measure the health benefits delivered by various treatment regimens. There is some question as to how well coordinated NICE and NHS are in making decisions about resource allocation. According to a recent study "cost effectiveness often does not appear to be the dominant consideration in decisions about resource allocation made elsewhere in the NHS". While QALYs are used in the United States, they are not utilized to the same degree as they are in Europe.
In the United Kingdom, as of January 2005, NICE was believed to apply a threshold of about £30,000 per QALY – roughly twice the mean income after tax – although a formal figure has never been made public. Thus, any health intervention which has an incremental cost of more than £30,000 per additional QALY gained is likely to be rejected, and any intervention which has an incremental cost of less than or equal to £30,000 per extra QALY gained is likely to be accepted as cost-effective. This implies a value of a full life of about £2.4 million. For end-of-life treatments, a higher threshold of £50,000 per additional QALY gained is used by NICE.
In North America, a similar figure of US$50,000 per QALY is often suggested as a threshold ICER for a cost-effective intervention.
A complete compilation of cost–utility analyses in the peer reviewed medical literature is available at the CEA Registry Website
Advantages and disadvantages
On the plus side, CUA allows comparison across different health programs and policies by using a common unit of measure (money/QALYs gained). CUA provides a more complete analysis of total benefits than simple cost–benefit analysis does. This is because CUA takes into account the quality of life that an individual has, while CBA does not.
However, in CUA, societal benefits and costs are often not taken into account. Furthermore, some economists believe that measuring QALYs is more difficult than measuring the monetary value of life through health improvements, as is done with cost–benefit analysis. This is because in CUA one needs to measure the health improvement effects for every remaining year of life after the program is initiated. While for cost–benefit analysis (CBA) we have an approximate value of life ($2 million is one of the estimates), we lack QALY estimates for most medical treatments and diseases.
In addition, some people believe that life is priceless and there are ethical problems with placing a value on human life.
Also, the weighting of QALYs through time-trade-off, standard gamble, or visual analogue scale is highly subjective.
Criticism of cost–utility analysis
There are criticisms of QALY. One involves QALY's lack of usefulness to the healthcare provider in determining the applicability of alternative treatments in the individual patient environment, and the absence of incorporating the patient's willingness to pay (i.e. behavioral economics) in decisions to finance new treatments. Another criticism involves age; elderly individuals are assumed to have lower QALYs since they do not have as many years to influence the calculation of the measurement; so comparing a health intervention's impact on a teenager's QALYs to an older individual's QALYs may not be considered "fair" since age is such an important factor. Specific health outcomes may also be difficult to quantify, thus making it difficult to compare all factors that may influence an individual's QALY.
Example: Comparing an intervention's impact on the livelihood of a single person to a parent of three; QALYs do not take into account the importance that an individual person may have for others' lives.
In the US, the health care reform law (Patient Protection and Affordable Care Act) has forbidden the use of QALYs "as a threshold to establish what type of health care is cost effective or recommended". Also, "The Secretary shall not utilize such an adjusted life year (or such a similar measure) as a threshold to determine coverage, reimbursement, or incentive programs under title XVIII".
See also
Cost-effectiveness
Cost–benefit analysis
References
Costs
Health economics
Health informatics
Health care quality
Optimal decisions
Utility | Cost–utility analysis | [
"Biology"
] | 1,412 | [
"Health informatics",
"Medical technology"
] |
1,393,169 | https://en.wikipedia.org/wiki/Quality-adjusted%20life%20year | The quality-adjusted life year (QALY) is a generic measure of disease burden, including both the quality and the quantity of life lived. It is used in economic evaluation to assess the value of medical interventions. One QALY equates to one year in perfect health. QALY scores range from 1 (perfect health) to 0 (dead). QALYs can be used to inform health insurance coverage determinations, treatment decisions, to evaluate programs, and to set priorities for future programs.
Critics argue that the QALY oversimplifies how actual patients would assess risks and outcomes, and that its use may restrict patients with disabilities from accessing treatment. Proponents of the measure acknowledge that the QALY has some shortcomings, but argue that its ability to quantify tradeoffs and opportunity costs from the patient and societal perspectives makes it a critical tool for equitably allocating resources.
Calculation
The QALY is a measure of the state of health of a person or group in which the benefits, in terms of length of life, are adjusted to reflect the quality of life. One quality-adjusted life year (QALY) is equal to 1 year of life in perfect health. It combines two different benefits of treatment—length of life and quality of life—into a single number that can be compared across different types of treatments. For example, one year lived in perfect health equates to 1 QALY. This can be interpreted as a person getting 100% of the value for that year. A year lived in a less than perfect state of health can also be expressed as the amount of value accrued to the person living it. For example, 1 year of life lived in a situation with utility 0.5 yields 0.5 QALYs—a person experiencing this state is getting only 50% of the possible value of that year. In other words, they value the experience of being in less than perfect health for a full year as much as they value living for half a year in perfect health (0.5 years × 1 utility).
Therefore, calculating a QALY requires two inputs. The first is the utility value (or utility weight) associated with a given state of health. The underlying measure of utility is derived from clinical trials and studies that measure how people feel in these specific states of health. The way they feel in a state of perfect health equates to a value of 1 (or 100%). Death is assigned a utility of 0 (or 0%), and in some circumstances it is possible to accrue negative QALYs to reflect health states deemed "worse than dead." The value people perceive in less-than-perfect states of health is expressed as a fraction between 0 and 1.
The second input is the amount of time people live in various states of health. This information usually comes from clinical trials.
The QALY calculation is simple: the change in utility value induced by the treatment is multiplied by the duration of the treatment effect to provide the number of QALYs gained. QALYs can then be incorporated with medical costs to arrive at a final common denominator of cost/QALY. This parameter can be used to compare the cost-effectiveness of any treatment.
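A minimal sketch of that calculation, extended to a multi-period health profile; the utilities, durations and cost below are invented purely for illustration.

```python
def total_qalys(profile):
    """Sum of (utility weight x years lived at that weight) over a profile."""
    return sum(utility * years for utility, years in profile)

untreated = [(0.7, 2), (0.4, 3)]             # 2.6 QALYs
treated   = [(0.9, 2), (0.7, 3), (0.5, 1)]   # 4.4 QALYs

gained = round(total_qalys(treated) - total_qalys(untreated), 2)
cost_of_treatment = 45_000                   # currency units, illustrative

print(gained)                         # 1.8 QALYs gained by the treatment
print(cost_of_treatment / gained)     # 25000.0 cost per QALY
```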
Weighting
The utility values used in QALY calculations are generally determined by methods that measure people's willingness to trade time in different health states, such as those proposed in the Journal of Health Economics:
Time-trade-off (TTO): Respondents are asked to choose between remaining in a state of ill health for a period of time, or being restored to perfect health but having a shorter life expectancy.
Standard gamble (SG): Respondents are asked to choose between remaining in a state of ill health for a period of time, or choosing a medical intervention which has a chance of either restoring them to perfect health or killing them.
Visual analogue scale (VAS): Respondents are asked to rate a state of ill health on a scale from 0 to 100, with 0 representing being dead, and 100 representing perfect health. This method has the advantage of being the easiest to ask, but is the most subjective.
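A sketch of how responses from these three elicitation methods map to utility weights; the formulas are the standard textbook ones and the variable names are assumptions.

```python
def tto_utility(healthy_years_accepted, years_in_state):
    """Time trade-off: u = x / t, where the respondent is indifferent
    between t years in the ill-health state and x years in full health."""
    return healthy_years_accepted / years_in_state

def sg_utility(indifference_probability):
    """Standard gamble: u equals the success probability p at which the
    respondent is indifferent between the sure state and the gamble."""
    return indifference_probability

def vas_utility(rating_0_to_100):
    """Visual analogue scale: rescale a 0-100 rating (0 = dead, 100 = perfect)."""
    return rating_0_to_100 / 100

print(tto_utility(6, 10))  # 0.6
print(sg_utility(0.75))    # 0.75
print(vas_utility(55))     # 0.55
```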
Another way of determining the weight associated with a particular health state is to use standard descriptive systems such as the EuroQol Group's EQ-5D questionnaire, which categorizes health states according to five dimensions: mobility, self-care, usual activities (e.g. work, study, homework or leisure activities), pain/discomfort and anxiety/depression.
Use
Data on medical costs are often combined with QALYs in cost-utility analysis to estimate the cost-per-QALY associated with a health care intervention. This parameter can be used to develop a cost-effectiveness analysis of any treatment. This incremental cost-effectiveness ratio (ICER) can then be used to allocate healthcare resources, often using a threshold approach.
In the United Kingdom, the National Institute for Health and Care Excellence (NICE), which advises on the use of health technologies within the National Health Service, has used "£ per QALY" to evaluate their utility since its founding in 1999.
In 1989, the state of Oregon attempted to reform its Medicaid system by incorporating the QALY metric. This was found to be discriminatory, and in violation of the Americans with Disabilities Act in 1992. Louis W. Sullivan, the Secretary of Health and Human Services at the time, criticized the plan by stating that "Oregon's plan in substantial part values the life of a person with a disability less than the life of a person without a disability."
History
The first mention of Quality Adjusted Life Years appeared in a doctoral thesis at Harvard University by Joseph S. Pliskin (1974). The need to consider quality of life is credited to work by Klarman et al. (1968), Fanshel and Bush (1970) and Torrance et al. (1972) who suggested the idea of length of life adjusted by indices of functionality or health. A 1976 article by Zeckhauser and Shepard was the first appearance in print of the term. QALYs were later promoted through medical technology assessments conducted by the US Congress Office of Technology Assessment.
In 1980, Pliskin et al. justified the QALY indicator using multiattribute utility theory: if a set of conditions pertaining to agent preferences on life years and quality of life are verified, then it is possible to express the agent's preferences about pairs (number of life years, health state) by an interval (Neumannian) utility function. This utility function would be equal to the product of an interval utility function on "life years" and an interval utility function on "health state".
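In symbols, the factorisation justified by Pliskin et al. can be summarised as follows; the notation here is illustrative rather than taken from the source.

```latex
% Under utility independence, risk neutrality over life years and constant
% proportional trade-off, preferences over pairs (y life years, health state q)
% factor into a product of single-attribute interval utilities:
U(y, q) \;=\; u_{\mathrm{years}}(y)\cdot u_{\mathrm{health}}(q)
% With risk neutrality, u_years(y) is proportional to y, giving the QALY form
U(y, q) \;\propto\; y \cdot H(q), \qquad 0 \le H(q) \le 1,
% where H(q) is the quality weight attached to health state q.
```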
Debate
According to Pliskin et al., the QALY model requires utility independence, risk neutrality, and constant proportional tradeoff behavior. For the more general case of a lifetime health profile (i.e., experiencing more than one health state during the remaining years of life), the utility of a lifetime health profile must equal the sum of single-period utilities. Because of these theoretical assumptions, the meaning and usefulness of the QALY is debated. Perfect health is difficult, if not impossible, to define. Some argue that there are health states worse than being dead, and that therefore there should be negative values possible on the health spectrum (indeed, some health economists have incorporated negative values into calculations). Determining the level of health depends on measures that some argue place disproportionate importance on physical pain or disability over mental health.
The method of ranking interventions on grounds of their cost per QALY gained ratio (or ICER) is controversial because it implies a quasi-utilitarian calculus to determine who will or will not receive treatment. However, its supporters argue that since health care resources are inevitably limited, this method enables them to be allocated in the way that is approximately optimal for society, including most patients. Another concern is that it does not take into account equity issues such as the overall distribution of health states—particularly since younger, healthier cohorts have many times more QALYs than older or sicker individuals. As a result, QALY analysis may undervalue treatments which benefit the elderly or others with a lower life expectancy. Also, many would argue that all else being equal, patients with more severe illness should be prioritized over patients with less severe illness if both would get the same absolute increase in utility.
As early as 1989, Loomes and McKenzie recommended that research be conducted concerning the validity of QALYs. In 2010, with funding from the European Commission, the European Consortium in Healthcare Outcomes and Cost-Benefit Research (ECHOUTCOME) began a major study on QALYs as used in health technology assessment. Ariel Beresniak, the study's lead author, was quoted as saying that it was the "largest-ever study specifically dedicated to testing the assumptions of the QALY." In January 2013, at its final conference, ECHOUTCOME released preliminary results of its study which surveyed 1361 people "from academia" in Belgium, France, Italy and the UK. The researchers asked the subjects to respond to 14 questions concerning their preferences for various health states and durations of those states (e.g., 15 years limping versus 5 years in a wheelchair). They concluded that:
"Preferences expressed by the respondents were not consistent with the QALY theoretical assumptions";
Quality of life can be measured in consistent intervals;
Life-years and quality of life are independent of each other;
People are neutral about risk; and
Willingness to gain or lose life-years is constant over time.
ECHOUTCOME also released "European Guidelines for Cost-Effectiveness Assessments of Health Technologies", which recommended not using QALYs in healthcare decision making. Instead, the guidelines recommended that cost-effectiveness analyses focus on "costs per relevant clinical outcome."
In response to the ECHOUTCOME study, representatives of the National Institute for Health and Care Excellence, the Scottish Medicines Consortium, and the Organisation for Economic Co-operation and Development made the following points.
First, QALYs are better than alternative measures.
Second, the study was "limited."
Third, problems with QALYs were already widely acknowledged.
Fourth, the researchers did not take budgetary constraints into consideration.
Fifth, the UK's National Institute for Health and Care Excellence uses QALYs that are based on 3395 interviews with residents of the UK, as opposed to residents of several European countries.
Finally, according to Franco Sassi, a senior health economist at the Organization for Economic Co-operation and Development, people who call for the elimination of QALYs may have "vested interests".
While supporters laud QALY's efficiency, critics argue that use of QALY can cause medical inefficiencies because a less-effective, cheaper drug may be approved based on its QALY calculation.
The use of QALYs has been criticized by disability advocates because otherwise healthy individuals cannot return to full health or achieve a high QALY score. Treatments for quadriplegics, patients with multiple sclerosis, or other disabilities are valued less under a QALY-based system.
Critics also argue that a QALY-based system would limit research on treatments for rare disorders because the upfront costs of the treatments tend to be higher. Officials in the United Kingdom were forced to create the Cancer Drugs Fund to pay for new drugs regardless of their QALY rating because innovation had stalled since NICE was founded; at the time, one in seven drugs was turned down. Additionally, there is a trend toward using the QALY as a capital-allocation tool, although many sources and publications show that the QALY has significant gaps both as a formula and as an organizational management mechanism in healthcare.
The Partnership to Improve Patient Care, a group opposed to the adoption of QALY-based metrics, argued that a QALY-based system could exacerbate racial disparities in medicine because it gives no consideration to genetic background, demographics, or comorbidities that may be elevated in minority racial groups, factors which carry little weight in the construction of the average year of perfect health.
Critics have also noted that the QALY considers only quality of life, whereas patients may choose to endure negative side-effects in order to live long enough to attend a milestone event, such as a wedding or graduation.
The rule of rescue, and charges that withholding treatment amounts to immoral or "inhuman" acting, are frequently used arguments for ignoring cost-effectiveness analysis and the use of QALYs. During the 2020/2021 COVID-19 pandemic in particular, national responses represented a massive application of the "rule of rescue" and a disregard of cost-effectiveness analysis (see e.g. Utilitarianism and the pandemic).
Both the rule of rescue and such appeals to morality are strongly criticised by Shepley Orr and Jonathan Wolff in their 2014 article "Reconciling cost-effectiveness with the rule of rescue: the institutional division of moral labor". They argued that the "rule of rescue" is the result of flawed reasoning, and that cost-effectiveness reasoning with the aid of QALYs always leads to morally superior outcomes and an optimal public health outcome, although not always a perfect one, given constraints of resources.
Future development
The UK Medical Research Council and others are exploring improvements to or replacements for QALYs. Among other possibilities are extending the data used to calculate QALYs (e.g., by using different survey instruments), "using well-being to value outcomes" (e.g., by developing a "well-being-adjusted life-year"), and valuing outcomes in monetary terms. In 2018 HM Treasury set a discount rate of 1.5% for QALYs, which is lower than the discount rates for other costs and benefits, because the QALY is a direct utility measure.
See also
Related units:
Disability-adjusted life year (DALY)
Wellbeing-adjusted life year (WALY) and Wellbeing year (WELLBY)
Life-years lost
Value of a Statistical Life (VSL)
Other:
Case mix index
Cost-Effectiveness Analysis Registry
Cost-utility analysis
Incremental cost-effectiveness ratio
Quality of life and measurements such as MANSA and Life Quality Index
References
Health economics
Health care quality
Medical ethics
Life expectancy | Quality-adjusted life year | [
"Biology"
] | 2,916 | [
"Senescence",
"Life expectancy"
] |
1,393,461 | https://en.wikipedia.org/wiki/Pollen%20zone | Pollen zones are a system of subdividing the Last Glacial Period and Holocene paleoclimate using the data from pollen cores. The sequence provides a global chronological structure to a wide variety of researchers, such as geologists, climatologists, geographers and archaeologists, who study the physical and cultural environment of the last 15,000 years.
History
The palynological aspects of the system were first investigated extensively by the Swedish palynologist Lennart von Post in the years before the First World War. By analysing pollen in core samples taken from peat bogs, von Post noticed that different plant species were represented in bands through the cores.
The differing species and differing quantities of the same species are caused by changes in climate. Von Post was able to confirm the Blytt–Sernander climatic sequence showing fluctuations between warmer and colder periods across thousands of years. He used local peat sequences combined with varve dating to produce a regional climatic chronology for Scandinavia.
In 1940 Harry Godwin began applying von Post's methods to pollen cores from the British Isles to produce the wider European sequence accepted today. It basically expanded the Blytt-Sernander further into the late Pleistocene and refined some of its periods. Following the Second World War, the technique spread to the Americas.
Currently, scientists apply a repertory of several different methods to core samples from peat, ice, lake and ocean bottoms, and other sediments to achieve "high resolution" dating not possible with any single method: carbon dating, dendrochronology, isotope ratios of a number of gases, studies of insects and molluscs, and others. While researchers often doubt the utility of the modified Blytt–Sernander scheme, these methods tend to confirm and expand it all the more.
Notes on the sequence table
At present nine main pollen zones, I-IX, are defined, based on the work of J. Iversen, published in 1954. These are matched to period names called "biostratigraphic divisions" in the table, which were defined for Denmark by Iversen based on layers in the peat bogs. They represent climatic and biological zones in the peat.
Others have used these names in different senses, such as the 1974 chronozones of J. Mangerud. The sequences in Germany and Sweden are not exactly the same as those in Denmark, inviting scientists there to use the names still differently or make other definitions. Moreover, the names are apt to be used interchangeably for glacials, interglacials, stadials, interstadials, or oscillations, leading some scientists to deplore the lack of system.
The system of the table below covers from around 13,000 BC to the modern day. Dates, given in years BC, are best viewed as being based on uncalibrated C-14 dates, which, when calibrated, would result in much earlier BC dates. For example, an Older Dryas start date of 10,000 BC translates roughly into an uncalibrated BP date of 12,000. Calibrated, that becomes 14,000 BP, 12,000 BC. To obtain quick, on-line calibrations, you may use CalPal.
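A small sketch of the date bookkeeping in the example above. BP ages count back from AD 1950; the 2,000-year calibration shift used here is only the rough figure implied by the text for this age range, not a real calibration curve such as the one behind CalPal.

```python
def bc_to_bp(year_bc):
    """Convert a calendar year BC to years Before Present (BP = before AD 1950)."""
    return year_bc + 1950

uncal_bp = bc_to_bp(10_000)      # ~11,950, quoted as "roughly 12,000" in the text
cal_bp = 12_000 + 2_000          # rough calibration shift at this age (illustrative)
cal_bc = cal_bp - 1950           # back to calendar years BC

print(uncal_bp, cal_bp, cal_bc)  # 11950 14000 12050  (~14,000 BP, ~12,000 BC)
```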
The dates in the table correspond relatively well to more modern dates for the earlier periods. Larger discrepancies begin at the end of the Boreal. More, and more modern, details on the dating of the periods are given under the article for each one.
The archaeological periods listed only apply to north Europe, and do so approximately. For example, there is no uniform chronozone, "the Bronze Age", which would apply globally or even be of the same dates between north and south Europe.
The geological stages listed are only defined for the British Isles. Scientists use different names for north Europe, south Europe and other regions. However, they are cross-correlated in the articles for the ones listed.
In contrast to glacial periods, these pollen zones are applied globally, with few exceptions. It is acceptable, for example, to refer to the "Younger Dryas" of Antarctica, which has no pollen of its own. A few scientists disapprove of such uses.
Sequence table
References
External links
Reconsidering the geochronological framework of Lateglacial hunter-gatherer colonization of southern Scandinavia
Wansleben Salt Lake
Chronology
Holocene
Paleoclimatology
Palynology
Paleoecology | Pollen zone | [
"Physics",
"Biology"
] | 908 | [
"Evolution of the biosphere",
"Chronology",
"Physical quantities",
"Time",
"Spacetime",
"Paleoecology"
] |
1,393,634 | https://en.wikipedia.org/wiki/Sabatier%20reaction | The Sabatier reaction or Sabatier process produces methane and water from a reaction of hydrogen with carbon dioxide at elevated temperatures (optimally 300–400 °C) and pressures (perhaps 3 MPa ) in the presence of a nickel catalyst. It was discovered by the French chemists Paul Sabatier and Jean-Baptiste Senderens in 1897. Optionally, ruthenium on alumina (aluminium oxide) makes a more efficient catalyst. It is described by the following exothermic reaction:
CO2 + 4 H2 → CH4 + 2 H2O (at ~400 °C, elevated pressure, over a catalyst), ΔH = −165.0 kJ/mol
There is disagreement on whether the CO2 methanation occurs by first associatively adsorbing an adatom hydrogen and forming oxygen intermediates before hydrogenation or dissociating and forming a carbonyl before being hydrogenated.
CO + 3 H2 → CH4 + H2O, ΔH = −206 kJ/mol
CO methanation is believed to occur through a dissociative mechanism where the carbon oxygen bond is broken before hydrogenation with an associative mechanism only being observed at high H2 concentrations.
Methanation reactions over different metal catalysts including Ni, Ru and Rh have been widely investigated for the production of CH4 from syngas and other power to gas initiatives. Nickel is the most widely used catalyst owing to its high selectivity and low cost.
Applications
Creation of synthetic natural gas
Methanation is an important step in the creation of synthetic or substitute natural gas (SNG). Coal or wood undergoes gasification, which creates a producer gas; this gas must then undergo methanation to produce a usable gas that needs only a final purification step.
The first commercial synthetic gas plant, the Great Plains Synfuels plant in Beulah, North Dakota, opened in 1984. As of 2016, it is still operational and produces 1500 MW worth of SNG using coal as the carbon source. In the years since its opening, other commercial facilities have been opened using other carbon sources such as wood chips.
In France, AFUL Chantrerie, located in Nantes, in November 2017 opened the demonstrator MINERVE. The plant feeds a compressed natural gas station and sometimes injects methane into a natural gas fired boiler.
The Sabatier reaction has been used in renewable-energy-dominated energy systems to use the excess electricity generated by wind, solar photovoltaic, hydro, marine current, etc. to make methane from hydrogen sourced from water electrolysis.
In contrast to a direct usage of hydrogen for transport or energy storage applications, the methane can be injected into the existing gas network. The methane can be used on-demand to generate electricity overcoming low points of renewable energy production. The process is electrolysis of water by electricity to create hydrogen (which can partly be used directly in fuel cells) and the addition of carbon dioxide CO2 (Sabatier reaction) to create methane. The CO2 can be extracted from the air or fossil fuel waste gases by the amine process.
A 6 MW power-to-gas plant went into production in Germany in 2013, and powered a fleet of 1,500 Audi A3 cars.
Ammonia synthesis
In ammonia production CO and CO2 are considered poisons to most commonly used catalysts. Methanation catalysts are added after several hydrogen producing steps to prevent carbon oxide buildup in the ammonia synthesis loop as methane does not have similar adverse effects on ammonia synthesis rates.
International Space Station life support
Oxygen generators on board the International Space Station produce oxygen from water using electrolysis; the hydrogen produced was previously discarded into space. As astronauts consume oxygen, carbon dioxide is produced, which must then be removed from the air and discarded as well. This approach required copious amounts of water to be regularly transported to the space station for oxygen generation in addition to that used for human consumption, hygiene, and other uses—a luxury that will not be available to future long-duration missions beyond low Earth orbit.
NASA is using the Sabatier reaction to recover water from exhaled carbon dioxide and the hydrogen previously discarded from electrolysis on the International Space Station and possibly for future missions. The other resulting chemical, methane, is released into space. As half of the input hydrogen becomes wasted as methane, additional hydrogen is supplied from Earth to make up the difference. However, this creates a nearly-closed cycle between water, oxygen, and carbon dioxide which only requires a relatively modest amount of imported hydrogen to maintain.
2 H2O → O2 + 2 H2 (electrolysis)
O2 → CO2 (respiration)
CO2 + 2 H2 + 2 H2 (added) → 2 H2O + CH4 (discarded)
The loop could be further closed if the waste methane was separated into its component parts by pyrolysis, the high efficiency (up to 95% conversion) of which can be achieved at 1200 °C:
CH4 → C + 2 H2 (heat)
The released hydrogen would then be recycled back into the Sabatier reactor, leaving an easily removed deposit of pyrolytic graphite. The reactor would be little more than a steel pipe, and could be periodically serviced by an astronaut where the deposit is chiselled out.
Alternatively, the loop could be partially closed (75% of H2 from CH4 recovered) by incomplete pyrolysis of the waste methane while keeping the carbon locked up in gaseous form as acetylene:
2 CH4 → C2H2 + 3 H2 (heat)
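A back-of-the-envelope sketch of the hydrogen bookkeeping in the three loop variants just described; the accounting follows directly from the reaction stoichiometry, and the variable names are mine.

```python
# Per mole of CO2 processed, the Sabatier step consumes 4 H2; electrolysing
# the 2 H2O produced returns 2 H2, so only half the hydrogen is retained
# when the methane is simply vented (the "half wasted" noted above).
sabatier_only = 2 / 4

# Complete pyrolysis CH4 -> C + 2 H2 also recovers the 2 H2 bound in the
# methane, closing the hydrogen loop entirely.
with_full_pyrolysis = (2 + 2) / 4

# Incomplete pyrolysis to acetylene, 2 CH4 -> C2H2 + 3 H2, recovers 3 of
# every 4 H2 bound in the methane (the 75% figure above), so the overall
# hydrogen closure is (2 + 0.75 * 2) / 4 = 87.5%.
with_acetylene = (2 + 0.75 * 2) / 4

print(sabatier_only, with_full_pyrolysis, with_acetylene)  # 0.5 1.0 0.875
```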
The Bosch reaction is also being investigated by NASA for this purpose, which is:
CO2 + 2 H2 → C + 2 H2O
The Bosch reaction would present a completely closed hydrogen and oxygen cycle which only produces atomic carbon as waste. However, difficulties maintaining its temperature of up to 600 °C and properly handling carbon deposits mean significantly more research will be required before a Bosch reactor could become a reality. One problem is that the production of elemental carbon tends to foul the catalyst's surface (coking), which is detrimental to the reaction's efficiency.
Manufacturing propellant on Mars
The Sabatier reaction has been proposed as a key step in reducing the cost of a human mission to Mars (Mars Direct, SpaceX Starship) through in situ resource utilization. Hydrogen is combined with CO2 from the atmosphere, with the methane then stored as fuel and the water side-product electrolyzed, yielding oxygen to be liquefied and stored as oxidizer and hydrogen to be recycled back into the reactor. The original hydrogen could be transported from Earth or separated from Martian sources of water.
Importing hydrogen
Importing a small amount of hydrogen avoids searching for water and just uses CO2 from the atmosphere.
"A variation of the basic Sabatier methanation reaction may be used via a mixed catalyst bed and a reverse water gas shift in a single reactor to produce methane from the raw materials available on Mars, utilising carbon dioxide in the Martian atmosphere. A 2011 prototype test operation that harvested CO2 from a simulated Martian atmosphere and reacted it with H2, produced methane rocket propellant at a rate of 1 kg/day, operating autonomously for 5 consecutive days, maintaining a nearly 100% conversion rate. An optimised system of this design massing 50 kg "is projected to produce 1 kg/day of O2:CH4 propellant ... with a methane purity of 98+% while consuming ~17 kWh per day of electrical power (at a continuous power of 700 W). Overall unit conversion rate expected from the optimised system is one tonne of propellant per 17 MWh energy input."
Stoichiometry issue with importing hydrogen
The stoichiometric ratio of oxidiser and fuel is 2:1, for an oxygen/methane engine:
CH4 + 2 O2 → CO2 + 2 H2O
However, one pass through the Sabatier reactor produces a ratio of only 1:1. More oxygen may be produced by running the water-gas shift reaction (WGSR) in reverse (RWGS), effectively extracting oxygen from the atmosphere by reducing carbon dioxide to carbon monoxide.
Another option is to make more methane than needed and pyrolyze the excess of it into carbon and hydrogen (see above section), where the hydrogen is recycled back into the reactor to produce further methane and water. In an automated system, the carbon deposit may be removed by blasting with hot Martian CO2, oxidizing the carbon into carbon monoxide (via the Boudouard reaction), which is vented.
A fourth solution to the stoichiometry problem would be to combine the Sabatier reaction with the reverse water-gas shift (RWGS) reaction in a single reactor as follows:
3 CO2 + 6 H2 → CH4 + 2 CO + 4 H2O
This reaction is slightly exothermic, and when the water is electrolyzed, an oxygen to methane ratio of 2:1 is obtained.
Regardless of which method of oxygen fixation is utilized, the overall process can be summarized by the following equation:
2 H2 + 3 CO2 → CH4 + 2 O2 + 2 CO
Looking at molecular masses, 16 grams of methane and 64 grams of oxygen have been produced using 4 grams of hydrogen (which would have to be imported from Earth, unless Martian water was electrolysed), for a mass gain of 20:1; and the methane and oxygen are in the right stoichiometric ratio to be burned in a rocket engine. This kind of in situ resource utilization would result in massive weight and cost savings to any proposed crewed Mars or sample-return missions.
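A sketch of the mass bookkeeping behind that 20:1 figure, using rounded molar masses; the calculation assumes the hydrogen-import scenario described above.

```python
# Rounded molar masses in g/mol
M = {"H2": 2, "CO2": 44, "CH4": 16, "O2": 32, "CO": 28}

# Overall process: 2 H2 + 3 CO2 -> CH4 + 2 O2 + 2 CO
h2_in   = 2 * M["H2"]      # 4 g of imported hydrogen
ch4_out = 1 * M["CH4"]     # 16 g of fuel
o2_out  = 2 * M["O2"]      # 64 g of oxidiser

propellant = ch4_out + o2_out
print(propellant / h2_in)  # 20.0 -> 20:1 leverage on the imported mass
print(o2_out / ch4_out)    # 4.0 oxidiser-to-fuel mass ratio (2:1 molar)
```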
See also
Methane pyrolysis (for Hydrogen)
References
External links
A Crewed Mission to Mars
Development of an improved Sabatier reactor
Improved Sabatier Reactions for In Situ Resource Utilization on Mars Missions
Catalytic methanation experimental instructions, videos, and theory
Hydrogen
Methane
Organic redox reactions
Name reactions
Catalysis
Synthetic fuel technologies | Sabatier reaction | [
"Chemistry"
] | 2,129 | [
"Catalysis",
"Methane",
"Petroleum technology",
"Organic redox reactions",
"Organic reactions",
"Name reactions",
"Synthetic fuel technologies",
"Greenhouse gases",
"Chemical kinetics"
] |
1,393,702 | https://en.wikipedia.org/wiki/Seed%20orchard | A seed orchard is an intensively-managed plantation of specifically arranged trees for the mass production of genetically improved seeds to create plants, or seeds for the establishment of new forests.
General
Seed orchards are a common method of mass-multiplication for transferring genetically improved material from breeding populations to production populations (forests) and in this sense are often referred to as "multiplication" populations. A seed orchard is often composed of grafts (vegetative copies) of selected genotypes, but seedling seed orchards also occur, mainly to combine the orchard function with progeny testing.
Seed orchards are the strong link between breeding programs and plantation establishment. They are designed and managed to produce seeds of superior genetic quality compared to those obtained from seed production areas, seed stands, or unimproved stands.
Material and connection with breeding population
In first generation seed orchards, the parents usually are phenotypically selected trees. In advanced generation seed orchards, the orchards harvest the benefits generated by tree breeding and the parents may be selected among the tested clones or families. It is efficient to synchronise the productive life cycle of the seed orchards with the cycle time of the breeding population. In the seed orchard, the trees can be arranged in a design to keep the related individuals or cloned copies apart from each other.
Seed orchards are the delivery vehicle for genetic improvement programs where the trade-off between genetic gain and diversity is the most important concern. The genetic gain of seed orchard crops depends primarily on the genetic superiority of the orchard parents, the gametic contribution to the resultant seed crops, and pollen contamination from outside seed orchards.
Genetic diversity of seed orchard crops
Seed production and gene diversity are important aspects when using improved materials like seed orchard crops. Seed orchard crops generally derive from a limited number of trees, but for a common wind-pollinated species much pollen will come from outside the seed orchard and widen the genetic diversity. The genetic gain of first generation seed orchards is not great, and the seed orchard progenies overlap with unimproved material. Gene diversity of the seed crops is greatly influenced by the relatedness (kinship) among orchard parents, the parental fertility variation, and the pollen contamination.
Management and practical examples
Seed orchards are usually managed to obtain sustainable and large crops of seeds of good quality. To achieve this, the following methods are commonly applied: orchards are established on flat surface sites with southern exposure (better conditions for orchard maintenance and for seed production), no stands of the same species in close proximity (avoid strong pollen contamination), sufficient area to produce and be mainly pollinated with their own pollen cloud, cleaning the corridors between the rows, fertilising, and supplemental pollination. The genetic quality of seed orchards can be improved by genetic thinning and selective harvesting. In plantation forestry with southern yellow pines in the United States, almost all plants originate from seed orchards and most plantations are planted in family blocks, thus the harvest from each clone is kept separate during seed processing, plant production and plantation.
Recent seed orchard research
The optimal balance between the effective number of clones (diversity, status number, gene diversity) and genetic gain is achieved by making clonal contributions (number of ramets) proportional (linearly dependent) to the genetic value ("linear deployment"). This depends on several assumptions, one of them being that the contribution to the seed orchard crop is proportional to the number of ramets. In fact, the more ramets a clone has, the larger the share of its pollen that is lost to ineffective self-pollination; but even considering this, linear deployment is a very good approximation. It was thought that increasing the gain is always accompanied by a loss in the effective number of clones, but it has been shown that both can be improved at the same time by genetic thinning using the linear deployment algorithm, if applied to some rather unbalanced seed orchards. Relatedness among clones is more critical for diversity than inbreeding.
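A sketch of the linear-deployment idea: ramet numbers are made proportional to breeding value, and the effective number of clones is computed from the resulting contribution shares (valid for unrelated, non-inbred clones). The breeding values, orchard size and truncation rule below are invented for illustration.

```python
def linear_deployment(breeding_values, total_ramets, baseline=0.0):
    """Assign ramets in proportion to (breeding value - baseline),
    dropping clones at or below the baseline."""
    weights = [max(v - baseline, 0.0) for v in breeding_values]
    total_weight = sum(weights)
    return [round(total_ramets * w / total_weight) for w in weights]

def effective_clone_number(ramets):
    """Effective number of clones, (sum r)^2 / sum r^2, assuming the
    clones are unrelated and non-inbred."""
    total = sum(ramets)
    return total * total / sum(r * r for r in ramets)

bv = [3.1, 2.6, 2.2, 1.9, 1.2, 0.8, 0.3]        # illustrative breeding values
ramets = linear_deployment(bv, total_ramets=400)
print(ramets)                                    # [102, 86, 73, 63, 40, 26, 10]
print(round(effective_clone_number(ramets), 1))  # 5.4, below the census number 7
```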
The clonal variation in expected seed set has been compiled for 12 adult clonal seed orchards of Scots pine. The variation in seed-set ability among clones is not as drastic as has been shown in other investigations, which are probably less relevant for the actual seed production of Scots pine.
Cone set of Scots pine clones in a clonal archive was not well correlated with that of the same clones in seed orchards. Thus it does not seem meaningful to increase seed set by choosing clones that show a good seed set in the archive.
As the supporting tree breeding makes advances, new seed orchards will be genetically better than old ones. This is a relevant factor for the economic lifetime of a seed orchard. Considerations for Swedish Scots pine suggested an economic lifetime of 30 years, which is less than the current lifetime.
Seed orchards for important wind-pollinated species start to produce seeds before the seed orchard trees produce much pollen, so all or most of the pollen parents are outside the seed orchard. Calculations indicate that such early seed orchard seeds can still be expected to be a superior alternative to seeds from older, more mature seed orchards or from stands. Advantages of early seeds, such as the absence of selfing or related matings and high diversity, are additional positive factors.
Swedish conifer orchards with tested clones could have 20–25 clones, with more ramets from the better clones and fewer from the worse, so that the effective number of clones is 15–18. Higher clone numbers result in an unnecessary loss of genetic gain, while lower clone numbers can still be better than existing alternatives. For southern pines in the United States, half as many clones may be optimal.
When forest tree breeding proceeds to advanced generations, the candidates for seed orchards will be related, and the question of to what degree related clones can be tolerated in seed orchards becomes urgent. Gene diversity seems to be a more important consideration than inbreeding. If the candidate population has at least eight times as much diversity (status number) as is required for the seed orchard, relatedness is not limiting and clones can be deployed as usual, with restrictions on half- and full-sibs; but if the candidate population has lower diversity, more sophisticated algorithms are needed.
See also
Double-pair mating
Grafting
Plant nursery
References
Further reading
Kang, K. S. (2001). Genetic gain and gene diversity of seed orchard crops. (Abstract). Acta Universitatis Agriculturae Sueciae, Silvestria 187.
Lindgren, D. (Ed.) Proceedings of a Seed Orchard Conference. Umeå, Sweden, 26–28 September 2007. 256 pages.
Prescher, F. (2007). Seed Orchards – Genetic considerations on function, management and seed procurement. Doctoral dissertation, Swedish University of Agricultural Sciences.
Plant genetics
Seeds | Seed orchard | [
"Biology"
] | 1,337 | [
"Plants",
"Plant genetics"
] |
1,393,819 | https://en.wikipedia.org/wiki/Diamond%20turning | Diamond turning is turning using a cutting tool with a diamond tip. It is a process of mechanical machining of precision elements using lathes or derivative machine tools (e.g., turn-mills, rotary transfers) equipped with natural or synthetic diamond-tipped tool bits. The term single-point diamond turning (SPDT) is sometimes applied, although as with other lathe work, the "single-point" label is sometimes only nominal (radiused tool noses and contoured form tools being options). The process of diamond turning is widely used to manufacture high-quality aspheric optical elements from crystals, metals, acrylic, and other materials. Plastic optics are frequently molded using diamond turned mold inserts. Optical elements produced by the means of diamond turning are used in optical assemblies in telescopes, video projectors, missile guidance systems, lasers, scientific research instruments, and numerous other systems and devices. Most SPDT today is done with computer numerical control (CNC) machine tools. Diamonds also serve in other machining processes, such as milling, grinding, and honing. Diamond turned surfaces have a high specular brightness and require no additional polishing or buffing, unlike other conventionally machined surfaces.
Process
Diamond turning is a multi-stage process. Initial stages of machining are carried out using a series of CNC lathes of increasing accuracy. A diamond-tipped lathe tool is used in the final stages of the manufacturing process to achieve sub-nanometer level surface finishes and sub-micrometer form accuracies. The surface finish quality is measured as the peak-to-valley distance of the grooves left by the lathe. The form accuracy is measured as a mean deviation from the ideal target form. Quality of surface finish and form accuracy is monitored throughout the manufacturing process using such equipment as contact and laser profilometers, laser interferometers, optical and electron microscopes. Diamond turning is most often used for making infrared optics, because optical performance at longer wavelengths is less sensitive to mid-spatial-frequency errors and surface finish quality, and because many of the materials used are difficult to polish with traditional methods.
Temperature control is crucial, because the surface must be accurate on distance scales shorter than the wavelength of light. Temperature changes of a few degrees during machining can alter the form of the surface enough to have an effect. The main spindle may be cooled with a liquid coolant to prevent temperature deviations.
The diamonds that are used in the process are strong in the downhill regime but tool wear is also highly dependent on crystal anisotropy and work material.
The machine tool
For best possible quality natural diamonds are used as single-point cutting elements during the final stages of the machining process. A CNC SPDT lathe rests atop a high-quality granite base with micrometer surface finish quality. The granite base is placed on air suspension on a solid foundation, keeping its working surface strictly horizontal. The machine tool components are placed on top of the granite base and can be moved with high degree of accuracy using a high-pressure air cushion or hydraulic suspension. The machined element is attached to an air chuck using negative air pressure and is usually centered manually using a micrometer. The chuck itself is separated from the electric motor that spins it by another air suspension.
The cutting tool is moved with sub-micron precision by a combination of electric motors and piezoelectric actuators. As with other CNC machines, the motion of the tool is controlled by a list of coordinates generated by a computer. Typically, the part to be created is first described using a computer aided design (CAD) model, then converted to G-code using a computer aided manufacturing (CAM) program, and the G-code is then executed by the machine control computer to move the cutting tool. The final surface is achieved with a series of cutting passes to maintain a ductile cutting regime.
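As a sketch of that "list of coordinates" idea, the snippet below samples a rotationally symmetric asphere using the standard conic sag equation and prints the points as bare linear moves. The radius, conic constant and step size are invented, and a real CAM post-processor would add feed rates, tool-nose-radius compensation and machine-specific codes.

```python
import math

def asphere_sag(r, vertex_radius, conic_k):
    """Sag z(r) of a conic asphere with the given vertex radius and conic constant."""
    return r * r / (vertex_radius * (1 + math.sqrt(
        1 - (1 + conic_k) * r * r / (vertex_radius * vertex_radius))))

R, k, step = 25.0, -0.8, 0.5      # mm and dimensionless; illustrative values

for i in range(21):               # radial positions 0.0 to 10.0 mm
    x = i * step
    z = asphere_sag(x, R, k)
    # One linear move per sampled point; practice uses a much finer step.
    print(f"G01 X{x:.4f} Z{-z:.4f}")
```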
Alternative methods of diamond machining in practice also include diamond fly cutting and diamond milling. Diamond fly cutting can be used to generate diffraction gratings and other linear patterns with appropriately contoured diamond shapes. Diamond milling can be used to generate aspheric lens arrays by annulus cutting methods with a spherical diamond tool.
Materials
Diamond turning is specifically useful when cutting materials that are viable as infrared optical components and certain non-linear optical components such as potassium dihydrogen phosphate (KDP). KDP is a perfect material in application for diamond turning, because the material is very desirable for its optical modulating properties, yet it is impossible to make optics from this material using conventional methods. KDP is water-soluble, so conventional grinding and polishing techniques are not effective in producing optics. Diamond turning works well to produce optics from KDP.
Generally, diamond turning is restricted to certain materials. Materials that are readily machinable include:
Plastics
Acetal
Acrylic
Nylon
Polycarbonate
Polypropylene
Polystyrene
Zeonex
Metals
Aluminum and aluminium alloys
Brass
Copper
Gold
Nickel-phosphorus alloy, deposited via electrolytic or electroless nickel plating on other materials
Silver
Tin
Zinc
Infrared crystals
Cadmium sulfide
Cadmium telluride
Calcium fluoride
Cesium iodide
Gallium arsenide
Germanium
Lithium niobate
Potassium bromide
Potassium dihydrogen phosphate (KDP)
Silicon
Sodium chloride
Tellurium dioxide
Zinc selenide
Zinc sulfide
The most often requested materials that are not readily machinable are:
Silicon-based glasses and ceramics
Ferrous materials (steel, iron)
Beryllium
Titanium
Molybdenum
Nickel (except for electroless nickel plating)
Ferrous materials are not readily machinable because the carbon in the diamond tool chemically reacts with the substrate, leading to tool damage and dulling after short cut lengths. Several techniques have been investigated to prevent this reaction, but few have been successful for long diamond machining processes at mass production scales.
Tool life improvement has been under consideration in diamond turning as the tool is expensive. Hybrid processes such as laser-assisted machining have emerged in this industry recently. The laser softens hard and difficult-to-machine materials such as ceramics and semiconductors, making them easier to cut.
Quality control
Despite all the automation involved in the diamond turning process, the human operator still plays the main role in achieving the final result. Quality control is a major part of the diamond turning process and is required after each stage of machining, sometimes after each pass of the cutting tool. If it is not detected immediately, even a minute error during any of the cutting stages results in a defective part. The extremely high requirements for quality of diamond-turned optics leave virtually no room for error.
The SPDT manufacturing process produces a relatively high percentage of defective parts, which must be discarded. As a result, the manufacturing costs are high compared to conventional polishing methods. Even with the relatively high volume of optical components manufactured using the SPDT process, this process cannot be classified as mass production, especially when compared with production of polished optics. Each diamond-turned optical element is manufactured on an individual basis with extensive manual labor.
History
Research into single-point diamond turning began in the late 1940s with Philips in the Netherlands, while Lawrence Livermore National Laboratory (LLNL) pioneered SPDT in the mid-1960s. By 1979, LLNL received funding to transfer this technology to private industry.
LLNL initially focused on two-axis machining for axisymmetric surfaces and developed the Large Optics Diamond Turning Machine (LDTM), a highly accurate lathe. They also experimented with freeform surfaces using fast tool servos and XZC (slow tool servo) turning, leading to applications like wavefront correctors for lasers.
Three-axis turning became more common in the early 1990s as diamond quality improved. Companies like Zeiss began producing refractive lenses for infrared optics, advancing freeform optical manufacturing. By 2002, interest in freeform shapes had expanded, especially in focusing lenses. Early applications included Polaroid’s SX-70 camera, and fast tool servos enabled rapid production of non-axisymmetric surfaces for contact lenses.
See also
Fabrication and testing (optical components)
References
Optics
Glass production
Turning | Diamond turning | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,671 | [
"Glass engineering and science",
"Applied and interdisciplinary physics",
"Optics",
"Glass production",
" molecular",
"Atomic",
" and optical physics"
] |
1,394,160 | https://en.wikipedia.org/wiki/Thermal%20shock | Thermal shock is a phenomenon characterized by a rapid change in temperature that results in a transient mechanical load on an object. The load is caused by the differential expansion of different parts of the object due to the temperature change. This differential expansion can be understood in terms of strain, rather than stress. When the resulting stress exceeds the tensile strength of the material, cracks can form and eventually lead to structural failure.
Methods to prevent thermal shock include:
Minimizing the thermal gradient by changing the temperature gradually
Increasing the thermal conductivity of the material
Reducing the coefficient of thermal expansion of the material
Increasing the strength of the material
Introducing compressive stress in the material, such as in tempered glass
Decreasing the Young's modulus of the material
Increasing the toughness of the material through crack tip blunting or crack deflection, utilizing the process of plastic deformation, and phase transformation
Effect on materials
Borosilicate glass is made to withstand thermal shock better than most other glass through a combination of reduced expansion coefficient, and greater strength, though fused quartz outperforms it in both these respects. Some glass-ceramic materials (mostly in the lithium aluminosilicate (LAS) system) include a controlled proportion of material with a negative expansion coefficient, so that the overall coefficient can be reduced to almost exactly zero over a reasonably wide range of temperatures.
Among the best thermomechanical materials are alumina, zirconia, tungsten alloys, silicon nitride, silicon carbide, boron carbide, and some stainless steels.
Reinforced carbon-carbon is extremely resistant to thermal shock, due to graphite's extremely high thermal conductivity and low expansion coefficient, the high strength of carbon fiber, and a reasonable ability to deflect cracks within the structure.
The impulse excitation technique has proved to be a useful tool for measuring thermal shock. It can be used to measure the Young's modulus, shear modulus, Poisson's ratio, and damping coefficient non-destructively. The same test piece can be measured after successive thermal shock cycles, so that the deterioration in physical properties can be mapped out.
Thermal shock resistance
Thermal shock resistance measures can be used for material selection in applications subject to rapid temperature changes. A common measure of thermal shock resistance is the maximum temperature differential, ΔT, which can be sustained by the material for a given thickness.
Strength-controlled thermal shock resistance
The maximum temperature jump, ΔT, sustainable by a material can be defined for strength-controlled models by:
where σ_f is the failure stress (which can be the yield or fracture stress), α is the coefficient of thermal expansion, E is the Young's modulus, and A is a constant depending upon the part constraint, material properties, and thickness,
where A combines a system constraint constant dependent upon the Poisson's ratio and a non-dimensional parameter dependent upon the Biot number.
The Biot number may be approximated by:
Bi = th/k
where t is the thickness, h is the heat transfer coefficient, and k is the thermal conductivity.
Perfect heat transfer
If perfect heat transfer is assumed, the maximum temperature jump supported by the material is:
for cold shock in plates
for hot shock in plates
A material index for material selection according to thermal shock resistance in the fracture stress derived perfect heat transfer case is therefore:
Poor heat transfer
For cases with poor heat transfer, the maximum temperature differential supported by the material is:
for cold shock
for hot shock
In the poor heat transfer case, a higher thermal conductivity is beneficial for thermal shock resistance. The material index for the poor heat transfer case is often taken as:
According to both the perfect and poor heat transfer models, larger temperature differentials can be tolerated for hot shock than for cold shock.
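A minimal numerical sketch of the two selection criteria follows. It assumes the standard strength-controlled merit indices (perfect heat transfer: σ_f(1 − ν)/(Eα); poor heat transfer: the same expression multiplied by the thermal conductivity k) together with the usual definition of the Biot number; the geometry- and constraint-dependent prefactors are omitted, the function names are choices made here, and all inputs are taken in SI units.

def biot_number(h, t, k):
    # Bi ~ h * t / k: heat transfer coefficient h, section thickness t, thermal conductivity k
    return h * t / k

def merit_index_perfect(sigma_f, nu, E, alpha):
    # strength-controlled merit index for rapid quenching (perfect heat transfer)
    return sigma_f * (1.0 - nu) / (E * alpha)

def merit_index_poor(sigma_f, nu, E, alpha, k):
    # for slow (poor) heat transfer the thermal conductivity enters multiplicatively
    return k * merit_index_perfect(sigma_f, nu, E, alpha)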
Fracture toughness controlled thermal shock resistance
In addition to thermal shock resistance defined by material fracture strength, models have also been defined within the fracture mechanics framework. Lu and Fleck produced criteria for thermal shock cracking based on fracture toughness controlled cracking. The models were based on thermal shock in ceramics (generally brittle materials). Assuming an infinite plate, and mode I cracking, the crack was predicted to start from the edge for cold shock, but the center of the plate for hot shock. Cases were divided into perfect, and poor heat transfer to further simplify the models.
Perfect heat transfer
The sustainable temperature jump decreases with increasing convective heat transfer (and therefore with larger Biot number). This is represented in the model shown below for perfect heat transfer:
where K_IC is the mode I fracture toughness, E is the Young's modulus, α is the thermal expansion coefficient, and H is half the thickness of the plate.
for cold shock
for hot shock
A material index for material selection in the fracture mechanics derived perfect heat transfer case is therefore:
Poor heat transfer
For cases with poor heat transfer, the Biot number is an important factor in the sustainable temperature jump.
Critically, for poor heat transfer cases, materials with higher thermal conductivity, k, have higher thermal shock resistance. As a result, a commonly chosen material index for thermal shock resistance in the poor heat transfer case is:
Kingery thermal shock methods
The temperature difference to initiate fracture has been described by William David Kingery to be:
where S is a shape factor, σ_f is the fracture stress, k is the thermal conductivity, E is the Young's modulus, α is the coefficient of thermal expansion, h is the heat transfer coefficient, and R′ is a fracture resistance parameter. The fracture resistance parameter R′ is a common metric used to define the thermal shock tolerance of materials.
The formulas were derived for ceramic materials, and make the assumptions of a homogeneous body with material properties independent of temperature, but can be well applied to other brittle materials.
Testing
Thermal shock testing exposes products to alternating low and high temperatures to accelerate failures caused by temperature cycles or thermal shocks during normal use. The transition between temperature extremes occurs very rapidly, greater than 15 °C per minute.
Equipment with single or multiple chambers is typically used to perform thermal shock testing. When using single chamber thermal shock equipment, the products remain in one chamber and the chamber air temperature is rapidly cooled and heated. Some equipment uses separate hot and cold chambers with an elevator mechanism that transports the products between two or more chambers.
Glass containers can be sensitive to sudden changes in temperature. One method of testing involves rapid movement from cold to hot water baths, and back.
Examples of thermal shock failure
Hard rocks containing ore veins such as quartzite were formerly broken down using fire-setting, which involved heating the rock face with a wood fire, then quenching with water to induce crack growth. The technique is described by Diodorus Siculus in his account of Egyptian gold mines, by Pliny the Elder, and by Georg Agricola.
Ice cubes placed in a glass of warm water crack by thermal shock as the exterior surface increases in temperature much faster than the interior. The outer layer expands as it warms, while the interior remains largely unchanged. This rapid change in volume between different layers creates stresses in the ice that build until the force exceeds the strength of the ice, and a crack forms, sometimes with enough force to shoot ice shards out of the container.
Incandescent bulbs that have been running for a while have a very hot surface. Splashing cold water on them can cause the glass to shatter due to thermal shock, and the bulb to implode.
An antique cast iron cookstove is a simple iron box on legs, with a cast iron top. A wood or coal fire is built inside the box and food is cooked on the top outer surface of the box, like a griddle. If a fire is built too hot, and then the stove is cooled by pouring water on the top surface, it will crack due to thermal shock.
A strong temperature gradient (due to the dousing of a fire with water) is believed to have caused the breakage of the third Tsar Bell.
Thermal shock is a primary contributor to head gasket failure in internal combustion engines.
See also
Biot number
Impulse excitation technique
Spontaneous glass breakage
Strain
References
Materials degradation
Laser science
Heat transfer
Temperature | Thermal shock | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 1,643 | [
"Transport phenomena",
"Scalar physical quantities",
"Temperature",
"Physical phenomena",
"Thermodynamic properties",
"Physical quantities",
"Heat transfer",
"SI base quantities",
"Intensive quantities",
"Materials science",
"Materials degradation",
"Thermodynamics",
"Wikipedia categories na... |
1,394,307 | https://en.wikipedia.org/wiki/Photophosphorylation | In the process of photosynthesis, the phosphorylation of ADP to form ATP using the energy of sunlight is called photophosphorylation. Cyclic photophosphorylation occurs in both aerobic and anaerobic conditions, driven by the main primary source of energy available to living organisms, which is sunlight. All organisms produce a phosphate compound, ATP, which is the universal energy currency of life. In photophosphorylation, light energy is used to pump protons across a biological membrane, mediated by flow of electrons through an electron transport chain. This stores energy in a proton gradient. As the protons flow back through an enzyme called ATP synthase, ATP is generated from ADP and inorganic phosphate. ATP is essential in the Calvin cycle to assist in the synthesis of carbohydrates from carbon dioxide and NADPH.
ATP and reactions
Both the structure of ATP synthase and its underlying gene are remarkably similar in all known forms of life. ATP synthase is powered by a transmembrane electrochemical potential gradient, usually in the form of a proton gradient. In all living organisms, a series of redox reactions is used to produce a transmembrane electrochemical potential gradient, or a so-called proton motive force (pmf).
Redox reactions are chemical reactions in which electrons are transferred from a donor molecule to an acceptor molecule. The underlying force driving these reactions is the Gibbs free energy of the reactants relative to the products. If donor and acceptor (the reactants) are of higher free energy than the reaction products, the electron transfer may occur spontaneously. The Gibbs free energy is the energy available ("free") to do work. Any reaction that decreases the overall Gibbs free energy of a system will proceed spontaneously (given that the system is isobaric and also at constant temperature), although the reaction may proceed slowly if it is kinetically inhibited.
The fact that a reaction is thermodynamically possible does not mean that it will actually occur. A mixture of hydrogen gas and oxygen gas does not spontaneously ignite. It is necessary either to supply an activation energy or to lower the intrinsic activation energy of the system, in order to make most biochemical reactions proceed at a useful rate. Living systems use complex macromolecular structures to lower the activation energies of biochemical reactions.
It is possible to couple a thermodynamically favorable reaction (a transition from a high-energy state to a lower-energy state) to a thermodynamically unfavorable reaction (such as a separation of charges, or the creation of an osmotic gradient), in such a way that the overall free energy of the system decreases (making it thermodynamically possible), while useful work is done at the same time. The principle that biological macromolecules catalyze a thermodynamically unfavorable reaction if and only if a thermodynamically favorable reaction occurs simultaneously, underlies all known forms of life.
The transfer of electrons from a donor molecule to an acceptor molecule can be spatially separated into a series of intermediate redox reactions. This is an electron transport chain (ETC). Electron transport chains often produce energy in the form of a transmembrane electrochemical potential gradient. The gradient can be used to transport molecules across membranes. Its energy can be used to produce ATP or to do useful work, for instance mechanical work of a rotating bacterial flagella.
Cyclic photophosphorylation
This form of photophosphorylation occurs on the stroma lamella, or fret channels. In cyclic photophosphorylation, the high-energy electron released from P700, a pigment in a complex called photosystem I, flows in a cyclic pathway. The electron starts in photosystem I, passes from the primary electron acceptor to ferredoxin and then to plastoquinone, next to cytochrome b6f (a complex similar to that found in mitochondria), and finally to plastocyanin before returning to photosystem I. This transport chain produces a proton-motive force, pumping H+ ions across the membrane and producing a concentration gradient that can be used to power ATP synthase during chemiosmosis. This pathway is known as cyclic photophosphorylation, and it produces neither O2 nor NADPH. Unlike non-cyclic photophosphorylation, NADP+ does not accept the electrons; they are instead sent back to the cytochrome b6f complex.
In bacterial photosynthesis, a single photosystem is used, and therefore is involved in cyclic photophosphorylation.
It is favored in anaerobic conditions and in conditions of high irradiance and CO2 compensation points.
Non-cyclic photophosphorylation
The other pathway, non-cyclic photophosphorylation, is a two-stage process involving two different chlorophyll photosystems in the thylakoid membrane. First, a photon is absorbed by chlorophyll pigments surrounding the reaction core center of photosystem II. The light excites an electron in the pigment P680 at the core of photosystem II, which is transferred to the primary electron acceptor, pheophytin, leaving behind P680+. The energy of P680+ is used in two steps to split a water molecule into 2H+ + 1/2 O2 + 2e− (photolysis, or light-splitting). An electron from the water molecule reduces P680+ back to P680, while the H+ and oxygen are released. The electron transfers from pheophytin to plastoquinone (PQ), which takes 2e− (in two steps) from pheophytin, and two H+ ions from the stroma, to form PQH2. This plastoquinol is later oxidized back to PQ, releasing the 2e− to the cytochrome b6f complex and the two H+ ions into the thylakoid lumen. The electrons then pass through Cyt b6 and Cyt f to plastocyanin, using energy from photosystem I to pump hydrogen ions (H+) into the thylakoid space. This creates an H+ gradient, making H+ ions flow back into the stroma of the chloroplast, providing the energy for the (re)generation of ATP.
The photosystem II complex replaces its lost electrons from H2O, so electrons are not returned to photosystem II as they would be in the analogous cyclic pathway. Instead, they are transferred to the photosystem I complex, which boosts their energy to a higher level using a second solar photon. The excited electrons are transferred to a series of acceptor molecules, but this time are passed on to an enzyme called ferredoxin-NADP+ reductase, which uses them to catalyze the reaction
NADP+ + 2H+ + 2e− → NADPH + H+
This consumes the H+ ions produced by the splitting of water, leading to a net production of 1/2 O2, ATP, and NADPH + H+ with the consumption of solar photons and water.
The concentration of NADPH in the chloroplast may help regulate which pathway electrons take through the light reactions. When the chloroplast runs low on ATP for the Calvin cycle, NADPH will accumulate and the plant may shift from noncyclic to cyclic electron flow.
Early history of research
In 1950, the first experimental evidence for the existence of photophosphorylation in vivo was presented by Otto Kandler, using intact Chlorella cells and interpreting his findings as light-dependent ATP formation.
In 1954, Daniel I. Arnon et al. discovered photophosphorylation in vitro in isolated chloroplasts with the help of 32P.
His first review on the early research of photophosphorylation was published in 1956.
References
Fenchel T, King GM, Blackburn TH. Bacterial Biogeochemistry: The Ecophysiology of Mineral Cycling. 2nd ed. Elsevier; 1998.
Lengeler JW, Drews G, Schlegel HG, editors. Biology of the Prokaryotes. Blackwell Sci; 1999.
Nelson DL, Cox MM. Lehninger Principles of Biochemistry. 4th ed. Freeman; 2005.
Stumm W, Morgan JJ. Aquatic Chemistry. 3rd ed. Wiley; 1996.
Thauer RK, Jungermann K, Decker K. Energy Conservation in Chemotrophic Anaerobic Bacteria. Bacteriol. Rev. 41:100–180; 1977.
White D. The Physiology and Biochemistry of Prokaryotes. 2nd ed. Oxford University Press; 2000.
Voet D, Voet JG. Biochemistry. 3rd ed. Wiley; 2004.
Photosynthesis
Light reactions | Photophosphorylation | [
"Chemistry",
"Biology"
] | 1,849 | [
"Biochemistry",
"Light reactions",
"Photosynthesis",
"Biochemical reactions"
] |
1,394,385 | https://en.wikipedia.org/wiki/Feynman%20slash%20notation | In the study of Dirac fields in quantum field theory, Richard Feynman introduced the convenient Feynman slash notation (less commonly known as the Dirac slash notation). If A is a covariant vector (i.e., a 1-form),
\slashed{A} \equiv \gamma^0 A_0 + \gamma^1 A_1 + \gamma^2 A_2 + \gamma^3 A_3 ,
where the \gamma^\mu are the gamma matrices. Using the Einstein summation notation, the expression is simply
\slashed{A} = \gamma^\mu A_\mu .
Identities
Using the anticommutators of the gamma matrices, one can show that, for any a and b,
\slashed{a}\slashed{b} + \slashed{b}\slashed{a} = 2 (a \cdot b) \, I_4 ,
where I_4 is the identity matrix in four dimensions.
In particular,
\slashed{a}\slashed{a} = a^2 I_4 .
Further identities can be read off directly from the gamma matrix identities by replacing the metric tensor with inner products. For example,
where:
is the Levi-Civita symbol
is the Minkowski metric
is a scalar.
With four-momentum
This section uses the metric signature. Often, when using the Dirac equation and solving for cross sections, one finds the slash notation used on four-momentum: using the Dirac basis for the gamma matrices,
as well as the definition of contravariant four-momentum in natural units,
we see explicitly that
Similar results hold in other bases, such as the Weyl basis.
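In the Dirac basis specifically, assuming the (+, −, −, −) metric signature and writing the four-momentum as p^\mu = (E, p_x, p_y, p_z) (conventions stated here as assumptions), the explicit result can be summarized as:

\slashed{p} = \gamma^\mu p_\mu = \gamma^0 E - \gamma^1 p_x - \gamma^2 p_y - \gamma^3 p_z ,
\qquad
\slashed{p}\,\slashed{p} = p^\mu p_\mu \, I_4 = \left(E^2 - |\vec{p}\,|^2\right) I_4 = m^2 I_4 \quad \text{(on shell)} .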
See also
Weyl basis
Gamma matrices
Four-vector
S-matrix
References
Quantum field theory
Spinors
Richard Feynman
de:Dirac-Matrizen#Feynman-Slash-Notation | Feynman slash notation | [
"Physics"
] | 273 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
1,394,454 | https://en.wikipedia.org/wiki/Vertex%20function | In quantum electrodynamics, the vertex function describes the coupling between a photon and an electron beyond the leading order of perturbation theory. In particular, it is the one-particle-irreducible correlation function involving the fermion \psi, the antifermion \bar\psi, and the vector potential A.
Definition
The vertex function can be defined in terms of a functional derivative of the effective action Seff as
The dominant (and classical) contribution to \Gamma^\mu is the gamma matrix \gamma^\mu, which explains the choice of the letter. The vertex function is constrained by the symmetries of quantum electrodynamics — Lorentz invariance; gauge invariance or the transversality of the photon, as expressed by the Ward identity; and invariance under parity — to take the following form:
where q is the incoming four-momentum of the external photon (on the right-hand side of the figure), and F1(q2) and F2(q2) are form factors that depend only on the momentum transfer q2. At tree level (or leading order), F1(q2) = 1 and F2(q2) = 0. Beyond leading order, the corrections to F1(0) are exactly canceled by the field strength renormalization. The form factor F2(0) corresponds to the anomalous magnetic moment a of the fermion, defined in terms of the Landé g-factor as:
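In standard notation, and as a sketch assuming the usual conventions \sigma^{\mu\nu} = (i/2)[\gamma^\mu, \gamma^\nu] and fermion mass m, the constrained form of the vertex and its relation to the anomalous moment read:

\Gamma^\mu(q^2) = \gamma^\mu F_1(q^2) + \frac{i\,\sigma^{\mu\nu} q_\nu}{2m}\, F_2(q^2) ,
\qquad
a = \frac{g-2}{2} = F_2(0) .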
See also
Nonoblique correction
References
External links
Quantum electrodynamics
Quantum field theory | Vertex function | [
"Physics"
] | 311 | [
"Quantum field theory",
"Quantum mechanics",
"Quantum physics stubs"
] |
1,395,554 | https://en.wikipedia.org/wiki/Distance%20modulus | The distance modulus is a way of expressing distances that is often used in astronomy. It describes distances on a logarithmic scale based on the astronomical magnitude system.
Definition
The distance modulus is the difference between the apparent magnitude (ideally, corrected for the effects of interstellar absorption) and the absolute magnitude of an astronomical object. It is related to the luminosity distance in parsecs by:
This definition is convenient because the observed brightness of a light source is related to its distance by the inverse square law (a source twice as far away appears one quarter as bright) and because brightnesses are usually expressed not directly, but in magnitudes.
Absolute magnitude is defined as the apparent magnitude of an object when seen at a distance of 10 parsecs. If a light source has flux F(d) when observed from a distance of d parsecs, and flux F(10) when observed from a distance of 10 parsecs, the inverse-square law is then written as:
The magnitudes and flux are related by:
Substituting and rearranging, we get:
which means that the apparent magnitude is the absolute magnitude plus the distance modulus.
Isolating the distance from the equation above, one finds that the distance (or the luminosity distance) in parsecs is given by
The uncertainty in the distance in parsecs (δd) can be computed from the uncertainty in the distance modulus (δμ) using
which is derived using standard error analysis.
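These relations translate directly into code. The sketch below assumes the standard forms μ = 5 log10(d) − 5 (with d in parsecs), d = 10^(μ/5 + 1), and δd = 0.2 ln(10) d δμ; the function names are choices made here.

import math

def distance_modulus(d_pc):
    # mu = m - M = 5 log10(d) - 5, with the distance d in parsecs
    return 5.0 * math.log10(d_pc) - 5.0

def distance_pc(mu):
    # invert the distance modulus: d = 10**(mu/5 + 1)
    return 10.0 ** (mu / 5.0 + 1.0)

def distance_uncertainty_pc(mu, sigma_mu):
    # standard error propagation through d = 10**(mu/5 + 1)
    return 0.2 * math.log(10.0) * distance_pc(mu) * sigma_mu

# e.g. the Large Magellanic Cloud at mu = 18.5 gives roughly 5.0e4 pc (about 50 kpc)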
Different kinds of distance moduli
Distance is not the only quantity relevant in determining the difference between absolute and apparent magnitude. Absorption is another important factor, and it may even be a dominant one in particular cases (e.g., in the direction of the Galactic Center). Thus a distinction is made between distance moduli uncorrected for interstellar absorption, the values of which would overestimate distances if used naively, and absorption-corrected moduli.
The first ones are termed visual distance moduli and are denoted by , while the second ones are called true distance moduli and denoted by .
Visual distance moduli are computed by calculating the difference between the observed apparent magnitude and some theoretical estimate of the absolute magnitude. True distance moduli require a further theoretical step; that is, the estimation of the interstellar absorption coefficient.
Usage
Distance moduli are most commonly used when expressing the distance to other galaxies in the relatively nearby universe. For example, the Large Magellanic Cloud (LMC) is at a distance modulus of 18.5, the Andromeda Galaxy's distance modulus is 24.4, and the galaxy NGC 4548 in the Virgo Cluster has a DM of 31.0. In the case of the LMC, this means that Supernova 1987A, with a peak apparent magnitude of 2.8, had an absolute magnitude of −15.7, which is low by supernova standards.
Using distance moduli makes computing magnitudes easy. For instance, a solar-type star (M = 5) in the Andromeda Galaxy (DM = 24.4) would have an apparent magnitude (m) of 5 + 24.4 = 29.4, so it would be barely visible to the Hubble Space Telescope, which has a limiting magnitude of about 30. Since it is apparent magnitudes which are actually measured at a telescope, many discussions about distances in astronomy are really discussions about the putative or derived absolute magnitudes of the distant objects being observed.
References
Zeilik, Gregory and Smith, Introductory Astronomy and Astrophysics (1992, Thomson Learning)
Physical quantities
de:Absolute Helligkeit#Entfernungsmodul | Distance modulus | [
"Physics",
"Mathematics"
] | 732 | [
"Physical phenomena",
"Quantity",
"Physical quantities",
"Physical properties"
] |
1,395,967 | https://en.wikipedia.org/wiki/Signaling%20game | In game theory, a signaling game is a simple type of a dynamic Bayesian game.
The essence of a signalling game is that one player takes an action, the signal, to convey information to another player, where sending the signal is more costly if they are conveying false information. A manufacturer, for example, might provide a warranty for its product in order to signal to consumers that its product is unlikely to break down. The classic example is of a worker who acquires a college degree not because it increases their skill, but because it conveys their ability to employers.
A simple signalling game would have two players, the sender and the receiver. The sender has one of two types that might be called "desirable" and "undesirable" with different payoff functions, where the receiver knows the probability of each type but not which one this particular sender has. The receiver has just one possible type.
The sender moves first, choosing an action called the "signal" or "message" (though the term "message" is more often used in non-signalling "cheap talk" games where sending messages is costless). The receiver moves second, after observing the signal.
The two players receive payoffs dependent on the sender's type, the message chosen by the sender and the action chosen by the receiver.
The tension in the game is that the sender wants to persuade the receiver that they have the desirable type, and they will try to choose a signal to do that. Whether this succeeds depends on whether the undesirable type would send the same signal, and how the receiver interprets the signal.
Perfect Bayesian equilibrium
The equilibrium concept that is relevant for signaling games is the perfect Bayesian equilibrium, a refinement of Bayesian Nash equilibrium.
Nature chooses the sender to have type with probability . The sender then chooses the probability with which to take signalling action , which can be written as for each possible The receiver observes the signal but not , and chooses the probability with which to take response action , which can be written as for each possible The sender's payoff is and the receiver's is
A perfect Bayesian equilibrium is a combination of beliefs and strategies for each player. Both players believe that the other will follow the strategies specified in the equilibrium, as in simple Nash equilibrium, unless they observe something that has probability zero in the equilibrium. The receiver's beliefs also include a probability distribution representing the probability put on the sender having type if the receiver observes signal . The receiver's strategy is a choice of The sender's strategy is a choice of . These beliefs and strategies must satisfy certain conditions:
Sequential rationality: each strategy should maximize a player's expected utility, given their beliefs.
Consistency: each belief should be updated according to the equilibrium strategies, the observed actions, and Bayes' rule on every path reached in equilibrium with positive probability. On paths of zero probability, known as "off-equilibrium paths", the beliefs must be specified but can be arbitrary.
The kinds of perfect Bayesian equilibria that may arise can be divided in three different categories: pooling equilibria, separating equilibria and semi-separating. A given game may or may not have more than one equilibrium.
In a pooling equilibrium, senders of different types all choose the same signal. This means that the signal does not give any information to the receiver, so the receiver's beliefs are not updated after seeing the signal.
In a separating equilibrium, senders of different types always choose different signals. This means that the signal always reveals the sender's type, so the receiver's beliefs become deterministic after seeing the signal.
In a semi-separating equilibrium (also called partial-pooling), some types of senders choose the same message and other types choose different messages.
If there are more types of senders than there are messages, the equilibrium can never be a separating equilibrium (but may be semi-separating).
There are also hybrid equilibria, in which the sender randomizes between pooling and separating.
Examples
Reputation game
In this game, the sender and the receiver are firms. The sender is an incumbent firm and the receiver is an entrant firm.
The sender can be one of two types: sane or crazy. A sane sender can send one of two messages: prey and accommodate. A crazy sender can only prey.
The receiver can do one of two actions: stay or exit.
The payoffs are given by the table at the right. It is assumed that:
M1>D1>P1, i.e., a sane sender prefers to be a monopoly (M1), but if it is not a monopoly, it prefers to accommodate (D1) than to prey (P1). The value of X1 is irrelevant since a crazy firm has only one possible action.
D2>0>P2, i.e., the receiver prefers to stay in a market with a sane competitor (D2) than to exit the market (0), but prefers to exit than to stay in a market with a crazy competitor (P2).
A priori, the sender has probability p to be sane and 1-p to be crazy.
We now look for perfect Bayesian equilibria. It is convenient to differentiate between separating equilibria and pooling equilibria.
A separating equilibrium, in our case, is one in which the sane sender always accommodates. This separates it from a crazy sender. In the second period, the receiver has complete information: their beliefs are "If accommodate then the sender is sane, otherwise the sender is crazy". Their best-response is: "If accommodate then stay, if prey then exit". The payoff of the sender when they accommodate is D1+D1, but if they deviate to prey their payoff changes to P1+M1; therefore, a necessary condition for a separating equilibrium is D1+D1≥P1+M1 (i.e., the cost of preying overrides the gain from being a monopoly). It is possible to show that this condition is also sufficient.
A pooling equilibrium is one in which the sane sender always preys. In the second period, the receiver has no new information. If the sender preys, then the receiver's beliefs must be equal to the apriori beliefs, which are, the sender is sane with probability p and crazy with probability 1-p. Therefore, the receiver's expected payoff from staying is: [p D2 + (1-p) P2]; the receiver stays if-and-only-if this expression is positive. The sender can gain from preying, only if the receiver exits. Therefore, a necessary condition for a pooling equilibrium is p D2 + (1-p) P2 ≤ 0 (intuitively, the receiver is careful and will not enter the market if there is a risk that the sender is crazy. The sender knows this, and thus hides their true identity by always preying like a crazy). But this condition is not sufficient: if the receiver exits also after accommodate, then it is better for the sender to accommodate, since it is cheaper than Prey. So it is necessary that the receiver stays after accommodate, and it is necessary that D1+D1<P1+M1 (i.e., the gain from being a monopoly overrides the cost of preying). Finally, we must make sure that staying after accommodate is a best-response for the receiver. For this, the receiver's beliefs must be specified after accommodate. This path has probability 0, so Bayes' rule does not apply, and we are free to choose the receiver's beliefs as e.g. "If accommodate then the sender is sane".
Summary:
If preying is costly for a sane sender (D1+D1≥P1+M1), they will accommodate and there will be a unique separating PBE: the receiver will stay after accommodate and exit after prey.
If preying is not too costly for a sane sender (D1+D1<P1+M1), and it is harmful for the receiver (p D2 + (1-p) P2 ≤ 0), the sender will prey and there will be a unique pooling PBE: again the receiver will stay after accommodate and exit after prey. Here, the sender is willing to lose some value by preying in the first period, in order to build a reputation of a predatory firm, and convince the receiver to exit.
If preying is neither costly for the sane sender nor harmful for the receiver, there will not be a PBE in pure strategies; there will be a unique PBE in mixed strategies, in which both the sender and the receiver randomize between their two actions (a computational sketch of this case analysis is given below).
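The sketch below condenses the three cases into a single decision rule. The payoff names and conditions follow the summary above exactly; the function itself is purely illustrative and not part of the original game-theoretic treatment.

def reputation_game_equilibrium(M1, D1, P1, D2, P2, p):
    # Classify the perfect Bayesian equilibrium of the reputation game,
    # using the three conditions stated in the summary above.
    if D1 + D1 >= P1 + M1:
        # preying is costly for the sane sender
        return "separating PBE: sane sender accommodates"
    if p * D2 + (1 - p) * P2 <= 0:
        # preying is harmful, in expectation, for the receiver
        return "pooling PBE: sane sender preys"
    # neither condition holds: only a mixed-strategy PBE exists
    return "mixed-strategy PBE: both players randomize"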
Education game
Michael Spence's 1973 paper on education as a signal of ability is the start of the economic analysis of signalling. In this game, the senders are workers and the receivers are employers. The example below has two types of workers and a continuous signal level.
The players are a worker and two firms. The worker chooses an education level the signal, after which the firms simultaneously offer him a wage and and he accepts one or the other. The worker's type, known only to himself, is either high ability with or low ability with each type having probability 1/2. The high-ability worker's payoff is and the low-ability's is A firm that hires the worker at wage has payoff and the other firm has payoff 0.
In this game, the firms compete the wage down to where it equals the expected ability, so if there is no signal possible, the result would be This will also be the wage in a pooling equilibrium, one where both types of worker choose the same signal, so the firms are left using their prior belief of .5 for the probability he has High ability. In a separating equilibrium, the wage will be 0 for the signal level the Low type chooses and 10 for the high type's signal. There are many equilibria, both pooling and separating, depending on expectations.
In a separating equilibrium, the low type chooses The wages will be and for some critical level that signals high ability. For the low type to choose requires that so and we can conclude that For the high type to choose requires that so and we can conclude that Thus, any value of between 5 and 10 can support an equilibrium. Perfect Bayesian equilibrium requires an out-of-equilibrium belief to be specified too, for all the other possible levels of besides 0 and levels which are "impossible" in equilibrium since neither type plays them. These beliefs must be such that neither player would want to deviate from his equilibrium strategy 0 or to a different A convenient belief is that if another, more realistic, belief that would support an equilibrium is if and if . There is a continuum of equilibria, for each possible level of One equilibrium, for example, is
In a pooling equilibrium, both types choose the same One pooling equilibrium is for both types to choose no education, with the out-of-equilibrium belief In that case, the wage will be the expected ability of 5, and neither type of worker will deviate to a higher education level because the firms would not think that told them anything about the worker's type.
The most surprising result is that there are also pooling equilibria with Suppose we specify the out-of-equilibrium belief to be Then the wage will be 5 for a worker with but 0 for a worker with wage The low type compares the payoffs to and if he is willing to follow his equilibrium strategy of The high type will choose a fortiori. Thus, there is another continuum of equilibria, with values of in [0, 2.5].
In the signalling model of education, expectations are crucial. If, as in the separating equilibrium, employers expect that high-ability people will acquire a certain level of education and low-ability ones will not, we get the main insight: that if people cannot communicate their ability directly, they will acquire educations even if it does not increase productivity, just to demonstrate ability. Or, in the pooling equilibrium with if employers do not think education signals anything, we can get the outcome that nobody becomes educated. Or, in the pooling equilibrium with everyone acquires education that is completely useless, not even showing who has high ability, out of fear that if they deviate and do not acquire education, employers will think they have low ability.
Beer-Quiche game
The Beer-Quiche game of Cho and Kreps draws on the stereotype of quiche eaters being less masculine. In this game, an individual B is considering whether to duel with another individual A. B knows that A is either a wimp or is surly but not which. B would prefer a duel if A is a wimp but not if A is surly. Player A, regardless of type, wants to avoid a duel. Before making the decision, B has the opportunity to see whether A chooses to have beer or quiche for breakfast. Both players know that wimps prefer quiche while surlies prefer beer. The point of the game is to analyze the choice of breakfast by each kind of A. This has become a standard example of a signaling game; see the references for more details.
Applications of signaling games
Signaling games describe situations where one player has information the other player does not have. These situations of asymmetric information are very common in economics and behavioral biology.
Philosophy
The first signaling game was the Lewis signaling game, which occurred in David K. Lewis' Ph.D. dissertation (and later book) Convention. Replying to W.V.O. Quine, Lewis attempts to develop a theory of convention and meaning using signaling games. In his most extreme comments, he suggests that understanding the equilibrium properties of the appropriate signaling game captures all there is to know about meaning:
I have now described the character of a case of signaling without mentioning the meaning of the signals: that two lanterns meant that the redcoats were coming by sea, or whatever. But nothing important seems to have been left unsaid, so what has been said must somehow imply that the signals have their meanings.
The use of signaling games has been continued in the philosophical literature. Others have used evolutionary models of signaling games to describe the emergence of language. Work on the emergence of language in simple signaling games includes models by Huttegger, Grim, et al., Skyrms, and Zollman. Harms, and Huttegger, have attempted to extend the study to include the distinction between normative and descriptive language.
Economics
The first application of signaling games to economic problems was Michael Spence's Education game. A second application was the Reputation game.
Biology
Valuable advances have been made by applying signaling games to a number of biological questions. Most notable is Alan Grafen's (1990) handicap model of mate-attraction displays. The antlers of stags, the elaborate plumage of peacocks and birds-of-paradise, and the song of the nightingale are all such signals. Grafen's analysis of biological signaling is formally similar to the classic monograph on economic market signaling by Michael Spence. More recently, a series of papers by Getty shows that Grafen's analysis, like that of Spence, is based on the critical simplifying assumption that signalers trade off costs for benefits in an additive fashion, the way humans invest money to increase income in the same currency. This assumption that costs and benefits trade off in an additive fashion might be valid for some biological signaling systems, but is not valid for multiplicative tradeoffs, such as the survival cost – reproduction benefit tradeoff that is assumed to mediate the evolution of sexually selected signals.
Charles Godfray (1991) modeled the begging behavior of nestling birds as a signaling game. The nestlings' begging not only informs the parents that the nestlings are hungry, but also attracts predators to the nest. The parents and nestlings are in conflict: the nestlings benefit if the parents work harder to feed them than is optimal for the parents. The parents are trading off investment in the current nestlings against investment in future offspring.
Pursuit deterrent signals have been modeled as signaling games. Thomson's gazelles are known sometimes to perform a 'stott', a jump into the air of several feet with the white tail showing, when they detect a predator. Alcock and others have suggested that this action is a signal of the gazelle's speed to the predator. This action successfully distinguishes types because it would be impossible or too costly for a sick creature to perform, and hence the predator is deterred from chasing a stotting gazelle because it is obviously very agile and would prove hard to catch.
The concept of information asymmetry in molecular biology has long been apparent. Although molecules are not rational agents, simulations have shown that through replication, selection, and genetic drift, molecules can behave according to signaling game dynamics. Such models have been proposed to explain, for example, the emergence of the genetic code from an RNA and amino acid world.
Costly versus cost-free signaling
One of the major uses of signaling games both in economics and biology has been to determine under what conditions honest signaling can be an equilibrium of the game. That is, under what conditions can we expect rational people or animals subject to natural selection to reveal information about their types?
If both parties have coinciding interest, that is they both prefer the same outcomes in all situations, then honesty is an equilibrium. (Although in most of these cases non-communicative equilibria exist as well.) However, if the parties' interests do not perfectly overlap, then the maintenance of informative signaling systems raises an important problem.
Consider a circumstance described by John Maynard Smith regarding transfer between related individuals. Suppose a signaler can be either starving or just hungry, and they can signal that fact to another individual who has food. Suppose that they would like more food regardless of their state, but that the individual with food only wants to give them the food if they are starving. While both players have identical interests when the signaler is starving, they have opposing interests when the signaler is only hungry. When they are only hungry, they have an incentive to lie about their need in order to obtain the food. And if the signaler regularly lies, then the receiver should ignore the signal and do whatever they think is best.
Determining how signaling is stable in these situations has concerned both economists and biologists, and both have independently suggested that signal cost might play a role. If sending one signal is costly, it might only be worth the cost for the starving person to signal. The analysis of when costs are necessary to sustain honesty has been a significant area of research in both these fields.
See also
Cheap talk
Extensive form game
Incomplete information
Intuitive criterion and Divine equilibrium – refinements of PBE in signaling games.
Screening game – a related kind of game where the uninformed player, the receiver, rather than choosing an action based on a signal, moves first and gives the informed player, the sender, proposals based on the type of the sender. The sender selects one of these proposals.
Signalling (economics)
Signalling theory
References
Game theory game classes
Asymmetric information | Signaling game | [
"Physics",
"Mathematics"
] | 4,009 | [
"Asymmetric information",
"Game theory",
"Asymmetry",
"Game theory game classes",
"Symmetry"
] |
28,878,050 | https://en.wikipedia.org/wiki/Isoelastic%20function | In mathematical economics, an isoelastic function, sometimes constant elasticity function, is a function that exhibits a constant elasticity, i.e. has a constant elasticity coefficient. The elasticity is the ratio of the percentage change in the dependent variable to the percentage causative change in the independent variable, in the limit as the changes approach zero in magnitude.
For an elasticity coefficient r (which can take on any real value), the function's general form is given by
f(x) = k x^r ,
where k and r are constants. The elasticity is by definition
(x / f(x)) · (df(x)/dx) ,
which for this function simply equals r.
Derivation
Elasticity of demand is indicated by
r = (dQ/Q) / (dP/P) ,
where r is the elasticity, Q is quantity, and P is price.
Rearranging gets us:
Then integrating
Simplify
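Carried out explicitly, with k = e^C denoting the constant of integration, these steps read:

r = \frac{dQ/Q}{dP/P} = \frac{P}{Q}\frac{dQ}{dP}
\;\Longrightarrow\;
\frac{dQ}{Q} = r\,\frac{dP}{P}
\;\Longrightarrow\;
\ln Q = r \ln P + C
\;\Longrightarrow\;
Q = k P^{r} .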
Examples
Demand functions
An example in microeconomics is the constant elasticity demand function, in which p is the price of a product and D(p) is the resulting quantity demanded by consumers. For most goods the elasticity r (the responsiveness of quantity demanded to price) is negative, so it can be convenient to write the constant elasticity demand function with a negative sign on the exponent, in order for the coefficient to take on a positive value:
where r is now interpreted as the unsigned magnitude of the responsiveness.
An analogous function exists for the supply curve.
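A small numerical check of the constant-elasticity property follows; the values of k and r below are purely illustrative choices, not values from the text.

def demand(p, k=100.0, r=1.5):
    # constant-elasticity demand curve D(p) = k * p**(-r)
    return k * p ** (-r)

def point_elasticity(f, p, eps=1e-6):
    # numerical point elasticity: (p / f(p)) * df/dp
    dfdp = (f(p + eps) - f(p - eps)) / (2.0 * eps)
    return p * dfdp / f(p)

# the elasticity is -r (here -1.5) at every price:
print(point_elasticity(demand, 2.0), point_elasticity(demand, 50.0))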
Utility functions in the presence of risk
The constant elasticity function is also used in the theory of choice under risk aversion, which usually assumes that risk-averse decision-makers maximize the expected value of a concave von Neumann-Morgenstern utility function. In this context, with a constant elasticity of utility with respect to, say, wealth, optimal decisions on such things as shares of stocks in a portfolio are independent of the scale of the decision-maker's wealth. The constant elasticity utility function in this context is generally written as
where x is wealth and is the elasticity, with , ≠ 1 referred to as the constant coefficient of relative risk aversion (with risk aversion approaching infinity as → ∞).
See also
Constant elasticity of substitution
Power function
References
External links
Constant Elasticity Demand and Supply Curves
Mathematical economics | Isoelastic function | [
"Mathematics"
] | 442 | [
"Applied mathematics",
"Mathematical economics"
] |
28,883,744 | https://en.wikipedia.org/wiki/Schneider%E2%80%93Lang%20theorem | In mathematics, the Schneider–Lang theorem is a refinement by Serge Lang of a theorem of Theodor Schneider about the transcendence of values of meromorphic functions. The theorem implies both the Hermite–Lindemann and Gelfond–Schneider theorems, and implies the transcendence of some values of elliptic functions and elliptic modular functions.
Statement
Fix a number field K and meromorphic functions f1, ..., fN, of which at least two are algebraically independent and have orders ρ1 and ρ2, and such that the derivative fj′ lies in K[f1, ..., fN] for any j. Then there are at most
(ρ1 + ρ2)[K:Q]
distinct complex numbers w such that fi(w) lies in K for all i.
Examples
If f1(z) = z and f2(z) = e^z, then the theorem implies the Hermite–Lindemann theorem that e^α is transcendental for nonzero algebraic α: otherwise α, 2α, 3α, ... would be an infinite number of values at which both f1 and f2 are algebraic.
Similarly, taking f1(z) = e^z and f2(z) = e^{βz} for irrational algebraic β implies the Gelfond–Schneider theorem that if α and β are algebraic (with α ≠ 0, 1), then α^β is transcendental: otherwise log α, 2 log α, 3 log α, ... would be an infinite number of values at which both functions are algebraic.
Recall that the Weierstrass ℘ function satisfies the differential equation
(℘′)^2 = 4℘^3 − g2 ℘ − g3 .
Taking the three functions to be z, ℘(z), ℘′(z) shows that, for any nonzero algebraic α, if g2 and g3 are algebraic, then ℘(α) is transcendental.
Taking the functions to be and for a polynomial of degree shows that the number of points where the functions are all algebraic can grow linearly with the order .
Proof
To prove the result, Lang took two algebraically independent functions from the given family, say f and g, and then created an auxiliary function F which is a polynomial in f and g. Using Siegel's lemma, he then showed that one could assume F vanished to a high order at the points in question. Thus a high-order derivative of F takes a value of small size at one such point, "size" here referring to an algebraic property of a number. Using the maximum modulus principle, Lang also found a separate estimate for absolute values of derivatives of F. Standard results connect the size of a number and its absolute value, and the combined estimates imply the claimed bound on the number of points.
Bombieri's theorem
The result was later generalized to functions of several variables. Bombieri showed that if K is an algebraic number field and f1, ..., fN are meromorphic functions of d complex variables of order at most ρ generating a field K(f1, ..., fN) of transcendence degree at least d + 1 that is closed under all partial derivatives, then the set of points where all the functions fn have values in K is contained in an algebraic hypersurface in Cd of degree at most
A simpler proof of Bombieri's theorem was later given, with a slightly stronger bound of d(ρ1 + ... + ρd+1)[K:Q] for the degree, where the ρj are the orders of d + 1 algebraically independent functions. The special case d = 1 gives the Schneider–Lang theorem, with a bound of (ρ1 + ρ2)[K:Q] for the number of points.
Example
If is a polynomial with integer coefficients then the functions are all algebraic at a dense set of points of the hypersurface .
References
Diophantine approximation
Transcendental numbers | Schneider–Lang theorem | [
"Mathematics"
] | 624 | [
"Mathematical theorems",
"Diophantine approximation",
"Theorems in number theory",
"Mathematical relations",
"Mathematical problems",
"Approximations",
"Number theory"
] |
28,884,108 | https://en.wikipedia.org/wiki/Hopp%E2%80%93Woods%20scale | The Hopp–Woods hydrophilicity scale of amino acids is a method of ranking the amino acids in a protein according to their water solubility in order to search for surface locations on proteins, and especially those locations that tend to form strong interactions with other macromolecules such as proteins, DNA, and RNA.
Given the amino acid sequence of any protein, likely interaction sites can be identified by taking the moving average of six amino acid hydrophilicity values along the polypeptide chain, and looking for local peaks in the data plot.
In subsequent papers after their initial publication of the method, Hopp and Woods demonstrated that the data plots, or hydrophilicity profiles, contained much information about protein folding, and that the hydrophobic valleys of the profiles corresponded to internal structures of proteins such as beta-strands and alpha-helices. Furthermore, long hydrophobic valleys were shown to correspond quite closely to the membrane-spanning helices identified by the later-published Kyte and Doolittle hydropathic plotting method.
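The moving-average procedure is straightforward to implement. The sketch below is generic: the per-residue Hopp–Woods values are not reproduced here and must be supplied, from the published table, as a mapping from one-letter amino acid codes to hydrophilicity values; the function name is a choice made here.

def hydrophilicity_profile(sequence, scale, window=6):
    # Moving average of per-residue hydrophilicity values along the chain.
    # Local peaks in the returned profile suggest surface-exposed,
    # interaction-prone regions; long valleys suggest buried or
    # membrane-spanning stretches.
    values = [scale[aa] for aa in sequence]
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window + 1)
    ]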
References
Proteins
Amino acids | Hopp–Woods scale | [
"Chemistry",
"Biology"
] | 217 | [
"Biomolecules by chemical classification",
"Biotechnology stubs",
"Amino acids",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Proteins"
] |
28,886,032 | https://en.wikipedia.org/wiki/Latt%C3%A8s%20map | In mathematics, a Lattès map is a rational map f = ΘLΘ−1 from the complex sphere to itself such that Θ is a holomorphic map from a complex torus to the complex sphere and L is an affine map z → az + b from the complex torus to itself.
Lattès maps are named after French mathematician Samuel Lattès, who wrote about them in 1918.
References
Dynamical systems | Lattès map | [
"Physics",
"Mathematics"
] | 87 | [
"Mechanics",
"Dynamical systems"
] |
35,642,357 | https://en.wikipedia.org/wiki/Joseph%20Neng%20Shun%20Kwong | Joseph Neng Shun Kwong (October 28, 1916 – January 4, 1998) was a chemical engineer, most famous for his role in the development of the Redlich–Kwong equation of state.
Biography
Joseph Kwong was born in Chung Won, China in 1916, and emigrated to the United States as a child with his family. Kwong earned a bachelor's degree in 1937 from Stanford University in chemistry and basic medical sciences. He then earned a Master of Science degree in Chemical and Metallurgical Engineering at the University of Michigan, and was awarded a Ph.D. in chemical engineering from the University of Minnesota in 1942.
Kwong worked as a chemical engineer at Minnesota Mining and Manufacturing Co. (later 3M) from 1942 to 1944, helping to develop adhesive products. In 1944, Kwong began working at the Shell Development Company in California. During his time at Shell, Kwong met Otto Redlich, a physical chemist who had fled his native Austria to the United States in 1938 as the Nazis took power. The two presented a paper in 1948 describing what is now known as the Redlich–Kwong equation of state, which related the pressure, volume, and temperature of different compounds. Kwong returned to 3M in 1951 as a senior chemical engineer in the Chemical Division, working there until retirement in 1980, at the age of 64. The development of the Redlich–Kwong equation was his last significant contribution to theoretical thermodynamics. He died of pneumonia in Saint Paul, Minnesota, on January 4, 1998, at the age of 81.
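For reference, a minimal sketch of the Redlich–Kwong equation of state in its usual form, with the conventional constants 0.42748 and 0.08664 relating the parameters a and b to the critical temperature Tc and critical pressure Pc; the function name and SI-unit convention are choices made here.

from math import sqrt

R = 8.314462618  # universal gas constant, J/(mol*K)

def redlich_kwong_pressure(T, Vm, Tc, Pc):
    # P = R*T/(Vm - b) - a / (sqrt(T) * Vm * (Vm + b))
    a = 0.42748 * R**2 * Tc**2.5 / Pc
    b = 0.08664 * R * Tc / Pc
    return R * T / (Vm - b) - a / (sqrt(T) * Vm * (Vm + b))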
References
1916 births
1998 deaths
University of Minnesota College of Science and Engineering alumni
Chinese chemical engineers
Chinese physical chemists
Chinese emigrants to the United States
Thermodynamicists
University of Michigan College of Engineering alumni
Deaths from pneumonia in Minnesota | Joseph Neng Shun Kwong | [
"Physics",
"Chemistry"
] | 370 | [
"Thermodynamics",
"Thermodynamicists"
] |
24,326,638 | https://en.wikipedia.org/wiki/C22H26N2O2 | The molecular formula C22H26N2O2 (molar mass: 350.45 g/mol) may refer to:
Oil Blue 35, a blue anthraquinone dye
Vinpocetine, a synthetic derivative of the vinca alkaloid vincamine
Molecular formulas | C22H26N2O2 | [
"Physics",
"Chemistry"
] | 77 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,326,721 | https://en.wikipedia.org/wiki/C25H31NO3 | The molecular formula C25H31NO3 (molar mass: 393.52 g/mol, exact mass: 393.2304 u) may refer to:
HT-0712, also known as IPL-455903
Testosterone nicotinate
Molecular formulas | C25H31NO3 | [
"Physics",
"Chemistry"
] | 75 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,327,497 | https://en.wikipedia.org/wiki/NATO%20Accessory%20Rail | The NATO Accessory Rail (NAR), defined by NATO Standardization Agreement (STANAG) 4694, is a rail interface system standard for mounting accessory equipment such as telescopic sights, tactical lights, laser aiming modules, night vision devices, reflex sights, foregrips, bipods and bayonets to small arms such as rifles and pistols.
STANAG 4694 was approved by the NATO Army Armaments Group (NAAG), Land Capability Group 1 Dismounted Soldier (LCG1-DS) in 2009. It was published in March 2011.
The NATO Accessory Rail is backwards-compatible with the Draft STANAG 2324/MIL-STD 1913 Picatinny rail, which dates back to 3 February 1995, and was designed in conjunction with weapon specialists like Aimpoint, Beretta, Colt Firearms, FN Herstal and Heckler & Koch.
Technical specifications
According to the NATO Army Armaments Group the differences between the MIL-STD 1913 Picatinny rail and the STANAG 4694 are:
A metric reference drawing.
Additional new measurements and tolerances.
Adjustments of some measurements.
Tighter straightness tolerances (approximately 50%).
Another notable change concerns alignment: while the Picatinny rail system uses the V-angles for the alignment and reference of the accessory, NATO recommends using the top surface instead. Initial NATO tests had shown that the Picatinny rail system did not provide good repeatability; using the top surface for reference and alignment of the grabbers provided excellent repeatability.
Power rail
In 2009, the Pentagon revealed plans to develop a NATO rail that provides electrical power to rail mounted accessories in the future. At least two proposals were presented:
Wilcox Fusion Rail, powered from a central battery pack or a proprietary vertical grip.
The Tworx "I-Rail", powered from a central battery pack and offering data transmission.
In 2012, the NATO Powered Rail working group selected the I-Rail design as the basis for further standardization. In 2015, STANAG 4740/AEP-90 "NATO Powered Accessory Rail" was ratified describing the rail.
According to images on the Tworx website, the STANAG 4740 rail has the grabber sides of a normal NATO rail, but the top surface is hollowed out by two lines of metal contacts. As of the time of writing, no copies of the STANAG are available on the Internet, but patents from Tworx indicate that it uses a communication mechanism derived from Ethernet. The basic mode is based on 10BASE2, but higher-data-rate encodings for applications such as video streaming may also be available.
Video cameras
NATO rail design is also used on small video camera setups. Video rigs built around DSLR or mirrorless cameras use a variety of mounting systems for attaching microphones, monitors, lighting, and other accessories, and a NATO-compatible rail is one of these systems. Although all such products are advertised to meet the same specifications, there are reports that some of the rails and accessories are not compatible with one another.
See also
Rail system, an overview of the various rail equipment hardware types, some of which are listed below
Rail Integration System, generic term for a system for attaching accessories to small firearms
Weaver rail mount, early system used for scope mounts, still has some popularity in the civilian market
Picatinny rail (MIL-STD-1913), improved and standardized version of the Weaver mount, used both for scope mounts and for accessories (such as extra sling mounts, vertical grips, bipods, etc.). Major popularity in the civilian market.
Warsaw Pact rail, a rail mount system to connect telescopic sights to rifles
UIT rail, an older standard used for mounting slings particularly on competition firearms
KeyMod - open standard design to replace MIL-STD-1913 for mounting accessories (except for scope mounts)
M-LOK - free licensed competing standard to KeyMod
Zeiss rail, a ringless scope mounting standard
References
External links
NATO countries finalise plans for a standard rail adaptor system
2009 NATO STANAG 4694 vs 1913 MIL-STD Picatinny
Firearm components
Mechanical standards | NATO Accessory Rail | [
"Technology",
"Engineering"
] | 837 | [
"Firearm components",
"Mechanical standards",
"Components",
"Mechanical engineering"
] |
24,328,041 | https://en.wikipedia.org/wiki/Response%20reactions | The theory of response reactions (RERs) was elaborated for systems in which several physico-chemical processes run simultaneously in mutual interaction, with local thermodynamic equilibrium, and in which state variables called extents of reaction are allowed, but thermodynamic equilibrium proper is not required. It is based on detailed analysis of the Hessian determinant, using either the Gibbs or the De Donder method of analysis. The theory derives the sensitivity coefficient as the sum of the contributions of individual RERs. Thus phenomena which are in contradiction to over-general statements of the Le Chatelier principle can be interpreted. With the help of RERs the equilibrium coupling was defined. RERs could be derived based either on the species, or on the stoichiometrically independent reactions of a parallel system. The set of RERs is unambiguous in a given system; and the number of them (M) is , where S denotes the number of species and C refers to the number of components. In the case of three-component systems, RERs can be visualized on a triangle diagram.
References
Thermodynamics | Response reactions | [
"Physics",
"Chemistry",
"Mathematics"
] | 235 | [
"Thermodynamics stubs",
"Physical chemistry stubs",
"Thermodynamics",
"Dynamical systems"
] |
24,328,126 | https://en.wikipedia.org/wiki/CIEMAT | The Centre for Energy, Environmental and Technological Research (CIEMAT), until 1986 Junta de Energía Nuclear (JEN), is a Spanish public research institution.
History
The Centre for Energy, Environmental and Technological Research (CIEMAT) is a Spanish public research institution which specializes in energy and the environment. It is attached to the General Secretariat for Research of the Ministry of Science and Innovation.
In September 1948, Francisco Franco, by means of a decree of reserved (classified) character, created the Board of Atomic Investigations, or Junta de Investigaciones Atómicas (JIA), constituted on 8 October 1948 and formed by Jose Maria Otero de Navascués (director-general and president until 1974), Manuel Lora-Tamayo, Armando Durán Miranda and José Ramón Sobredo i Rioboo.
In 1951, after the end of this secret phase, it was renamed the Nuclear Energy Board, or Junta de Energía Nuclear (JEN), under the presidency of General Juan Vigón and with Otero de Navascués as head of the main directorate (he would later be its president again). It has since carried out research and technological development projects, serving as the technical reference that represents Spain in international forums and advising public administrations on matters within its areas of research.
In 1956, Guillermo Velarde joined the Division of Theoretical Physics of the Board (Junta), later being named Director of Technology, a post that encompassed the Divisions of Electronics, Theory and Calculation of Reactors, Nuclear Fusion, Engineering and Reactors in Operation.
References
External links
Research institutes in the Community of Madrid
Government agencies of Spain
Governmental nuclear organizations
Nuclear power in Spain
Nuclear research institutes
Nuclear technology organizations of Spain
Complutense University of Madrid | CIEMAT | [
"Engineering"
] | 343 | [
"Governmental nuclear organizations",
"Nuclear research institutes",
"Nuclear organizations"
] |
24,331,218 | https://en.wikipedia.org/wiki/Multiway%20switching | In building wiring, multiway switching is the interconnection of two or more electrical switches to control an electrical load from more than one location. A common application is in lighting, where it allows the control of lamps from multiple locations, for example in a hallway, stairwell, or large room.
In contrast to a simple light switch, which is a single pole, single throw (SPST) switch, multiway switching uses switches with one or more additional contacts and two or more wires are run between the switches. When the load is controlled from only two points, single pole, double throw (SPDT) switches are used. Double pole, double throw (DPDT) switches allow control from three or more locations.
In alternative designs, low-voltage relay or electronic controls can be used to switch electrical loads, sometimes without the extra power wires.
Three-way and four-way switches
The controlled load is often a lamp, but multiway switching is used to control other electrical loads, such as an electrical outlet, fans, pumps, heaters or other appliances. The electrical load may be permanently hard-wired, or plugged into a switched receptacle.
Three-way and four-way switches make it possible to control a light from multiple locations, such as the top and bottom of a stairway, either end of a long hallway, or multiple doorways into a large room. These switches appear externally similar to single pole, single throw (SPST) switches, but have extra connections which allow a circuit to be controlled from multiple locations. Toggling the switch disconnects one "traveler" terminal and connects the other.
Electrically, a typical "3-way" switch is a single pole, double throw (SPDT) switch. By correctly connecting two of these switches together, toggling either switch changes the state of the load from off to on, or vice versa. The switches may be arranged so that they are in the same orientation for off, and contrasting orientations for on.
A "4-way" (intermediate) switch is a purpose built double pole, double throw (DPDT) switch, internally wired in manufacture to reverse the connections between the input and output and having only four external terminals. This switch has two pairs of "traveler" terminals that it connects either straight through, or crossed over (transposed, or swapped). An intermediate switch can, however, be implemented by adding appropriate external wiring to an ordinary (six terminal) DPDT switch, or by using a separate DPDT relay.
By connecting one or more 4-way (intermediate) switches in-line, with 3-way switches at either end, the load can be controlled from three or more locations. Toggling any switch changes the state of the load from off to on, or from on to off.
Two locations
Switching a load on or off from two locations (for instance, turning a light on or off from either end of a flight of stairs) requires two SPDT switches. There are several arrangements of wiring to achieve this.
Traveler system
In the traveler system, also called the "common" system, the power line (hot, shown in red) is fed into the common terminal of one of the switches. The switches are then connected to each other by a pair of wires called "travelers" (or "strappers" in the UK), and the lamp is connected to the common terminal of the second switch, as shown.
Using the traveler system, there are four possible permutations of switch positions: two with the light on and two with the light off.
{| class="wikitable"
|-
! Off !! On
|-
| ||
|-
| ||
|}
Alternative system
An alternative system, known as the "California 3-way", or "coast 3-way" connection system allows both switched and unswitched loads to be connected near both switches without running too many additional wires. This is useful in long hallways that may need more than one light to be controlled by the two switches, and which may also have receptacles needing unswitched power as well as the switched lights. If only one light is being switched and no unswitched connection is needed, this system uses more long wires than the standard system (four instead of three), but if the switched light is close to the switch near the fuse box and a receptacle needs to be powered near the far switch it will use fewer long wires (four instead of five).
Carter system
The Carter system, also known as the Chicago system, was a method of wiring three-way switches in the era of early knob-and-tube wiring. This now-obsolete wiring method has been prohibited by the USA National Electrical Code since 1923, even in new knob-and-tube installations which are still permitted under certain circumstances. This wiring system may still be encountered in old "grandfathered" electrical installations.
In the Carter system, the incoming live (energized) and neutral wires were connected to the traveler screws of both three-way switches, and the lamp was connected between the common screws of the two switches. If both switches were flipped to hot or both were flipped to neutral, the light would remain off; but if they were switched to opposite positions, the light would illuminate. The advantage of this method was that it used just one wire to the light from each switch, since both the hot and the neutral conductors were present at each switch.
The major problem with this method is that in one of the four switch combinations the socket around the bulb is electrified at both of its terminals even though the bulb is not lit. As the shell may be energized, even with the light switched off, this poses a risk of electrical shock when changing the bulb. This method is therefore prohibited in modern building wiring where Edison screw based lamps are used.
More than two locations
For more than two locations, two of the interconnecting wires must be passed through an intermediate switch, wired to swap or transpose the pair. Any number of intermediate switches can be inserted, allowing for any number of locations. This requires two wires along the sequence of switches.
Traveler system
Using three switches, there are eight possible permutations of switch positions: four with the light on and four with the light off. Note that these diagrams also use the American electrical wiring names.
{| class="wikitable"
|-
! Off !! On
|-
| ||
|-
| ||
|-
| ||
|-
| ||
|}
As mentioned above, the above circuit can be extended by using multiple 4-way switches between the 3-way switches to extend switching ability to any number of locations.
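Functionally, these circuits behave like a parity (exclusive-OR) network: toggling any single switch always changes the lamp state. The following is a minimal sketch of that behaviour; the function name and the convention that True means "lamp on" are illustrative assumptions, not part of any wiring standard.

```python
def lamp_is_on(first_3way, intermediates, last_3way):
    """Model a multiway circuit: two 3-way (SPDT) end switches plus any
    number of 4-way (intermediate, crossover) switches in between.

    Each switch position is a bool; the result is the XOR (parity) of all
    positions, which is why toggling any single switch toggles the lamp.
    Whether True corresponds to "on" depends on how the travelers are wired.
    """
    state = first_3way
    for crossed in intermediates:   # a 4-way switch either passes the
        state ^= crossed            # travelers straight through or swaps them
    return state ^ last_3way

# Flipping any one switch changes the result:
base = lamp_is_on(False, [False, False], False)
assert lamp_is_on(True,  [False, False], False) != base
assert lamp_is_on(False, [True,  False], False) != base
assert lamp_is_on(False, [False, False], True)  != base
```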
Low voltage relay switching
Systems based on relays with low-voltage control circuits permit switching the power to lighting loads from an arbitrary number of locations. For each load, a latching relay is used that mechanically maintains its on- or off-state, even if power to the building is interrupted. Mains power is wired through the relay to the load.
Instead of running mains voltage to the switches, a low voltage—typically 24 V AC—is connected to remote momentary toggle or rocker switches. The momentary switches usually have SPDT contacts in an (ON)-OFF-(ON) configuration. Pushing the switch actuator in one direction causes the relay contacts to close; pushing it in the opposite direction causes the relay contacts to open. Any number of additional rocker switches can be wired in parallel, as needed in multiple locations. An optional master control can be added that turns all lights in the facility on or off simultaneously under the control of a timer or computer.
After an initial burst of popularity in the 1960s, residential use of such relay-based low voltage systems has become rare. Equipment for new installations is not commonly carried by electrical suppliers, although it is still possible to find parts for maintaining existing installations.
Electronic remote switching
As of 2012, multiway switching in residential and commercial applications is increasingly being implemented with power line signalling and wireless signalling techniques. These include the X10 system, available since the 1970s, and newer hybrid wired/wireless systems, such as Insteon and Z-Wave. This is particularly useful when retrofitting multi-way circuits into existing wiring, often avoiding the need to put holes in walls to run new wires.
Remote-control systems are increasingly used in commercial buildings as part of lighting systems under semi-automatic control, for better safety, security, and energy conservation.
References
Further reading
Electrical wiring
Electricity supply circuits | Multiway switching | [
"Physics",
"Engineering"
] | 1,745 | [
"Electrical systems",
"Building engineering",
"Physical systems",
"Electricity supply circuits",
"Electrical engineering",
"Electrical wiring"
] |
30,366,747 | https://en.wikipedia.org/wiki/Quantum%20mechanics%20of%20time%20travel | The theoretical study of time travel generally follows the laws of general relativity. Quantum mechanics requires physicists to solve equations describing how probabilities behave along closed timelike curves (CTCs), which are theoretical loops in spacetime that might make it possible to travel through time.
In the 1980s, Igor Novikov proposed the self-consistency principle. According to this principle, any changes made by a time traveler in the past must not create historical paradoxes. If a time traveler attempts to change the past, the laws of physics will ensure that events unfold in a way that avoids paradoxes. This means that while a time traveler can influence past events, those influences must ultimately lead to a consistent historical narrative.
However, Novikov's self-consistency principle has been debated in relation to certain interpretations of quantum mechanics. Specifically, it raises questions about how it interacts with fundamental principles such as unitarity and linearity. Unitarity ensures that the total probability of all possible outcomes in a quantum system always sums to 1, preserving the predictability of quantum events. Linearity ensures that quantum evolution preserves superpositions, allowing quantum systems to exist in multiple states simultaneously.
There are two main approaches to explaining quantum time travel while incorporating Novikov's self-consistency principle. The first approach uses density matrices to describe the probabilities of different outcomes in quantum systems, providing a statistical framework that can accommodate the constraints of CTCs. The second approach involves state vectors, which describe the quantum state of a system. Both approaches can lead to insights into how time travel might be reconciled with quantum mechanics, although they may introduce concepts that challenge conventional understandings of these theories.
Deutsch's prescription for closed timelike curves (CTCs)
In 1991, David Deutsch proposed a method to explain how quantum systems interact with closed timelike curves (CTCs) using time evolution equations. This method aims to address paradoxes like the grandfather paradox, which suggests that a time traveler who stops their own birth would create a contradiction. One interpretation of Deutsch's approach is that it allows for self-consistency without necessarily implying the existence of parallel universes.
Method overview
To analyze the system, Deutsch divided it into two parts: a subsystem outside the CTC and the CTC itself. To describe the combined evolution of both parts over time, he used a unitary operator (U). This approach relies on a specific mathematical framework to describe quantum systems. The overall state is represented by combining the density matrices (ρ) for both parts using a tensor product (⊗). While Deutsch's approach does not assume initial correlation between these two parts, this does not inherently break time symmetry.
Deutsch's proposal uses the following key equation to describe the fixed-point density matrix (ρCTC) for the CTC:
$\rho_{CTC} = \operatorname{Tr}_{\text{sys}}\!\left[ U \left( \rho_{\text{sys}} \otimes \rho_{CTC} \right) U^{\dagger} \right]$, where $\rho_{\text{sys}}$ is the density matrix of the subsystem outside the CTC.
The unitary evolution involving both the CTC and the external subsystem determines the density matrix of the CTC as a fixed point, focusing on its state.
Ensuring self-consistency
Deutsch's proposal ensures that the CTC returns to a self-consistent state after each loop. However, if a system retains memories after traveling through a CTC, it could create scenarios where it appears to have experienced different possible pasts.
Furthermore, Deutsch's method may not align with common probability calculations in quantum mechanics unless we consider multiple paths leading to the same outcome. There can also be multiple solutions (fixed points) for the system's state after the loop, introducing randomness (nondeterminism). Deutsch suggested using solutions that maximize entropy, aligning with systems' natural tendency to evolve toward higher entropy states.
To calculate the final state outside the CTC, trace operations consider only the external system's state after combining both systems' evolution.
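As an illustration of how the fixed-point condition can be handled numerically, the sketch below iterates the map on a two-qubit example (one chronology-respecting qubit and one CTC qubit). The CNOT interaction, the input state and all function names are illustrative assumptions, not taken from Deutsch's paper, and the plain iteration used here is not guaranteed to converge for every choice of U.

```python
import numpy as np

def trace_out_system(rho4):
    """Partial trace over the chronology-respecting qubit (first factor)."""
    r = rho4.reshape(2, 2, 2, 2)          # indices: s, c, s', c'
    return np.einsum('scsd->cd', r)

def trace_out_ctc(rho4):
    """Partial trace over the CTC qubit (second factor)."""
    r = rho4.reshape(2, 2, 2, 2)
    return np.einsum('sctc->st', r)

def deutsch_fixed_point(U, rho_sys, iters=200):
    """Iterate rho_CTC <- Tr_sys[ U (rho_sys x rho_CTC) U^dagger ]."""
    rho_ctc = np.eye(2) / 2               # start from the maximally mixed state
    for _ in range(iters):
        joint = U @ np.kron(rho_sys, rho_ctc) @ U.conj().T
        rho_ctc = trace_out_system(joint)
    return rho_ctc

# Example interaction: a CNOT in which the external qubit flips the CTC qubit.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)
rho_sys = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # |+><+|
rho_ctc = deutsch_fixed_point(CNOT, rho_sys)

# State of the external system after the interaction (trace over the CTC):
rho_out = trace_out_ctc(CNOT @ np.kron(rho_sys, rho_ctc) @ CNOT.conj().T)
print(np.round(rho_ctc, 3))   # maximally mixed CTC state for this example
print(np.round(rho_out, 3))
```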
Implications and criticisms
Deutsch's approach has intriguing implications for paradoxes like the grandfather paradox. For instance, if everything except a single qubit travels through a time machine and flips its value according to a specific operator:
$U = \sigma_x$ (a NOT operation that flips the qubit).
Deutsch argues that maximizing von Neumann entropy is relevant in this context. In this scenario, outcomes may mix starting at 0 and ending at 1 or vice versa. While this interpretation can align with many-worlds views of quantum mechanics, it does not necessarily imply branching timelines after interacting with a CTC.
Researchers have explored Deutsch's ideas further. If feasible, his model might allow computers near a time machine to solve problems beyond classical capabilities; however, debates about CTCs' feasibility continue.
Despite its theoretical nature, Deutsch's proposal has faced significant criticism. For example, Tolksdorf and Verch demonstrated that quantum systems in spacetimes without CTCs can achieve results similar to Deutsch's criterion with any prescribed accuracy. This finding challenges claims that quantum simulations of CTCs are related to closed timelike curves as understood in general relativity. Their research also shows that classical systems governed by statistical mechanics could also meet these criteria without invoking peculiarities attributed solely to quantum mechanics. Consequently, they argue that their findings raise doubts about Deutsch's explanation of his time travel scenario using many-worlds interpretations of quantum physics.
Lloyd's prescription: Post-selection and time travel with CTCs
Seth Lloyd proposed an alternative approach to time travel with closed timelike curves (CTCs), based on "post-selection" and path integrals. Path integrals are a powerful tool in quantum mechanics that involve summing probabilities over all possible ways a system could evolve, including paths that do not strictly follow a single timeline. Unlike classical approaches, path integrals can accommodate histories involving CTCs, although their application requires careful consideration of quantum mechanics' principles.
He proposes an equation that describes the transformation of the density matrix, which represents the system's state outside the CTC after a time loop:
$\rho_{\text{out}} = \dfrac{C \,\rho_{\text{in}}\, C^{\dagger}}{\operatorname{Tr}\!\left[ C \,\rho_{\text{in}}\, C^{\dagger} \right]}$, where $C = \operatorname{Tr}_{CTC}\!\left[ U \right]$.
In this equation:
$\rho_{\text{out}}$ is the density matrix of the system after interacting with the CTC.
$\rho_{\text{in}}$ is the initial density matrix of the system before the time loop.
$C$ is a transformation operator derived from the trace operation over the CTC, applied to the unitary evolution operator $U$.
The transformation relies on the trace operation, which summarizes aspects of the matrix. If this trace term is zero ($\operatorname{Tr}[ C \,\rho_{\text{in}}\, C^{\dagger} ] = 0$), it indicates that the transformation is invalid in that context, but does not directly imply a paradox like the grandfather paradox. Conversely, a non-zero trace suggests a valid transformation leading to a unique solution for the external system's state.
Thus, Lloyd's approach aims to filter out histories that lead to inconsistencies by allowing only those consistent with both initial and final states. This aligns with post-selection, where specific outcomes are considered based on predetermined criteria; however, it does not guarantee that all paradoxical scenarios are eliminated.
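A minimal numerical sketch of this post-selected map is given below; the two-qubit interaction, the input state and the function names are illustrative assumptions rather than anything prescribed by Lloyd's proposal.

```python
import numpy as np

def partial_trace_ctc_of_operator(U):
    """C = Tr_CTC[U] for a two-qubit U ordered as (system x CTC)."""
    u = U.reshape(2, 2, 2, 2)              # indices: s, c, s', c'
    return np.einsum('sctc->st', u)

def pctc_map(U, rho_in):
    """rho_out = C rho_in C^dagger / Tr[C rho_in C^dagger]."""
    C = partial_trace_ctc_of_operator(U)
    unnormalized = C @ rho_in @ C.conj().T
    norm = np.trace(unnormalized)
    if abs(norm) < 1e-12:                  # Tr[C rho C^dagger] = 0: map undefined here
        raise ValueError("post-selection succeeds with probability zero")
    return unnormalized / norm

# Example: a CNOT in which the CTC qubit controls a flip of the external qubit,
# in the ordering (system x CTC). Here C is proportional to a projector onto |+>,
# so the post-selected map drives almost any input toward |+><+|.
CNOT_ctc_control = np.array([[1, 0, 0, 0],
                             [0, 0, 0, 1],
                             [0, 0, 1, 0],
                             [0, 1, 0, 0]], dtype=complex)
rho_in = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
print(np.round(pctc_map(CNOT_ctc_control, rho_in), 3))
```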
Entropy and computation
Michael Devin (2001) proposed a model that incorporates closed timelike curves (CTCs) into thermodynamics, suggesting it as a potential way to address the grandfather paradox. This model introduces a "noise" factor to account for imperfections in time travel, proposing a framework that could help mitigate paradoxes.
See also
Novikov self-consistency principle
Grandfather paradox
Causal loop
Chronology protection conjecture
Retrocausality
References
Time travel
Quantum mechanics
Quantum gravity | Quantum mechanics of time travel | [
"Physics"
] | 1,487 | [
"Physical quantities",
"Time",
"Time travel",
"Theoretical physics",
"Unsolved problems in physics",
"Quantum mechanics",
"Quantum gravity",
"Spacetime",
"Physics beyond the Standard Model"
] |
30,367,004 | https://en.wikipedia.org/wiki/ACLAME | ACLAME (The CLAssification of Mobile genetic Elements) is a database of sequenced mobile genetic elements.
See also
Gypsy (database)
Mobile genetic elements
References
External links
http://aclame.ulb.ac.be (broken at 30/Jun/2022)
Biological databases
Mobile genetic elements | ACLAME | [
"Biology"
] | 64 | [
"Bioinformatics",
"Molecular genetics",
"Mobile genetic elements",
"Biological databases"
] |
30,373,123 | https://en.wikipedia.org/wiki/Amitsur%E2%80%93Levitzki%20theorem | In algebra, the Amitsur–Levitzki theorem states that the algebra of n × n matrices over a commutative ring satisfies a certain identity of degree 2n. It was proved by . In particular matrix rings are polynomial identity rings such that the smallest identity they satisfy has degree exactly 2n.
Statement
The standard polynomial of degree n is
$S_n(x_1, \ldots, x_n) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, x_{\sigma(1)} x_{\sigma(2)} \cdots x_{\sigma(n)}$
in non-commuting variables x1, ..., xn, where the sum is taken over all n! elements of the symmetric group Sn.
The Amitsur–Levitzki theorem states that for n × n matrices A1, ..., A2n whose entries are taken from a commutative ring,
$S_{2n}(A_1, A_2, \ldots, A_{2n}) = 0.$
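The identity is easy to check numerically for small n. The sketch below (an illustration only, not a proof) evaluates the standard polynomial S_4 on four random 2 × 2 integer matrices and confirms that the result is the zero matrix.

```python
from itertools import permutations
import numpy as np

def sign(perm):
    """Sign of a permutation given as a sequence of indices (inversion count)."""
    s = 1
    perm = list(perm)
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def standard_polynomial(mats):
    """S_k(A_1, ..., A_k) = sum over permutations sigma of
    sgn(sigma) * A_{sigma(1)} ... A_{sigma(k)}."""
    k = len(mats)
    total = np.zeros_like(mats[0])
    for perm in permutations(range(k)):
        prod = np.eye(mats[0].shape[0], dtype=mats[0].dtype)
        for i in perm:
            prod = prod @ mats[i]
        total = total + sign(perm) * prod
    return total

rng = np.random.default_rng(0)
A = [rng.integers(-5, 6, size=(2, 2)) for _ in range(4)]   # four 2x2 matrices (n = 2)
print(standard_polynomial(A))   # S_4(A_1, ..., A_4) is the zero matrix (Amitsur-Levitzki)
```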
Proofs
Amitsur and Levitzki (1950) gave the first proof.
Kostant (1958) deduced the Amitsur–Levitzki theorem from the Koszul–Samelson theorem about primitive cohomology of Lie algebras.
Swan (1963, 1969) gave a simple combinatorial proof as follows. By linearity it is enough to prove the theorem when each matrix has only one nonzero entry, which is 1. In this case each matrix can be encoded as a directed edge of a graph with n vertices. So all matrices together give a graph on n vertices with 2n directed edges. The identity holds provided that for any two vertices A and B of the graph, the number of odd Eulerian paths from A to B is the same as the number of even ones. (Here a path is called odd or even depending on whether its edges taken in order give an odd or even permutation of the 2n edges.) Swan showed that this was the case provided the number of edges in the graph is at least 2n, thus proving the Amitsur–Levitzki theorem.
Razmyslov (1974) gave a proof related to the Cayley–Hamilton theorem.
Rosset (1976) gave a short proof using the exterior algebra of a vector space of dimension 2n.
Procesi (2015) gave another proof, showing that the Amitsur–Levitzki theorem is the Cayley–Hamilton identity for the generic Grassmann matrix.
References
Linear algebra
Theorems in algebra
Matrix theory | Amitsur–Levitzki theorem | [
"Mathematics"
] | 423 | [
"Theorems in algebra",
"Linear algebra",
"Mathematical problems",
"Mathematical theorems",
"Algebra"
] |
30,377,575 | https://en.wikipedia.org/wiki/Dissolved%20gas%20flotation | Dissolved gas flotation (DGF) systems are used for a variety of applications throughout the world. The process floats solids, oils and other contaminants to the surface of liquids. Once on the surface these contaminants are skimmed off and removed from the liquids. Oil and gas production facilities have used flotation systems to remove oil and solids from their produced and processed water (wastewater) for many years. The relative density of candle wax is 0.93, hence objects made of wax float on water.
Process description
The keys to good separation are both gravity and the creation of millions of very small bubbles. Based on Stokes' law, the size of the oil droplet and the density of the droplet will affect the rate of rise to the surface. The larger and lighter the droplet, the faster it will rise to the surface. By attaching a small gas bubble to an oil droplet, the density of the droplet decreases, which increases the rate at which it will rise to the surface. Therefore, the smaller the gas bubbles created, the smaller the oil droplets that can be floated to the surface. Efficient flotation systems need to create as many small bubbles as possible.
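A rough numerical sketch of this effect, using Stokes' law for the terminal rise velocity of a small sphere, is shown below; the droplet sizes, densities and water properties are illustrative values, not data from any particular flotation system.

```python
G = 9.81            # m/s^2
MU_WATER = 1.0e-3   # Pa*s, dynamic viscosity of water at about 20 C
RHO_WATER = 998.0   # kg/m^3

def stokes_rise_velocity(diameter_m, rho_particle):
    """Terminal rise (or settling) velocity of a small sphere in water, m/s."""
    return G * diameter_m**2 * (RHO_WATER - rho_particle) / (18 * MU_WATER)

oil_only = stokes_rise_velocity(50e-6, 900.0)       # 50 um oil droplet alone
with_bubble = stokes_rise_velocity(80e-6, 450.0)    # droplet + bubble: larger, much lighter
print(f"oil droplet alone : {oil_only * 1000:.3f} mm/s")
print(f"droplet + bubble  : {with_bubble * 1000:.3f} mm/s")
```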
The method in which the bubbles are introduced into the water stream and retention time are also important factors. The average retention time for a vertical unit is typically 4 to 5 minutes and 5 to 6 minutes for a horizontal unit.
DGF pump
The impeller in a DGF pump is designed with dual sides. One side is designed to drive the liquid like a normal centrifugal pump and the other side is designed to draw vapor into the pump and mix it with the liquid. In addition to the new impeller, a special seal was invented to extend the life of the pump. With these innovations the pump creates a sub-atmospheric pressure region within the pump's seal chamber. As the impeller draws in the vapor it is mixed with the liquid being pumped and compressed into micro-fine bubbles. Because of the close tolerance between the backvanes of the impeller and the backplate of the pump the vapor is sheared into fine bubbles and then they are compressed in the sub-atmospheric pressure region of the pump. These fine bubbles become dissolved into the liquid within the volute of the pump.
The result of this process provides bubbles of similar size to those of a dissolved air flotation system. The backpressure valve on the discharge piping can regulate the bubble size in a DGF pump. The bubble size ranges from 50 micrometers down to 1 micrometer or less.
See also
API oil-water separator
Flotation process
Induced gas flotation
Industrial wastewater treatment
List of waste-water treatment technologies
Dissolved air flotation
References
Design and operation of Dissolved-Gas Flotation Equipment for the Treatment Of Oilfield Produced Brines M.C. Sport, Shell Oil Company, Journal of Petroleum Technology, Volume 22, Number 8, August 1970.
Oil refining
Flotation processes
Waste treatment technology | Dissolved gas flotation | [
"Chemistry",
"Engineering"
] | 604 | [
"Water treatment",
"Petroleum technology",
"Environmental engineering",
"Oil refining",
"Flotation processes",
"Waste treatment technology"
] |
30,380,229 | https://en.wikipedia.org/wiki/Wet%20nanotechnology | Wet nanotechnology (also known as wet nanotech) involves working up to large masses from small ones.
Wet nanotechnology requires water in which the process occurs. The process also involves chemists and biologists trying to reach larger scales by putting together individual molecules. While Eric Drexler put forth the idea of nano-assemblers working dry, wet nanotech appears to be the likely first area in which something like a nano-assembler may achieve economic results. Pharmaceuticals and bioscience are central features of most nanotech start-ups. Richard A.L. Jones calls nanotechnology that steals bits of natural nanotechnology and puts them in a synthetic structure biokleptic nanotechnology. He calls building with synthetic materials according to nature's design principles biomimetic nanotechnology.
Using these guiding principles could lead to trillions of nanotech robots that resemble bacteria in structural properties entering a person's bloodstream to carry out medical treatments.
Background
Wet nanotechnology is an anticipated sub-discipline of nanotech that will mostly be dominated by the different forms of wet engineering. The processes used will take place in aqueous solutions and are very close to those of biotechnology or bio-molecular manufacturing, which is largely concerned with the production of biomolecules such as proteins and DNA/RNA. There is some overlap between biotechnology and wet nanotechnology because living things are inherently bottom-up engineered, and any exploitation of this by biotechnologists means they dabble in bottom-up engineering (though mostly at the level of producing macromolecules such as proteins and nucleic acids from their monomer units). Wet nanotech, however, seeks to analyse living things and their components as engineering systems, aims to understand them well enough to control the behavior of the system, and tries to derive principles and methods that can be applied more broadly to bottom-up manufacturing: to manipulating matter on the atomic and molecular scales and to creating machines or devices at the nanometer and microscopic scales. Biotech is mostly about exploiting living systems in any way possible. Molecular biology and related disciplines describe the mechanisms of proteins in particular, and of nucleic acids to a lesser extent, as "molecular machines". In order for engineers to mimic these nanoscale machines in a way that they could be produced with some efficiency, they must look into bottom-up manufacturing. Bottom-up manufacturing deals with manipulating individual atoms during the manufacturing process, so that there is absolute control over their placement and interactions.
Then from the atomic scale, nanomachines could be made and even be designed to self-replicate themselves as long as they are designed in an environment with copious amount of the needed materials. Because individual atoms are being manipulated in the process, bottom-up manufacturing is often referred to as “atom by atom” manufacturing. If the manufacturing of nanomachines can be made more readily available through improved techniques, there could be a large economic and social impact. This would start with improvements in making microelectromechanical systems and then would allow for the creation of nanoscale biological sensors along with things that have not been thought of yet. This is because “wet” nanotech is only in the beginning of its life. Scientists and engineers alike feel that biomimetics is a great way to start looking at creating nanoscale machines. Humans have only had a few thousand years to try to learn about the mechanics of things at really small scales. However, nature has been working on perfecting the design and functionality of nanomachines for millions of years. This is why there are already nanomachines, such as ATP synthase, working in our bodies that have an unheard of 95% efficiency.
“Wet” vs. “Dry”
Wet nanotechnology is a form of wet engineering as opposed to dry engineering. There are different fields that deal with those two types of engineering. Biologists, from the point of view of nanotechnology, deal with wet engineering. They study processes that happen in life, and for the most part those processes take place in aqueous environments. Our bodies are made up mostly of water.
Electrical and mechanical engineers are on the other side of the line in dry engineering. They are involved with processes and manufacturing that does not occur in aqueous environments.
For the most part, wet engineering deals with "soft" materials that allow for flexibility, which is vital at the nanoscale in biological manufacturing. Dry engineers mostly handle things with rigid structures and parts. These differences stem from the fact that the forces that the two types of engineers must deal with are very different. At a larger scale, most things are dominated by Newtonian physics. However, when one looks at the nanoscale, especially in biological matters, the dominating influence is Brownian motion.
Because nanotechnology in the new age is going to most likely deal with both dry and wet in conjunction with each other, there is going to have to be a change in the way society looks at engineering and manufacturing. People will have to be not only well educated in engineering but also in biology because the integration of the two is how there will be the largest improvements in nanotechnology.
Brownian Motion as it relates to Wet Nanotech
Natural nanomachines, each "a complex precision microscopic-sized machine that fits the standard definition of a machine", such as ATP synthase and the T4 bacteriophage, show scientists and biologists that similar types of machines can, in principle, be made at the same scale. However, nature has had a long time to perfect the building and creation of these nanomachines, and humankind has only just begun to look into them with greater interest.
This interest may have been sparked by the existence of nanomachines such as ATP synthase, the enzyme that produces adenosine triphosphate (ATP) and has been described as "second in importance only to DNA". ATP is the main energy carrier that our bodies contain, and without it life as we know it would not be able to flourish or even survive.
What does Brownian motion have to do with complex nanomachines?
Brownian motion is the random, constantly fluctuating agitation, produced by collisions with surrounding molecules, that acts on a body in microscale and nanoscale environments. It is an influence that mechanical engineers and physicists are not used to dealing with because, at the larger scales on which humankind tends to think of things, it does not need to be taken into account. People think of gravity, inertia, and other familiar forces that act on us all the time; at the nanoscale, however, those forces are mostly "negligible".
In order for nanomachines to be recreated by humans, either there will need to be discoveries that allow us to understand how to "exploit" Brownian motion as nature does, or a way must be found to work around it by using materials that are rigid enough to stand up to this agitation. The way that nature has been able to exploit Brownian motion is through self-assembly. Random thermal motion pushes and pulls all of the proteins and amino acids around in our bodies and sticks them together in all sorts of combinations. The combinations that do not work separate and continue with their random attachment; the combinations that do work produce things like ATP synthase. Through this process nature has been able to make a nanomachine that is 95% efficient, a feat that humans have not been able to accomplish yet. This is all because nature does not try to work around these forces; it uses them to its advantage.
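A rough order-of-magnitude sketch of why thermal agitation dominates at this scale is given below: for a particle of roughly protein-like density about 100 nm across, the characteristic thermal "force" scale kT/d dwarfs the particle's weight. The numbers are illustrative only.

```python
import math

KB = 1.380649e-23      # Boltzmann constant, J/K
T = 310.0              # body temperature, K
G = 9.81               # m/s^2

d = 100e-9                             # 100 nm particle
rho = 1100.0                           # kg/m^3, roughly protein-like density
volume = math.pi * d**3 / 6
mass = rho * volume

weight = mass * G                      # gravitational force on the particle
thermal_force = KB * T / d             # thermal energy kT spread over the particle size

print(f"weight        ~ {weight:.2e} N")
print(f"thermal force ~ {thermal_force:.2e} N")
print(f"ratio thermal/gravity ~ {thermal_force / weight:.1e}")   # thousands of times larger
```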
Growing cells in culture to take advantage of their internal chemical synthesis machinery can be considered a form of nanotechnology but this machinery has also been manipulated outside of living cells.
References
Nanotechnology | Wet nanotechnology | [
"Materials_science",
"Engineering"
] | 1,591 | [
"Nanotechnology",
"Materials science"
] |
2,827,774 | https://en.wikipedia.org/wiki/Charpy%20impact%20test | In materials science, the Charpy impact test, also known as the Charpy V-notch test, is a standardized high strain rate test which determines the amount of energy absorbed by a material during fracture. Absorbed energy is a measure of the material's notch toughness. It is widely used in industry, since it is easy to prepare and conduct and results can be obtained quickly and cheaply. A disadvantage is that some results are only comparative. The test was pivotal in understanding the fracture problems of ships during World War II.
The test was developed around 1900 by S. B. Russell (1898, American) and Georges Charpy (1901, French). The test became known as the Charpy test in the early 1900s due to the technical contributions and standardization efforts by Charpy.
History
In 1896, S. B. Russell introduced the idea of residual fracture energy and devised a pendulum fracture test. Russell's initial tests measured un-notched samples. In 1897, Frémont introduced a test to measure the same phenomenon using a spring-loaded machine. In 1901, Georges Charpy proposed a standardized method improving Russell's by introducing a redesigned pendulum and notched sample, giving precise specifications.
Definition
The apparatus consists of a pendulum of known mass and length that is dropped from a known height to impact a notched specimen of material. The energy transferred to the material can be inferred by comparing the difference in the height of the hammer before and after the fracture (energy absorbed by the fracture event).
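In other words, the absorbed energy follows directly from the pendulum mass and the loss of swing height. A minimal sketch with illustrative (hypothetical) test values:

```python
G = 9.81   # m/s^2

def absorbed_energy(mass_kg, h_initial_m, h_final_m):
    """Energy absorbed by the specimen during fracture, in joules."""
    return mass_kg * G * (h_initial_m - h_final_m)

# A hypothetical test: a 20 kg hammer released from 1.5 m swings up to only
# 1.1 m after breaking the specimen.
print(f"{absorbed_energy(20.0, 1.5, 1.1):.1f} J")   # about 78.5 J
```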
The notch in the sample affects the results of the impact test, thus it is necessary for the notch to be of regular dimensions and geometry. The size of the sample can also affect results, since the dimensions determine whether or not the material is in plane strain. This difference can greatly affect the conclusions made.
The Standard methods for Notched Bar Impact Testing of Metallic Materials can be found in ASTM E23, ISO 148-1 or EN 10045-1 (retired and replaced with ISO 148-1), where all the aspects of the test and equipment used are described in detail.
Quantitative results
The quantitative result of the impact test is the energy needed to fracture a material, which can be used to measure the toughness of the material. There is a connection to the yield strength, but it cannot be expressed by a standard formula. Also, the strain rate may be studied and analyzed for its effect on fracture.
The ductile-brittle transition temperature (DBTT) may be derived from the temperature where the energy needed to fracture the material drastically changes. However, in practice there is no sharp transition and it is difficult to obtain a precise transition temperature (it is really a transition region). An exact DBTT may be empirically derived in many ways: a specific absorbed energy, change in aspect of fracture (such as 50% of the area is cleavage), etc.
Qualitative results
The qualitative results of the impact test can be used to determine the ductility of a material. If the material breaks on a flat plane, the fracture was brittle, and if the material breaks with jagged edges or shear lips, then the fracture was ductile. Usually, a material does not break in just one way or the other and thus comparing the jagged to flat surface areas of the fracture will give an estimate of the percentage of ductile and brittle fracture.
Sample sizes
According to ASTM A370, the standard specimen size for Charpy impact testing is 10 mm × 10 mm × 55 mm. Subsize specimen sizes are: 10 mm × 7.5 mm × 55 mm, 10 mm × 6.7 mm × 55 mm, 10 mm × 5 mm × 55 mm, 10 mm × 3.3 mm × 55 mm, 10 mm × 2.5 mm × 55 mm. Details of specimens as per ASTM A370 (Standard Test Method and Definitions for Mechanical Testing of Steel Products).
According to EN 10045-1 (retired and replaced with ISO 148), standard specimen sizes are 10 mm × 10 mm × 55 mm. Subsize specimens are: 10 mm × 7.5 mm × 55 mm and 10 mm × 5 mm × 55 mm.
According to ISO 148, standard specimen sizes are 10 mm × 10 mm × 55 mm. Subsize specimens are: 10 mm × 7.5 mm × 55 mm, 10 mm × 5 mm × 55 mm and 10 mm × 2.5 mm × 55 mm.
According to MPIF Standard 40, the standard unnotched specimen size is 10 mm (±0.125 mm) x 10 mm (±0.125 mm) x 55 mm (±2.5 mm).
Impact test results on low- and high-strength materials
The impact energy of low-strength metals that do not show a change of fracture mode with temperature is usually high and insensitive to temperature. For these reasons, impact tests are not widely used for assessing the fracture resistance of low-strength materials whose fracture modes remain unchanged with temperature. Impact tests typically show a ductile-brittle transition for high-strength materials that do exhibit a change in fracture mode with temperature, such as body-centered cubic (BCC) transition metals. Impact tests on natural materials (which can be considered low-strength), such as wood, are used to study material toughness and are subject to a number of issues, including the interaction between the pendulum and the specimen as well as higher modes of vibration and multiple contacts between the pendulum tup and the specimen.
Generally, high-strength materials have low impact energies, which attests to the fact that fractures easily initiate and propagate in high-strength materials. The impact energies of high-strength materials other than steels or BCC transition metals are usually insensitive to temperature. High-strength BCC steels display a wider variation of impact energy than high-strength metals that do not have a BCC structure, because steels undergo a microscopic ductile-brittle transition. Regardless, the maximum impact energy of high-strength steels is still low due to their brittleness.
See also
Izod impact strength test
Brittle
Impact force
Notes
External links
Calculator
Video on the Charpy impact test
Fracture mechanics
Materials testing | Charpy impact test | [
"Materials_science",
"Engineering"
] | 1,257 | [
"Structural engineering",
"Fracture mechanics",
"Materials science",
"Materials testing",
"Materials degradation"
] |
2,828,566 | https://en.wikipedia.org/wiki/String%20theory%20landscape | In string theory, the string theory landscape (or landscape of vacua) is the collection of possible false vacua, together comprising a collective "landscape" of choices of parameters governing compactifications.
The term "landscape" comes from the notion of a fitness landscape in evolutionary biology. It was first applied to cosmology by Lee Smolin in his book The Life of the Cosmos (1997), and was first used in the context of string theory by Leonard Susskind.
Compactified Calabi–Yau manifolds
In string theory the number of flux vacua is commonly thought to be roughly $10^{500}$, but could be $10^{272,000}$ or higher. The large number of possibilities arises from choices of Calabi–Yau manifolds and choices of generalized magnetic fluxes over various homology cycles, found in F-theory.
If there is no structure in the space of vacua, the problem of finding one with a sufficiently small cosmological constant is NP-complete. This is a version of the subset sum problem.
A possible mechanism of string theory vacuum stabilization, now known as the KKLT mechanism, was proposed in 2003 by Shamit Kachru, Renata Kallosh, Andrei Linde, and Sandip Trivedi.
Fine-tuning by the anthropic principle
Fine-tuning of constants like the cosmological constant or the Higgs boson mass are usually assumed to occur for precise physical reasons as opposed to taking their particular values at random. That is, these values should be uniquely consistent with underlying physical laws.
The number of theoretically allowed configurations has prompted suggestions that this is not the case, and that many different vacua are physically realized. The anthropic principle proposes that fundamental constants may have the values they have because such values are necessary for life (and therefore intelligent observers to measure the constants). The anthropic landscape thus refers to the collection of those portions of the landscape that are suitable for supporting intelligent life.
Weinberg model
In 1987, Steven Weinberg proposed that the observed value of the cosmological constant was so small because it is impossible for life to occur in a universe with a much larger cosmological constant.
Weinberg attempted to predict the magnitude of the cosmological constant based on probabilistic arguments. Other attempts have been made to apply similar reasoning to models of particle physics.
Such attempts are based in the general ideas of Bayesian probability; interpreting probability in a context where it is only possible to draw one sample from a distribution is problematic in frequentist probability but not in Bayesian probability, which is not defined in terms of the frequency of repeated events.
In such a framework, the probability of observing some fundamental parameters $x$ is given by
$P(x) \propto P_{\text{prior}}(x)\, f(x),$
where $P_{\text{prior}}(x)$ is the prior probability, from fundamental theory, of the parameters $x$ and $f(x)$ is the "anthropic selection function", determined by the number of "observers" that would occur in the universe with parameters $x$.
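As a purely illustrative toy sketch of this kind of weighting (the flat prior, the Gaussian selection function and the parameter range are arbitrary assumptions, not anything derived from string theory):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 1001)               # some fundamental parameter (arbitrary units)
prior = np.ones_like(x)                         # flat "prior from fundamental theory"
selection = np.exp(-((x - 2.0) / 0.5) ** 2)     # toy selection function: observers arise near x ~ 2

weights = prior * selection
posterior = weights / weights.sum()             # discrete normalization over the grid

print(f"anthropically weighted mean of x: {np.sum(x * posterior):.2f}")
```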
These probabilistic arguments are the most controversial aspect of the landscape. Technical criticisms of these proposals have pointed out that:
The function $P_{\text{prior}}(x)$ is completely unknown in string theory and may be impossible to define or interpret in any sensible probabilistic way.
The function $f(x)$ is completely unknown, since so little is known about the origin of life. Simplified criteria (such as the number of galaxies) must be used as a proxy for the number of observers. Moreover, it may never be possible to compute it for parameters radically different from those of the observable universe.
Simplified approaches
Tegmark et al. have recently considered these objections and proposed a simplified anthropic scenario for axion dark matter in which they argue that the first two of these problems do not apply.
Vilenkin and collaborators have proposed a consistent way to define the probabilities for a given vacuum.
A problem with many of the simplified approaches people have tried is that they "predict" a cosmological constant that is too large by a factor of 10–1000 orders of magnitude (depending on one's assumptions) and hence suggest that the cosmic acceleration should be much more rapid than is observed.
Interpretation
Few dispute the large number of metastable vacua. The existence, meaning, and scientific relevance of the anthropic landscape, however, remain controversial.
Cosmological constant problem
Andrei Linde, Sir Martin Rees and Leonard Susskind advocate it as a solution to the cosmological constant problem.
Weak scale supersymmetry from the landscape
The string landscape ideas can be applied to the notion of weak scale supersymmetry and the Little Hierarchy problem.
For string vacua which include the MSSM (Minimal Supersymmetric Standard Model) as the low energy effective field theory, all values of the SUSY breaking fields are expected to be equally likely on the landscape. This led Douglas and others to propose that the SUSY breaking scale is distributed as a power law in the landscape, $f_{SUSY} \sim m_{\text{soft}}^{2 n_F + n_D - 1}$, where $n_F$ is the number of F-breaking fields (distributed as complex numbers) and $n_D$ is the number of D-breaking fields (distributed as real numbers). Next, one may impose the Agrawal, Barr, Donoghue, Seckel (ABDS) anthropic requirement that the derived weak scale lie within a factor of a few of our measured value (lest the nuclei needed for life as we know it become unstable (the atomic principle)). Combining these effects with a mild power-law draw to large soft SUSY breaking terms, one may calculate the Higgs boson and superparticle masses expected from the landscape. The Higgs mass probability distribution peaks around 125 GeV while sparticles (with the exception of light higgsinos) tend to lie well beyond current LHC search limits. This approach is an example of the application of stringy naturalness.
Scientific relevance
David Gross suggests that the idea is inherently unscientific, unfalsifiable or premature. A famous debate on the anthropic landscape of string theory is the Smolin–Susskind debate on the merits of the landscape.
Popular reception
There are several popular books about the anthropic principle in cosmology. The authors of two physics blogs, Lubos Motl and Peter Woit, are opposed to this use of the anthropic principle.
See also
Swampland
Extra dimensions
References
External links
String landscape; moduli stabilization; flux vacua; flux compactification on arxiv.org.
Physical cosmology
String theory
Multiverse | String theory landscape | [
"Physics",
"Astronomy"
] | 1,315 | [
"Astronomical hypotheses",
"Astronomical sub-disciplines",
"Theoretical physics",
"Astrophysics",
"String theory",
"Multiverse",
"Physical cosmology"
] |
2,828,651 | https://en.wikipedia.org/wiki/Exact%20cover | In the mathematical field of combinatorics, given a collection of subsets of a set , an exact cover is a subcollection of such that each element in is contained in exactly one subset in .
One says that each element in is covered by exactly one subset in .
An exact cover is a kind of cover. In other words, is a partition of consisting of subsets contained in .
The exact cover problem to find an exact cover is a kind of constraint satisfaction problem. The elements of represent choices and the elements of represent constraints. It is non-deterministic polynomial time (NP) complete and has a variety of applications, ranging from the optimization of airline flight schedules, cloud computing, and electronic circuit design.
An exact cover problem involves the relation contains between subsets and elements. But an exact cover problem can be represented by any heterogeneous relation between a set of choices and a set of constraints. For example, an exact cover problem is equivalent to an exact hitting set problem, an incidence matrix, or a bipartite graph.
In computer science, the exact cover problem is a decision problem to determine if an exact cover exists. The exact cover problem is NP-complete and is one of Karp's 21 NP-complete problems. It is NP-complete even when each subset in S contains exactly three elements; this restricted problem is known as exact cover by 3-sets, often abbreviated X3C.
Knuth's Algorithm X is an algorithm that finds all solutions to an exact cover problem. DLX is the name given to Algorithm X when it is implemented efficiently using Donald Knuth's Dancing Links technique on a computer.
The exact cover problem can be generalized slightly to involve not only exactly-once constraints but also at-most-once constraints.
Finding Pentomino tilings and solving Sudoku are noteworthy examples of exact cover problems. The n queens problem is a generalized exact cover problem.
Formal definition
Given a collection S of subsets of a set X, an exact cover of X is a subcollection S* of S that satisfies two conditions:
The intersection of any two distinct subsets in S* is empty, i.e., the subsets in S* are pairwise disjoint. In other words, each element in X is contained in at most one subset in S*.
The union of the subsets in S* is X, i.e., the subsets in S* cover X. In other words, each element in X is contained in at least one subset in S*.
In short, an exact cover is exact in the sense that each element in X is contained in exactly one subset in S*.
Equivalently, an exact cover of X is a subcollection S* of S that partitions X.
For an exact cover of X to exist, it is necessary that:
The union of the subsets in S is X. In other words, each element in X is contained in at least one subset in S.
If the empty set ∅ is contained in S, then it makes no difference whether or not it is in any exact cover. Thus it is typical to assume that:
The empty set is not in S. In other words, each subset in S contains at least one element.
Basic examples
Let be a collection of subsets of a set such that:
,
,
, and
.
The subcollection is an exact cover of , since the subsets and are disjoint and their union is .
The subcollection is also an exact cover of .
Including the empty set makes no difference, as it is disjoint with all subsets and does not change the union.
The subcollection is not an exact cover of .
Even though the union of the subsets and is , the intersection of the subsets and , , is not empty. Therefore the subsets and do not meet the disjoint requirement of an exact cover.
The subcollection is also not an exact cover of .
Even though and are disjoint, their union is not , so they fail the cover requirement.
On the other hand, there is no exact cover—indeed, not even a cover—of because is a proper subset of : None of the subsets in contains the element 5.
Detailed example
Let S = {A, B, C, D, E, F} be a collection of subsets of a set X = {1, 2, 3, 4, 5, 6, 7} such that:
A = {1, 4, 7};
B = {1, 4};
C = {4, 5, 7};
D = {3, 5, 6};
E = {2, 3, 6, 7}; and
F = {2, 7}.
The subcollection S* = {B, D, F} is an exact cover, since each element in X is covered by (contained in) exactly one selected subset.
Moreover, {B, D, F} is the only exact cover, as the following argument demonstrates: Because A and B are the only subsets containing the element 1, an exact cover must contain A or B, but not both. If an exact cover contains A, then it doesn't contain B, C, E, or F, as each of these subsets has the element 1, 4, or 7 in common with A. Then D is the only remaining subset, but the subcollection {A, D} doesn't cover the element 2. In conclusion, there is no exact cover containing A. On the other hand, if an exact cover contains B, then it doesn't contain A or C, as each of these subsets has the element 1 or 4 in common with B. Because D is the only remaining subset containing the element 5, D must be part of the exact cover. If an exact cover contains D, then it doesn't contain E, as E has the elements 3 and 6 in common with D. Then F is the only remaining subset, and the subcollection {B, D, F} is indeed an exact cover. See the example in the article on Knuth's Algorithm X for a matrix-based version of this argument.
Representations
An exact cover problem is defined by the heterogeneous relation contains between a collection of subsets and a set of elements. But there is nothing fundamental about subsets and elements.
A representation of an exact cover problem arises whenever there is a heterogeneous relation ⊆ × between a set of choices and a set of constraints and the goal is to select a subset of such that each element in is -related to exactly one element in . Here is the converse of .
In general, restricted to × is a function from to , which maps each element in to the unique element in that is -related to that element in . This function is onto, unless contains an element (akin to the empty set) that isn't -related to any element in .
Representations of an exact cover problem include an exact hitting set problem, an incidence matrix, and a bipartite graph.
Exact hitting set
In mathematics, given a set X and a collection S of subsets of X, an exact hitting set is a subset X* of X such that each subset in S contains exactly one element in X*. One says that each subset in S is hit by exactly one element in X*.
The exact hitting set problem is a representation of an exact cover problem involving the relation is contained in rather than contains.
For example, let X = {a, b, c, d, e, f} be a set and S = {I, II, III, IV, V, VI, VII} be a collection of subsets of X such that:
I = {a, b}
II = {e, f}
III = {d, e}
IV = {a, b, c}
V = {c, d}
VI = {d, e}
VII = {a, c, e, f}
Then X* = {b, d, f} is an exact hitting set, since each subset in S is hit by (contains) exactly one element in X*.
This exact hitting set example is essentially the same as the detailed example above. Displaying the relation is contained in (∈) from elements to subsets makes clear that we have simply replaced lettered subsets with elements and numbered elements with subsets:
a ∈ I, IV, VII;
b ∈ I, IV;
c ∈ IV, V, VII;
d ∈ III, V, VI;
e ∈ II, III, VI, VII; and
f ∈ II, VII.
Incidence matrix
The relation contains can be represented by an incidence matrix.
The matrix includes one row for each subset in S and one column for each element in X.
The entry in a particular row and column is 1 if the corresponding subset contains the corresponding element, and is 0 otherwise.
In the matrix representation, an exact cover is a selection of rows such that each column contains a 1 in exactly one selected row. Each row represents a choice and each column represents a constraint.
For example, the relation contains in the detailed example above can be represented by a 6×7 incidence matrix:
{| class="wikitable" style="text-align:center;width:17.0em;"
! !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7
|-
!
| 1 || 0 || 0 || 1 || 0 || 0 || 1
|-
!
| 1 || 0 || 0 || 1 || 0 || 0 || 0
|-
!
| 0 || 0 || 0 || 1 || 1 || 0 || 1
|-
!
| 0 || 0 || 1 || 0 || 1 || 1 || 0
|-
!
| 0 || 1 || 1 || 0 || 0 || 1 || 1
|-
!
| 0 || 1 || 0 || 0 || 0 || 0 || 1
|}
Again, the subcollection S* = {B, D, F} is an exact cover, since each column contains a 1 in exactly one of the selected rows B, D, and F.
See the example in the article on Knuth's Algorithm X for a matrix-based solution to the detailed example above.
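This column-sum characterization is easy to check mechanically. The sketch below encodes the 6×7 matrix above and tests candidate row selections (the helper name is illustrative):

```python
import numpy as np

incidence = np.array([
    [1, 0, 0, 1, 0, 0, 1],   # A = {1, 4, 7}
    [1, 0, 0, 1, 0, 0, 0],   # B = {1, 4}
    [0, 0, 0, 1, 1, 0, 1],   # C = {4, 5, 7}
    [0, 0, 1, 0, 1, 1, 0],   # D = {3, 5, 6}
    [0, 1, 1, 0, 0, 1, 1],   # E = {2, 3, 6, 7}
    [0, 1, 0, 0, 0, 0, 1],   # F = {2, 7}
])

def is_exact_cover(selected_rows):
    """True when every column has a 1 in exactly one selected row."""
    return bool(np.all(incidence[selected_rows].sum(axis=0) == 1))

print(is_exact_cover([1, 3, 5]))   # rows B, D, F -> True
print(is_exact_cover([0, 3]))      # rows A, D    -> False (element 2 is uncovered)
```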
Hypergraph
In turn, the incidence matrix can also be seen as describing a hypergraph. The hypergraph includes one node for each element in X and one edge for each subset in S; each node is included in exactly one of the edges forming the cover.
Bipartite graph
The relation contains can be represented by a bipartite graph.
The vertices of the graph are divided into two disjoint sets, one representing the subsets in S and another representing the elements in X. If a subset contains an element, an edge connects the corresponding vertices in the graph.
In the graph representation, an exact cover is a selection of vertices corresponding to subsets such that each vertex corresponding to an element is connected to exactly one selected vertex.
For example, the relation contains in the detailed example above can be represented by a bipartite graph with 6+7 = 13 vertices:
Again, the subcollection S* = {B, D, F} is an exact cover, since the vertex corresponding to each element in X is connected to exactly one selected vertex.
Finding solutions
Algorithm X is the name Donald Knuth gave for "the most obvious trial-and-error approach" for finding all solutions to the exact cover problem. Technically, Algorithm X is a recursive, nondeterministic, depth-first, backtracking algorithm.
When Algorithm X is implemented efficiently using Donald Knuth's Dancing Links technique on a computer, Knuth calls it DLX. It uses the matrix representation of the problem, implemented as a series of doubly linked lists of the 1s of the matrix: each 1 element has a link to the next 1 above, below, to the left, and to the right of itself. Because exact cover problems tend to be sparse, this representation is usually much more efficient in both size and processing time required. DLX then uses the Dancing Links technique to quickly select permutations of rows as possible solutions and to efficiently backtrack (undo) mistaken guesses.
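A compact, set-based sketch of Algorithm X is given below (Python assumed). It follows the recursive, nondeterministic, depth-first, backtracking scheme described above, but it stores the matrix as dictionaries of sets rather than as the doubly linked lists of DLX, so it is a simplification for illustration rather than Knuth's DLX itself:

<syntaxhighlight lang="python">
def algorithm_x(Y, X=None, solution=None):
    """Y maps each subset name to its elements; yields every exact cover."""
    if X is None:                      # build column index: element -> subsets containing it
        X = {}
        for name, elements in Y.items():
            for e in elements:
                X.setdefault(e, set()).add(name)
    if solution is None:
        solution = []
    if not X:                          # every element is covered exactly once
        yield list(solution)
        return
    e = min(X, key=lambda col: len(X[col]))   # element covered by the fewest subsets
    for name in sorted(X[e]):
        solution.append(name)
        removed = cover(X, Y, name)
        yield from algorithm_x(Y, X, solution)
        uncover(X, Y, removed)
        solution.pop()

def cover(X, Y, name):
    """Remove the columns covered by 'name' and every row that clashes with it."""
    removed = []
    for e in Y[name]:
        for other in X[e]:
            for e2 in Y[other]:
                if e2 != e:
                    X[e2].discard(other)
        removed.append((e, X.pop(e)))
    return removed

def uncover(X, Y, removed):
    """Undo cover() exactly, restoring the column index."""
    for e, column in reversed(removed):
        X[e] = column
        for other in column:
            for e2 in Y[other]:
                if e2 != e:
                    X[e2].add(other)

Y = {"A": [1, 4, 7], "B": [1, 4], "C": [4, 5, 7],
     "D": [3, 5, 6], "E": [2, 3, 6, 7], "F": [2, 7]}
print(list(algorithm_x(Y)))   # [['B', 'D', 'F']]
</syntaxhighlight>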
Generalized exact cover
In a standard exact cover problem, each constraint must be satisfied exactly once.
It is a simple generalization to relax this requirement slightly and allow for the possibility that some primary constraints must be satisfied by exactly one choice but other secondary constraints can be satisfied by at most one choice.
As Knuth explains, a generalized exact cover problem can be converted to an equivalent exact cover problem by simply appending one row for each secondary column, containing a single 1 in that column. If in a particular candidate solution a particular secondary column is satisfied, then the added row isn't needed.
But if the secondary column isn't satisfied, as is allowed in the generalized problem but not the standard problem, then the added row can be selected to ensure the column is satisfied.
But Knuth goes on to explain that it is better to work with the generalized problem directly, because the generalized algorithm is simpler and faster: a simple change to his Algorithm X allows secondary columns to be handled directly.
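A sketch of the reduction described above (Python assumed; the row and column encoding is a hypothetical convention chosen for illustration) simply adds one slack row per secondary column:

<syntaxhighlight lang="python">
def to_standard(rows, secondary_columns):
    """rows: dict mapping each row name to the set of columns it satisfies.
    Returns a standard exact cover instance in which every secondary column
    gains one extra row containing a single 1 in that column."""
    std = dict(rows)
    for col in secondary_columns:
        std[("slack", col)] = {col}   # chosen exactly when col is otherwise unsatisfied
    return std
</syntaxhighlight>

Selecting a slack row in the standard problem corresponds to leaving its secondary constraint unsatisfied in the generalized problem.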
The N queens problem is an example of a generalized exact cover problem, as the constraints corresponding to the diagonals of the chessboard require at most one queen rather than exactly one.
Noteworthy examples
Due to its NP-completeness, any problem in NP can be reduced to exact cover problems, which then can be solved with techniques such as Dancing Links. However, for some well known problems, the reduction is particularly direct. For instance, the problem of tiling a board with pentominoes, and solving Sudoku can both be viewed as exact cover problems.
Pentomino tiling
The problem of tiling a 60-square board with the 12 different free pentominoes is an example of an exact cover problem, as Donald Knuth explains in his paper "Dancing links."
For example, consider the problem of tiling with pentominoes an 8×8 chessboard with the 4 central squares removed:
{| border="1" cellpadding="5" cellspacing="0"
| 11 || 12 || 13 || 14 || 15 || 16 || 17 || 18
|-
| 21 || 22 || 23 || 24 || 25 || 26 || 27 || 28
|-
| 31 || 32 || 33 || 34 || 35 || 36 || 37 || 38
|-
| 41 || 42 || 43 || || || 46 || 47 || 48
|-
| 51 || 52 || 53 || || || 56 || 57 || 58
|-
| 61 || 62 || 63 || 64 || 65 || 66 || 67 || 68
|-
| 71 || 72 || 73 || 74 || 75 || 76 || 77 || 78
|-
| 81 || 82 || 83 || 84 || 85 || 86 || 87 || 88
|}
The problem involves two kinds of constraints:
Pentomino: For each of the 12 pentominoes, there is the constraint that it must be placed exactly once. Name these constraints after the corresponding pentominoes: F I L P N T U V W X Y Z.
Square: For each of the 60 squares, there is the constraint that it must be covered by a pentomino exactly once. Name these constraints after the corresponding squares in the board: ij, where i is the rank and j is the file.
Thus there are 12+60 = 72 constraints in all.
As both kinds of constraints are exactly-once constraints, the problem is an exact cover problem.
The problem involves many choices, one for each way to place a pentomino on the board.
It is convenient to consider each choice as satisfying a set of 6 constraints: 1 constraint for the pentomino being placed and 5 constraints for the five squares where it is placed.
In the case of an 8×8 chessboard with the 4 central squares removed, there are 1568 such choices, for example:
{F, 12, 13, 21, 22, 32}
{F, 13, 14, 22, 23, 33}
…
{I, 11, 12, 13, 14, 15}
{I, 12, 13, 14, 15, 16}
…
{L, 11, 21, 31, 41, 42}
{L, 12, 22, 32, 42, 43}
…
One of many solutions of this exact cover problem is the following set of 12 choices:
{I, 11, 12, 13, 14, 15}
{N, 16, 26, 27, 37, 47}
{L, 17, 18, 28, 38, 48}
{U, 21, 22, 31, 41, 42}
{X, 23, 32, 33, 34, 43}
{W, 24, 25, 35, 36, 46}
{P, 51, 52, 53, 62, 63}
{F, 56, 64, 65, 66, 75}
{Z, 57, 58, 67, 76, 77}
{T, 61, 71, 72, 73, 81}
{V, 68, 78, 86, 87, 88}
{Y, 74, 82, 83, 84, 85}
This set of choices corresponds to the following solution to the pentomino tiling problem:
A pentomino tiling problem is more naturally viewed as an exact cover problem than an exact hitting set problem, because it is more natural to view each choice as a set of constraints than each constraint as a set of choices.
Each choice relates to just 6 constraints, which are easy to enumerate. On the other hand, each constraint relates to many choices, which are harder to enumerate.
Whether viewed as an exact cover problem or an exact hitting set problem, the matrix representation is the same, having 1568 rows corresponding to choices and 72 columns corresponding to constraints. Each row contains a single 1 in the column identifying the pentomino and five 1s in the columns identifying the squares covered by the pentomino.
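For instance, a single choice such as {F, 12, 13, 21, 22, 32} becomes one row of that matrix. The sketch below (Python assumed; the column ordering, pentominoes first and then squares, is an assumption made for illustration) encodes such a choice as a 72-entry 0/1 row:

<syntaxhighlight lang="python">
PENTOMINOES = list("FILPNTUVWXYZ")                        # 12 pentomino columns
SQUARES = [f"{r}{c}" for r in range(1, 9) for c in range(1, 9)
           if f"{r}{c}" not in {"44", "45", "54", "55"}]   # 60 square columns
COLUMNS = PENTOMINOES + SQUARES                            # 72 columns in all

def choice_to_row(pentomino, squares):
    """Return the 0/1 matrix row for placing one pentomino on five squares."""
    on = {pentomino} | set(squares)
    return [1 if col in on else 0 for col in COLUMNS]

row = choice_to_row("F", ["12", "13", "21", "22", "32"])
print(sum(row), len(row))   # 6 72
</syntaxhighlight>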
Using the matrix, a computer can find all solutions relatively quickly, for example, using Dancing Links.
Sudoku
Main articles: Sudoku, Mathematics of Sudoku, Sudoku solving algorithms
The problem in Sudoku is to assign numbers (or digits, values, symbols) to cells (or squares) in a grid so as to satisfy certain constraints.
In the standard 9×9 Sudoku variant, there are four kinds of constraints:
Row-Column: Each intersection of a row and column, i.e., each cell, must contain exactly one number.
Row-Number: Each row must contain each number exactly once.
Column-Number: Each column must contain each number exactly once.
Box-Number: Each box must contain each number exactly once.
While the first constraint might seem trivial, it is nevertheless needed to ensure there is only one number per cell. Naturally, placing a number into a cell prohibits placing any other number into the now occupied cell.
Solving Sudoku is an exact cover problem.
More precisely, solving Sudoku is an exact hitting set problem, which is equivalent to an exact cover problem, when viewed as a problem to select possibilities such that each constraint set contains (i.e., is hit by) exactly one selected possibility.
Each possible assignment of a particular number to a particular cell is a possibility (or candidate). When Sudoku is played with pencil and paper, possibilities are often called pencil marks.
In the standard 9×9 Sudoku variant, in which each of 9×9 cells is assigned one of 9 numbers, there are 9×9×9=729 possibilities.
Using obvious notation for rows, columns and numbers, the possibilities can be labeled
R1C1#1, R1C1#2, …, R9C9#9.
The fact that each kind of constraint involves exactly one of something is what makes Sudoku an exact hitting set problem. The constraints can be represented by constraint sets. The problem is to select possibilities such that each constraint set contains (i.e., is hit by) exactly one selected possibility.
In the standard 9×9 Sudoku variant, there are four kinds of constraints sets corresponding to the four kinds of constraints:
Row-Column: A row-column constraint set contains all the possibilities for the intersection of a particular row and column, i.e., for a cell. For example, the constraint set for row 1 and column 1, which can be labeled R1C1, contains the 9 possibilities for row 1 and column 1 but different numbers:
R1C1 = { R1C1#1, R1C1#2, R1C1#3, R1C1#4, R1C1#5, R1C1#6, R1C1#7, R1C1#8, R1C1#9 }.
Row-Number: A row-number constraint set contains all the possibilities for a particular row and number. For example, the constraint set for row 1 and number 1, which can be labeled R1#1, contains the 9 possibilities for row 1 and number 1 but different columns:
R1#1 = { R1C1#1, R1C2#1, R1C3#1, R1C4#1, R1C5#1, R1C6#1, R1C7#1, R1C8#1, R1C9#1 }.
Column-Number: A column-number constraint set contains all the possibilities for a particular column and number. For example, the constraint set for column 1 and number 1, which can be labeled C1#1, contains the 9 possibilities for column 1 and number 1 but different rows:
C1#1 = { R1C1#1, R2C1#1, R3C1#1, R4C1#1, R5C1#1, R6C1#1, R7C1#1, R8C1#1, R9C1#1 }.
Box-Number: A box-number constraint set contains all the possibilities for a particular box and number. For example, the constraint set for box 1 (in the upper lefthand corner) and number 1, which can be labeled B1#1, contains the 9 possibilities for the cells in box 1 and number 1:
B1#1 = { R1C1#1, R1C2#1, R1C3#1, R2C1#1, R2C2#1, R2C3#1, R3C1#1, R3C2#1, R3C3#1 }.
Since there are 9 rows, 9 columns, 9 boxes and 9 numbers, there are 9×9=81 row-column constraint sets, 9×9=81 row-number constraint sets, 9×9=81 column-number constraint sets, and 9×9=81 box-number constraint sets: 81+81+81+81=324 constraint sets in all.
In brief, the standard 9×9 Sudoku variant is an exact hitting set problem with 729 possibilities and 324 constraint sets.
Thus the problem can be represented by a 729×324 matrix.
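Each of the 729 rows of that matrix has exactly four 1s, one per kind of constraint. A small sketch (Python assumed; the 0-based column ordering is an assumption chosen for illustration) maps a possibility RrCc#n to its four column indices:

<syntaxhighlight lang="python">
def sudoku_columns(r, c, n):          # r, c, n each in 1..9
    box = 3 * ((r - 1) // 3) + (c - 1) // 3           # box index 0..8
    return (
        9 * (r - 1) + (c - 1),                        # row-column   (0..80)
        81 + 9 * (r - 1) + (n - 1),                   # row-number   (81..161)
        162 + 9 * (c - 1) + (n - 1),                  # column-number (162..242)
        243 + 9 * box + (n - 1),                      # box-number   (243..323)
    )

print(sudoku_columns(1, 2, 3))   # (1, 83, 173, 245)
</syntaxhighlight>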
Although it is difficult to present the entire 729×324 matrix, the general nature of the matrix can be seen from several snapshots:
The complete 729×324 matrix is available from Robert Hanson.
Note that the set of possibilities RxCy#z can be arranged as a 9×9×9 cube in a 3-dimensional space with coordinates x, y, and z. Then each row Rx, column Cy, or number #z is a 9×9×1 "slice" of possibilities; each box Bw is a 9×3×3 "tube" of possibilities; each row-column constraint set RxCy, row-number constraint set Rx#z, or column-number constraint set Cy#z is a 9×1×1 "strip" of possibilities; each box-number constraint set Bw#z is a 3×3×1 "square" of possibilities; and each possibility RxCy#z is a 1×1×1 "cubie" consisting of a single possibility. Moreover, each constraint set or possibility is the intersection of the component sets. For example, R1C2#3 = R1 ∩ C2 ∩ #3, where ∩ denotes set intersection.
Although other Sudoku variations have different numbers of rows, columns, numbers and/or different kinds of constraints, they all involve possibilities and constraint sets, and thus can be seen as exact hitting set problems.
N queens problem
The N queens problem is the problem of placing N chess queens on an N×N chessboard so that no two queens threaten each other. A solution requires that no two queens share the same row, column, or diagonal. It is an example of a generalized exact cover problem.
The problem involves four kinds of constraints:
Rank: For each of the N ranks, there must be exactly one queen.
File: For each of the N files, there must be exactly one queen.
Diagonals: For each of the 2N − 1 diagonals, there must be at most one queen.
Reverse diagonals: For each of the 2N − 1 reverse diagonals, there must be at most one queen.
Note that the 2N ranks and files form the primary constraints, while the 4N − 2 diagonals and reverse diagonals form the secondary constraints. Further, because each of the first and last diagonals and reverse diagonals involves only one square on the chessboard, these can be omitted, reducing the number of secondary constraints to 4N − 6. The matrix for the N queens problem then has N² rows and 6N − 6 columns: one row for each possible queen placement on a square of the chessboard, and one column for each constraint.
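A brief sketch (Python assumed; the constraint labels are hypothetical) generates the N² choices together with the primary constraints; for simplicity it keeps all 2(2N − 1) diagonal constraints rather than dropping the single-square ones:

<syntaxhighlight lang="python">
def n_queens_rows(n):
    """Return the generalized exact cover rows and the set of primary constraints."""
    rows = {}
    for r in range(n):
        for f in range(n):
            rows[(r, f)] = {
                ("rank", r), ("file", f),            # primary: exactly one queen
                ("diag", r + f), ("rdiag", r - f),   # secondary: at most one queen
            }
    primary = {("rank", i) for i in range(n)} | {("file", i) for i in range(n)}
    return rows, primary

rows, primary = n_queens_rows(8)
print(len(rows), len(primary))   # 64 16
</syntaxhighlight>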
See also
Constraint satisfaction problem
Dancing Links
Difference map algorithm
Karp's 21 NP-complete problems
Knuth's Algorithm X
List of NP-complete problems
Partition of a set
Perfect matching and 3-dimensional matching are special cases of the exact cover problem
References
External links
Free Software implementation of an Exact Cover solver in C - uses Algorithm X and Dancing Links. Includes examples for Sudoku and logic grid puzzles.
Exact Cover solver in Golang - uses Algorithm X and Dancing Links. Includes examples for Sudoku and N queens.
Exact cover - Math Reference Project
Theoretical computer science
NP-complete problems | Exact cover | [
"Mathematics"
] | 5,386 | [
"Theoretical computer science",
"Applied mathematics",
"Computational problems",
"Mathematical problems",
"NP-complete problems"
] |
2,829,400 | https://en.wikipedia.org/wiki/Fuel%20fleas | Fuel fleas are microscopic hot particles of new or spent nuclear fuel. While small, they tend to be intensely radioactive.
The fuel particles, about 10 micrometers in size, are a strong source of beta and gamma radiation and a weaker source of alpha radiation. The disparity between alpha and beta radiation (alpha activity is typically 100–1000 times weaker than beta, so the particle loses many more negatively charged particles than positively charged ones) leads to a buildup of positive electrostatic charge on the particle, causing the particle to "jump" from surface to surface and easily become airborne.
Fuel fleas are typically rich in uranium-238 and contain an abundance of insoluble fission products. Due to their high beta activity, they can be detected by a Geiger counter. Their gamma output can allow analysis of their isotope composition (and therefore their age and origin) by a gamma-ray spectrometer.
Fuel fleas can be very dangerous if they become embedded within a person's body, but are generally not considered more dangerous than an equal amount of radioactive material evenly distributed throughout the body. An exception would be if the flea was embedded in a particularly vulnerable organ such as the cornea of the eye or inhaled into the lungs.
The most likely cause of fuel fleas is when the cladding surrounding the nuclear fuel becomes ruptured or cracked (known as "fuel pin failure"), allowing the fuel particles to escape and allowing the coolant to enter the fuel rod, further accelerating the process. In water-cooled reactors, this can be due to the reaction of the zirconium alloy cladding with the cooling water, which produces hydrogen. The hydrogen can be absorbed into the cladding material, resulting in hydrogen embrittlement. Embrittled cladding is less ductile and more susceptible to cracking. This process is avoided in modern reactors by carefully monitoring the fuel assemblies, limiting operating lifetime of the fuel, and by using alloys developed to resist hydride formation.
References
Radioactive waste | Fuel fleas | [
"Physics",
"Chemistry",
"Technology"
] | 409 | [
"Nuclear chemistry stubs",
"Nuclear and atomic physics stubs",
"Environmental impact of nuclear power",
"Radioactivity",
"Nuclear physics",
"Hazardous waste",
"Radioactive waste"
] |
2,829,647 | https://en.wikipedia.org/wiki/Algorithmic%20information%20theory | Algorithmic information theory (AIT) is a branch of theoretical computer science that concerns itself with the relationship between computation and information of computably generated objects (as opposed to stochastically generated), such as strings or any other data structure. In other words, it is shown within algorithmic information theory that computational incompressibility "mimics" (except for a constant that only depends on the chosen universal programming language) the relations or inequalities found in information theory. According to Gregory Chaitin, it is "the result of putting Shannon's information theory and Turing's computability theory into a cocktail shaker and shaking vigorously."
Besides the formalization of a universal measure for irreducible information content of computably generated objects, some main achievements of AIT were to show that: in fact algorithmic complexity follows (in the self-delimited case) the same inequalities (except for a constant) that entropy does, as in classical information theory; randomness is incompressibility; and, within the realm of randomly generated software, the probability of occurrence of any data structure is of the order of the shortest program that generates it when running on a universal machine.
AIT principally studies measures of irreducible information content of strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers. One of the main motivations behind AIT is the very study of the information carried by mathematical objects as in the field of metamathematics, e.g., as shown by the incompleteness results mentioned below. Other main motivations came from surpassing the limitations of classical information theory for single and fixed objects, formalizing the concept of randomness, and finding a meaningful probabilistic inference without prior knowledge of the probability distribution (e.g., whether it is independent and identically distributed, Markovian, or even stationary). In this way, AIT is known to be basically founded upon three main mathematical concepts and the relations between them: algorithmic complexity, algorithmic randomness, and algorithmic probability.
Overview
Algorithmic information theory principally studies complexity measures on strings (or other data structures). Because most mathematical objects can be described in terms of strings, or as the limit of a sequence of strings, it can be used to study a wide variety of mathematical objects, including integers.
Informally, from the point of view of algorithmic information theory, the information content of a string is equivalent to the length of the most-compressed possible self-contained representation of that string. A self-contained representation is essentially a program—in some fixed but otherwise irrelevant universal programming language—that, when run, outputs the original string.
From this point of view, a 3000-page encyclopedia actually contains less information than 3000 pages of completely random letters, despite the fact that the encyclopedia is much more useful. This is because to reconstruct the entire sequence of random letters, one must know what every single letter is. On the other hand, if every vowel were removed from the encyclopedia, someone with reasonable knowledge of the English language could reconstruct it, just as one could likely reconstruct the sentence "Ths sntnc hs lw nfrmtn cntnt" from the context and consonants present.
Unlike classical information theory, algorithmic information theory gives formal, rigorous definitions of a random string and a random infinite sequence that do not depend on physical or philosophical intuitions about nondeterminism or likelihood. (The set of random strings depends on the choice of the universal Turing machine used to define Kolmogorov complexity, but any choice
gives identical asymptotic results because the Kolmogorov complexity of a string is invariant up to an additive constant depending only on the choice of universal Turing machine. For this reason the set of random infinite sequences is independent of the choice of universal machine.)
Some of the results of algorithmic information theory, such as Chaitin's incompleteness theorem, appear to challenge common mathematical and philosophical intuitions. Most notable among these is the construction of Chaitin's constant Ω, a real number that expresses the probability that a self-delimiting universal Turing machine will halt when its input is supplied by flips of a fair coin (sometimes thought of as the probability that a random computer program will eventually halt). Although Ω is easily defined, in any consistent axiomatizable theory one can only compute finitely many digits of Ω, so it is in some sense unknowable, providing an absolute limit on knowledge that is reminiscent of Gödel's incompleteness theorems. Although the digits of Ω cannot be determined, many properties of Ω are known; for example, it is an algorithmically random sequence and thus its binary digits are evenly distributed (in fact it is normal).
History
Algorithmic information theory was founded by Ray Solomonoff, who published the basic ideas on which the field is based as part of his invention of algorithmic probability—a way to overcome serious problems associated with the application of Bayes' rules in statistics. He first described his results at a Conference at Caltech in 1960, and in a report, February 1960, "A Preliminary Report on a General Theory of Inductive Inference." Algorithmic information theory was later developed independently by Andrey Kolmogorov, in 1965 and Gregory Chaitin, around 1966.
There are several variants of Kolmogorov complexity or algorithmic information; the most widely used one is based on self-delimiting programs and is mainly due to Leonid Levin (1974). Per Martin-Löf also contributed significantly to the information theory of infinite sequences. An axiomatic approach to algorithmic information theory based on the Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov (Burgin 1982). The axiomatic approach encompasses other approaches in the algorithmic information theory. It is possible to treat different measures of algorithmic information as particular cases of axiomatically defined measures of algorithmic information. Instead of proving similar theorems, such as the basic invariance theorem, for each particular measure, it is possible to easily deduce all such results from one corresponding theorem proved in the axiomatic setting. This is a general advantage of the axiomatic approach in mathematics. The axiomatic approach to algorithmic information theory was further developed in the book (Burgin 2005) and applied to software metrics (Burgin and Debnath, 2003; Debnath and Burgin, 2003).
Precise definitions
A binary string is said to be random if the Kolmogorov complexity of the string is at least the length of the string. A simple counting argument shows that some strings of any given length are random, and almost all strings are very close to being random. Since Kolmogorov complexity depends on a fixed choice of universal Turing machine (informally, a fixed "description language" in which the "descriptions" are given), the collection of random strings does depend on the choice of fixed universal machine. Nevertheless, the collection of random strings, as a whole, has similar properties regardless of the fixed machine, so one can (and often does) talk about the properties of random strings as a group without having to first specify a universal machine.
An infinite binary sequence is said to be random if, for some constant c, for all n, the Kolmogorov complexity of the initial segment of length n of the sequence is at least n − c. It can be shown that almost every sequence (from the point of view of the standard measure—"fair coin" or Lebesgue measure—on the space of infinite binary sequences) is random. Also, since it can be shown that the Kolmogorov complexity relative to two different universal machines differs by at most a constant, the collection of random infinite sequences does not depend on the choice of universal machine (in contrast to finite strings). This definition of randomness is usually called Martin-Löf randomness, after Per Martin-Löf, to distinguish it from other similar notions of randomness. It is also sometimes called 1-randomness to distinguish it from other stronger notions of randomness (2-randomness, 3-randomness, etc.). In addition to Martin-Löf randomness concepts, there are also recursive randomness, Schnorr randomness, and Kurtz randomness etc. Yongge Wang showed that all of these randomness concepts are different.
(Related definitions can be made for alphabets other than the set {0, 1}.)
Specific sequence
Algorithmic information theory (AIT) is the information theory of individual objects, using computer science, and concerns itself with the relationship between computation, information, and randomness.
The information content or complexity of an object can be measured by the length of its shortest description. For instance the string
"0101010101010101010101010101010101010101010101010101010101010101"
has the short description "32 repetitions of '01'", while
"1100100001100001110111101110110011111010010000100101011110010110"
presumably has no simple description other than writing down the string itself.
More formally, the algorithmic complexity (AC) of a string x is defined as the length of the shortest program that computes or outputs x, where the program is run on some fixed reference universal computer.
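Although the shortest program cannot in general be found, the length of any compressed encoding gives a computable upper bound on it. The sketch below (Python assumed; zlib is used only as a rough, illustrative stand-in for a shortest description) contrasts the two strings above:

<syntaxhighlight lang="python">
import zlib

regular = "01" * 32
irregular = "1100100001100001110111101110110011111010010000100101011110010110"

for s in (regular, irregular):
    # Compressed length is only an upper-bound proxy for algorithmic complexity.
    print(len(s), len(zlib.compress(s.encode())))
# The regular string typically compresses far better than the irregular one.
</syntaxhighlight>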
A closely related notion is the probability that a universal computer outputs some string x when fed with a program chosen at random. This algorithmic "Solomonoff" probability (AP) is key in addressing the old philosophical problem of induction in a formal way.
The major drawback of AC and AP is their incomputability. Time-bounded "Levin" complexity penalizes a slow program by adding the logarithm of its running time to its length. This leads to computable variants of AC and AP, and universal "Levin" search (US) solves all inversion problems in optimal time (apart from some unrealistically large multiplicative constant).
AC and AP also allow a formal and rigorous definition of randomness for individual strings that does not depend on physical or philosophical intuitions about non-determinism or likelihood. Roughly, a string is algorithmically "Martin-Löf" random (AR) if it is incompressible in the sense that its algorithmic complexity is equal to its length.
AC, AP, and AR are the core sub-disciplines of AIT, but AIT extends into many other areas. It serves as the foundation of the Minimum Description Length (MDL) principle, can simplify proofs in computational complexity theory, has been used to define a universal similarity metric between objects, solves the Maxwell's demon problem, and has many other applications.
See also
References
External links
Algorithmic Information Theory at Scholarpedia
Chaitin's account of the history of AIT.
Further reading
Information theory
Randomized algorithms | Algorithmic information theory | [
"Mathematics",
"Technology",
"Engineering"
] | 2,310 | [
"Telecommunications engineering",
"Applied mathematics",
"Computer science",
"Information theory"
] |
2,830,876 | https://en.wikipedia.org/wiki/Mini-TES | The Miniature Thermal Emission Spectrometer (Mini-TES) is an infrared spectrometer used for detecting the composition of a material (typically rocks) from a distance. By making its measurements in the thermal infrared part of the electromagnetic spectrum, it has the ability to penetrate through the dust coatings common to the Martian surface which is usually problematic for remote sensing observations. There is one on each of the two Mars Exploration Rovers.
Development
The Mini-TES was originally developed by Raytheon for the Department of Geological Sciences at Arizona State University. The Mini-TES is a miniaturized version of Raytheon's Mars Global Surveyor (MGS) TES, built by Arizona State University and Raytheon SAS’ Santa Barbara Remote Sensing. The MGS TES data helped scientists choose landing sites for the Spirit and Opportunity Mars Exploration Rovers.
Martian soil
The Mini-TES is used for identifying promising rocks and soils for closer examination, and to determine the processes that formed Martian rocks. It measures the infrared radiation that the target rock or object emits in 167 different wavelengths, providing information about the target's composition. One particular goal is to search for minerals that were formed by the action of water, such as carbonates and clays. The instrument can also look skyward to provide temperature profiles of the Martian atmosphere and detect the abundance of dust and water vapor.
The instrument is located inside the warm electronics box in the body of the rover; the mirror redirects radiation into the aperture from above. The Mini-TES instruments aboard the MERs Opportunity and Spirit were never expected to survive the cold Martian winter even if the rovers themselves survived. It was thought that a small potassium bromide (KBr) beamsplitter, housed in an aluminium fitting, would crack due to the mismatched coefficients of thermal expansion. This never happened, however, and the Mini-TES instrument on both rovers has survived several Martian winters; the Spirit rover continues to periodically use the Mini-TES for remote sensing. (The Mini-TES on the Opportunity rover is not currently being used because of accumulated dust on the mirror following the 2007 dust storm.)
There are two other types of spectrometers mounted on the rover's arm which provide additional information about the composition when the rover is close enough to touch the object.
The Mini-TES can work with the Pancam cameras to analyze the rover's surroundings.
The Mini-TES weighs 2.1 kg (4.6 lb) of the total 185 kg (408 lb) for the whole rover.
See also
Heat Flow and Physical Properties Package (included an infrared radiometer)
References
External links
NASA JPL web-page stating purpose of Mini-TES
Technical academic publication on Mini-TES for Mars Exploration Rover
Web-page regarding information recorded by Mini-TES
Slide show of Mini-TES operational details
Mars Exploration Rover mission
Spectrometers
Spacecraft instruments | Mini-TES | [
"Physics",
"Chemistry"
] | 590 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
2,831,127 | https://en.wikipedia.org/wiki/Traffic%20flow | In transportation engineering, traffic flow is the study of interactions between travellers (including pedestrians, cyclists, drivers, and their vehicles) and infrastructure (including highways, signage, and traffic control devices), with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems.
The foundation for modern traffic flow analysis dates back to the 1920s with Frank Knight's analysis of traffic equilibrium, further developed by Wardrop in 1952. Despite advances in computing, a universally satisfactory theory applicable to real-world conditions remains elusive. Current models blend empirical and theoretical techniques to forecast traffic and identify congestion areas, considering variables like vehicle use and land changes.
Traffic flow is influenced by the complex interactions of vehicles, displaying behaviors such as cluster formation and shock wave propagation. Key traffic stream variables include speed, flow, and density, which are interconnected. Free-flowing traffic is characterized by fewer than 12 vehicles per mile per lane, whereas higher densities can lead to unstable conditions and persistent stop-and-go traffic. Models and diagrams, such as time-space diagrams, help visualize and analyze these dynamics. Traffic flow analysis can be approached at different scales: microscopic (individual vehicle behavior), macroscopic (fluid dynamics-like models), and mesoscopic (probability functions for vehicle distributions). Empirical approaches, such as those outlined in the Highway Capacity Manual, are commonly used by engineers to model and forecast traffic flow, incorporating factors like fuel consumption and emissions.
The kinematic wave model, introduced by Lighthill and Whitham in 1955, is a cornerstone of traffic flow theory, describing the propagation of traffic waves and impact of bottlenecks. Bottlenecks, whether stationary or moving, significantly disrupt flow and reduce roadway capacity. The Federal Highway Authority attributes 40% of congestion to bottlenecks. Classical traffic flow theories include the Lighthill-Whitham-Richards model and various car-following models that describe how vehicles interact in traffic streams. An alternative theory, Kerner's three-phase traffic theory, suggests a range of capacities at bottlenecks rather than a single value. The Newell-Daganzo merge model and car-following models further refine our understanding of traffic dynamics and are instrumental in modern traffic engineering and simulation.
History
Attempts to produce a mathematical theory of traffic flow date back to the 1920s, when American Economist Frank Knight first produced an analysis of traffic equilibrium, which was refined into Wardrop's first and second principles of equilibrium in 1952.
Nonetheless, even with the advent of significant computer processing power, to date there has been no satisfactory general theory that can be consistently applied to real flow conditions. Current traffic models use a mixture of empirical and theoretical techniques. These models are then developed into traffic forecasts, and take account of proposed local or major changes, such as increased vehicle use, changes in land use or changes in mode of transport (with people moving from bus to train or car, for example), and to identify areas of congestion where the network needs to be adjusted.
Overview
Traffic behaves in a complex and nonlinear way, depending on the interactions of a large number of vehicles. Due to the individual reactions of human drivers, vehicles do not interact simply following the laws of mechanics, but rather display cluster formation and shock wave propagation, both forward and backward, depending on vehicle density. Some mathematical models of traffic flow use a vertical queue assumption, in which the vehicles along a congested link do not spill back along the length of the link.
In a free-flowing network, traffic flow theory refers to the traffic stream variables of speed, flow, and concentration. These relationships are mainly concerned with uninterrupted traffic flow, primarily found on freeways or expressways.
Flow conditions are considered "free" when fewer than 12 vehicles per mile per lane are on a road. "Stable" is sometimes described as 12–30 vehicles per mile per lane. As the density rises past the optimum density at which the maximum mass flow rate (or flux) occurs (above 30 vehicles per mile per lane), traffic flow becomes unstable, and even a minor incident can result in persistent stop-and-go driving conditions. A "breakdown" condition occurs when traffic becomes unstable and exceeds 67 vehicles per mile per lane. "Jam density" refers to extreme traffic density when traffic flow stops completely, usually in the range of 185–250 vehicles per mile per lane.
However, calculations about congested networks are more complex and rely more on empirical studies and extrapolations from actual road counts. Because these are often urban or suburban in nature, other factors (such as road-user safety and environmental considerations) also influence the optimum conditions.
Traffic stream properties
Traffic flow is generally constrained along a one-dimensional pathway (e.g. a travel lane). A time-space diagram shows graphically the flow of vehicles along a pathway over time. Time is displayed along the horizontal axis, and distance is shown along the vertical axis. Traffic flow in a time-space diagram is represented by the individual trajectory lines of individual vehicles. Vehicles following each other along a given travel lane will have parallel trajectories, and trajectories will cross when one vehicle passes another. Time-space diagrams are useful tools for displaying and analyzing the traffic flow characteristics of a given roadway segment over time (e.g. analyzing traffic flow congestion).
There are three main variables to visualize a traffic stream: speed (v), density (indicated k; the number of vehicles per unit of space), and flow (indicated q; the number of vehicles per unit of time).
Speed
Speed is the distance covered per unit of time. One cannot track the speed of every vehicle; so, in practice, average speed is measured by sampling vehicles in a given area over a period of time. Two definitions of average speed are identified: "time mean speed", the arithmetic mean of the speeds of vehicles passing a fixed point, and "space mean speed", the average speed over a length of roadway.
The "space mean speed" is the harmonic mean of the sampled speeds. The time mean speed is never less than the space mean speed: v_t = v_s + σ_s²/v_s, where σ_s² is the variance of the space mean speed.
In a time-space diagram, the instantaneous velocity, v = dx/dt, of a vehicle is equal to the slope along the vehicle's trajectory. The average velocity of a vehicle is equal to the slope of the line connecting the trajectory endpoints where a vehicle enters and leaves the roadway segment. The vertical separation (distance) between parallel trajectories is the vehicle spacing (s) between a leading and following vehicle. Similarly, the horizontal separation (time) represents the vehicle headway (h). A time-space diagram is useful for relating headway and spacing to traffic flow and density, respectively.
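A minimal numerical sketch of the two averages (Python assumed; the spot speeds are illustrative values): time mean speed is the arithmetic mean of the sampled speeds, space mean speed is their harmonic mean, and the former is never smaller:

<syntaxhighlight lang="python">
speeds = [30.0, 45.0, 60.0, 60.0, 75.0]   # mph, illustrative spot speeds

time_mean = sum(speeds) / len(speeds)                      # arithmetic mean
space_mean = len(speeds) / sum(1.0 / v for v in speeds)    # harmonic mean
print(round(time_mean, 1), round(space_mean, 1))           # 54.0 48.9
</syntaxhighlight>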
Density
Density (k) is defined as the number of vehicles per unit length of the roadway. In traffic flow, the two most important densities are the critical density (kc) and jam density (kj). The maximum density achievable under free flow is kc, while kj is the maximum density achieved under congestion. In general, jam density is five times the critical density. Inverse of density is spacing (s), which is the center-to-center distance between two vehicles.
The density (k) within a length of roadway (L) at a given time (t1) is equal to the inverse of the average spacing of the n vehicles.
In a time-space diagram, the density may be evaluated over a region A as k = tt / |A|, where tt is the total travel time spent by all vehicles in A and |A| is the area of the region.
Flow
Flow (q) is the number of vehicles passing a reference point per unit of time, vehicles per hour. The inverse of flow is headway (h), which is the time that elapses between the ith vehicle passing a reference point in space and the (i + 1)th vehicle. In congestion, h remains constant. As a traffic jam forms, h approaches infinity.
The flow (q) passing a fixed point (x1) during an interval (T) is equal to the inverse of the average headway of the m vehicles.
In a time-space diagram, the flow may be evaluated over a region B as q = td / |B|, where td is the total distance traveled by all vehicles in B and |B| is the area of the region.
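A small sketch of these generalized definitions (Python assumed; the region size and trajectory data are illustrative): density is total travel time divided by the area of the space-time region, flow is total distance traveled divided by the same area, and their ratio gives the space mean speed:

<syntaxhighlight lang="python">
L, T = 0.5, 1.0 / 60.0            # region: 0.5 mi long, 1 minute (in hours)
area = L * T                      # mi·h

# (time spent, distance traveled) inside the region for each observed vehicle
trajectories = [(20 / 3600, 0.30), (25 / 3600, 0.28), (30 / 3600, 0.26)]

tt = sum(t for t, d in trajectories)    # total travel time in the region (h)
td = sum(d for t, d in trajectories)    # total distance traveled in the region (mi)

k = tt / area                     # density, vehicles per mile
q = td / area                     # flow, vehicles per hour
print(round(k, 1), round(q, 1), round(q / k, 1))   # 2.5 100.8 40.3
</syntaxhighlight>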
Methods of analysis
Analysts approach the problem in three main ways, corresponding to the three main scales of observation in physics:
Microscopic scale: At the most basic level, every vehicle is considered as an individual. An equation can be written for each, usually an ordinary differential equation (ODE). Cellular automaton models can also be used, where the road is divided into cells, each of which contains a moving car, or is empty. The Nagel–Schreckenberg model is a simple example of such a model. As the cars interact it can model collective phenomena such as traffic jams.
Macroscopic scale: Similar to models of fluid dynamics, it is considered useful to employ a system of partial differential equations, which balance laws for some gross quantities of interest; e.g., the density of vehicles or their mean velocity.
Mesoscopic (kinetic) scale: A third, intermediate possibility is to define a function f(t, x, v) which expresses the probability of having a vehicle at time t in position x running with velocity v. This function, following methods of statistical mechanics, can be computed using an integro-differential equation such as the Boltzmann equation.
The engineering approach to analysis of highway traffic flow problems is primarily based on empirical analysis (i.e., observation and mathematical curve fitting). One major reference used by American planners is the Highway Capacity Manual, published by the Transportation Research Board, which is part of the United States National Academy of Sciences. This recommends modelling traffic flows using the whole travel time across a link using a delay/flow function, including the effects of queuing. This technique is used in many US traffic models and in the SATURN model in Europe.
In many parts of Europe, a hybrid empirical approach to traffic design is used, combining macro-, micro-, and mesoscopic features. Rather than simulating a steady state of flow for a journey, transient "demand peaks" of congestion are simulated. These are modeled by using small "time slices" across the network throughout the working day or weekend. Typically, the origins and destinations for trips are first estimated and a traffic model is generated before being calibrated by comparing the mathematical model with observed counts of actual traffic flows, classified by type of vehicle. "Matrix estimation" is then applied to the model to achieve a better match to observed link counts before any changes, and the revised model is used to generate a more realistic traffic forecast for any proposed scheme. The model would be run several times (including a current baseline, an "average day" forecast based on a range of economic parameters and supported by sensitivity analysis) in order to understand the implications of temporary blockages or incidents around the network. From the models, it is possible to total the time taken for all drivers of different types of vehicle on the network and thus deduce average fuel consumption and emissions.
Much of UK, Scandinavian, and Dutch authority practice is to use the modelling program CONTRAM for large schemes, which has been developed over several decades under the auspices of the UK's Transport Research Laboratory, and more recently with the support of the Swedish Road Administration. By modelling forecasts of the road network for several decades into the future, the economic benefits of changes to the road network can be calculated, using estimates for value of time and other parameters. The output of these models can then be fed into a cost-benefit analysis program.
Cumulative vehicle count curves (N-curves)
A cumulative vehicle count curve, the N-curve, shows the cumulative number of vehicles that pass a certain location x by time t, measured from the passage of some reference vehicle. This curve can be plotted if the arrival times are known for individual vehicles approaching a location x, and the departure times are also known as they leave location x. Obtaining these arrival and departure times could involve data collection: for example, one could set two point sensors at locations X1 and X2, and count the number of vehicles that pass this segment while also recording the time each vehicle arrives at X1 and departs from X2. The resulting plot is a pair of cumulative curves where the vertical axis (N) represents the cumulative number of vehicles that pass the two points: X1 and X2, and the horizontal axis (t) represents the elapsed time from X1 and X2.
If vehicles experience no delay as they travel from X1 to X2, then the arrivals of vehicles at location X1 are represented by curve N1 and the arrivals of the vehicles at location X2 are represented by N2 in figure 8. More commonly, curve N1 is known as the arrival curve of vehicles at location X1 and curve N2 is known as the arrival curve of vehicles at location X2. Using a one-lane signalized approach to an intersection as an example, where X1 is the location of the stop bar at the approach and X2 is an arbitrary line on the receiving lane just across the intersection, when the traffic signal is green, vehicles can travel through both points with no delay and the time it takes to travel that distance is equal to the free-flow travel time. Graphically, this is shown as the two separate curves in figure 8.
However, when the traffic signal is red, vehicles arrive at the stop bar (X1) and are delayed by the red light before crossing X2 some time after the signal turns green. As a result, a queue builds at the stop bar as more vehicles are arriving at the intersection while the traffic signal is still red. Therefore, for as long as vehicles arriving at the intersection are still hindered by the queue, the curve N2 no longer represents the vehicles’ arrival at location X2; it now represents the vehicles’ virtual arrival at location X2, or in other words, it represents the vehicles' arrival at X2 if they did not experience any delay. The vehicles' arrival at location X2, taking into account the delay from the traffic signal, is now represented by the curve N′2 in figure 9.
However, the concept of the virtual arrival curve is flawed. This curve does not correctly show the queue length resulting from the interruption in traffic (i.e. red signal). It assumes that all vehicles are still reaching the stop bar before being delayed by the red light. In other words, the virtual arrival curve portrays the stacking of vehicles vertically at the stop bar. When the traffic signal turns green, these vehicles are served in a first-in-first-out (FIFO) order. For a multi-lane approach, however, the service order is not necessarily FIFO. Nonetheless, the interpretation is still useful because of the concern with average total delay instead of total delays for individual vehicles.
Step function vs. smooth function
The traffic light example depicts N-curves as smooth functions. Theoretically, however, plotting N-curves from collected data should result in a step-function (figure 10). Each step represents the arrival or departure of one vehicle at that point in time. When the N-curve is drawn on larger scale reflecting a period of time that covers several cycles, then the steps for individual vehicles can be ignored, and the curve will then look like a smooth function (figure 8).
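In the step-function view, the delay of each vehicle can be read directly off the two curves: it is the horizontal distance between its virtual arrival (arrival at X1 plus the free-flow travel time) and its departure at X2. A small sketch (Python assumed; the times are illustrative) computes individual and total delay behind a red signal:

<syntaxhighlight lang="python">
free_flow = 5.0                                    # seconds from X1 to X2, illustrative
arrivals   = [0.0, 2.0, 4.0, 6.0, 8.0]             # arrival times at X1
departures = [5.0, 12.0, 14.0, 16.0, 18.0]         # departure times at X2 (after a red signal)

# Delay of vehicle i = departure time - virtual arrival time.
delays = [dep - (arr + free_flow) for arr, dep in zip(arrivals, departures)]
print(delays, sum(delays))   # [0.0, 5.0, 5.0, 5.0, 5.0] 20.0
</syntaxhighlight>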
Traffic assignment
The aim of traffic flow analysis is to create and implement a model which would enable vehicles to reach their destination in the shortest possible time using the maximum roadway capacity. This is a four-step process:
Generation – the program estimates how many trips would be generated. For this, the program needs the statistical data of residence areas by population, location of workplaces etc.;
Distribution – after generation, the model builds the different Origin-Destination (OD) pairs between the locations found in step 1;
Modal Split/Mode Choice – the system has to decide what proportion of trips will be split among the different modes of available transport, e.g. cars, buses, rail, etc.;
Route Assignment – finally, routes are assigned to the vehicles based on minimum criterion rules.
This cycle is repeated until the solution converges.
There are two main approaches to tackle this problem with the end objectives:
System optimum
User equilibrium
System optimum
In short, a network is in system optimum (SO) when the total system cost is the minimum among all possible assignments.
System Optimum is based on the assumption that routes of all vehicles would be controlled by the system, and that rerouting would be based on maximum utilization of resources and minimum total system cost. (Cost can be interpreted as travel time.) Hence, in a System Optimum routing algorithm, all routes between a given OD pair have the same marginal cost.
In traditional transportation economics, System Optimum is determined by the equilibrium of the demand function and the marginal cost function. In this approach, marginal cost is roughly depicted as an increasing function of traffic congestion. In the traffic flow approach, the marginal cost of the trip can be expressed as the sum of the cost (delay time, w) experienced by the driver and the externality (e) that a driver imposes on the rest of the users.
Suppose there is a freeway (0) and an alternative route (1), onto which users can be diverted at an off-ramp. The operator knows the total arrival rate (A(t)), the capacity of the freeway (μ0), and the capacity of the alternative route (μ1). From the time t0, when the freeway becomes congested, some of the users start moving to the alternative route. However, at time t1, the alternative route also reaches capacity. The operator then decides the number of vehicles (N) that use the alternative route. The optimal number of vehicles (N) can be obtained by the calculus of variations, so as to make the marginal cost of each route equal. Thus, the optimal condition is T0 = T1 + ∆1. In this graph, we can see that the queue on the alternative route should clear ∆1 time units before it clears from the freeway. This solution does not define how we should allocate the vehicles arriving between t1 and T1; we can only conclude that the optimal solution is not unique. If the operator wants the freeway not to be congested, the operator can impose a congestion toll, e0 − e1, which is the difference between the externality of the freeway and that of the alternative route. In this situation, the freeway will maintain free-flow speed, but the alternative route will be extremely congested.
User equilibrium
In brief, a network is in user equilibrium (UE) when every driver chooses the route with the lowest cost between origin and destination, regardless of whether total system cost is minimized.
The user optimum equilibrium assumes that all users choose their own route towards their destination based on the travel time that will be consumed in the different route options. The users will choose the route which requires the least travel time. The user optimum model is often used in simulating the impact on traffic assignment by highway bottlenecks. When congestion occurs on the highway, it extends the delay time in travelling through the highway and creates a longer travel time. Under the user optimum assumption, the users would choose to wait until the travel time using a certain freeway is equal to the travel time using city streets, and hence equilibrium is reached. This equilibrium is called User Equilibrium, Wardrop Equilibrium or Nash Equilibrium.
The core principle of User Equilibrium is that all used routes between a given OD pair have the same travel time. An alternative route option begins to be used when the actual travel time on the system has reached the free-flow travel time on that route.
For a highway user optimum model considering one alternative route, a typical process of traffic assignment is shown in figure 15. When the traffic demand stays below the highway capacity, the delay time on the highway stays zero. When the traffic demand exceeds the capacity, a queue of vehicles appears on the highway and the delay time increases. Some of the users will turn to the city streets when the delay time reaches the difference between the free-flow travel time on the highway and the free-flow travel time on the city streets. This indicates that the users staying on the highway will spend as much travel time as the ones who turn to the city streets. At this stage, the travel time on both the highway and the alternative route stays the same. This situation ends when the demand falls below the road capacity, that is, when the travel time on the highway begins to decrease and all the users stay on the highway. The total of areas 1 and 3 represents the benefits of providing an alternative route. The total of areas 4 and 2 shows the total delay cost in the system, in which area 4 is the total delay that occurs on the highway and area 2 is the extra delay from shifting traffic to the city streets.
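A minimal sketch of this equilibrium condition (Python assumed; the two travel-time functions are illustrative BPR-style curves, not data from the figure): a fixed demand is split between the highway and the city streets so that, when both routes are used, their travel times are equal:

<syntaxhighlight lang="python">
def t_highway(x):  return 10.0 * (1.0 + 0.15 * (x / 2000.0) ** 4)   # minutes
def t_streets(x):  return 15.0 * (1.0 + 0.15 * (x / 1000.0) ** 4)   # minutes

def user_equilibrium(demand, iters=60):
    """Bisect on the highway flow until the two route travel times are equal."""
    lo, hi = 0.0, demand
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if t_highway(mid) > t_streets(demand - mid):
            hi = mid                   # highway too slow: shift flow to the streets
        else:
            lo = mid                   # highway still faster: shift flow to the highway
    return 0.5 * (lo + hi)

x = user_equilibrium(3000.0)
# At equilibrium the two printed travel times are (nearly) equal.
print(round(x), round(t_highway(x), 2), round(t_streets(3000.0 - x), 2))
</syntaxhighlight>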
The navigation function in Google Maps can be regarded as a typical industrial application of dynamic traffic assignment based on user equilibrium, since it provides every user with the lowest-cost (travel time) routing option.
Time delay
Both User Optimum and System Optimum can be subdivided into two categories on the basis of the approach of time delay taken for their solution:
Predictive Time Delay
Predictive time delay assumes that the user of the system knows exactly how long the delay ahead is going to be. With predictive delay, the user knows when a certain congestion level will be reached and when the delay of that system would exceed that of the other system, so the decision to reroute can be made in time. In the vehicle counts–time diagram, the predictive delay at time t is the horizontal line segment on the right side of time t, between the arrival and departure curves, shown in Figure 16; the corresponding y coordinate is the number n of the vehicle that leaves the system at time t.
Reactive Time Delay
Reactive time delay is when the user has no knowledge of the traffic conditions ahead. The user waits to experience the point where the delay is observed, and the decision to reroute is a reaction to that experience at the moment. Predictive delay gives significantly better results than the reactive delay method. In the vehicle counts–time diagram, the reactive delay at time t is the horizontal line segment on the left side of time t, between the arrival and departure curves, shown in Figure 16; the corresponding y coordinate is the number n of the vehicle that enters the system at time t.
Variable speed limit assignment
This is an emerging approach to eliminating shock waves and increasing safety for vehicles. The concept is based on the fact that the risk of accident on a roadway increases with the speed differential between the upstream and downstream vehicles. The two types of crash risk which can be reduced by VSL implementation are the rear-end crash and the lane-change crash. Variable speed limits seek to homogenize speed, leading to a more constant flow. Different approaches have been implemented by researchers to build a suitable VSL algorithm.
Variable speed limits are usually enacted when sensors along the roadway detect that congestion or weather events have exceeded thresholds. The roadway speed limit will then be reduced in 5-mph increments through the use of signs above the roadway (Dynamic Message Signs) controlled by the Department of Transportation. The goal of this process is both to increase safety through accident reduction and to avoid or postpone the onset of congestion on the roadway. The ideal resulting traffic flow is slower overall, but less stop-and-go, resulting in fewer instances of rear-end and lane-change crashes. The use of VSLs also regularly employs shoulder lanes permitted for travel only under the congested states which this process aims to combat. The need for a variable speed limit is shown by the flow-density diagram to the right.
In this figure ("Flow-Speed Diagram for a Typical Roadway"), the apex of the curve represents optimal traffic movement in both flow and speed. However, beyond this point the speed of travel quickly reaches a threshold and starts to decline rapidly. In order to reduce the potential risk of this rapid speed decline, variable speed limits reduce the speed at a more gradual rate (5-mph increments), allowing drivers to have more time to prepare and acclimate to the slowdown due to congestion/weather. The development of a uniform travel speed reduces the probability of erratic driver behavior and therefore crashes.
Through historical data obtained at VSL sites, it has been determined that implementation of this practice reduces accident numbers by 20-30%.
In addition to safety and efficiency concerns, VSLs can also garner environmental benefits such as decreased emissions, noise, and fuel consumption. This is due to the fact that vehicles are more fuel-efficient when at a constant rate of travel, rather than in a state of constant acceleration and deceleration as usually found in congested conditions.
Road junctions
A major consideration in road capacity relates to the design of junctions. By allowing long "weaving sections" on gently curving roads at graded intersections, vehicles can often move across lanes without causing significant interference to the flow. However, this is expensive and takes up a large amount of land, so other patterns are often used, particularly in urban or very rural areas. Most large models use crude simulations for intersections, but computer simulations are available to model specific sets of traffic lights, roundabouts, and other scenarios where flow is interrupted or shared with other types of road users or pedestrians. A well-designed junction can enable significantly more traffic flow at a range of traffic densities during the day. By matching such a model to an "Intelligent Transport System", traffic can be sent in uninterrupted "packets" of vehicles at predetermined speeds through a series of phased traffic lights.
The UK's TRL has developed junction modelling programs for small-scale local schemes that can take account of detailed geometry and sight lines; ARCADY for roundabouts, PICADY for priority intersections, and OSCADY and TRANSYT for signals. Many other junction analysis software packages exist, such as Sidra, LinSig, and Synchro.
Kinematic wave model
The kinematic wave model was first applied to traffic flow by Lighthill and Whitham in 1955. Their two-part paper first developed the theory of kinematic waves using the motion of water as an example. In the second half, they extended the theory to traffic on “crowded arterial roads.” This paper was primarily concerned with developing the idea of traffic “humps” (increases in flow) and their effects on speed, especially through bottlenecks.
The authors began by discussing previous approaches to traffic flow theory. They note that at the time there had been some experimental work, but that “theoretical approaches to the subject [were] in their infancy.” One researcher in particular, John Glen Wardrop, was primarily concerned with statistical methods of examination, such as space mean speed, time mean speed, and “the effect of increase of flow on overtaking” and the resulting decrease in speed it would cause. Other previous research had focused on two separate models: one related traffic speed to traffic flow and another related speed to the headway between vehicles.
The goal of Lighthill and Whitham, on the other hand, was to propose a new method of study “suggested by theories of the flow about supersonic projectiles and of flood movement in rivers.” The resulting model would capture both of the aforementioned relationships, speed-flow and speed-headway, into a single curve, which would “[sum] up all the properties of a stretch of road which are relevant to its ability to handle the flow of congested traffic.” The model they presented related traffic flow to concentration (now typically known as density). They wrote, “The fundamental hypothesis of the theory is that at any point of the road the flow q (vehicles per hour) is a function of the concentration k (vehicles per mile).” According to this model, traffic flow resembled the flow of water in that “Slight changes in flow are propagated back through the stream of vehicles along ‘kinematic waves,’ whose velocity relative to the road is the slope of the graph of flow against concentration.” The authors included an example of such a graph; this flow-versus-concentration (density) plot is still used today (see figure 3 above).
The authors used this flow-concentration model to illustrate the concept of shock waves, which slow down vehicles which enter them, and the conditions that surround them. They also discussed bottlenecks and intersections, relating both to their new model. For each of these topics, flow-concentration and time-space diagrams were included. Finally, the authors noted that no agreed-upon definition for capacity existed, and argued that it should be defined as the “maximum flow of which the road is capable.” Lighthill and Whitham also recognized that their model had a significant limitation: it was only appropriate for use on long, crowded roadways, as the “continuous flow” approach only works with a large number of vehicles.
Components of the kinematic wave model of traffic flow theory
The kinematic wave model of traffic flow theory is the simplest dynamic traffic flow model that reproduces the propagation of traffic waves. It is made up of three components: the fundamental diagram, the conservation equation, and initial conditions. The law of conservation is the fundamental law governing the kinematic wave model: ∂k/∂t + ∂q/∂x = 0.
The fundamental diagram of the kinematic wave model relates traffic flow with density, as seen in figure 3 above. It can be written as q = Q(k).
Finally, initial conditions must be defined to solve a problem using the model. A boundary is defined as a function g representing density as a function of time and position. These boundaries typically take two different forms, resulting in initial value problems (IVPs) and boundary value problems (BVPs). Initial value problems give the traffic density at time t = 0, such that k(x, 0) = g(x), where g is the given density function. Boundary value problems give some function g(t) that represents the density at a fixed position x0, such that k(x0, t) = g(t).
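To make these components concrete, the sketch below solves the conservation equation with a triangular fundamental diagram using a simple Godunov-type (demand/supply) update; all parameter values and the discretization are illustrative assumptions, not from the source:

```python
import numpy as np

# Triangular fundamental diagram q(k): free-flow speed vf, jam density kj, capacity qmax.
vf, kj, qmax = 60.0, 120.0, 1800.0            # mph, veh/mile, veh/h
kc = qmax / vf                                 # critical density
w = qmax / (kj - kc)                           # congested (backward) wave speed

def demand(k):   # maximum flow a cell can send downstream
    return np.minimum(vf * k, qmax)

def supply(k):   # maximum flow a cell can receive
    return np.minimum(w * (kj - k), qmax)

def lwr_step(k, dx, dt):
    """One Godunov update of dk/dt + dq(k)/dx = 0 using demand/supply fluxes."""
    flux = np.minimum(demand(k[:-1]), supply(k[1:]))   # interior interface fluxes
    flux = np.concatenate(([demand(k[0])], flux, [demand(k[-1])]))  # open boundaries
    return k - dt / dx * (flux[1:] - flux[:-1])

# Initial value problem: a dense platoon in otherwise light traffic.
dx = 0.1                                       # miles
dt = 0.9 * dx / vf                             # CFL-safe time step (hours)
k = np.full(100, 20.0)
k[40:60] = 100.0
for _ in range(200):
    k = lwr_step(k, dx, dt)
print(k.round(1))
```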
The model has many uses in traffic flow. One of the primary uses is in modeling traffic bottlenecks, as described in the following section.
Traffic bottleneck
Traffic bottlenecks are disruptions of traffic on a roadway caused by road design, traffic lights, or accidents. There are two general types of bottlenecks, stationary and moving bottlenecks. Stationary bottlenecks arise from a disturbance at a fixed location, such as a narrowing of the roadway or an accident. Moving bottlenecks, on the other hand, are vehicles or vehicle behavior that disrupt the vehicles upstream of the vehicle in question. Generally, moving bottlenecks are caused by heavy trucks, as they are slow-moving vehicles with less acceleration and may also make lane changes.
Bottlenecks are important considerations because they reduce the flow of traffic and the average speed of vehicles. The main consequence of a bottleneck is an immediate reduction in the capacity of the roadway. The Federal Highway Administration has stated that 40% of all congestion comes from bottlenecks.
Stationary bottleneck
The general cause of stationary bottlenecks is a lane drop, which occurs when a multilane roadway loses one or more of its lanes. This causes the vehicular traffic in the ending lanes to merge onto the other lanes.
Moving bottleneck
As explained above, moving bottlenecks are caused by slow-moving vehicles that disrupt traffic. Moving bottlenecks can be active or inactive bottlenecks. If the reduced capacity (qu) caused by a moving bottleneck is greater than the actual capacity (μ) downstream of the vehicle, then this bottleneck is said to be an active bottleneck.
Classical traffic flow theories
The generally accepted classical fundamentals and methodologies of traffic and transportation theory are as follows:
The Lighthill-Whitham-Richards (LWR) model introduced in 1955–56. Daganzo introduced a cell-transmission model (CTM) that is consistent with the LWR model.
A traffic flow instability that causes a growing wave of a local reduction of the vehicle speed. This classical traffic flow instability was introduced in 1959–61 in the General Motors (GM) car-following model by Herman, Gazis, Montroll, Potts, and Rothery. The classical traffic flow instability of the GM model has been incorporated in a huge number of traffic flow models like Gipps's model, Payne's model, Newell's optimal velocity (OV) model, Wiedemann's model, Whitham's model, the Nagel-Schreckenberg (NaSch) cellular automaton (CA) model, Bando et al. OV model, Treiber's IDM, Krauß model, the Aw-Rascle model and many other well-known microscopic and macroscopic traffic-flow models, which are the basis of traffic simulation tools widely used by traffic engineers and researchers (see, e.g., references in review).
The understanding of highway capacity as a particular value. This understanding of road capacity was probably introduced between 1920 and 1935. Currently, it is assumed that the highway capacity of free flow at a highway bottleneck is a stochastic value. However, in accordance with the classical understanding of highway capacity, it is assumed that at a given time instant there can be only one particular value of this stochastic highway capacity (see references in the book).
Wardrop's user equilibrium (UE) and system optimum (SO) principles for traffic and transportation network optimization and control.
Alternatives: Kerner's three phase traffic theory
Three-phase traffic theory is an alternative theory of traffic flow created by Boris Kerner at the end of the 1990s (for reviews, see the books). Probably the most important result of the three-phase theory is that at any time instant there is a range of highway capacities of free flow at a bottleneck, lying between some maximum and minimum capacity. This range of highway capacities of free flow at the bottleneck fundamentally contradicts classical traffic theories as well as methods for traffic management and traffic control, which at any time instant assume the existence of a particular deterministic or stochastic highway capacity of free flow at the bottleneck. Non-specialists who have never studied traffic phenomena before can find simplified explanations of real measured vehicle traffic phenomena leading to the emergence of Kerner's three-phase traffic theory in the book; some engineering applications of Kerner's theory can be found in the book.
Newell-Daganzo Merge Models
In the condition where traffic flows leave two branch roadways and merge into a single flow on a single roadway, determining the flows that pass through the merging process and the state of each branch becomes an important task for traffic engineers. The Newell-Daganzo merge model is a good approach to solving these problems. This simple model combines Gordon Newell's description of the merging process with Daganzo's cell transmission model. In order to apply the model to determine the flows exiting the two branch roadways and the state of each branch, one needs to know the capacities of the two input branches, the exiting capacity, the demands for each branch, and the number of lanes of the single roadway. The merge ratio is calculated in order to determine the proportion of the two input flows when both branches of the roadway are operating in congested conditions.
As can be seen in a simplified model of the merging process, the exiting capacity of the system is defined to be μ, the capacities of the two input branches are defined as μ1 and μ2, and the demands for each branch are defined as q1D and q2D. The outputs of the model, q1 and q2, are the flows that pass through the merging process. The model assumes that the sum of the capacities of the two input branches does not exceed the exiting capacity of the system, μ1 + μ2 ≤ μ.
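A minimal sketch of the merge calculation just described; the case logic follows the usual statement of the Newell-Daganzo merge with a merge ratio p, and the numerical values are illustrative assumptions:

```python
def newell_daganzo_merge(q1d, q2d, mu, p=0.5):
    """Flows (q1, q2) passing a merge with downstream capacity mu.
    q1d, q2d: branch demands (already capped by their own capacities mu1, mu2).
    p: merge ratio, the priority share of branch 1 when both branches queue."""
    if q1d + q2d <= mu:                 # downstream can serve both demands
        return q1d, q2d
    if q1d <= p * mu:                   # branch 1 unconstrained; branch 2 takes the rest
        return q1d, mu - q1d
    if q2d <= (1 - p) * mu:             # branch 2 unconstrained
        return mu - q2d, q2d
    return p * mu, (1 - p) * mu         # both congested: split by the merge ratio

print(newell_daganzo_merge(q1d=1200, q2d=900, mu=1800, p=0.6))  # -> (1080.0, 720.0)
```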
Car-following models
Car-following models describe how one vehicle follows another vehicle in an uninterrupted traffic flow. They are a type of microscopic traffic flow model.
Examples of car-following models
Newell's car-following model
Louis A. Pipes began researching car-following behavior in the early 1950s, and his work gained public recognition. The Pipes car-following model is based on a safe driving rule in the California Motor Vehicle Code and uses an assumption of safe distance: a good rule for following another vehicle is to allocate an inter-vehicle distance of at least the length of a car for every ten miles per hour of vehicle speed.
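The safe-distance rule attributed to Pipes translates directly into a one-line formula; the car length used below is an assumption:

```python
def pipes_safe_gap(speed_mph, car_length_ft=20.0):
    """Minimum inter-vehicle distance: one car length per 10 mph of speed."""
    return car_length_ft * speed_mph / 10.0

for v in (10, 30, 60):
    print(v, "mph ->", pipes_safe_gap(v), "ft")
```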
To capture potential nonlinear effects in the dynamics of car following, G. F. Newell proposed a nonlinear car-following model based on empirical data. Unlike the Pipes model, which relies solely on rules of safe driving, Newell's nonlinear model aims to capture the correct shape of the fundamental diagrams (e.g., density-speed, flow-speed, density-flow, spacing-speed, pace-headway, etc.).
The optimal velocity model (OVM) was introduced by Bando et al. in 1995, based on the assumption that each driver tries to reach an optimal velocity according to the inter-vehicle distance and the velocity difference with the preceding vehicle.
The intelligent driver model (IDM) is widely adopted in research on connected vehicles (CVs) and connected and autonomous vehicles (CAVs).
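For reference, a minimal sketch of the IDM acceleration rule; the parameter values are typical textbook choices rather than values from the source:

```python
import math

def idm_acceleration(v, gap, dv, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0, delta=4):
    """IDM acceleration (m/s^2).
    v: own speed, gap: bumper-to-bumper gap to the leader, dv: own speed minus leader speed."""
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a * b))   # desired dynamic gap
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

print(idm_acceleration(v=25.0, gap=30.0, dv=2.0))
```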
See also
Braess's paradox
Data flow
Dijkstra's algorithm
Epidemiology of motor vehicle collisions
Floating car data
Green transport hierarchy
Infrared traffic logger
Truck lane restriction
Road traffic control
Road traffic safety#Statistics
Rule 184
Traffic counter
Traffic engineering
Turning movement counters
References
Further reading
A survey about the state of art in traffic flow modeling:
N. Bellomo, V. Coscia, M. Delitala, On the Mathematical Theory of Vehicular Traffic Flow I. Fluid Dynamic and Kinetic Modelling, Math. Mod. Meth. App. Sc., Vol. 12, No. 12 (2002) 1801–1843
S. Maerivoet, Modelling Traffic on Motorways: State-of-the-Art, Numerical Data Analysis, and Dynamic Traffic Assignment, Katholieke Universiteit Leuven, 2006
M. Garavello and B. Piccoli, Traffic Flow on Networks, American Institute of Mathematical Sciences (AIMS), Springfield, MO, 2006. pp. xvi+243
Carlos F. Daganzo, "Fundamentals of Transportation and Traffic Operations", Pergamon-Elsevier, Oxford, U.K. (1997)
B.S. Kerner, Introduction to Modern Traffic Flow Theory and Control: The Long Road to Three-Phase Traffic Theory, Springer, Berlin, New York 2009
Cassidy, M.J. and R.L. Bertini. "Observations at a Freeway Bottleneck." Transportation and Traffic Theory (1999).
Daganzo, Carlos F. "A Simple Traffic Analysis Procedure." Networks and Spatial Economics 1.i (2001): 77–101.
Lindgren, Roger V.F. "Analysis of Flow Features in Queued Traffic on a German Freeway." Portland State University (2005).
Ni, B. and J.D. Leonard. "Direct Methods of Determining Traffic Stream Characteristics by Definition." Transportation Research Record (2006).
Useful books from the physical point of view:
M. Treiber and A. Kesting, "Traffic Flow Dynamics", Springer, 2013
B.S. Kerner, The Physics of Traffic, Springer, Berlin, New York 2004
Traffic flow on arxiv.org
May, Adolf. Traffic Flow Fundamentals. Prentice Hall, Englewood Cliffs, NJ, 1990.
Taylor, Nicholas. The Contram dynamic traffic assignment model TRL 2003
External links
The Transportation Research Board's (TRB) fifth edition of the Highway Capacity Manual (HCM 2010)
Road transport
Mathematical physics
Conservation equations
Road traffic management | Traffic flow | [
"Physics",
"Mathematics"
] | 8,159 | [
"Applied mathematics",
"Conservation laws",
"Theoretical physics",
"Mathematical objects",
"Equations",
"Conservation equations",
"Mathematical physics",
"Symmetry",
"Physics theorems"
] |
2,831,867 | https://en.wikipedia.org/wiki/Helium%20mass%20spectrometer | A helium mass spectrometer is an instrument commonly used to detect and locate small leaks. It was initially developed in the Manhattan Project during World War II to find extremely small leaks in the gas diffusion process of uranium enrichment plants. It typically uses a vacuum chamber in which a sealed container filled with helium is placed. Helium leaks out of the container, and the rate of the leak is detected by a mass spectrometer.
Detection technique
Helium is used as a tracer because it penetrates small leaks rapidly. Helium also has the properties of being non-toxic, chemically inert and present in the atmosphere only in minute quantities (5 ppm). Typically a helium leak detector will be used to measure leaks in the range of 10⁻⁵ to 10⁻¹³ Pa·m³/s.
A flow of 10⁻⁵ Pa·m³/s is about 0.006 ml per minute at standard conditions for temperature and pressure (STP).
A flow of 10⁻¹³ Pa·m³/s is about 0.003 ml per century at STP.
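To relate the leak-rate unit Pa·m³/s to the volumetric figures quoted above, a small conversion helper; it assumes a reference pressure of 101325 Pa and that the quoted rates are 10⁻⁵ and 10⁻¹³ Pa·m³/s as reconstructed above:

```python
ATM_PA = 101325.0

def leak_rate_to_ml_per_minute(pa_m3_per_s):
    """Convert a leak rate in Pa·m^3/s to ml/min of gas at atmospheric pressure."""
    m3_per_s = pa_m3_per_s / ATM_PA      # volumetric flow at 1 atm
    return m3_per_s * 1e6 * 60           # m^3/s -> ml/min

print(leak_rate_to_ml_per_minute(1e-5))                              # ~0.006 ml/min
minutes_per_century = 60 * 24 * 365.25 * 100
print(leak_rate_to_ml_per_minute(1e-13) * minutes_per_century)       # ~0.003 ml/century
```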
Types of leaks
There are typically two types of leaks relevant to helium leak detection: residual leaks and virtual leaks. A residual leak is a real leak due to an imperfect seal, a puncture, or some other hole in the system. A virtual leak is the semblance of a leak in a vacuum system caused by outgassing of chemicals trapped in or adhered to the interior of a system that is actually sealed. As the gases are released into the chamber, they can create a false positive indication of a residual leak in the system.
Uses
Helium mass spectrometer leak detectors are used in production line industries such as refrigeration and air conditioning, automotive parts, carbonated beverage containers, food packaging and aerosol packaging, as well as in the manufacture of steam products, gas bottles, fire extinguishers, tire valves, and numerous other products including all vacuum systems.
Test methods
Global helium spray
This method requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested will be located in some kind of tent in which the helium concentration will be raised to 100%.
If the part is small the vacuum system included in the leak testing instrument will be able to reach low enough pressure to allow for mass spectrometer operation.
If the size of the part is too large, an additional vacuum pumping system may be required to reach low enough pressure in a reasonable length of time. Once operating pressure has been reached, the mass spectrometer can start its measuring operation.
If leakage is encountered the small and "agile" molecules of helium will migrate through the cracks into the part. The vacuum system will carry any tracer gas molecule into the analyzer cell of the magnetic sector mass spectrometer. A signal will inform the operator of the value of the leakage encountered.
Local helium spray
This method is a small variation from the one above.
It still requires the part to be tested to be connected to a helium leak detector. The outer surface of the part to be tested is sprayed with a localized stream of helium tracer gas.
If the part is small the vacuum system included in the instrument will be able to reach low enough pressure to allow for mass spectrometer operation.
If the size of the part is too large, an additional pumping system may be required to reach low enough pressure in a reasonable length of time. Once operating pressure has been reached, the mass spectrometer can start its measuring operation.
If leakage is encountered the small and "agile" molecules of helium will migrate through the cracks into the part. The vacuum system will carry any tracer gas molecule into the analyzer cell of the magnetic sector mass spectrometer. A signal will inform the operator of the value of the leakage encountered. Thus correlation between maximum leakage signal and location of helium spray head will allow the operator to pinpoint the leaky area.
Helium charged vacuum test
In this case the part is pressurized with helium (sometimes this test is combined with a burst test, e.g. at 40 bar) while sitting in a vacuum chamber. The vacuum chamber is connected to a vacuum pumping system and a leak detector. Once the vacuum has reached the mass spectrometer operating pressure, any helium leakage will be measured.
This test method applies to a lot of components that will operate under pressure: airbag canisters, evaporators, condensers, high-voltage SF6 filled switchgear.
Partial vacuum method (ultra sniffer test)
In contrast to the helium charged sniffer test, the partial vacuum method, or ultra sniffer test gas method (UST method), uses the partial vacuum effect, so that the gas tightness of a test sample can be detected at normal pressure with the same sensitivity as the helium charged vacuum test. The method has a sensitivity of 10 Pa·m·s.
Similar to the classical helium charged sniffer test, the test sample is enclosed in a bag, but in contrast to the classic method, the bag is flushed with a helium-free gas, so that the helium concentration inside the bag can be reduced from 5·10 to 10 Pa·m·s. This sensitivity corresponds to a theoretical gas loss of 1 cm³ in 3000 years.
The UST method can be used very economically for the ad hoc testing of test samples. The test system can be set up easily with normal pneumatic items, such as valves and plastic hoses. For embedding the test samples, a simple plastic bag is sufficient. The UST method was also used for the leak testing of components of the fusion experiment Wendelstein 7-X in Germany.
Bombing test
This method applies to objects that are supposedly sealed.
First the device under test will be exposed for an extended length of time to a high helium pressure in a "bombing" chamber.
If the part is leaky, helium will be able to penetrate the device.
Later the device will be placed in a vacuum chamber, connected to a vacuum pump and a mass spectrometer. The tiny amount of gas that entered the device under pressure will be released in the vacuum chamber and sent to the mass spectrometer where the leak rate will be measured.
This test method applies to implantable medical devices, crystal oscillators, and SAW filter devices.
This method is not able to detect a massive leak as the tracer gas will be quickly pumped out when test chamber is pumped down.
Helium charged sniffer test
In this last case the part is pressurized with helium. The mass spectrometer is fitted with a special device, a sniffer probe, that allows it to sample air (and tracer gas when confronted with a leak) at atmospheric pressure and to bring it into the mass spectrometer.
This mode of operation is frequently used to locate a leak that has been detected by other methods, in order to allow the part to be repaired. Modern machines can digitally subtract the helium background, allowing detection two decades below the background level, and thus it is now possible to detect leaks as small as 5·10 Pa·m·s in sniffing mode.
See also
Mass spectrometry
Tracer-gas leak testing
Leak noise correlator
Helium analyzer
References
External links
Test of medical devices (FDA)
Leak Detection
UST method
Mass spectrometry
Vacuum systems | Helium mass spectrometer | [
"Physics",
"Chemistry",
"Engineering"
] | 1,492 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Vacuum",
"Mass spectrometry",
"Vacuum systems",
"Matter"
] |
2,832,880 | https://en.wikipedia.org/wiki/Experimental%20testing%20of%20time%20dilation | Time dilation as predicted by special relativity is often verified by means of particle lifetime experiments. According to special relativity, the rate of a clock C traveling between two synchronized laboratory clocks A and B, as seen by a laboratory observer, is slowed relative to the laboratory clock rates. Since any periodic process can be considered a clock, the lifetimes of unstable particles such as muons must also be affected, so that moving muons should have a longer lifetime than resting ones. A variety of experiments confirming this effect have been performed both in the atmosphere and in particle accelerators. Another type of time dilation experiments is the group of Ives–Stilwell experiments measuring the relativistic Doppler effect.
Atmospheric tests
Theory
The emergence of the muons is caused by the collision of cosmic rays with the upper atmosphere, after which the muons reach Earth. The probability that muons can reach the Earth depends on their half-life, which itself is modified by the relativistic corrections of two quantities: a) the mean lifetime of muons and b) the length between the upper and lower atmosphere (at Earth's surface). This allows for a direct application of length contraction upon the atmosphere at rest in inertial frame S, and time dilation upon the muons at rest in S′.
Time dilation and length contraction
Length of the atmosphere: The contraction formula is given by L = L0/γ, where L0 is the proper length of the atmosphere and L its contracted length. As the atmosphere is at rest in S, we have γ=1 and its proper length L0 is measured. As it is in motion in S′, we have γ>1 and its contracted length L′ is measured.
Decay time of muons: The time dilation formula is T = γT0, where T0 is the proper time of a clock comoving with the muon, corresponding to the mean decay time of the muon in its proper frame. As the muon is at rest in S′, we have γ=1 and its proper time T′0 is measured. As it is moving in S, we have γ>1, therefore its proper time is shorter with respect to time T. (For comparison's sake, another muon at rest on Earth can be considered, called muon-S. Therefore, its decay time in S is shorter than that of muon-S′, while it is longer in S′.)
In S, muon-S′ has a longer decay time than muon-S. Therefore, muon-S' has sufficient time to pass the proper length of the atmosphere in order to reach Earth.
In S′, muon-S has a longer decay time than muon-S′. But this is no problem, since the atmosphere is contracted with respect to its proper length. Therefore, even the faster decay time of muon-S′ suffices in order to be passed by the moving atmosphere and to be reached by Earth.
Minkowski diagram
The muon emerges at the origin (A) by collision of radiation with the upper atmosphere. The muon is at rest in S′, so its worldline is the ct′-axis. The upper atmosphere is at rest in S, so its worldline is the ct-axis. Upon the axes of x and x′, all events are present that are simultaneous with A in S and S′, respectively. The muon and Earth are meeting at D. As the Earth is at rest in S, its worldline (identical with the lower atmosphere) is drawn parallel to the ct-axis, until it intersects the axes of x′ and x.
Time: The interval between two events present on the worldline of a single clock is called proper time, an important invariant of special relativity. As the origin of the muon at A and the encounter with Earth at D is on the muon's worldline, only a clock comoving with the muon and thus resting in S′ can indicate the proper time T′0=AD. Due to its invariance, also in S it is agreed that this clock is indicating exactly that time between the events, and because it is in motion here, T′0=AD is shorter than time T indicated by clocks resting in S. This can be seen at the longer intervals T=BD=AE parallel to the ct-axis.
Length: Event B, where the worldline of Earth intersects the x-axis, corresponds in S to the position of Earth simultaneous with the emergence of the muon. C, where the Earth's worldline intersects the x′-axis, corresponds in S′ to the position of Earth simultaneous with the emergence of the muon. Length L0=AB in S is longer than length L′=AC in S′.
Experiments
If no time dilation exists, then those muons should decay in the upper regions of the atmosphere; however, as a consequence of time dilation they are present in considerable numbers also at much lower heights. The comparison of those amounts allows for the determination of the mean lifetime as well as the half-life of muons. If N0 is the number of muons measured in the upper atmosphere, N the number measured at sea level, t the travel time in the rest frame of the Earth by which the muons traverse the distance between those regions, and T0 the mean proper lifetime of the muons, then: N = N0·exp(−t/(γT0)).
Rossi–Hall experiment
In 1940 at Echo Lake (3240 m) and Denver in Colorado (1616 m), Bruno Rossi and D. B. Hall measured the relativistic decay of muons (which they thought were mesons). They measured muons in the atmosphere traveling above 0.99 c (c being the speed of light). Rossi and Hall confirmed the formulas for relativistic momentum and time dilation in a qualitative manner. Knowing the momentum and lifetime of moving muons enabled them to compute their mean proper lifetime too – they obtained ≈ 2.4 μs (modern experiments improved this result to ≈ 2.2 μs).
Frisch–Smith experiment
A much more precise experiment of this kind was conducted by David H. Frisch and Smith (1962) and documented on film. They measured approximately 563 muons per hour in six runs on Mount Washington at 1917 m above sea level. By measuring their kinetic energy, mean muon velocities between 0.995 c and 0.9954 c were determined. Another measurement was taken in Cambridge, Massachusetts at sea level. The time the muons need to travel from 1917 m to 0 m is about 6.4 μs. Assuming a mean lifetime of 2.2 μs, only 27 muons would reach this location if there were no time dilation. However, approximately 412 muons per hour arrived in Cambridge, resulting in a time dilation factor of about 8.8 ± 0.8.
Frisch and Smith showed that this is in agreement with the predictions of special relativity: The time dilation factor for muons on Mount Washington traveling at 0.995 c to 0.9954 c is approximately 10.2. Their kinetic energy and thus their velocity had diminished by the time they reached Cambridge to between 0.9881 c and 0.9897 c, due to the interaction with the atmosphere, reducing the dilation factor to 6.8. So between the start (≈ 10.2) and the target (≈ 6.8) an average time dilation factor of approximately 8.4 was determined by them, in agreement with the measured result within the margin of errors (see the above formulas and the image for computing the decay curves).
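The survival counts quoted above can be checked against the exponential decay law given earlier; the sketch below uses the rounded transit time and lifetime from the text, so its outputs only approximately reproduce the quoted counts:

```python
import math

N0 = 563          # muons per hour observed on Mount Washington
T0 = 2.2e-6       # mean proper muon lifetime, seconds
t = 6.4e-6        # transit time for 1917 m at ~0.995 c in the Earth frame, seconds

print("expected without time dilation:", round(N0 * math.exp(-t / T0)))
for gamma in (6.8, 8.4, 10.2):                 # dilation factors discussed above
    print("gamma =", gamma, "->", round(N0 * math.exp(-t / (gamma * T0))))
```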
Other experiments
Since then, many measurements of the mean lifetime of muons in the atmosphere and time dilation have been conducted in undergraduate experiments.
Accelerator and atomic clock tests
Time dilation and CPT symmetry
Much more precise measurements of particle decays have been made in particle accelerators using muons and different types of particles. Besides the confirmation of time dilation, also CPT symmetry was confirmed by comparing the lifetimes of positive and negative particles. This symmetry requires that the decay rates of particles and their antiparticles have to be the same. A violation of CPT invariance would also lead to violations of Lorentz invariance and thus special relativity.
Today, time dilation of particles is routinely confirmed in particle accelerators along with tests of relativistic energy and momentum, and its consideration is obligatory in the analysis of particle experiments at relativistic velocities.
Twin paradox and moving clocks
Bailey et al. (1977) measured the lifetime of positive and negative muons sent around a loop in the CERN Muon storage ring. This experiment confirmed both time dilation and the twin paradox, i.e. the hypothesis that clocks sent away and coming back to their initial position are slowed with respect to a resting clock.
Other measurements of the twin paradox involve gravitational time dilation as well.
In the Hafele–Keating experiment, actual cesium-beam atomic clocks were flown around the world and the expected differences were found compared to a stationary clock.
Clock hypothesis - lack of effect of acceleration
The clock hypothesis states that the extent of acceleration does not influence the value of time dilation. In most of the former experiments mentioned above, the decaying particles were in an inertial frame, i.e. unaccelerated. However, in Bailey et al. (1977) the particles were subject to a transverse acceleration of up to ~10¹⁸ g. Since the result was the same, it was shown that acceleration has no impact on time dilation. In addition, Roos et al. (1980) measured the decay of Sigma baryons, which were subject to a longitudinal acceleration between 0.5 and 5.0 × 10¹⁵ g. Again, no deviation from ordinary time dilation was measured.
See also
Tests of special relativity
References
External links
Time Dilation - An Experiment With Mu-Mesons
Muon Paradox
Bonizzoni, Ilaria; Giuliani, Giuseppe, The interpretations by experimenters of experiments on 'time dilation': 1940-1970 circa,
Physics experiments
Special relativity
1940 in science | Experimental testing of time dilation | [
"Physics"
] | 2,064 | [
"Special relativity",
"Experimental physics",
"Physics experiments",
"Theory of relativity"
] |
3,795,104 | https://en.wikipedia.org/wiki/Ballistic%20foam | Ballistic foam is a foam that sets hard. It is widely used in the manufacture and repair of aircraft to form a light but strong filler for aircraft wings. The foam is used to surround aircraft fuel tanks to reduce the chance of fires caused by the penetration of incendiary projectiles.
Ballistic foam is a type of polyurethane foam placed in the dry bays of aircraft. Ballistic foam prevents fires, adds strength to the structure, slows down the speed of shrapnel during attacks, and offers cost-effective protection.
Ballistic foam is placed in the dry bays to provide a barrier between the spark and the fuel. As bullets or shrapnel penetrate the mold line skin surrounding the outermost portions of the dry bay, the ballistic foam deprives sparks of oxygen. Thus when the projectile punctures the fuel tank, a fire is not started. Not only does the foam displace oxygen, but all gases, including explosive vapors which could magnify the destructive effects of ballistic attack. Dry bays (voids) may also contain "onboard ignition sources" like hot surfaces and electrical sparks, which benefit both from a lack of gases and the fire-retardant nature of the foam.
Ballistic foam strengthens aircraft by protecting it from fire as well as fluid while adding very little weight. The protection from fluid involves resisting damage by “moisture, hydrocarbon fuels, hydraulic fluids, and most common solvents”. The density of the foam varies with the type being used; Type 2.5 is a white to light amber foam weighing 2.5 pounds per cubic foot, while Type 1.8 is a pale blue to green foam weighing 1.8 pounds per cubic foot.
Chopped fiberglass strands embedded in the foam add to the structural integrity through physical support and shrapnel mitigation. The layer that strengthens the foam in turn strengthens the airframe. The layer of fiberglass also prevents shrapnel and bullets from rupturing the foam. The fiberglass then allows the damage caused by projectile penetration to heal more effectively.
The passive protection afforded by ballistic foam is very simple and inexpensive compared to active protection. One method of active protection is done by filling large dry bays with inert gases which will not sustain a flame. This process is very expensive and complex. Active protection only offers a “one time” chance for ballistic protection while the ballistic foam is always available.
See also
Vaporific effect
References
Foams | Ballistic foam | [
"Chemistry"
] | 499 | [
"Foams"
] |
3,797,203 | https://en.wikipedia.org/wiki/Angular%20momentum%20operator | In quantum mechanics, the angular momentum operator is one of several related operators analogous to classical angular momentum. The angular momentum operator plays a central role in the theory of atomic and molecular physics and other quantum problems involving rotational symmetry. Being an observable, its eigenfunctions represent the distinguishable physical states of a system's angular momentum, and the corresponding eigenvalues the observable experimental values. When applied to a mathematical representation of the state of a system, yields the same state multiplied by its angular momentum value if the state is an eigenstate (as per the eigenstates/eigenvalues equation). In both classical and quantum mechanical systems, angular momentum (together with linear momentum and energy) is one of the three fundamental properties of motion.
There are several angular momentum operators: total angular momentum (usually denoted J), orbital angular momentum (usually denoted L), and spin angular momentum (spin for short, usually denoted S). The term angular momentum operator can (confusingly) refer to either the total or the orbital angular momentum. Total angular momentum is always conserved, see Noether's theorem.
Overview
In quantum mechanics, angular momentum can refer to one of three different, but related things.
Orbital angular momentum
The classical definition of angular momentum is L = r × p. The quantum-mechanical counterparts of these objects share the same relationship:
L = r × p,
where r is the quantum position operator, p is the quantum momentum operator, × is the cross product, and L is the orbital angular momentum operator. L (just like p and r) is a vector operator (a vector whose components are operators), i.e. L = (Lx, Ly, Lz), where Lx, Ly, Lz are three different quantum-mechanical operators.
In the special case of a single particle with no electric charge and no spin, the orbital angular momentum operator can be written in the position basis as: L = −iħ (r × ∇),
where ∇ is the vector differential operator, del.
Spin angular momentum
There is another type of angular momentum, called spin angular momentum (more often shortened to spin), represented by the spin operator . Spin is often depicted as a particle literally spinning around an axis, but this is only a metaphor: the closest classical analog is based on wave circulation. All elementary particles have a characteristic spin (scalar bosons have zero spin). For example, electrons always have "spin 1/2" while photons always have "spin 1" (details below).
Total angular momentum
Finally, there is total angular momentum J, which combines both the spin and orbital angular momentum of a particle or system: J = L + S.
Conservation of angular momentum states that J for a closed system, or J for the whole universe, is conserved. However, L and S are not generally conserved. For example, the spin–orbit interaction allows angular momentum to transfer back and forth between L and S, with the total J remaining constant.
Commutation relations
Commutation relations between components
The orbital angular momentum operator is a vector operator, meaning it can be written in terms of its vector components L = (Lx, Ly, Lz). The components have the following commutation relations with each other:
[Lx, Ly] = iħLz,  [Ly, Lz] = iħLx,  [Lz, Lx] = iħLy,
where [X, Y] ≡ XY − YX denotes the commutator.
This can be written generally as
[Ll, Lm] = iħ Σn εlmn Ln,
where l, m, n are the component indices (1 for x, 2 for y, 3 for z), and εlmn denotes the Levi-Civita symbol.
A compact expression as one vector equation is also possible: L × L = iħL.
The commutation relations can be proved as a direct consequence of the canonical commutation relations [xl, pm] = iħδlm, where δlm is the Kronecker delta.
There is an analogous relationship in classical physics: {Ll, Lm} = εlmn Ln,
where Ln is a component of the classical angular momentum, and { , } is the Poisson bracket.
The same commutation relations apply for the other angular momentum operators (spin and total angular momentum): [Sl, Sm] = iħ Σn εlmn Sn and [Jl, Jm] = iħ Σn εlmn Jn.
These can be assumed to hold in analogy with L. Alternatively, they can be derived as discussed below.
These commutation relations mean that L has the mathematical structure of a Lie algebra, and the are its structure constants. In this case, the Lie algebra is SU(2) or SO(3) in physics notation ( or respectively in mathematics notation), i.e. Lie algebra associated with rotations in three dimensions. The same is true of J and S. The reason is discussed below. These commutation relations are relevant for measurement and uncertainty, as discussed further below.
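These relations can be checked numerically in a finite-dimensional representation. The sketch below builds the standard l = 1 (3×3) matrices and verifies [Lx, Ly] = iħLz, [L2, Lx] = 0 and L2 = ħ2 l(l+1), with ħ set to 1 for convenience:

```python
import numpy as np

hbar = 1.0  # work in units of hbar

# Matrix representation of the angular momentum operators for l = 1,
# in the basis |m = 1>, |m = 0>, |m = -1>.
Lz = hbar * np.diag([1.0, 0.0, -1.0])
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Lm = Lp.conj().T
Lx, Ly = (Lp + Lm) / 2, (Lp - Lm) / (2j)

commutator = Lx @ Ly - Ly @ Lx
print(np.allclose(commutator, 1j * hbar * Lz))              # [Lx, Ly] = i*hbar*Lz
L2 = Lx @ Lx + Ly @ Ly + Lz @ Lz
print(np.allclose(L2 @ Lx - Lx @ L2, 0))                     # [L^2, Lx] = 0
print(np.allclose(L2, hbar**2 * 1 * (1 + 1) * np.eye(3)))    # L^2 = hbar^2 l(l+1)
```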
In molecules the total angular momentum F is the sum of the rovibronic (orbital) angular momentum N, the electron spin angular momentum S, and the nuclear spin angular momentum I. For electronic singlet states the rovibronic angular momentum is denoted J rather than N. As explained by Van Vleck,
the components of the molecular rovibronic angular momentum referred to molecule-fixed axes have different commutation relations from those given above which are for the components about space-fixed axes.
Commutation relations involving vector magnitude
Like any vector, the square of a magnitude can be defined for the orbital angular momentum operator: L2 ≡ Lx2 + Ly2 + Lz2.
L2 is another quantum operator. It commutes with the components of L: [L2, Lx] = [L2, Ly] = [L2, Lz] = 0.
One way to prove that these operators commute is to start from the [Lℓ, Lm] commutation relations in the previous section:
Mathematically, L2 is a Casimir invariant of the Lie algebra SO(3) spanned by L.
As above, there is an analogous relationship in classical physics: {L2, Ln} = 0,
where Ln is a component of the classical angular momentum, and { , } is the Poisson bracket.
Returning to the quantum case, the same commutation relations apply to the other angular momentum operators (spin and total angular momentum) as well: [S2, Si] = 0 and [J2, Ji] = 0 for each component i.
Uncertainty principle
In general, in quantum mechanics, when two observable operators do not commute, they are called complementary observables. Two complementary observables cannot be measured simultaneously; instead they satisfy an uncertainty principle. The more accurately one observable is known, the less accurately the other one can be known. Just as there is an uncertainty principle relating position and momentum, there are uncertainty principles for angular momentum.
The Robertson–Schrödinger relation gives the following uncertainty principle: σLx σLy ≥ (ħ/2) |⟨Lz⟩|,
where σX is the standard deviation in the measured values of X and ⟨X⟩ denotes the expectation value of X. This inequality is also true if x, y, z are rearranged, or if L is replaced by J or S.
Therefore, two orthogonal components of angular momentum (for example Lx and Ly) are complementary and cannot be simultaneously known or measured, except in special cases such as Lx = Ly = Lz = 0.
It is, however, possible to simultaneously measure or specify L2 and any one component of L; for example, L2 and Lz. This is often useful, and the values are characterized by the azimuthal quantum number (l) and the magnetic quantum number (m). In this case the quantum state of the system is a simultaneous eigenstate of the operators L2 and Lz, but not of Lx or Ly. The eigenvalues are related to l and m, as shown in the table below.
Quantization
In quantum mechanics, angular momentum is quantized – that is, it cannot vary continuously, but only in "quantum leaps" between certain allowed values. For any system, the following restrictions on measurement results apply, where ħ is the reduced Planck constant: a measurement of L2 can only yield ħ2 l(l+1), where l is a non-negative integer; a measurement of Lz can only yield ħ ml, where ml is an integer satisfying −l ≤ ml ≤ l; a measurement of S2 yields ħ2 s(s+1) and of Sz yields ħ ms, where s and ms may also take half-integer values; and the same pattern holds for J2 and Jz with quantum numbers j and mj.
Derivation using ladder operators
A common way to derive the quantization rules above is the method of ladder operators. The ladder operators for the total angular momentum are defined as: J± ≡ Jx ± iJy.
Suppose |ψ⟩ is a simultaneous eigenstate of J2 and Jz (i.e., a state with a definite value for J2 and a definite value for Jz). Then using the commutation relations for the components of J, one can prove that each of the states J+|ψ⟩ and J−|ψ⟩ is either zero or a simultaneous eigenstate of J2 and Jz, with the same value as |ψ⟩ for J2 but with values for Jz that are increased or decreased by ħ respectively. The result is zero when the use of a ladder operator would otherwise result in a state with a value for Jz that is outside the allowable range. Using the ladder operators in this way, the possible values and quantum numbers for J2 and Jz can be found.
Since S and L have the same commutation relations as J, the same ladder analysis can be applied to them, except that for L there is a further restriction on the quantum numbers that they must be integers.
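The ladder-operator construction also lends itself to a small numerical sketch: for any j, the matrices of Jz and J± in the |j, m⟩ basis follow from the standard matrix elements ⟨j, m+1|J+|j, m⟩ = ħ√(j(j+1) − m(m+1)); the choice j = 3/2 below is arbitrary and ħ is set to 1:

```python
import numpy as np

def angular_momentum_matrices(j, hbar=1.0):
    """Matrices Jz, J+, J- in the basis |j, m>, m = j, j-1, ..., -j."""
    m = np.arange(j, -j - 1, -1)
    Jz = hbar * np.diag(m)
    # <j, m+1 | J+ | j, m> = hbar * sqrt(j(j+1) - m(m+1))
    off = hbar * np.sqrt(j * (j + 1) - m[1:] * (m[1:] + 1))
    Jp = np.diag(off, k=1).astype(complex)
    return Jz, Jp, Jp.conj().T

Jz, Jp, Jm = angular_momentum_matrices(3 / 2)
Jx, Jy = (Jp + Jm) / 2, (Jp - Jm) / (2j)
J2 = Jx @ Jx + Jy @ Jy + Jz @ Jz
print(np.allclose(J2, (3 / 2) * (3 / 2 + 1) * np.eye(4)))   # J^2 = hbar^2 j(j+1)
print(np.round(np.diag(Jz), 2))                              # m = 1.5, 0.5, -0.5, -1.5
```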
Visual interpretation
Since the angular momenta are quantum operators, they cannot be drawn as vectors like in classical mechanics. Nevertheless, it is common to depict them heuristically in this way. Depicted on the right is a set of states with quantum numbers l = 2, and ml = −2, −1, 0, 1, 2 for the five cones from bottom to top. Since |L| = √(l(l+1)) ħ = √6 ħ, the vectors are all shown with length √6 ħ. The rings represent the fact that Lz is known with certainty, but Lx and Ly are unknown; therefore every classical vector with the appropriate length and z-component is drawn, forming a cone. The expected value of the angular momentum for a given ensemble of systems in the quantum state characterized by l and ml could be somewhere on this cone while it cannot be defined for a single system (since the components of L do not commute with each other).
Quantization in macroscopic systems
The quantization rules are widely thought to be true even for macroscopic systems, like the angular momentum L of a spinning tire. However they have no observable effect so this has not been tested. For example, if the angular momentum quantum number is roughly 100000000, it makes essentially no difference whether the precise value is an integer like 100000000 or 100000001, or a non-integer like 100000000.2; the discrete steps are currently too small to measure. For most intents and purposes, the assortment of all the possible values of angular momentum is effectively continuous at macroscopic scales.
Angular momentum as the generator of rotations
The most general and fundamental definition of angular momentum is as the generator of rotations. More specifically, let R(n̂, φ) be a rotation operator, which rotates any quantum state about axis n̂ by angle φ. As φ → 0, the operator R(n̂, φ) approaches the identity operator, because a rotation of 0° maps all states to themselves. Then the angular momentum operator Jn̂ about axis n̂ is defined as:
Jn̂ ≡ iħ limφ→0 (R(n̂, φ) − 1)/φ,
where 1 is the identity operator. Also notice that R is an additive morphism: R(n̂, φ1 + φ2) = R(n̂, φ1) R(n̂, φ2); as a consequence
R(n̂, φ) = exp(−iφ Jn̂/ħ),
where exp is the matrix exponential. The existence of the generator is guaranteed by Stone's theorem on one-parameter unitary groups.
In simpler terms, the total angular momentum operator characterizes how a quantum system is changed when it is rotated. The relationship between angular momentum operators and rotation operators is the same as the relationship between Lie algebras and Lie groups in mathematics, as discussed further below.
Just as J is the generator for rotation operators, L and S are generators for modified partial rotation operators. The operator
Rspatial(n̂, φ) = exp(−iφ Ln̂/ħ)
rotates the position (in space) of all particles and fields, without rotating the internal (spin) state of any particle. Likewise, the operator
Rinternal(n̂, φ) = exp(−iφ Sn̂/ħ)
rotates the internal (spin) state of all particles, without moving any particles or fields in space. The relation J = L + S comes from:
R(n̂, φ) = Rinternal(n̂, φ) Rspatial(n̂, φ),
i.e. if the positions are rotated, and then the internal states are rotated, then altogether the complete system has been rotated.
SU(2), SO(3), and 360° rotations
Although one might expect R(n̂, 360°) = 1 (a rotation of 360° is the identity operator), this is not assumed in quantum mechanics, and it turns out it is often not true: When the total angular momentum quantum number is a half-integer (1/2, 3/2, etc.), R(n̂, 360°) = −1, and when it is an integer, R(n̂, 360°) = +1. Mathematically, the structure of rotations in the universe is not SO(3), the group of three-dimensional rotations in classical mechanics. Instead, it is SU(2), which is identical to SO(3) for small rotations, but where a 360° rotation is mathematically distinguished from a rotation of 0°. (A rotation of 720° is, however, the same as a rotation of 0°.)
On the other hand, Rspatial(n̂, 360°) = +1 in all circumstances, because a 360° rotation of a spatial configuration is the same as no rotation at all. (This is different from a 360° rotation of the internal (spin) state of the particle, which might or might not be the same as no rotation at all.) In other words, the Rspatial operators carry the structure of SO(3), while R and Rinternal carry the structure of SU(2).
From the equation Rspatial(ẑ, 360°) = +1, one picks an eigenstate of Lz with eigenvalue ħm and draws exp(−2πim) = +1,
which is to say that the orbital angular momentum quantum numbers can only be integers, not half-integers.
Connection to representation theory
Starting with a certain quantum state , consider the set of states for all possible and , i.e. the set of states that come about from rotating the starting state in every possible way. The linear span of that set is a vector space, and therefore the manner in which the rotation operators map one state onto another is a representation of the group of rotation operators.
From the relation between J and rotation operators,
(The Lie algebras of SU(2) and SO(3) are identical.)
The ladder operator derivation above is a method for classifying the representations of the Lie algebra SU(2).
Connection to commutation relations
Classical rotations do not commute with each other: For example, rotating 1° about the x-axis then 1° about the y-axis gives a slightly different overall rotation than rotating 1° about the y-axis then 1° about the x-axis. By carefully analyzing this noncommutativity, the commutation relations of the angular momentum operators can be derived.
(This same calculational procedure is one way to answer the mathematical question "What is the Lie algebra of the Lie groups SO(3) or SU(2)?")
Conservation of angular momentum
The Hamiltonian H represents the energy and dynamics of the system. In a spherically symmetric situation, the Hamiltonian is invariant under rotations: RHR† = H,
where R is a rotation operator. As a consequence, [H, R] = 0, and then [H, J] = 0 due to the relationship between J and R. By the Ehrenfest theorem, it follows that J is conserved.
To summarize, if H is rotationally-invariant (The Hamiltonian function defined on an inner product space is said to have rotational invariance if its value does not change when arbitrary rotations are applied to its coordinates.), then total angular momentum J is conserved. This is an example of Noether's theorem.
If H is just the Hamiltonian for one particle, the total angular momentum of that one particle is conserved when the particle is in a central potential (i.e., when the potential energy function depends only on the distance from the origin). Alternatively, H may be the Hamiltonian of all particles and fields in the universe, and then H is always rotationally-invariant, as the fundamental laws of physics of the universe are the same regardless of orientation. This is the basis for saying conservation of angular momentum is a general principle of physics.
For a particle without spin, J = L, so orbital angular momentum is conserved in the same circumstances. When the spin is nonzero, the spin–orbit interaction allows angular momentum to transfer from L to S or back. Therefore, L is not, on its own, conserved.
Angular momentum coupling
Often, two or more sorts of angular momentum interact with each other, so that angular momentum can transfer from one to the other. For example, in spin–orbit coupling, angular momentum can transfer between L and S, but only the total J = L + S is conserved. In another example, in an atom with two electrons, each has its own angular momentum J1 and J2, but only the total J = J1 + J2 is conserved.
In these situations, it is often useful to know the relationship between, on the one hand, states where all have definite values, and on the other hand, states where all have definite values, as the latter four are usually conserved (constants of motion). The procedure to go back and forth between these bases is to use Clebsch–Gordan coefficients.
One important result in this field is the relationship between the quantum numbers for J, J1, J2: the allowed values of the total quantum number j range in integer steps from |j1 − j2| to j1 + j2.
For an atom or molecule with J = L + S, the term symbol gives the quantum numbers associated with the operators .
Orbital angular momentum in spherical coordinates
Angular momentum operators usually occur when solving a problem with spherical symmetry in spherical coordinates. The angular momentum in the spatial representation is
In spherical coordinates the angular part of the Laplace operator can be expressed by the angular momentum. This leads to the relation
When solving to find eigenstates of the operator L2, we obtain the following: L2 Ylm = ħ2 l(l+1) Ylm and Lz Ylm = ħ m Ylm, where Ylm(θ, φ) are the spherical harmonics.
See also
Runge–Lenz vector (used to describe the shape and orientation of bodies in orbit)
Holstein–Primakoff transformation
Jordan map (Schwinger's bosonic model of angular momentum)
Pauli–Lubanski pseudovector
Angular momentum diagrams (quantum mechanics)
Spherical basis
Tensor operator
Orbital magnetization
Orbital angular momentum of free electrons
Orbital angular momentum of light
Notes
References
Further reading
Angular momentum
Quantum mechanics
Rotational symmetry | Angular momentum operator | [
"Physics",
"Mathematics"
] | 3,466 | [
"Physical quantities",
"Quantity",
"Quantum mechanics",
"Quantum operators",
"Momentum",
"Angular momentum",
"Moment (physics)",
"Symmetry",
"Rotational symmetry"
] |
3,797,882 | https://en.wikipedia.org/wiki/Tetrahedral%20molecular%20geometry | In a tetrahedral molecular geometry, a central atom is located at the center with four substituents that are located at the corners of a tetrahedron. The bond angles are arccos(−) = 109.4712206...° ≈ 109.5° when all four substituents are the same, as in methane () as well as its heavier analogues. Methane and other perfectly symmetrical tetrahedral molecules belong to point group Td, but most tetrahedral molecules have lower symmetry. Tetrahedral molecules can be chiral.
Tetrahedral bond angle
The bond angle for a symmetric tetrahedral molecule such as CH4 may be calculated using the dot product of two vectors. As shown in the diagram at left, the molecule can be inscribed in a cube with the tetravalent atom (e.g. carbon) at the cube centre which is the origin of coordinates, O. The four monovalent atoms (e.g. hydrogens) are at four corners of the cube (A, B, C, D) chosen so that no two atoms are at adjacent corners linked by only one cube edge.
If the edge length of the cube is chosen as 2 units, then the two bonds OA and OB correspond to vectors from the centre to two such corners, for example (1, 1, 1) and (1, −1, −1), and the bond angle is the angle between these two vectors. This angle may be calculated from the dot product of the two vectors, defined as a · b = |a| |b| cos θ, where |a| denotes the length of vector a. As shown in the diagram, the dot product here is –1 and the length of each vector is √3, so that cos θ = –1/3 and the tetrahedral bond angle θ = arccos(–1/3) ≈ 109.47°.
An alternative proof using trigonometry is shown in the diagram at right.
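The dot-product calculation above is easy to reproduce numerically; the corner coordinates below are one valid choice of two tetrahedron vertices of the cube (matching the example vectors given above):

```python
import math

# Vectors from the cube centre to two tetrahedron corners (cube edge length 2).
OA = (1, 1, 1)
OB = (1, -1, -1)

dot = sum(a * b for a, b in zip(OA, OB))        # = -1
length_sq = sum(a * a for a in OA)              # = 3, so |OA| = |OB| = sqrt(3)
angle = math.degrees(math.acos(dot / length_sq))
print(angle)   # 109.4712206...
```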
Examples
Main group chemistry
Aside from virtually all saturated organic compounds, most compounds of Si, Ge, and Sn are tetrahedral. Often tetrahedral molecules feature multiple bonding to the outer ligands, as in xenon tetroxide (XeO4), the perchlorate ion (ClO4−), the sulfate ion (SO42−), and the phosphate ion (PO43−). Thiazyl trifluoride (NSF3) is tetrahedral, featuring a sulfur-to-nitrogen triple bond.
Other molecules have a tetrahedral arrangement of electron pairs around a central atom; for example ammonia (NH3) with the nitrogen atom surrounded by three hydrogens and one lone pair. However the usual classification considers only the bonded atoms and not the lone pair, so that ammonia is actually considered as pyramidal. The H–N–H angles are 107°, contracted from 109.5°. This difference is attributed to the influence of the lone pair which exerts a greater repulsive influence than a bonded atom.
Transition metal chemistry
Again the geometry is widespread, particularly so for complexes where the metal has d0 or d10 configuration. Illustrative examples include tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4), nickel carbonyl (Ni(CO)4), and titanium tetrachloride (TiCl4). Many complexes with incompletely filled d-shells are often tetrahedral, e.g. the tetrahalides of iron(II), cobalt(II), and nickel(II).
Water structure
In the gas phase, a single water molecule has an oxygen atom surrounded by two hydrogens and two lone pairs, and the geometry is simply described as bent without considering the nonbonding lone pairs.
However, in liquid water or in ice, the lone pairs form hydrogen bonds with neighboring water molecules. The most common arrangement of hydrogen atoms around an oxygen is tetrahedral with two hydrogen atoms covalently bonded to oxygen and two attached by hydrogen bonds. Since the hydrogen bonds vary in length many of these water molecules are not symmetrical and form transient irregular tetrahedra between their four associated hydrogen atoms.
Bitetrahedral structures
Many compounds and complexes adopt bitetrahedral structures. In this motif, the two tetrahedra share a common edge. The inorganic polymer silicon disulfide features an infinite chain of edge-shared tetrahedra. In a completely saturated hydrocarbon system, bitetrahedral molecule C8H6 has been proposed as a candidate for the molecule with the shortest possible carbon-carbon single bond.
Exceptions and distortions
Inversion of tetrahedra occurs widely in organic and main group chemistry. The Walden inversion illustrates the stereochemical consequences of inversion at carbon. Nitrogen inversion in ammonia also entails transient formation of planar .
Inverted tetrahedral geometry
Geometrical constraints in a molecule can cause a severe distortion of idealized tetrahedral geometry. In compounds featuring "inverted" tetrahedral geometry at a carbon atom, all four groups attached to this carbon are on one side of a plane. The carbon atom lies at or near the apex of a square pyramid with the other four groups at the corners.
The simplest examples of organic molecules displaying inverted tetrahedral geometry are the smallest propellanes, such as [1.1.1]propellane; or more generally the paddlanes, and pyramidane ([3.3.3.3]fenestrane). Such molecules are typically strained, resulting in increased reactivity.
Planarization
A tetrahedron can also be distorted by increasing the angle between two of the bonds. In the extreme case, flattening results. For carbon this phenomenon can be observed in a class of compounds called the fenestranes.
Tetrahedral molecules with no central atom
A few molecules have a tetrahedral geometry with no central atom. An inorganic example is tetraphosphorus (P4), which has four phosphorus atoms at the vertices of a tetrahedron and each bonded to the other three. An organic example is tetrahedrane (C4H4), with four carbon atoms each bonded to one hydrogen and the other three carbons. In this case the theoretical C−C−C bond angle is just 60° (in practice the angle will be larger due to bent bonds), representing a large degree of strain.
See also
AXE method
Orbital hybridisation
References
External links
Examples of Tetrahedral molecules
Animated Tetrahedral Visual
Elmhurst College
Interactive molecular examples for point groups
3D Chem – Chemistry, Structures, and 3D Molecules
IUMSC – Indiana University Molecular Structure Center]
Complex ion geometry: tetrahedral
Molecular Modeling
Molecular geometry
Tetrahedra | Tetrahedral molecular geometry | [
"Physics",
"Chemistry"
] | 1,288 | [
"Molecular geometry",
"Molecules",
"Stereochemistry",
"Matter"
] |
3,798,805 | https://en.wikipedia.org/wiki/Molecular%20tagging%20velocimetry | Molecular tagging velocimetry (MTV) is a specific form of flow velocimetry, a technique for determining the velocity of currents in fluids such as air and water. In its simplest form, a single "write" laser beam is shot once through the sample space. Along its path an optically induced chemical process is initiated, resulting in the creation of a new chemical species or in changing the internal energy state of an existing one, so that the molecules struck by the laser beam can be distinguished from the rest of the fluid. Such molecules are said to be "tagged".
This line of tagged molecules is now transported by the fluid flow. To obtain velocity information, images at two instances in time are obtained and analyzed (often by correlation of the image intensities) to determine the displacement. If the flow is three-dimensional or turbulent the line will not only be displaced, it will also be deformed.
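The displacement (and hence velocity) is typically obtained by correlating the intensities of the two images. The following sketch illustrates the principle in one dimension with a synthetic Gaussian line profile; the profile, pixel size and time delay are illustrative assumptions, not parameters of any particular MTV experiment.

```python
import numpy as np

# Synthetic 1-D intensity profiles of a tagged line: the "write" image and a
# later "read" image in which the line has been displaced by the flow.
x = np.arange(256)
line0 = np.exp(-0.5 * ((x - 100) / 4.0) ** 2)
true_shift = 17
line1 = np.exp(-0.5 * ((x - 100 - true_shift) / 4.0) ** 2)

# Cross-correlate the two profiles; the peak location gives the displacement
# in pixels between the two exposures.
corr = np.correlate(line1 - line1.mean(), line0 - line0.mean(), mode="full")
lag = corr.argmax() - (len(line0) - 1)
print("displacement:", lag, "pixels")

# Velocity follows from the displacement, the imaging scale and the delay.
pixel_size = 10e-6   # metres per pixel (assumed)
dt = 20e-6           # seconds between the two images (assumed)
print("velocity:", lag * pixel_size / dt, "m/s")
```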
Description
There are three optical ways via which these tagged molecules can be visualized: fluorescence, phosphorescence and laser-induced fluorescence (LIF). In all three cases molecules relax to a lower state and their excess energy is released as photons. In fluorescence this energy decay occurs rapidly (within s to s at atmospheric pressure), thus making "direct" fluorescence impractical for tagging. In phosphorescence the decay is slower, because the transition is quantum-mechanically forbidden.
In some "writing" schemes, the tagged molecule ends up in an excited state. If the molecule relaxes through phosphorescence, lasting long enough to see line displacement, this can be used to track the written line and no additional visualisation step is needed. If during tagging the molecule did not reach a phosphorescing state, or relaxed before the molecule was "read", a second step is needed. The tagged molecule is then excited using a second laser beam, employing a wavelength such that it specifically excites the tagged molecule. The molecule will fluoresce and this fluorescence is captured by means of a camera. This manner of visualisation is called laser induced fluorescence (LIF).
Optical techniques are frequently used in modern fluid velocimetry but most are opto-mechanical in nature. Opto-mechanical techniques do not rely on photonics alone for flow measurements but require macro-size seeding. The best known and often used examples are particle image velocimetry (PIV) and laser Doppler velocimetry (LDV). Within the field of all-optical techniques we can distinguish analogous techniques but using molecular tracers. In Doppler schemes, light quasi-elastically scatters off molecules and the velocity of the molecules convey a Doppler shift to the frequency of the scattered light. In molecular tagging techniques, like in PIV, velocimetry is based on visualizing the tracer displacements.
Schemes
MTV techniques have proven to allow measurements of velocities in inhospitable environments, like jet engines, flames, high-pressure vessels, where it is difficult for techniques like Pitot, hot-wire velocimetry and PIV to work. The field of MTV is fairly young; the first demonstration of implementation emerged within the 1980s and the number of schemes developed and investigated for use in air is still fairly small. These schemes differ in the molecule that is created, whether seeding the flow with foreign molecules is necessary and what wavelength of light is being used.
In gases
The most thorough fluid mechanics studies in gas have been performed using the RELIEF scheme and the APART scheme. Both techniques can be used in ambient air without the need for additional seeding.
In RELIEF, excited oxygen is used as a tracer. The method takes advantage of quantum mechanical properties that prohibit relaxation of the molecule so that the excited oxygen has a relatively long lifetime.
APART is based on the "photosynthesis" of nitric oxide. Since NO is a stable molecule, patterns written with it can, in principle, be followed almost indefinitely.
Another well-developed and widely documented technique that yields extremely high accuracy is hydroxyl tagging velocimetry (HTV). It is based on photo-dissociation of water vapor followed by visualization of the resulting OH radical using LIF. HTV has been successfully demonstrated in many test conditions ranging from room air temperature flows to Mach 2 flows within a cavity.
In liquids
In liquids, three MTV approaches have been classified: MTV by direct phosphorescence (using a phosphorescent dye), absorbance (using a photochromic dye), and photoproduct fluorescence (typically using a caged dye).
MTV based on direct phosphorescence is the easiest technique to implement because a single laser is needed to produce a luminescent excited molecular state. The phosphorescence signal is generally weaker and harder to detect than fluorescence.
The second technique called MTV by absorbance relies on the reversible alteration of the fluorescence properties of a photochromic dye. The scheme showed good results in alcohol and oils, but not in water in which typical dyes are not soluble.
The third variant of MTV was first deployed in liquids in 1995 under the name "photoactivated nonintrusive tracking of molecular motion" (PHANTOMM). The PHANTOMM technique initially relied on a fluorescein-based caged dye excited by a blue laser. More recently, a rhodamine-based caged dye was successfully used with pulsed UV and green lasers.
See also
Hot-wire anemometry
Laser-induced fluorescence
Particle image velocimetry
References
Further reading
Laser applications
Measurement
Fluid dynamics
Transport phenomena | Molecular tagging velocimetry | [
"Physics",
"Chemistry",
"Mathematics",
"Engineering"
] | 1,170 | [
"Transport phenomena",
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Quantity",
"Measurement",
"Size",
"Piping",
"Fluid dynamics"
] |
3,799,749 | https://en.wikipedia.org/wiki/Porosimetry | Porosimetry is an analytical technique used to determine various quantifiable aspects of a material's porous structure, such as pore diameter, total pore volume, surface area, and bulk and absolute densities.
The technique involves the intrusion of a non-wetting liquid (often mercury) at high pressure into a material through the use of a porosimeter. The pore size can be determined based on the external pressure needed to force the liquid into a pore against the opposing force of the liquid's surface tension.
A force balance equation known as Washburn's equation for the above material having cylindrical pores is given as:

P_L − P_G = −4 σ cos θ / D_P

where:
P_L = pressure of liquid
P_G = pressure of gas
σ = surface tension of liquid
θ = contact angle of intrusion liquid
D_P = pore diameter

Since the technique is usually performed within a vacuum, the initial gas pressure is zero. The contact angle of mercury with most solids is between 135° and 142°, so an average of 140° can be taken without much error. The surface tension of mercury at 20 °C under vacuum is 480 mN/m. With the various substitutions, the equation becomes:

D_P ≈ 1470 kPa·µm / P_L
As pressure increases, so does the cumulative pore volume. From the cumulative pore volume, one can find the pressure and pore diameter where 50% of the total volume has been added to give the median pore diameter.
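A minimal numerical sketch of this relation is shown below, using the averaged contact angle and surface tension quoted above; the intrusion pressures are arbitrary illustrative values.

```python
import numpy as np

SURFACE_TENSION = 0.480            # N/m, mercury at 20 C under vacuum
CONTACT_ANGLE = np.radians(140.0)  # averaged mercury contact angle

def pore_diameter(pressure_pa):
    """Washburn equation with zero gas pressure: D = -4*sigma*cos(theta)/P."""
    return -4.0 * SURFACE_TENSION * np.cos(CONTACT_ANGLE) / pressure_pa

# Higher intrusion pressure probes smaller pores.
for p in [1e5, 1e6, 1e7, 1e8]:
    print(f"{p:10.0f} Pa -> pore diameter {pore_diameter(p) * 1e6:8.3f} um")
```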
See also
BET theory, measurement of specific surface
Evapoporometry
Porosity
Wood's metal, also injected for pore structure impregnation and replica
References
Measurement
Scientific techniques
Porous media | Porosimetry | [
"Physics",
"Materials_science",
"Mathematics",
"Engineering"
] | 315 | [
"Materials science stubs",
"Physical quantities",
"Porous media",
"Quantity",
"Materials science",
"Measurement",
"Size"
] |
3,799,847 | https://en.wikipedia.org/wiki/Overlayer | An overlayer is a layer of adatoms adsorbed onto a surface, for instance onto the surface of a single crystal.
On single crystals
Adsorbed species on single crystal surfaces are frequently found to exhibit long-range ordering; that is to say that the adsorbed species form a well-defined overlayer structure. Each particular structure may only exist over a limited coverage range of the adsorbate, and in some adsorbate/substrate systems a whole progression of adsorbate structure are formed as the surface coverage is gradually increased.
The periodicity of the overlayer (which often is larger than that of the substrate unit cell) can be determined by low-energy electron diffraction (LEED), because there will be additional diffraction beams associated with the overlayer.
Types
There are two types of overlayers: commensurate and incommensurate. In the former the substrate-adsorbate interaction tends to dominate over any lateral adsorbate-adsorbate interaction, while in the latter the adsorbate-adsorbate interactions are of similar magnitude to those between adsorbate and substrate.
Notation
An overlayer on a substrate can be notated in either Wood's notation or matrix notation.
Wood's notation
Wood's notation takes the form
where M is the chemical symbol of the substrate, A is the chemical symbol of the overlayer, are the Miller indices of the surface plane, R and correspond to the rotational difference between the substrate and overlayer vectors, and the vector magnitudes shown are those of the substrate ( subscripts) and of the overlayer ( subscripts). This notation can only describe commensurate overlayers however, while matrix notation can describe both.
Matrix notation
Matrix notation differs from Wood's notation in the second term, which is replaced by the matrix G that describes the overlayer primitive vectors b1, b2 in terms of the substrate primitive vectors a1, a2:

b1 = G11 a1 + G12 a2
b2 = G21 a1 + G22 a2, where

G = ( G11 G12 ; G21 G22 )

and hence matrix notation has the form M(hkl) - G - A.
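As a small worked illustration of this notation, the sketch below builds the overlayer vectors for two hypothetical superstructures on an assumed square substrate lattice; the lattice and the matrices are chosen purely for the example and do not describe a specific surface.

```python
import numpy as np

# Substrate primitive vectors (rows), assumed square lattice of unit spacing.
a1, a2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
A = np.vstack([a1, a2])

# Matrix notation: overlayer primitive vectors b_i = sum_j G_ij * a_j.
G_p2x2 = np.array([[2, 0],
                   [0, 2]])    # a simple p(2x2) overlayer
G_root2 = np.array([[1, 1],
                    [-1, 1]])  # a (sqrt2 x sqrt2)R45 overlayer

for name, G in [("p(2x2)", G_p2x2), ("(sqrt2 x sqrt2)R45", G_root2)]:
    B = G @ A                          # overlayer vectors in Cartesian form
    ratio = abs(np.linalg.det(G))      # overlayer cell area / substrate cell area
    print(name, "overlayer vectors:", B.tolist(), "area ratio:", ratio)
```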
See also
Surface reconstruction
Superstructure
LEED#Superstructures
Citations
References
Textbooks
Websites
Surface science
Condensed matter physics | Overlayer | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 425 | [
"Phases of matter",
"Materials science",
"Surface science",
"Condensed matter physics",
"Matter"
] |
32,966,950 | https://en.wikipedia.org/wiki/Creatine%20phosphate%20shuttle | The creatine phosphate shuttle is an intracellular energy shuttle which facilitates transport of high energy phosphate from muscle cell mitochondria to myofibrils. This is part of phosphocreatine metabolism. In mitochondria, Adenosine triphosphate (ATP) levels are very high as a result of glycolysis, TCA cycle, oxidative phosphorylation processes, whereas creatine phosphate levels are low. This makes conversion of creatine to phosphocreatine a highly favored reaction. Phosphocreatine is a very-high-energy compound. It then diffuses from mitochondria to myofibrils.
In myofibrils, during exercise (contraction) ADP levels are very high, which favors resynthesis of ATP. Thus, phosphocreatine breaks down to creatine, giving its inorganic phosphate for ATP formation. This is done by the enzyme creatine phosphokinase which transduces energy from the transport molecule of phosphocreatine to the useful molecule for contraction demands, ATP, an action performed by ATPase in the myofibril. The resulting creatine product acts as a signal molecule indicating myofibril contraction and diffuses in the opposite direction of phosphocreatine, back towards the mitochondrial intermembrane space where it can be rephosphorylated by creatine phosphokinase.
At the onset of exercise phosphocreatine is broken down to provide ATP for muscle contraction. ATP hydrolysis results in products of ADP and inorganic phosphate. The inorganic phosphate will be transported into the mitochondrial matrix, while the free creatine passes through the outer membrane where it will be resynthesised into PCr. The antiporter transports the ADP into the matrix, while transporting ATP out. Due to the high concentration of ATP around the mitochondrial creatine kinase, it will convert ATP into PCr, which will then move back out into the cell's cytoplasm to be converted into ATP (by cytoplasmic creatine kinase) to be used as energy for muscle contraction.
In some vertebrates, arginine phosphate plays a similar role.
History
The idea of the creatine phosphate shuttle was suggested as an explanation for altered blood glucose levels in exercising diabetic patients. The change in blood glucose levels was very similar to the alteration that would occur if a diabetic patient received a shot of insulin. It was then proposed that contraction of myofibrils during rigorous exercise freed creatine, which imitated the effects of insulin by consuming ATP and releasing ADP. With the discovery of the mitochondrial isozyme of creatine kinase, which participates in the shuttle together with the isozyme in the cytosol, Samuel Bessman further developed the creatine phosphate shuttle concept and proposed that the reversibility of the creatine kinase reaction was why exercise in diabetic patients can imitate the effects of insulin.
References
Biochemistry, 3rd edition, Mathews, van Holde & Ahern.
Biomolecules | Creatine phosphate shuttle | [
"Chemistry",
"Biology"
] | 658 | [
"Natural products",
"Organic compounds",
"Biomolecules",
"Structural biology",
"Biochemistry",
"Molecular biology"
] |
32,969,820 | https://en.wikipedia.org/wiki/Food%20physical%20chemistry | Food physical chemistry is considered to be a branch of Food chemistry concerned with the study of both physical and chemical interactions in foods in terms of physical and chemical principles applied to food systems, as well as the applications of physical/chemical techniques and instrumentation for the study of foods. This field encompasses the "physiochemical principles of the reactions and conversions that occur during the manufacture, handling, and storage of foods."
Food physical chemistry concepts are often drawn from rheology, theories of transport phenomena, physical and chemical thermodynamics, chemical bonds and interaction forces, quantum mechanics and reaction kinetics, biopolymer science, colloidal interactions, nucleation, glass transitions, and freezing, disordered/noncrystalline solids.
Techniques utilized range widely, from dynamic rheometry, optical microscopy, electron microscopy, AFM, light scattering, and X-ray/neutron diffraction, to MRI, spectroscopy (NMR, FT-NIR/IR, NIRS, ESR and EPR, CD/VCD, fluorescence, FCS), HPLC, GC-MS, and other related analytical techniques.
Understanding food processes and the properties of foods requires a knowledge of physical chemistry and how it applies to specific foods and food processes. Food physical chemistry is essential for improving the quality of foods, their stability, and food product development. Because food science is a multi-disciplinary field, food physical chemistry is being developed through interactions with other areas of food chemistry and food science, such as food analytical chemistry, food process engineering/food processing, food and bioprocess technology, food extrusion, food quality control, food packaging, food biotechnology, and food microbiology.
Topics in Food physical chemistry
The following are examples of topics in food physical chemistry that are of interest to both the food industry and food science:
Water in foods
Local structure in liquid water
Micro-crystallization in ice cream emulsions
Dispersion and surface-adsorption processes in foods
Water and protein activities
Food hydration and shelf-life
Hydrophobic interactions in foods
Hydrogen bonding and ionic interactions in foods
Disulfide bond breaking and formation in foods
Food dispersions
Structure-functionality in foods
Food micro- and nano- structure
Food gels and gelling mechanisms
Cross-linking in foods
Starch gelatinization and retrogradation
Physico-chemical modification of carbohydrates
Physico-chemical interactions in food formulations
Freezing effects on foods and freeze concentration of liquids
Glass transition in wheat gluten and wheat doughs
Drying of foods and crops
Rheology of wheat doughs, cheese and meat
Rheology of extrusion processes
Food enzyme kinetics
Immobilized enzymes and cells
Microencapsulation
Carbohydrates structure and interactions with water and proteins
Maillard browning reactions
Lipids structures and interactions with water and food proteins
Food proteins structure, hydration and functionality in foods
Food protein denaturation
Food enzymes and reaction mechanisms
Vitamin interactions and preservation during food processing
Interaction of salts and minerals with food proteins and water
Color determinations and food grade coloring
Flavors and sensorial perception of foods
Properties of food additives
Related fields
Food chemistry
Food physics and Rheology
Biophysical chemistry
Physical chemistry
Spectroscopy-applied
Intermolecular forces
Nanotechnology and nanostructures
Chemical physics
Molecular dynamics
Surface chemistry and Van der Waals forces
Chemical reactions and Reaction chemistry
Quantum chemistry
Quantum genetics
Molecular models of DNA and Molecular modelling of proteins and viruses
Bioorganic chemistry
Polymer chemistry
Biochemistry and Biological chemistry
Enzymology
Protein–protein interactions
Biomembranes
Complex system biology
Integrative biology
Mathematical biophysics
Systems biology
Genomics, Proteomics, Interactomics, Structural bioinformatics and Cheminformatics
Food technology, Food engineering, Food safety and Food biotechnology
Agricultural biotechnology
Immobilized cells and enzymes
Microencapsulation of food additives and vitamins, etc.
Chemical engineering
Plant biology and Crop sciences
Animal sciences
Techniques gallery: High-Field NMR, CARS (Raman spectroscopy), Fluorescence confocal microscopy and Hyperspectral imaging
See also
Food chemistry
Food Chemistry (journal)
NMR
ESR
FTIR
NIR
FCS
HPLC
GC-MS
Biophysical chemistry
Protein–protein interactions
Food processing
Food engineering
Food rheology
Food extrusion
Food packaging
Food biotechnology
Food safety
Food science
Food technology
International Academy of Quantum Molecular Science
References
Journals
Journal of Agricultural and Food Chemistry
Journal of the American Oil Chemists' Society
Biophysical Chemistry journal
Magnetic Resonance in Chemistry
Starke/ Starch Journal
Journal of Dairy Science (JDS)
Chemical Physics Letters
Zeitschrift für Physikalische Chemie (1887)
Biopolymers
Journal of Food Science (IFT, USA)
International Journal of Food Science & Technology
Macromolecular Chemistry and Physics (1947)
Journal of the Science of Food and Agriculture
Polymer Preprints (ACS)
Integrative Biology Journal of the Royal Society of Chemistry
Organic & Biomolecular Chemistry (An RSC Journal)
Nature
Nature Precedings
Journal of Biological Chemistry
Proceedings of the National Academy of Sciences of the United States of America
External links
ACS Division of Agricultural and Food Chemistry (AGFD)
American Chemical Society (ACS)
Institute of Food Science and Technology (IFST), (formerly IFT)
Dairy Science and Food Technology
Physical Chemistry. (Keith J. Laidler, John H. Meiser and Bryan C. Sanctuary)
The World of Physical Chemistry (Keith J. Laidler, 1993)
Physical Chemistry from Ostwald to Pauling (John W. Servos, 1996)
100 Years of Physical Chemistry (Royal Society of Chemistry, 2004)
The Cambridge History of Science: The modern physical and mathematical sciences (Mary Jo Nye, 2003)
Food chemistry
Physical chemistry | Food physical chemistry | [
"Physics",
"Chemistry",
"Biology"
] | 1,178 | [
"Applied and interdisciplinary physics",
"nan",
"Biochemistry",
"Physical chemistry",
"Food chemistry"
] |
32,974,036 | https://en.wikipedia.org/wiki/Fowkes%20hypothesis | The Fowkes hypothesis (after F. M. Fowkes) is a first order approximation for surface energy. It states the surface energy is the sum of each component's forces:
γ = γ^d + γ^p + γ^i + ...
where γ^d is the dispersion component, γ^p is the polar component, γ^i is the dipole component, and so on.
The Fowkes hypothesis goes further, making the approximation that the interfacial energy between an apolar liquid and an apolar solid, where only dispersive interactions act across the interface, can be estimated using the geometric mean of the dispersive contributions from each surface, i.e.

γ_SL = γ_S + γ_L − 2(γ_S^d × γ_L^d)^1/2
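A small numerical sketch of this geometric-mean estimate is given below; the surface energies and dispersion components used are round illustrative numbers, not measured values for any particular solid-liquid pair.

```python
import math

def fowkes_interfacial_energy(gamma_s, gamma_l, gamma_s_d, gamma_l_d):
    """Fowkes estimate: gamma_SL = gamma_S + gamma_L - 2*sqrt(gamma_S^d * gamma_L^d).

    Only dispersive interactions are assumed to act across the interface, so
    only the dispersion components enter the geometric-mean term.
    """
    return gamma_s + gamma_l - 2.0 * math.sqrt(gamma_s_d * gamma_l_d)

# Illustrative values in mJ/m^2 (assumed, for demonstration only).
print(fowkes_interfacial_energy(gamma_s=40.0, gamma_l=50.0,
                                gamma_s_d=35.0, gamma_l_d=30.0))
```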
References
A. V. Pocius, Adhesion and adhesives: an introduction, 2002 ()
Related articles
Sessile drop technique: The Fowkes Theory
Surface science | Fowkes hypothesis | [
"Physics",
"Chemistry",
"Materials_science"
] | 188 | [
"Condensed matter physics",
"Surface science"
] |
21,368,075 | https://en.wikipedia.org/wiki/N-group%20%28category%20theory%29 | In mathematics, an n-group, or n-dimensional higher group, is a special kind of n-category that generalises the concept of group to higher-dimensional algebra. Here, n may be any natural number or infinity. The thesis of Alexander Grothendieck's student Hoàng Xuân Sính was an in-depth study of 2-groups under the moniker 'gr-category'.
The general definition of n-group is a matter of ongoing research. However, it is expected that every topological space will have a homotopy n-group at every point, which will encapsulate the Postnikov tower of the space up to the homotopy group π_n, or the entire Postnikov tower for n = ∞.
Examples
Eilenberg-Maclane spaces
One of the principal examples of higher groups come from the homotopy types of Eilenberg–MacLane spaces since they are the fundamental building blocks for constructing higher groups, and homotopy types in general. For instance, every group can be turned into an Eilenberg-Maclane space through a simplicial construction, and it behaves functorially. This construction gives an equivalence between groups and . Note that some authors write as , and for an abelian group , is written as .
2-groups
The definition and many properties of 2-groups are already known. 2-groups can be described using crossed modules and their classifying spaces. Essentially, these are given by a quadruple where are groups with abelian,
a group homomorphism, and a cohomology class. These groups can be encoded as homotopy with and , with the action coming from the action of on higher homotopy groups, and coming from the Postnikov tower since there is a fibration
coming from a map . Note that this idea can be used to construct other higher groups with group data having trivial middle groups , where the fibration sequence is now
coming from a map whose homotopy class is an element of .
3-groups
Another interesting and accessible class of examples which requires homotopy theoretic methods, not accessible to strict groupoids, comes from looking at homotopy of groups. Essentially, these are given by a triple of groups with only the first group being non-abelian, and some additional homotopy theoretic data from the Postnikov tower. If we take this as a homotopy , the existence of universal covers gives us a homotopy type which fits into a fibration sequence
giving a homotopy type with trivial on which acts on. These can be understood explicitly using the previous model of , shifted up by degree (called delooping). Explicitly, fits into a Postnikov tower with associated Serre fibration
giving where the -bundle comes from a map , giving a cohomology class in . Then, can be reconstructed using a homotopy quotient .
n-groups
The previous construction gives the general idea of how to consider higher groups in general. For an with groups with the latter bunch being abelian, we can consider the associated homotopy type and first consider the universal cover . Then, this is a space with trivial , making it easier to construct the rest of the homotopy type using the Postnikov tower. Then, the homotopy quotient gives a reconstruction of , showing the data of an is a higher group, or simple space, with trivial such that a group acts on it homotopy theoretically. This observation is reflected in the fact that homotopy types are not realized by simplicial groups, but simplicial groupoids since the groupoid structure models the homotopy quotient .
Going through the construction of a 4-group is instructive because it gives the general idea for how to construct the groups in general. For simplicity, let's assume is trivial, so the non-trivial groups are . This gives a Postnikov tower
where the first non-trivial map is a fibration with fiber . Again, this is classified by a cohomology class in . Now, to construct from , there is an associated fibration
given by a homotopy class . In principle this cohomology group should be computable using the previous fibration with the Serre spectral sequence with the correct coefficients, namely . Doing this recursively, say for a , would require several spectral sequence computations, at worst many spectral sequence computations for an .
n-groups from sheaf cohomology
For a complex manifold with universal cover , and a sheaf of abelian groups on , for every there exists canonical homomorphisms
giving a technique for relating constructed from a complex manifold and sheaf cohomology on . This is particularly applicable for complex tori.
See also
∞-groupoid
Crossed module
Homotopy hypothesis
Abelian 2-group
References
Hoàng Xuân Sính, Gr-catégories, PhD thesis, (1973)
Algebraic models for homotopy n-types
- musings by Tim Porter discussing the pitfalls of modelling homotopy n-types with n-cubes
Cohomology of higher groups
Cohomology of higher groups over a site
Note this is (slightly) distinct from the previous section, because it is about taking cohomology over a space with values in a higher group , giving higher cohomology groups . If we are considering as a homotopy type and assuming the homotopy hypothesis, then these are the same cohomology groups.
Group theory
Higher category theory
Homotopy theory | N-group (category theory) | [
"Mathematics"
] | 1,125 | [
"Mathematical structures",
"Group theory",
"Higher category theory",
"Fields of abstract algebra",
"Category theory"
] |
21,370,273 | https://en.wikipedia.org/wiki/Nanoindenter | A nanoindenter is the main component for indentation hardness tests used in nanoindentation. Since the mid-1970s nanoindentation has become the primary method for measuring and testing very small volumes of mechanical properties. Nanoindentation, also called depth sensing indentation or instrumented indentation, gained popularity with the development of machines that could record small load and displacement with high accuracy and precision. The load displacement data can be used to determine modulus of elasticity, hardness, yield strength, fracture toughness, scratch hardness and wear properties.
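As a hedged illustration of how load-displacement data are turned into a hardness number, the sketch below uses the ideal Berkovich projected-area relation A ≈ 24.5 h_c^2; the load and contact depth are assumed example values, and a real instrument would use a calibrated tip area function.

```python
def berkovich_hardness(max_load_mN, contact_depth_nm):
    """Indentation hardness H = P_max / A_c for an ideal Berkovich tip.

    Uses the ideal-geometry projected contact area A_c ~ 24.5 * h_c**2;
    real tips require an experimentally calibrated area function.
    """
    area_nm2 = 24.5 * contact_depth_nm ** 2   # projected contact area, nm^2
    area_m2 = area_nm2 * 1e-18                # nm^2 -> m^2
    load_N = max_load_mN * 1e-3               # mN -> N
    return load_N / area_m2                   # hardness in Pa

# Assumed example: 10 mN peak load at 300 nm contact depth -> about 4.5 GPa.
print(berkovich_hardness(10.0, 300.0) / 1e9, "GPa")
```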
Types
There are many types of nanoindenters in current use differing mainly on their tip geometry. Among the numerous available geometries are three and four sided pyramids, wedges, cones, cylinders, filaments, and spheres. Several geometries have become a well established common standard due to their extended use and well known properties; such as Berkovich, cube corner, Vickers, and Knoop nanoindenters. To meet the high precision required, nanoindenters must be made following the definitions of ISO 14577-2, and be inspected and measured with equipment and standards traceable to the National Institute of Standards and Technology (NIST). The tip end of the indenter can be made sharp, flat, or rounded to a cylindrical or spherical shape. The material for most nanoindenters is diamond and sapphire, although other hard materials can be used such as quartz, silicon, tungsten, steel, tungsten carbide and almost any other hard metal or ceramic material. Diamond is the most commonly used material for nanoindentation due to its properties of hardness, thermal conductivity, and chemical inertness. In some cases electrically conductive diamond may be needed for special applications and is also available.
Holders
Nanoindenters are mounted on holders which could be the standard design from a manufacturer of nanoindenting equipment, or a custom design. The holder material can be steel, titanium, machinable ceramic, other metals or rigid materials. In most cases the indenter is attached to the holder using a rigid metal bonding process. The metal forms a molecular bond with both materials, be it diamond-steel, diamond-ceramic, etc.
Angular measurements
Nanoindenter dimensions are very small, some less than , and made with precise angular geometry in order to achieve the highly accurate readings required for nanoindentation. Instruments that measure angles on larger objects, such as protractors or comparators, are neither practical nor precise enough to measure nanoindenter angles, even with the help of microscopes. For precise measurements a laser goniometer is used to measure diamond nanoindenter angles. Nanoindenter faces are highly polished and reflective, which is the basis for the laser goniometer measurements. The laser goniometer can measure to within a thousandth of a degree of specified or requested angles.
References
Hardness tests
Nanotechnology | Nanoindenter | [
"Materials_science",
"Engineering"
] | 585 | [
"Hardness tests",
"Materials testing",
"Materials science",
"Nanotechnology"
] |
2,032,752 | https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20code | Reed–Muller codes are error-correcting codes that are used in wireless communications applications, particularly in deep-space communication. Moreover, the proposed 5G standard relies on the closely related polar codes for error correction in the control channel. Due to their favorable theoretical and mathematical properties, Reed–Muller codes have also been extensively studied in theoretical computer science.
Reed–Muller codes generalize the Reed–Solomon codes and the Walsh–Hadamard code. Reed–Muller codes are linear block codes that are locally testable, locally decodable, and list decodable. These properties make them particularly useful in the design of probabilistically checkable proofs.
Traditional Reed–Muller codes are binary codes, which means that messages and codewords are binary strings. When r and m are integers with 0 ≤ r ≤ m, the Reed–Muller code with parameters r and m is denoted as RM(r, m). When asked to encode a message consisting of k bits, where k = C(m, 0) + C(m, 1) + ... + C(m, r) holds, the RM(r, m) code produces a codeword consisting of 2^m bits.
Reed–Muller codes are named after David E. Muller, who discovered the codes in 1954, and Irving S. Reed, who proposed the first efficient decoding algorithm.
Description using low-degree polynomials
Reed–Muller codes can be described in several different (but ultimately equivalent) ways. The description that is based on low-degree polynomials is quite elegant and particularly suited for their application as locally testable codes and locally decodable codes.
Encoder
A block code can have one or more encoding functions that map messages to codewords. The Reed–Muller code RM(r, m) has message length k = C(m, 0) + C(m, 1) + ... + C(m, r) and block length n = 2^m. One way to define an encoding for this code is based on the evaluation of multilinear polynomials with m variables and total degree at most r. Every such multilinear polynomial over the finite field with two elements can be written as follows:

p(x_1, ..., x_m) = Σ_{S ⊆ {1, ..., m}, |S| ≤ r} c_S · Π_{i ∈ S} x_i

The x_i are the variables of the polynomial, and the values c_S ∈ {0, 1} are the coefficients of the polynomial. Note that there are exactly k coefficients. With this in mind, an input message consists of k values which are used as these coefficients. In this way, each message gives rise to a unique polynomial p of degree at most r in m variables. To construct the codeword, the encoder evaluates the polynomial p at all points x ∈ {0, 1}^m, where the polynomial is taken with multiplication and addition mod 2. That is, the encoding function is defined via

Enc(message) = ( p(x) )_{x ∈ {0, 1}^m}

The fact that the codeword suffices to uniquely reconstruct p follows from Lagrange interpolation, which states that the coefficients of a polynomial are uniquely determined when sufficiently many evaluation points are given. Since the sum of two such polynomials again has degree at most r and evaluation is compatible with addition mod 2, the encoding of a sum of messages is the sum of their codewords; hence the encoding function is a linear map, and the Reed–Muller code is a linear code.
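The following is a compact sketch of this evaluation-based encoding. The ordering of the monomials and of the evaluation points is an arbitrary convention chosen here (the worked example below may use a different one), so only the code parameters and the evaluation principle are taken from the text.

```python
from itertools import combinations, product

def rm_encode(message_bits, r, m):
    """Encode an RM(r, m) message as the evaluations of a multilinear polynomial
    of total degree <= r over GF(2) at all 2**m points.

    message_bits[i] is the coefficient of the i-th monomial; monomials are
    ordered by degree, then lexicographically (an arbitrary convention here).
    """
    monomials = [s for d in range(r + 1) for s in combinations(range(m), d)]
    assert len(message_bits) == len(monomials)
    codeword = []
    for point in product([0, 1], repeat=m):
        value = 0
        for coeff, mono in zip(message_bits, monomials):
            term = coeff
            for var in mono:
                term &= point[var]   # multiplication in GF(2)
            value ^= term            # addition in GF(2)
        codeword.append(value)
    return codeword

# RM(2, 4): k = 1 + 4 + 6 = 11 message bits, block length 2**4 = 16.
msg = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 1]   # the bits of "1 1010 010101"
print(rm_encode(msg, r=2, m=4))
```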
Example
For the code RM(2, 4), the parameters are as follows: r = 2, m = 4, k = 11, n = 16.
Let be the encoding function just defined. To encode the string x = 1 1010 010101 of length 11, the encoder first constructs the polynomial in 4 variables:Then it evaluates this polynomial at all 16 evaluation points (0101 means :
As a result, C(1 1010 010101) = 1101 1110 0001 0010 holds.
Decoder
As was already mentioned, Lagrange interpolation can be used to efficiently retrieve the message from a codeword. However, a decoder needs to work even if the codeword has been corrupted in a few positions, that is, when the received word is different from any codeword. In this case, a local decoding procedure can help.
The algorithm from Reed is based on the following property:
You start from the received code word, that is, a sequence of evaluations of an unknown polynomial of degree at most r that you want to find. The sequence may contain any number of errors up to 2^(m − r − 1) − 1 included.
If you consider a monomial M of the highest degree r and sum the evaluations of the polynomial over all points where the variables appearing in M take all combinations of the values 0 and 1, while all the other variables have value 0, you get the value of the coefficient (0 or 1) of M (there are 2^r such points). This is due to the fact that every proper divisor of M appears an even number of times in the sum, and only M itself appears exactly once.
To take into account the possibility of errors, you can also remark that the other variables can be fixed to any value, not only 0. So instead of doing the sum only once, with the variables not in M set to 0, you do it 2^(m − r) times, once for each fixed valuation of the other variables. If there is no error, all those sums should be equal to the value of the coefficient searched for.
The algorithm here consists in taking the majority of the answers as the value searched for. If the minority is larger than the maximum number of errors possible, the decoding step fails, indicating that there are too many errors in the input code.
Once a coefficient is computed, if it is 1, update the code by removing the contribution of that monomial from the input code, and continue to the next monomial, in decreasing order of degree.
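The majority-vote step for the highest-degree coefficients can be sketched directly, as below. The bit ordering of the evaluation points is an assumed convention matching the encoder sketch above, and only the top-degree stage is shown; a full Reed decoder would subtract the decoded monomials and repeat the procedure for each lower degree, as described above.

```python
from itertools import combinations, product

def recover_top_coefficients(received, r, m):
    """Majority-logic recovery of the degree-r coefficients of an RM(r, m) word.

    For each degree-r monomial (a set s of variable indices) and each fixing of
    the other variables, the mod-2 sum of the received bits over the 2**r points
    where the variables in s run over all values equals the coefficient of the
    monomial when the word is error-free; a majority vote over all fixings
    tolerates a limited number of bit errors.
    """
    def index(point):   # evaluation point -> codeword position (first bit = MSB)
        return sum(b << (m - 1 - i) for i, b in enumerate(point))

    coeffs = {}
    for s in combinations(range(m), r):
        others = [i for i in range(m) if i not in s]
        votes = []
        for fixing in product([0, 1], repeat=len(others)):
            total = 0
            for inner in product([0, 1], repeat=r):
                point = [0] * m
                for var, bit in zip(s, inner):
                    point[var] = bit
                for var, bit in zip(others, fixing):
                    point[var] = bit
                total ^= received[index(point)]
            votes.append(total)
        # Majority vote; a full decoder would also flag a failure when the
        # minority exceeds the number of correctable errors.
        coeffs[s] = int(sum(votes) * 2 > len(votes))
    return coeffs
```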
Example
Let's consider the previous example and start from the code. With minimum distance d = 4, we can fix at most 1 error in the code.
Consider the input code as 1101 1110 0001 0110 (this is the previous code with one error).
We know the degree of the polynomial is at most 2, so we start by searching for monomials of degree 2.
we start by looking for evaluation points with . In the code this is: 1101 1110 0001 0110. The first sum is 1 (odd number of 1).
we look for evaluation points with . In the code this is: 1101 1110 0001 0110. The second sum is 1.
we look for evaluation points with . In the code this is: 1101 1110 0001 0110. The third sum is 1.
we look for evaluation points with . In the code this is: 1101 1110 0001 0110. The third sum is 0 (even number of 1).
The four sums don't agree (so we know there is an error), but the minority is not larger than the maximum number of errors allowed (1), so we take the majority and the coefficient of this monomial is 1.
We remove from the code before continue : code : 1101 1110 0001 0110, valuation of is 0001000100010001, the new code is 1100 1111 0000 0111
1100 1111 0000 0111. Sum is 0
1100 1111 0000 0111. Sum is 0
1100 1111 0000 0111. Sum is 1
1100 1111 0000 0111. Sum is 0
One error detected, coefficient is 0, no change to current code.
1100 1111 0000 0111. Sum is 0
1100 1111 0000 0111. Sum is 0
1100 1111 0000 0111. Sum is 1
1100 1111 0000 0111. Sum is 0
One error detected, coefficient is 0, no change to current code.
1100 1111 0000 0111. Sum is 1
1100 1111 0000 0111. Sum is 1
1100 1111 0000 0111. Sum is 1
1100 1111 0000 0111. Sum is 0
One error detected, coefficient is 1, valuation of is 0000 0011 0000 0011, current code is now 1100 1100 0000 0100.
1100 1100 0000 0100. Sum is 1
1100 1100 0000 0100. Sum is 1
1100 1100 0000 0100. Sum is 1
1100 1100 0000 0100. Sum is 0
One error detected, coefficient is 1, valuation of is 0000 0000 0011 0011, current code is now 1100 1100 0011 0111.
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
One error detected, coefficient is 0, no change to current code.
We now know all coefficients of degree 2 for the polynomial, so we can start on the monomials of degree 1. Notice that for each lower degree, there are twice as many sums, and each sum is half as large.
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 0
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 0
One error detected, coefficient is 0, no change to current code.
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 1
1100 1100 0011 0111. Sum is 0
One error detected, coefficient is 1, valuation of is 0011 0011 0011 0011, current code is now 1111 1111 0000 0100.
Then we'll find 0 and 1 for the two remaining degree-1 monomials, and the current code becomes 1111 1111 1111 1011.
For degree 0, we have 16 sums of only 1 bit each. The minority is still of size 1, and we find 1 for the constant coefficient, recovering the corresponding initial word 1 1010 010101
Generalization to larger alphabets via low-degree polynomials
Using low-degree polynomials over a finite field of size , it is possible to extend the definition of Reed–Muller codes to alphabets of size . Let and be positive integers, where should be thought of as larger than . To encode a message of width , the message is again interpreted as an -variate polynomial of total degree at most and with coefficient from . Such a polynomial indeed has coefficients. The Reed–Muller encoding of is the list of all evaluations of over all . Thus the block length is .
Description using a generator matrix
A generator matrix for a Reed–Muller code of length can be constructed as follows. Let us write the set of all m-dimensional binary vectors as:
We define in N-dimensional space the indicator vectors
on subsets by:
together with, also in , the binary operation
referred to as the wedge product (not to be confused with the wedge product defined in exterior algebra). Here, and are points in (N-dimensional binary vectors), and the operation is the usual multiplication in the field .
is an m-dimensional vector space over the field , so it is possible to write
We define in N-dimensional space the following vectors with length and
where 1 ≤ i ≤ m and the Hi are hyperplanes in (with dimension ):
The generator matrix
The Reed–Muller code of order r and length N = 2^m is the code generated by v0 and the wedge products of up to r of the vi, (where by convention a wedge product of fewer than one vector is the identity for the operation). In other words, we can build a generator matrix for the code, using vectors and their wedge product permutations up to r at a time, as the rows of the generator matrix, where .
Example 1
Let m = 3. Then N = 8, and
and
The RM(1,3) code is generated by the set
or more explicitly by the rows of the matrix:
Example 2
The RM(2,3) code is generated by the set:
or more explicitly by the rows of the matrix:
Properties
The following properties hold:
The set of all possible wedge products of up to m of the vi form a basis for .
The RM(r, m) code has rank
C(m, 0) + C(m, 1) + ... + C(m, r).
RM(r, m) = RM(r, m − 1) | RM(r − 1, m − 1), where '|' denotes the bar product of two codes.
RM(r, m) has minimum Hamming weight 2^(m − r).
Proof
Decoding RM codes
RM(r, m) codes can be decoded using majority logic decoding. The basic idea of majority logic decoding is
to build several checksums for each received code word element. Since each of the different checksums must all
have the same value (i.e. the value of the message word element weight), we can use a majority logic decoding to decipher
the value of the message word element. Once each order of the polynomial is decoded, the received word is modified
accordingly by removing the corresponding codewords weighted by the decoded message contributions, up to the present stage.
So for an rth order RM code, we have to decode iteratively r + 1 times before we arrive at the final
received code-word. Also, the values of the message bits are calculated through this scheme; finally we can calculate
the codeword by multiplying the message word (just decoded) with the generator matrix.
One clue if the decoding succeeded, is to have an all-zero modified received word, at the end of (r + 1)-stage decoding
through the majority logic decoding. This technique was proposed by Irving S. Reed, and is more general when applied
to other finite geometry codes.
Description using a recursive construction
A Reed–Muller code RM(r, m) exists for any integers m ≥ 0 and −1 ≤ r ≤ m. RM(m, m) is defined as the universe (2^m, 2^m, 1) code. RM(−1, m) is defined as the trivial code (2^m, 0, ∞). The remaining RM codes may be constructed from these elementary codes using the length-doubling construction

RM(r, m) = {(u, u + v) : u ∈ RM(r, m − 1), v ∈ RM(r − 1, m − 1)}.

From this construction, RM(r, m) is a binary linear block code (n, k, d) with length n = 2^m, dimension k = C(m, 0) + C(m, 1) + ... + C(m, r), and minimum distance d = 2^(m − r) for r ≥ 0. The dual code to RM(r, m) is RM(m − r − 1, m). This shows that repetition and SPC codes are duals, biorthogonal and extended Hamming codes are duals, and that codes with m = 2r + 1 are self-dual.
Special cases of Reed–Muller codes
Table of all RM(r,m) codes for m≤5
All codes with 0 ≤ r ≤ m ≤ 5 and alphabet size 2 are displayed here, annotated with the standard [n, k, d] coding theory notation for block codes. The code RM(r, m) is a [2^m, k, 2^(m − r)]-code, that is, it is a linear code over a binary alphabet, has block length n = 2^m, message length (or dimension) k = C(m, 0) + C(m, 1) + ... + C(m, r), and minimum distance d = 2^(m − r).
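These [n, k, d] parameters follow directly from n = 2^m, k = C(m, 0) + ... + C(m, r) and d = 2^(m − r), as noted in the recursive construction above; the short sketch below regenerates the table entries from those formulas.

```python
from math import comb

def rm_parameters(r, m):
    """Return (n, k, d) for the binary Reed-Muller code RM(r, m)."""
    n = 2 ** m
    k = sum(comb(m, i) for i in range(r + 1))
    d = 2 ** (m - r)
    return n, k, d

for m in range(1, 6):
    for r in range(m + 1):
        print(f"RM({r},{m}) = {rm_parameters(r, m)}")
```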
Properties of RM(r,m) codes for r≤1 or r≥m-1
RM(0, m) codes are repetition codes of length N = 2^m, rate 1/N and minimum distance N.
RM(1, m) codes are parity check codes of length N = 2^m, rate (m + 1)/N and minimum distance N/2.
RM(m − 1, m) codes are single parity check codes of length N = 2^m, rate (N − 1)/N and minimum distance 2.
RM(m − 2, m) codes are the family of extended Hamming codes of length N = 2^m with minimum distance 4.
References
Further reading
Chapter 4.
Chapter 4.5.
External links
MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes section 6.4
GPL Matlab-implementation of RM-codes
Source GPL Matlab-implementation of RM-codes
Error detection and correction
Coding theory
Theoretical computer science | Reed–Muller code | [
"Mathematics",
"Engineering"
] | 3,062 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Theoretical computer science",
"Applied mathematics",
"Error detection and correction"
] |
2,032,832 | https://en.wikipedia.org/wiki/Antiprotonic%20helium | Antiprotonic helium is a three-body atom composed of an antiproton and an electron orbiting around a helium nucleus. It is thus made partly of matter, and partly of antimatter. The atom is electrically neutral, since an electron and an antiproton each have a charge of −1 e, whereas a helium nucleus has a charge of +2 e. It has the longest lifetime of any experimentally produced matter–antimatter bound state.
Production
These exotic atoms can be produced by simply mixing antiprotons with ordinary helium gas; the antiproton spontaneously displaces one of the two electrons contained in a normal helium atom in a chemical reaction, and then begins to orbit the helium nucleus in the electron's place. This will happen in the case of approximately 3% of the antiprotons introduced to the helium gas. The antiproton's orbit, which has a large principal quantum number and angular momentum quantum number of around 38, lies far away from the surface of the helium nucleus. The antiproton can thus orbit the nucleus for tens of microseconds, before finally falling to its surface and annihilating. This contrasts with other types of exotic atoms of the form X, which typically decay within picoseconds.
Laser spectroscopy
Antiprotonic helium atoms are under study by the ASACUSA experiment at CERN. In these experiments, the atoms are first produced by stopping a beam of antiprotons in helium gas. The atoms are then irradiated by powerful laser beams, which cause the antiprotons in them to resonate and jump from one atomic orbit to another.
As in spectroscopy of other bound states, Doppler broadening and other effects present challenges to precision. Researchers use a variety of techniques to obtain accurate results. One way to exceed Doppler-limited precision is two-photon spectroscopy. The ASACUSA Collaboration has studied antiprotonic helium-3 and helium-4 atoms with the antiproton occupying a high Rydberg state with large principal and orbital quantum numbers of around 38, using 2-photon spectroscopy.
Counterpropagating Ti:Sapphire lasers with pulses of duration 30−100 ns excited nonlinear 2-photon transitions in the deep UV, including spectral lines of wavelengths, 139.8, 193.0 and 197.0 nm. These lines correspond to transitions between states of the form . Such transitions are improbable. However, the probability is increased by a factor of when the laser frequencies sum to within 10 GHz of an intermediate state . States were selected pairwise such that Auger emission to He and rapid annihilation produced a detectable Čerenkov signal. The reduced Doppler shift resulted in narrower spectral lines accurate to between 2.3 and 5 ppb. Comparison of the results with three-body quantum electrodynamics calculations made possible a determination of the antiproton to electron mass ratio of .
In 2022 ASACUSA found unexpected narrowing of antiprotonic helium spectral lines.
Measurement of the mass ratio between the antiproton and electron
By measuring the particular frequency of the laser light needed to resonate the atom, the ASACUSA experiment determined the mass of the antiproton, which they measured at times more massive than an electron. This is the same as the mass of a "regular" proton, within the level of certainty of the experiment. This is a confirmation of a fundamental symmetry of nature called CPT (short for charge, parity, and time reversal). This law says that all physical laws would remain unchanged under simultaneous reversal of the charge axis, parity of the space axes, and the orientation of the time axis. One important prediction of this theory is that particles and their antiparticles should have exactly the same mass.
Comparison of antiproton and proton masses and charges
By comparing the above results on laser spectroscopy of antiprotonic helium with separate high-precision measurements of the antiproton's cyclotron frequency carried out by the ATRAP and BASE collaborations at CERN, the mass and electric charge of the antiproton can be precisely compared with the proton values. The most recent such measurements show that the antiproton's mass (and the absolute value of the charge) is the same as the proton's to a precision of 0.5 parts in a billion.
Antiprotonic helium ions
An antiprotonic helium ion is a two-body object composed of a helium nucleus and orbiting antiproton. It has an electric charge of +1 e. Cold ions with lifetimes of up to 100 ns were produced by the ASACUSA experiment in 2005.
Pionic helium
In 2020 ASACUSA in collaboration with the Paul Scherrer Institut (PSI) reported the experimental verification of long lived pionic helium by spectroscopic measurements, the first time on an exotic atom containing a meson. Its existence had been predicted in 1964 by George Condo at University of Tennessee to explain some anomalies from bubble chamber tracks but no definite proof of its existence had ever been obtained. In the experiment negatively charged pions from a ring cyclotron were magnetically focused into a tank filled with superfluid helium so that they would expel an electron from the atom and take its place. Later, to confirm the production, laser light was fired at various frequencies until they found a specific one at 1631 nm where the pion would resonate undergoing a quantum jump from its orbit into an inner one and eventually into the nucleus which would break down into a proton, a neutron and a deuteron. The experiment proved highly technical to perform and took 8 years, including the design and construction of the experiment.
See also
Positronium
Protonium
References
Further reading
ASACUSA improves measurement of antiproton mass
Antimatter
Atomic physics
Exotic atoms
Helium | Antiprotonic helium | [
"Physics",
"Chemistry"
] | 1,175 | [
"Antimatter",
"Exotic atoms",
"Quantum mechanics",
"Subatomic particles",
" molecular",
"Atomic physics",
"Nuclear physics",
"Atomic",
"Atoms",
"Matter",
" and optical physics"
] |
2,033,005 | https://en.wikipedia.org/wiki/Quantitative%20feedback%20theory | In control theory, quantitative feedback theory (QFT), developed by Isaac Horowitz (Horowitz, 1963; Horowitz and Sidi, 1972), is a frequency domain technique utilising the Nichols chart (NC) in order to achieve a desired robust design over a specified region of plant uncertainty. Desired time-domain responses are translated into frequency domain tolerances, which lead to bounds (or constraints) on the loop transmission function. The design process is highly transparent, allowing a designer to see what trade-offs are necessary to achieve a desired performance level.
Plant templates
Usually any system can be represented by its transfer function (a Laplace-domain description in the continuous-time case), once a model of the system has been obtained.
As a result of experimental measurement, the values of the coefficients in the transfer function carry a range of uncertainty. Therefore, in QFT every parameter of this function is assigned an interval of possible values, and the system is represented by a family of plants rather than by a single standalone expression.
A frequency analysis is performed for a finite number of representative frequencies and a set of templates are obtained in the NC diagram which encloses the behaviour of the open loop system at each frequency.
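As an illustration of how such templates are built, the sketch below evaluates the frequency response of a simple first-order plant family with an uncertain gain and pole over a grid of parameter values and collects, for each representative frequency, the (phase, magnitude in dB) points that would be plotted on the Nichols chart. The plant structure, parameter ranges and frequencies are assumptions made purely for the example.

```python
import numpy as np

# Uncertain first-order plant P(s) = k / (s + a) with interval parameters.
k_values = np.linspace(1.0, 5.0, 9)
a_values = np.linspace(0.5, 2.0, 7)
frequencies = [0.5, 1.0, 2.0, 5.0]   # rad/s, a representative set

def nichols_point(k, a, w):
    """Return (phase in degrees, magnitude in dB) of P(jw)."""
    p = k / (1j * w + a)
    return np.degrees(np.angle(p)), 20.0 * np.log10(abs(p))

for w in frequencies:
    template = [nichols_point(k, a, w) for k in k_values for a in a_values]
    phases, mags = zip(*template)
    print(f"w = {w:4.1f} rad/s: phase {min(phases):7.1f} .. {max(phases):7.1f} deg, "
          f"magnitude {min(mags):6.1f} .. {max(mags):6.1f} dB")
```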
Frequency bounds
Usually system performance is described in terms of robustness to instability (phase and gain margins), rejection of input and output disturbances and noise, and reference tracking. In the QFT design methodology these requirements on the system are represented as frequency constraints, conditions that the compensated system loop (controller and plant) must not violate.
With these considerations and the selection of the same set of frequencies used for the templates, the frequency constraints for the behaviour of the system loop are computed and represented on the Nichols Chart (NC) as curves.
To meet the problem requirements, a set of rules on the open-loop transfer function for the nominal plant may be derived. This means the nominal loop is not allowed to have its frequency value below the constraint for the same frequency, and at high frequencies the loop should not cross the Ultra High Frequency Boundary (UHFB), which has an oval shape in the center of the NC.
Loop shaping
The controller design is undertaken on the NC considering the frequency constraints and the nominal loop of the system. At this point, the designer begins to introduce controller functions () and tune their parameters, a process called Loop Shaping, until the best possible controller is reached without violation of the frequency constraints.
The experience of the designer is an important factor in finding a satisfactory controller, one that not only complies with the frequency constraints but is also realizable, of reasonable complexity, and of good quality.
For this stage there currently exist different CAD (Computer Aided Design) packages to make the controller tuning easier.
Prefilter design
Finally, the QFT design may be completed with a pre-filter () design when it is required. In the case of tracking conditions a shaping on the Bode diagram may be used. Post design analysis is then performed to ensure the system response is satisfactory according with the problem requirements.
The QFT design methodology was originally developed for Single-Input Single-Output (SISO) and Linear Time Invariant Systems (LTI), with the design process being as described above. However, it has since been extended to weakly nonlinear systems, time varying systems, distributed parameter systems, multi-input multi-output (MIMO) systems (Horowitz, 1991), discrete systems (these using the Z-Transform as transfer function), and non minimum phase systems. The development of CAD tools has been an important, more recent development, which simplifies and automates much of the design procedure (Borghesani et al., 1994).
Traditionally, the pre-filter is designed by using the Bode-diagram magnitude information. The use of both phase and magnitude information for the design of pre-filter was first discussed in (Boje, 2003) for SISO systems. The method was then developed to MIMO problems in (Alavi et al., 2007).
See also
Control engineering
Feedback
Process control
Robotic unicycle
H infinity
Optimal control
Servomechanism
Nonlinear control
Adaptive control
Robust control
Intelligent control
State space (controls)
References
Horowitz, I., 1963, Synthesis of Feedback Systems, Academic Press, New York, 1963.
Horowitz, I., and Sidi, M., 1972, "Synthesis of feedback systems with large plant ignorance for prescribed time-domain tolerances," International Journal of Control, 16(2), pp. 287–309.
Horowitz, I., 1991, "Survey of Quantitative Feedback Theory (QFT)," International Journal of Control, 53(2), pp. 255–291.
Borghesani, C., Chait, Y., and Yaniv, O., 1994, Quantitative Feedback Theory Toolbox Users Guide, The Math Works Inc., Natick, MA.
Zolotas, A. (2005, June 8). QFT - Quantitative Feedback Theory. Connexions.
Boje, E. Pre-filter design for tracking error specifications in QFT, International Journal of Robust and Nonlinear Control, Vol. 13, pp. 637–642, 2003.
Alavi, SMM., Khaki-Sedigh, A., Labibi, B. and Hayes, M.J., Improved multivariable quantitative feedback design for tracking error specifications, IET Control Theory & Applications, Vol. 1, No. 4, pp. 1046–1053, 2007.
External links
Mario Garcia-Sanz, Quantitative Robust Control Engineering:Theory and Applications
Control theory | Quantitative feedback theory | [
"Mathematics"
] | 1,142 | [
"Applied mathematics",
"Control theory",
"Dynamical systems"
] |
2,033,875 | https://en.wikipedia.org/wiki/Artificial%20leather | Artificial leather, also called synthetic leather, is a material intended to substitute for leather in upholstery, clothing, footwear, and other uses where a leather-like finish is desired but the actual material is cost prohibitive or unsuitable due to practical or ethical concerns. Artificial leather is known under many names, including leatherette, imitation leather, faux leather, vegan leather, PU leather (polyurethane), and pleather.
Uses
Artificial leathers are often used in clothing fabrics, furniture upholstery, water craft upholstery, and automotive interiors.
One of its primary advantages, especially in cars, is that it requires little maintenance in comparison to leather, and does not crack or fade easily, though the surface of some artificial leathers may rub and wear off with time. Artificial leather made from polyurethane is washable, but varieties made from polyvinyl chloride (PVC) are not easily cleaned.
Fashion
Depending on the construction, the artificial leather may be porous and breathable, or may be impermeable and waterproof.
Porous artificial leather with a non-woven microfibre backing is a popular choice for clothing, and is comfortable to wear.
Manufacture
Many different methods for the manufacture of imitation leathers have been developed.
A current method is to use an embossed release paper known as casting paper as a form for the surface finish, often mimicking the texture of top-grain leather. This embossed release paper holds the final texture in negative. For the manufacture, the release paper is coated with several layers of plastic e.g. PVC or polyurethane, possibly including a surface finish, a colour layer, a foam layer, an adhesive, a fabric layer, a reverse finish. Depending on the specific process, these layers may be wet or partially cured at the time of integration. The artificial leather is cured, then the release paper is removed and possibly reused.
A fermentation method of making collagen, the main chemical in real leather, is under development.
Materials to make vegan leather can be derived from fungi, yeasts and bacterial strains using biotechnological processes.
Historical methods
One of the earliest artificial leathers was Presstoff. Invented in 19th century Germany, it was made of specially layered and treated paper pulp. It gained its widest use in Germany during the Second World War in place of leather, which under wartime conditions was rationed. Presstoff could be used in almost every application normally filled by leather, excepting items like footwear that were repeatedly subjected to flex wear or moisture. Under these conditions, Presstoff tends to delaminate and lose cohesion.
Another early example was Rexine, a leathercloth fabric produced in the United Kingdom by Rexine Ltd of Hyde, near Manchester. It was made of cloth surfaced with a mixture of nitrocellulose, camphor oil, alcohol, and pigment, embossed to look like leather. It was used as a bookbinding material and upholstery covering, especially for the interiors of motor vehicles and the interiors of railway carriages produced by British manufacturers beginning in the 1920s, its cost being around a quarter that of leather.
Poromerics are made from a plastic coating (usually a polyurethane) on a fibrous base layer (typically a polyester). The term poromeric was coined by DuPont as a derivative of the terms porous and polymeric. The first poromeric material was DuPont's Corfam, introduced in 1963 at the Chicago Shoe Show. Corfam was the centerpiece of the DuPont pavilion at the 1964 New York World's Fair in New York City. After spending millions of dollars marketing the product to shoe manufacturers, DuPont withdrew Corfam from the market in 1971 and sold the rights to a company in Poland.
Leatherette is also made by covering a fabric base with a plastic. The fabric can be made of natural or synthetic fiber which is then covered with a soft polyvinyl chloride (PVC) layer. Leatherette is used in bookbinding and was common on the casings of 20th century cameras.
Cork leather is a natural-fiber alternative made from the bark of cork oak trees that has been compressed, similar to Presstoff.
Environmental effect
The production of the PVC used in many artificial leathers requires a plasticizer, typically a phthalate, to make it flexible and soft. PVC requires petroleum and large amounts of energy, making it reliant on fossil fuels. The production process also generates dioxins, carcinogenic byproducts that are toxic to humans and animals and that remain in the environment long after PVC is manufactured. When PVC ends up in a landfill it does not decompose like genuine leather and can release dangerous chemicals into the water and soil.
Polyurethane is currently more popular for use than PVC.
Some artificial leathers require plastic in their production, while others, called plant-based leathers, require only plant-based materials; the inclusion of artificial materials in artificial leather notably raises sustainability issues. However, some reports state that the manufacture of artificial leather is still more sustainable than that of real leather, with the Environmental Profit & Loss, a sustainability report developed in 2018 by Kering, stating that the impact of vegan-leather production can be up to a third lower than that of real leather.
Some artificial leathers may have traces of restricted substances, like paint ingredient butanone oxime, according to a study by the FILK Freiberg Institute.
Brand names
Alcantara
Clarino: manufactured by Kuraray Co., Ltd. of Japan.
Fabrikoid: A DuPont brand, cotton cloth coated with nitrocellulose
Kirza: A Russian form developed in the 1930s consisting of cotton fabric, latex, and rosin
MB-Tex: Used in many Mercedes-Benz base trims
Naugahyde: An American brand introduced by Uniroyal
Piñatex: Made from pineapple leaves
Rexine: A British brand
Skai: Made by the German company Konrad Hornschuch AG, its name has become a genericized trademark in Germany and surrounding countries
See also
Bicast leather – a form of genuine leather coated with a plastic finish
Bonded leather – a material made by blending scrap leather fibers with a plastic binder
Microfiber – a material made with synthetic fibers thinner than natural silk; can be used for making synthetic suedes, like Ultrasuede
Mycelium-based materials – Mycelium, the fungal equivalent of roots in plants, has been identified as an ecologically friendly substitute for a range of materials across different industries.
References
Further reading
Faux Real: Genuine Leather and 200 Years of Inspired Fakes, by Robert Kanigel. Joseph Henry Press, 2007.
External links
Sustainability
Nonwoven fabrics
Synthetic materials
Leather
Textiles
Fashion design | Artificial leather | [
"Chemistry",
"Engineering"
] | 1,418 | [
"Fashion design",
"Synthetic materials",
"Artificial leather",
"Chemical synthesis",
"Design"
] |
2,034,059 | https://en.wikipedia.org/wiki/Fritsch%E2%80%93Buttenberg%E2%80%93Wiechell%20rearrangement | The Fritsch–Buttenberg–Wiechell rearrangement, named for Paul Ernst Moritz Fritsch (1859–1913), Wilhelm Paul Buttenberg, and Heinrich G. Wiechell, is a chemical reaction whereby a 1,1-diaryl-2-bromo-alkene rearranges to a 1,2-diaryl-alkyne by reaction with a strong base such as an alkoxide.
This rearrangement is also possible with alkyl substituents.
Reaction mechanism
The strong base deprotonates the vinylic hydrogen, which after alpha elimination forms a vinyl carbene. A 1,2-aryl migration forms the 1,2-diaryl-alkyne product. The mechanism of the FBW rearrangement was a subject of on-surface studies where the vinyl radical was visualised with sub-atomic resolution.
Scope
One study explored this reaction for the synthesis of novel polyynes:
See also
Corey–Fuchs reaction
References
Darses, B.; Milet, A.; Philouze, C.; Greene, A. E.; Poisson, J.-F. o., Ynol Ethers from Dichloroenol Ethers: Mechanistic Elucidation Through 35Cl Labeling. Organic Letters 2008, 10 (20), 4445-4447.
Rearrangement reactions
Name reactions | Fritsch–Buttenberg–Wiechell rearrangement | [
"Chemistry"
] | 295 | [
"Name reactions",
"Rearrangement reactions",
"Organic reactions"
] |
2,034,949 | https://en.wikipedia.org/wiki/Novaya%20Zemlya%20effect | The Novaya Zemlya effect is a polar mirage caused by high refraction of sunlight between atmospheric thermal layers. The effect gives the impression that the sun is rising earlier than it actually should, and depending on the meteorological situation, the effect will present the Sun as a line or a square — sometimes referred to as the rectangular sun — made up of flattened hourglass shapes.
The mirage requires rays of sunlight to travel through an inversion layer for hundreds of kilometres, and depends on the inversion layer's temperature gradient. The sunlight must bend along the Earth's curvature enough to allow an apparent elevation rise of 5° for the solar disk to become visible.
The first person to record the phenomenon was Gerrit de Veer, a member of Willem Barentsz's ill-fated third expedition into the north polar region in 1596–1597. Trapped by the ice, the party was forced to stay for the winter in a makeshift lodge on the archipelago of Novaya Zemlya and endure the polar night.
On 24 January 1597, De Veer and another crew member claimed to have seen the Sun appear above the horizon, approximately two weeks prior to its calculated return. They were met with disbelief by the rest of the crew — who accused De Veer of having used the old Julian calendar instead of the Gregorian calendar introduced several years earlier — but on 27 January, the Sun was seen by all "in his full roundnesse". For centuries the account was the source of skepticism, until in the 20th century the phenomenon was finally proven to be genuine.
Apart from the image of the Sun, the effect can also elevate the image of other objects above the horizon, such as coastlines which are normally invisible due to their distance. After studying the Saga of Erik the Red, Waldemar Lehn concluded that the effect may have aided the Vikings in their discovery of Iceland and Greenland, which are not visible from the mainland under normal atmospheric conditions.
See also
Looming and similar refraction phenomena
Mirage of astronomical objects
Fata Morgana (mirage)
References
Atmospheric optical phenomena
Novaya Zemlya | Novaya Zemlya effect | [
"Physics"
] | 425 | [
"Optical phenomena",
"Physical phenomena",
"Atmospheric optical phenomena",
"Earth phenomena"
] |
2,035,017 | https://en.wikipedia.org/wiki/Polybenzimidazole | Polybenzimidazole (PBI, short for poly[2,2’-(m-phenylen)-5,5’-bisbenzimidazole]) fiber is a synthetic fiber with a very high decomposition temperature. It does not exhibit a melting point, it has exceptional thermal and chemical stability, and it does not readily ignite. It was first discovered by American polymer chemist Carl Shipp Marvel in the pursuit of new materials with superior stability, retention of stiffness, and toughness at elevated temperature. Due to its high stability, polybenzimidazole is used to fabricate high-performance protective apparel such as firefighter's gear, astronaut space suits, high temperature protective gloves, welders’ apparel and aircraft wall fabrics. Polybenzimidazole has been applied as a membrane in fuel cells.
History
Discovery
Brinker and Robinson first reported aliphatic polybenzimidazoles in 1949. However the discovery of aromatic polybenzimidazole, which shows excellent physical and chemical properties, was generally credited to Carl Shipp Marvel in the 1950s. The Material Laboratory of Wright Patterson Air Force Base approached Marvel. They were looking for materials suitable for drogue parachutes which could tolerate short-time mechanical stress. However, the thermal resistance of all known filaments at that time was inadequate. The original search concentrated on aromatic condensation polymers but the amide linkage proved to be weak link for the aim of maximal thermal stability of the polymer, whereas Marvel's research focused on condensation polymers with aromatic and heteroaromatic repeating units. This progressively led to the discovery of polybenzimidazole.
Development
Its development history can be summarized in the following list:
In 1961, polybenzimidazole was developed by H. Vogel and C.S. Marvel with anticipation that the polymers would have exceptional thermal and oxidative stability.
Subsequently, in 1963, NASA and the Air Force Materials Lab sponsored considerable work with polybenzimidazole for aerospace and defense applications as a non-flammable and thermally stable textile fiber.
In 1969, the United States Air Force selected polybenzimidazole (PBI) for its superior thermal protective performance after a 1967 fire aboard the Apollo 1 spacecraft killed three astronauts.
In the early 1970s USAF laboratories experimented with polybenzimidazole fibers for protective clothing to reduce aircrew deaths from fires.
In the 1970s, NASA continued to use PBI as part of the astronauts’ clothing on Apollo, Skylab and numerous space shuttle flights.
When Skylab fell to Earth, the part that survived the re-entry was coated in PBI and thus did not burn up.
1980s – PBI was introduced to the fire service, and through Project Fires an outer shell for turnout gear was developed. PBI Gold fabric was born, consisting of 40% PBI/60% para-aramid. Previous to this, combinations of Nomex, leather, and Kevlar materials were used in the US.
1983 – A unique production plant goes on-line and PBI fibers become commercially available.
1990s – Short-cut PBI fibers are introduced for use in automotive braking systems. PBI staple fiber enters the aircraft market for seat fire blocking layers.
1992 – Lightweight PBI fabrics are developed for flame-resistant workwear for electric utility and petrochemical applications.
1994 – PBI Gold fabric is engineered in black and was specified by the FDNY.
2001 – After the terrorist attacks on September 11, many of the 343 fire fighters killed were only identifiable by their TenCate PBI Turnout Gear.
2003 – PBI Matrix was commercialized and introduced as the next-generation PBI for firefighter turnout gear.
Properties
General physical properties
PBI is usually a yellow to brown solid that remains infusible at high temperatures. The solubility of PBI is controversial: while most linear PBIs dissolve partly or entirely in strong protonic acids (for instance, sulfuric acid or methanesulfonic acid), contradictory observations of solubility have been recorded for weaker acids like formic acid and for non-acidic media, such as aprotic amide-type solvents and dimethyl sulfoxide. For example, one type of PBI prepared in phosphoric acid was found by Iwakura et al. to be partially soluble in formic acid, but completely soluble in dimethyl sulfoxide and dimethylacetamide, whereas Varma and Veena reported the same polymer type to dissolve completely in formic acid, yet only partially in dimethyl sulfoxide or dimethylacetamide.
Thermal stability
Imidazole derivatives are known to be stable compounds. Many of them resist even the most drastic treatments with acids and bases and are not easily oxidized. The high decomposition temperature and high stability at over 400 °C suggest that a polymer with benzimidazole as the repeating unit may also show high heat stability.
Polybenzimidazole and its aromatic derivatives can withstand very high temperatures without softening or degrading. The polymer synthesized from isophthalic acid and 3,3'-diaminobenzidine does not melt on exposure to elevated temperatures and loses only 30% of its weight after exposure to high temperature for several hours.
Flame resistance
A property of a material needed to be considered before putting it into application is flammability, which demonstrates how easily one material can ignite and combust under the realistic operating conditions. This may affect its application in varied areas, such as in construction, plant design, and interior decoration. A number of quantitative assessments of flammability exist, such as limiting oxygen index (LOI), i.e., the minimum oxygen concentration at which a given sample can be induced to burn in a candle like configuration. These permit estimation of a 'ranking' comparison of flammability. Data shows that PBI is a highly flame resistant material compared to common polymers.
Moisture regain
PBI's moisture regain is useful in protective clothing; this makes the clothing comfortable to wear, in sharp contrast to other synthetic polymers. The moisture regain ability of PBI (13%) compares favorably with cotton (16%).
Synthesis
The preparation of PBI(IV) can be achieved by condensation reaction of diphenyl isophthalate (I) and 3,3’,4,4’-tetraaminodiphenyl (II) (Figure 1). The spontaneous cyclization of the intermediately formed amino-amide (III) to PBI (IV) provided a much more stable amide linkage. This synthetic method was first used in the lab and later developed into a two step process. In a typical synthesis, starting materials were heated at for 1.5 h to form the PBI prepolymer and later the prepolymer was heated at for another 1 h to form the final commercial-grade product.
The reason for the second step is due to the formation of the by-product phenol and water in the first step creating voluminous foam, which leads to the volume expansion of several times of the original. This is the issue that must be considered by the industrial manufacturers. This foam can be reduced by conducting the polycondensation at a high temperature around and under the pressure of . The foam can also be controlled by adding high boiling point liquids such as diphenylether or cetane to the polycondensation. The boiling point can make the liquid stay in the first stage of polycondensation but evaporate in the second stage of solid condensation. The disadvantage of this method is that there are still some liquids which remain in the PBI and it is hard to remove them completely.
While changing the tetramine and acid, a number of different aromatic poly benzimidazoles have been synthesized. The following table (Table 1) lists some of the combination possibilities that have been synthesized in the literature. Some of the combinations have actually been translated into fibers on a small scale. However, the only significant progress that has been made to date is with PBI.
The most common form of PBI used in industry is the fiber form. The fiber process following polymerization is shown in the figure. The polymer is made into solution using dimethylacetamide as solvent. The solution is filtered and converted into fiber using a high temperature dry-spinning process. The fiber is subsequently drawn at elevated temperature to get desired mechanical properties. It is then sulfonated and made into staple using conventional crimping and cutting techniques.
Applications
Before the 1980s, the major applications of PBI were fire-blocking, thermal protective apparel, and reverse osmosis membranes. Its applications became various by the 1990s when molded PBI parts and microporous membranes were developed.
Protective apparel
The thermal stability, flame resistance, and moisture regain of PBI and its conventional textile processing character enable it to be processed on conventional staple fiber textile equipment. These characteristics lead to one of the most important applications of PBI: protective apparel. PBI filaments were fabricated into protective clothing like firefighters' gear and astronauts' suits. PBI filaments are dry spun from dimethylacetamide containing lithium chloride. After washing and drying the resulting yarn is golden brown.
PBI fiber is an excellent candidate for applications in severe environments due to its combination of thermal, chemical and textile properties. Flame and thermal resistance are the critical properties of protective apparel. This kind of apparel applications includes firefighter's protective apparel, astronaut's suits, aluminized crash rescue gear, industrial worker's apparel, and suits for racing car drivers.
PBI-blended fabrics have been the preferred choice of active fire departments across the Americas and around the world for over 30 years, from New York, San Diego, San Francisco, Philadelphia, Seattle, and Nashville to São Paulo, Berlin, Hong Kong, and many more. PBI starts to degrade at a higher temperature than Nomex/Kevlar blends, thus offering superior break-open and thermal protection.
PBI membranes
PBI has been used as a membrane for various separation purposes. Traditionally, PBI was used as a semi-permeable membrane for electrodialysis, reverse osmosis or ultrafiltration. PBI has also been used for gas separations due to its close chain packing, a consequence of its rigid structure and strong hydrogen bonding. PBI membranes are dense, with very low gas permeability. To be proton conductive, PBI usually is doped with acid. The higher the level of acid doping, the more conductive PBI is, but the mechanical strength of PBI decreases at the same time. The optimum doping level is thus a compromise between these two effects. Multiple methods such as ionic cross-linking, covalent cross-linking and composite membranes have been researched to optimize the doping level at which PBI has improved conductivity without sacrificing mechanical strength. Sulfonated, partially fluorinated arylene main-chain polymers exhibit good thermal and long-term stability, high proton conductivity, less acid swelling, and reasonable mechanical strength.
Molded PBI resin
Molded PBI resin has a compressive strength of 58 ksi, a tensile strength of 23 ksi, a flexural strength of 32 ksi, a ductile compressive failure mode, and a density of 1.3 g/cm3. The PBI resin comprises a recurring structural unit represented by the following figure.
According to the Composite Materials Research Group at the University of Wyoming, PBI resin parts maintain significant tensile properties and compressive strength at elevated temperatures. PBI resin parts are also potential materials for the chemical process and oil recovery industries, which demand thermal stability and chemical resistance. In these areas, PBI resin has been successfully applied in demanding sealing applications, for instance valve seats, stem seals, hydraulic seals and backup rings. In the aerospace industry, PBI resin offers high strength and short-term high-temperature resistance. In the industrial sector, PBI resin's high dimensional stability as well as retention of electrical properties at high temperature make it useful as a thermal and electrical insulator.
Fuel cell electrolyte
Polybenzimidazole can be complexed with strong acids because of its basic character. Complexation by phosphoric acid makes it a proton-conductive material, which opens up possible applications in high-temperature fuel cells. Cell performance tests show good stability over 200 h runs at elevated temperature. However, gel PBI membranes made in the PPA process show good stability for greater than 17,000 hours. Application in direct methanol fuel cells may also be of interest because of better water/methanol selectivity compared to existing membranes. Wainright, Wang et al. reported that PBI doped with phosphoric acid was utilized as a high-temperature fuel cell electrolyte. The doped PBI high-temperature fuel cell electrolyte has several advantages. The elevated temperature increases the kinetic rates of the fuel cell reactions. It also reduces the problem of catalyst poisoning by adsorbed carbon monoxide and minimizes problems due to electrode flooding. PBI/H3PO4 is conductive even at low relative humidity and at the same time allows less crossover of methanol. These properties make PBI/H3PO4 superior to some traditional polymer electrolytes such as Nafion. Additionally, PBI/H3PO4 maintains good mechanical strength and toughness: its modulus is three orders of magnitude greater than that of Nafion. This means that thinner films can be used, thus reducing ohmic loss.
Asbestos replacement
Previously, only asbestos could perform well in high-temperature gloves for uses such as foundries, aluminium extrusion, and metal treatment. However, trials have shown that PBI adequately functions as an asbestos replacement. Moreover, a safety garment manufacturer reported that gloves containing PBI outlasted asbestos by two to nine times at an effective cost. Gloves containing PBI fibers are softer and more supple than those made of asbestos, offering the worker greater mobility and comfort, even if the fabric becomes charred. Further, PBI fiber avoids the chronic toxicity problems associated with asbestos and can be processed on standard textile and glove fabricating equipment. PBI can also be a good substitute for asbestos in several areas of glass manufacturing.
Flue gas filtration
PBI's chemical, thermal and physical properties demonstrate that it can be a promising material as a flue gas filter fabric for coal-fired boilers. Few fabrics can survive in the acidic and high-temperature environment encountered in coal-fired boiler flue gas. The filter bags also must be able to bear the abrasion from the periodic cleaning to remove accumulated dust. PBI fabric has a good abrasion resistance property. The acid and abrasion resistance and thermal stability properties make PBI a competitor for this application.
References
Appendix of properties
PBI fiber characteristics
The chemical formula of poly[2,2’-(m-phenylen)-5,5’ bibenzimidazol] (PBI) is believed to be:
([NH-C=CH-C=CH-CH=C-N=C-]2-[C=CH-C=CH-CH=CH-])n OR (C20N4H12)n of Molar mass 308.336 ± 0.018 g/mol.
Chemical resistance
It is dyeable to dark shades with basic dyes following caustic pretreatment and resistant to most chemicals.
Electrical properties
Features low electrical conductivity and low static electricity buildup.
Mechanical properties
Features abrasion resistance.
Physical properties
Additional features: will not ignite or smolder (burn slowly without flame), mildew- and age-resistant, resistant to sparks and welding spatter.
Thermal properties
Other features: high continuous-use temperature; does not melt but degrades under pyrolysis; retains fiber integrity and suppleness at elevated temperatures.
External links
Polybenzimidazole (PBI) - Material Information
Summary of Polybenzimidazole
PBI Polymer Performance Study
Flame retardant fabrics
Organic polymers
Synthetic fibers | Polybenzimidazole | [
"Chemistry"
] | 3,398 | [
"Organic compounds",
"Synthetic materials",
"Organic polymers",
"Synthetic fibers"
] |
2,035,274 | https://en.wikipedia.org/wiki/Clark%E2%80%93Wilson%20model | The Clark–Wilson integrity model provides a foundation for specifying and analyzing an integrity policy for a computing system.
The model is primarily concerned with formalizing the notion of information integrity. Information integrity is maintained by preventing corruption of data items in a system due to either error or malicious intent. An integrity policy describes how the data items in the system should be kept valid from one state of the system to the next and specifies the capabilities of various principals in the system. The model uses security labels to grant access to objects via transformation procedures and a restricted interface model.
Origin
The model was described in a 1987 paper (A Comparison of Commercial and Military Computer Security Policies) by David D. Clark and David R. Wilson. The paper develops the model as a way to formalize the notion of information integrity, especially as compared to the requirements for multilevel security (MLS) systems described in the Orange Book. Clark and Wilson argue that the existing integrity models such as Biba (read-up/write-down) were better suited to enforcing data integrity rather than information confidentiality. The Biba models are more clearly useful in, for example, banking classification systems to prevent the untrusted modification of information and the tainting of information at higher classification levels. In contrast, Clark–Wilson is more clearly applicable to business and industry processes in which the integrity of the information content is paramount at any level of classification (although the authors stress that all three models are obviously of use to both government and industry organizations).
Basic principles
According to Stewart and Chapple's CISSP Study Guide Sixth Edition, the Clark–Wilson model uses a multi-faceted approach in order to enforce data integrity. Instead of defining a formal state machine, the model defines each data item and allows modifications through only a small set of programs. The model uses a three-part relationship of subject/program/object (where program is interchangeable with transaction) known as a triple or an access control triple. Within this relationship, subjects do not have direct access to objects. Objects can only be accessed through programs.
The model's enforcement and certification rules define data items and processes that provide the basis for an integrity policy. The core of the model is based on the notion of a transaction.
A well-formed transaction is a series of operations that transition a system from one consistent state to another consistent state.
In this model, the integrity policy addresses the integrity of the transactions.
The principle of separation of duty requires that the certifier of a transaction and the implementer be different entities.
The model contains a number of basic constructs that represent both data items and processes that operate on those data items. The key data type in the Clark–Wilson model is a Constrained Data Item (CDI). An Integrity Verification Procedure (IVP) ensures that all CDIs in the system are valid at a certain state. Transactions that enforce the integrity policy are represented by Transformation Procedures (TPs). A TP takes as input a CDI or Unconstrained Data Item (UDI) and produces a CDI. A TP must transition the system from one valid state to another valid state. UDIs represent system input (such as that provided by a user or adversary). A TP must guarantee (via certification) that it transforms all possible values of a UDI to a “safe” CDI.
Rules
At the heart of the model is the notion of a relationship between an authenticated principal (i.e., user) and a set of programs (i.e., TPs) that operate on a set of data items (e.g., UDIs and CDIs). The components of such a relation, taken together, are referred to as a Clark–Wilson triple. The model must also ensure that different entities are responsible for manipulating the relationships between principals, transactions, and data items. As a short example, a user capable of certifying or creating a relation should not be able to execute the programs specified in that relation.
The model consists of two sets of rules: Certification Rules (C) and Enforcement Rules (E). The nine rules ensure the external and internal integrity of the data items. To paraphrase these:
C1—When an IVP is executed, it must ensure the CDIs are valid.
C2—For some associated set of CDIs, a TP must transform those CDIs from one valid state to another.
Since we must make sure that these TPs are certified to operate on a particular CDI, we must have E1 and E2.
E1—System must maintain a list of certified relations and ensure only TPs certified to run on a CDI change that CDI.
E2—System must associate a user with each TP and set of CDIs. The TP may access the CDI on behalf of the user if it is "legal".
E3—The system must authenticate the identity of each user attempting to execute a TP.
This requires keeping track of triples (user, TP, {CDIs}) called "allowed relations".
C3—Allowed relations must meet the requirements of "separation of duty".
We need authentication to keep track of this.
C4—All TPs must append to a log enough information to reconstruct the operation.
When information enters the system it need not be trusted or constrained (i.e. can be a UDI). We must deal with this appropriately.
C5—Any TP that takes a UDI as input may only perform valid transactions for all possible values of the UDI. The TP will either accept (convert to CDI) or reject the UDI.
Finally, to prevent people from gaining access by changing qualifications of a TP:
E4—Only the certifier of a TP may change the list of entities associated with that TP.
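To make the triple-based mediation concrete, the sketch below models enforcement rules E1–E3 in Python. It is an illustrative sketch only: the class and method names (ClarkWilsonSystem, execute, register_tp, etc.) are hypothetical and not part of the Clark–Wilson formalism, and the certification rules C1–C5 are not implemented.

```python
# Illustrative sketch of Clark-Wilson enforcement rules E1-E3 (names are hypothetical).
class ClarkWilsonSystem:
    def __init__(self):
        self.cdis = {}              # name -> constrained data item (CDI)
        self.tps = {}               # name -> transformation procedure (TP)
        self.certified = set()      # E1: certified (tp_name, cdi_name) relations
        self.allowed = set()        # E2: allowed (user, tp_name, cdi_name) triples
        self.authenticated = set()  # E3: users who have proven their identity

    def register_tp(self, name, fn):
        self.tps[name] = fn

    def certify(self, tp_name, cdi_name):
        self.certified.add((tp_name, cdi_name))

    def allow(self, user, tp_name, cdi_name):
        self.allowed.add((user, tp_name, cdi_name))

    def authenticate(self, user):
        self.authenticated.add(user)

    def execute(self, user, tp_name, cdi_name):
        """Subjects reach CDIs only through TPs, and only via allowed triples."""
        if user not in self.authenticated:                  # E3
            raise PermissionError("user not authenticated")
        if (tp_name, cdi_name) not in self.certified:       # E1
            raise PermissionError("TP not certified for this CDI")
        if (user, tp_name, cdi_name) not in self.allowed:   # E2
            raise PermissionError("triple not in allowed relations")
        self.cdis[cdi_name] = self.tps[tp_name](self.cdis[cdi_name])
        return self.cdis[cdi_name]


# Example: a well-formed transaction that debits an account (a CDI).
system = ClarkWilsonSystem()
system.cdis["account"] = {"balance": 100}
system.register_tp("debit10", lambda cdi: {**cdi, "balance": cdi["balance"] - 10})
system.certify("debit10", "account")
system.allow("alice", "debit10", "account")
system.authenticate("alice")
print(system.execute("alice", "debit10", "account"))  # {'balance': 90}
```

In this sketch, separation of duty (rules C3 and E4) would additionally require that the principal calling certify and allow is different from the one executing the TP.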
CW-lite
A variant of Clark-Wilson is the CW-lite model, which relaxes the original requirement of formal verification of TP semantics. The semantic verification is deferred to a separate model and general formal proof tools.
See also
Confused deputy problem
References
Clark, David D.; and Wilson, David R.; A Comparison of Commercial and Military Computer Security Policies; in Proceedings of the 1987 IEEE Symposium on Research in Security and Privacy (SP'87), May 1987, Oakland, CA; IEEE Press, pp. 184–193
Chapple, Mike; Stewart, James and Gibson Darril; Certified Information Systems Security Professional; Official Study Guide (8th Edition) 2018, John Wiley & Sons, Indiana
Shankar, Umesh; Jaeger, Trent; and Sailer, Reiner; "Toward Automated Information-Flow Integrity Verification for Security-Critical Applications"; in "Proceedings of the 2006 Network and Distributed Systems Security Symposium (NDSS '06), February 2006, San Diego, CA“; Internet Society, pp. 267–280
External links
Slides about Clark–Wilson used by professor Matt Bishop to teach computer security
http://doi.ieeecomputersociety.org/10.1109/SP.1987.10001
Computer security models | Clark–Wilson model | [
"Engineering"
] | 1,478 | [
"Cybersecurity engineering",
"Computer security models"
] |
2,035,588 | https://en.wikipedia.org/wiki/Radiochemistry | Radiochemistry is the chemistry of radioactive materials, where radioactive isotopes of elements are used to study the properties and chemical reactions of non-radioactive isotopes (often within radiochemistry the absence of radioactivity leads to a substance being described as being inactive as the isotopes are stable). Much of radiochemistry deals with the use of radioactivity to study ordinary chemical reactions. This is very different from radiation chemistry where the radiation levels are kept too low to influence the chemistry.
Radiochemistry includes the study of both natural and man-made radioisotopes.
Main decay modes
All radioisotopes are unstable isotopes of elements that undergo nuclear decay and emit some form of radiation. The radiation emitted can be of several types including alpha, beta, gamma radiation, proton, and neutron emission along with neutrino and antiparticle emission decay pathways.
1. α (alpha) radiation—the emission of an alpha particle (which contains 2 protons and 2 neutrons) from an atomic nucleus. When this occurs, the atom's atomic mass will decrease by 4 units and the atomic number will decrease by 2.
2. β (beta) radiation—the transmutation of a neutron into an electron and a proton. After this happens, the electron is emitted from the nucleus into the electron cloud.
3. γ (gamma) radiation—the emission of electromagnetic energy (such as gamma rays) from the nucleus of an atom. This usually occurs during alpha or beta radioactive decay.
These three types of radiation can be distinguished by their difference in penetrating power.
Alpha can be stopped quite easily by a few centimetres of air or a piece of paper and is equivalent to a helium nucleus. Beta can be cut off by an aluminium sheet just a few millimetres thick and are electrons. Gamma is the most penetrating of the three and is a massless chargeless high-energy photon. Gamma radiation requires an appreciable amount of heavy metal radiation shielding (usually lead or barium-based) to reduce its intensity.
Activation analysis
By neutron irradiation of objects, it is possible to induce radioactivity; this activation of stable isotopes to create radioisotopes is the basis of neutron activation analysis. A notable object that has been studied in this way is hair from Napoleon's head, which has been examined for its arsenic content.
A series of different experimental methods exist, these have been designed to enable the measurement of a range of different elements in different matrices. To reduce the effect of the matrix it is common to use the chemical extraction of the wanted element and/or to allow the radioactivity due to the matrix elements to decay before the measurement of the radioactivity. Since the matrix effect can be corrected by observing the decay spectrum, little or no sample preparation is required for some samples, making neutron activation analysis less susceptible to contamination.
The effects of a series of different cooling times can be seen if a hypothetical sample that contains sodium, uranium, and cobalt in a 100:10:1 ratio was subjected to a very short pulse of thermal neutrons. The initial radioactivity would be dominated by the 24Na activity (half-life 15 h) but with increasing time the 239Np (half-life 2.4 d after formation from parent 239U with half-life 24 min) and finally the 60Co activity (5.3 yr) would predominate.
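The way activities with different half-lives dominate at different cooling times can be sketched numerically. The snippet below is illustrative only: the initial activities are arbitrary placeholders (deriving them from the 100:10:1 elemental ratio would require activation cross sections), and the growth of 239Np from its 239U parent is ignored.

```python
# Illustrative sketch: which activation product dominates after different cooling times,
# using the exponential decay law A(t) = A0 * 2**(-t / t_half).
half_lives_h = {"Na-24": 15.0, "Np-239": 2.4 * 24, "Co-60": 5.3 * 365.25 * 24}
initial_activity = {"Na-24": 1e6, "Np-239": 1e4, "Co-60": 1e2}  # hypothetical values

def activity(nuclide, t_hours):
    """Activity remaining after t_hours of cooling."""
    return initial_activity[nuclide] * 2 ** (-t_hours / half_lives_h[nuclide])

for t in (0, 24, 24 * 7, 24 * 30, 24 * 365):  # cooling times from 0 to one year
    dominant = max(half_lives_h, key=lambda n: activity(n, t))
    print(f"t = {t:>5} h: dominant activity is {dominant}")
```

With these placeholder numbers the short-lived 24Na dominates at first, 239Np takes over after about a week, and 60Co dominates at long cooling times, mirroring the qualitative sequence described above.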
Biology applications
One biological application is the study of DNA using radioactive phosphorus-32. In these experiments, stable phosphorus is replaced by the chemically identical radioactive P-32, and the resulting radioactivity is used in the analysis of the molecules and their behaviour.
Another example is the work that was done on the methylation of elements such as sulfur, selenium, tellurium, and polonium by living organisms. It has been shown that bacteria can convert these elements into volatile compounds; it is thought that methylcobalamin (vitamin B12) alkylates these elements to create the dimethyls. It has been shown that a combination of cobaloxime and inorganic polonium in sterile water forms a volatile polonium compound, while a control experiment that did not contain the cobalt compound did not form the volatile polonium compound. For the sulfur work, the isotope 35S was used, while for polonium 207Po was used. In related work, the addition of 57Co to a bacterial culture, followed by isolation of the cobalamin from the bacteria and measurement of its radioactivity, showed that the bacteria convert available cobalt into methylcobalamin.
In medicine, PET (positron emission tomography) scans are commonly used for diagnostic purposes. A radioactive tracer is injected intravenously into the patient, who is then taken to the PET machine. The tracer emits radiation outward from the patient, and the detectors in the machine interpret the radiation from the tracer. PET scanners use solid-state scintillation detection because of its high detection efficiency: NaI(Tl) crystals absorb the tracer's radiation and produce photons that are converted into an electrical signal for the machine to analyze.
Environmental
Radiochemistry also includes the study of the behaviour of radioisotopes in the environment; for instance, a forest or grass fire can make radioisotopes mobile again. In these experiments, fires were started in the exclusion zone around Chernobyl and the radioactivity in the air downwind was measured.
A vast number of processes can release radioactivity into the environment. For example, the action of cosmic rays on the air is responsible for the formation of radioisotopes (such as 14C and 32P); the decay of 226Ra forms 222Rn, a gas that can diffuse through rocks before entering buildings and can dissolve in water, thereby entering drinking water. In addition, human activities such as bomb tests, accidents, and normal releases from industry have resulted in the release of radioactivity.
Chemical form of the actinides
The environmental chemistry of some radioactive elements such as plutonium is complicated by the fact that solutions of this element can undergo disproportionation, and as a result many different oxidation states can coexist at once. Some work has been done on the identification of the oxidation state and coordination number of plutonium and the other actinides under different conditions. This includes work both on solutions of relatively simple complexes and on colloids. Two of the key matrices are soil/rocks and concrete; in these systems the chemical properties of plutonium have been studied using methods such as EXAFS and XANES.
Movement of colloids
While binding of a metal to the surfaces of soil particles can prevent its movement through a layer of soil, the soil particles that bear the radioactive metal can themselves migrate as colloidal particles through the soil. This has been shown to occur using soil particles labeled with 134Cs, which are able to move through cracks in the soil.
Normal background
Radioactivity has been present everywhere on Earth since its formation. According to the International Atomic Energy Agency, one kilogram of soil typically contains the following amounts of natural radioisotopes: 370 Bq of 40K (typical range 100–700 Bq), 25 Bq of 226Ra (typical range 10–50 Bq), 25 Bq of 238U (typical range 10–50 Bq) and 25 Bq of 232Th (typical range 7–50 Bq).
Action of microorganisms
The action of micro-organisms can fix uranium; Thermoanaerobacter can use chromium(VI), iron(III), cobalt(III), manganese(IV), and uranium(VI) as electron acceptors, while acetate, glucose, hydrogen, lactate, pyruvate, succinate, and xylose can act as electron donors for the metabolism of the bacteria. In this way, the metals can be reduced to form magnetite (Fe3O4), siderite (FeCO3), rhodochrosite (MnCO3), and uraninite (UO2). Other researchers have also worked on the fixing of uranium using bacteria; Francis R. Livens et al. (working at Manchester) have suggested that the reason why Geobacter sulfurreducens can reduce uranyl cations to uranium dioxide is that the bacteria reduce the uranyl cations to a uranium(V) intermediate, which then undergoes disproportionation to form uranium(VI) and UO2. This reasoning was based (at least in part) on the observation that the corresponding neptunium(V) cation is not converted to an insoluble neptunium oxide by the bacteria.
Education
Despite the growing use of nuclear medicine, the potential expansion of nuclear power plants, and worries about protection against nuclear threats and the management of the nuclear waste generated in past decades, the number of students opting to specialize in nuclear and radiochemistry has decreased significantly over the past few decades. Now, with many experts in these fields approaching retirement age, action is needed to avoid a workforce gap in these critical fields, for example by building student interest in these careers, expanding the educational capacity of universities and colleges, and providing more specific on-the-job training.
Nuclear and Radiochemistry (NRC) is mostly being taught at the university level, usually first at the Master- and PhD-degree level. In Europe, substantial effort is being done to harmonize and prepare the NRC education for the industry's and society's future needs. This effort is being coordinated in projects funded by the Coordinated Action supported by the European Atomic Energy Community's 7th Framework Program: The CINCH-II project - Cooperation in education and training In Nuclear Chemistry.
References
External links
ACS radioelectrochemistry
Nuclear chemistry
Radioactivity | Radiochemistry | [
"Physics",
"Chemistry"
] | 2,025 | [
"Nuclear physics",
"Nuclear chemistry",
"nan",
"Radiochemistry",
"Radioactivity"
] |
2,035,678 | https://en.wikipedia.org/wiki/Gibbs%27%20inequality | In information theory, Gibbs' inequality is a statement about the information entropy of a discrete probability distribution. Several other bounds on the entropy of probability distributions are derived from Gibbs' inequality, including Fano's inequality.
It was first presented by J. Willard Gibbs in the 19th century.
Gibbs' inequality
Suppose that $P = \{ p_1, \ldots, p_n \}$ and $Q = \{ q_1, \ldots, q_n \}$ are discrete probability distributions. Then
$$ - \sum_{i=1}^{n} p_i \log_2 p_i \;\leq\; - \sum_{i=1}^{n} p_i \log_2 q_i $$
with equality if and only if $p_i = q_i$ for $i = 1, \ldots, n$. Put in words, the information entropy of a distribution $P$ is less than or equal to its cross entropy with any other distribution $Q$.
The difference between the two quantities is the Kullback–Leibler divergence or relative entropy, so the inequality can also be written:
$$ D_{\mathrm{KL}}(P \,\|\, Q) \equiv \sum_{i=1}^{n} p_i \log_2 \frac{p_i}{q_i} \;\geq\; 0 . $$
Note that the use of base-2 logarithms is optional, and
allows one to refer to the quantity on each side of the inequality as an
"average surprisal" measured in bits.
Proof
For simplicity, we prove the statement using the natural logarithm, denoted by $\ln$, since
$$ \log_2 a = \frac{\ln a}{\ln 2} , $$
so the particular logarithm base that we choose only scales the relationship by the factor $1/\ln 2$.
Let $I$ denote the set of all $i$ for which $p_i$ is non-zero. Then, since $\ln x \leq x - 1$ for all $x > 0$, with equality if and only if $x = 1$, we have:
$$ - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \;\geq\; - \sum_{i \in I} p_i \left( \frac{q_i}{p_i} - 1 \right) \;=\; - \sum_{i \in I} q_i + \sum_{i \in I} p_i \;=\; - \sum_{i \in I} q_i + 1 \;\geq\; 0 . $$
The last inequality is a consequence of the pi and qi being part of a probability distribution. Specifically, the sum of all non-zero values is 1. Some non-zero qi, however, may have been excluded since the choice of indices is conditioned upon the pi being non-zero. Therefore, the sum of the qi may be less than 1.
So far, over the index set $I$, we have
$$ - \sum_{i \in I} p_i \ln \frac{q_i}{p_i} \;\geq\; 0 , $$
or equivalently
$$ - \sum_{i \in I} p_i \ln q_i \;\geq\; - \sum_{i \in I} p_i \ln p_i . $$
Both sums can be extended to all $i = 1, \ldots, n$, i.e. including $p_i = 0$, by recalling that the expression $p \ln p$ tends to 0 as $p$ tends to 0, and $(-\ln q)$ tends to $+\infty$ as $q$ tends to 0. We arrive at
$$ - \sum_{i=1}^{n} p_i \ln q_i \;\geq\; - \sum_{i=1}^{n} p_i \ln p_i . $$
For equality to hold, we require $\frac{q_i}{p_i} = 1$ for all $i \in I$ so that the equality $\ln \frac{q_i}{p_i} = \frac{q_i}{p_i} - 1$ holds, and $\sum_{i \in I} q_i = 1$, which means $q_i = 0$ if $i \notin I$, that is, $q_i = 0$ if $p_i = 0$.
This can happen if and only if $p_i = q_i$ for $i = 1, \ldots, n$.
Alternative proofs
The result can alternatively be proved using Jensen's inequality, the log sum inequality, or the fact that the Kullback-Leibler divergence is a form of Bregman divergence.
Proof by Jensen's inequality
Because log is a concave function, we have that:
$$ \sum_{i} p_i \log \frac{q_i}{p_i} \;\leq\; \log \left( \sum_{i} p_i \frac{q_i}{p_i} \right) \;=\; \log \left( \sum_{i} q_i \right) \;=\; 0 $$
where the first inequality is due to Jensen's inequality, and $Q$ being a probability distribution implies the last equality.
Furthermore, since $\log$ is strictly concave, by the equality condition of Jensen's inequality we get equality when
$$ \frac{q_1}{p_1} = \frac{q_2}{p_2} = \cdots = \frac{q_n}{p_n} $$
and
$$ \sum_{i} q_i = 1 . $$
Suppose that this ratio is $\sigma$, then we have that
$$ 1 = \sum_{i} q_i = \sum_{i} \sigma p_i = \sigma $$
where we use the fact that $P$ and $Q$ are probability distributions. Therefore, the equality happens when $p_i = q_i$ for all $i$.
Proof by Bregman divergence
Alternatively, it can be proved by noting that
$$ q - p - p \ln \frac{q}{p} \;\geq\; 0 $$
for all $p, q > 0$, with equality holding iff $p = q$. Then, summing over the states, we have
$$ \sum_{i} \left( q_i - p_i - p_i \ln \frac{q_i}{p_i} \right) \;=\; - \sum_{i} p_i \ln \frac{q_i}{p_i} \;\geq\; 0 $$
with equality holding iff $p_i = q_i$ for all $i$.
This is because the KL divergence is the Bregman divergence generated by the function .
Corollary
The entropy of $P$ is bounded by:
$$ H(p_1, \ldots, p_n) \;\leq\; \log_2 n . $$
The proof is trivial – simply set $q_i = \frac{1}{n}$ for all $i$.
See also
Information entropy
Bregman divergence
Log sum inequality
References
Information theory
Coding theory
Probabilistic inequalities
Articles containing proofs | Gibbs' inequality | [
"Mathematics",
"Technology",
"Engineering"
] | 650 | [
"Discrete mathematics",
"Coding theory",
"Telecommunications engineering",
"Applied mathematics",
"Theorems in probability theory",
"Computer science",
"Probabilistic inequalities",
"Information theory",
"Inequalities (mathematics)",
"Articles containing proofs"
] |
2,037,563 | https://en.wikipedia.org/wiki/Geodesics%20in%20general%20relativity | In general relativity, a geodesic generalizes the notion of a "straight line" to curved spacetime. Importantly, the world line of a particle free from all external, non-gravitational forces is a particular type of geodesic. In other words, a freely moving or falling particle always moves along a geodesic.
In general relativity, gravity can be regarded as not a force but a consequence of a curved spacetime geometry where the source of curvature is the stress–energy tensor (representing matter, for instance). Thus, for example, the path of a planet orbiting a star is the projection of a geodesic of the curved four-dimensional (4-D) spacetime geometry around the star onto three-dimensional (3-D) space.
Mathematical expression
The full geodesic equation is
$$ \frac{d^2 x^\mu}{ds^2} + \Gamma^\mu{}_{\alpha\beta} \frac{dx^\alpha}{ds} \frac{dx^\beta}{ds} = 0 $$
where $s$ is a scalar parameter of motion (e.g. the proper time), and $\Gamma^\mu{}_{\alpha\beta}$ are Christoffel symbols (sometimes called the affine connection coefficients or Levi-Civita connection coefficients) symmetric in the two lower indices. Greek indices may take the values: 0, 1, 2, 3 and the summation convention is used for repeated indices $\alpha$ and $\beta$. The quantity on the left-hand side of this equation is the acceleration of a particle, so this equation is analogous to Newton's laws of motion, which likewise provide formulae for the acceleration of a particle. The Christoffel symbols are functions of the four spacetime coordinates and so are independent of the velocity or acceleration or other characteristics of a test particle whose motion is described by the geodesic equation.
Equivalent mathematical expression using coordinate time as parameter
So far the geodesic equation of motion has been written in terms of a scalar parameter s. It can alternatively be written in terms of the time coordinate, $t \equiv x^0$ (here we have used the triple bar to signify a definition). The geodesic equation of motion then becomes:
$$ \frac{d^2 x^\mu}{dt^2} = - \Gamma^\mu{}_{\alpha\beta} \frac{dx^\alpha}{dt} \frac{dx^\beta}{dt} + \Gamma^0{}_{\alpha\beta} \frac{dx^\alpha}{dt} \frac{dx^\beta}{dt} \frac{dx^\mu}{dt} . $$
This formulation of the geodesic equation of motion can be useful for computer calculations and to compare General Relativity with Newtonian Gravity. It is straightforward to derive this form of the geodesic equation of motion from the form which uses proper time as a parameter using the chain rule. Notice that both sides of this last equation vanish when the mu index is set to zero. If the particle's velocity is small enough, then the geodesic equation reduces to this:
$$ \frac{d^2 x^n}{dt^2} = - \Gamma^n{}_{00} . $$
Here the Latin index n takes the values [1,2,3]. This equation simply means that all test particles at a particular place and time will have the same acceleration, which is a well-known feature of Newtonian gravity. For example, everything floating around in the International Space Station will undergo roughly the same acceleration due to gravity.
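As an illustration of using the coordinate-free parameterized form for computer calculations, the sketch below numerically integrates the geodesic equation for a user-supplied set of Christoffel symbols with a fixed-step Runge–Kutta scheme. The function names and the flat-space example connection are assumptions made for this sketch, not drawn from the article.

```python
# Illustrative sketch: integrate d^2 x^mu/ds^2 = -Gamma^mu_{ab} (dx^a/ds)(dx^b/ds)
# for a user-supplied connection, using a simple fixed-step RK4 integrator.
import numpy as np

def geodesic_rhs(state, christoffel):
    """state = (x, u) with coordinates x and four-velocity u = dx/ds."""
    x, u = state
    gamma = christoffel(x)                      # shape (4, 4, 4): Gamma[mu, alpha, beta]
    du = -np.einsum("mab,a,b->m", gamma, u, u)  # du^mu/ds = -Gamma^mu_{ab} u^a u^b
    return u.copy(), du

def integrate_geodesic(x0, u0, christoffel, ds=1e-3, steps=1000):
    x, u = np.array(x0, float), np.array(u0, float)
    for _ in range(steps):
        k1x, k1u = geodesic_rhs((x, u), christoffel)
        k2x, k2u = geodesic_rhs((x + 0.5 * ds * k1x, u + 0.5 * ds * k1u), christoffel)
        k3x, k3u = geodesic_rhs((x + 0.5 * ds * k2x, u + 0.5 * ds * k2u), christoffel)
        k4x, k4u = geodesic_rhs((x + ds * k3x, u + ds * k3u), christoffel)
        x += ds / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        u += ds / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
    return x, u

# Trivial check: in flat spacetime all Christoffel symbols vanish and geodesics are straight lines.
flat = lambda x: np.zeros((4, 4, 4))
x_end, u_end = integrate_geodesic([0, 0, 0, 0], [1, 0.1, 0, 0], flat, ds=0.01, steps=100)
```

With the flat connection the final four-velocity equals the initial one, as expected for a straight world line; substituting the Christoffel symbols of a curved metric would make the same loop trace out curved-spacetime geodesics.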
Derivation directly from the equivalence principle
Physicist Steven Weinberg has presented a derivation of the geodesic equation of motion directly from the equivalence principle. The first step in such a derivation is to suppose that a free falling particle does not accelerate in the neighborhood of a point-event with respect to a freely falling coordinate system (). Setting , we have the following equation that is locally applicable in free fall:
The next step is to employ the multi-dimensional chain rule. We have:
Differentiating once more with respect to the time, we have:
We have already said that the left-hand-side of this last equation must vanish because of the Equivalence Principle. Therefore:
Multiply both sides of this last equation by the following quantity:
Consequently, we have this:
Weinberg defines the affine connection as follows:
which leads to this formula:
Notice that, if we had used the proper time “s” as the parameter of motion, instead of using the locally inertial time coordinate “T”, then our derivation of the geodesic equation of motion would be complete. In any event, let us continue by applying the one-dimensional chain rule:
As before, we can set . Then the first derivative of x0 with respect to t is one and the second derivative is zero. Replacing λ with zero gives:
Subtracting d xλ / d t times this from the previous equation gives:
which is a form of the geodesic equation of motion (using the coordinate time as parameter).
The geodesic equation of motion can alternatively be derived using the concept of parallel transport.
Deriving the geodesic equation via an action
We can (and this is the most common technique) derive the geodesic equation via the action principle. Consider the case of trying to find a geodesic between two timelike-separated events.
Let the action be
$$ S = \int ds $$
where $ds = \sqrt{ - g_{\mu\nu}(x) \, dx^\mu \, dx^\nu }$ is the line element. There is a negative sign inside the square root because the curve must be timelike. To get the geodesic equation we must vary this action. To do this let us parameterize this action with respect to a parameter $\lambda$. Doing this we get:
$$ S = \int \sqrt{ - g_{\mu\nu} \, \frac{dx^\mu}{d\lambda} \frac{dx^\nu}{d\lambda} } \; d\lambda . $$
We can now go ahead and vary this action with respect to the curve . By the principle of least action we get:
Using the product rule we get:
where
Integrating by-parts the last term and dropping the total derivative (which equals to zero at the boundaries) we get that:
Simplifying a bit we see that:
so,
multiplying this equation by we get:
So by Hamilton's principle we find that the Euler–Lagrange equation is
Multiplying by the inverse metric tensor we get that
Thus we get the geodesic equation:
$$ \frac{d^2 x^\lambda}{ds^2} + \Gamma^\lambda{}_{\mu\nu} \frac{dx^\mu}{ds} \frac{dx^\nu}{ds} = 0 $$
with the Christoffel symbol defined in terms of the metric tensor as
$$ \Gamma^\lambda{}_{\mu\nu} = \frac{1}{2} g^{\lambda\sigma} \left( \partial_\mu g_{\sigma\nu} + \partial_\nu g_{\sigma\mu} - \partial_\sigma g_{\mu\nu} \right) . $$
(Note: Similar derivations, with minor amendments, can be used to produce analogous results for geodesics between light-like or space-like separated pairs of points.)
Equation of motion may follow from the field equations for empty space
Albert Einstein believed that the geodesic equation of motion can be derived from the field equations for empty space, i.e. from the fact that the Ricci curvature vanishes. He wrote:
It has been shown that this law of motion — generalized to the case of arbitrarily large gravitating masses — can be derived from the field equations of empty space alone. According to this derivation the law of motion is implied by the condition that the field be singular nowhere outside its generating mass points.
and
One of the imperfections of the original relativistic theory of gravitation was that as a field theory it was not complete; it introduced the independent postulate that the law of motion of a particle is given by the equation of the geodesic.
A complete field theory knows only fields and not the concepts of particle and motion. For these must not exist independently from the field but are to be treated as part of it.
On the basis of the description of a particle without singularity, one has the possibility of a logically more satisfactory treatment of the combined problem: The problem of the field and that of the motion coincide.
Both physicists and philosophers have often repeated the assertion that the geodesic equation can be obtained from the field equations to describe the motion of a gravitational singularity, but this claim remains disputed. According to David Malament, “Though the geodesic principle can be recovered as theorem in general relativity, it is not a consequence of Einstein’s equation (or the conservation principle) alone. Other assumptions are needed to derive the theorems in question.” Less controversial is the notion that the field equations determine the motion of a fluid or dust, as distinguished from the motion of a point-singularity.
Extension to the case of a charged particle
In deriving the geodesic equation from the equivalence principle, it was assumed that particles in a local inertial coordinate system are not accelerating. However, in real life, the particles may be charged, and therefore may be accelerating locally in accordance with the Lorentz force. That is:
with
The Minkowski tensor $\eta_{\mu\nu}$ is given by:
$$ \eta_{\mu\nu} = \operatorname{diag}(-1, 1, 1, 1) . $$
These last three equations can be used as the starting point for the derivation of an equation of motion in General Relativity, instead of assuming that acceleration is zero in free fall. Because the Minkowski tensor is involved here, it becomes necessary to introduce something called the metric tensor in General Relativity. The metric tensor g is symmetric, and locally reduces to the Minkowski tensor in free fall. The resulting equation of motion is as follows:
with
This last equation signifies that the particle is moving along a timelike geodesic; massless particles like the photon instead follow null geodesics (replace −1 with zero on the right-hand side of the last equation). It is important that the last two equations are consistent with each other, when the latter is differentiated with respect to proper time, and the following formula for the Christoffel symbols ensures that consistency:
This last equation does not involve the electromagnetic fields, and it is applicable even in the limit as the electromagnetic fields vanish. The letter g with superscripts refers to the inverse of the metric tensor. In General Relativity, indices of tensors are lowered and raised by contraction with the metric tensor or its inverse, respectively.
Geodesics as curves of stationary interval
A geodesic between two events can also be described as the curve joining those two events which has a stationary interval (4-dimensional "length"). Stationary here is used in the sense in which that term is used in the calculus of variations, namely, that the interval along the curve varies minimally among curves that are nearby to the geodesic.
In Minkowski space there is only one geodesic that connects any given pair of events, and for a time-like geodesic, this is the curve with the longest proper time between the two events. In curved spacetime, it is possible for a pair of widely separated events to have more than one time-like geodesic between them. In such instances, the proper times along several geodesics will not in general be the same. For some geodesics in such instances, it is possible for a curve that connects the two events and is nearby to the geodesic to have either a longer or a shorter proper time than the geodesic.
For a space-like geodesic through two events, there are always nearby curves which go through the two events that have either a longer or a shorter proper length than the geodesic, even in Minkowski space. In Minkowski space, the geodesic will be a straight line. Any curve that differs from the geodesic purely spatially (i.e. does not change the time coordinate) in any inertial frame of reference will have a longer proper length than the geodesic, but a curve that differs from the geodesic purely temporally (i.e. does not change the space coordinates) in such a frame of reference will have a shorter proper length.
The interval of a curve in spacetime is
Then, the Euler–Lagrange equation,
becomes, after some calculation,
where
The goal being to find a curve for which the value of
is stationary, where
such goal can be accomplished by calculating the Euler–Lagrange equation for f, which is
Substituting the expression of f into the Euler–Lagrange equation (which makes the value of the integral l stationary), gives
Now calculate the derivatives:
This is just one step away from the geodesic equation.
If the parameter s is chosen to be affine, then the right side of the above equation vanishes (because is constant). Finally, we have the geodesic equation
Derivation using autoparallel transport
The geodesic equation can be alternatively derived from the autoparallel transport of curves. The derivation is based on the lectures given by Frederic P. Schuller at the We-Heraeus International Winter School on Gravity & Light.
Let $M$ be a smooth manifold with connection $\nabla$ and let $\gamma$ be a curve on the manifold. The curve is said to be autoparallely transported if and only if $\nabla_{\dot{\gamma}} \dot{\gamma} = 0$.
In order to derive the geodesic equation, we have to choose a chart :
Using the linearity and the Leibniz rule:
Using how the connection acts on functions () and expanding the second term with the help of the connection coefficient functions:
The first term can be simplified to . Renaming the dummy indices:
We finally arrive to the geodesic equation:
See also
Geodesic
Geodetic precession
Schwarzschild geodesics
Geodesics as Hamiltonian flows
Synge's world function
Bibliography
Steven Weinberg, Gravitation and Cosmology: Principles and Applications of the General Theory of Relativity, (1972) John Wiley & Sons, New York . See chapter 3.
Lev D. Landau and Evgenii M. Lifschitz, The Classical Theory of Fields, (1973) Pergammon Press, Oxford See section 87.
Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, (1970) W.H. Freeman, New York; .
Bernard F. Schutz, A first course in general relativity, (1985; 2002) Cambridge University Press: Cambridge, UK; . See chapter 6.
Robert M. Wald, General Relativity, (1984) The University of Chicago Press, Chicago. See Section 3.3.
References
General relativity
Articles containing proofs | Geodesics in general relativity | [
"Physics",
"Mathematics"
] | 2,697 | [
"Articles containing proofs",
"General relativity",
"Theory of relativity"
] |
27,612,945 | https://en.wikipedia.org/wiki/Cyclic%20ozone | Cyclic ozone is a theoretically predicted form of ozone. Like ordinary ozone (O3), it would have three oxygen atoms. It would differ from ordinary ozone in how those three oxygen atoms are arranged. In ordinary ozone, the atoms are arranged in a bent line; in cyclic ozone, they would form an equilateral triangle.
Some of the properties of cyclic ozone have been predicted theoretically. It should have more energy than ordinary ozone.
There is evidence that tiny quantities of cyclic ozone exist at the surface of magnesium oxide crystals in air. Cyclic ozone has not been made in bulk, although at least one researcher has attempted to do so using lasers. Another possibility to stabilize this form of oxygen is to produce it inside confined spaces, e.g., fullerene.
It has been speculated that, if cyclic ozone could be made in bulk, and if it proved to have good stability properties, it could be added to liquid oxygen to improve the specific impulse of rocket fuel.
The possibility of cyclic ozone has been confirmed by a variety of theoretical approaches.
References
External links
Allotropes of oxygen
Hypothetical chemical compounds
Three-membered rings
Ozone
Homonuclear triatomic molecules | Cyclic ozone | [
"Chemistry"
] | 237 | [
"Allotropes",
"Oxidizing agents",
"Hypotheses in chemistry",
"Ozone",
"Theoretical chemistry",
"Allotropes of oxygen",
"Hypothetical chemical compounds"
] |
27,614,604 | https://en.wikipedia.org/wiki/ISC%20High%20Performance | The ISC High Performance, formerly known as the International Supercomputing Conference, is a yearly conference on supercomputing which has been held in Europe since 1986. It stands as the oldest supercomputing conference in the world.
History
In 1986 Professor Dr. Hans Werner Meuer, director of the computer centre and professor for computer science at the University of Mannheim (Germany) co-founded and organized the "Mannheim Supercomputer Seminar" which had 81 participants. This was held yearly and became the annual International Supercomputing Conference and Exhibition (ISC). In 2015, the name was officially changed to ISC High Performance. The conference is attended by speakers, vendors, and researchers from all over the world. Since 1993 the conference has been the venue for one of the twice yearly TOP500 announcements where the fastest 500 supercomputers in the world are named. The other annual announcement is in November at the SC Conference (The International Conference for High Performance Computing, Networking, Storage and Analysis) in the USA.
The conference celebrated 30 years with the 19 June 2016 meeting in Frankfurt, Germany. Its 33rd edition in 2019 attracted a record number of participants – 164 exhibitors, and 3,573 visitors from 64 countries.
References
External links
of the ISC High Performance
1986 establishments in West Germany
Annual events in Germany
Computer science conferences
Recurring events established in 1986
Supercomputing | ISC High Performance | [
"Technology"
] | 284 | [
"Computer science",
"Computer science conferences",
"Supercomputing"
] |
34,447,124 | https://en.wikipedia.org/wiki/Quantum%20ESPRESSO | Quantum ESPRESSO (Quantum Open-Source Package for Research in Electronic Structure, Simulation, and Optimization; QE) is a suite for first-principles electronic-structure calculations and materials modeling, distributed for free and as free software under the GNU General Public License. It is based on density functional theory (DFT), plane wave basis sets, and pseudopotentials (both norm-conserving and ultrasoft).
The core plane wave DFT functions of QE are provided by the PWscf component (PWscf previously existed as an independent project). PWscf (Plane-Wave Self-Consistent Field) is a set of programs for electronic structure calculations within DFT and density functional perturbation theory, using plane wave basis sets and pseudopotentials. The software is released under the GNU General Public License.
The latest version QE-7.4 was released on 21 October 2024.
Quantum ESPRESSO Project
Quantum ESPRESSO is an open initiative of the CNR-IOM DEMOCRITOS National Simulation Center in Trieste (Italy) and its partners, in collaboration with different centers worldwide such as MIT, Princeton University, the University of Minnesota and the École Polytechnique Fédérale de Lausanne. The project is coordinated by the QUANTUM ESPRESSO foundation, which was formed by many research centers and groups all over the world. The first version, called pw.1.0.0, was released on 15-06-2001.
The program is written mainly in Fortran-90 with some parts in C or in Fortran-77. It is composed of a set of core components, a set of plug-ins for advanced tasks, and a set of third-party packages.
The basic packages include Pwscf, which solves the self-consistent Kohn-Sham equations, obtained for a periodic solid, CP to carry out Car-Parrinello molecular dynamics, and PostProc, which allows data analysis and plotting. Noteworthy additional packages include atomic for pseudopotential generation, PHonon for density-functional perturbation theory (DFPT) and the calculation of second- and third-order derivatives of the energy with respect to atomic displacements, and NEB (nudged elastic band) for the calculation of reaction pathways and energy barriers.
Target problems
The different tasks that can be performed include
Ground state calculations
Structural optimization
Transition states and minimum energy paths
Response properties (DFPT), such as phonon frequencies, electron-phonon interactions and EPR and NMR chemical shifts
Ab initio molecular dynamics: Car-Parrinello and Born-Oppenheimer MD
Spectroscopic properties
Quantum transport
Generation of pseudopotentials
Parallelization
The main components of the Quantum ESPRESSO distribution are designed to exploit the architecture of today's supercomputers, which are characterized by multiple levels and layers of inter-processor communication. Parallelization is achieved using both MPI and OpenMP, allowing the main codes of the distribution to run in parallel on most or all parallel machines with very good performance.
See also
Quantum chemistry computer programs
Density Functional Theory
References
External links
Website of Quantum ESPRESSO
Website of Quantum ESPRESSO Foundation (QEF)
Computational chemistry software
Computational physics
Density functional theory software
Free science software
Physics software | Quantum ESPRESSO | [
"Physics",
"Chemistry"
] | 676 | [
"Computational chemistry software",
"Chemistry software",
"Computational physics",
"Computational chemistry",
"Density functional theory software",
"Physics software"
] |
34,453,685 | https://en.wikipedia.org/wiki/Tait%20equation | In fluid mechanics, the Tait equation is an equation of state, used to relate liquid density to hydrostatic pressure. The equation was originally published by Peter Guthrie Tait in 1888 in the form
where is the hydrostatic pressure in addition to the atmospheric one, is the volume at atmospheric pressure, is the volume under additional pressure , and are experimentally determined parameters.
A very detailed historical study on the Tait equation with the physical interpretation of the two parameters and is given in reference.
Tait-Tammann equation of state
In 1895, the original isothermal Tait equation was replaced by Tammann with an equation of the form
where is the isothermal mixed bulk modulus.
This above equation is popularly known as the Tait equation.
The integrated form is commonly written
where
is the specific volume of the substance (in units of ml/g or m3/kg)
is the specific volume at
(same units as ) and (same units as ) are functions of temperature
Pressure formula
The expression for the pressure in terms of the specific volume is
A highly detailed study on the Tait-Tammann equation of state with the physical interpretation of the two empirical parameters and is given in chapter 3 of reference. Expressions as a function of temperature for the two empirical parameters and are given for water, seawater, helium-4, and helium-3 in the entire liquid phase up to the critical temperature . The special case of the supercooled phase of water is discussed in Appendix D of reference. The case of liquid argon between the triple point temperature and 148 K is dealt with in detail in section 6 of the reference.
Tait-Murnaghan equation of state
Another popular isothermal equation of state that goes by the name "Tait equation" is the Murnaghan model which is sometimes expressed as
where is the specific volume at pressure , is the specific volume at pressure , is the bulk modulus at , and is a material parameter.
Pressure formula
This equation, in pressure form, can be written as
where are mass densities at , respectively.
For pure water, typical parameters are a reference pressure of 101,325 Pa, a reference density of 1000 kg/m³, a bulk modulus of 2.15 GPa, and an exponent of 7.15.
Note that this form of the Tait equation of state is identical to that of the Murnaghan equation of state.
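A minimal numerical sketch of this pressure form, using the typical pure-water values quoted above; the symbol names p0, rho0, K0 and n are assumptions chosen to match the verbal description, not notation taken from the original text.

```python
# Tait-Murnaghan equation of state for a liquid (illustrative sketch).
p0 = 101_325.0    # reference pressure [Pa]
rho0 = 1000.0     # reference density of pure water [kg/m^3]
K0 = 2.15e9       # bulk modulus at the reference pressure [Pa]
n = 7.15          # dimensionless material exponent

def pressure(rho):
    """Pressure as a function of density in the Murnaghan/Tait form."""
    return p0 + (K0 / n) * ((rho / rho0) ** n - 1.0)

def density(p):
    """Inverse relation: density as a function of pressure."""
    return rho0 * (1.0 + n * (p - p0) / K0) ** (1.0 / n)

# Example: water under 100 MPa of hydrostatic pressure is compressed by ~4%.
print(density(100e6))   # ~1041 kg/m^3
```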
Bulk modulus formula
The tangent bulk modulus predicted by the MacDonald–Tait model is
Tumlirz–Tammann–Tait equation of state
A related equation of state that can be used to model liquids is the Tumlirz equation (sometimes called the Tammann equation and originally proposed by Tumlirz in 1909 and Tammann in 1911 for pure water). This relation has the form
where is the specific volume, is the pressure, is the salinity, is the temperature, and is the specific volume when , and are parameters that can be fit to experimental data.
The Tumlirz–Tammann version of the Tait equation for fresh water, i.e., when , is
For pure water, the temperature-dependence of are:
In the above fits, the temperature is in degrees Celsius, is in bars, is in cc/gm, and is in bars-cc/gm.
Pressure formula
The inverse Tumlirz–Tammann–Tait relation for the pressure as a function of specific volume is
Bulk modulus formula
The Tumlirz-Tammann-Tait formula for the instantaneous tangent bulk modulus of pure water is a quadratic function of (for an alternative see )
Modified Tait equation of state
Following in particular the study of underwater explosions and more precisely the shock waves emitted, J.G. Kirkwood proposed in 1965 a more appropriate form of equation of state to describe high pressures (>1 kbar) by expressing the isentropic compressibility coefficient as
where represents here the entropy.
The two empirical parameters and are now function of entropy such that
is dimensionless
has the same units as
The integration leads to the following expression for the volume along the isentropic
where .
Pressure formula
The expression for the pressure in terms of the specific volume along the isentropic is
A highly detailed study on the Modified Tait equation of state with the physical interpretation of the two empirical parameters and is given in chapter 4 of reference. Expressions as a function of entropy for the two empirical parameters and are given for water, helium-3 and helium-4.
See also
Equation of state
References
Equations of state
Fluid mechanics | Tait equation | [
"Physics",
"Engineering"
] | 916 | [
"Equations of physics",
"Statistical mechanics",
"Civil engineering",
"Equations of state",
"Fluid mechanics"
] |
34,458,189 | https://en.wikipedia.org/wiki/Rule-based%20modeling | Rule-based modeling is a modeling approach that uses a set of rules that indirectly specifies a mathematical model. The rule-set can either be translated into a model such as Markov chains or differential equations, or be treated using tools that directly work on the rule-set in place of a translated model, as the latter is typically much bigger. Rule-based modeling is especially effective in cases where the rule-set is significantly simpler than the model it implies, meaning that the model is a repeated manifestation of a limited number of patterns. An important domain where this is often the case is biochemical models of living organisms. Groups of mutually corresponding substances are subject to mutually corresponding interactions.
BioNetGen is a suite of software tools used to generate mathematical models consisting of ordinary differential equations without writing the equations directly. For example, below is a rule in the BioNetGen format:
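An illustrative rule of this kind, written in BioNetGen-style syntax; it is a reconstruction consistent with the species described below, and the rate-constant names kon and koff are placeholders rather than values from any particular model.

```
# Reversible binding of a B molecule (site b) to either free site a of A.
# kon and koff are placeholder rate constants for illustration.
A(a) + B(b) <-> A(a!1).B(b!1)  kon, koff
```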
Where:
A(a,a): Represents a model species A with two free binding sites a
B(b): Represents a model species B with one free binding site
A(a!1).B(b!1): Represents model species where at least one binding site of A is bound to the binding site of B
With the above line of code, BioNetGen will automatically create an ODE for each model species with the correct mass balance. In addition, a fourth species will be created, because the rule above implies that two B molecules can bind to a single A molecule since A has two binding sites. Therefore, the following species will also be generated:
4. A(a!1,a!2).B(b!1).B(b!2): Molecule A with both binding sites occupied by two different B molecules.
For biochemical systems
Early efforts to use rule-based modeling in simulation of biochemical systems include the stochastic simulation systems StochSim
A widely used tool for rule-based modeling of biochemical networks is BioNetGen. It is released under the GNU GPL, version 3. BioNetGen includes a language to describe chemical substances, including the states they can assume and the bindings they can undergo. These rules can be used to create a reaction network model or to perform computer simulations directly on the rule set. The biochemical modeling framework Virtual Cell includes a BioNetGen interpreter.
A close alternative is the Kappa language. Another alternative is BioChemical Space language.
References
Systems biology
Molecular biology
Stochastic simulation
Free science software | Rule-based modeling | [
"Chemistry",
"Biology"
] | 497 | [
"Molecular and cellular biology stubs",
"Biochemistry stubs",
"Molecular biology",
"Biochemistry",
"Systems biology"
] |
34,458,244 | https://en.wikipedia.org/wiki/Antimony%20potassium%20tartrate | Antimony potassium tartrate, also known as potassium antimonyl tartrate, potassium antimontarterate, or tartar emetic, has the formula K2Sb2(C4H2O6)2. The compound has long been known as a powerful emetic, and was used in the treatment of schistosomiasis and leishmaniasis. It is used as a resolving agent. It typically is obtained as a hydrate.
Medical
The first treatment application against trypanosomiasis was tested in 1906, and the compound's use to treat other tropical diseases was researched. The treatment of leishmania with antimony potassium tartrate started in 1913. After the introduction of antimony(V) containing complexes like sodium stibogluconate and meglumine antimoniate, the use of antimony potassium tartrate was phased out. After British physician John Brian Christopherson's discovery in 1918 that antimony potassium tartrate could cure schistosomiasis, the antimonial drugs became widely used. However, the injection of antimony potassium tartrate had severe side effects such as Adams–Stokes syndrome and therefore alternative substances were under investigation. With the introduction and subsequent larger use of praziquantel in the 1970s, antimony-based treatments fell out of use.
Tartar emetic was used in the late 19th and early 20th century in patent medicine as a remedy for alcohol intoxication, and was first ruled ineffective in the United States in 1941, in United States v. 11 1/4 Dozen Packages of Articles Labeled in Part Mrs. Moffat's Shoo-Fly Powders for Drunkenness.
The New England Journal of Medicine reported a case study of a patient whose wife secretly gave him a dose of a product called "tartaro emetico" which contained trivalent antimony (antimony potassium tartrate) and is sold in Central America as an aversive treatment for alcohol use disorder. The patient, who had been out drinking the night before, developed persistent vomiting shortly after being given orange juice with the drug. When admitted to the hospital, and later in the intensive care unit, he experienced severe chest pains, cardiac abnormalities, renal and hepatic toxicity, and nearly died. The Journal reports that "Two years later, he [the patient] reports complete abstinence from alcohol."
Emetic
Antimony potassium tartrate's potential as an emetic has been known since the Middle Ages. The compound itself was considered toxic and therefore a different way to administer it was found. Cups made from pure antimony were used to store wine for 24 hours and then the resulting solution of antimony potassium tartrate in wine was consumed in small portions until the wanted emetic effect was reached.
Poisoning by "tartarised antimony" or "emetic tartar" is a plot device in the first modern detective novel, The Notting Hill Mystery (1862). The emetic tartar was kept by a character in the novel because he was "addicted to the pleasures of the table, and was in the habit of taking an occasional emetic."
The compound is still used to induce vomiting in captured animals in order to study their diets.
Insecticide
Antimony potassium tartrate is used as an insecticide against thrips. It is in IRAC class 8E.
Preparation, structure, reactions
Antimony potassium tartrate is prepared by treating a solution of potassium hydrogen tartrate and antimony trioxide:
2KOH + Sb2O3 + 2(HOCHCO2H)2 → K2Sb2(C4H2O6)2 + 5H2O
With an excess of tartaric acid, the monoanionic monoantimony salt is produced:
2KOH + Sb2O3 + 4(HOCHCO2H)2 → 2KSb(C4H2O6)2 + 2H2O
Antimony potassium tartrate has been the subject of several X-ray crystallography studies.
The core complex is an anionic dimer of antimony tartrate, [Sb2(C4H2O6)2]2−, which is arranged in a large ring with the carbonyl groups pointing outwards. The complex has D2 molecular symmetry with two Sb(III) centers bonded in distorted square pyramids. Water and potassium ions are held within the unit cell but are not tightly bound to the dimer. The anion is a well-used resolving agent.
Further reading
Of historic interest:
References
Potassium compounds
Antimony(III) compounds
Tartrates
Emetics
Double salts
Drugs with no legal status | Antimony potassium tartrate | [
"Chemistry"
] | 963 | [
"Double salts",
"Salts"
] |
34,458,555 | https://en.wikipedia.org/wiki/B%C3%B6ttcher%27s%20equation | Böttcher's equation, named after Lucjan Böttcher, is the functional equation
where
is a given analytic function with a superattracting fixed point of order at , (that is, in a neighbourhood of ), with n ≥ 2
is a sought function.
The logarithm of this functional equation amounts to Schröder's equation.
Solution
The solution of the functional equation is a function given in implicit form.
Lucjan Emil Böttcher sketched a proof in 1904 of the existence of a solution: an analytic function F in a neighborhood of the fixed point a, such that $F(h(z)) = \bigl(F(z)\bigr)^{n}$.
This solution is sometimes called:
the Böttcher coordinate
the Böttcher function
the Boettcher map.
The complete proof was published by Joseph Ritt in 1920, who was unaware of the original formulation.
Böttcher's coordinate (the logarithm of the Schröder function) conjugates $h(z)$ in a neighbourhood of the fixed point to the function $z^{n}$. An especially important case is when $h(z)$ is a polynomial of degree $n$, and $a = \infty$.
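In this polynomial case the Böttcher coordinate admits a well-known limit construction, stated here for completeness in standard (not necessarily the article's original) notation: for a monic polynomial $h$ of degree $n \ge 2$,

$$F(z) = \lim_{k \to \infty} \bigl(h^{\circ k}(z)\bigr)^{1/n^{k}},$$

where $h^{\circ k}$ denotes the $k$-th iterate and the $n^{k}$-th roots are chosen so that $F(z)/z \to 1$ as $z \to \infty$.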
Explicit
One can explicitly compute Böttcher coordinates for:
power maps
Chebyshev polynomials
Examples
For the function h and n=2
the Böttcher function F is:
Applications
Böttcher's equation plays a fundamental role in the part of holomorphic dynamics which studies iteration of polynomials of one complex variable.
Global properties of the Böttcher coordinate were studied by Fatou
and Douady and Hubbard.
See also
Schröder's equation
External ray
References
Functional equations | Böttcher's equation | [
"Mathematics"
] | 311 | [
"Mathematical analysis",
"Mathematical objects",
"Functional equations",
"Equations"
] |
22,812,666 | https://en.wikipedia.org/wiki/RAFOS%20float | RAFOS floats are submersible devices used to map ocean currents well below the surface. They drift with these deep currents and listen for acoustic "pongs" emitted at designated times from multiple moored sound sources. By analyzing the time required for each pong to reach a float, researchers can pinpoint its position by trilateration. The floats are able to detect the pongs at ranges of hundreds of kilometers because they generally target a range of depths known as the SOFAR (Sound Fixing And Ranging) channel, which acts as a waveguide for sound. The name "RAFOS" derives from the earlier SOFAR floats, which emitted sounds that moored receivers picked up, allowing real-time underwater tracking. When the transmit and receive roles were reversed, so was the name: RAFOS is SOFAR spelled backward. Listening for sound requires far less energy than transmitting it, so RAFOS floats are cheaper and longer lasting than their predecessors, but they do not provide information in real-time: instead they store it on board, and upon completing their mission, drop a weight, rise to the surface, and transmit the data to shore by satellite.
Introduction
Of the importance of measuring ocean currents
The underwater world is still mostly unknown. The main reason is the difficulty of gathering information in situ, of experimenting, and even of reaching certain places. The ocean is nonetheless of crucial importance to scientists, as it covers about 71% of the planet.
Knowledge of ocean currents is of crucial importance. In scientific terms, as in the study of global warming, ocean currents are found to greatly affect the Earth's climate, since they are the main heat-transfer mechanism: they drive the heat flux between warm and cold regions and, in a larger sense, drive almost every understood circulation. These currents also affect marine debris, and vice versa.
In economic terms, a better understanding can help reduce the cost of shipping, since favourable currents help ships reduce fuel consumption. In the sailing-ship era this knowledge was even more essential. Even today, round-the-world sailing competitors employ surface currents to their benefit. Ocean currents are also very important in the dispersal of many life forms. An example is the life cycle of the European eel.
The SOFAR channel
The SOFAR channel (short for Sound Fixing and Ranging channel), or deep sound channel (DSC), is a horizontal layer of water in the ocean at the depth where the speed of sound is minimal, on average around 1,200 m. It acts as a wave-guide for sound, and low-frequency sound waves within the channel may travel thousands of miles before dissipating.
The SOFAR channel is centred on the depth where the cumulative effect of temperature and water pressure (and, to a smaller extent, salinity) combine to create the region of minimum sound speed in the water column. Near the surface, the rapidly falling temperature causes a decrease in sound speed, or a negative sound speed gradient. With increasing depth, the increasing pressure causes an increase in sound speed, or a positive sound speed gradient.
The depth where the sound speed is at a minimum is the sound channel axis. This is a characteristic that can be found in optical guides. If a sound wave propagates away from this horizontal channel, the part of the wave furthest from the channel axis travels faster, so the wave turns back toward the channel axis. As a result, the sound waves trace a path that oscillates across the SOFAR channel axis. This principle is similar to long distance transmission of light in an optical fiber. In this channel, a sound has a range of over 2000 km.
RAFOS float
Global idea
To use a RAFOS float, one submerges it at the specified location so that it is carried by the current. Then, every so often (usually every 6 or 8 hours), an 80-second sound signal is sent from moored emitters. Because a signal transmitted in the ocean preserves its phase structure (or pattern) for several minutes, the signals are designed so that the frequency increases linearly by 1.523 Hz from start to end, centered around 250 Hz. The receivers then listen for this specific phase structure by comparing the incoming data with a reference 80-second signal. This rejects noise added to the wave during its travel by floating particles or fish.
The detection scheme can be simplified by keeping only the sign of the signal (positive or negative), so that only a single bit of new information is handled at each time step. This method works very well and allows the use of small microprocessors, so the float itself does the listening and computing while a moored sound source does the transmitting. From the arrival times of the signals from two or more sound sources, and the previous location of the float, its current location can easily be determined to considerable (<1 km) accuracy. For instance, the float may listen for three sources and store the arrival times of the two largest signals heard from each source. The location of the float is then computed onshore.
Technical characteristics
Mechanical characteristics
The floats consist of a glass pipe, 8 cm in diameter and 1.5 to 2.2 m long, that contains a hydrophone, signal-processing circuits, a microprocessor, a clock and a battery. A float weighs about 10 kg. The lower end is sealed with a flat aluminium endplate where all electrical and mechanical penetrators are located. The glass thickness is about 5 mm, giving the float a theoretical maximum depth of about 2700 m. The external ballast is suspended by a short piece of wire chosen for its resistance to saltwater corrosion. By dissolving the wire electrolytically, the 1 kg ballast is released and the float returns to the surface.
Electrical characteristics
The electronics can be divided into four categories: a satellite transmitter used after surfacing, the set of sensors, a time reference clock, and a microprocessor. The clock is essential in locating the float, since it is used as the reference to calculate the travel time of the sound signals from the moored emitters. It also keeps the float working on schedule. The microprocessor controls all subsystems except the clock, and stores the collected data on a regular schedule. The satellite transmitter is used to send data packages to orbiting satellites after surfacing. It usually takes three days for the satellites to collect the whole dataset.
The isobaric model
An isobaric float aims to follow a constant-pressure surface by adjusting the ballast's weight to attain neutral buoyancy at a certain depth. It is the most easily achieved model. To achieve an isobaric float, its compressibility must be much lower than that of seawater. In that case, if the float were moved upwards from equilibrium, it would expand less than the surrounding seawater, leading to a restoring force pushing it downwards, back to its equilibrium position. Once correctly balanced, the float will remain at a constant pressure level.
The isopycnal model
The aim of an isopycnal float is to follow surfaces of constant density, that is, to attain neutral buoyancy on a constant-density surface. To achieve this, it is necessary to remove pressure-induced restoring forces, so the float has to have the same compressibility as the surrounding seawater. This is often achieved with a compressible element, such as a piston in a cylinder, so that the CPU can change the volume according to changes in pressure. An error of about 10% in the setting can lead to a 50 m depth difference once in the water. This is why floats are ballasted in tanks working at high pressure.
Measures and projects
Computing the float's trajectory
Once the float's mission is over and the data collected by the satellites, one major step is to compute the float's route over time. This is done by looking at the travel time of the signals from the moored speakers to the float, computed from the emission time (known accurately) and the reception time (known from the float's clock and corrected for any clock drift). Then, because the speed of sound in the sea is known to within 0.3%, the position of the float can be determined to about 1 km by an iterative circular tracking procedure. The Doppler effect can also be taken into account. Since the float's speed is not known, a first closing speed is determined by measuring the shift in arrival time between two transmissions, during which the float is assumed not to have moved.
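The circular tracking procedure can be sketched as a small least-squares problem: given the positions of the moored sound sources, the measured one-way travel times and an assumed sound speed, find the float position whose predicted travel times best match the measurements. All numbers below (source coordinates, sound speed, float position) are illustrative assumptions, not data from an actual deployment.

```python
import numpy as np
from scipy.optimize import least_squares

C = 1500.0  # assumed speed of sound in the SOFAR channel [m/s]

# Illustrative positions of three moored sound sources on a local x-y grid [m]
sources = np.array([[0.0, 0.0],
                    [200_000.0, 0.0],
                    [0.0, 300_000.0]])

# Travel times [s] that a float at (80 km, 120 km) would observe
true_position = np.array([80_000.0, 120_000.0])
travel_times = np.linalg.norm(sources - true_position, axis=1) / C

def residuals(position):
    """Predicted minus measured travel times for a candidate float position."""
    return np.linalg.norm(sources - position, axis=1) / C - travel_times

# Start the iteration from the float's previous (approximate) position.
fit = least_squares(residuals, x0=np.array([60_000.0, 100_000.0]))
print(fit.x)  # recovers approximately (80000, 120000)
```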
The Argo project
The Argo project is an international collaboration between 50 research and operational agencies from 26 countries that aims to measure a global array of temperature, salinity and pressure of the top 2000m of the ocean. It uses over 3000 floats, some of which use RAFOS for underwater geolocation; most simply use the Global Positioning System (GPS) to obtain a position when surfacing every 10 days.
This project has greatly contributed to the scientific community and has produced a large body of data that has since been used for mapping ocean parameters and for global change analysis.
Other results
Many results have been achieved with these floats, from the global mapping of ocean characteristics to, for example, the observation that floats systematically shoal (upwell) as they approach anticyclonic meanders and deepen (downwell) as they approach cyclonic meanders. Today, such floats remain the best way to systematically probe the ocean's interior, since they are automatic and self-sufficient. In recent developments, floats have been able to measure the concentrations of several dissolved gases, and even to carry out small experiments in situ.
See also
Argo (oceanography)
SOFAR channel
Ocean acoustic tomography
References
External links
RAFOS Float – Ocean Instruments
http://www.beyonddiscovery.org/content/view.page.asp?I=224
https://web.archive.org/web/20110205111415/http://www.beyonddiscovery.org/content/view.article.asp?a=219
http://www.dosits.org/people/researchphysics/measurecurrents/
http://www.whoi.edu/instruments/viewInstrument.do?id=1061
http://www.argo.ucsd.edu/index.html
Oceanography | RAFOS float | [
"Physics",
"Environmental_science"
] | 2,132 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
22,813,475 | https://en.wikipedia.org/wiki/Software%20defect%20indicator | A Software Defect Indicator is a pattern that can be found in source code that is strongly correlated with a software defect, an error or omission in the source code of a computer program that may cause it to malfunction. When inspecting the source code of computer programs, it is not always possible to identify defects directly, but there are often patterns, sometimes called anti-patterns, indicating that defects are present.
Some examples of Software Defect Indicators:
Disabled Code: Code has been written and the programmer has disabled it, or switched it off, without making it clear why it has been disabled, or when or whether it will be re-enabled.
Routine Too Complex: A program (method, module, routine, subroutine, procedure, or any named block of code) contains more than 10 binary terms in conditional statements.
Unused Variables: Unreferenced variables are a strong indicator for other errors.
Number of Distinct Committers: The number of unique developers who have contributed to a project's commit history. This is a process metric that is useful in indicating software defects.
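As an illustration of how indicators like these can be detected mechanically, the sketch below scans Python source with the standard ast module and flags two of the patterns above: conditionals with more than 10 boolean terms, and local variables that are assigned but never read. The threshold and the heuristics are deliberate simplifications for illustration.

```python
import ast

COMPLEXITY_LIMIT = 10  # maximum boolean terms allowed in one conditional

def boolean_terms(node):
    """Count the leaf terms of a (possibly nested) boolean expression."""
    if isinstance(node, ast.BoolOp):
        return sum(boolean_terms(value) for value in node.values)
    return 1

def find_indicators(source):
    findings = []
    for node in ast.walk(ast.parse(source)):
        # "Routine Too Complex": conditionals with too many boolean terms
        if isinstance(node, (ast.If, ast.While)):
            terms = boolean_terms(node.test)
            if terms > COMPLEXITY_LIMIT:
                findings.append(f"line {node.lineno}: conditional with {terms} terms")
        # "Unused Variables": names assigned in a function but never read
        if isinstance(node, ast.FunctionDef):
            stored = {n.id for n in ast.walk(node)
                      if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
            loaded = {n.id for n in ast.walk(node)
                      if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
            for name in sorted(stored - loaded):
                findings.append(f"function {node.name}: variable '{name}' is never read")
    return findings

print(find_indicators("def f(x):\n    unused = 1\n    return x"))
```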
See also
Cyclomatic complexity
Anti-pattern
Computer program
Computer programming
Control flow
Software engineering
References
External links
NIST Special Publication 500-235 Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric
Software metrics | Software defect indicator | [
"Mathematics",
"Engineering"
] | 265 | [
"Software engineering",
"Quantity",
"Metrics",
"Software metrics"
] |
22,813,858 | https://en.wikipedia.org/wiki/Lactivicin | Lactivicin is a non-beta-lactam antibiotic that is active against a range of Gram-positive and Gram-negative bacteria. Lactivicin demonstrates a similar affinity for penicillin-binding proteins to beta-lactam antibiotics and is also susceptible to beta-lactamase enzymes.
References
External links
Antibiotics
Acetamides
Lactones
Carboxylic acids
Oxazolidinones | Lactivicin | [
"Chemistry",
"Biology"
] | 84 | [
"Biotechnology products",
"Carboxylic acids",
"Functional groups",
"Antibiotics",
"Biocides"
] |
22,814,213 | https://en.wikipedia.org/wiki/Trimethylsilylpropanoic%20acid | Trimethylsilylpropanoic acid (TMSP or TSP) is a chemical compound containing a trimethylsilyl group. It is used as internal reference in nuclear magnetic resonance for aqueous solvents (e.g. D2O). For that use it is often deuterated (3-(trimethylsilyl)-2,2,3,3-tetradeuteropropionic acid or TMSP-d4). Other internal references that are frequently used in NMR experiments are DSS and tetramethylsilane.
References
Propionic acids
Trimethylsilyl compounds
Nuclear magnetic resonance | Trimethylsilylpropanoic acid | [
"Physics",
"Chemistry"
] | 140 | [
"Nuclear magnetic resonance",
"Functional groups",
"Trimethylsilyl compounds",
"Organic compounds",
"Nuclear chemistry stubs",
"Nuclear magnetic resonance stubs",
"Nuclear physics",
"Organic compound stubs",
"Organic chemistry stubs"
] |
22,819,055 | https://en.wikipedia.org/wiki/Acaulosporaceae | The Acaulosporaceae are a family of fungi in the order Diversisporales. Species in this family are widespread in distribution, and form arbuscular mycorrhiza and vesicles in roots. The family contains two genera and 31 species.
References
Diversisporales
Fungus families | Acaulosporaceae | [
"Biology"
] | 62 | [
"Fungus stubs",
"Fungi"
] |
22,821,969 | https://en.wikipedia.org/wiki/Spherical%20code | In geometry and coding theory, a spherical code with parameters (n,N,t) is a set of N points on the unit hypersphere in n dimensions for which the dot product of unit vectors from the origin to any two points is less than or equal to t. The kissing number problem may be stated as the problem of finding the maximal N for a given n for which a spherical code with parameters (n,N,1/2) exists. The Tammes problem may be stated as the problem of finding a spherical code with minimal t for given n and N.
External links
A library of putatively optimal spherical codes
Coding theory | Spherical code | [
"Mathematics"
] | 131 | [
"Discrete mathematics",
"Coding theory",
"Geometry",
"Geometry stubs"
] |
28,891,245 | https://en.wikipedia.org/wiki/Stoner%E2%80%93Wohlfarth%20model | In electromagnetism, the Stoner–Wohlfarth model is a widely used model for the magnetization of ferromagnets with a single-domain. It is a simple example of magnetic hysteresis and is useful for modeling small magnetic particles in magnetic storage, biomagnetism, rock magnetism and paleomagnetism.
History
The Stoner–Wohlfarth model was developed by Edmund Clifton Stoner and Erich Peter Wohlfarth and published in 1948. It included a numerical calculation of the integrated response of randomly oriented magnets. Since this was done before computers were widely available, they resorted to trigonometric tables and hand calculations.
Description
In the Stoner–Wohlfarth model, the magnetization does not vary within the ferromagnet and it is represented by a vector . This vector rotates as the magnetic field changes. The magnetic field is only varied along a single axis; its scalar value is positive in one direction and negative in the opposite direction. The ferromagnet is assumed to have a uniaxial magnetic anisotropy with anisotropy parameter . As the magnetic field varies, the magnetization is restricted to the plane containing the magnetic field direction and the easy axis. It can therefore be represented by a single angle , the angle between the magnetization and the field (Figure 1). Also specified is the angle between the field and the easy axis.
Equations
The energy of the system is
where is the volume of the magnet, is the saturation magnetization, and is the vacuum permeability. The first term is the magnetic anisotropy and the second the energy of coupling with the applied field (often called the Zeeman energy).
Stoner and Wohlfarth normalized this equation:
where $h = \mu_{0} M_{s} H / (2 K_{u})$ is the reduced field.
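Written out explicitly, with $\psi$ denoting the angle between the applied field and the easy axis, $\theta$ the angle between the magnetization and the field, and $K_{u}$ the uniaxial anisotropy parameter (a reconstruction in conventional symbols, not a quotation of the original formulas), the energy and its normalized form are

$$E = K_{u} V \sin^{2}(\psi - \theta) - \mu_{0} M_{s} V H \cos\theta,$$

$$\eta = \frac{E}{2 K_{u} V} = \tfrac{1}{4}\bigl(1 - \cos 2(\psi - \theta)\bigr) - h \cos\theta.$$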
A given magnetization direction is in mechanical equilibrium if the forces on it are zero. This occurs when the first derivative of the energy with respect to the magnetization direction is zero: $\partial\eta/\partial\theta = 0$.
This direction is stable against perturbations when it is at an energy minimum, having a positive second derivative: $\partial^{2}\eta/\partial\theta^{2} > 0$.
In zero field the magnetic anisotropy term is minimized when the magnetization is aligned with the easy axis. In a large field, the magnetization is pointed towards the field.
Hysteresis loops
For each angle between easy axis and field, equation () has a solution that consists of two solution curves. It is trivial to solve for these curves by varying and solving for . There is one curve for between and and another for between and ; the solutions at and correspond to .
Since the magnetization in the direction of the field is , these curves are usually plotted in the normalized form vs. , where is the component of magnetization in the direction of the field. An example is shown in Figure 2. The solid red and blue curves connect stable magnetization directions. For fields , the two curves overlap and there are two stable directions. This is the region where hysteresis occurs. Three energy profiles are included (insets). The red and blue stars are the stable magnetization directions, corresponding to energy minima. Where the vertical dashed lines intersect the red and blue dashed lines, the magnetization directions are energy maxima and determine the energy barriers between states.
In an ordinary magnetic hysteresis measurement, starts at a large positive value and is decreased to a large negative value. The magnetization direction starts on the blue curve. At the red curve appears, but for the blue state has a lower energy because it is closer to the direction of the magnetic field. When the field becomes negative, the red state has the lower energy, but the magnetization cannot immediately jump to this new direction because there is an energy barrier in between (see the insets). At , however, the energy barrier disappears, and in more negative fields the blue state no longer exists. It must therefore jump to the red state. After this jump, the magnetization remains on the red curve until the field increases past , where it jumps to the blue curve. Usually only the hysteresis loop is plotted; the energy maxima are only of interest if the effect of thermal fluctuations is calculated.
The Stoner–Wohlfarth model is a classic example of magnetic hysteresis. The loop is symmetric (by a rotation) about the origin and jumps occur at , where is known as the switching field. All the hysteresis occurs at .
Dependence on field direction
The shape of the hysteresis loop has a strong dependence on the angle between the magnetic field and the easy axis (Figure 3). If the two are parallel (), the hysteresis loop is at its biggest (with in normalized units). The magnetization starts parallel to the field and does not rotate until it becomes unstable and jumps to the opposite direction. In general, the larger the angle, the more reversible rotation occurs. At the other extreme of , with the field perpendicular to the easy axis, no jump occurs. The magnetization rotates continuously from one direction to the other (it has two choices of rotation direction, though).
For a given angle $\psi$ between the field and the easy axis, the switching field is the point where the solution switches from an energy minimum to an energy maximum. Thus, it can be calculated directly by solving the equilibrium condition $\partial\eta/\partial\theta = 0$ together with $\partial^{2}\eta/\partial\theta^{2} = 0$. In normalized units, the solution is

$$h_{sw} = \frac{1}{\left(\cos^{2/3}\psi + \sin^{2/3}\psi\right)^{3/2}}.$$
An alternative way of representing the switching field solution is to divide the applied field vector into a component $h_{\parallel}$ parallel to the easy axis and a component $h_{\perp}$ perpendicular to it. Then the switching condition becomes

$$h_{\parallel}^{2/3} + h_{\perp}^{2/3} = 1.$$
If the components are plotted against each other, the result is a Stoner–Wohlfarth astroid. A magnetic hysteresis loop can be calculated by applying a geometric construction to this astroid.
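A minimal numerical sketch of how such a hysteresis loop can be generated: sweep the reduced field h down and back up, and at each step keep the magnetization in the local energy minimum reached from the previous orientation, which reproduces the irreversible jumps at the switching field. The 45° field angle and the step sizes are arbitrary illustrative choices, and the energy expression follows the conventional form given above.

```python
import math

PSI = math.radians(45)   # angle between the applied field and the easy axis

def energy(theta, h):
    """Reduced Stoner-Wohlfarth energy: uniaxial anisotropy plus Zeeman term."""
    return 0.25 * (1 - math.cos(2 * (PSI - theta))) - h * math.cos(theta)

def relax(theta, h, step=0.01, iters=5000):
    """Track the local energy minimum by simple gradient descent in theta."""
    for _ in range(iters):
        grad = (energy(theta + 1e-6, h) - energy(theta - 1e-6, h)) / 2e-6
        theta -= step * grad
    return theta

# Sweep the reduced field down and back up, keeping the magnetization in the
# minimum reached from the previous field value; this produces the hysteresis.
fields = [1.5 - 0.01 * i for i in range(301)] + [-1.5 + 0.01 * i for i in range(301)]
theta, loop = 0.0, []
for h in fields:
    theta = relax(theta, h)
    loop.append((h, math.cos(theta)))   # m_H: magnetization component along H

# For PSI = 45 degrees the irreversible jumps occur near |h| = 0.5.
```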
Predictions for homogeneous, isotropic systems
Hysteresis
Stoner and Wohlfarth calculated the main hysteresis loop for an isotropic system of randomly oriented, identical particles. The result of the calculation is reproduced in Figure 4. Irreversible change (single arrow) occurs for , reversible change (double arrows) elsewhere. The normalized saturation remanence and coercivity are indicated on the figure. The curve in the center is the initial magnetization curve. This simulates the behavior of the sample if it is demagnetized before applying a field. The demagnetization is assumed to leave each particle with an equal probability of being magnetized in either of the two directions parallel to the easy axis. Thus, it is an average of the upper and lower branches of the main loop.
Isothermal remanence
Some remanence calculations for randomly oriented, identical particles are shown in Figure 5. Isothermal remanent magnetization (IRM) is acquired after demagnetizing the sample and then applying a field. The curve shows the normalized remanence as a function of the field. No change occurs until because all the switching fields are larger than . Up to this field, changes in magnetization are reversible. The magnetization reaches saturation at , the largest switching field.
The other two types of remanence involve demagnetizing a saturation isothermal remanence (SIRM), so in normalized units they start at . Again, nothing happens to the remanence until the field reaches . The field at which reaches zero is called the coercivity of remanence.
Some magnetic hysteresis parameters predicted by this calculation are shown in the adjacent table. The normalized quantities used in the above equations have been expressed in terms of the normal measured quantities. The parameter is the coercivity of remanence and is the initial susceptibility (the magnetic susceptibility of a demagnetized sample).
More general systems
The above calculations are for identical particles. In a real sample the magnetic anisotropy parameter will be different for each particle. This does not change the ratio of saturation remanence to saturation magnetization, but it does change the overall shape of the loop. A parameter that is often used to characterize the shape of the loop is the ratio of the coercivity of remanence to the coercivity, which is 1.09 for a sample with identical particles and larger if they are not identical. Plots of the remanence ratio against the coercivity ratio are widely used in rock magnetism as a measure of the domain state (single-domain or multidomain) in magnetic minerals.
Wohlfarth relations
Wohlfarth identified relations between the remanences that hold true for any system of Stoner–Wohlfarth particles:
These Wohlfarth relations compare IRM with demagnetization of saturation remanence. Wohlfarth also described more general relations comparing the acquiring a non-saturation IRM and demagnetizing it.
The Wohlfarth relations can be represented by linear plots of one remanence against another. These Henkel plots are often used to display measured remanence curves of real samples and determine whether Stoner–Wohlfarth theory applies to them.
Extensions of the model
The Stoner–Wohlfarth model is useful in part because it is so simple, but it often falls short of representing the actual magnetic properties of a magnet. There are several ways in which it has been extended:
Generalizing the magnetic anisotropy: Hysteresis loops have been calculated for particles with pure cubic magnetocrystalline anisotropy as well as mixtures of cubic and uniaxial anisotropy.
Adding thermal fluctuations: Thermal fluctuations make jumps between stable states possible, reducing the hysteresis in the system. Pfeiffer added the effect of thermal fluctuations to the Stoner–Wohlfarth model. This makes the hysteresis dependent on the size of the magnetic particle. As the particle size (and the time between jumps) decreases, it eventually crosses over into superparamagnetism.
Adding particle interactions: Magnetostatic or exchange coupling between magnets can have a large effect on the magnetic properties. If the magnets are in a chain, they may act in unison, behaving much like Stoner–Wohlfarth particles. This effect is seen in the magnetosomes of magnetotactic bacteria. In other arrangements, the interactions may reduce the hysteresis.
Generalizing to non-uniform magnetization: This is the domain of micromagnetics.
See also
Stoner–Wohlfarth astroid
Jiles–Atherton model
Preisach model of hysteresis
Notes
References
External links
Stoner-Wohlfarth magnetization reversal app
Magnetic hysteresis
Rock magnetism | Stoner–Wohlfarth model | [
"Physics",
"Materials_science"
] | 2,167 | [
"Physical phenomena",
"Hysteresis",
"Magnetic hysteresis"
] |
28,893,371 | https://en.wikipedia.org/wiki/Parallel%20artificial%20membrane%20permeability%20assay | In medicinal chemistry, parallel artificial membrane permeability assay (PAMPA) is a method which determines the permeability of substances from a donor compartment, through a lipid-infused artificial membrane into an acceptor compartment. A multi-well microtitre plate is used for the donor and a membrane/acceptor compartment is placed on top; the whole assembly is commonly referred to as a “sandwich”. At the beginning of the test, the drug is added to the donor compartment, and the acceptor compartment is drug-free. After an incubation period which may include stirring, the sandwich is separated and the amount of drug is measured in each compartment. Mass balance allows calculation of drug that remains in the membrane.
Applications
To date, PAMPA models have been developed that exhibit a high degree of correlation with permeation across a variety of barriers, including Caco-2 cultures, the gastrointestinal tract, blood–brain barrier and skin.
The donor and/or acceptor compartments may contain solubilizing agents, or additives that bind the drugs as they permeate. To improve the in vitro - in vivo correlation and performance of the PAMPA method, the lipid, pH and chemical composition of the system is often designed with biomimetic considerations in mind.
Although active transport is not modeled by the artificial PAMPA membrane, up to 95% of known drugs are absorbed by passive transport. Some experts support a lower figure, so the amount is open to some interpretation. Microtiter plates with 96 wells can be used for the assay which increases the speed and lowers the per sample cost.
Commercialization
Since the first publication by Kansy and coworkers, several companies developed their own versions of the assay. Early models incorporated iso-pH conditions in the compartments separated by a simple lipid membrane; subsequently, commercial products were introduced which incorporated more sophisticated lipid membranes. The commercial products helped ensure that medicinal chemists across different corporate labs within a worldwide organization used the same standardized methodology, reagents and obtained equivalent system performance as demonstrated with a set of test compounds. This has proved very useful as various operational activities have been outsourced to other countries.
See also
Caco-2 cell-based permeability
Drug development
Drug discovery
Lipid bilayer
References
Further reading
Pharmacokinetics
Medicinal chemistry | Parallel artificial membrane permeability assay | [
"Chemistry",
"Biology"
] | 480 | [
"Pharmacology",
"Pharmacokinetics",
"nan",
"Medicinal chemistry",
"Biochemistry"
] |
6,757,017 | https://en.wikipedia.org/wiki/Space-based%20solar%20power | Space-based solar power (SBSP or SSP) is the concept of collecting solar power in outer space with solar power satellites (SPS) and distributing it to Earth. Its advantages include a higher collection of energy due to the lack of reflection and absorption by the atmosphere, the possibility of very little night, and a better ability to orient to face the Sun. Space-based solar power systems convert sunlight to some other form of energy (such as microwaves) which can be transmitted through the atmosphere to receivers on the Earth's surface.
Solar panels on spacecraft have been in use since 1958, when Vanguard I used them to power one of its radio transmitters; however, the term (and acronyms) above are generally used in the context of large-scale transmission of energy for use on Earth.
Various SBSP proposals have been researched since the early 1970s, but none is economically viable with the space launch costs. Some technologists propose lowering launch costs with space manufacturing or with radical new space launch technologies other than rocketry.
Besides cost, SBSP also introduces several technological hurdles, including the problem of transmitting energy from orbit. Since wires extending from Earth's surface to an orbiting satellite are not feasible with current technology, SBSP designs generally include the wireless power transmission with its associated conversion inefficiencies, as well as land use concerns for antenna stations to receive the energy at Earth's surface. The collecting satellite would convert solar energy into electrical energy, power a microwave transmitter or laser emitter, and transmit this energy to a collector (or microwave rectenna) on Earth's surface. Contrary to appearances in fiction, most designs propose beam energy densities that are not harmful if human beings were to be inadvertently exposed, such as if a transmitting satellite's beam were to wander off-course. But the necessarily vast size of the receiving antennas would still require large blocks of land near the end users. The service life of space-based collectors in the face of long-term exposure to the space environment, including degradation from radiation and micrometeoroid damage, could also become a concern for SBSP.
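The need for very large receiving antennas follows from simple diffraction: a microwave beam sent from geostationary orbit spreads roughly in proportion to wavelength times distance divided by the transmitter aperture. The sketch below estimates the main-lobe spot size for one commonly discussed set of numbers (2.45 GHz, a 1 km transmitting array, GEO altitude); these particular values are illustrative assumptions rather than figures from the text.

```python
import math

c = 3.0e8                  # speed of light [m/s]
f = 2.45e9                 # microwave frequency often discussed for SBSP [Hz]
wavelength = c / f         # ~0.122 m

d_tx = 1_000.0             # assumed diameter of the transmitting array [m]
r = 35_786_000.0           # distance from GEO to the point below the satellite [m]

# Half-angle of the main lobe for a uniformly illuminated circular aperture
theta = 1.22 * wavelength / d_tx
spot_diameter = 2 * r * math.tan(theta)

print(f"{spot_diameter / 1000:.1f} km")   # on the order of 10 km across
```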
As of 2020, SBSP is being actively pursued by Japan, China, Russia, India, the United Kingdom, and the US.
In 2008, Japan passed its Basic Space Law which established space solar power as a national goal. JAXA has a roadmap to commercial SBSP.
In 2015, the China Academy for Space Technology (CAST) showcased its roadmap at the International Space Development Conference. In February 2019, Science and Technology Daily (科技日报, Keji Ribao), the official newspaper of the Ministry of Science and Technology of the People's Republic of China, reported that construction of a testing base had started in Chongqing's Bishan District. CAST vice-president Li Ming was quoted as saying China expects to be the first nation to build a working space solar power station with practical value. Chinese scientists were reported as planning to launch several small- and medium-sized space power stations between 2021 and 2025. In December 2019, Xinhua News Agency reported that China plans to launch a 200-tonne SBSP station capable of generating megawatts (MW) of electricity to Earth by 2035.
In May 2020, the US Naval Research Laboratory conducted its first test of solar power generation in a satellite. In August 2021, the California Institute of Technology (Caltech) announced that it planned to launch a SBSP test array by 2023, and at the same time revealed that Donald Bren and his wife Brigitte, both Caltech trustees, had been since 2013 funding the institute's Space-based Solar Power Project, donating over $100 million. A Caltech team successfully demonstrated beaming power to earth in 2023.
History
In 1941, science fiction writer Isaac Asimov published the science fiction short story "Reason", in which a space station transmits energy collected from the Sun to various planets using microwave beams. The SBSP concept, originally known as satellite solar-power system (SSPS), was first described in November 1968. In 1973 Peter Glaser was granted U.S. patent number 3,781,647 for his method of transmitting power over long distances (e.g. from an SPS to Earth's surface) using microwaves from a very large antenna (up to one square kilometer) on the satellite to a much larger one, now known as a rectenna, on the ground.
Glaser then was a vice president at Arthur D. Little, Inc. NASA signed a contract with ADL to lead four other companies in a broader study in 1974. They found that, while the concept had several major problems – chiefly the expense of putting the required materials in orbit and the lack of experience on projects of this scale in space – it showed enough promise to merit further investigation and research.
Concept development and evaluation
Between 1978 and 1986, the Congress authorized the Department of Energy (DoE) and NASA to jointly investigate the concept. They organized the Satellite Power System Concept Development and Evaluation Program. The study remains the most extensive performed to date (budget $50 million). Several reports were published investigating the engineering feasibility of such a project. They include:
Resource Requirements (Critical Materials, Energy, and Land)
Financial/Management Scenarios
Public Acceptance
State and Local Regulations as Applied to Satellite Power System Microwave Receiving Antenna Facilities
Student Participation
Potential of Laser for SBSP Power Transmission
International Agreements
Centralization/Decentralization
Mapping of Exclusion Areas For Rectenna Sites
Economic and Demographic Issues Related to Deployment
Some Questions and Answers
Meteorological Effects on Laser Beam Propagation and Direct Solar Pumped Lasers
Public Outreach Experiment
Power Transmission and Reception Technical Summary and Assessment
Space Transportation
Discontinuation
The project was not continued with the change in administrations after the 1980 United States elections. The Office of Technology Assessment concluded that "Too little is currently known about the technical, economic, and environmental aspects of SPS to make a sound decision whether to proceed with its development and deployment. In addition, without further research an SPS demonstration or systems-engineering verification program would be a high-risk venture."
In 1997, NASA conducted its "Fresh Look" study to examine the modern state of SBSP feasibility. In assessing "What has changed" since the DOE study, NASA asserted that the "US National Space Policy now calls for NASA to make significant investments in technology (not a particular vehicle) to drive the costs of ETO [Earth to Orbit] transportation down dramatically. This is, of course, an absolute requirement of space solar power."
Conversely, Pete Worden of NASA claimed that space-based solar is about five orders of magnitude more expensive than solar power from the Arizona desert, with a major cost being the transportation of materials to orbit. Worden referred to possible solutions as speculative and not available for decades at the earliest.
On November 2, 2012, China proposed a space collaboration with India that mentioned SBSP, "may be Space-based Solar Power initiative so that both India and China can work for long term association with proper funding along with other willing space faring nations to bring space solar power to earth."
Exploratory Research and Technology program
In 1999, NASA initiated its Space Solar Power Exploratory Research and Technology program (SERT) for the following purposes:
Perform design studies of selected flight demonstration concepts.
Evaluate studies of the general feasibility, design, and requirements.
Create conceptual designs of subsystems that make use of advanced SSP technologies to benefit future space or terrestrial applications.
Formulate a preliminary plan of action for the U.S. (working with international partners) to undertake an aggressive technology initiative.
Construct technology development and demonstration roadmaps for critical space solar power (SSP) elements.
SERT went about developing a solar power satellite (SPS) concept for a future gigawatt space power system, to provide electrical power by converting the Sun's energy and beaming it to Earth's surface, and provided a conceptual development path that would utilize current technologies. SERT proposed an inflatable photovoltaic gossamer structure with concentrator lenses or solar heat engines to convert sunlight into electricity. The program looked both at systems in Sun-synchronous orbit and geosynchronous orbit. Some of SERT's conclusions:
The increasing global energy demand is likely to continue for many decades resulting in new power plants of all sizes being built.
The environmental impact of those plants and their impact on world energy supplies and geopolitical relationships can be problematic.
Renewable energy is a compelling approach, both philosophically and in engineering terms.
Many renewable energy sources are limited in their ability to affordably provide the base load power required for global industrial development and prosperity, because of inherent land and water requirements.
Based on their Concept Definition Study, space solar power concepts may be ready to reenter the discussion.
Solar power satellites should no longer be envisioned as requiring unimaginably large initial investments in fixed infrastructure before the emplacement of productive power plants can begin.
Space solar power systems appear to possess many significant environmental advantages when compared to alternative approaches.
The economic viability of space solar power systems depends on many factors and the successful development of various new technologies (not least of which is the availability of much lower cost access to space than has been available); however, the same can be said of many other advanced power technology options.
Space solar power may well emerge as a serious candidate among the options for meeting the energy demands of the 21st century.
Launch costs in the range of $100–$200 per kilogram of payload from low Earth orbit to Geosynchronous orbit are needed if SPS is to be economically viable.
Japan Aerospace Exploration Agency
The May 2014 IEEE Spectrum magazine carried a lengthy article "It's Always Sunny in Space" by Susumu Sasaki. The article stated, "It's been the subject of many previous studies and the stuff of sci-fi for decades, but space-based solar power could at last become a reality—and within 25 years, according to a proposal from researchers at the Tokyo-based Japan Aerospace Exploration Agency (JAXA)."
JAXA announced on 12 March 2015 that it had wirelessly beamed 1.8 kilowatts over 50 meters to a small receiver by converting electricity to microwaves and then back to electricity, the standard approach planned for this type of power transmission. On the same day, Mitsubishi Heavy Industries demonstrated transmission of 10 kilowatts (kW) of power to a receiver unit located 500 meters (m) away.
Advantages and disadvantages
Advantages
The SBSP concept is attractive because space has several major advantages over the Earth's surface for the collection of solar power:
In space it is always solar noon, and the collectors are in full sun.
Collecting surfaces could receive much more intense sunlight, owing to the lack of obstructions such as atmospheric gases, clouds, dust and other weather events. Consequently, the intensity in orbit is approximately 144% of the maximum attainable intensity on Earth's surface.
A satellite could be illuminated over 99% of the time and be in Earth's shadow for a maximum of only 72 minutes per night, around local midnight at the spring and fall equinoxes. Orbiting satellites can be exposed to a consistently high degree of solar radiation, generally for 24 hours per day, whereas Earth-surface solar panels currently collect power for an average of 29% of the day.
Power could be redirected relatively quickly to the areas that need it most. A collecting satellite could possibly direct power on demand to different surface locations based on geographical baseload or peak-load power needs.
Reduced plant and wildlife interference.
SBSP does not emit greenhouse gases, unlike oil, gas, ethanol, and coal plants. Space-based solar power also does not depend on or compete with scarce fresh water resources, unlike coal and nuclear plants.
SBSP generates forty times more energy than terrestrial solar panels and introduces almost no hazardous waste into the environment. It also allows electricity to be generated continuously, twenty-four hours a day, ninety-nine percent of the year.
If the clean energy provided by space-based solar power accounted for just five percent of national energy consumption, the national carbon footprint would be significantly reduced.
Disadvantages
The SBSP concept also has a number of problems:
The large cost of launching a satellite into space. At a specific mass of 6.5 kg/kW, the cost of placing a power satellite in geosynchronous orbit (GEO) cannot exceed $200/kg if the delivered power is to be cost-competitive.
Microwave optics requires gigawatt scale to compensate for Airy disk beam spreading. Typically, a 1 km transmitting disk in geosynchronous orbit operating at 2.45 GHz spreads out to about 10 km at Earth distance.
Inability to constrain power transmission inside tiny beam angles. For example, a beam of 0.002 degrees (7.2 arc seconds) is required to stay within a one kilometer receiving antenna target from geostationary altitude. The most advanced directional wireless power transfer systems as of 2019 spread their half power beam width across at least 0.9 arc degrees.
Inaccessibility: Maintenance of an earth-based solar panel is relatively simple, but construction and maintenance on a solar panel in space would typically be done telerobotically. In addition to cost, astronauts working in GEO are exposed to unacceptably high radiation dangers, and such work would risk and cost about one thousand times more than the same task done telerobotically.
The space environment is hostile; PV panels (if used) suffer about eight times the degradation they would on Earth (except at orbits that are protected by the magnetosphere).
Space debris is a major hazard to large objects in space, particularly for large structures such as SBSP systems in transit through the debris below 2000 km. Already in 1978, astrophysicist Donald J. Kessler warned against a self-propagating collision cascade during the assembly of the SPS modules in LEO, which is now known as Kessler syndrome. Collision risk is much reduced in GEO since all the satellites are moving in the same direction at very close to the same speed.
The broadcast frequency of the microwave downlink (if used) would require isolating the SBSP systems away from other satellites. GEO space is already well used and would require coordinating with the ITU-R.
The large size and corresponding cost of the receiving station on the ground. The cost has been estimated at a billion dollars for 5 GW by SBSP researcher Keith Henson.
Energy losses during several phases of conversion, from photons to electrons, to photons, and back to electrons.
Waste heat disposal in space power systems is difficult to begin with, but becomes intractable when the entire spacecraft is designed to absorb as much solar radiation as possible. Traditional spacecraft thermal control systems such as radiative vanes may shade the solar panels or interfere with the power transmitters.
Decommissioning costs: The cost of deorbiting the satellites at the end of their service life, to prevent them from exacerbating the orbital space debris problem due to impacts with asteroidal, cometary, and planetary debris, is likely to be significant. While the future cost of imparting Delta-V is difficult to estimate, the amount of Delta-V that must be imparted to transfer a satellite from GEO to GTO is 1,472 m/s. If, upon reentry, the disintegrating satellite would release hazardous chemicals into the Earth's atmosphere, then the additional expenses of disassembling the satellite and deorbiting the environmentally hazardous components within a space vehicle with downmass capabilities must be factored into the decommissioning costs.
Since these systems would be in space, they could not be maintained hands-on. Researchers would need to devise ways to maintain these systems autonomously, which could create additional technical issues.
Research has also shown that an increase in the number of satellites increases orbital congestion and could ultimately generate pieces of orbital debris, a conclusion drawn in part from a test in which China destroyed one of its own satellites.
Design
Space-based solar power essentially consists of three elements:
collecting solar energy in space with reflectors or inflatable mirrors onto solar cells or heaters for thermal systems
wireless power transmission to Earth via microwave or laser
receiving power on Earth via a rectenna, a rectifying microwave antenna
The space-based portion will not need to support itself against gravity (other than relatively weak tidal stresses). It needs no protection from terrestrial wind or weather, but will have to cope with space hazards such as micrometeors and solar flares. Two basic methods of conversion have been studied: photovoltaic (PV) and solar dynamic (SD). Most analyses of SBSP have focused on photovoltaic conversion using solar cells that directly convert sunlight into electricity. Solar dynamic uses mirrors to concentrate light on a boiler. The use of solar dynamic could reduce mass per watt. Wireless power transmission was proposed early on as a means to transfer energy from collection to the Earth's surface, using either microwave or laser radiation at a variety of frequencies.
Microwave power transmission
William C. Brown demonstrated in 1964, during Walter Cronkite's CBS News program, a microwave-powered model helicopter that received all the power it needed for flight from a microwave beam. Between 1969 and 1975, Bill Brown was technical director of a JPL Raytheon program that beamed 30 kW of power at 9.6% efficiency.
The beam does spread out due to diffraction. At 2.45 GHz, a 1 km phased-array transmitting antenna at GEO spreads to about 10 km diameter at the first zero ring. The overall transmission efficiency is close to 50%, depending on many factors.
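To see where the roughly 10 km figure comes from, the first-null (Airy) diameter of the beam from a uniformly illuminated circular aperture can be estimated as d ≈ 2.44 λ L / D. The Python sketch below is only a rough order-of-magnitude check, not a design calculation; the 35,786 km GEO altitude used as the path length, the uniform circular aperture, and the simple Airy formula are all assumptions.

```python
C = 3.0e8            # speed of light, m/s
FREQ = 2.45e9        # transmission frequency, Hz
APERTURE = 1_000.0   # transmitting antenna diameter, m
PATH = 35_786e3      # assumed GEO altitude used as the path length, m

wavelength = C / FREQ                                       # ~0.12 m at 2.45 GHz
first_null_diameter = 2.44 * wavelength * PATH / APERTURE   # Airy first-null diameter at the ground

print(f"wavelength          : {wavelength:.3f} m")
print(f"first-null diameter : {first_null_diameter / 1e3:.1f} km")   # ~10.7 km
```

The result, roughly 10.7 km, is consistent with the ~10 km first-zero diameter quoted above.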
Microwave power transmission of tens of kilowatts has been well proven by existing tests at Goldstone in California (1975) and Grand Bassin on Reunion Island (1997).
More recently, microwave power transmission has been demonstrated, in conjunction with solar energy capture, between a mountaintop in Maui and the island of Hawaii (92 miles away), by a team under John C. Mankins. Technological challenges in terms of array layout, single radiation element design, and overall efficiency, as well as the associated theoretical limits, are presently a subject of research, as demonstrated by the Special Session on "Analysis of Electromagnetic Wireless Systems for Solar Power Transmission" held during the 2010 IEEE Symposium on Antennas and Propagation. In 2013, a useful overview was published, covering technologies and issues associated with microwave power transmission from space to ground. It includes an introduction to SPS, current research and future prospects. Moreover, a review of current methodologies and technologies for the design of antenna arrays for microwave power transmission appeared in the Proceedings of the IEEE.
Laser power beaming
Laser power beaming was envisioned by some at NASA as a stepping stone to further industrialization of space. In the 1980s, researchers at NASA worked on the potential use of lasers for space-to-space power beaming, focusing primarily on the development of a solar-powered laser. In 1989, it was suggested that power could also be usefully beamed by laser from Earth to space. In 1991, the SELENE project (SpacE Laser ENErgy) had begun, which included the study of laser power beaming for supplying power to a lunar base. The SELENE program was a two-year research effort, but the cost of taking the concept to operational status was too high, and the official project ended in 1993 before reaching a space-based demonstration.
Laser Solar Satellites
Laser solar satellites are smaller, meaning that they have to work as a group with other similar satellites. They have several advantages, notably lower overall cost compared to other satellite designs, but they also raise safety and other concerns. Laser-emitting solar satellites only need to reach orbits about 400 km up, but because of their small generation capacity, hundreds or thousands of laser satellites would need to be launched to make a sustained impact. A single satellite launch can cost from fifty to four hundred million dollars. Lasers could be used to return solar energy harvested in space to Earth so that terrestrial power demands can be met.
Orbital location
The main advantage of locating a space power station in geostationary orbit is that the antenna geometry stays constant, and so keeping the antennas lined up is simpler. Another advantage is that nearly continuous power transmission is available as soon as the first space power station is placed in orbit; a LEO constellation requires several satellites before it can produce nearly continuous power.
Power beaming from geostationary orbit by microwaves carries the difficulty that the required 'optical aperture' sizes are very large. For example, the 1978 NASA SPS study required a 1 km diameter transmitting antenna and a 10 km diameter receiving rectenna for a microwave beam at 2.45 GHz. These sizes can be somewhat decreased by using shorter wavelengths, although shorter wavelengths suffer increased atmospheric absorption and even potential beam blockage by rain or water droplets. Because of the thinned array curse, it is not possible to make a narrower beam by combining the beams of several smaller satellites. The large size of the transmitting and receiving antennas means that the minimum practical power level for an SPS will necessarily be high; small SPS systems will be possible, but uneconomic.
A collection of LEO (low Earth orbit) space power stations has been proposed as a precursor to GEO (geostationary orbit) space-based solar power.
Earth-based receiver
The Earth-based rectenna would likely consist of many short dipole antennas connected via diodes. Microwave broadcasts from the satellite would be received in the dipoles with about 85% efficiency. With a conventional microwave antenna, the reception efficiency is better, but its cost and complexity are also considerably greater. Rectennas would likely be several kilometers across.
In space applications
A laser SBSP could also power a base or vehicles on the surface of the Moon or Mars, saving on mass costs to land the power source. A spacecraft or another satellite could also be powered by the same means. In a 2012 report presented to NASA on space solar power, the author notes that the technology behind space solar power could also be applied to solar electric propulsion systems for interplanetary human exploration missions.
Launch costs
One problem with the SBSP concept is the cost of space launches and the amount of material that would need to be launched.
Much of the material launched need not be delivered to its eventual orbit immediately, which raises the possibility that high efficiency (but slower) engines could move SPS material from LEO to GEO at an acceptable cost. Examples include ion thrusters or nuclear propulsion. Infrastructure including solar panels, power converters, and power transmitters will have to be built in order to begin the process. This will be extremely expensive and maintaining them will cost even more.
To give an idea of the scale of the problem, assuming a solar panel mass of 20 kg per kilowatt (without considering the mass of the supporting structure, antenna, or any significant mass reduction of any focusing mirrors) a 4 GW power station would weigh about 80,000 metric tons, all of which would, in current circumstances, be launched from the Earth. This is, however, far from the state of the art for flown spacecraft, which as of 2015 was 150 W/kg (6.7 kg/kW), and improving rapidly. Very lightweight designs could likely achieve 1 kg/kW, meaning 4,000 metric tons for the solar panels for the same 4 GW capacity station. Beyond the mass of the panels, overhead (including boosting to the desired orbit and stationkeeping) must be added.
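As a rough illustration of how the specific mass of the power system drives launched tonnage, the figures in the preceding paragraph follow from a one-line relation, mass = capacity × specific mass. The sketch below only reproduces that arithmetic; the 4 GW capacity and the specific-mass values are the ones quoted above, and structural overhead is ignored.

```python
CAPACITY_KW = 4_000_000.0   # a 4 GW station expressed in kilowatts

def panel_mass_tonnes(specific_mass_kg_per_kw: float) -> float:
    """Solar panel mass in metric tons for a given specific mass (kg per kW)."""
    return CAPACITY_KW * specific_mass_kg_per_kw / 1_000.0

print(panel_mass_tonnes(20.0))   # 80,000 t at 20 kg/kW
print(panel_mass_tonnes(1.0))    # 4,000 t at a very lightweight 1 kg/kW
print(1_000.0 / 150.0)           # ~6.7 kg/kW, i.e. the 150 W/kg state of the art
```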
To these costs must be added the environmental impact of heavy space launch missions, if such costs are to be used in comparison to earth-based energy production. For comparison, the direct cost of a new coal or nuclear power plant ranges from $3 billion to $6 billion per GW (not including the full cost to the environment from emissions or storage of spent nuclear fuel, respectively).
Building from space
From lunar materials launched in orbit
Gerard O'Neill, noting the problem of high launch costs in the early 1970s, proposed building the SPS's in orbit with materials from the Moon. Launch costs from the Moon are potentially much lower than from Earth because of the lower gravity and lack of atmospheric drag. This 1970s proposal assumed the then-advertised future launch costs of NASA's space shuttle. This approach would require substantial upfront capital investment to establish mass drivers on the Moon. Nevertheless, on 30 April 1979, the Final Report ("Lunar Resources Utilization for Space Construction") by General Dynamics' Convair Division, under NASA contract NAS9-15560, concluded that use of lunar resources would be cheaper than Earth-based materials for a system of as few as thirty solar power satellites of 10 GW capacity each.
In 1980, when it became obvious NASA's launch cost estimates for the space shuttle were grossly optimistic, O'Neill et al. published another route to manufacturing using lunar materials with much lower startup costs. This 1980s SPS concept relied less on human presence in space and more on partially self-replicating systems on the lunar surface under remote control of workers stationed on Earth. The high net energy gain of this proposal derives from the Moon's much shallower gravitational well.
Having a relatively cheap per pound source of raw materials from space would lessen the concern for low mass designs and result in a different sort of SPS being built. The low cost per pound of lunar materials in O'Neill's vision would be supported by using lunar material to manufacture more facilities in orbit than just solar power satellites. Advanced techniques for launching from the Moon may reduce the cost of building a solar power satellite from lunar materials. Some proposed techniques include the lunar mass driver and the lunar space elevator, first described by Jerome Pearson. It would require establishing silicon mining and solar cell manufacturing facilities on the Moon.
On the Moon
Physicist Dr David Criswell suggests the Moon is the optimum location for solar power stations, and promotes lunar-based solar power. The main advantage he envisions is construction largely from locally available lunar materials, using in-situ resource utilization, with a teleoperated mobile factory and crane to assemble the microwave reflectors, and rovers to assemble and pave solar cells, which would significantly reduce launch costs compared to SBSP designs. Power relay satellites orbiting around Earth and the Moon reflecting the microwave beam are also part of the project. A demo project of 1 GW would start at $50 billion. The Shimizu Corporation uses a combination of lasers and microwaves for its Luna Ring concept, along with power relay satellites.
From an asteroid
Asteroid mining has also been seriously considered. A NASA design study evaluated a 10,000-ton mining vehicle (to be assembled in orbit) that would return a 500,000-ton asteroid fragment to geostationary orbit. Only about 3,000 tons of the mining ship would be traditional aerospace-grade payload. The rest would be reaction mass for the mass-driver engine, which could be arranged to be the spent rocket stages used to launch the payload. Assuming that 100% of the returned asteroid was useful, and that the asteroid miner itself couldn't be reused, that represents nearly a 95% reduction in launch costs. However, the true merits of such a method would depend on a thorough mineral survey of the candidate asteroids; thus far, only estimates of their composition are available. One proposal is to capture the asteroid Apophis into Earth orbit and convert it into 150 solar power satellites of 5 GW each, or the larger asteroid 1999 AN10, which is 50 times the size of Apophis and large enough to build 7,500 5-gigawatt solar power satellites.
Safety
The potential exposure of humans and animals on the ground to the high power microwave beams is a significant concern with these systems. At the Earth's surface, a suggested SPS microwave beam would have a maximum intensity, at its center, of 23 mW/cm2. While this is less than 1/4 of the solar irradiation constant, microwaves penetrate much deeper into tissue than sunlight, and at this level the beam would exceed the current United States Occupational Safety and Health Act (OSHA) workplace exposure limit for microwaves of 10 mW/cm2. At 23 mW/cm2, studies show humans experience significant deficits in spatial learning and memory. If the diameter of the proposed SPS array is increased by 2.5x, the energy density on the ground increases to 1 W/cm2. At this level, the median lethal dose for mice is 30-60 seconds of microwave exposure. While designing an array with 2.5x larger diameter should be avoided, the dual-use military potential of such a system is readily apparent.
With good array sidelobe design, exposure outside the receiver area may be kept below the OSHA long-term levels, as over 95% of the beam energy will fall on the rectenna. However, any accidental or intentional mis-pointing of the satellite could be deadly to life on Earth within the beam.
Exposure to the beam can be minimized in various ways. On the ground, assuming the beam is pointed correctly, physical access must be controllable (e.g., via fencing). Typical aircraft flying through the beam provide passengers with a protective metal shell (i.e., a Faraday Cage), which will intercept the microwaves. Other aircraft (balloons, ultralight, etc.) can avoid exposure by using controlled airspace, as is currently done for military and other controlled airspace. In addition, a design constraint is that the microwave beam must not be so intense as to injure wildlife, particularly birds. Suggestions have been made to locate rectennas offshore, but this presents serious problems, including corrosion, mechanical stresses, and biological contamination.
A commonly proposed approach to ensuring fail-safe beam targeting is to use a retrodirective phased array antenna/rectenna. A "pilot" microwave beam emitted from the center of the rectenna on the ground establishes a phase front at the transmitting antenna. There, circuits in each of the antenna's subarrays compare the pilot beam's phase front with an internal clock phase to control the phase of the outgoing signal. If the phase offset to the pilot is chosen the same for all elements, the transmitted beam should be centered precisely on the rectenna and have a high degree of phase uniformity; if the pilot beam is lost for any reason (if the transmitting antenna is turned away from the rectenna, for example) the phase control value fails and the microwave power beam is automatically defocused. Such a system would not focus its power beam very effectively anywhere that did not have a pilot beam transmitter. The long-term effects of beaming power through the ionosphere in the form of microwaves have yet to be studied.
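The retrodirective principle can be illustrated numerically: if each transmitting element re-emits with the conjugate of the phase it received from the pilot signal, the contributions add coherently at the pilot's location and largely cancel elsewhere, so losing the pilot defocuses the beam. The NumPy sketch below is a minimal one-dimensional free-space model; the array size, element spacing, distances, and pilot position are illustrative assumptions, and the rectenna, modulation, and atmosphere are ignored.

```python
import numpy as np

WAVELENGTH = 0.1224                    # m, roughly 2.45 GHz
K = 2.0 * np.pi / WAVELENGTH           # wavenumber

# Transmitting array: 200 elements spaced half a wavelength apart along x
elements_x = (np.arange(200) - 100) * WAVELENGTH / 2.0

def field_magnitude(target_x, target_range, tx_phases):
    """Magnitude of the summed spherical-wave contributions at one ground point."""
    r = np.hypot(elements_x - target_x, target_range)
    return abs(np.sum(np.exp(1j * (K * r + tx_phases)) / r))

pilot_x, ground_range = 30.0, 2_000.0          # pilot transmitter location (illustrative)
r_pilot = np.hypot(elements_x - pilot_x, ground_range)
conjugate_phases = -K * r_pilot                # each element re-emits the conjugated pilot phase

at_pilot  = field_magnitude(pilot_x, ground_range, conjugate_phases)
off_pilot = field_magnitude(pilot_x + 200.0, ground_range, conjugate_phases)
print(f"field at pilot is {at_pilot / off_pilot:.0f}x stronger than 200 m away")
```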
Timeline
In the 20th century
1941: Isaac Asimov published the science fiction short story "Reason," in which a space station transmits energy collected from the sun to various planets using microwave beams. "Reason" was published in the "Astounding Science Fiction" magazine.
1968: Peter Glaser introduces the concept of a "solar power satellite" system with square miles of solar collectors in high geosynchronous orbit for collection and conversion of sun's energy into a microwave beam to transmit usable energy to large receiving antennas (rectennas) on Earth for distribution.
1973: Peter Glaser is granted United States patent number 3,781,647 for his method of transmitting power over long distances using microwaves from a large (one square kilometer) antenna on the satellite to a much larger one on the ground, now known as a rectenna.
1978–1981: The United States Department of Energy and NASA examine the solar power satellite (SPS) concept extensively, publishing design and feasibility studies.
1987: Stationary High Altitude Relay Platform, a Canadian experiment
1995–1997: NASA conducts a "Fresh Look" study of space solar power (SSP) concepts and technologies.
1998: The Space Solar Power Concept Definition Study (CDS) identifies credible, commercially viable SSP concepts, while pointing out technical and programmatic risks.
1998: Japan's space agency begins developing a space solar power system (SSPS), a program that continues to the present day.
1999: NASA's Space Solar Power Exploratory Research and Technology program (SERT, see below) begins.
2000: John Mankins of NASA testifies in the U.S. House of Representatives, saying "Large-scale SSP is a very complex integrated system of systems that requires numerous significant advances in current technology and capabilities. A technology roadmap has been developed that lays out potential paths for achieving all needed advances — albeit over several decades."
In the 21st century
2001: NASDA (one of Japan's national space agencies before it became part of JAXA) announces plans to perform additional research and prototyping by launching an experimental satellite with between 10 kilowatts and 1 megawatt of power.
2003: ESA studies
2007: The US Pentagon's National Security Space Office (NSSO) issues a report on October 10, 2007 stating they intend to collect solar energy from space for use on Earth to help the United States' ongoing relationship with the Middle East and the battle for oil. A demo plant could cost $10 billion, produce 10 megawatts, and become operational in 10 years.
2007: In May 2007, a workshop is held at the US Massachusetts Institute of Technology (MIT) to review the current state of the SBSP market and technology.
2010: Professors Andrea Massa and Giorgio Franceschetti announce a special session on the "Analysis of Electromagnetic Wireless Systems for Solar Power Transmission" at the 2010 Institute of Electrical and Electronics Engineers International Symposium on Antennas and Propagation.
2010: The Indian Space Research Organisation and the US National Space Society launched a joint forum to enhance partnership in harnessing solar energy through space-based solar collectors. Called the Kalam-NSS Initiative after the former Indian President Dr APJ Abdul Kalam, the forum was to lay the groundwork for a space-based solar power program that other countries could join as well.
2010: "Sky's No Limit: Space-Based Solar Power, the Next Major Step in the Indo-US Strategic Partnership?", written by USAF Lt Col Peter Garretson, was published at the Institute for Defence Studies and Analysis.
2012: China proposed joint development between India and China towards developing a solar power satellite, during a visit by former Indian President Dr APJ Abdul Kalam.
2015: The Space Solar Power Initiative (SSPI) is established between Caltech and Northrop Grumman Corporation. An estimated $17.5 million is to be provided over a three-year project for development of a space-based solar power system.
2015: JAXA announced on 12 March 2015 that they wirelessly beamed 1.8 kilowatts 50 meters to a small receiver by converting electricity to microwaves and then back to electricity.
2016: Lt Gen. Zhang Yulin, deputy chief of the [PLA] armament development department of the Central Military Commission, suggested that China would next begin to exploit Earth-Moon space for industrial development. The goal would be the construction of space-based solar power satellites that would beam energy back to Earth.
2016: A team with membership from the Naval Research Laboratory (NRL), Defense Advanced Research Projects Agency (DARPA), Air Force Air University, Joint Staff Logistics (J-4), Department of State, Makins Aerospace and Northrop Grumman won the Secretary of Defense (SECDEF) / Secretary of State (SECSTATE) / USAID Director's agency-wide D3 (Diplomacy, Development, Defense) Innovation Challenge with a proposal that the US must lead in space solar power. The proposal was followed by a vision video.
2016: Citizens for Space-Based Solar Power has transformed the D3 proposal into active petitions on the White House website ("America Must Lead the Transition to Space-Based Energy") and Change.org ("USA Must Lead the Transition to Space-Based Energy"), along with an accompanying video.
2016: Erik Larson and others from NOAA produce a paper, "Global atmospheric response to emissions from a proposed reusable space launch system". The paper makes a case that up to 2 TW/year of power satellites could be constructed without intolerable damage to the atmosphere. Before this paper, there was concern that the emissions produced by reentry would destroy too much ozone.
2016: Ian Cash of SICA Design proposes CASSIOPeiA (Constant Aperture, Solid State, Integrated, Orbital Phased Array), a new concept SPS.
2017: NASA selects five new research proposals focused on investments in space. The Colorado School of Mines focuses on "21st Century Trends in Space-Based Solar Power Generation and Storage."
2019: Aditya Baraskar and Prof Toshiya Hanada from the Space System Dynamic Laboratory, Kyushu University, proposed Energy Orbit (E-Orbit), a small space solar power satellite constellation for power beaming between satellites in low Earth orbit. The constellation would consist of a total of 1,600 satellites transmitting 10 kilowatts of electricity within a 500 km radius at an altitude of 900 km.
2019: China creates a test base for SBSP, and announces plan to launch a working megawatt-grade 200-tonne SBSP station by 2035.
2020: US Naval Research Laboratory launches test satellite. Also the USAF has its Space Solar Power Incremental Demonstrations and Research Project (SSPIDR) planning to launch the ARACHNE test satellite. Arachne is due to launch in 2024.
2021: Caltech announces that it plans to launch an SBSP test array by 2023.
2022: The Space Energy Initiative in the UK announced plans to launch the first power station in space in the mid-2040s, to "provide 30 percent of the UK’s (greatly increased) electricity demand" and "to slash the UK’s dependence on fossil fuels" and on foreign energy ties.
2022: The European Space Agency proposed a program called SOLARIS to operate Solar Power Satellites from 2030.
2023: Caltech's Space Solar Power Demonstrator (SSPD-1) beams "detectable power" to Earth.
Non-typical configurations and architectural considerations
The typical reference system-of-systems involves a significant number (several thousand multi-gigawatt systems to service all or a significant portion of Earth's energy requirements) of individual satellites in GEO. The typical reference design for the individual satellite is in the 1-10 GW range and usually involves planar or concentrated solar photovoltaics (PV) as the energy collector / conversion. The most typical transmission designs are in the 1–10 GHz (2.45 or 5.8 GHz) RF band where there are minimum losses in the atmosphere. Materials for the satellites are sourced from, and manufactured on Earth and expected to be transported to LEO via re-usable rocket launch, and transported between LEO and GEO via chemical or electrical propulsion. In summary, the architecture choices are:
Location = GEO
Energy Collection = PV
Satellite = Monolithic Structure
Transmission = RF
Materials & Manufacturing = Earth
Installation = RLVs to LEO, Chemical to GEO
There are several interesting design variants from the reference system:
Alternate energy collection location: While GEO is most typical because of its advantages of nearness to Earth, simplified pointing and tracking, very small time in occultation, and scalability to meet all global demand several times over, other locations have been proposed:
Sun Earth L1: Robert Kennedy III, Ken Roy & David Fields have proposed a variant of the L1 sunshade called "Dyson Dots" where a multi-terawatt primary collector would beam energy back to a series of LEO sun-synchronous receiver satellites. The much farther distance to Earth requires a correspondingly larger transmission aperture.
Lunar surface: David Criswell has proposed using the Lunar surface itself as the collection medium, beaming power to the ground via a series of microwave reflectors in Earth Orbit. The chief advantage of this approach would be the ability to manufacture the solar collectors in-situ without the energy cost and complexity of launch. Disadvantages include the much longer distance, requiring larger transmission systems, the required "overbuild" to deal with the lunar night, and the difficulty of sufficient manufacturing and pointing of reflector satellites.
MEO: MEO systems have been proposed for in-space utilities and beam-power propulsion infrastructures. For example, see Royce Jones' paper.
Highly elliptical orbits: Molniya, Tundra, or Quasi-Zenith orbits have been proposed as early locations for niche markets, requiring less energy to access and providing good persistence.
Sun-sync LEO: In this near Polar Orbit, the satellites precess at a rate that allows them to always face the Sun as they rotate around Earth. This is an easy to access orbit requiring far less energy, and its proximity to Earth requires smaller (and therefore less massive) transmitting apertures. However disadvantages to this approach include having to constantly shift receiving stations, or storing energy for a burst transmission. This orbit is already crowded and has significant space debris.
Equatorial LEO: Japan's SPS 2000 proposed an early demonstrator in equatorial LEO in which multiple equatorial participating nations could receive some power.
Earth's surface: Narayan Komerath has proposed a space power grid where excess energy from an existing grid or power plant on one side of the planet can be passed up to orbit, across to another satellite and down to receivers.
Energy collection: The most typical designs for solar power satellites include photovoltaics. These may be planar (and usually passively cooled) or concentrated (and perhaps actively cooled). However, there are multiple interesting variants.
Solar thermal: Proponents of solar thermal have proposed using concentrated heating to cause a state change in a fluid to extract energy via rotating machinery, followed by cooling in radiators. Advantages of this method might include overall system mass (disputed), elimination of degradation due to solar-wind damage, and radiation tolerance. One recent thermal solar power satellite design by Keith Henson and others has been visualized as the Thermal Space Solar Power concept; a related concept is Beamed Energy Bootstrapping. The proposed radiators are thin-wall plastic tubes filled with low-pressure (2.4 kPa), low-temperature (20 °C) steam.
Solar pumped laser: Japan has pursued a solar-pumped laser, where sunlight directly excites the lasing medium used to create the coherent beam to Earth.
Stellaser: A hypothetical concept of a very large laser where a star provides both the lasing energy and the lasing medium, producing a steerable energy beam of unrivaled power.
Fusion decay: This version of a power-satellite is not "solar". Rather, the vacuum of space is seen as a "feature not a bug" for traditional fusion. Per Paul Werbos, after fusion even neutral particles decay to charged particles which in a sufficiently large volume would allow direct conversion to current.
Solar wind loop: Also called a Dyson–Harrop satellite. Here the satellite makes use not of the photons from the Sun but rather the charged particles in the solar wind which via electro-magnetic coupling generate a current in a large loop.
Direct mirrors: Early concepts for direct mirror re-direction of light to planet Earth suffered from the problem that rays coming from the sun are not parallel but are expanding from a disk and so the size of the spot on the Earth is quite large. Lewis Fraas has explored an array of parabolic mirrors to augment existing solar arrays.
Alternate satellite architecture: The typical satellite is a monolithic structure composed of a structural truss, one or more collectors, one or more transmitters, and occasionally primary and secondary reflectors. The entire structure may be gravity gradient stabilized. Alternative designs include:
Swarms of smaller satellites: Some designs propose swarms of free-flying smaller satellites. This is the case with several laser designs, and appears to be the case with CALTECH's Flying Carpets. For RF designs, an engineering constraint is the thinned array problem.
Free floating components: Solaren has proposed an alternative to the monolithic structure where the primary reflector and transmission reflector are free-flying.
Spin stabilization: NASA explored a spin-stabilized thin film concept.
Photonic laser thruster (PLT) stabilized structure: Young Bae has proposed that photon pressure may substitute for compressive members in large structures.
Transmission: The most typical design for energy transmission is via an RF antenna at below 10 GHz to a rectenna on the ground. Controversy exists between the benefits of Klystrons, Gyrotrons, Magnetrons and solid state. Alternate transmission approaches include:
Laser: Lasers offer the advantage of much lower cost and mass to first power, however there is controversy regarding benefits of efficiency. Lasers allow for much smaller transmitting and receiving apertures. However, a highly concentrated beam has eye-safety, fire safety, and weaponization concerns. Proponents believe they have answers to all these concerns. A laser-based approach must also find alternate ways of coping with clouds and precipitation.
Atmospheric waveguide: Some have proposed it may be possible to use a short pulse laser to create an atmospheric waveguide through which concentrated microwaves could flow.
Nuclear synthesis: Particle accelerators based in the inner solar system (whether in orbit or on a planet such as Mercury) could use solar energy to synthesize nuclear fuel from naturally occurring materials. While this would be highly inefficient using current technology (in terms of the amount of energy needed to manufacture the fuel compared to the amount of energy contained in the fuel) and would raise obvious nuclear safety issues, the basic technology upon which such an approach would rely has been in use for decades, making this possibly the most reliable means of sending energy especially over very long distances - in particular, from the inner solar system to the outer solar system.
Materials and manufacturing: Typical designs make use of the developed industrial manufacturing system extant on Earth, and use Earth based materials both for the satellite and propellant. Variants include:
Lunar materials: Designs exist for Solar Power Satellites that source >99% of materials from lunar regolith with very small inputs of "vitamins" from other locations. Using materials from the Moon is attractive because launch from the Moon is in theory far less complicated than from Earth. There is no atmosphere, and so components do not need to be packed tightly in an aeroshell and survive vibration, pressure and temperature loads. Launch may be via a magnetic mass driver and bypass the requirement to use propellant for launch entirely. Launch from the Moon to GEO also requires far less energy than from Earth's much deeper gravity well. Building all the solar power satellites to fully supply all the required energy for the entire planet requires less than one millionth of the mass of the Moon.
Self-replication on the Moon: NASA explored a self-replicating factory on the Moon in the early 1980s. More recently, Justin Lewis-Webber proposed a method of speciated manufacture of core elements based upon John Mankins SPS-Alpha design.
Asteroidal materials: Some asteroids are thought to have even lower Delta-V to recover materials than the Moon, and some particular materials of interest such as metals may be more concentrated or easier to access.
In-space/in-situ manufacturing: With the advent of in-space additive manufacturing, concepts such as SpiderFab might allow mass launch of raw materials for local extrusion.
Method of installation / Transportation of Material to Energy Collection Location: In the reference designs, component material is launched via well-understood chemical rockets (usually fully reusable launch systems) to LEO, after which either chemical or electrical propulsion is used to carry them to GEO. The desired characteristics for this system is very high mass-flow at low total cost. Alternate concepts include:
Lunar chemical launch: ULA has recently showcased a concept for a fully re-usable chemical lander XEUS to move materials from the Lunar surface to LLO or GEO.
Lunar mass driver: Launch of materials from the lunar surface using a system similar to an aircraft carrier electromagnetic catapult. An unexplored compact alternative would be the slingatron.
Lunar space elevator: An equatorial or near-equatorial cable extends to and through the Lagrange point. This is claimed by proponents to be lower in mass than a traditional mass driver.
Space elevator: A ribbon of pure carbon nanotubes extends from its center of gravity in Geostationary orbit, allowing climbers to climb up to GEO. Problems with this include the material challenge of creating a ribbon of such length (36,000 km!) with adequate strength, management of collisions with satellites and space debris, and lightning.
MEO Skyhook: As part of an AFRL study, Roger Lenard proposed a MEO Skyhook. It appears that a gravity gradient-stabilized tether with its center of mass in MEO can be constructed of available materials. The bottom of the skyhook is close to the atmosphere in a "non-keplerian orbit". A re-usable rocket can launch to match altitude and speed with the bottom of the tether which is in a non-keplerian orbit (travelling much slower than typical orbital speed). The payload is transferred and it climbs the cable. The cable itself is kept from de-orbiting via electric propulsion and/or electromagnetic effects.
MAGLEV launch / StarTram: John Powell has a concept for a very high mass-flow system. A first-generation system, built into a mountain, accelerates a payload through an evacuated MAGLEV track. A small on-board rocket circularizes the payload.
Beamed energy launch: Kevin Parkin and Escape Dynamics both have concepts for ground-based irradiation of a mono-propellant launch vehicle using RF energy. The RF energy is absorbed and directly heats the propellant not unlike in NERVA-style nuclear-thermal. LaserMotive has a concept for a laser-based approach.
Gallery
See also
Notes
References
The National Space Society maintains an extensive space solar power library of all major historical documents and studies associated with space solar power, and major news articles.
External links
European Space Agency (ESA) – Advanced Concepts Team, Space-based solar power
William Maness on why alternative energy and power grids aren't good playmates and his plans for beaming solar power from space, in Seed (magazine)
The World Needs Energy from Space Space-based solar technology is the key to the world's energy and environmental future, writes Peter E. Glaser, a pioneer of the technology.
"Reinventing the Solar Power Satellite", NASA 2004–212743, report by Geoffrey A. Landis of NASA Glenn Research Center
Japan's plans for a solar power station in space - the Japanese government hopes to assemble a space-based solar array by 2040.
Space Energy, Inc.
Whatever happened to solar power satellites? An article that covers the hurdles in the way of deploying a solar power satellite.
Solar Power Satellite from Lunar and Asteroidal Materials Provides an overview of the technological and political developments needed to construct and utilize a multi-gigawatt power satellite. Also provides some perspective on the cost savings achieved by using extraterrestrial materials in the construction of the satellite.
A renaissance for space solar power? by Jeff Foust, Monday, August 13, 2007 Reports on renewed institutional interest in SSP, and a lack of such interest in past decades.
"Conceptual Study of A Solar Power Satellite, SPS 2000" Makoto Nagatomo, Susumu Sasaki and Yoshihiro Naruo
Researchers Beam 'Space' Solar Power in Hawaii (Wired Science)
The National Space Society's Space Solar Power Library
The future of Energy is on demand? Special Session at the 2010 Festival delle Città Impresa featuring John Mankins (Artemis Innovation Management Solutions LLC, USA), Nobuyuki Kaya (Kobe University, Japan), Sergio Garribba (Ministry of Economic Development, Italy), Lorenzo Fiori (Finmeccanica Group, Italy), Andrea Massa (University of Trento, Italy) and Vincenzo Gervasio (Consiglio Nazionale dell'Economia e del Lavoro, Italy)
White Paper - History of SPS Developments, International Union of Radio Science, 2007
International SunSat design competition
A simulation of AM reception from an aerial powering two inductive loads and recharging a battery.
Solar power from space 5-minute video about space-based solar power plants by the European Space Agency
Powering the Planet 20-minute streaming video from The Futures Channel that provides a "101" on space-based solar power
Space Solar Power NewSpace 2010 Panel, 72 minutes
Space Solar Power and Space Energy Systems SSI – Space Manufacturing 14 Panel – 2010 – 27 min
NASA DVD in 16 Parts Exploring New Frontiers for Tomorrow's Energy Needs
Space Solar Power Press Conference September 12, 2008 (71 minutes) National Space Society
BBC One - Bang Goes the Theory, Series 6, Episode 5, Transmitting power without wires BBC/Lighthouse DEV Eye-safe Laser Based Power Beaming Demo
Photovoltaics
Space technology
Thermodynamics
Energy conversion
Satellites
Electric power
Solar power
Solar power and space
Solar power | Space-based solar power | [
"Physics",
"Chemistry",
"Astronomy",
"Mathematics",
"Engineering"
] | 10,886 | [
"Physical quantities",
"Outer space",
"Space technology",
"Power (physics)",
"Electric power",
"Thermodynamics",
"Satellites",
"Electrical engineering",
"Dynamical systems"
] |
6,758,777 | https://en.wikipedia.org/wiki/Triphenyltin%20hydride | Triphenyltin hydride is the organotin compound with the formula (C6H5)3SnH. It is a white distillable oil that is soluble in organic solvents. It is often used as a source of "H·" to generate radicals or cleave carbon-oxygen bonds.
Preparation and reactions
Ph3SnH, as it is more commonly abbreviated, is prepared by treatment of triphenyltin chloride with lithium aluminium hydride. Although Ph3SnH is treated as a source of "H·", in fact it does not release free hydrogen atoms, which are extremely reactive species. Instead, Ph3SnH transfers H to substrates, usually via a radical chain mechanism. This reactivity exploits the relatively good stability of "Ph3Sn·".
References
Metal hydrides
Triphenyltin compounds
Reagents for organic chemistry | Triphenyltin hydride | [
"Chemistry"
] | 183 | [
"Reducing agents",
"Metal hydrides",
"Inorganic compounds",
"Reagents for organic chemistry"
] |
6,761,001 | https://en.wikipedia.org/wiki/Bioorganic%20chemistry | Bioorganic chemistry is a scientific discipline that combines organic chemistry and biochemistry. It is that branch of life science that deals with the study of biological processes using chemical methods. Protein and enzyme function are examples of these processes.
Sometimes biochemistry is used interchangeably for bioorganic chemistry; the distinction being that bioorganic chemistry is organic chemistry that is focused on the biological aspects. While biochemistry aims at understanding biological processes using chemistry, bioorganic chemistry attempts to expand organic-chemical research (that is, structures, synthesis, and kinetics) toward biology. When investigating metalloenzymes and cofactors, bioorganic chemistry overlaps bioinorganic chemistry.
Sub disciplines
Biophysical organic chemistry is a term used when attempting to describe intimate details of molecular recognition by bioorganic chemistry.
Natural product chemistry is the process of identifying compounds found in nature to determine their properties. Such discoveries have often led to medicinal uses and to the development of herbicides and insecticides.
References
Biochemistry | Bioorganic chemistry | [
"Chemistry",
"Biology"
] | 205 | [
"Biochemistry",
"nan"
] |
6,763,876 | https://en.wikipedia.org/wiki/Skorokhod%27s%20representation%20theorem | In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space. It is named for the Ukrainian mathematician A. V. Skorokhod.
Statement
Let (μn), n ∈ ℕ, be a sequence of probability measures on a metric space S such that μn converges weakly to some probability measure μ∞ on S as n → ∞. Suppose also that the support of μ∞ is separable. Then there exist S-valued random variables Xn defined on a common probability space (Ω, F, P) such that the law of Xn is μn for all n (including n = ∞) and such that Xn converges to X∞, P-almost surely.
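For real-valued random variables, the representation can be made completely explicit with the quantile-function (generalized inverse CDF) construction on the common probability space ([0, 1], Borel sets, Lebesgue measure): set Xn = Fn^(-1)(U) for a single uniform random variable U. The Python sketch below only illustrates that special case, not the general metric-space theorem; the choice μn = Normal(mean 1/n, standard deviation 1 + 1/n), converging weakly to the standard normal, is an arbitrary assumed example.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
u = rng.uniform(size=100_000)            # one uniform draw per "omega" in [0, 1]

def x_n(n: int) -> np.ndarray:
    """X_n = F_n^{-1}(U) where mu_n = Normal(1/n, (1 + 1/n)^2) -- an illustrative choice."""
    return norm.ppf(u, loc=1.0 / n, scale=1.0 + 1.0 / n)

x_inf = norm.ppf(u)                      # limit variable, with law Normal(0, 1)

for n in (1, 10, 100, 1000):
    # the pathwise distance shrinks as n grows: almost-sure convergence on this space
    print(n, np.max(np.abs(x_n(n) - x_inf)))
```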
See also
Convergence in distribution
References
(see p. 7 for weak convergence, p. 24 for convergence in distribution and p. 70 for Skorokhod's theorem)
Probability theorems
Theorems in statistics | Skorokhod's representation theorem | [
"Mathematics"
] | 189 | [
"Mathematical problems",
"Theorems in probability theory",
"Mathematical theorems",
"Theorems in statistics"
] |
35,646,178 | https://en.wikipedia.org/wiki/Homogamy%20%28biology%29 | Homogamy is used in biology in four separate senses:
Inbreeding can be referred to as homogamy.
Homogamy refers to the maturation of male and female reproductive organs (of plants) at the same time, which is also known as simultaneous or synchronous hermaphrodism and is the antonym of dichogamy. Many flowers appear to be homogamous but some of these may not be strictly functionally homogamous, because for various reasons male and female reproduction do not completely overlap.
In the daisy family, the flower heads are made up of many small flowers called florets, and are either homogamous or heterogamous. Heterogamous heads are made up of two types of florets, ray florets near the edge and disk florets in the center. Homogamous heads are made up of just one type of floret, either all ray florets or all disk florets.
Homogamy can also refer to a form of mate choice based on characteristics that are desired in a sexual partner.
Inbreeding
As opposed to outcrossing or outbreeding, inbreeding is the process by which organisms with common descent come together to mate and eventually procreate. An archetype of inbreeding is self-pollination. When a plant has both anthers and a stigma, the process of inbreeding can occur. Another word for this self-fertilization is autogamy, which is when an anther releases pollen that attaches to the stigma of the same plant. Self-pollination is promoted by homogamy, which is when the anthers and the stigma of a flower mature at the same time. The action of self-pollination guides the plant toward homozygosity, causing the same version of a gene to be received from each parent, so that the plant possesses two identical copies of that gene.
Assortative mating
Assortative mating is the choosing of a mate to breed with based on physical characteristics, that is, phenotypical traits. Social factors such as religion, physical traits, and culture shape this choice. For instance, sociologists have found that men and women tend to look for partners with a similar level of education. The homogamy theory holds that when organisms look for a potential partner, they search for organisms with traits similar to their own. The idea of sexual imprinting plays a role in this theory: individuals tend to be attracted to people whose characteristics most closely resemble those of their opposite-sex parent. This is a form of positive assortative mating, in which people choose a mate with attributes that correlate with their own. According to Kalmijn and Flap, there are five settings in which individuals typically become acquainted with one another: work, school, the neighborhood, common family networks, and voluntary associations. They also studied five criteria that are usually considered when deciding whether to mate: age, education, class destinations, class origins, and religious background.
Evolutionary aspect
An evolutionary theory holds that two specific qualities are sought: male dominance and female attractiveness. According to the evolutionary perspective, the purpose of mating is to procreate for the sake of survival. Those with the most advantageous features and traits survive, an idea summed up in the phrase "survival of the fittest." If a couple is unable to become fertile, or has a child with a disease or handicap, the risk of divorce rises considerably. When unfavorable traits are found in a spouse, homogamy in the relationship decreases, and the perceived need for it increases in order to produce healthier children.
References
Pollination
Reproduction | Homogamy (biology) | [
"Biology"
] | 832 | [
"Biological interactions",
"Behavior",
"Reproduction"
] |
35,648,894 | https://en.wikipedia.org/wiki/Momentum-transfer%20cross%20section | In physics, and especially scattering theory, the momentum-transfer cross section (sometimes known as the momentum-transport cross section) is an effective scattering cross section useful for describing the average momentum transferred from a particle when it collides with a target. Essentially, it contains all the information about a scattering process necessary for calculating average momentum transfers but ignores other details about the scattering angle.
The momentum-transfer cross section is defined in terms of an (azimuthally symmetric and momentum independent) differential cross section dσ/dΩ(θ) by
σ_tr = ∫ (1 − cos θ) (dσ/dΩ)(θ) dΩ = 2π ∫₀^π (1 − cos θ) (dσ/dΩ)(θ) sin θ dθ.
The momentum-transfer cross section can be written in terms of the phase shifts δ_l from a partial wave analysis as
σ_tr = (4π / k²) Σ_{l=0}^∞ (l + 1) sin²(δ_{l+1} − δ_l).
Explanation
The factor of (1 − cos θ) arises as follows. Let the incoming particle be traveling along the z-axis with vector momentum
p_in = q ẑ.
Suppose the particle scatters off the target with polar angle θ and azimuthal angle φ measured in the x–y plane. Its new momentum is
p' = q′ (sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ).
For a collision with a target much heavier than the striking particle (e.g., an electron incident on an atom or ion), q′ ≈ q, so
p' ≈ q (sin θ cos φ x̂ + sin θ sin φ ŷ + cos θ ẑ).
By conservation of momentum, the target has acquired momentum
Δp = p_in − p' = q (−sin θ cos φ x̂ − sin θ sin φ ŷ + (1 − cos θ) ẑ).
Now, if many particles scatter off the target, and the target is assumed to have azimuthal symmetry, then the radial (x̂ and ŷ) components of the transferred momentum will average to zero. The average momentum transfer will be just q (1 − cos θ) ẑ. If we do the full averaging over all possible scattering events, we get
⟨Δp⟩ = q ẑ (σ_tr / σ_tot),
where the total cross section is
σ_tot = ∫ (dσ/dΩ)(θ) dΩ.
Here, the averaging is done using the expected value calculation (treating the normalized differential cross section (1/σ_tot)(dσ/dΩ) as a probability density function). Therefore, for a given total cross section, one does not need to compute new integrals for every possible momentum in order to determine the average momentum transferred to a target. One just needs to compute σ_tr.
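As a purely illustrative numerical check of the definition, the sketch below integrates a toy forward-peaked differential cross section and compares the total and momentum-transfer cross sections; the functional form and the screening parameter are assumptions chosen only to make the forward-peaked behaviour visible.

```python
import numpy as np

def dsigma_domega(theta, eps=0.05):
    """Toy, screened-Coulomb-like differential cross section (illustrative, arbitrary units)."""
    return 1.0 / (1.0 - np.cos(theta) + eps) ** 2

theta = np.linspace(1e-6, np.pi, 20_000)
weight = 2.0 * np.pi * np.sin(theta)      # dOmega = 2*pi*sin(theta)*dtheta for azimuthal symmetry

sigma_tot = np.trapz(dsigma_domega(theta) * weight, theta)
sigma_tr  = np.trapz((1.0 - np.cos(theta)) * dsigma_domega(theta) * weight, theta)

print(f"sigma_tot = {sigma_tot:.2f}")
print(f"sigma_tr  = {sigma_tr:.2f}")
print(f"mean fractional forward-momentum transfer = {sigma_tr / sigma_tot:.3f}")
```

Because the toy cross section is strongly forward peaked, sigma_tr comes out much smaller than sigma_tot, reflecting the small average momentum transfer per collision.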
Application
This concept is used in calculating the charge radius of nuclei such as the proton and the deuteron in electron scattering experiments.
For this purpose, a useful quantity called the scattering vector q, having the dimension of inverse length, is defined as a function of energy and scattering angle θ:
q = 2k sin(θ/2),
where k is the wavenumber of the incident electron, which is determined by its energy.
References
Momentum
Scattering theory | Momentum-transfer cross section | [
"Physics",
"Chemistry",
"Mathematics"
] | 384 | [
"Scattering theory",
"Physical quantities",
"Quantity",
"Scattering",
"Momentum",
"Moment (physics)"
] |
35,650,011 | https://en.wikipedia.org/wiki/Aethrioscope | An aethrioscope (or æthrioscope) is a meteorological device invented by Sir John Leslie in 1818 for measuring the chilling effect of a clear sky. The name is from the Greek word for clear – αίθριος.
It consists of a metallic cup standing upon a tall hollow pedestal, with a differential thermometer placed so that one of its bulbs is in the focus of the paraboloid formed by the cavity of the cup. The interior of the cup is highly polished and is kept covered by a plate of metal, being opened when an observation is made. The second bulb is always screened from the sky and so is not affected by the radiative effect of the clear sky, the action of which is concentrated upon the first bulb. The contraction of the air in the first bulb on its sudden exposure to a clear sky causes the liquid in the stem to rise.
The device will respond in a contrary fashion when exposed to heat radiation and so may be used as a pyrometer too.
References
Thermometers
1818 in Scotland
1818 in science
Atmospheric physics
Atmospheric radiation | Aethrioscope | [
"Physics",
"Technology",
"Engineering"
] | 223 | [
"Atmospheric physics",
"Applied and interdisciplinary physics",
"Thermometers",
"Measuring instruments"
] |
35,652,042 | https://en.wikipedia.org/wiki/Liver-expressed%20antimicrobial%20peptide | Liver-expressed antimicrobial peptides are a family of mammalian liver-expressed antimicrobial peptides (LEAP). The exact function of this family is unclear.
LEAP2 is a cysteine-rich, and cationic protein with a core structure stabilized by two disulphide bonds formed by cysteine residues in 1-3 and 2-4 relative positions. Synthesised as a 77-residue precursor, LEAP2 is predominantly expressed in the liver and highly conserved among mammals. The largest native LEAP2 form of 40 amino acid residues is generated from the precursor at a putative cleavage site for a furin-like endoprotease. In contrast to smaller LEAP-2 variants, this peptide exhibits dose-dependent antimicrobial activity against selected microbial model organisms.
References
Antimicrobial peptides
Protein families | Liver-expressed antimicrobial peptide | [
"Biology"
] | 174 | [
"Protein families",
"Protein classification"
] |
35,652,136 | https://en.wikipedia.org/wiki/Snell%20envelope | The Snell envelope, used in stochastics and mathematical finance, is the smallest supermartingale dominating a stochastic process. The Snell envelope is named after James Laurie Snell.
Definition
Given a filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_t)_{t \in [0,T]}, \mathbb{P}) and an absolutely continuous probability measure \mathbb{Q} \ll \mathbb{P}, an adapted process U = (U_t)_{t \in [0,T]} is the Snell envelope with respect to \mathbb{Q} of the process X = (X_t)_{t \in [0,T]} if
U is a \mathbb{Q}-supermartingale,
U dominates X, i.e. U_t \geq X_t \mathbb{Q}-almost surely for all times t, and
U is minimal among such processes: if V is a \mathbb{Q}-supermartingale which dominates X, then V dominates U.
Construction
Given a (discrete-time) filtered probability space (\Omega, \mathcal{F}, (\mathcal{F}_n)_{n=0}^{N}, \mathbb{P}) and an absolutely continuous probability measure \mathbb{Q}, the Snell envelope (U_n)_{n=0}^{N} with respect to \mathbb{Q} of the process (X_n)_{n=0}^{N} is given by the recursive scheme
U_N := X_N, \qquad U_n := X_n \vee \mathbb{E}_\mathbb{Q}[U_{n+1} \mid \mathcal{F}_n] \quad \text{for } n = N-1, \dots, 0,
where \vee is the join (in this case equal to the maximum of the two random variables).
Application
If X is a discounted American option payoff with Snell envelope U, then U_t is the minimal capital requirement needed to hedge X from time t to the expiration date.
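The recursive construction above is a straightforward backward induction. The Python sketch below is a minimal illustration (not from the article): it assumes a simple binomial-tree model with made-up parameters and ignores discounting, and computes U_0 for an American-put-style payoff.

```python
import numpy as np

def snell_envelope_binomial(payoff, p_up, u, d, s0, n):
    """Value U_0 of the Snell envelope of an American-style payoff on an n-step binomial tree.

    payoff : function of the asset price (treated as already discounted, for simplicity)
    p_up   : risk-neutral probability of an up move; u, d : up/down factors
    s0     : initial asset price; n : number of time steps
    """
    # Terminal asset prices and terminal values U_N = X_N
    prices = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    U = payoff(prices)
    for step in range(n, 0, -1):
        prices = s0 * u ** np.arange(step - 1, -1, -1) * d ** np.arange(0, step)
        cont = p_up * U[:-1] + (1.0 - p_up) * U[1:]   # E_Q[U_{k+1} | F_k]
        U = np.maximum(payoff(prices), cont)           # U_k = X_k ∨ E_Q[U_{k+1} | F_k]
    return U[0]

# Example: an illustrative American put with strike 100 on a 50-step tree
print(snell_envelope_binomial(lambda s: np.maximum(100.0 - s, 0.0),
                              p_up=0.5, u=1.1, d=0.9, s0=100.0, n=50))
```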
References
Mathematical finance | Snell envelope | [
"Mathematics"
] | 191 | [
"Applied mathematics",
"Mathematical finance"
] |
35,654,252 | https://en.wikipedia.org/wiki/Hesse%27s%20theorem | In geometry, Hesse's theorem, named for Otto Hesse, states that if two pairs of opposite vertices of a quadrilateral are conjugate with respect to some conic, then so is the third pair. A quadrilateral with this property is called a Hesse quadrilateral.
References
Theorems in projective geometry | Hesse's theorem | [
"Mathematics"
] | 70 | [
"Theorems in geometry",
"Theorems in projective geometry",
"Geometry",
"Geometry stubs"
] |
5,141,960 | https://en.wikipedia.org/wiki/Neville%27s%20algorithm | In mathematics, Neville's algorithm is an algorithm used for polynomial interpolation that was derived by the mathematician Eric Harold Neville in 1934. Given n + 1 points, there is a unique polynomial of degree ≤ n which goes through the given points. Neville's algorithm evaluates this polynomial.
Neville's algorithm is based on the Newton form of the interpolating polynomial and the recursion relation for the divided differences. It is similar to Aitken's algorithm (named after Alexander Aitken), which is nowadays rarely used.
The algorithm
Given a set of n+1 data points (xi, yi) where no two xi are the same, the interpolating polynomial is the polynomial p of degree at most n with the property
p(xi) = yi for all i = 0,...,n
This polynomial exists and it is unique. Neville's algorithm evaluates the polynomial at some point x.
Let pi,j denote the polynomial of degree j − i which goes through the points (xk, yk) for k = i, i + 1, ..., j. The
pi,j satisfy the recurrence relation
pi,i(x) = yi, for 0 ≤ i ≤ n, and
pi,j(x) = ((xj − x) pi,j−1(x) + (x − xi) pi+1,j(x)) / (xj − xi), for 0 ≤ i < j ≤ n.
This recurrence can calculate
p0,n(x),
which is the value being sought. This is Neville's algorithm.
For instance, for n = 4, one can use the recurrence to fill the triangular tableau below from the left to the right.
p0,0(x) = y0
                  p0,1(x)
p1,1(x) = y1                  p0,2(x)
                  p1,2(x)                  p0,3(x)
p2,2(x) = y2                  p1,3(x)                  p0,4(x)
                  p2,3(x)                  p1,4(x)
p3,3(x) = y3                  p2,4(x)
                  p3,4(x)
p4,4(x) = y4
This process yields
p0,4(x),
the value of the polynomial going through the n + 1 data points (xi, yi) at the point x.
This algorithm needs O(n²) floating point operations to interpolate a single point, and O(n³) floating point operations to interpolate a polynomial of degree n.
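A compact implementation of the recurrence is shown below (a Python sketch; variable names are illustrative). It overwrites a single array in place, sweeping the tableau column by column, which reproduces the O(n²) cost per evaluation point quoted above.

```python
def neville(xs, ys, x):
    """Evaluate at x the interpolating polynomial through the points (xs[i], ys[i]).

    After the j-th sweep, p[i] holds p_{i-j,i}(x); the final entry is p_{0,n-1}(x).
    """
    n = len(xs)
    p = list(ys)                      # p_{i,i}(x) = y_i
    for j in range(1, n):             # j = width of the index interval
        for i in range(n - 1, j - 1, -1):
            p[i] = ((x - xs[i - j]) * p[i] + (xs[i] - x) * p[i - 1]) / (xs[i] - xs[i - j])
    return p[n - 1]

# Example: three points on y = x^2, evaluated at x = 1.5 (exact value 2.25)
print(neville([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))
```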
The derivative of the polynomial can be obtained in the same manner, by differentiating the recurrence, i.e.:
p′i,i(x) = 0, for 0 ≤ i ≤ n, and
p′i,j(x) = ((xj − x) p′i,j−1(x) − pi,j−1(x) + (x − xi) p′i+1,j(x) + pi+1,j(x)) / (xj − xi), for 0 ≤ i < j ≤ n.
Application to numerical differentiation
Lyness and Moler showed in 1966 that using undetermined coefficients for the polynomials in Neville's algorithm, one can compute the Maclaurin expansion of the final interpolating polynomial, which yields numerical approximations for the derivatives of the function at the origin. While "this process requires more arithmetic operations than is required in finite difference methods", "the choice of points for function evaluation is not restricted in any way". They also show that their method can be applied directly to the solution of linear systems of the Vandermonde type.
References
J. N. Lyness and C. B. Moler, Van der Monde Systems and Numerical Differentiation, Numerische Mathematik 8 (1966), 458–464. doi:10.1007/BF02166671
Neville, E. H.: Iterative interpolation. J. Indian Math. Soc. 20, 87–120 (1934).
Polynomials
Interpolation | Neville's algorithm | [
"Mathematics"
] | 712 | [
"Polynomials",
"Algebra"
] |
5,142,109 | https://en.wikipedia.org/wiki/Licensing%20factor | A licensing factor is a protein or complex of proteins that allows an origin of replication to begin DNA replication at that site. Licensing factors primarily occur in eukaryotic cells, since bacteria use simpler systems to initiate replication. However, many archaea use homologues of eukaryotic licensing factors to initiate replication.
Function
Origins of replication represent start sites for DNA replication, and so their "firing" must be regulated to maintain the correct karyotype of the cell in question. The origins are required to fire only once per cell cycle, an observation that led biologists to postulate the existence of licensing factors in the first place. If the origins were not carefully regulated, then DNA replication could be restarted at an origin, giving rise to multiple copies of a section of DNA. This could be damaging to cells and could have detrimental effects on the organism as a whole.
The control that licensing factors exert over the cycle represents a flexible system, necessary so that different cell types in an organism can control the timing of DNA replication to their own cell cycles.
Subcellular distribution
The factors themselves are found in different places in different organisms. For example, in metazoan organisms they are commonly synthesised in the cytoplasm of the cell and imported into the nucleus when required. The situation is different in yeast, where the factors present are degraded and resynthesised throughout the cell cycle but are found in the nucleus for most of their existence.
Example in yeast
Immediately after mitosis has finished, the cell cycle starts again, entering the G1 phase, at which point synthesis of various products required for the rest of the cycle begins. Two of the proteins produced, Cdc6 and Cdt1, are synthesised only in G1 phase. Together they bind to the origin recognition complex (ORC), which is already bound at the origin and in fact never leaves these sites throughout the cycle. The result is a so-called pre-replication complex, which then allows a heterohexameric complex of the proteins MCM2 to MCM7 to bind. This hexamer acts as a helicase, unwinding the double-stranded DNA. At this point Cdc6 leaves the complex and is inactivated, triggered by CDK-dependent phosphorylation: it is degraded in yeast but exported from the nucleus in metazoans. The next steps include the loading of a variety of other proteins such as MCM10, a CDK, DDK and Cdc45, the last of which is directly required for loading the DNA polymerase. During this period Cdt1 is released from the complex, and the cell leaves G1 phase and enters S phase, when replication starts.
From the above sequence we can see that Cdc6 and Cdt1 fulfil the role of licensing factors. They are produced only in G1 phase, and the binding of all the proteins in this process excludes the binding of additional copies. In this way their action is limited to starting replication once: once they have been ejected from the complex by other proteins, the cell enters S phase, during which they are neither re-produced nor re-activated. Thus they act as licensing factors, but only together. It has been suggested that the whole pre-replication complex should be called the licensing factor, since the whole is required for assembling the additional proteins that initiate replication.
References
External links
Recent paper on licensing in human cells
DNA replication | Licensing factor | [
"Biology"
] | 693 | [
"Genetics techniques",
"DNA replication",
"Molecular genetics"
] |
5,142,364 | https://en.wikipedia.org/wiki/Cysteine%20%28data%20page%29 |
References
Chemical data pages
Chemical data pages cleanup | Cysteine (data page) | [
"Chemistry"
] | 10 | [
"Chemical data pages",
"nan"
] |
5,143,623 | https://en.wikipedia.org/wiki/Rabi%20problem | The Rabi problem concerns the response of an atom to an applied harmonic electric field, with an applied frequency very close to the atom's natural frequency. It provides a simple and generally solvable example of light–atom interactions and is named after Isidor Isaac Rabi.
Classical Rabi problem
In the classical approach, the Rabi problem can be represented by the solution to the driven damped harmonic oscillator with the electric part of the Lorentz force as the driving term:
where it has been assumed that the atom can be treated as a charged particle (of charge e) oscillating about its equilibrium position around a neutral atom. Here x_a is its instantaneous magnitude of oscillation, \omega_a its natural oscillation frequency, and \tau_0 its natural lifetime, which has been calculated based on the dipole oscillator's energy loss from electromagnetic radiation.
To apply this to the Rabi problem, one assumes that the electric field E is oscillatory in time and constant in space, E(t) = E_0 \cos(\omega t),
and xa is decomposed into a part ua that is in-phase with the driving E field (corresponding to dispersion) and a part va that is out of phase (corresponding to absorption):
Here x_0 is assumed to be constant, but u_a and v_a are allowed to vary in time. However, if the system is very close to resonance (\omega \approx \omega_a), then these values will be slowly varying in time, and we can make the slowly-varying-envelope assumptions \dot{u}_a \ll \omega u_a, \ddot{u}_a \ll \omega \dot{u}_a and \dot{v}_a \ll \omega v_a, \ddot{v}_a \ll \omega \dot{v}_a.
With these assumptions, the Lorentz force equations for the in-phase and out-of-phase parts can be rewritten as
where we have replaced the natural lifetime with a more general effective lifetime T (which could include other interactions such as collisions) and have dropped the subscript a in favor of the newly defined detuning , which serves equally well to distinguish atoms of different resonant frequencies. Finally, the constant
has been defined.
These equations can be solved as follows:
After all transients have died away, the steady-state solution takes the simple form
where "c.c." stands for the complex conjugate of the opposing term.
Two-level atom
Semiclassical approach
The classical Rabi problem gives some basic results and a simple to understand picture of the issue, but in order to understand phenomena such as inversion, spontaneous emission, and the Bloch–Siegert shift, a fully quantum-mechanical treatment is necessary.
The simplest approach is through the two-level atom approximation, in which one only treats two energy levels of the atom in question. No atom with only two energy levels exists in reality, but a transition between, for example, two hyperfine states in an atom can be treated, to first approximation, as if only those two levels existed, assuming the drive is not too far off resonance.
The convenience of the two-level atom is that any two-level system evolves in essentially the same way as a spin-1/2 system, in accordance with the Bloch equations, which define the dynamics of the pseudo-spin vector in an electric field:
where we have made the rotating wave approximation in throwing out terms with high angular velocity (and thus small effect on the total spin dynamics over long time periods) and transformed into a set of coordinates rotating at the driving frequency \omega.
There is a clear analogy here between these equations and those that defined the evolution of the in-phase and out-of-phase components of oscillation in the classical case. Now, however, there is a third term w, which can be interpreted as the population difference between the excited and ground state (varying from −1 to represent completely in the ground state to +1, completely in the excited state). Keep in mind that for the classical case, there was a continuous energy spectrum that the atomic oscillator could occupy, while for the quantum case (as we've assumed) there are only two possible (eigen)states of the problem.
These equations can also be stated in matrix form:
It is noteworthy that these equations can be written as a vector precession equation:
\frac{d\vec{\rho}}{dt} = \vec{\Omega}_\mathrm{eff} \times \vec{\rho},
where \vec{\rho} = (u, v, w) is the pseudo-spin vector and \vec{\Omega}_\mathrm{eff} acts as an effective torque.
As before, the Rabi problem is solved by assuming that the electric field E is oscillatory with constant magnitude E_0: E(t) = E_0 \cos(\omega t). In this case, the solution can be found by applying two successive rotations to the matrix equation above, of the form
and
where
Here the frequency \tilde{\Omega} = \sqrt{\Omega^2 + \delta^2}, where \Omega is the resonant Rabi frequency (proportional to the field amplitude E_0) and \delta is the detuning, is known as the generalized Rabi frequency, which gives the rate of precession of the pseudo-spin vector about the transformed u axis (given by the first coordinate transformation above). As an example, if the electric field (or laser) is exactly on resonance (such that \delta = 0), then the pseudo-spin vector will precess about the u axis at a rate of \Omega. If this (on-resonance) pulse is shone on a collection of atoms originally all in their ground state (w = −1) for a time t = \pi/\Omega, then after the pulse the atoms will all be in their excited state (w = +1) because of the \pi (or 180°) rotation about the u axis. This is known as a \pi-pulse and has the result of a complete inversion.
The general result is given by
The expression for the inversion w can be greatly simplified if the atom is assumed to be initially in its ground state (w_0 = −1) with u_0 = v_0 = 0, in which case
w(t) = -\,\frac{\delta^2 + \Omega^2 \cos(\tilde{\Omega} t)}{\tilde{\Omega}^2}.
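For a rough numerical feel, the Python sketch below (not from the article) simply evaluates the standard textbook inversion formula quoted above with illustrative parameter values, showing that complete inversion occurs only on resonance.

```python
import numpy as np

def inversion(t, rabi, detuning):
    """Inversion w(t) of a two-level atom starting in the ground state (w0 = -1),
    evaluated from the generalized Rabi frequency."""
    gen = np.sqrt(rabi ** 2 + detuning ** 2)            # generalized Rabi frequency
    return -(detuning ** 2 + rabi ** 2 * np.cos(gen * t)) / gen ** 2

t = np.linspace(0.0, 2.0 * np.pi, 9)                    # a few sample times
print(inversion(t, rabi=1.0, detuning=0.0))             # on resonance: swings between -1 and +1
print(inversion(t, rabi=1.0, detuning=2.0))             # detuned: incomplete inversion
```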
Rabi problem in time-dependent perturbation theory
In the quantum approach, the periodic driving force can be considered as a periodic perturbation and, therefore, the problem can be solved using time-dependent perturbation theory, with
\hat{H} = \hat{H}_0 + \hat{V}(t),
where \hat{H}_0 is the time-independent Hamiltonian that gives the original eigenstates, and \hat{V}(t) is the time-dependent perturbation. Assume that at time t we can expand the state as
|\psi(t)\rangle = \sum_n c_n(t)\, e^{-iE_n t/\hbar}\, |n\rangle,
where the |n\rangle represent the eigenstates of the unperturbed system. For an unperturbed system, each c_n is a constant.
Now let us calculate the coefficients c_n(t) under a periodic perturbation \hat{V}(t). Substituting the expansion into the time-dependent Schrödinger equation, we get an equation for the time derivatives of the coefficients,
and then multiplying both sides of the equation by \langle m| isolates \dot{c}_m(t).
When the excitation frequency \omega is at resonance between two states n and m, i.e. \hbar\omega = E_m - E_n, it becomes a normal-mode problem of a two-level system, and it is easy to find that
where
The probability of being in the state m at time t is P_m(t) = |c_m(t)|^2; its precise value depends on the initial condition of the system.
An exact solution for a spin-1/2 system in an oscillating magnetic field was given by Rabi (1937). From this work it is clear that the Rabi oscillation frequency is proportional to the magnitude of the oscillating magnetic field.
Quantum field theory approach
In Bloch's approach, the field is not quantized, and neither the resulting coherence nor the resonance is well explained.
For the QFT approach, see mainly the Jaynes–Cummings model.
See also
Rabi cycle
Rabi frequency
Vacuum Rabi oscillation
References
Atomic physics
Spintronics | Rabi problem | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,444 | [
"Spintronics",
"Quantum mechanics",
"Atomic physics",
" molecular",
"Condensed matter physics",
"Atomic",
" and optical physics"
] |
5,143,685 | https://en.wikipedia.org/wiki/Joule%20effect | Joule effect and Joule's law are any of several different physical effects discovered or characterized by English physicist James Prescott Joule. These physical effects are not the same, but all are frequently or occasionally referred to in the literature as the "Joule effect" or "Joule law" These physical effects include:
"Joule's first law" (Joule heating), a physical law expressing the relationship between the heat generated and the current flowing through a conductor.
Joule's second law states that the internal energy of an ideal gas is independent of its volume and pressure, depending only on its temperature.
Magnetostriction, a property of ferromagnetic materials that causes them to change their shape when subjected to a magnetic field.
The Joule effect (during Joule expansion), the temperature change of a gas (usually cooling) when it is allowed to expand freely.
The Joule–Thomson effect, the temperature change of a gas when it is forced through a valve or porous plug while keeping it insulated so that no heat is exchanged with the environment.
The Gough–Joule effect or the Gow–Joule effect, which is the tendency of elastomers to contract if heated while they are under tension.
Joule's first law
Between 1840 and 1843, Joule carefully studied the heat produced by an electric current. From this study, he developed Joule's laws of heating, the first of which is commonly referred to as the Joule effect. Joule's first law expresses the relationship between heat generated in a conductor and current flow, resistance, and time: the heat Q produced when a steady current I flows through a conductor of resistance R for a time t is Q = I²Rt.
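A minimal numeric illustration of the first law (the values below are arbitrary):

```python
def joule_heat(current_amps, resistance_ohms, time_seconds):
    """Heat (in joules) dissipated in a resistor: Q = I^2 * R * t."""
    return current_amps ** 2 * resistance_ohms * time_seconds

# 2 A through a 10-ohm resistor for 60 s dissipates 2400 J
print(joule_heat(2.0, 10.0, 60.0))
```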
Magnetostriction
The magnetostriction effect describes a property of ferromagnetic materials which causes them to change their shape when subjected to a magnetic field. Joule first reported observing the change in the length of ferromagnetic rods in 1842.
Joule expansion
In 1845, Joule studied the free expansion of a gas into a larger volume. This became known as Joule expansion. The cooling of a gas by allowing it to expand freely is occasionally referred to as the Joule effect.
Gough–Joule effect
If an elastic band is first stretched and then subjected to heating, it will shrink rather than expand. This effect was first observed by John Gough in 1802, and was investigated further by Joule in the 1850s, when it then became known as the Gough–Joule effect.
Examples in literature
Popular Science magazine, January 1972: "A stretched piece of rubber contracts when heated. In doing so, it exerts a measurable increase in its pull. This surprising property of rubber was first observed by James Prescott Joule about a hundred years ago and is known as the Joule effect."
Rubber as an Engineering Material (book), by Khairi Nagdi: "The Joule effect is a phenomenon of practical importance that machine designers must consider. The simplest way of demonstrating this effect is to suspend a weight on a rubber band sufficient to elongate it by at least 50%. When an infrared lamp warms up the stretched rubber band, it does not elongate because of thermal expansion, as may be expected, but it retracts and lifts the weight."
See also
Joule–Thomson effect
References
Thermodynamics | Joule effect | [
"Physics",
"Chemistry",
"Mathematics"
] | 674 | [
"Thermodynamics",
"Dynamical systems"
] |
5,144,570 | https://en.wikipedia.org/wiki/Expansive%20clay | Expansive clay is a clay soil that is prone to large volume changes (swelling and shrinking) that are directly related to changes in water content. Soils with a high content of expansive minerals can form deep cracks in drier seasons or years; such soils are called vertisols. Soils with smectite clay minerals, including montmorillonite and bentonite, have the most dramatic shrink–swell capacity.
The mineral make-up of this type of soil is responsible for the moisture retaining capabilities. All clays consist of mineral sheets packaged into layers, and can be classified as either 1:1 or 2:1. These ratios refer to the proportion of tetrahedral sheets to octahedral sheets. Octahedral sheets are sandwiched between two tetrahedral sheets in 2:1 clays, while 1:1 clays have sheets in matched pairs. Expansive clays have an expanding crystal lattice in a 2:1 ratio; however, there are 2:1 non-expansive clays.
Mitigation of the effects of expansive clay on structures built in areas with expansive clays is a major challenge in geotechnical engineering. Some areas mitigate foundation cracking by watering around the foundation with a soaker hose during dry conditions. This process can be automated by a timer, or using a soil moisture sensor controller. Even though irrigation is expensive, the cost is small compared to repairing a cracked foundation. Admixtures can be added to expansive clays to reduce the shrink-swell properties, as well.
One laboratory test to measure the expansion potential of soil is ASTM D 4829.
See also
Argillipedoturbation
Dispersion (soil)
References
Types of soil
Soil mechanics
Soil physics
Sediments | Expansive clay | [
"Physics"
] | 350 | [
"Soil mechanics",
"Applied and interdisciplinary physics",
"Soil physics"
] |
5,145,367 | https://en.wikipedia.org/wiki/Nanoionics | Nanoionics is the study and application of phenomena, properties, effects, methods and mechanisms of processes connected with fast ion transport (FIT) in all-solid-state nanoscale systems. The topics of interest include fundamental properties of oxide ceramics at nanometer length scales, and fast-ion conductor (advanced superionic conductor)/electronic conductor heterostructures. Potential applications are in electrochemical devices (electrical double layer devices) for conversion and storage of energy, charge and information. The term and conception of nanoionics (as a new branch of science) were first introduced by A.L. Despotuli and V.I. Nikolaichik (Institute of Microelectronics Technology and High Purity Materials, Russian Academy of Sciences, Chernogolovka) in January 1992.
A multidisciplinary scientific and industrial field of solid state ionics, dealing with ionic transport phenomena in solids, considers nanoionics as its new division. Nanoionics tries to describe, for example, diffusion and reactions in terms that make sense only at the nanoscale, e.g., in terms of a non-uniform (at the nanoscale) potential landscape.
There are two classes of solid-state ionic nanosystems and two fundamentally different nanoionics: (I) nanosystems based on solids with low ionic conductivity, and (II) nanosystems based on advanced superionic conductors (e.g. alpha–AgI, the rubidium silver iodide family). Nanoionics-I and nanoionics-II differ from each other in the design of interfaces. The role of boundaries in nanoionics-I is the creation of conditions for high concentrations of charged defects (vacancies and interstitials) in a disordered space-charge layer. But in nanoionics-II, it is necessary to conserve the original highly ionically conductive crystal structures of advanced superionic conductors at ordered (lattice-matched) heteroboundaries. Nanoionics-I can significantly enhance (up to ~10⁸ times) the 2D-like ion conductivity in nanostructured materials with structural coherence, but this remains ~10³ times smaller than the 3D ionic conductivity of advanced superionic conductors.
The classical theory of diffusion and migration in solids is based on the notion of a diffusion coefficient, activation energy and electrochemical potential. This means that accepted is the picture of a hopping ion transport in the potential landscape where all barriers are of the same height (uniform potential relief). Despite the obvious difference of objects of solid state ionics and nanoionics-I, -II, the true new problem of fast-ion transport and charge/energy storage (or transformation) for these objects (fast-ion conductors) has a special common basis: non-uniform potential landscape on nanoscale (for example) which determines the character of the mobile ion subsystem response to an impulse or harmonic external influence, e.g. a weak influence in Dielectric spectroscopy (impedance spectroscopy).
Characteristics
Being a branch of nanoscience and nanotechnology, nanoionics is unambiguously defined by its own objects (nanostructures with FIT), subject matter (properties, phenomena, effects, mechanisms of processes, and applications connected with FIT at nano-scale), method (interface design in nanosystems of superionic conductors), and the criterion (R/L ~1, where R is the length scale of device structures, and L is the characteristic length on which the properties, characteristics, and other parameters connected with FIT change drastically).
The International Technology Roadmap for Semiconductors (ITRS) relates nanoionics-based resistive switching memories to the category of "emerging research devices" ("ionic memory"). The area of close intersection of nanoelectronics and nanoionics has been called nanoelionics (1996). Now, the vision of future nanoelectronics constrained solely by fundamental ultimate limits is being formed in advanced research. The ultimate physical limits to computation are very far beyond the currently attained (10¹⁰ cm⁻², 10¹⁰ Hz) region. What kind of logic switches might be used at near-nm and sub-nm peta-scale integration? The question was already the subject matter of earlier work in which the term "nanoelectronics" was not yet used. Quantum mechanics constrains electronic distinguishable configurations by the tunneling effect at the tera-scale. To overcome the 10¹² cm⁻² bit-density limit, atomic and ion configurations with a characteristic dimension of L < 2 nm should be used in the information domain, and materials with an effective mass of information carriers m* considerably larger than the electronic one are required: m* = 13 m_e at L = 1 nm, m* = 53 m_e (L = 0.5 nm) and m* = 336 m_e (L = 0.2 nm). Future short-sized devices may be nanoionic, i.e. based on fast-ion transport at the nanoscale, as was first stated in the early nanoionics literature.
Examples
The examples of nanoionic devices are all-solid-state supercapacitors with fast-ion transport at the functional heterojunctions (nanoionic supercapacitors), lithium batteries and fuel cells with nanostructured electrodes, nano-switches with quantized conductivity on the basis of fast-ion conductors (see also memristors and programmable metallization cell). These are well compatible with sub-voltage and deep-sub-voltage nanoelectronics and could find wide applications, for example in autonomous micro power sources, RFID, MEMS, smartdust, nanomorphic cell, other micro- and nanosystems, or reconfigurable memory cell arrays.
An important case of fast-ionic conduction in solid states is in the surface space-charge layer of ionic crystals. Such conduction was first predicted by Kurt Lehovec. A significant role of boundary conditions with respect to ionic conductivity was first experimentally discovered by C.C. Liang who found an anomalously high conduction in the LiI-Al2O3 two-phase system. Because a space-charge layer with specific properties has nanometer thickness, the effect is directly related to nanoionics (nanoionics-I). The Lehovec effect has become the basis for the creation of a multitude of nanostructured fast-ion conductors which are used in modern portable lithium batteries and fuel cells. In 2012, a 1D structure-dynamic approach was developed in nanoionics for a detailed description of the space charge formation and relaxation processes in irregular potential relief (direct problem) and interpretation of characteristics of nanosystems with fast-ion transport (inverse problem), as an example, for the description of a collective phenomenon: coupled ion transport and dielectric-polarization processes which lead to A. K. Jonscher's "universal" dynamic response.
See also
Programmable metallization cell
References
Nanoelectronics | Nanoionics | [
"Materials_science"
] | 1,454 | [
"Nanotechnology",
"Nanoelectronics"
] |
5,146,898 | https://en.wikipedia.org/wiki/International%20premium%20rate%20service | International premium rate service (IPRS) refers to internationally available telephone-based premium services. It is analogous to "900" or "976" numbers in North America, which always incur a recipient-defined charge in excess of regular call charges. Internationally, this service has been allocated country code +979. IPRS numbers are known as UIPRNs.
The ITU recommendation for IPRS defines four charge categories, across which the +979 numbering space is divided:
UIPRNs are only available to premium service providers who will provide their service to more than one country.
The format of the numbers will be 979 a bcdefghi, where the a digit, 1, 3, 5 or 9, will indicate the charging band, and the remaining digits will be the IPRS subscriber's number. 1, 3 and 5 indicate charge bands 1, 2 and 3, respectively, while 9 will indicate a special charge band.
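As an illustration of that layout, the small Python helper below (hypothetical, not part of any ITU tooling) extracts the charging-band indicator from a UIPRN written with the +979 prefix.

```python
# Mapping of the 'a' digit (the fourth digit overall) to the charge band described above.
BAND_BY_DIGIT = {"1": "charge band 1", "3": "charge band 2",
                 "5": "charge band 3", "9": "special charge band"}

def uiprn_charge_band(number: str) -> str:
    """Return the charge band of a UIPRN of the form '979 a bcdefghi' (12 digits total)."""
    digits = "".join(ch for ch in number if ch.isdigit())
    if not digits.startswith("979") or len(digits) != 12:
        raise ValueError("not a +979 UIPRN")
    return BAND_BY_DIGIT.get(digits[3], "unassigned band indicator")

print(uiprn_charge_band("+979 1 23456789"))   # -> charge band 1
```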
The service is not yet known to be active. Once it is activated, it may not be available from every country, even though there is minimal or no cost to the receiving customer, as interchange rates would need to be negotiated for countries to allow origination of calls.
Usage
IPRS phone numbers are used by the German malware manufacturer FinFisher for their Android Trojans.
References
External links
ITU IPRS page
Telephony
Telephone numbers
Special international telephone services | International premium rate service | [
"Mathematics"
] | 288 | [
"Mathematical objects",
"Numbers",
"Telephone numbers"
] |
20,320,320 | https://en.wikipedia.org/wiki/Brillouin%27s%20theorem | In quantum chemistry, Brillouin's theorem, proposed by the French physicist Léon Brillouin in 1934, relates to Hartree–Fock wavefunctions. Hartree–Fock, or the self-consistent field method, is a non-relativistic method of generating approximate wavefunctions for a many-bodied quantum system, based on the assumption that each electron is exposed to an average of the positions of all other electrons, and that the solution is a linear combination of pre-specified basis functions.
The theorem states that, given a self-consistent optimized Hartree–Fock wavefunction \Psi_0, the matrix element of the Hamiltonian between this ground state and a singly excited determinant \Psi_a^r (i.e. one where an occupied orbital a is replaced by a virtual orbital r) must be zero.
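In this notation (the symbols \Psi_0 and \Psi_a^r above are editorial, since the article's original symbols were not preserved in this text), the statement reads compactly:

```latex
\langle \Psi_0 \,|\, \hat{H} \,|\, \Psi_a^{r} \rangle
  \;=\; \langle \varphi_a \,|\, \hat{f} \,|\, \varphi_r \rangle
  \;=\; 0 ,
```

where \hat{f} is the Fock operator; the reduction of the many-electron matrix element to a one-electron Fock matrix element follows from the Slater–Condon rules, as used in the proof below.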
This theorem is important in constructing a configuration interaction method, among other applications.
Another interpretation of the theorem is that the ground electronic states solved by one-particle methods (such as HF or DFT) already imply configuration interaction of the ground-state configuration with the singly excited ones. That renders their further inclusion into the CI expansion redundant.
Proof
The electronic Hamiltonian of the system can be divided into two parts. One consists of one-electron operators, describing the kinetic energy of the electron and the Coulomb interaction (that is, electrostatic attraction) with the nucleus. The other is the two-electron operators, describing the Coulomb interaction (electrostatic repulsion) between electrons.
One-electron operator
Two-electron operator
In methods of wavefunction-based quantum chemistry which include the electron correlation into the model, the wavefunction is expressed as a sum of series consisting of different Slater determinants (i.e., a linear combination of such determinants). In the simplest case of configuration interaction (as well as in other single-reference multielectron-basis set methods, like MPn, etc.), all the determinants contain the same one-electron functions, or orbitals, and differ just by occupation of these orbitals by electrons. The source of these orbitals is the converged Hartree–Fock calculation, which gives the so-called reference determinant with all the electrons occupying energetically lowest states among the available.
All other determinants are then made by formally "exciting" the reference determinant (one or more electrons are removed from one-electron states occupied in \Psi_0 and put into states unoccupied in \Psi_0). As the orbitals remain the same, we can simply transition from the many-electron state basis (\Psi_0, \Psi_a^r, \Psi_{ab}^{rs}, ...) to the one-electron state basis used for Hartree–Fock (\varphi_1, \varphi_2, \varphi_3, ...), greatly improving the efficiency of calculations. For this transition, we apply the Slater–Condon rules and evaluate
\langle \Psi_0 \,|\, \hat{H} \,|\, \Psi_a^r \rangle = \langle \varphi_a \,|\, \hat{f} \,|\, \varphi_r \rangle,
which we recognize is simply an off-diagonal element of the Fock matrix . But the reference wave function was obtained by the Hartree–Fock calculation, or the SCF procedure, the whole point of which was to diagonalize the Fock matrix. Hence for an optimized wavefunction this off-diagonal element must be zero.
This can also be made evident if we multiply both sides of a Hartree–Fock equation, \hat{f}\,|\varphi_a\rangle = \varepsilon_a |\varphi_a\rangle,
by \langle\varphi_r| and integrate over the electronic coordinate:
\langle\varphi_r|\,\hat{f}\,|\varphi_a\rangle = \varepsilon_a \langle\varphi_r|\varphi_a\rangle.
As the Fock matrix has already been diagonalized, the states \varphi_a and \varphi_r are eigenstates of the Fock operator, and as such are orthogonal; thus their overlap is zero. This makes the whole right-hand side of the equation zero,
\langle\varphi_r|\,\hat{f}\,|\varphi_a\rangle = 0,
which proves Brillouin's theorem.
The theorem has also been proven directly from the variational principle (by Mayer) and is essentially equivalent to the Hartree–Fock equations in general.
References
Further reading
Quantum chemistry
Theoretical chemistry | Brillouin's theorem | [
"Physics",
"Chemistry"
] | 792 | [
"Quantum chemistry",
"Quantum mechanics",
"Theoretical chemistry",
" molecular",
"nan",
"Atomic",
" and optical physics"
] |
24,339,475 | https://en.wikipedia.org/wiki/Johansen%20test | In statistics, the Johansen test, named after Søren Johansen, is a procedure for testing cointegration of several, say k, I(1) time series. This test permits more than one cointegrating relationship so is more generally applicable than the Engle-Granger test which is based on the Dickey–Fuller (or the augmented) test for unit roots in the residuals from a single (estimated) cointegrating relationship.
Types
There are two types of Johansen test, either with trace or with eigenvalue, and the inferences might be a little bit different. The null hypothesis for the trace test is that the number of cointegration vectors is r = r* < k, vs. the alternative that r = k. Testing proceeds sequentially for r* = 1,2, etc. and the first non-rejection of the null is taken as an estimate of r. The null hypothesis for the "maximum eigenvalue" test is as for the trace test but the alternative is r = r* + 1 and, again, testing proceeds sequentially for r* = 1,2,etc., with the first non-rejection used as an estimator for r.
Just like a unit root test, there can be a constant term, a trend term, both, or neither in the model. For a general VAR(p) model
x_t = \mu + A_1 x_{t-1} + \dots + A_p x_{t-p} + \varepsilon_t, \qquad t = 1, \dots, T,
there are two possible specifications for error correction, that is, two vector error correction models (VECM):
1. The long-run VECM:
\Delta x_t = \mu + \Pi\, x_{t-p} + \Gamma_1 \Delta x_{t-1} + \dots + \Gamma_{p-1} \Delta x_{t-p+1} + \varepsilon_t,
where
\Gamma_i = A_1 + \dots + A_i - I, \qquad i = 1, \dots, p-1.
2. The transitory VECM:
\Delta x_t = \mu + \Pi\, x_{t-1} + \Gamma_1 \Delta x_{t-1} + \dots + \Gamma_{p-1} \Delta x_{t-p+1} + \varepsilon_t,
where
\Gamma_i = -(A_{i+1} + \dots + A_p), \qquad i = 1, \dots, p-1.
The two specifications are equivalent. In both VECMs,
\Pi = A_1 + \dots + A_p - I.
Inferences are drawn on \Pi, and they will be the same in either specification, as will the explanatory power.
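In practice the test is usually run from a statistics package. The Python sketch below uses the Johansen test implementation in statsmodels; the function location and the result attributes (lr1, lr2, cvt, cvm) are assumptions about that library's API and should be verified against its documentation.

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen  # assumed import path

rng = np.random.default_rng(0)
common = np.cumsum(rng.normal(size=500))        # shared I(1) stochastic trend
y1 = common + rng.normal(size=500)              # two series sharing that trend,
y2 = 0.5 * common + rng.normal(size=500)        # so one cointegrating relation is expected
data = np.column_stack([y1, y2])

# Second argument: deterministic term order (0 = constant); third: number of lagged differences.
res = coint_johansen(data, 0, 1)
print("trace statistics:    ", res.lr1)         # compare with critical values in res.cvt
print("max-eigenvalue stats:", res.lr2)         # compare with critical values in res.cvm
```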
References
Further reading
Mathematical finance
Time series statistical tests | Johansen test | [
"Mathematics"
] | 371 | [
"Applied mathematics",
"Mathematical finance"
] |
24,342,385 | https://en.wikipedia.org/wiki/C20H24N2O5 | {{DISPLAYTITLE:C20H24N2O5}}
The molecular formula C20H24N2O5 (molar mass: 372.41 g/mol, exact mass: 372.1685 u) may refer to:
Codoxime
Medroxalol
Molecular formulas | C20H24N2O5 | [
"Physics",
"Chemistry"
] | 66 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,342,422 | https://en.wikipedia.org/wiki/C24H24N2O4 | {{DISPLAYTITLE:C24H24N2O4}}
The molecular formula C24H24N2O4 (molar mass: 404.45 g/mol, exact mass: 404.1736 u) may refer to:
Abecarnil (ZK-112,119)
Nicocodeine
Molecular formulas | C24H24N2O4 | [
"Physics",
"Chemistry"
] | 71 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,342,746 | https://en.wikipedia.org/wiki/C22H27NO | {{DISPLAYTITLE:C22H27NO}}
The molecular formula C22H27NO (molar mass: 321.45 g/mol, exact mass: 321.2093 u) may refer to:
Etybenzatropine, also known as ethybenztropine and tropethydrylin
Phenazocine
Molecular formulas | C22H27NO | [
"Physics",
"Chemistry"
] | 77 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,342,830 | https://en.wikipedia.org/wiki/C22H27NO3 | {{DISPLAYTITLE:C22H27NO3}}
The molecular formula C22H27NO3 (molar mass: 353.45 g/mol, exact mass: 353.1991 u) may refer to:
Dioxaphetyl butyrate
Oxpheneridine
Molecular formulas | C22H27NO3 | [
"Physics",
"Chemistry"
] | 67 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
24,342,921 | https://en.wikipedia.org/wiki/C23H30N2O2 | {{DISPLAYTITLE:C23H30N2O2}}
The molecular formula C23H30N2O2 (molar mass: 366.49 g/mol) may refer to:
Fumigaclavine C
Ohmefentanyl
Piminodine, an analgesic
Molecular formulas | C23H30N2O2 | [
"Physics",
"Chemistry"
] | 70 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |