| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
13,700,509 | https://en.wikipedia.org/wiki/Norcocaine | Norcocaine is a minor metabolite of cocaine. It is the only confirmed pharmacologically active metabolite of cocaine, although salicylmethylecgonine is also speculated to be an active metabolite. The local anesthetic potential of norcocaine has been shown to be higher than that of cocaine; however, cocaine continues to be more widely used. Norcocaine used for research purposes is typically synthesized from cocaine, and several methods for the synthesis have been described.
Legal status
The legal status of norcocaine is somewhat ambiguous. The US DEA does not list norcocaine as a controlled substance. However, some suppliers of norcocaine, like Sigma-Aldrich, consider the drug to be a Schedule II drug (same as cocaine) for the purpose of their own sales.
Toxicity
The LD50 of norcocaine has been studied in mice. When administered by the intraperitoneal route, the LD50 in mice was 40 mg/kg.
Controversy
Some researchers have suggested that hair drug testing for cocaine use should include testing for metabolites such as norcocaine. The basis for this suggestion is the potential for external contamination of hair during testing. There is considerable debate about whether current methods of washing hair samples are sufficient to remove external contamination: some researchers state that the methods are sufficient, while others hold that residual contamination may result in a false positive test. Metabolites of cocaine, such as norcocaine, should be present alongside cocaine itself in samples from drug users, and some authors have stated that metabolites should be present in any sample declared positive. Issues arise because the metabolites are present only in low concentrations, and even when metabolites are detected, they may themselves originate from contamination.
References
Cocaine
Tropanes
Stimulants
Benzoate esters
Recreational drug metabolites
Human drug metabolites | Norcocaine | [
"Chemistry"
] | 393 | [
"Chemicals in medicine",
"Human drug metabolites"
] |
13,704,501 | https://en.wikipedia.org/wiki/ISO%2016750 | ISO 16750, Road vehicles – Environmental conditions and testing for electrical and electronic equipment, is a series of ISO standards which provide guidance regarding environmental conditions commonly encountered by electrical and electronic systems installed in automobiles and specify requirements and tests.
ISO 16750 has five parts:
ISO 16750-1: General
ISO 16750-2: Electrical loads
ISO 16750-3: Mechanical loads
ISO 16750-4: Climatic loads
ISO 16750-5: Chemical loads
A similar series of ISO standards, ISO 19453 (now withdrawn; see https://www.iso.org/standard/64930.html), exists for electrical and electronic equipment for the drive system of electric vehicles.
References
16750
Automotive engineering | ISO 16750 | [
"Engineering"
] | 144 | [
"Automotive engineering",
"Mechanical engineering by discipline"
] |
13,705,033 | https://en.wikipedia.org/wiki/Edmonston%20Pumping%20Plant | Edmonston Pumping Plant is a pumping station near the south end of the California Aqueduct, which is the principal feature of the California State Water Project. It lifts water 1,926 feet (587 m) to cross the Tehachapi Mountains, where it splits into the west and east branches of the California Aqueduct serving Southern California. It is the most powerful water lifting system in the world, not considering pumped-storage hydroelectricity stations.
There are 14 four-stage, 80,000-horsepower centrifugal pumps that push the water up to the top of the mountain. Each motor-pump unit stands 65 feet high and weighs 420 tons. The pumps themselves extend downward six floors. Each unit discharges water into a manifold that connects to the main discharge lines. The two main discharge lines stairstep up the mountain in an 8,400-foot-long tunnel. They are 12.5 feet in diameter for the first half and 14 feet in diameter for the last half. They each contain 8.5 million gallons of water at all times. At full capacity, the pumps can send nearly 2 million gallons per minute up over the Tehachapis. A 68-foot-high, 50-foot-diameter surge tank is located at the top of the mountain; this prevents tunnel damage when the valves to the pumps are suddenly opened or closed. Near the top of the lift there are valves which can close the discharge lines to prevent backflow into the pumping plant below in the event of a rupture. The station consumes up to 787 MW of electricity, delivered through a dedicated 230 kV transmission line from the nearby Southern California Edison Pastoria substation.
Characteristics
Number of units: 14 (two galleries of 7)
Normal static head: 1,926 ft (587 m)
Motor rating: 80,000 hp each
Total motor rating: 1,120,000 hp
Flow per motor at design head: 315 ft³/s (9 m³/s)
Total flow at design head: 4,410 ft³/s (125 m³/s)
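These characteristics are mutually consistent with the prose above. The following minimal Python sketch (constants and variable names chosen here only for illustration) cross-checks the per-unit flow against the total and against the "nearly 2 million gallons per minute" figure quoted earlier:

```python
# Cross-check of the quoted Edmonston figures (values taken from this article).
CFS_TO_GPM = 448.831  # 1 cubic foot per second ~= 448.831 US gallons per minute

units = 14
flow_per_unit_cfs = 315                 # ft^3/s per motor at design head
total_flow_cfs = units * flow_per_unit_cfs
print(total_flow_cfs)                   # 4410 ft^3/s, matching the stated total
print(total_flow_cfs * CFS_TO_GPM / 1e6)  # ~1.98 million gal/min ("nearly 2 million")

hp_per_unit = 80_000                    # horsepower per pump, as stated above
print(units * hp_per_unit)              # 1,120,000 hp combined motor rating
```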
References
External links
DWR Edmonston Pumping Station
A.D. Edmonston Pumping Plant
The Big Lift: A photo tour of the State Water Project’s Edmonston Pumping Plant
California State Water Project
Buildings and structures in Kern County, California
Water supply pumping stations in the United States
Water supply infrastructure in California
Interbasin transfer
San Joaquin Valley
Tehachapi Mountains | Edmonston Pumping Plant | [
"Environmental_science"
] | 474 | [
"Hydrology",
"Interbasin transfer"
] |
1,630,483 | https://en.wikipedia.org/wiki/Prime%20power | In mathematics, a prime power is a positive integer which is a positive integer power of a single prime number.
For example, 5 = 5^1, 9 = 3^2, and 16 = 2^4 are prime powers, while 6 = 2 × 3, 10 = 2 × 5, and 36 = 6^2 are not.
The sequence of prime powers begins:
2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49, 53, 59, 61, 64, 67, 71, 73, 79, 81, 83, 89, 97, 101, 103, 107, 109, 113, 121, 125, 127, 128, 131, 137, 139, 149, 151, 157, 163, 167, 169, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 243, 251, … .
The prime powers are those positive integers that are divisible by exactly one prime number; in particular, the number 1 is not a prime power. Prime powers are also called primary numbers, as in the primary decomposition.
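This characterization translates directly into a simple test. The sketch below (the function name is_prime_power is chosen here for illustration, not taken from any library) reproduces the start of the sequence listed above:

```python
def is_prime_power(n: int) -> bool:
    """Return True if n is divisible by exactly one prime number."""
    if n < 2:
        return False  # 1 has no prime divisors, so it is not a prime power
    distinct_primes = 0
    d = 2
    while d * d <= n:
        if n % d == 0:
            distinct_primes += 1
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:                 # whatever remains is itself a prime factor
        distinct_primes += 1
    return distinct_primes == 1

# Reproduces the start of the sequence listed above.
print([n for n in range(2, 50) if is_prime_power(n)])
# [2, 3, 4, 5, 7, 8, 9, 11, 13, 16, 17, 19, 23, 25, 27, 29, 31, 32, 37, 41, 43, 47, 49]
```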
Properties
Algebraic properties
Prime powers are powers of prime numbers. Every prime power (except powers of 2 greater than 4) has a primitive root; thus the multiplicative group of integers modulo p^n (that is, the group of units of the ring Z/p^nZ) is cyclic.
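The cyclicity claim can be checked by brute force for small moduli. The following illustrative Python sketch (the example moduli are chosen here, not taken from the article) computes the group of units modulo a few prime powers and tests whether some element generates it:

```python
from math import gcd

def units_mod(m):
    """All residues in 1..m-1 that are coprime to m (the group of units)."""
    return [a for a in range(1, m) if gcd(a, m) == 1]

def has_primitive_root(m):
    """True if the multiplicative group of integers modulo m is cyclic."""
    us = units_mod(m)
    group_order = len(us)
    for g in us:
        # multiplicative order of g: smallest k with g^k = 1 (mod m)
        x, k = g % m, 1
        while x != 1:
            x = (x * g) % m
            k += 1
        if k == group_order:
            return True
    return False

print(has_primitive_root(9))   # True: 3^2 has a primitive root
print(has_primitive_root(4))   # True: 4 is the largest power of 2 with one
print(has_primitive_root(8))   # False: powers of 2 greater than 4 are the exception
```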
The number of elements of a finite field is always a prime power and conversely, every prime power occurs as the number of elements in some finite field (which is unique up to isomorphism).
Combinatorial properties
A property of prime powers used frequently in analytic number theory is that the set of prime powers which are not prime is a small set in the sense that the infinite sum of their reciprocals converges, although the primes are a large set.
Divisibility properties
The totient function (φ) and the divisor functions σ0 and σ1 of a prime power p^n are calculated by the formulas
φ(p^n) = p^n − p^(n−1) = p^(n−1)(p − 1),
σ0(p^n) = n + 1,
σ1(p^n) = 1 + p + p^2 + … + p^n = (p^(n+1) − 1)/(p − 1).
All prime powers are deficient numbers. A prime power p^n is an n-almost prime. It is not known whether a prime power p^n can be a member of an amicable pair. If there is such a number, then p^n must be greater than 10^1500 and n must be greater than 1400.
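A small worked check of the formulas above, illustrated here for the prime power 3^4 = 81 (the brute-force helpers are written only for this example):

```python
from math import gcd

def brute_phi(m):
    """Count integers in 1..m coprime to m (brute-force totient)."""
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

p, n = 3, 4                      # example prime power 3^4 = 81
q = p ** n
assert brute_phi(q) == q - q // p                          # phi(p^n) = p^n - p^(n-1)
assert len(divisors(q)) == n + 1                           # sigma_0(p^n) = n + 1
assert sum(divisors(q)) == (p ** (n + 1) - 1) // (p - 1)   # sigma_1, geometric sum
assert sum(divisors(q)) < 2 * q                            # prime powers are deficient
print("all checks pass for", q)
```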
See also
Almost prime
Fermi–Dirac prime
Perfect power
Semiprime
References
Further reading
Jones, Gareth A. and Jones, J. Mary (1998). Elementary Number Theory. Springer-Verlag, London.
Prime numbers
Exponentials
Number theory
Integer sequences | Prime power | [
"Mathematics"
] | 525 | [
"Sequences and series",
"Discrete mathematics",
"Integer sequences",
"Mathematical structures",
"Recreational mathematics",
"Prime numbers",
"Mathematical objects",
"Combinatorics",
"E (mathematical constant)",
"Exponentials",
"Numbers",
"Number theory"
] |
1,630,673 | https://en.wikipedia.org/wiki/Decay%20heat | Decay heat is the heat released as a result of radioactive decay. This heat is produced as an effect of radiation on materials: the energy of the alpha, beta or gamma radiation is converted into the thermal movement of atoms.
Decay heat occurs naturally from decay of long-lived radioisotopes that are primordially present from the Earth's formation.
In nuclear reactor engineering, decay heat continues to be generated after the reactor has been shut down (see SCRAM and nuclear chain reactions) and power generation has been suspended. The decay of the short-lived radioisotopes such as iodine-131 created in fission continues at high power for a time after shut down. The major source of heat production in a newly shut down reactor is due to the beta decay of new radioactive elements recently produced from fission fragments in the fission process.
Quantitatively, at the moment of reactor shutdown, decay heat from these radioactive sources is still 6.5% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week, it will be only 0.2%. Because radioisotopes of all half-life lengths are present in nuclear waste, enough decay heat continues to be produced in spent fuel rods to require them to spend a minimum of one year, and more typically 10 to 20 years, in a spent fuel pool of water before being further processed. However, the heat produced during this time is still only a small fraction (less than 10%) of the heat produced in the first week after shutdown.
If no cooling system is working to remove the decay heat from a crippled and newly shut down reactor, the decay heat may cause the core of the reactor to reach unsafe temperatures within a few hours or days, depending upon the type of core. These extreme temperatures can lead to minor fuel damage (e.g. a few fuel particle failures (0.1 to 0.5%) in a graphite-moderated, gas-cooled design) or even major core structural damage (meltdown) in a light water reactor or liquid metal fast reactor. Chemical species released from the damaged core material may lead to further explosive reactions (steam or hydrogen) which may further damage the reactor.
Natural occurrence
Naturally occurring decay heat is a significant input to Earth's internal heat budget. Radioactive isotopes of uranium, thorium and potassium are the primary contributors to this decay heat, and this radioactive decay is the primary source of heat from which geothermal energy derives.
Decay heat has significant importance in astrophysical phenomena. For example, the light curves of Type Ia supernovae are widely thought to be powered by the heating provided by radioactive products from the decay of nickel and cobalt into iron (Type Ia light curve).
Power reactors in shutdown
In a typical nuclear fission reaction, 187 MeV of energy are released instantaneously in the form of kinetic energy from the fission products, kinetic energy from the fission neutrons, instantaneous gamma rays, or gamma rays from the capture of neutrons. An additional 23 MeV of energy are released at some time after fission from the beta decay of fission products. About 10 MeV of the energy released from the beta decay of fission products is in the form of neutrinos, and since neutrinos are very weakly interacting, this 10 MeV of energy will not be deposited in the reactor core. This results in 13 MeV (6.5% of the total fission energy) being deposited in the reactor core from delayed beta decay of fission products, at some time after any given fission reaction has occurred. In a steady state, this heat from delayed fission product beta decay contributes 6.5% of the normal reactor heat output.
When a nuclear reactor has been shut down, and nuclear fission is not occurring at a large scale, the major source of heat production will be due to the delayed beta decay of these fission products (which originated as fission fragments). For this reason, at the moment of reactor shutdown, decay heat will be about 6.5% of the previous core power if the reactor has had a long and steady power history. About 1 hour after shutdown, the decay heat will be about 1.5% of the previous core power. After a day, the decay heat falls to 0.4%, and after a week it will be only 0.2%. The decay heat production rate will continue to slowly decrease over time; the decay curve depends upon the proportions of the various fission products in the core and upon their respective half-lives.
An approximation for the decay heat curve valid from 10 seconds to 100 days after shutdown is

P(τ) / P0 = 0.066 [ (τ − τs)^(−0.2) − τ^(−0.2) ]

where τ is the time since reactor startup, P(τ) is the decay power at time τ, P0 is the reactor power before shutdown, and τs is the time of reactor shutdown measured from the time of startup (in seconds), so that τ − τs is the elapsed time since shutdown.
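As a rough numerical illustration of the approximation above, the following Python sketch assumes one year of steady operation before shutdown (an assumption made only for this example) and evaluates the curve at the times quoted earlier; it reproduces the same order of magnitude as the cited percentages:

```python
def decay_heat_fraction(t_after_shutdown_s, operating_time_s):
    """Fraction of pre-shutdown power from the approximation above (valid ~10 s to 100 d)."""
    tau_s = operating_time_s                 # shutdown time measured from startup
    tau = tau_s + t_after_shutdown_s         # current time measured from startup
    return 0.066 * ((tau - tau_s) ** -0.2 - tau ** -0.2)

year = 3.15e7  # seconds of steady operation assumed before shutdown
for label, t in [("1 hour", 3600.0), ("1 day", 86400.0), ("1 week", 604800.0)]:
    print(label, f"{decay_heat_fraction(t, year):.2%}")
# Roughly 1%, 0.5% and 0.25% -- the same order of magnitude as the
# 1.5%, 0.4% and 0.2% figures cited in the text above.
```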
For an approach with a more direct physical basis, some models use the fundamental concept of radioactive decay. Used nuclear fuel contains a large number of different isotopes that contribute to decay heat, which are all subject to the radioactive decay law, so some models consider decay heat to be a sum of exponential functions with different decay constants and initial contribution to the heat rate. A more accurate model would consider the effects of precursors, since many isotopes follow several steps in their radioactive decay chain, and the decay of daughter products will have a greater effect longer after shutdown.
The removal of the decay heat is a significant reactor safety concern, especially shortly after normal shutdown or following a loss-of-coolant accident. Failure to remove decay heat may cause the reactor core temperature to rise to dangerous levels and has caused nuclear accidents, including the nuclear accidents at Three Mile Island and Fukushima I. The heat removal is usually achieved through several redundant and diverse systems, from which heat is removed via heat exchangers. Water is passed through the secondary side of the heat exchanger via the essential service water system which dissipates the heat into the 'ultimate heat sink', often a sea, river or large lake. In locations without a suitable body of water, the heat is dissipated into the air by recirculating the water via a cooling tower. The failure of ESWS circulating pumps was one of the factors that endangered safety during the 1999 Blayais Nuclear Power Plant flood.
Spent fuel
After one year, typical spent nuclear fuel generates about 10 kW of decay heat per tonne, decreasing to about 1 kW/t after ten years. Hence effective active or passive cooling for spent nuclear fuel is required for a number of years.
See also
Decay energy
Spent fuel pool
Dry cask storage
Radioisotope thermoelectric generator
References
External links
DOE fundamentals handbook - Decay heat, Nuclear physics and reactor theory - volume 2 of 2, module 4, page 61
Decay Heat Estimates for MNR, page 2.
Spent Nuclear Fuel Explorer Java applet showing activity and decay heat as a function of time
Nuclear technology
Heat transfer
Nuclear reactor safety | Decay heat | [
"Physics",
"Chemistry"
] | 1,462 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Nuclear technology",
"Thermodynamics",
"Nuclear physics"
] |
1,630,999 | https://en.wikipedia.org/wiki/Registry%20of%20Toxic%20Effects%20of%20Chemical%20Substances | Registry of Toxic Effects of Chemical Substances (RTECS) is a database of toxicity information compiled from the open scientific literature without reference to the validity or usefulness of the studies reported. Until 2001 it was maintained by the US National Institute for Occupational Safety and Health (NIOSH) as a freely available publication. It is now maintained by the private company BIOVIA and distributed by several value-added resellers; it is available only for a fee or by subscription.
Contents
Six types of toxicity data are included in the file:
Primary irritation
Mutagenic effects
Reproductive effects
Tumorigenic effects
Acute toxicity
Other multiple dose toxicity
Specific numeric toxicity values such as LD50, LC50, TDLo, and TCLo are noted, as well as the species studied and the route of administration used. For all data the bibliographic source is listed. The studies are not evaluated in any way.
History
RTECS was an activity mandated by the US Congress, established by Section 20(a)(6) of the Occupational Safety and Health Act of 1970 (PL 91-596). The original edition, known as the Toxic Substances List, was published on June 28, 1971, and included toxicological data for approximately 5,000 chemicals. The name was later changed to the current Registry of Toxic Effects of Chemical Substances. In January 2001 the database contained 152,970 chemicals. In December 2001 RTECS was transferred from NIOSH to the private company Elsevier MDL. Symyx acquired MDL from Elsevier in 2007, and the toxicity database was included in the acquisition. The database is now accessible only for a charge, on an annual subscription basis.
RTECS is available in English, French and Spanish language versions, offered by the Canadian Centre for Occupational Health and Safety. The database subscription is offered on the Web, on CD-ROM and in an Intranet format. The database is also available online from NISC (National Information Services Corporation), RightAnswer.com, and ToxPlanet (Timberlake Ventures, Inc.).
References
External links
RTECS overview
Accelrys website
RightAnswer Website
ToxPlanet Website
Biochemistry databases
Chemical safety
Chemical databases
Health sciences publications
Toxic effects of substances chiefly nonmedicinal as to source | Registry of Toxic Effects of Chemical Substances | [
"Chemistry",
"Biology",
"Environmental_science"
] | 444 | [
"Chemical accident",
"Toxicology",
"Biochemistry databases",
"Toxic effects of substances chiefly nonmedicinal as to source",
"Chemical databases",
"nan",
"Biochemistry",
"Chemical safety"
] |
1,631,010 | https://en.wikipedia.org/wiki/Risk-adjusted%20return%20on%20capital | Risk-adjusted return on capital (RAROC) is a risk-based profitability measurement framework for analysing risk-adjusted financial performance and providing a consistent view of profitability across businesses. The concept was developed by Bankers Trust and principal designer Dan Borge in the late 1970s. Note, however, that return on risk-adjusted capital (RORAC) is increasingly used as a measure, whereby the risk adjustment of capital is based on the capital adequacy guidelines outlined by the Basel Committee.
Basic formula
The formula is given by

RAROC = risk-adjusted (expected) return / economic capital
Broadly speaking, in business enterprises, risk is traded off against benefit. RAROC is defined as the ratio of risk-adjusted return to economic capital. Economic capital is the amount of money needed to secure survival in a worst-case scenario; it is a buffer against unexpected shocks in market values. Economic capital is a function of market risk, credit risk, and operational risk, and is often calculated by value at risk (VaR). This use of capital based on risk improves the capital allocation across different functional areas of banks, insurance companies, or any business in which capital is placed at risk for an expected return above the risk-free rate.
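To make the ratio concrete, here is a minimal sketch; the revenue/cost/expected-loss decomposition is one common convention rather than a prescribed standard, and all figures are invented for the example:

```python
def raroc(expected_revenue, costs, expected_losses,
          return_on_economic_capital, economic_capital):
    """Risk-adjusted return divided by economic capital (one common decomposition)."""
    risk_adjusted_return = (expected_revenue - costs - expected_losses
                            + return_on_economic_capital)
    return risk_adjusted_return / economic_capital

# Hypothetical business unit: all numbers are illustrative only.
print(raroc(expected_revenue=120.0, costs=60.0, expected_losses=25.0,
            return_on_economic_capital=5.0, economic_capital=200.0))  # 0.20 -> 20% RAROC
```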
The RAROC system allocates capital for two basic reasons:
Risk management
Performance evaluation
For risk management purposes, the main goal of allocating capital to individual business units is to determine the bank's optimal capital structure; that is, economic capital allocation is closely correlated with individual business risk. As a performance evaluation tool, it allows banks to assign capital to business units based on the economic value added of each unit.
Decision measures based on regulatory and economic capital
With the financial crisis of 2007 and the introduction of the Dodd–Frank Act and Basel III, the minimum required regulatory capital requirements have become onerous. These stringent regulatory capital requirements spurred debate about the validity of economic capital in managing an organization's portfolio composition, with some arguing that, because regulatory requirements are the binding constraint, organizations should focus entirely on the return on regulatory capital when measuring profitability and guiding portfolio composition. The counterargument highlights that concentration and diversification effects should play a prominent role in portfolio selection – dynamics recognized in economic capital, but not regulatory capital.
It did not take long for the industry to recognize the relevance and importance of both regulatory and economic measures, and to avoid focusing exclusively on one or the other. Relatively simple rules were devised to bring both regulatory and economic capital into the process. In 2012, researchers at Moody's Analytics designed a formal extension to the RAROC model that accounts for regulatory capital requirements as well as economic risks. In the framework, capital allocation can be represented as a composite capital measure (CCM) that is a weighted combination of economic and regulatory capital – with the weight on regulatory capital determined by the degree to which an organization is capital constrained.
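The composite measure just described can be written as a simple weighted blend. The sketch below assumes a single weight w on regulatory capital; the actual calibration used by Moody's Analytics is not reproduced here, and the numbers are illustrative only:

```python
def composite_capital(economic_capital, regulatory_capital, w):
    """Composite capital measure: weighted combination of economic and regulatory capital.
    w in [0, 1] reflects how capital-constrained the organization is (assumed weighting)."""
    return (1.0 - w) * economic_capital + w * regulatory_capital

# Illustrative only: a heavily constrained bank (w = 0.8) vs. a lightly constrained one (w = 0.2).
print(composite_capital(100.0, 140.0, 0.8))   # 132.0 -> closer to the regulatory figure
print(composite_capital(100.0, 140.0, 0.2))   # 108.0 -> closer to the economic figure
```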
See also
Enterprise risk management
Omega ratio
Risk return ratio
Risk-return spectrum
Sharpe ratio
Sortino ratio
Notes
References
External links
RAROC & Economic Capital
Between RAROC and a hard place
Actuarial science
Capital requirement
Financial ratios
Financial risk | Risk-adjusted return on capital | [
"Mathematics"
] | 622 | [
"Metrics",
"Applied mathematics",
"Quantity",
"Financial ratios",
"Actuarial science"
] |
1,631,015 | https://en.wikipedia.org/wiki/Aneutronic%20fusion | Aneutronic fusion is any form of fusion power in which very little of the energy released is carried by neutrons. While the lowest-threshold nuclear fusion reactions release up to 80% of their energy in the form of neutrons, aneutronic reactions release energy in the form of charged particles, typically protons or alpha particles. Successful aneutronic fusion would greatly reduce problems associated with neutron radiation such as damaging ionizing radiation, neutron activation, reactor maintenance, and requirements for biological shielding, remote handling and safety.
Since it is simpler to convert the energy of charged particles into electrical power than it is to convert energy from uncharged particles, an aneutronic reaction would be attractive for power systems. Some proponents see a potential for dramatic cost reductions by converting energy directly to electricity, as well as in eliminating the radiation from neutrons, which are difficult to shield against. However, the conditions required to harness aneutronic fusion are much more extreme than those required for deuterium–tritium (D–T) fusion such as at ITER.
History
The first experiments in the field started in 1939, and serious efforts have been continual since the early 1950s.
An early supporter was Richard F. Post at Lawrence Livermore. He proposed to capture the kinetic energy of charged particles as they were exhausted from a fusion reactor and convert this into voltage to drive current. Post helped develop the theoretical underpinnings of direct conversion, later demonstrated by Barr and Moir. They demonstrated a 48 percent energy capture efficiency on the Tandem Mirror Experiment in 1981.
Polywell fusion was pioneered by the late Robert W. Bussard in 1995 and funded by the US Navy. Polywell uses inertial electrostatic confinement. He founded EMC2 to continue polywell research.
A picosecond pulse of a 10-terawatt laser produced hydrogen–boron aneutronic fusions for a Russian team in 2005. However, the number of the resulting α particles (around 10^3 per laser pulse) was low.
In 2006, the Z-machine at Sandia National Laboratory, a z-pinch device, reached 2 billion kelvins and 300 keV.
In 2011, Lawrenceville Plasma Physics published initial results and outlined a theory and experimental program for aneutronic fusion with the dense plasma focus (DPF). The effort was initially funded by NASA's Jet Propulsion Laboratory. Support for other DPF aneutronic fusion investigations came from the Air Force Research Laboratory.
A French research team fused protons and boron-11 nuclei using a laser-accelerated proton beam and high-intensity laser pulse. In October 2013 they reported an estimated 80 million fusion reactions during a 1.5 nanosecond laser pulse.
In 2016, a team at the Shanghai Chinese Academy of Sciences produced a laser pulse of 5.3 petawatts with the Superintense Ultrafast Laser Facility (SULF) and expected to reach 10 petawatts with the same equipment.
In 2021, TAE Technologies field-reversed configuration announced that its Norman device was regularly producing a stable plasma at temperatures over 50 million degrees.
In 2021, a Russian team reported experimental results in a miniature device with electrodynamic (oscillatory) plasma confinement. It used a ~1–2 J nanosecond vacuum discharge with a virtual cathode. Its field accelerates boron ions and protons to ~100–300 keV under oscillating ions' collisions. A yield of roughly 10 α-particles per nanosecond was obtained during the 4 μs of applied voltage.
Australian spin-off company HB11 Energy was created in September 2019. In 2022, they claimed to be the first commercial company to demonstrate fusion.
Definition
Fusion reactions can be categorized according to their neutronicity: the fraction of the fusion energy released as energetic neutrons. The State of New Jersey defined an aneutronic reaction as one in which neutrons carry no more than 1% of the total released energy, although many papers on the subject include reactions that do not meet this criterion.
Coulomb barrier
The Coulomb barrier is the minimum energy required for the nuclei in a fusion reaction to overcome their mutual electrostatic repulsion. The repulsive force between a particle with charge Z1 and one with charge Z2 is proportional to Z1Z2/r², where r is the distance between them. The Coulomb barrier facing a pair of reacting, charged particles depends both on total charge and on how equally those charges are distributed; the barrier is lowest when a low-Z particle reacts with a high-Z one and highest when the reactants are of roughly equal charge. Barrier energy is thus minimized for those ions with the fewest protons.
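Because the barrier scales with the product of the nuclear charges, the relative difficulty of the candidate fuels discussed in this article can be compared from Z1·Z2 alone; a minimal sketch (charge numbers only, no cross-section data):

```python
# Relative Coulomb-barrier scale ~ Z1 * Z2 (charge product of the reactants).
fuels = {
    "D-T":     (1, 1),   # deuterium + tritium
    "D-3He":   (1, 2),   # deuterium + helium-3
    "3He-3He": (2, 2),
    "p-11B":   (1, 5),   # proton + boron-11
}
for name, (z1, z2) in fuels.items():
    print(f"{name:8s} Z1*Z2 = {z1 * z2}")
# D-T has the lowest charge product, consistent with it being the easiest fuel to ignite.
```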
Once the nuclear potential wells of the two reacting particles are within two proton radii of each other, the two can begin attracting one another via nuclear force. Because this interaction is much stronger than electromagnetic interaction, the particles will be drawn together despite the ongoing electrical repulsion, releasing nuclear energy. Nuclear force is a very short-range force, though, so it is a little oversimplified to say it increases with the number of nucleons. The statement is true when describing volume energy or surface energy of a nucleus, less true when addressing Coulomb energy, and does not speak to proton/neutron balance at all. Once reactants have gone past the Coulomb barrier, they're into a world dominated by a force that does not behave like electromagnetism.
In most fusion concepts, the energy needed to overcome the Coulomb barrier is provided by collisions with other fuel ions. In a thermalized fluid like a plasma, the temperature corresponds to an energy spectrum according to the Maxwell–Boltzmann distribution. Gases in this state have some particles with high energy even if the average energy is much lower. Fusion devices rely on this distribution; even at bulk temperatures far below the Coulomb barrier energy, the energy released by the reactions is great enough that capturing some of that can supply sufficient high-energy ions to keep the reaction going.
Thus, steady operation of the reactor is based on a balance between the rate that energy is added to the fuel by fusion reactions and the rate energy is lost to the surroundings. This concept is best expressed as the fusion triple product, the product of the temperature, density and "confinement time", the amount of time energy remains in the fuel before escaping to the environment. The product of temperature and density gives the reaction rate for any given fuel. The rate of reaction is proportional to the nuclear cross section (σ).
Any given device can sustain some maximum plasma pressure. An efficient device would continuously operate near this maximum. Given this pressure, the largest fusion output is obtained when the temperature is such that σv/T² is a maximum. This is also the temperature at which the value of the triple product nTτ required for ignition is a minimum, since that required value is inversely proportional to σv/T². A plasma is "ignited" if the fusion reactions produce enough power to maintain the temperature without external heating.
Because the Coulomb barrier is proportional to the product of the proton counts (Z1·Z2) of the two reactants, the varieties of heavy hydrogen, deuterium and tritium (D–T), give the fuel with the lowest total Coulomb barrier. All other potential fuels have higher Coulomb barriers, and thus require higher operational temperatures. Additionally, D–T fuels have the highest nuclear cross-sections, which means the reaction rates are higher than for any other fuel. This makes D–T fusion the easiest to achieve.
Comparing the potential of other fuels to the D–T reaction: The table below shows the ignition temperature and cross-section for three of the candidate aneutronic reactions, compared to D–T:
The easiest to ignite of the aneutronic reactions, D–3He, has an ignition temperature over four times as high as that of the D–T reaction, and correspondingly lower cross-sections, while the p–11B reaction is nearly ten times more difficult to ignite.
Candidate reactions
Several fusion reactions produce no neutrons on any of their branches. Those with the largest cross sections are:
Candidate fuels
3He
The 3He–D reaction has been studied as an alternative fusion plasma because it has the lowest energy threshold.
The p–6Li, 3He–6Li, and 3He–3He reaction rates are not particularly high in a thermal plasma. When treated as a chain, however, they offer the possibility of enhanced reactivity due to a non-thermal distribution. The product 3He from the p–6Li reaction could participate in the second reaction before thermalizing, and the product p from 3He–6Li could participate in the former before thermalizing. Detailed analyses, however, do not show sufficient reactivity enhancement to overcome the inherently low cross section.
The 3He reaction suffers from a 3He availability problem. 3He occurs in only minuscule amounts on Earth, so it would either have to be bred from neutron reactions (counteracting the potential advantage of aneutronic fusion) or mined from extraterrestrial sources.
The amount of 3He needed for large-scale applications can also be described in terms of total consumption: according to the US Energy Information Administration, "Electricity consumption by 107 million U.S. households in 2001 totalled 1,140 billion kW·h". Again assuming 100% conversion efficiency, 6.7 tonnes per year of 3He would be required for that segment of the energy demand of the United States, and 15 to 20 tonnes per year given a more realistic end-to-end conversion efficiency. Extracting that amount of pure 3He would entail processing 2 billion tonnes of lunar material per year, even assuming a recovery rate of 100%.
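That tonnage can be sanity-checked with back-of-the-envelope arithmetic; the sketch below assumes the ~18.3 MeV released per D–3He reaction and, like the paragraph above, 100% conversion efficiency:

```python
# Rough check of the helium-3 mass needed to supply 1,140 billion kWh at 100% efficiency.
E_DEMAND_J = 1.14e12 * 3.6e6           # 1,140 billion kWh in joules (~4.1e18 J)
E_PER_REACTION_J = 18.3e6 * 1.602e-19  # ~18.3 MeV per D-3He fusion (assumed value)
HE3_MASS_KG = 3.016 * 1.661e-27        # mass of one helium-3 atom

reactions = E_DEMAND_J / E_PER_REACTION_J
tonnes_he3 = reactions * HE3_MASS_KG / 1000.0
print(f"{tonnes_he3:.1f} tonnes of 3He per year")  # ~7 tonnes, close to the 6.7 quoted above
```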
In 2022, Helion Energy claimed that their 7th fusion prototype (Polaris; fully funded and under construction as of September 2022) will demonstrate "net electricity from fusion", and will demonstrate "helium-3 production through deuterium–deuterium fusion" by means of a "patented high-efficiency closed-fuel cycle".
Deuterium
Although the deuterium reactions (deuterium + 3He and deuterium + 6Li) do not in themselves release neutrons, in a fusion reactor the plasma would also produce D–D side reactions that result in a reaction product of 3He plus a neutron. Although neutron production can be minimized by running a plasma reaction hot and deuterium-lean, the fraction of energy released as neutrons is probably several percent, so these fuel cycles, although neutron-poor, do not meet the 1% threshold. See 3He. The D–3He reaction also suffers from the 3He fuel availability problem, as discussed above.
Lithium
Fusion reactions involving lithium are well studied due to the use of lithium for breeding tritium in thermonuclear weapons. They are intermediate in ignition difficulty between the reactions involving lower atomic-number species, H and He, and the 11B reaction.
The p–7Li reaction, although highly energetic, releases neutrons because of the high cross section for the alternate neutron-producing reaction 1p + 7Li → 7Be + n
Boron
Many studies of aneutronic fusion concentrate on the p–11B reaction, which uses easily available fuel. The fusion of the boron nucleus with a proton produces energetic alpha particles (helium nuclei).
Since igniting the p–11B reaction is much more difficult than D–T, alternatives to the usual tokamak fusion reactors are usually proposed, such as inertial confinement fusion. One proposed method uses one laser to create a boron-11 plasma and another to create a stream of protons that smash into the plasma. The proton beam produces a tenfold increase of fusion because protons and boron nuclei collide directly. Earlier methods used a solid boron target, "protected" by its electrons, which reduced the fusion rate. Experiments suggest that a petawatt-scale laser pulse could launch an 'avalanche' fusion reaction, although this remains controversial. The plasma lasts about one nanosecond, requiring the picosecond pulse of protons to be precisely synchronized. Unlike conventional methods, this approach does not require a magnetically confined plasma. The proton beam is preceded by an electron beam, generated by the same laser, that strips electrons in the boron plasma, increasing the protons' chance to collide with the boron nuclei and fuse.
Residual radiation
Calculations show that at least 0.1% of the reactions in a thermal p–11B plasma produce neutrons, although their energy accounts for less than 0.2% of the total energy released.
These neutrons come primarily from the reaction:
11B + α → 14N + n + 157 keV
The reaction itself produces only 157 keV, but the neutron carries a large fraction of the alpha energy, close to Efusion/3 ≈ 2.9 MeV. Another significant source of neutrons is:
11B + p → 11C + n − 2.8 MeV.
These neutrons are less energetic, with an energy comparable to the fuel temperature. In addition, 11C itself is radioactive, but quickly decays to 11B with a half life of only 20 minutes.
Since these reactions involve the reactants and products of the primary reaction, it is difficult to lower the neutron production by a significant fraction. A clever magnetic confinement scheme could in principle suppress the first reaction by extracting the alphas as they are created, but then their energy would not be available to keep the plasma hot. The second reaction could in principle be suppressed relative to the desired fusion by removing the high energy tail of the ion distribution, but this would probably be prohibited by the power required to prevent the distribution from thermalizing.
In addition to neutrons, large quantities of hard X-rays are produced by bremsstrahlung, and 4, 12, and 16 MeV gamma rays are produced by the fusion reaction
11B + p → 12C + γ + 16.0 MeV
with a branching probability relative to the primary fusion reaction of about 10−4.
The hydrogen must be isotopically pure and the influx of impurities into the plasma must be controlled to prevent neutron-producing side reactions such as:
11B + d → 12C + n + 13.7 MeV
d + d → 3He + n + 3.27 MeV
The shielding design reduces the occupational dose of both neutron and gamma radiation to a negligible level. The primary components are water (to moderate the fast neutrons), boron (to absorb the moderated neutrons) and metal (to absorb X-rays). The total thickness is estimated to be about one meter, mostly water.
Approaches
HB11 Energy
HB11 Energy uses thousands of merged diode-pumped lasers. This allows mass-produced and less expensive kilojoule lasers to deliver megajoules to a target. The resulting nanosecond and picosecond two-pulse laser system provides next-generation input. The approach uses pulsed power (shots). Fuel pellets burn at a rate of about 1 per second. The energy released drives a conventional steam cycle generator.
Laser technology
Laser power has been increasing at about 10^3× per decade amid falling costs. Advancements include:
Diode-Pumped Solid-State Lasers (DPSSLs) convert more electrical input into light, reducing waste heat.
Optical Parametric Chirped Pulse Amplification (OPCPA): These systems use nonlinear optical processes to reduce thermal load and increase efficiency.
Plasma-Based Pulse Compression: Plasma can be used to compress laser pulses, achieving high peak power with minimal energy loss.
Coherent beam combining (CBC) merges multiple beams into a single, more powerful one, spreading thermal load from across multiple beams while coherently combining their energy.
Efficient gas-cooled or cryogenic cooling systems are essential for operating high-power lasers.
Gain media such as ytterbium-doped crystals or ceramics, offer better thermal properties and higher energy storage capabilities.
Researchers prototyped a single-chip titanium:sapphire laser that is 10^4× smaller and 10^3× less expensive than earlier models.
Energy capture
Aneutronic fusion produces energy in the form of charged particles instead of neutrons. This means that energy from aneutronic fusion could be captured directly instead of blasting neutrons at a target to boil something. Direct conversion can be either inductive, based on changes in magnetic fields, electrostatic, based on pitting charged particles against an electric field, or photoelectric, in which light energy is captured in a pulsed mode.
Electrostatic conversion uses the motion of charged particles to create voltage that produces current–electrical power. It is the reverse of phenomena that use a voltage to put a particle in motion. It has been described as a linear accelerator running backwards.
Aneutronic fusion loses much of its energy as light. This energy results from the acceleration and deceleration of charged particles. These speed changes can be caused by bremsstrahlung radiation, cyclotron radiation, synchrotron radiation, or electric field interactions. The radiation can be estimated using the Larmor formula and comes in the X-ray, UV, visible, and IR spectra. Some of the energy radiated as X-rays may be converted directly to electricity. Because of the photoelectric effect, X-rays passing through an array of conducting foils transfer some of their energy to electrons, which can then be captured electrostatically. Since X-rays can go through far greater material thickness than electrons, hundreds or thousands of layers are needed to absorb them.
Technical challenges
Many challenges confront the commercialization of aneutronic fusion.
Temperature
The large majority of fusion research has gone toward D–T fusion, which is the easiest to achieve. Fusion experiments typically use deuterium–deuterium fusion (D–D) because deuterium is cheap and easy to handle, being non-radioactive. Experimenting with D–T fusion is more difficult because tritium is expensive and radioactive, requiring additional environmental protection and safety measures.
The combination of lower cross-section and higher loss rates in D–3He fusion is offset to a degree because the reactants are mainly charged particles that deposit their energy in the plasma. This combination of offsetting features demands an operating temperature about four times that of a D–T system. However, due to the high loss rates and consequent rapid cycling of energy, the confinement time of a working reactor needs to be about fifty times higher than D–T, and the energy density about 80 times higher. This requires significant advances in plasma physics.
Proton–boron fusion requires ion energies, and thus plasma temperatures, some nine times higher than those for D–T fusion. For any given density of the reacting nuclei, the reaction rate for proton-boron achieves its peak rate at around 600 keV (6.6 billion degrees Celsius or 6.6 gigakelvins) while D–T has a peak at around 66 keV (765 million degrees Celsius, or 0.765 gigakelvin). For pressure-limited confinement concepts, optimum operating temperatures are about 5 times lower, but the ratio is still roughly ten-to-one.
Power balance
The peak reaction rate of p–11B is only one third that of D–T, requiring better plasma confinement. Confinement is usually characterized by the time τ the energy is retained, so that the power released exceeds that required to heat the plasma. Various requirements can be derived, most commonly the product of the density and confinement time, nτ (the Lawson criterion), and the triple product with the temperature, nTτ. The nτ required for p–11B is 45 times higher than that for D–T. The nTτ required is 500 times higher. Since the confinement properties of conventional fusion approaches, such as the tokamak and laser pellet fusion, are marginal, most aneutronic proposals use radically different confinement concepts.
In most fusion plasmas, bremsstrahlung radiation is a major energy loss channel. (See also bremsstrahlung losses in quasineutral, isotropic plasmas.) For the p–11B reaction, some calculations indicate that the bremsstrahlung power will be at least 1.74 times larger than the fusion power. The corresponding ratio for the 3He–3He reaction is only slightly more favorable at 1.39. This is not applicable to non-neutral plasmas, and different in anisotropic plasmas.
In conventional reactor designs, whether based on magnetic or inertial confinement, the bremsstrahlung can easily escape the plasma and is considered a pure energy loss term. The outlook would be more favorable if the plasma could reabsorb the radiation. Absorption occurs primarily via Thomson scattering on the electrons, which has a total cross section of σT ≈ 6.65 × 10^-25 cm². In a 50–50 D–T mixture the corresponding mass range is considerably higher than the Lawson criterion of ρR > 1 g/cm², which is already difficult to attain, but might be achievable in inertial confinement systems.
In megatesla magnetic fields a quantum mechanical effect might suppress energy transfer from the ions to the electrons. According to one calculation, bremsstrahlung losses could be reduced to half the fusion power or less. In a strong magnetic field cyclotron radiation is even larger than the bremsstrahlung. In a megatesla field, an electron would lose its energy to cyclotron radiation in a few picoseconds if the radiation could escape. However, in a sufficiently dense plasma (at electron densities greater than that of a solid), the cyclotron frequency is less than twice the plasma frequency. In this well-known case, the cyclotron radiation is trapped inside the plasmoid and cannot escape, except from a very thin surface layer.
While megatesla fields have not yet been achieved, fields of 0.3 megatesla have been produced with high intensity lasers, and fields of 0.02–0.04 megatesla have been observed with the dense plasma focus device.
At still higher densities, the electrons will be Fermi degenerate, which suppresses bremsstrahlung losses, both directly and by reducing energy transfer from the ions to the electrons. If the necessary conditions can be attained, net energy production from p–11B or D–3He fuel may be possible. The probability of a feasible reactor based solely on this effect remains low, however, because the gain is predicted to be less than 20, while more than 200 is usually considered to be necessary.
Power density
In every published fusion power plant design, the part of the plant that produces the fusion reactions is much more expensive than the part that converts the nuclear power to electricity. In that case, as indeed in most power systems, power density is an important characteristic. Doubling power density at least halves the cost of electricity. In addition, the confinement time required depends on the power density.
It is, however, not trivial to compare the power density produced by different fusion fuel cycles. The case most favorable to p–11B relative to D–T fuel is a (hypothetical) confinement device that only works well at ion temperatures above about 400 keV, in which the reaction rate parameter σv is equal for the two fuels, and that runs with low electron temperature. p–11B does not require as long a confinement time because the energy of its charged products is two and a half times higher than that for D–T. However, relaxing these assumptions, for example by considering hot electrons, by allowing the D–T reaction to run at a lower temperature or by including the energy of the neutrons in the calculation shifts the power density advantage to D–T.
The most common assumption is to compare power densities at the same pressure, choosing the ion temperature for each reaction to maximize power density, and with the electron temperature equal to the ion temperature. Although confinement schemes can be and sometimes are limited by other factors, most well-investigated schemes have some kind of pressure limit. Under these assumptions, the power density for p–11B is orders of magnitude smaller than that for D–T. Using cold electrons lowers the ratio to about 700. These numbers are another indication that aneutronic fusion power is not possible with mainline confinement concepts.
See also
CNO cycle
Cold fusion
History of nuclear fusion
Notes
References
External links
Focus Fusion Society
Proton-boron Fusion Prototype
Aneutronic fusion in a degenerate plasma
Lasers trigger cleaner fusion (news@nature.com, 26 August 2005)
Observation of neutronless fusion reactions in picosecond laser plasmas (Physical Review E 72, 2005)
New Opportunities for Fusion in the 21st Century – Advanced Fuels , G.L. Kulcinski and J.F.Santarius, 14th Topical Meeting on the Technology of Fusion Energy, Oct 15–19, 2000.
Fusion power
Nuclear fusion reactions | Aneutronic fusion | [
"Physics",
"Chemistry"
] | 5,161 | [
"Nuclear fusion",
"Fusion power",
"Nuclear fusion reactions",
"Plasma physics"
] |
1,631,288 | https://en.wikipedia.org/wiki/Direct-drive%20mechanism | A direct-drive mechanism is a mechanism design where the force or torque from a prime mover is transmitted directly to the effector device (such as the drive wheels of a vehicle) without involving any intermediate couplings such as a gear train or a belt.
History
In the late 19th century and early 20th century, some of the earliest locomotives and cars used direct drive transmissions at higher speeds. Direct-drive mechanisms for industrial arms began to be possible in the 1980s, with the use of rare-earth magnetic materials. The first direct-drive arm was built in 1981 at Carnegie Mellon University.
Today the most commonly used magnets are neodymium magnets.
Design
Direct-drive systems are characterized by smooth torque transmission, and nearly-zero backlash.
The main benefits of a direct-drive system are increased efficiency (due to reduced power losses from the drivetrain components) and being a simpler design with fewer moving parts. Major benefits also include the ability to deliver high torque over a wide range of speeds, fast response, precise positioning, and low inertia.
The main drawback is that a special type of electric motor is often needed to provide high torque outputs at low rpm. Compared with a multi-speed transmission, the motor is usually operating in its optimal power band for a smaller range of output speeds for the system (e.g., road speeds in the case of a motor vehicle).
Direct-drive mechanisms also need a more precise control mechanism. High-speed motors with speed reduction have relatively high inertia, which helps smooth the output motion. Most motors exhibit positional torque ripple known as cogging torque. In high-speed motors, this effect is usually negligible, as the frequency at which it occurs is too high to significantly affect system performance; direct-drive units will suffer more from this phenomenon unless additional inertia is added (i.e. by a flywheel) or the system uses feedback to actively counter the effect.
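One form of the active compensation mentioned above is to subtract an estimate of the position-dependent ripple from the torque command. The following is an illustrative sketch only; the sinusoidal cogging model, the parameter values, and the function names are assumptions for the example, not any particular vendor's implementation:

```python
import math

def cogging_estimate(theta, amplitude=0.05, slots=12):
    """Very simple cogging-torque model: a position-periodic ripple (illustrative only)."""
    return amplitude * math.sin(slots * theta)

def commanded_torque(desired_torque, theta):
    """Compensation: cancel the estimated ripple at the current rotor angle."""
    return desired_torque - cogging_estimate(theta)

# At the rotor angle where the modeled ripple peaks, the command is reduced accordingly.
theta = math.pi / 24                   # slots * theta = pi/2 -> ripple at its maximum
print(commanded_torque(1.0, theta))    # ~0.95 (1.0 desired minus ~0.05 estimated ripple)
```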
Applications
Direct-drive mechanisms are used in applications ranging from low-speed operation (such as phonographs, telescope mounts, video game racing wheels and gearless wind turbines) to high speeds (such as fans, computer hard drives, VCR heads, sewing machines, CNC machines and washing machines).
Some electric railway locomotives have used direct-drive mechanisms, such as the 1919 Milwaukee Road class EP-2 and the 2007 East Japan Railway Company E331. Several cars from the late 19th century used direct-drive wheel hub motors, as did some concept cars in the early 2000s; however, most modern electric cars use inboard motor(s), where drive is transferred to the wheels, via the axles.
Some automobile manufacturers have managed to create their own unique direct-drive transmissions, such as the one Christian von Koenigsegg invented for the Koenigsegg Regera.
See also
Belt-drive
Chain-drive
Direct-drive sim racing wheel
Drive shaft
Hubless wheel
Linear motor
Individual wheel drive
References
Mechanisms (engineering)
Gearless electric drive | Direct-drive mechanism | [
"Engineering"
] | 613 | [
"Mechanical engineering",
"Mechanisms (engineering)"
] |
1,631,732 | https://en.wikipedia.org/wiki/Ian%20Wilmut | Sir Ian Wilmut (7 July 1944 – 10 September 2023) was a British embryologist and the chair of the Scottish Centre for Regenerative Medicine at the University of Edinburgh. He is best known as the leader of the research group that in 1996 first cloned a mammal from an adult somatic cell, a Finnish Dorset lamb named Dolly.
Wilmut was appointed OBE in 1999 for services to embryo development and knighted in the 2008 New Year Honours. He, Keith Campbell and Shinya Yamanaka jointly received the 2008 Shaw Prize for Medicine and Life Sciences for their work on cell differentiation in mammals.
Early life and education
Wilmut was born in Hampton Lucy, Warwickshire, England, on 7 July 1944. Wilmut's father, Leonard Wilmut, was a mathematics teacher who suffered from diabetes for fifty years, which eventually caused him to become blind. The younger Wilmut attended the Boys' High School in Scarborough, where his father taught. His early desire was to embark on a naval career, but he was unable to do so due to his colour blindness. As a schoolboy, Wilmut worked as a farm hand on weekends, which inspired him to study Agriculture at the University of Nottingham.
In 1966, Wilmut spent eight weeks working in the laboratory of Christopher Polge, who is credited with developing the technique of cryopreservation in 1949. The following year Wilmut joined Polge's laboratory to undertake a Doctor of Philosophy degree at the University of Cambridge, from where he graduated in 1971 with a thesis on semen cryopreservation. During this time he was a postgraduate student at Darwin College.
Career and research
After completing his PhD, he was involved in research focusing on gametes and embryogenesis, including working at the Roslin Institute.
Wilmut was the leader of the research group that in 1996 first cloned a mammal, a lamb named Dolly. She died of a respiratory disease in 2003. In 2008 Wilmut announced that he would abandon the technique of somatic cell nuclear transfer by which Dolly was created in favour of an alternative technique developed by Shinya Yamanaka. This method has been used in mice to derive pluripotent stem cells from differentiated adult skin cells, thus circumventing the need to generate embryonic stem cells. Wilmut believed that this method holds greater potential for the treatment of degenerative conditions such as Parkinson's disease and to treat stroke and heart attack patients.
Wilmut led the team that created Dolly, but in 2006 admitted his colleague Keith Campbell deserved "66 per cent" of the invention that made Dolly's birth possible, and that the statement "I did not create Dolly" was accurate. His supervisory role is consistent with the post of principal investigator held by Wilmut at the time of Dolly's creation.
Wilmut was an Emeritus Professor at the Scottish Centre for Regenerative Medicine at the University of Edinburgh and in 2008 was knighted in the New Year Honours for services to science.
Wilmut and Campbell, in conjunction with Colin Tudge, published The Second Creation in 2000.
In 2006 Wilmut's book After Dolly: The Uses and Misuses of Human Cloning was published, co-authored with Roger Highfield.
Death
Wilmut died from complications of Parkinson's disease on 10 September 2023, aged 79.
Awards and honours
In 1998 he received the Lord Lloyd of Kilgerran Award and the Golden Plate Award of the American Academy of Achievement.
Wilmut was appointed Officer of the Order of the British Empire (OBE) in the 1999 Birthday Honours "for services to Embryo Development" and a Fellow of the Royal Society (FRS) in 2002. He was also an elected Fellow of the Academy of Medical Sciences in 1999 and Fellow of the Royal Society of Edinburgh in 2000. He was elected an EMBO Member in 2003.
In 1997 Wilmut was Time magazine man of the year runner up. He was knighted in the 2008 New Year Honours for services to science.
Publications
References
External links
1944 births
2023 deaths
People from Warwickshire
Alumni of the University of Nottingham
Cloning
Members of the European Molecular Biology Organization
English atheists
English inventors
English geneticists
Academics of the University of Edinburgh
Alumni of Darwin College, Cambridge
Fellows of the Royal Society
Fellows of the Academy of Medical Sciences (United Kingdom)
Fellows of the Royal Society of Edinburgh
Knights Bachelor
Officers of the Order of the British Empire
Foreign associates of the National Academy of Sciences
British embryologists
People educated at Scarborough High School for Boys
Deaths from Parkinson's disease | Ian Wilmut | [
"Engineering",
"Biology"
] | 906 | [
"Cloning",
"Genetic engineering"
] |
1,632,806 | https://en.wikipedia.org/wiki/Acoustical%20Society%20of%20America | The Acoustical Society of America (ASA) is an international scientific society founded in 1929 dedicated to generating, disseminating and promoting the knowledge of acoustics and its practical applications. The Society is primarily a voluntary organization of about 7500 members and attracts the interest, commitment, and service of many professionals.
History
In the summer of 1928, Floyd R. Watson and Wallace Waterfall (1900–1974), a former doctoral student of Watson, were invited by UCLA's Vern Oliver Knudsen to an evening dinner at Knudsen's beach club in Santa Monica. The three physicists decided to form a society of acoustical engineers interested in architectural acoustics. In the early part of December 1928, Wallace Waterfall sent letters to sixteen people inquiring about the possibility of organizing such a society. Harvey Fletcher offered the use of the Bell Telephone Laboratories at 463 West Street in Manhattan as a meeting place for an organizational, initial meeting to be held on December 27, 1928. The meeting was attended by forty scientists and engineers who started the Acoustical Society of America (ASA). Temporary officers were elected: Harvey Fletcher as president, V. O. Knudsen as vice-president, Wallace Waterfall as secretary, and Charles Fuller Stoddard (1876–1958) as treasurer. A constitution and by-laws were drafted. The first issue of the Journal of the Acoustical Society of America was published in October 1929.
Technical committees
The Society has 13 technical committees that represent specialized interests in the field of acoustics. The committees organize technical sessions at conferences and are responsible for the representation of their sub-field in ASA publications. The committees include:
Acoustical oceanography
Animal bioacoustics
Architectural acoustics
Biomedical acoustics
Computational acoustics (Technical Specialty Group)
Acoustical engineering
Musical acoustics
Noise
Physical acoustics
Psychoacoustics
Signal processing in acoustics
Speech communication
Structural acoustics and vibration
Underwater acoustics
Founding members
The first meeting was attended by forty scientists and engineers who started the Acoustical Society of America (ASA). Some of those members include:
Edward Joseph Schroeter
Harvey Fletcher
Floyd K. Richtmyer
Dayton Miller
Harold D. Arnold
Frederick Albert Saunders
Floyd R. Watson
Irving Wolff
Publications
The Acoustical Society of America publishes a wide variety of material related to the knowledge and practical application of acoustics in physics, engineering, architecture, noise, oceanography, biology, speech and hearing, psychology and music.
The Journal of the Acoustical Society of America (JASA) - founded in 1929, this is a peer-reviewed academic journal operating on the traditional subscription model.
JASA Express Letters (2021–present) online archive- this is a peer-reviewed academic journal operating on the open access model.
Proceedings of Meetings on Acoustics (POMA) (2007–present) online archive - repository for conference proceedings.
Acoustics Today (2005–present) online archive a general interest magazine on acoustics.
In 2021, the ASA Publications' Office began producing Across Acoustics, a podcast to highlight authors' research from these four publications.
Discontinued publications
Echoes (1991-2013) online archive - Quarterly newsletter.
Acoustics Research Letters Online (2000-2005) online archive - Launched as an open access journal. It became a section of the Journal of the Acoustical Society of America from 2006 to 2020, then in 2021 became the current journal JASA Express Letters.
Noise Control (1955-1961) online archive
Sound: Its Uses and Control (1962-1963) online archive - A continuation of Noise Control, with broadened scope.
Awards
The ASA presents awards and prizes to individuals for contributions to the field of Acoustics. These include:
Gold Medal
Silver Medal
Interdisciplinary Silver Medal – Helmholtz-Rayleigh Interdisciplinary Silver Medal
R. Bruce Lindsay Award
Wallace Clement Sabine Medal
Pioneers of Underwater Acoustics Medal
A. B. Wood Medal and Prize of the Institute of Acoustics
Trent-Crede Medal
von Békésy Medal
Honorary Fellows
Distinguished Service Citation
Science Communication Award
Rossing Prize in Acoustics Education
David T. Blackstock Mentor Award
Medwin Prize in Acoustical Oceanography
William and Christine Hartmann Prize in Auditory Neuroscience
Most technical committees also sponsor awards for best student or early career presenter at each conference.
Student activity
The ASA offers membership and conference attendance to students at a substantially reduced rate. Conference attendance is further promoted by travel subsidies and formal and informal student meetings and social activities. The ASA also expanded services to students in 2004 by introducing regional student chapters.
References
External links
ASA Home Page
ASA Standards
ASA Publications
ASA students
ASA Press Room
Archival collections
Acoustical Society of America miscellaneous publications, 1934-2016, Niels Bohr Library & Archives
ASA Office of the President Edward Christopher Wente records, 1929-1946, Niels Bohr Library & Archives
Professional associations based in the United States
Acoustics
Learned societies of the United States | Acoustical Society of America | [
"Physics"
] | 975 | [
"Classical mechanics",
"Acoustics"
] |
1,633,875 | https://en.wikipedia.org/wiki/Feedwater%20heater | A feedwater heater is a power plant component used to pre-heat water delivered to a steam generating boiler. Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle.
In a steam power plant (usually modeled as a modified Rankine cycle), feedwater heaters allow the feedwater to be brought up to the saturation temperature very gradually. This minimizes the inevitable irreversibilities associated with heat transfer to the working fluid (water). See the article on the second law of thermodynamics for a further discussion of such irreversibilities.
Cycle discussion and explanation
The energy used to heat the feedwater is usually derived from steam extracted between the stages of the steam turbine. Therefore, the steam that would be used to perform expansion work in the turbine (and therefore generate power) is not utilized for that purpose. The percentage of the total cycle steam mass flow used for the feedwater heater is termed the extraction fraction and must be carefully optimized for maximum power plant thermal efficiency since increasing this fraction causes a decrease in turbine power output.
Feedwater heaters can also be "open" or "closed" heat exchangers. An open heat exchanger is one in which extracted steam is allowed to mix with the feedwater. This kind of heater will normally require a feed pump at both the feed inlet and outlet since the pressure in the heater is between the boiler pressure and the condenser pressure. A deaerator is a special case of the open feedwater heater which is specifically designed to remove non-condensable gases from the feedwater.
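As an illustration of the extraction-fraction trade-off described above, a simple steady-flow energy balance for an open feedwater heater is sketched below; the enthalpy values are assumed round numbers for illustration, not data for any particular plant.

```python
# Illustrative energy balance for an open feedwater heater (all figures assumed,
# not taken from any specific plant). For an adiabatic open heater, the steam
# extraction fraction y needed to raise the feedwater to the heater outlet
# enthalpy follows from a mass-and-energy balance:
#   y * h_extract + (1 - y) * h_fw_in = 1.0 * h_fw_out
h_extract = 2700.0   # kJ/kg, enthalpy of steam extracted from the turbine (assumed)
h_fw_in   = 200.0    # kJ/kg, feedwater enthalpy entering the heater (assumed)
h_fw_out  = 640.0    # kJ/kg, saturated-liquid enthalpy leaving the heater (assumed)

y = (h_fw_out - h_fw_in) / (h_extract - h_fw_in)
print(f"extraction fraction y = {y:.3f}")   # about 0.176 of the cycle mass flow
```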
Closed feedwater heaters are typically shell and tube heat exchangers where the feedwater passes through the tubes and is heated by turbine extraction steam. These do not require separate pumps before and after the heater to boost the feedwater to the pressure of the extracted steam as with an open heater. However, the extracted steam (which is most likely almost fully condensed after heating the feedwater) must then be throttled to the condenser pressure, an isenthalpic process that results in some entropy gain with a slight penalty on overall cycle efficiency.
Many power plants incorporate a number of feedwater heaters and may use both open and closed components. Feedwater heaters are used in both fossil- and nuclear-fueled power plants.
Economizer
An economizer serves a similar purpose to a feedwater heater, but is technically different as it does not use cycle steam for heating. In fossil-fuel plants, the economizer uses the lowest-temperature flue gas from the furnace to heat the water before it enters the boiler proper. This allows for the heat transfer between the furnace and the feedwater to occur across a smaller average temperature gradient (for the steam generator as a whole). System efficiency is therefore further increased when viewed with respect to actual energy content of the fuel.
Most nuclear power plants do not have an economizer. However, the Combustion Engineering System 80+ nuclear plant design and its evolutionary successors, (e.g. Korea Electric Power Corporation's APR-1400) incorporate an integral feedwater economizer. This economizer preheats the steam generator feedwater at the steam generator inlet using the lowest-temperature primary coolant.
Testing
A widely used Code for the procedures, direction, and guidance for determining the thermo-hydraulic performance of a closed feedwater heater is the ASME PTC 12.1 Feedwater Heater Standard.
See also
Fossil fuel power plant
Thermal power plant
ASME Codes
The American Society of Mechanical Engineers (ASME), publishes the following Code:
PTC 4.4 Gas Turbine Heat Recovery Steam Generators
References
External links
Power plant diagram
High pressure feedwater heaters
Mechanical engineering
Chemical process engineering
ru:Экономайзер (энергетика) | Feedwater heater | [
"Physics",
"Chemistry",
"Engineering"
] | 847 | [
"Chemical process engineering",
"Chemical engineering",
"Applied and interdisciplinary physics",
"Mechanical engineering"
] |
1,633,917 | https://en.wikipedia.org/wiki/C%20parity | In physics, the C parity or charge parity is a multiplicative quantum number of some particles that describes their behavior under the symmetry operation of charge conjugation.
Charge conjugation changes the sign of all quantum charges (that is, additive quantum numbers), including the electrical charge, baryon number and lepton number, and the flavor charges strangeness, charm, bottomness, topness and Isospin (I3). In contrast, it doesn't affect the mass, linear momentum or spin of a particle.
Formalism
Consider an operation $\mathcal{C}$ that transforms a particle into its antiparticle,
$$\mathcal{C}\,|\psi\rangle = |\bar{\psi}\rangle.$$
Both states must be normalizable, so that
$$1 = \langle\psi|\psi\rangle = \langle\bar{\psi}|\bar{\psi}\rangle = \langle\psi|\mathcal{C}^{\dagger}\mathcal{C}|\psi\rangle,$$
which implies that $\mathcal{C}$ is unitary,
$$\mathcal{C}^{\dagger}\mathcal{C} = \mathbf{1}.$$
By acting on the particle twice with the $\mathcal{C}$ operator,
$$\mathcal{C}^{2}\,|\psi\rangle = \mathcal{C}\,|\bar{\psi}\rangle = |\psi\rangle,$$
we see that $\mathcal{C}^{2} = \mathbf{1}$ and $\mathcal{C}^{-1} = \mathcal{C}$. Putting this all together, we see that
$$\mathcal{C} = \mathcal{C}^{-1} = \mathcal{C}^{\dagger},$$
meaning that the charge conjugation operator is Hermitian and therefore a physically observable quantity.
Eigenvalues
For the eigenstates of charge conjugation,
$$\mathcal{C}\,|\psi\rangle = \eta_{C}\,|\psi\rangle.$$
As with parity transformations, applying $\mathcal{C}$ twice must leave the particle's state unchanged,
$$\mathcal{C}^{2}\,|\psi\rangle = \eta_{C}\,\mathcal{C}\,|\psi\rangle = \eta_{C}^{2}\,|\psi\rangle = |\psi\rangle,$$
allowing only the eigenvalues $\eta_{C} = \pm 1$, the so-called C-parity or charge parity of the particle.
Eigenstates
The above implies that for $\mathcal{C}$ eigenstates, $\mathcal{C}\,|\psi\rangle = \pm\,|\psi\rangle$. Since antiparticles and particles have charges of opposite sign, only states with all quantum charges equal to zero, such as the photon and particle–antiparticle bound states like the neutral pion, the η meson, or positronium, are eigenstates of $\mathcal{C}$.
Multiparticle systems
For a system of free particles, the C parity is the product of C parities for each particle.
In a pair of bound mesons there is an additional component due to the orbital angular momentum. For example, in a bound state of two pions, $\pi^{+}\pi^{-}$, with an orbital angular momentum $L$, exchanging $\pi^{+}$ and $\pi^{-}$ inverts the relative position vector, which is identical to a parity operation. Under this operation, the angular part of the spatial wave function contributes a phase factor of $(-1)^{L}$, where $L$ is the angular momentum quantum number associated with the orbital angular momentum, so that
$$\mathcal{C}\,|\pi^{+}\pi^{-}\rangle = (-1)^{L}\,|\pi^{+}\pi^{-}\rangle.$$
With a two-fermion system, two extra factors appear: one factor comes from the spin part of the wave function, and the second from the intrinsic parities of the two particles. Note that a fermion and an antifermion always have opposite intrinsic parity. Hence,
$$\mathcal{C}\,|f\bar{f}\rangle = (-1)^{L+S}\,|f\bar{f}\rangle.$$
Bound states can be described with the spectroscopic notation (see term symbol), where is the total spin quantum number (not to be confused with the S orbital), is the total angular momentum quantum number, and the total orbital momentum quantum number (with quantum number etc. replaced by orbital letters S, P, D, etc.).
For example, positronium is a bound state of an electron and a positron, similar to a hydrogen atom. The names parapositronium and orthopositronium are given to the states 1S0 and 3S1.
With S = 0 the spins are anti-parallel, and with S = 1 they are parallel. This gives a multiplicity (2S + 1) of 1 (anti-parallel) or 3 (parallel).
The total orbital angular momentum quantum number is L = 0 (spectroscopic S orbital).
The total angular momentum quantum number is J = 0 (parapositronium) or J = 1 (orthopositronium).
The C parity is η_C = (−1)^(L+S), depending on L and S. Since charge parity is preserved, annihilation of these states into photons (each photon carrying η_C = −1) must satisfy:
| Orbital state | 1S0 → γγ | 3S1 → γγγ |
| C parity η_C | +1 = (−1) × (−1) | −1 = (−1) × (−1) × (−1) |
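As a minimal illustration of these selection rules (ad hoc code written for this article, not taken from any physics library), the assignments in the table above can be checked directly:

```python
# Minimal sketch of the C-parity rules quoted above: a fermion-antifermion bound
# state has C = (-1)^(L+S), and an n-photon final state has C = (-1)^n.
def c_parity_fermion_pair(L, S):
    return (-1) ** (L + S)

def c_parity_photons(n):
    return (-1) ** n

# Positronium ground states (L = 0): parapositronium (S = 0), orthopositronium (S = 1).
para = c_parity_fermion_pair(0, 0)
ortho = c_parity_fermion_pair(0, 1)
print("1S0:", para, "two-photon decay allowed:", para == c_parity_photons(2))      # True
print("3S1:", ortho, "three-photon decay allowed:", ortho == c_parity_photons(3))  # True
```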
Experimental tests of C-parity conservation
π0 → 3γ: The neutral pion, π0, is observed to decay to two photons, γγ. We can infer that the pion therefore has η_C = +1, but each additional photon introduces a factor of −1 to the overall C parity of the final state. The decay to 3γ would violate C-parity conservation. A search for this decay was conducted using pions created in the reaction π− + p → π0 + n.
: Decay of the eta meson.
annihilations
See also
G-parity
References
Quantum mechanics
Quantum field theory | C parity | [
"Physics"
] | 851 | [
"Quantum field theory",
"Theoretical physics",
"Quantum mechanics"
] |
1,634,352 | https://en.wikipedia.org/wiki/Atomic%20battery | An atomic battery, nuclear battery, radioisotope battery or radioisotope generator uses energy from the decay of a radioactive isotope to generate electricity. Like a nuclear reactor, it generates electricity from nuclear energy, but it differs by not using a chain reaction. Although commonly called batteries, atomic batteries are technically not electrochemical and cannot be charged or recharged. Although they are very costly, they have extremely long lives and high energy density, so they are typically used as power sources for equipment that must operate unattended for long periods, such as spacecraft, pacemakers, underwater systems, and automated scientific stations in remote parts of the world.
Nuclear batteries began in 1913, when Henry Moseley first demonstrated a current generated by charged-particle radiation. In the 1950s and 1960s, this field of research got much attention for applications requiring long-life power sources for spacecraft. In 1954, RCA researched a small atomic battery for small radio receivers and hearing aids. Since RCA's initial research and development in the early 1950s, many types and methods have been designed to extract electrical energy from nuclear sources. The scientific principles are well known, but modern nano-scale technology and new wide-bandgap semiconductors have allowed the making of new devices and interesting material properties not previously available.
Nuclear batteries can be classified by their means of energy conversion into two main groups: thermal converters and non-thermal converters. The thermal types convert some of the heat generated by the nuclear decay into electricity; an example is the radioisotope thermoelectric generator (RTG), often used in spacecraft. The non-thermal converters, such as betavoltaic cells, extract energy directly from the emitted radiation, before it is degraded into heat; they are easier to miniaturize and do not need a thermal gradient to operate, so they can be used in small machines.
Atomic batteries usually have an efficiency of 0.1–5%. High-efficiency betavoltaic devices can reach 6–8% efficiency.
Thermal conversion
Thermionic conversion
A thermionic converter consists of a hot electrode, which thermionically emits electrons over a space-charge barrier to a cooler electrode, producing a useful power output. Caesium vapor is used to optimize the electrode work functions and provide an ion supply (by surface ionization) to neutralize the electron space charge.
Thermoelectric conversion
A radioisotope thermoelectric generator (RTG) uses thermocouples. Each thermocouple is formed from two wires of different metals (or other materials). A temperature gradient along the length of each wire produces a voltage gradient from one end of the wire to the other; but the different materials produce different voltages per degree of temperature difference. By connecting the wires at one end, heating that end but cooling the other end, a usable, but small (millivolts), voltage is generated between the unconnected wire ends. In practice, many are connected in series (or in parallel) to generate a larger voltage (or current) from the same heat source, as heat flows from the hot ends to the cold ends. Metal thermocouples have low thermal-to-electrical efficiency. However, the carrier density and charge can be adjusted in semiconductor materials such as bismuth telluride and silicon germanium to achieve much higher conversion efficiencies.
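As a rough illustration of the thermoelectric principle described above, the open-circuit output of a series-connected stack of couples can be estimated as follows; the Seebeck coefficient, temperatures, and couple count are assumed example values, not data for any particular generator.

```python
# Rough estimate of thermocouple output in an RTG-style thermoelectric stack.
# All numbers below are illustrative assumptions.
seebeck = 200e-6               # V/K per couple, typical order for a semiconductor couple (assumed)
t_hot, t_cold = 800.0, 400.0   # K, hot- and cold-side temperatures (assumed)
n_couples = 500                # number of couples wired in series (assumed)

voltage_per_couple = seebeck * (t_hot - t_cold)        # V = S * delta_T
open_circuit_voltage = n_couples * voltage_per_couple  # series connection adds the voltages
print(f"per couple: {voltage_per_couple * 1e3:.1f} mV, stack: {open_circuit_voltage:.1f} V")
```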
Thermophotovoltaic conversion
Thermophotovoltaic (TPV) cells work by the same principles as a photovoltaic cell, except that they convert infrared light (rather than visible light) emitted by a hot surface, into electricity. Thermophotovoltaic cells have an efficiency slightly higher than thermoelectric couples and can be overlaid on thermoelectric couples, potentially doubling efficiency. The University of Houston TPV Radioisotope Power Conversion Technology development effort is aiming at combining thermophotovoltaic cells concurrently with thermocouples to provide a 3- to 4-fold improvement in system efficiency over current thermoelectric radioisotope generators.
Stirling generators
A Stirling radioisotope generator is a Stirling engine driven by the temperature difference produced by a radioisotope. A more efficient version, the advanced Stirling radioisotope generator, was under development by NASA, but was cancelled in 2013 due to large-scale cost overruns.
Non-thermal conversion
Non-thermal converters extract energy from emitted radiation before it is degraded into heat. Unlike thermoelectric and thermionic converters their output does not depend on the temperature difference. Non-thermal generators can be classified by the type of particle used and by the mechanism by which their energy is converted.
Electrostatic conversion
Energy can be extracted from emitted charged particles when their charge builds up in a conductor, thus creating an electrostatic potential. Without a dissipation mode the voltage can increase up to the energy of the radiated particles, which may range from several kilovolts (for beta radiation) up to megavolts (alpha radiation). The built up electrostatic energy can be turned into usable electricity in one of the following ways.
Direct-charging generator
A direct-charging generator consists of a capacitor charged by the current of charged particles from a radioactive layer deposited on one of the electrodes. Spacing can be either vacuum or dielectric. Negatively charged beta particles or positively charged alpha particles, positrons or fission fragments may be utilized. Although this form of nuclear-electric generator dates back to 1913, few applications have been found in the past for the extremely low currents and inconveniently high voltages provided by direct-charging generators. Oscillator/transformer systems are employed to reduce the voltages, then rectifiers are used to transform the AC power back to direct current.
English physicist H. G. J. Moseley constructed the first of these. Moseley's apparatus consisted of a glass globe silvered on the inside with a radium emitter mounted on the tip of a wire at the center. The charged particles from the radium created a flow of electricity as they moved quickly from the radium to the inside surface of the sphere. As late as 1945 the Moseley model guided other efforts to build experimental batteries generating electricity from the emissions of radioactive elements.
Electromechanical conversion
Electromechanical atomic batteries use the buildup of charge between two plates to pull one bendable plate towards the other, until the two plates touch, discharge, equalizing the electrostatic buildup, and spring back. The mechanical motion produced can be used to produce electricity through flexing of a piezoelectric material or through a linear generator. Milliwatts of power are produced in pulses depending on the charge rate, in some cases multiple times per second (35 Hz).
Radiovoltaic conversion
A radiovoltaic (RV) device converts the energy of ionizing radiation directly into electricity using a semiconductor junction, similar to the conversion of photons into electricity in a photovoltaic cell. Depending on the type of radiation targeted, these devices are called alphavoltaic (AV, αV), betavoltaic (BV, βV) and/or gammavoltaic (GV, γV). Betavoltaics have traditionally received the most attention since (low-energy) beta emitters cause the least amount of radiative damage, thus allowing a longer operating life and less shielding. Interest in alphavoltaic and (more recently) gammavoltaic devices is driven by their potential higher efficiency.
Alphavoltaic conversion
Alphavoltaic devices use a semiconductor junction to produce electrical energy from energetic alpha particles.
Betavoltaic conversion
Betavoltaic devices use a semiconductor junction to produce electrical energy from energetic beta particles (electrons). A commonly used source is the hydrogen isotope tritium, which is employed in City Labs' NanoTritium batteries.
Betavoltaic devices are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications.
The Chinese startup Betavolt claimed in January 2024 to have a miniature device in the pilot testing stage. It is allegedly generating 100 microwatts of power and a voltage of 3V and has a lifetime of 50 years without any need for charging or maintenance. Betavolt claims it to be the first such miniaturised device ever developed.
It gains its energy from the isotope nickel-63, held in a module the size of a very small coin.
As it is consumed, the nickel-63 decays into stable, non-radioactive isotopes of copper, which pose no environmental threat. It contains a thin wafer of nickel-63 providing beta particle electrons sandwiched between two thin crystallographic diamond semiconductor layers.
Gammavoltaic conversion
Gammavoltaic devices use a semiconductor junction to produce electrical energy from energetic gamma particles (high-energy photons). They have only been considered in the 2010s but were proposed as early as 1981.
A gammavoltaic effect has been reported in perovskite solar cells. Another patented design involves scattering of the gamma particle until its energy has decreased enough to be absorbed in a conventional photovoltaic cell. Gammavoltaic designs using diamond and Schottky diodes are also being investigated.
Radiophotovoltaic (optoelectric) conversion
In a radiophotovoltaic (RPV) device the energy conversion is indirect: the emitted particles are first converted into light using a radioluminescent material (a scintillator or phosphor), and the light is then converted into electricity using a photovoltaic cell. Depending on the type of particle targeted, the conversion type can be more precisely specified as alphaphotovoltaic (APV or α-PV), betaphotovoltaic (BPV or β-PV) or gammaphotovoltaic (GPV or γ-PV).
Radiophotovoltaic conversion can be combined with radiovoltaic conversion to increase the conversion efficiency.
Pacemakers
Medtronic and Alcatel developed a plutonium-powered pacemaker, the Numec NU-5, powered by a 2.5 Ci slug of plutonium 238, first implanted in a human patient in 1970. The 139 Numec NU-5 nuclear pacemakers implanted in the 1970s are expected to never need replacing, an advantage over non-nuclear pacemakers, which require surgical replacement of their batteries every 5 to 10 years. The plutonium "batteries" are expected to produce enough power to drive the circuit for longer than the 88-year halflife of the plutonium-238.
The last of these units was implanted in 1988, as lithium-powered pacemakers, which had an expected lifespan of 10 or more years without the disadvantages of radiation concerns and regulatory hurdles, made these units obsolete.
Betavoltaic batteries are also being considered as long-lasting power sources for lead-free pacemakers.
Radioisotopes used
Atomic batteries use radioisotopes that produce low energy beta particles or sometimes alpha particles of varying energies. Low energy beta particles are needed to prevent the production of high energy penetrating Bremsstrahlung radiation that would require heavy shielding. Radioisotopes such as tritium, nickel-63, promethium-147, and technetium-99 have been tested. Plutonium-238, curium-242, curium-244 and strontium-90 have been used. Besides the nuclear properties of the used isotope, there are also the issues of chemical properties and availability. A product deliberately produced via neutron irradiation or in a particle accelerator is more difficult to obtain than a fission product easily extracted from spent nuclear fuel.
Plutonium-238 must be deliberately produced via neutron irradiation of neptunium-237, but it can be easily converted into a stable plutonium oxide ceramic. Strontium-90 is easily extracted from spent nuclear fuel but must be converted into the perovskite form strontium titanate to reduce its chemical mobility, cutting power density in half. Caesium-137, another high-yield nuclear fission product, is rarely used in atomic batteries because it is difficult to convert into chemically inert substances. Another undesirable property of Cs-137 extracted from spent nuclear fuel is that it is contaminated with other isotopes of caesium, which reduce the power density further.
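As an order-of-magnitude illustration of why half-life and decay energy drive isotope selection, the specific decay heat of an isotope can be estimated from first principles; the sketch below uses approximate values for plutonium-238 and is not taken from any cited source.

```python
import math

# Back-of-the-envelope specific power of a radioisotope from its half-life and
# decay energy. Values below are approximate figures for Pu-238 (alpha decay).
avogadro = 6.022e23
half_life_s = 87.7 * 365.25 * 24 * 3600     # Pu-238 half-life, ~87.7 years
decay_energy_j = 5.59e6 * 1.602e-19         # ~5.59 MeV per alpha decay, in joules
molar_mass = 238.0                          # g/mol

atoms_per_gram = avogadro / molar_mass
decay_constant = math.log(2) / half_life_s            # lambda = ln(2) / t_half
activity_per_gram = decay_constant * atoms_per_gram   # decays per second per gram
specific_power = activity_per_gram * decay_energy_j   # watts of decay heat per gram

print(f"~{specific_power:.2f} W of decay heat per gram of Pu-238")  # roughly 0.5-0.6 W/g
```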
Micro-batteries
In the field of microelectromechanical systems (MEMS), nuclear engineers at the University of Wisconsin, Madison have explored the possibilities of producing minuscule batteries which exploit radioactive nuclei of substances such as polonium or curium to produce electric energy. As an example of an integrated, self-powered application, the researchers have created an oscillating cantilever beam that is capable of consistent, periodic oscillations over very long time periods without the need for refueling. Ongoing work demonstrates that this cantilever is capable of radio frequency transmission, allowing MEMS devices to communicate with one another wirelessly.
These micro-batteries are very light and deliver enough energy to serve as a power supply for MEMS devices and, further, for nanodevices.
The released radiation energy is transformed into electric energy, which is confined to the area of the device containing the processor and the micro-battery that supplies it with energy.
See also
References
External links
Betavoltaic Historical Review
Cantilever Electromechanical Atomic Battery
Types of Radioisotopic Batteries
Americium Battery Concept Proposed for Space Applications- TFOT article
Nuclear Batteries (25 MW)
Tiny 'nuclear batteries' unveiled, BBC article about the research of Jae Wan Kwon et al. from the University of Missouri.
Battery types
Electrical generators
Nuclear technology
Nuclear power in space | Atomic battery | [
"Physics",
"Technology"
] | 2,872 | [
"Electrical generators",
"Machines",
"Nuclear technology",
"Physical systems",
"Nuclear physics"
] |
631,310 | https://en.wikipedia.org/wiki/Mark%20%28unit%29 | The Mark (from Middle High German: Marc, march, brand) is originally a medieval weight or mass unit, which supplanted the pound weight as a precious metals and coinage weight in parts of Europe in the 11th century. The Mark is traditionally divided into 8 ounces or 16 lots. The Cologne mark corresponded to about 234 grams.
Like the German systems, the French poids de marc weight system considered one "Marc" equal to 8 troy ounces.
Just as the pound of 12 troy ounces (373 g) lent its name to the pound unit of currency, the mark lent its name to the mark unit of currency.
Origin of the term
The Etymological Dictionary of the German Language by Friedrich Kluge derives the word from the Proto-Germanic term marka, "weight and value unit" (originally "division, shared").<ref>Kluge, Friedrich (2012). Etymological Dictionary of the German Language. 25th edition, edited by Elmar Seebold, Berlin/Boston, ISBN 978-3-11-022364-4, p. 602 (Google Books).</ref>
The etymological dictionary by Wolfgang Pfeifer sees the Old High German marc, "delimitation, sign", as the stem and assumes that marc originally meant "minting" (marking of a certain weight), later denoting the ingot itself and its weight, and finally a coin of a certain weight and value.
According to an 1848 trade lexicon, the term Gewichtsmark comes from the fact that "the piece of metal used for weighing was stamped with a sign or symbol". Meyer's 1905 Konversationslexikon similarly derives the origin of the word to the emergence of the mark from the Roman pound of to 11 ounces. Charlemagne, as King of the Franks, carried out a monetary and measures reform towards the end of the 8th century. In particular, he had introduced the Karlspfund ("Charles pound") as the basic unit of coinage and trade which, however, weighed only 8 ounces. In order to prevent a further reduction in the weight of a pound, a sign, the mark, was now stamped on the new weights. The actual weight of these weights, known as marca'', is said to have fluctuated between 196 g and 280 g.
References
Units of mass
Obsolete units of measurement
Units of measurement of the Holy Roman Empire | Mark (unit) | [
"Physics",
"Mathematics"
] | 505 | [
"Obsolete units of measurement",
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
632,489 | https://en.wikipedia.org/wiki/Quantum%20algorithm | In quantum computing, a quantum algorithm is an algorithm that runs on a realistic model of quantum computation, the most commonly used model being the quantum circuit model of computation. A classical (or non-quantum) algorithm is a finite sequence of instructions, or a step-by-step procedure for solving a problem, where each step or instruction can be performed on a classical computer. Similarly, a quantum algorithm is a step-by-step procedure, where each of the steps can be performed on a quantum computer. Although all classical algorithms can also be performed on a quantum computer, the term quantum algorithm is generally reserved for algorithms that seem inherently quantum, or use some essential feature of quantum computation such as quantum superposition or quantum entanglement.
Problems that are undecidable using classical computers remain undecidable using quantum computers. What makes quantum algorithms interesting is that they might be able to solve some problems faster than classical algorithms because the quantum superposition and quantum entanglement that quantum algorithms exploit generally cannot be efficiently simulated on classical computers (see Quantum supremacy).
The best-known algorithms are Shor's algorithm for factoring and Grover's algorithm for searching an unstructured database or an unordered list. Shor's algorithm runs much (almost exponentially) faster than the best-known classical algorithm for factoring, the general number field sieve. Grover's algorithm runs quadratically faster than the best possible classical algorithm for the same task, a linear search.
Overview
Quantum algorithms are usually described, in the commonly used circuit model of quantum computation, by a quantum circuit that acts on some input qubits and terminates with a measurement. A quantum circuit consists of simple quantum gates, each of which acts on some finite number of qubits. Quantum algorithms may also be stated in other models of quantum computation, such as the Hamiltonian oracle model.
Quantum algorithms can be categorized by the main techniques involved in the algorithm. Some commonly used techniques/ideas in quantum algorithms include phase kick-back, phase estimation, the quantum Fourier transform, quantum walks, amplitude amplification and topological quantum field theory. Quantum algorithms may also be grouped by the type of problem solved; see, e.g., the survey on quantum algorithms for algebraic problems.
Algorithms based on the quantum Fourier transform
The quantum Fourier transform is the quantum analogue of the discrete Fourier transform, and is used in several quantum algorithms. The Hadamard transform is also an example of a quantum Fourier transform over an n-dimensional vector space over the field F2. The quantum Fourier transform can be efficiently implemented on a quantum computer using only a polynomial number of quantum gates.
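As an illustrative sketch (purely a classical reference construction, not an efficient quantum implementation), the quantum Fourier transform on n qubits can be written out as an explicit unitary matrix:

```python
import numpy as np

# Sketch: the quantum Fourier transform on n qubits as an explicit 2^n x 2^n
# unitary matrix, QFT[j, k] = omega^(j*k) / sqrt(2^n) with omega = exp(2*pi*i / 2^n).
# A quantum circuit implements the same unitary with only O(n^2) gates.
def qft_matrix(n_qubits):
    dim = 2 ** n_qubits
    omega = np.exp(2j * np.pi / dim)
    j, k = np.meshgrid(np.arange(dim), np.arange(dim), indexing="ij")
    return omega ** (j * k) / np.sqrt(dim)

F = qft_matrix(3)
print(np.allclose(F @ F.conj().T, np.eye(8)))  # True: the matrix is unitary
```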
Deutsch–Jozsa algorithm
The Deutsch–Jozsa algorithm solves a black-box problem that requires exponentially many queries to the black box for any deterministic classical computer, but can be done with a single query by a quantum computer. However, when comparing bounded-error classical and quantum algorithms, there is no speedup, since a classical probabilistic algorithm can solve the problem with a constant number of queries with small probability of error. The algorithm determines whether a function f is either constant (0 on all inputs or 1 on all inputs) or balanced (returns 1 for half of the input domain and 0 for the other half).
Bernstein–Vazirani algorithm
The Bernstein–Vazirani algorithm is the first quantum algorithm that solves a problem more efficiently than the best known classical algorithm. It was designed to create an oracle separation between BQP and BPP.
Simon's algorithm
Simon's algorithm solves a black-box problem exponentially faster than any classical algorithm, including bounded-error probabilistic algorithms. This algorithm, which achieves an exponential speedup over all classical algorithms that we consider efficient, was the motivation for Shor's algorithm for factoring.
Quantum phase estimation algorithm
The quantum phase estimation algorithm is used to determine the eigenphase of an eigenvector of a unitary gate, given a quantum state proportional to the eigenvector and access to the gate. The algorithm is frequently used as a subroutine in other algorithms.
Shor's algorithm
Shor's algorithm solves the discrete logarithm problem and the integer factorization problem in polynomial time, whereas the best known classical algorithms take super-polynomial time. It is unknown whether these problems are in P or NP-complete. It is also one of the few quantum algorithms that solves a non-black-box problem in polynomial time, where the best known classical algorithms run in super-polynomial time.
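A hedged sketch of the classical post-processing that surrounds the quantum part of Shor's algorithm is shown below; here the period is found by brute force purely for illustration, whereas on a quantum computer it would be obtained from the order-finding subroutine.

```python
import math

# Classical post-processing step of Shor's algorithm: once the period r of
# f(x) = a^x mod N is known (found by brute force here, for illustration only),
# nontrivial factors of N follow from greatest common divisors.
def period(a, N):
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    return r

N, a = 15, 7                      # toy example; gcd(a, N) must equal 1
r = period(a, N)                  # r = 4 for a = 7, N = 15
assert r % 2 == 0 and pow(a, r // 2, N) != N - 1   # conditions for the gcd step to work
p = math.gcd(pow(a, r // 2) - 1, N)
q = math.gcd(pow(a, r // 2) + 1, N)
print(r, p, q)                    # 4, 3, 5
```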
Hidden subgroup problem
The abelian hidden subgroup problem is a generalization of many problems that can be solved by a quantum computer, such as Simon's problem, solving Pell's equation, testing the principal ideal of a ring R and factoring. There are efficient quantum algorithms known for the Abelian hidden subgroup problem. The more general hidden subgroup problem, where the group is not necessarily abelian, is a generalization of the previously mentioned problems, as well as graph isomorphism and certain lattice problems. Efficient quantum algorithms are known for certain non-abelian groups. However, no efficient algorithms are known for the symmetric group, which would give an efficient algorithm for graph isomorphism and the dihedral group, which would solve certain lattice problems.
Estimating Gauss sums
A Gauss sum is a type of exponential sum. The best known classical algorithm for estimating these sums takes exponential time. Since the discrete logarithm problem reduces to Gauss sum estimation, an efficient classical algorithm for estimating Gauss sums would imply an efficient classical algorithm for computing discrete logarithms, which is considered unlikely. However, quantum computers can estimate Gauss sums to polynomial precision in polynomial time.
Fourier fishing and Fourier checking
Consider an oracle consisting of n random Boolean functions mapping n-bit strings to a Boolean value, with the goal of finding n n-bit strings z1,..., zn such that for the Hadamard-Fourier transform, at least 3/4 of the strings satisfy
and at least 1/4 satisfy
This can be done in bounded-error quantum polynomial time (BQP).
Algorithms based on amplitude amplification
Amplitude amplification is a technique that allows the amplification of a chosen subspace of a quantum state. Applications of amplitude amplification usually lead to quadratic speedups over the corresponding classical algorithms. It can be considered as a generalization of Grover's algorithm.
Grover's algorithm
Grover's algorithm searches an unstructured database (or an unordered list) with N entries for a marked entry, using only O(√N) queries instead of the O(N) queries required classically. Classically, O(N) queries are required even allowing bounded-error probabilistic algorithms.
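A minimal classical statevector simulation of Grover's algorithm is sketched below (illustrative only; it stores all amplitudes explicitly, using memory exponential in the number of qubits, which is exactly what a quantum computer avoids).

```python
import numpy as np

# Statevector sketch of Grover's algorithm for a search over N = 2^n items with
# one marked item. The amplitudes are simulated classically for illustration.
n, marked = 4, 11
N = 2 ** n
state = np.full(N, 1 / np.sqrt(N))           # uniform superposition after Hadamards

iterations = int(round(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[marked] *= -1                       # oracle: flip the sign of the marked item
    state = 2 * state.mean() - state          # diffusion: inversion about the mean

print(iterations, np.argmax(state ** 2), state[marked] ** 2)
# 3 iterations; item 11 is found with probability ~0.96
```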
Theorists have considered a hypothetical generalization of a standard quantum computer that could access the histories of the hidden variables in Bohmian mechanics. (Such a computer is completely hypothetical and would not be a standard quantum computer, or even possible under the standard theory of quantum mechanics.) Such a hypothetical computer could implement a search of an N-item database in at most O(∛N) steps. This is slightly faster than the O(√N) steps taken by Grover's algorithm. However, neither search method would allow either model of quantum computer to solve NP-complete problems in polynomial time.
Quantum counting
Quantum counting solves a generalization of the search problem. It solves the problem of counting the number of marked entries in an unordered list, instead of just detecting whether one exists. Specifically, it counts the number of marked entries in an N-element list, with an error of at most ε, by making only Θ((1/ε)√(N/k)) queries, where k is the number of marked entries in the list. More precisely, the algorithm outputs an estimate k′ for k, the number of marked entries, with accuracy |k − k′| ≤ εk.
Algorithms based on quantum walks
A quantum walk is the quantum analogue of a classical random walk. A classical random walk can be described by a probability distribution over some states, while a quantum walk can be described by a quantum superposition over states. Quantum walks are known to give exponential speedups for some black-box problems. They also provide polynomial speedups for many problems. A framework for the creation of quantum walk algorithms exists and is a versatile tool.
Boson sampling problem
The Boson Sampling Problem in an experimental configuration assumes an input of bosons (e.g., photons) of moderate number that are randomly scattered into a large number of output modes, constrained by a defined unitarity. When individual photons are used, the problem is isomorphic to a multi-photon quantum walk. The problem is then to produce a fair sample of the probability distribution of the output that depends on the input arrangement of bosons and the unitarity. Solving this problem with a classical computer algorithm requires computing the permanent of the unitary transform matrix, which may take a prohibitively long time or be outright impossible. In 2014, it was proposed that existing technology and standard probabilistic methods of generating single-photon states could be used as an input into a suitable quantum computable linear optical network and that sampling of the output probability distribution would be demonstrably superior using quantum algorithms. In 2015, investigation predicted the sampling problem had similar complexity for inputs other than Fock-state photons and identified a transition in computational complexity from classically simulable to just as hard as the Boson Sampling Problem, depending on the size of coherent amplitude inputs.
Element distinctness problem
The element distinctness problem is the problem of determining whether all the elements of a list are distinct. Classically, Ω(N) queries are required for a list of size N; however, it can be solved in Θ(N^(2/3)) queries on a quantum computer. The optimal algorithm was put forth by Andris Ambainis, and Yaoyun Shi first proved a tight lower bound when the size of the range is sufficiently large. Ambainis and Kutin independently (and via different proofs) extended that work to obtain the lower bound for all functions.
Triangle-finding problem
The triangle-finding problem is the problem of determining whether a given graph contains a triangle (a clique of size 3). The best-known lower bound for quantum algorithms is Ω(N), but the best algorithm known requires O(N^1.297) queries, an improvement over the previous best O(N^1.3) queries.
Formula evaluation
A formula is a tree with a gate at each internal node and an input bit at each leaf node. The problem is to evaluate the formula, which is the output of the root node, given oracle access to the input.
A well studied formula is the balanced binary tree with only NAND gates. This type of formula requires Θ(N^c) queries using randomness, where c = log₂((1 + √33)/4) ≈ 0.754. With a quantum algorithm, however, it can be solved in O(N^(1/2)) queries. No better quantum algorithm for this case was known until one was found for the unconventional Hamiltonian oracle model. The same result for the standard setting soon followed.
Fast quantum algorithms for more complicated formulas are also known.
Group commutativity
The problem is to determine if a black-box group, given by k generators, is commutative. A black-box group is a group with an oracle function, which must be used to perform the group operations (multiplication, inversion, and comparison with identity). The interest in this context lies in the query complexity, which is the number of oracle calls needed to solve the problem. The deterministic and randomized query complexities are and , respectively. A quantum algorithm requires queries, while the best-known classical algorithm uses queries.
BQP-complete problems
The complexity class BQP (bounded-error quantum polynomial time) is the set of decision problems solvable by a quantum computer in polynomial time with error probability of at most 1/3 for all instances. It is the quantum analogue to the classical complexity class BPP.
A problem is BQP-complete if it is in BQP and any problem in BQP can be reduced to it in polynomial time. Informally, the class of BQP-complete problems are those that are as hard as the hardest problems in BQP and are themselves efficiently solvable by a quantum computer (with bounded error).
Computing knot invariants
Witten had shown that the Chern-Simons topological quantum field theory (TQFT) can be solved in terms of Jones polynomials. A quantum computer can simulate a TQFT, and thereby approximate the Jones polynomial, which as far as we know, is hard to compute classically in the worst-case scenario.
Quantum simulation
The idea that quantum computers might be more powerful than classical computers originated in Richard Feynman's observation that classical computers seem to require exponential time to simulate many-particle quantum systems, yet quantum many-body systems are able to "solve themselves." Since then, the idea that quantum computers can simulate quantum physical processes exponentially faster than classical computers has been greatly fleshed out and elaborated. Efficient (i.e., polynomial-time) quantum algorithms have been developed for simulating both Bosonic and Fermionic systems, as well as the simulation of chemical reactions beyond the capabilities of current classical supercomputers using only a few hundred qubits. Quantum computers can also efficiently simulate topological quantum field theories. In addition to its intrinsic interest, this result has led to efficient quantum algorithms for estimating quantum topological invariants such as Jones and HOMFLY polynomials, and the Turaev-Viro invariant of three-dimensional manifolds.
Solving a linear system of equations
In 2009, Aram Harrow, Avinatan Hassidim, and Seth Lloyd, formulated a quantum algorithm for solving linear systems. The algorithm estimates the result of a scalar measurement on the solution vector to a given linear system of equations.
Provided that the linear system is sparse and has a low condition number κ, and that the user is interested in the result of a scalar measurement on the solution vector (instead of the values of the solution vector itself), then the algorithm has a runtime of O(log(N)κ²), where N is the number of variables in the linear system. This offers an exponential speedup over the fastest classical algorithm, which runs in O(Nκ) (or O(N√κ) for positive semidefinite matrices).
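To make the "scalar measurement" point concrete, the sketch below computes classically the kind of quantity the algorithm estimates; the matrices are arbitrary small Hermitian examples chosen for illustration, not drawn from the original paper.

```python
import numpy as np

# The HHL algorithm does not output the solution vector x of A x = b directly;
# it prepares a state proportional to x and estimates an expectation value
# <x|M|x> of some observable M. This is the classical analogue of that quantity.
A = np.array([[3.0, 1.0], [1.0, 2.0]])      # Hermitian, well-conditioned example (assumed)
b = np.array([1.0, 0.0])
M = np.array([[1.0, 0.0], [0.0, -1.0]])     # observable whose expectation is wanted (assumed)

x = np.linalg.solve(A, b)
x_state = x / np.linalg.norm(x)             # normalized solution state
print(x_state @ M @ x_state)                # the scalar HHL would estimate
```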
Hybrid quantum/classical algorithms
Hybrid Quantum/Classical Algorithms combine quantum state preparation and measurement with classical optimization. These algorithms generally aim to determine the ground-state eigenvector and eigenvalue of a Hermitian operator.
QAOA
The quantum approximate optimization algorithm takes inspiration from quantum annealing, performing a discretized approximation of quantum annealing using a quantum circuit. It can be used to solve problems in graph theory. The algorithm makes use of classical optimization of quantum operations to maximize an "objective function."
Variational quantum eigensolver
The variational quantum eigensolver (VQE) algorithm applies classical optimization to minimize the energy expectation value of an ansatz state to find the ground state of a Hermitian operator, such as a molecule's Hamiltonian. It can also be extended to find excited energies of molecular Hamiltonians.
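A toy, entirely classical simulation of the VQE loop is sketched below; the single-qubit "Hamiltonian" and one-parameter ansatz are assumed examples, not any particular molecule.

```python
import numpy as np
from scipy.optimize import minimize

# Toy VQE: minimize the energy expectation value of a one-parameter ansatz state
# |psi(t)> = Ry(t)|0> for an example single-qubit Hamiltonian H = 0.5 Z + 0.3 X.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
H = 0.5 * Z + 0.3 * X                        # example Hermitian "Hamiltonian" (assumed)

def energy(theta):
    t = theta[0]
    psi = np.array([np.cos(t / 2), np.sin(t / 2)])   # Ry(t) applied to |0>
    return psi @ H @ psi                              # expectation value <psi|H|psi>

result = minimize(energy, x0=[0.1], method="Nelder-Mead")   # classical optimizer step
exact = np.linalg.eigvalsh(H)[0]
print(result.fun, exact)                     # the variational and exact ground energies agree
```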
Contracted quantum eigensolver
The contracted quantum eigensolver (CQE) algorithm minimizes the residual of a contraction (or projection) of the Schrödinger equation onto the space of two (or more) electrons to find the ground- or excited-state energy and two-electron reduced density matrix of a molecule. It is based on classical methods for solving energies and two-electron reduced density matrices directly from the anti-Hermitian contracted Schrödinger equation.
See also
Quantum machine learning
Quantum optimization algorithms
Quantum sort
Primality test
References
External links
The Quantum Algorithm Zoo: A comprehensive list of quantum algorithms that provide a speedup over the fastest known classical algorithms.
Andrew Childs' lecture notes on quantum algorithms
The Quantum search algorithm - brute force .
Surveys
Quantum computing
Theoretical computer science | Quantum algorithm | [
"Mathematics"
] | 3,241 | [
"Theoretical computer science",
"Applied mathematics"
] |
632,786 | https://en.wikipedia.org/wiki/Insulin-like%20growth%20factor%201 | Insulin-like growth factor 1 (IGF-1), also called somatomedin C, is a hormone similar in molecular structure to insulin which plays an important role in childhood growth, and has anabolic effects in adults. In the 1950s IGF-1 was called "sulfation factor" because it stimulated sulfation of cartilage in vitro, and in the 1970s due to its effects it was termed "nonsuppressible insulin-like activity" (NSILA).
IGF-1 is a protein that in humans is encoded by the IGF1 gene. IGF-1 consists of 70 amino acids in a single chain with three intramolecular disulfide bridges. IGF-1 has a molecular weight of 7,649 daltons. In dogs, an ancient mutation in IGF1 is the primary cause of the toy phenotype.
IGF-1 is produced primarily by the liver. Production is stimulated by growth hormone (GH). Most of IGF-1 is bound to one of 6 binding proteins (IGF-BP). IGFBP-1 is regulated by insulin. IGF-1 is produced throughout life; the highest rates of IGF-1 production occur during the pubertal growth spurt. The lowest levels occur in infancy and old age.
Low IGF-1 levels are associated with cardiovascular disease, while high IGF-1 levels are associated with cancer. Mid-range IGF-1 levels are associated with the lowest mortality.
A synthetic analog of IGF-1, mecasermin, is used for the treatment of growth failure in children with severe IGF-1 deficiency. Cyclic glycine-proline (cGP) is a metabolite of hormone insulin-like growth factor-1 (IGF-1). It has a cyclic structure, lipophilic nature, and is enzymatically stable which makes it a more favourable candidate for manipulating the binding-release process between IGF-1 and its binding protein, thereby normalising IGF-1 function.
Synthesis and circulation
The polypeptide hormone IGF-1 is synthesized primarily in the liver upon stimulation by growth hormone (GH). It is a key mediator of anabolic activities in numerous tissues and cells, such as growth hormone-stimulated growth, metabolism and protein translation. Due to its participation in the GH-IGF-1 axis it contributes among other things to the maintenance of muscle strength, muscle mass, development of the skeleton and is a key factor in brain, eye and lung development during fetal development.
Studies have shown the importance of the GH-IGF-1 axis in directing development and growth: mice with an IGF-1 deficiency had reduced body and tissue mass, while mice with excessive expression of IGF-1 had increased mass.
The levels of IGF-1 in the body vary throughout life, depending on age; peaks of the hormone are generally observed during puberty and the postnatal period. After puberty, on entering the third decade of life, there is a rapid decrease in IGF-1 levels due to the actions of GH. Between the third and eighth decades of life, IGF-1 levels decrease gradually, although this is unrelated to functional decline. Protein intake, however, has been shown to increase IGF-1 levels.
Mechanism of action
IGF-1 is a primary mediator of the effects of growth hormone (GH). Growth hormone is made in the anterior pituitary gland, released into the bloodstream, and then stimulates the liver to produce IGF-1. IGF-1 then stimulates systemic body growth, and has growth-promoting effects on almost every cell in the body, especially skeletal muscle, cartilage, bone, liver, kidney, nerve, skin, hematopoietic, and lung cells. In addition to the insulin-like effects, IGF-1 can also regulate cellular DNA synthesis.
IGF-1 binds to at least two cell surface receptor tyrosine kinases: the IGF-1 receptor (IGF1R), and the insulin receptor. Its primary action is mediated by binding to its specific receptor, IGF1R, which is present on the surface of many cell types in many tissues. Binding to the IGF1R initiates intracellular signaling. IGF-1 is one of the most potent natural activators of the Akt signaling pathway, a stimulator of cell growth and proliferation, and a potent inhibitor of programmed cell death. The IGF-1 receptor and insulin receptor are two closely related members of a transmembrane tetrameric tyrosine kinase receptor family. They control vital brain functions, such as survival, growth, energy metabolism, longevity, neuroprotection and neuroregeneration.
Metabolic effects
As a major growth factor, IGF-1 is responsible for stimulating growth of all cell types, and causing significant metabolic effects. One important metabolic effect of IGF-1 is signaling cells that sufficient nutrients are available for them to undergo hypertrophy and cell division. Its effects also include inhibiting cell apoptosis and increasing the production of cellular proteins. IGF-1 receptors are ubiquitous, which allows for metabolic changes caused by IGF-1 to occur in all cell types. IGF-1's metabolic effects are far-reaching and can coordinate protein, carbohydrate, and fat metabolism in a variety of different cell types. The regulation of IGF-1's metabolic effects on target tissues is also coordinated with other hormones such as growth hormone and insulin.
The IGF system
IGF-1 is part of the insulin-like growth factor (IGF) system. This system consists of three ligands (insulin, IGF-1 and IGF-2), two tyrosine kinase receptors (insulin receptor and IGF-1R receptor) and six ligand binding proteins (IGFBP 1–6). Together they play an essential role in proliferation, survival, regulation of cell growth and affect almost every organ system in the body.
Similarly to IGF-1, IGF-2 is mainly produced in the liver and after it is released into circulation, it stimulates growth and cell proliferation. IGF-2 is thought to be a fetal growth factor, as it is essential for a normal embryonic development and is highly expressed in embryonic and neonatal tissues.
Variants
A splice variant of IGF-1 sharing an identical mature region, but with a different E domain is known as mechano-growth factor (MGF).
Related disorders
Laron syndrome
Acromegaly
Acromegaly is a syndrome caused by the anterior pituitary gland producing excess growth hormone (GH). A number of disorders may increase the pituitary's GH output, although most commonly it involves a tumor called pituitary adenoma, derived from a distinct type of cell (somatotrophs). It leads to anatomical changes and metabolic dysfunction caused by elevated GH and IGF-1 levels.
High level of IGF-1 in acromegaly is related to an increased risk of some cancers, particularly colon cancer and thyroid cancer.
Use as a diagnostic test
Growth hormone deficiency
IGF-1 levels can be analyzed and used by physicians as a screening test for growth hormone deficiency (GHD), acromegaly and gigantism. However, IGF-1 has been shown to be a bad diagnostic screening test for growth hormone deficiency.
The ratio of IGF-1 and insulin-like growth factor-binding protein 3 has been shown to be a useful diagnostic test for GHD.
Liver fibrosis
Low serum IGF-1 levels have been suggested as a biomarker for predicting fibrosis, but not steatosis, in people with metabolic dysfunction–associated steatotic liver disease.
Causes of elevated IGF-1 levels
Medical conditions:
acromegaly (especially when GH is also high)
delayed puberty
pregnancy
hyperthyroidism
some rare tumors, such as carcinoids, secreting IGF-1
Diet:
High-protein diet
consumption of dairy products (except for cheese)
consumption of fish
IGF-1 assay problems
Calorie restriction has been found to have no effect on IGF-1 levels.
Causes of reduced IGF-1 levels
Metabolic dysfunction–associated steatotic liver disease, especially at advanced stages of steatohepatitis and fibrosis
Health effects
Mortality
Both high and low levels of IGF‐1 increase mortality risk, with the mid‐range (120–160 ng/ml) being associated with the lowest mortality.
Cancer
Higher levels of IGF-1 are associated with an increased risk of breast cancer, colon cancer and lung cancer.
Dairy consumption
It has been suggested that consumption of IGF-1 in dairy products could increase cancer risk, particularly prostate cancer. However, significant levels of intact IGF-1 from oral consumption are not absorbed as they are digested by gastric enzymes. IGF-1 present in food is not expected to be active within the body in the way that IGF-1 is produced by the body itself.
The Food and Drug Administration has stated that IGF-I concentrations in milk are not significant when evaluated against concentrations of IGF-I endogenously produced in humans.
A 2018 review by the Committee on Carcinogenicity of Chemicals in Food, Consumer Products and the Environment (COC) concluded that there is "insufficient evidence to draw any firm conclusions as to whether exposure to dietary IGF-1 is associated with an increased incidence of cancer in consumers". Certain dairy processes such as fermentation are known to significantly decrease IGF-1 concentrations. The British Dietetic Association has described the idea that milk promotes hormone related cancerous tumor growth as a myth, stating "no link between dairy containing diets and risk of cancer or promoting cancer growth as a result of hormones".
Cardiovascular disease
Increased IGF-1 levels are associated with a 16% lower risk of cardiovascular disease and a 28% reduction of cardiovascular events.
Diabetes
Low IGF-1 levels are shown to increase the risk of developing type 2 diabetes and insulin resistance. On the other hand, a high IGF-1 bioavailability in people with diabetes may delay or prevent diabetes-associated complications, as it improves impaired small blood vessel function.
IGF-1 has been characterized as an insulin sensitizer.
Low serum IGF‐1 levels can be considered an indicator of liver fibrosis in type 2 diabetes mellitus patients.
See also
Somatopause
References
External links
Peptide hormones
Hormones of the somatotropic axis
Insulin-like growth factor receptor agonists
Insulin receptor agonists
Aging-related proteins
Neurotrophic factors
Developmental neuroscience
de:IGF-1 | Insulin-like growth factor 1 | [
"Chemistry",
"Biology"
] | 2,234 | [
"Signal transduction",
"Senescence",
"Neurotrophic factors",
"Neurochemistry",
"Aging-related proteins"
] |
633,000 | https://en.wikipedia.org/wiki/Martensitic%20stainless%20steel | Martensitic stainless steel is a family of stainless steel alloy that has a martensite (body-centered tetragonal) crystal structure. It can be hardened and tempered through aging and heat treatment. The other main types of stainless steel are austenitic, ferritic, duplex, and precipitation hardened.
History
In 1912, Harry Brearley of the Brown-Firth research laboratory in Sheffield, England, while seeking a corrosion-resistant alloy for gun barrels, discovered and subsequently industrialized a martensitic stainless steel alloy. The discovery was announced two years later in a January 1915 newspaper article in The New York Times. Brearley applied for a U.S. patent in 1915. This was later marketed under the "Staybrite" brand by Firth Vickers in England and was used for the new entrance canopy for the Savoy Hotel in 1929 in London.
The characteristic body-centered tetragonal martensite microstructure was first observed by German microscopist Adolf Martens around 1890. In 1912, Elwood Haynes applied for a U.S. patent on a martensitic stainless steel alloy. This patent was not granted until 1919.
Overview
Martensitic stainless steels can be high- or low-carbon steels built around the composition of iron, 12% up to 17% chromium, carbon from 0.10% (Type 410) up to 1.2% (Type 440C):
Up to about 0.4% C they are used mostly for their mechanical properties in applications such as pumps, valves, and shafts.
Above 0.4% C they are used mostly for their wear resistance, such as in cutlery, surgical blades, plastic injection molds, and nozzles.
They may contain some Ni (Type 431) which allows a higher Cr and/or Mo content, thereby improving corrosion resistance and as the carbon content is also lower, the toughness is improved. Grade EN 1.4313 (CA6NM) with a low C, 13% Cr and 4% Ni offers good mechanical properties, good castability, and good weldability. It is used for nearly all the hydroelectric turbines in the world, including those of the huge "Three Gorges" dam in China.
Additions of B, Co, Nb, Ti improve the high temperature properties, particularly creep resistance. This is used for heat exchangers in steam turbines.
A specific grade is Type 630 (also called 17-4 PH) which is martensitic and hardens by precipitation at .
Chemical compositions
There are many proprietary grades not listed in the standards, particularly for cutlery.
Mechanical Properties
Martensitic stainless alloys are hardenable by heat treatment, specifically by quenching and stress relieving, or by quenching and tempering (referred to as QT). The alloy composition and the high cooling rate of quenching enable the formation of martensite. Untempered martensite is low in toughness and therefore brittle. Tempered martensite gives steel good hardness and high toughness, as can be seen below, and is largely used for medical surgical instruments, such as scalpels, razors, and internal clamps.
In the heat treatment column, QT refers to Quenched and Tempered, P refers to Precipitation hardened
Physical properties
Processing
When formability, softness, etc. are required in fabrication, steel having 0.12% maximum carbon is often used in soft condition. With increasing carbon, it is possible by hardening and tempering to obtain tensile strength in the range of , combined with reasonable toughness and ductility. In this condition, these steels find many useful general applications where mild corrosion resistance is required. Also, with the higher carbon range in the hardened and lightly tempered condition, tensile strength of about may be developed with lowered ductility.
A common example of a Martensitic stainless steel is X46Cr13.
Martensitic stainless steel can be nondestructively tested using the magnetic particle inspection method, unlike austenitic stainless steel.
Applications
Martensitic stainless steels are often used, depending upon their carbon content, for their corrosion resistance and high strength in pumps, valves, and boat shafts.
They are also used for their wear resistance in cutlery, medical tools (scalpels, razors and internal clamps), ball bearings, razor blades, injection molds for polymers, and brake disks for bicycles and motorbikes.
References
Building materials
Stainless steel | Martensitic stainless steel | [
"Physics",
"Engineering"
] | 911 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
633,233 | https://en.wikipedia.org/wiki/Gravitino | In supergravity theories combining general relativity and supersymmetry, the gravitino () is the gauge fermion supersymmetric partner of the hypothesized graviton. It has been suggested as a candidate for dark matter.
If it exists, it is a fermion of spin 3/2 and therefore obeys the Rarita–Schwinger equation. The gravitino field is conventionally written as ψμα with a four-vector index and a spinor index.
As with every massless particle of spin 1 or higher, one would otherwise get negative norm modes. These modes are unphysical, and for consistency there must be a gauge symmetry which cancels these modes: ψμα → ψμα + ∂μεα, where εα(x) is a spinor function of spacetime. This gauge symmetry is a local supersymmetry transformation, and the resulting theory is supergravity.
Thus the gravitino is the fermion mediating supergravity interactions, just as the photon mediates electromagnetism and the graviton presumably mediates gravitation. Whenever supersymmetry is broken in supergravity theories, the gravitino acquires a mass which is determined by the scale at which supersymmetry is broken. This varies greatly between different models of supersymmetry breaking, but if supersymmetry is to solve the hierarchy problem of the Standard Model, the gravitino cannot be more massive than about 1 TeV/c².
History
Murray Gell-Mann and Peter van Nieuwenhuizen intended the spin-3/2 particle associated with supergravity to be called the 'hemitrion', meaning 'half-3', however the editors of Physical Review were not keen on the name and instead suggested 'massless Rarita–Schwinger particle' for their 1977 publication. The current name of gravitino was instead suggested by Sidney Coleman and Heinz Pagels, although this term was originally coined in 1954 by Felix Pirani to describe a class of negative energy excitations with zero rest mass.
Gravitino cosmological problem
If the gravitino indeed has a mass of the order of TeV, then it creates a problem in the standard model of cosmology, at least naïvely.
One option is that the gravitino is stable. This would be the case if the gravitino is the lightest supersymmetric particle and R-parity is conserved (or nearly so). In this case the gravitino is a candidate for dark matter; as such gravitinos will have been created in the very early universe. However, one may calculate the density of gravitinos and it turns out to be much higher than the observed dark matter density.
The other option is that the gravitino is unstable. Thus the gravitinos mentioned above would decay and will not contribute to the observed dark matter density. However, since they decay only through gravitational interactions, their lifetime would be very long, of the order of Mpl^2/m^3 in natural units, where Mpl is the Planck mass and m is the mass of a gravitino. For a gravitino mass of the order of TeV this would be roughly 10^5 seconds, much later than the era of nucleosynthesis. At least one possible channel of decay must include either a photon, a charged lepton or a meson, each of which would be energetic enough to destroy a nucleus if it strikes one. One can show that enough such energetic particles will be created in the decay as to destroy almost all the nuclei created in the era of nucleosynthesis, in contrast with observations. In fact, in such a case the universe would have been made of hydrogen alone, and star formation would probably be impossible.
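As a rough order-of-magnitude check, taking $M_{pl} \approx 1.2\times 10^{19}$ GeV and $m \approx 1$ TeV:
$$\tau \sim \frac{M_{pl}^2}{m^3} \approx \frac{\left(1.2\times 10^{19}\ \mathrm{GeV}\right)^2}{\left(10^{3}\ \mathrm{GeV}\right)^3} \approx 1.4\times 10^{29}\ \mathrm{GeV}^{-1} \approx 10^{5}\ \mathrm{s},$$
using $1\ \mathrm{GeV}^{-1} \approx 6.6\times 10^{-25}\ \mathrm{s}$.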
One possible solution to the cosmological gravitino problem is the split supersymmetry model, where the gravitino mass is much higher than the TeV scale, but other fermionic supersymmetric partners of standard model particles already appear at this scale.
Another solution is that R-parity is slightly violated and the gravitino is the lightest supersymmetric particle. This causes almost all supersymmetric particles in the early Universe to decay into Standard Model particles via R-parity violating interactions well before the synthesis of primordial nuclei; a small fraction however decay into gravitinos, whose half-life is orders of magnitude greater than the age of the Universe due to the suppression of the decay rate by the Planck scale and the small R-parity violating couplings.
See also
Dual graviton
Graviton
Gravity
Supersymmetry
References
Fermions
Quantum gravity
Supersymmetry
Hypothetical elementary particles | Gravitino | [
"Physics",
"Materials_science"
] | 966 | [
"Symmetry",
"Fermions",
"Unsolved problems in physics",
"Quantum gravity",
"Subatomic particles",
"Condensed matter physics",
"Hypothetical elementary particles",
"Supersymmetry",
"Physics beyond the Standard Model",
"Matter"
] |
633,423 | https://en.wikipedia.org/wiki/Visual%20Molecular%20Dynamics | Visual Molecular Dynamics (VMD) is a molecular modelling and visualization computer program. VMD is developed mainly as a tool to view and analyze the results of molecular dynamics simulations. It also includes tools for working with volumetric data, sequence data, and arbitrary graphics objects. Molecular scenes can be exported to external rendering tools such as POV-Ray, RenderMan, Tachyon, Virtual Reality Modeling Language (VRML), and many others. Users can run their own Tcl and Python scripts within VMD as it includes embedded Tcl and Python interpreters. VMD runs on Unix, Apple Mac macOS, and Microsoft Windows. VMD is available to non-commercial users under a distribution-specific license which permits both use of the program and modification of its source code, at no charge.
History
VMD has been developed under the aegis of principal investigator Klaus Schulten in the Theoretical and Computational Biophysics group at the Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana–Champaign. A precursor program, called VRChem, was developed in 1992 by Mike Krogh, William Humphrey, and Rick Kufrin. The initial version of VMD was written by William Humphrey, Andrew Dalke, Ken Hamer, Jon Leech, and James Phillips. It was released in 1995. The earliest versions of VMD were developed for Silicon Graphics workstations and could also run in a cave automatic virtual environment (CAVE) and communicate with a Nanoscale Molecular Dynamics (NAMD) simulation. VMD was further developed by A. Dalke, W. Humphrey, and J. Ulrich in 1995–1996, followed by Sergei Izrailev and J. Stone during 1997–1998. In 1998, John Stone became the main VMD developer, porting VMD to many other Unix operating systems and completing the first full-featured OpenGL version. The first version of VMD for the Microsoft Windows platform was released in 1999. In 2001, Justin Gullingsrud, Paul Grayson, and John Stone added support for haptic feedback devices and further developed the interface between VMD and NAMD for performing interactive molecular dynamics simulations. In subsequent developments, Jordi Cohen, Gullingsrud, and Stone entirely rewrote the graphical user interfaces, added built-in support for display and processing of volumetric data, and added the use of the OpenGL Shading Language.
Interprocess communication
VMD can communicate with other programs via Tcl/Tk. This communication allows the development of several external plugins that work together with VMD. These plugins increase the set of features and tools of VMD, making it one of the most widely used programs in computational chemistry, biology, and biochemistry.
Here is a list of some VMD plugins developed using Tcl/Tk:
Delphi Force — electrostatic force calculation and visualization
Pathways Plugin — identify dominant electron transfer pathways and estimate donor-to-acceptor electronic tunneling
Check Sidechains Plugin — checks and helps select best orientation and protonation state for Asn, Gln, and His side chains
MultiMSMS Plugin — caches MSMS calculations to speedup the animation of a sequence of frames
Interactive Essential Dynamics — Interactive visualization of essential dynamics
Mead Ionize — Improved version of autoionize for highly charged systems
Andriy Anishkin's VMD Scripts — Many useful VMD scripts for visualization and analysis
RMSD Trajectory Tool — Development version of RMSD plugin for trajectories
Clustering Tool — Visualize clusters of conformations of a structure
iTrajComp — interactive Trajectory Comparison tool
Swap — Atomic coordinate swapping for improved RMSD alignment
Intervor — Protein-Protein interface extraction and display
SurfVol — Measure surface area and volume of proteins
vmdICE — Plugin for computing RMSD, RMSF, SASA, and other time-varying quantities
molUP - A VMD plugin to handle QM and ONIOM calculations using the gaussian software
VMD Store - A VMD extension that helps users to discover, install, and update other VMD plugins.
See also
References
External links
VMD on GPUs
Protein workbench STRAP
Molecular modelling software | Visual Molecular Dynamics | [
"Chemistry"
] | 859 | [
"Molecular modelling",
"Molecular modelling software",
"Computational chemistry software"
] |
634,016 | https://en.wikipedia.org/wiki/Fluorescence%20recovery%20after%20photobleaching | Fluorescence recovery after photobleaching (FRAP) is a method for determining the kinetics of diffusion through tissue or cells. It is capable of quantifying the two-dimensional lateral diffusion of a molecularly thin film containing fluorescently labeled probes, or to examine single cells. This technique is very useful in biological studies of cell membrane diffusion and protein binding. In addition, surface deposition of a fluorescing phospholipid bilayer (or monolayer) allows the characterization of hydrophilic (or hydrophobic) surfaces in terms of surface structure and free energy.
Similar, though less well known, techniques have been developed to investigate the 3-dimensional diffusion and binding of molecules inside the cell; they are also referred to as FRAP.
Experimental setup
The basic apparatus comprises an optical microscope, a light source and some fluorescent probe. Fluorescent emission is contingent upon absorption of a specific optical wavelength or color which restricts the choice of lamps. Most commonly, a broad spectrum mercury or xenon source is used in conjunction with a color filter. The technique begins by saving a background image of the sample before photobleaching. Next, the light source is focused onto a small patch of the viewable area either by switching to a higher magnification microscope objective or with laser light of the appropriate wavelength. The fluorophores in this region receive high intensity illumination which causes their fluorescence lifetime to quickly elapse (limited to roughly 10^5 photons before extinction). Now the image in the microscope is that of a uniformly fluorescent field with a noticeable dark spot. As Brownian motion proceeds, the still-fluorescing probes will diffuse throughout the sample and replace the non-fluorescent probes in the bleached region. This diffusion proceeds in an ordered fashion, analytically determinable from the diffusion equation. Assuming a Gaussian profile for the bleaching beam, the diffusion constant D can be simply calculated from:
$$D = \frac{w^2}{4 t_D}$$
where $w$ is the radius of the beam and $t_D$ is the "characteristic" diffusion time.
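A minimal sketch, assuming the spot radius is given in micrometres and the characteristic time in seconds (the helper name is illustrative), evaluates this relation directly:

```python
def diffusion_constant(w_um: float, t_d_s: float) -> float:
    """Lateral diffusion constant (um^2/s) from the bleach spot radius w (um)
    and the characteristic diffusion time t_D (s), using D = w**2 / (4 * t_D)."""
    return w_um ** 2 / (4.0 * t_d_s)

# Illustrative values: a 1 um spot with a 0.25 s characteristic recovery time
print(diffusion_constant(1.0, 0.25))  # 1.0 um^2/s
```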
Applications
Supported lipid bilayers
Originally, the FRAP technique was intended for use as a means to characterize the mobility of individual lipid molecules within a cell membrane. While providing great utility in this role, current research leans more toward investigation of artificial lipid membranes. Supported by hydrophilic or hydrophobic substrates (to produce lipid bilayers or monolayers respectively) and incorporating membrane proteins, these biomimetic structures are potentially useful as analytical devices for determining the identity of unknown substances, understanding cellular transduction, and identifying ligand binding sites.
Protein binding
This technique is commonly used in conjunction with green fluorescent protein (GFP) fusion proteins, where the studied protein is fused to a GFP. When excited by a specific wavelength of light, the protein will fluoresce. When the protein that is being studied is produced with the GFP, then the fluorescence can be tracked. Photodestroying the GFP, and then watching the repopulation into the bleached area can reveal information about protein interaction partners, organelle continuity and protein trafficking.
If after some time the fluorescence doesn't reach the initial level anymore, then some part of the fluorescence is caused by an immobile fraction (that cannot be replenished by diffusion). Similarly, if the fluorescent proteins bind to static cell receptors, the rate of recovery will be retarded by a factor related to the association and disassociation coefficients of binding. This observation has most recently been exploited to investigate protein binding. Similarly, if the GFP labeled protein is constitutively incorporated into a larger complex, the dynamics of fluorescence recovery will be characterized by the diffusion of the larger complex.
Applications outside the membrane
FRAP can also be used to monitor proteins outside the membrane. After the protein of interest is made fluorescent, generally by expression as a GFP fusion protein, a confocal microscope is used to photobleach and monitor a region of the cytoplasm, mitotic spindle, nucleus, or another cellular structure. The mean fluorescence in the region can then be plotted versus time since the photobleaching, and the resulting curve can yield kinetic coefficients, such as those for the protein's binding reactions and/or the protein's diffusion coefficient in the medium where it is being monitored. Often the only dynamics considered are diffusion and binding/unbinding interactions, however, in principle proteins can also move via flow, i.e., undergo directed motion, and this was recognized very early by Axelrod et al. This could be due to flow of the cytoplasm or nucleoplasm, or transport along filaments in the cell such as microtubules by molecular motors.
The analysis is most simple when the fluorescence recovery is limited by either the rate of diffusion into the bleached area or by rate at which bleached proteins unbind from their binding sites within the bleached area, and are replaced by fluorescent protein. Let us look at these two limits, for the common case of bleaching a GFP fusion protein in a living cell.
Diffusion-limited fluorescence recovery
For a circular bleach spot of radius $r$ and diffusion-dominated recovery, the fluorescence is described by an equation derived by Soumpasis (which involves the modified Bessel functions $I_0$ and $I_1$)
$$f(t) = e^{-2\tau_D/t}\left[I_0\!\left(\frac{2\tau_D}{t}\right) + I_1\!\left(\frac{2\tau_D}{t}\right)\right]$$
with $\tau_D$ the characteristic timescale for diffusion, and $t$ is the time. $f(t)$ is the normalized fluorescence (goes to 1 as $t$ goes to infinity). The diffusion timescale for a bleached spot of radius $r$ is $\tau_D = r^2/(4D)$, with $D$ the diffusion coefficient.
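A short sketch, assuming SciPy is available and using illustrative parameter values, evaluates this recovery curve numerically:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_n

def soumpasis_recovery(t: np.ndarray, tau_d: float) -> np.ndarray:
    """Normalized fluorescence f(t) for diffusion-limited recovery of a
    circular bleach spot with characteristic diffusion time tau_d."""
    x = 2.0 * tau_d / t
    return np.exp(-x) * (iv(0, x) + iv(1, x))

t = np.linspace(0.1, 20.0, 5)              # seconds (illustrative)
print(soumpasis_recovery(t, tau_d=1.0))    # rises toward 1 as t grows
```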
Note that this is for an instantaneous bleach with a step function profile, i.e., the fraction of protein assumed to be bleached instantaneously at time is , and , for is the distance from the centre of the bleached area. It is also assumed that the recovery can be modelled by diffusion in two dimensions, that is also both uniform and isotropic. In other words, that diffusion is occurring in a uniform medium so the effective diffusion constant D is the same everywhere, and that the diffusion is isotropic, i.e., occurs at the same rate along all axes in the plane.
In practice, in a cell none of these assumptions will be strictly true.
Bleaching will not be instantaneous. Particularly if strong bleaching of a large area is required, bleaching may take a significant fraction of the diffusion timescale . Then a significant fraction of the bleached protein will diffuse out of the bleached region actually during bleaching. Failing to take account of this will introduce a significant error into D.
The bleached profile will not be a radial step function. If the bleached spot is effectively a single pixel then the bleaching as a function of position will typically be diffraction limited and determined by the optics of the confocal laser scanning microscope used. This is not a radial step function and also varies along the axis perpendicular to the plane.
Cells are of course three-dimensional not two-dimensional, as is the bleached volume. Neglecting diffusion out of the plane (we take this to be the xy plane) will be a reasonable approximation only if the fluorescence recovers predominantly via diffusion in this plane. This will be true, for example, if a cylindrical volume is bleached with the axis of the cylinder along the z axis and with this cylindrical volume going through the entire height of the cell. Then diffusion along the z axis does not cause fluorescence recovery as all protein is bleached uniformly along the z axis, and so neglecting it, as Soumpasis' equation does, is harmless. However, if diffusion along the z axis does contribute to fluorescence recovery then it must be accounted for.
There is no reason to expect the cell cytoplasm or nucleoplasm to be completely spatially uniform or isotropic.
Thus, the equation of Soumpasis is just a useful approximation, that can be used when the assumptions listed above are good approximations to the true situation, and when the recovery of fluorescence is indeed limited by the timescale of diffusion . Note that just because the Soumpasis can be fitted adequately to data does not necessarily imply that the assumptions are true and that diffusion dominates recovery.
Reaction-limited recovery
The equation describing the fluorescence as a function of time is particularly simple in another limit. If a large number of proteins bind to sites in a small volume such that there the fluorescence signal is dominated by the signal from bound proteins, and if this binding is all in a single state with an off rate koff, then the fluorescence as a function of time is given by
$$f(t) = 1 - e^{-k_{\mathrm{off}} t}$$
Note that the recovery depends on the rate constant for unbinding, koff, only. It does not depend on the on rate for binding. It does, however, depend on a number of assumptions:
The on rate must be sufficiently large in order for the local concentration of bound protein to greatly exceed the local concentration of free protein, and so allow us to neglect the contribution to f of the free protein.
The reaction is a simple bimolecular reaction, where the protein binds to localised sites that do not move significantly during recovery
Exchange is much slower than diffusion (or whatever transport mechanism is responsible for mobility), as only then does the diffusing fraction recover rapidly and then act as the source of fluorescent protein that binds and replaces the bound bleached protein and so increases the fluorescence. With r the radius of the bleached spot, this means that the equation is only valid if the bound lifetime satisfies $1/k_{\mathrm{off}} \gg r^2/(4D)$.
If all these assumptions are satisfied, then fitting an exponential to the recovery curve will give the off rate constant, koff. However, other dynamics can give recovery curves similar to exponentials, so fitting an exponential does not necessarily imply that recovery is dominated by a simple bimolecular reaction. One way to distinguish between recovery with a rate determined by unbinding and recovery that is limited by diffusion is to note that the recovery rate for unbinding-limited recovery is independent of the size of the bleached area r, while it scales as $1/r^2$ for diffusion-limited recovery. Thus if a small and a large area are bleached, if recovery is limited by unbinding then the recovery rates will be the same for the two sizes of bleached area, whereas if recovery is limited by diffusion then it will be much slower for the larger bleached area.
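The size scaling is easy to check numerically; a minimal sketch with illustrative rate and diffusion values contrasts the two limits:

```python
import numpy as np

def reaction_limited(t, k_off):
    """Recovery in the binding-dominated limit; independent of the bleach radius r."""
    return 1.0 - np.exp(-k_off * t)

def diffusion_timescale(r_um, d_um2_s):
    """Characteristic diffusion time r**2 / (4 D); quadruples when r doubles."""
    return r_um ** 2 / (4.0 * d_um2_s)

print(reaction_limited(np.array([1.0, 5.0]), k_off=0.5))             # same for any r
print(diffusion_timescale(1.0, 1.0), diffusion_timescale(2.0, 1.0))  # 0.25 s vs 1.0 s
```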
Diffusion and reaction
In general, the recovery of fluorescence will not be dominated by either simple isotropic diffusion, or by a single simple unbinding rate. There will be both diffusion and binding, and indeed the diffusion constant may not be uniform in space, and there may be more than one type of binding sites, and these sites may also have a non-uniform distribution in space. Flow processes may also be important. This more complex behavior implies that a model with several parameters is required to describe the data; models with only either a single diffusion constant D or a single off rate constant, koff, are inadequate.
There are models with both diffusion and reaction. Unfortunately, a single FRAP curve may provide insufficient evidence to reliably and uniquely fit (possibly noisy) experimental data. Sadegh Zadeh et al. have shown that FRAP curves can be fitted by different pairs of values of the diffusion constant and the on-rate constant, or, in other words, that fits to the FRAP are not unique. This is in three-parameter (on-rate constant, off-rate constant and diffusion constant) fits. Fits that are not unique, are not generally useful.
Thus for models with a number of parameters, a single FRAP experiment may be insufficient to estimate all the model parameters. Then more data is required, e.g., by bleaching areas of different sizes, determining some model parameters independently, etc.
See also
Fluorescence microscope
Photobleaching
Fluorescence loss in photobleaching (FLIP)
References
Cell imaging
Biochemistry methods
Fluorescence
Microscopy
Fluorescence techniques
Biophysics | Fluorescence recovery after photobleaching | [
"Physics",
"Chemistry",
"Biology"
] | 2,482 | [
"Biochemistry methods",
"Luminescence",
"Fluorescence",
"Applied and interdisciplinary physics",
"Biophysics",
"Microscopy",
"Biochemistry",
"Cell imaging",
"Fluorescence techniques"
] |
634,140 | https://en.wikipedia.org/wiki/Breeder | A breeder is a person who selectively breeds carefully selected mates, normally of the same breed, to sexually reproduce offspring with specific, consistently replicable qualities and characteristics. This might be as a farmer, agriculturalist, or hobbyist, and can be practiced on a large or small scale, for food, fun, or profit.
About
A breeder can breed purebred pets such as cats or dogs, or livestock such as cattle or horses, and may show their animals professionally in assorted forms of competition. In these instances, the breeder strives to meet standards for each animal set out by breed organizations. A breeder may also assist with breeding animals in a zoo. In other cases, the term breeder can refer to an animal scientist who develops more efficient ways to produce the meat and other animal products humans eat.
Earnings as a breeder vary widely because of the various types of work involved in the job title. Even in breeding small domestic animals, the earnings differ. They mostly depend on the type of animal being bred and whether or not the breeder has a reputation for breeding champions. The US Bureau of Labor Statistics reports that large animal breeders who work as veterinarians earned a median annual income of $61,029 in 2006. Other individuals employed in the field of animal science earned $47,800.
Required education
To breed small and domestic animals, no formal training or credentials are required, though it is recommended that breeders familiarize themselves with the desired and standard characteristics of the breed they work with. For those who are seeking to breed more exotic animals, such as those in a zoo, a bachelor's degree in veterinary science is needed. It is also recommended that the individual go on to graduate school and specialize in zoology. To breed agricultural animals, a 4-year degree in agricultural science is needed for most entry-level positions.
See also
Animal breeding
Animal husbandry
Animal fancy
Breeding in the wild
Plant breeding
Dog breeding
References
Animal husbandry occupations
Pets
Breeding | Breeder | [
"Biology"
] | 405 | [
"Behavior",
"Breeding",
"Reproduction"
] |
634,183 | https://en.wikipedia.org/wiki/Radio%20spectrum | The radio spectrum is the part of the electromagnetic spectrum with frequencies from 3 Hz to 3,000 GHz (3 THz). Electromagnetic waves in this frequency range, called radio waves, are widely used in modern technology, particularly in telecommunication. To prevent interference between different users, the generation and transmission of radio waves is strictly regulated by national laws, coordinated by an international body, the International Telecommunication Union (ITU).
Different parts of the radio spectrum are allocated by the ITU for different radio transmission technologies and applications; some 40 radiocommunication services are defined in the ITU's Radio Regulations (RR). In some cases, parts of the radio spectrum are sold or licensed to operators of private radio transmission services (for example, cellular telephone operators or broadcast television stations). Ranges of allocated frequencies are often referred to by their provisioned use (for example, cellular spectrum or television spectrum). Because it is a fixed resource which is in demand by an increasing number of users, the radio spectrum has become increasingly congested in recent decades, and the need to utilize it more effectively is driving modern telecommunications innovations such as trunked radio systems, spread spectrum, ultra-wideband, frequency reuse, dynamic spectrum management, frequency pooling, and cognitive radio.
Limits
The frequency boundaries of the radio spectrum are a matter of convention in physics and are somewhat arbitrary. Since radio waves are the lowest frequency category of electromagnetic waves, there is no lower limit to the frequency of radio waves. Radio waves are defined by the ITU as: "electromagnetic waves of frequencies arbitrarily lower than 3000 GHz, propagated in space without artificial guide". At the high frequency end the radio spectrum is bounded by the infrared band. The boundary between radio waves and infrared waves is defined at different frequencies in different scientific fields. The terahertz band, from 300 gigahertz to 3 terahertz, can be considered either as microwaves or infrared. It is the highest band categorized as radio waves by the International Telecommunication Union, but spectroscopic scientists consider these frequencies part of the far infrared and mid infrared bands.
Because it is a fixed resource, the practical limits and basic physical considerations of the radio spectrum, the frequencies which are useful for radio communication, are determined by technological limitations which are impossible to overcome. So although the radio spectrum is becoming increasingly congested, there is no possible way to add additional frequency bandwidth outside of that currently in use. The lowest frequencies used for radio communication are limited by the increasing size of transmitting antennas required. The size of antenna required to radiate radio power efficiently increases in proportion to wavelength or inversely with frequency. Below about 10 kHz (a wavelength of 30 km), elevated wire antennas kilometers in diameter are required, so very few radio systems use frequencies below this. A second limit is the decreasing bandwidth available at low frequencies, which limits the data rate that can be transmitted. Below about 30 kHz, audio modulation is impractical and only slow baud rate data communication is used. The lowest frequencies that have been used for radio communication are around 80 Hz, in ELF submarine communications systems built by a few nations' navies to communicate with their submerged submarines hundreds of meters underwater. These employ huge ground dipole antennas 20–60 km long excited by megawatts of transmitter power, and transmit data at an extremely slow rate of about 1 bit per minute (17 millibits per second, or about 5 minutes per character).
The highest frequencies useful for radio communication are limited by the absorption of microwave energy by the atmosphere. As frequency increases above 30 GHz (the beginning of the millimeter wave band), atmospheric gases absorb increasing amounts of power, so the power in a beam of radio waves decreases exponentially with distance from the transmitting antenna. At 30 GHz, useful communication is limited to about 1 km, but as frequency increases the range at which the waves can be received decreases. In the terahertz band above 300 GHz, the radio waves are attenuated to zero within a few meters due to the absorption of electromagnetic radiation by the atmosphere (mainly due to ozone, water vapor and carbon dioxide), which is so great that it is essentially opaque to electromagnetic emissions, until it becomes transparent again near the near-infrared and optical window frequency ranges.
Bands
A radio band is a small frequency band (a contiguous section of the range of the radio spectrum) in which channels are usually used or set aside for the same purpose. To prevent interference and allow for efficient use of the radio spectrum, similar services are allocated in bands. For example, broadcasting, mobile radio, or navigation devices, will be allocated in non-overlapping ranges of frequencies.
Band plan
For each radio band, the ITU has a band plan (or frequency plan) which dictates how it is to be used and shared, to avoid interference and to set protocol for the compatibility of transmitters and receivers.
Each frequency plan defines the frequency range to be included, how channels are to be defined, and what will be carried on those channels. Typical definitions set forth in a frequency plan are:
numbering scheme – which channel numbers or letters (if any) will be assigned
center frequencies – how far apart the carrier wave for each channel will be
bandwidth and/or deviation – how wide each channel will be
spectral mask – how extraneous signals will be attenuated by frequency
modulation – what type will be used or are permissible
content – what types of information are allowed, such as audio or video, analog or digital
licensing – what the procedure will be to obtain a broadcast license
ITU
The actual authorized frequency bands are defined by the ITU and the local regulating agencies like the US Federal Communications Commission (FCC) and voluntary best practices help avoid interference.
As a matter of convention, the ITU divides the radio spectrum into 12 bands, each beginning at a wavelength which is a power of ten (10^n) metres, with corresponding frequency of 3×10^(8−n) hertz, and each covering a decade of frequency or wavelength. Each of these bands has a traditional name. For example, the term high frequency (HF) designates the wavelength range from 100 to 10 metres, corresponding to a frequency range of 3 to 30 MHz. This is just a symbol and is not related to allocation; the ITU further divides each band into subbands allocated to different services. Above 300 GHz, the absorption of electromagnetic radiation by Earth's atmosphere is so great that the atmosphere is effectively opaque, until it becomes transparent again in the near-infrared and optical window frequency ranges.
These ITU radio bands are defined in the ITU Radio Regulations. Article 2, provision No. 2.1 states that "the radio spectrum shall be subdivided into nine frequency bands, which shall be designated by progressive whole numbers in accordance with the following table".
The table originated with a recommendation of the fourth CCIR meeting, held in Bucharest in 1937, and was approved by the International Radio Conference held at Atlantic City, NJ in 1947. The idea to give each band a number, in which the number is the logarithm of the approximate geometric mean of the upper and lower band limits in Hz, originated with B. C. Fleming-Williams, who suggested it in a letter to the editor of Wireless Engineer in 1942. For example, the approximate geometric mean of band 7 is 10 MHz, or 10^7 Hz.
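The numbering rule is simple enough to state directly in code; a small sketch (the helper name is illustrative), with band limits given in hertz, reproduces it:

```python
import math

def itu_band_number(f_low_hz: float, f_high_hz: float) -> int:
    """ITU band number: the exponent of the approximate geometric mean of the limits."""
    geometric_mean = math.sqrt(f_low_hz * f_high_hz)
    return round(math.log10(geometric_mean))

# Band 7 (HF) spans 3 MHz to 30 MHz; its geometric mean is about 9.5 MHz, i.e. 10^7 Hz.
print(itu_band_number(3e6, 30e6))    # 7
print(itu_band_number(30e6, 300e6))  # 8 (VHF)
```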
The band name "tremendously low frequency" (TLF) has been used for frequencies from 1–3 Hz (wavelengths from 300,000–100,000 km), but the term has not been defined by the ITU.
IEEE radar bands
Frequency bands in the microwave range are designated by letters. This convention began around World War II with military designations for frequencies used in radar, which was the first application of microwaves. There are several incompatible naming systems for microwave bands, and even within a given system the exact frequency range designated by a letter may vary somewhat between different application areas. One widely used standard is the IEEE radar bands established by the US Institute of Electrical and Electronics Engineers.
EU, NATO, US ECM frequency designations
Waveguide frequency bands
Comparison of radio band designation standards
The band name "tremendously low frequency" (TLF) has been used for frequencies from 1–3 Hz (wavelengths of 300,000–100,000 km), but the term has not been defined by the ITU.
Applications
Broadcasting
Broadcast frequencies:
Longwave AM Radio = 148.5 kHz – 283.5 kHz (LF)
Mediumwave AM Radio = 520 kHz – 1700 kHz (MF)
Shortwave AM Radio = 3 MHz – 30 MHz (HF)
Designations for television and FM radio broadcast frequencies vary between countries, see Television channel frequencies and FM broadcast band. Since VHF and UHF frequencies are desirable for many uses in urban areas, in North America some parts of the former television broadcasting band have been reassigned to cellular phone and various land mobile communications systems. Even within the allocation still dedicated to television, TV-band devices use channels without local broadcasters.
The Apex band in the United States was a pre-WWII allocation for VHF audio broadcasting; it was made obsolete after the introduction of FM broadcasting.
Air band
Airband refers to VHF frequencies 108 to 137 MHz, used for navigation and voice communication with aircraft. Trans-oceanic aircraft also carry HF radio and satellite transceivers.
Marine band
The greatest incentive for development of radio was the need to communicate with ships out of visual range of shore. From the very early days of radio, large oceangoing vessels carried powerful long-wave and medium-wave transmitters. High-frequency allocations are still designated for ships, although satellite systems have taken over some of the safety applications previously served by 500 kHz and other frequencies. 2182 kHz is a medium-wave frequency still used for marine emergency communication.
Marine VHF radio is used in coastal waters and relatively short-range communication between vessels and to shore stations. Radios are channelized, with different channels used for different purposes; marine Channel 16 is used for calling and emergencies.
Amateur radio frequencies
Amateur radio frequency allocations vary around the world. Several bands are common for amateurs worldwide, usually in the HF part of the spectrum. Other bands are national or regional allocations only due to differing allocations for other services, especially in the VHF and UHF parts of the radio spectrum.
Citizens' band and personal radio services
Citizens' band radio is allocated in many countries, using channelized radios in the upper HF part of the spectrum (around 27 MHz). It is used for personal, small business and hobby purposes. Other frequency allocations are used for similar services in different jurisdictions, for example UHF CB is allocated in Australia. A wide range of personal radio services exist around the world, usually emphasizing short-range communication between individuals or for small businesses, simplified license requirements or in some countries covered by a class license, and usually FM transceivers using around 1 watt or less.
Industrial, scientific, medical
The ISM bands were initially reserved for non-communications uses of RF energy, such as microwave ovens, radio-frequency heating, and similar purposes. However, in recent years the largest use of these bands has been by short-range low-power communications systems, since users do not have to hold a radio operator's license. Cordless telephones, wireless computer networks, Bluetooth devices, and garage door openers all use the ISM bands. ISM devices do not have regulatory protection against interference from other users of the band.
Land mobile bands
Bands of frequencies, especially in the VHF and UHF parts of the spectrum, are allocated for communication between fixed base stations and land mobile vehicle-mounted or portable transceivers. In the United States these services are informally known as business band radio. See also Professional mobile radio.
Police radio and other public safety services such as fire departments and ambulances are generally found in the VHF and UHF parts of the spectrum. Trunking systems are often used to make most efficient use of the limited number of frequencies available.
The demand for mobile telephone service has led to large blocks of radio spectrum allocated to cellular frequencies.
Radio control
Reliable radio control uses bands dedicated to the purpose. Radio-controlled toys may use portions of unlicensed spectrum in the 27 MHz or 49 MHz bands, but more costly aircraft, boat, or land vehicle models use dedicated radio control frequencies near 72 MHz to avoid interference by unlicensed uses. The 21st century has seen a move to 2.4 GHz spread spectrum RC control systems.
Licensed amateur radio operators use portions of the 6-meter band in North America. Industrial remote control of cranes or railway locomotives use assigned frequencies that vary by area.
Radar
Radar applications use relatively high power pulse transmitters and sensitive receivers, so radar is operated on bands not used for other purposes. Most radar bands are in the microwave part of the spectrum, although certain important applications for meteorology make use of powerful transmitters in the UHF band.
See also
Notes
References
ITU-R Recommendation V.431: Nomenclature of the frequency and wavelength bands used in telecommunications. International Telecommunication Union, Geneva.
IEEE Standard 521-2002: Standard Letter Designations for Radar-Frequency Bands
AFR 55-44/AR 105-86/OPNAVINST 3430.9A/MCO 3430.1, 27 October 1964 superseded by AFR 55-44/AR 105-86/OPNAVINST 3430.1A/MCO 3430.1A, 6 December 1978: Performing Electronic Countermeasures in the United States and Canada, Attachment 1,ECM Frequency Authorizations.
External links
UnwantedEmissions.com A reference to radio spectrum allocations.
"Radio spectrum: a vital resource in a wireless world" European Commission policy. | Radio spectrum | [
"Physics"
] | 2,787 | [
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum"
] |
634,264 | https://en.wikipedia.org/wiki/Catalogue%20of%20Galaxies%20and%20of%20Clusters%20of%20Galaxies | The Catalogue of Galaxies and of Clusters of Galaxies (or CGCG) was compiled by Fritz Zwicky in 1961–68. It contains 29,418 galaxies and 9,134 galaxy clusters.
Gallery
External links
Caltech library's free online PDFs of all six volumes of the Catalogue
References
Astronomical catalogues
Astronomical catalogues of galaxies
Astronomical catalogues of galaxy clusters | Catalogue of Galaxies and of Clusters of Galaxies | [
"Astronomy"
] | 77 | [
"Works about astronomy",
"Astronomy stubs",
"Astronomical catalogues",
"Astronomical catalogue stubs",
"Astronomical objects"
] |
634,266 | https://en.wikipedia.org/wiki/FR-4 | FR-4 (or FR4) is a NEMA grade designation for glass-reinforced epoxy laminate material. FR-4 is a composite material composed of woven fiberglass cloth with an epoxy resin binder that is flame resistant (self-extinguishing).
"FR" stands for "flame retardant", and does not denote that the material complies with the standard UL94V-0 unless testing is performed to UL 94, Vertical Flame testing in Section 8 at a compliant lab. The designation FR-4 was created by NEMA in 1968.
FR-4 glass epoxy is a popular and versatile high-pressure thermoset plastic laminate grade with good strength to weight ratios. With near zero water absorption, FR-4 is most commonly used as an electrical insulator possessing considerable mechanical strength. The material is known to retain its high mechanical values and electrical insulating qualities in both dry and humid conditions. These attributes, along with good fabrication characteristics, lend utility to this grade for a wide variety of electrical and mechanical applications.
Grade designations for glass epoxy laminates are: G-10, G-11, FR-4, FR-5 and FR-6. Of these, FR-4 is the grade most widely in use today. G-10, the predecessor to FR-4, lacks FR-4's self-extinguishing flammability characteristics. Hence, FR-4 has since replaced G-10 in most applications.
FR-4 epoxy resin systems typically employ bromine, a halogen, to facilitate flame-resistant properties in FR-4 glass epoxy laminates. Some applications, where thermal destruction of the material is a desirable trait, will still use non-flame-resistant G-10.
Properties
Which materials fall into the "FR-4" category is defined in the NEMA LI 1-1998 standard. Typical physical and electrical properties of FR-4 are as follows. The abbreviations LW (lengthwise, warp yarn direction) and CW (crosswise, fill yarn direction) refer to the conventional perpendicular fiber orientations in the XY plane of the board (in-plane). In terms of Cartesian coordinates, lengthwise is along the x-axis, crosswise is along the y-axis, and the z-axis is referred to as the through-plane direction. The values shown below are an example of a certain manufacturer's material. Another manufacturer's material will usually have slightly different values. Checking the actual values, for any particular material, from the manufacturer's datasheet, can be very important, for example in high frequency applications.
where:
LW Lengthwise
CW Crosswise
PF Perpendicular to laminate face
Applications
FR-4 is a common material for printed circuit boards (PCBs). A thin layer of copper foil is typically laminated to one or both sides of an FR-4 glass epoxy panel. These are commonly referred to as copper clad laminates. The copper thickness or copper weight can vary and so is specified separately.
FR-4 is also used in the construction of relays, switches, standoffs, busbars, washers, arc shields, transformers and screw terminal strips.
See also
FR-2
Polyimide
G-10 (material)
References
Further reading
Printed circuit board manufacturing
Fibre-reinforced polymers | FR-4 | [
"Engineering"
] | 692 | [
"Electrical engineering",
"Electronic engineering",
"Printed circuit board manufacturing"
] |
634,543 | https://en.wikipedia.org/wiki/Bendixson%E2%80%93Dulac%20theorem | In mathematics, the Bendixson–Dulac theorem on dynamical systems states that if there exists a function $\varphi(x, y)$ (called the Dulac function) such that the expression
$$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y}$$
has the same sign ($\neq 0$) almost everywhere in a simply connected region of the plane, then the plane autonomous system
$$\frac{dx}{dt} = f(x,y), \qquad \frac{dy}{dt} = g(x,y)$$
has no nonconstant periodic solutions lying entirely within the region. "Almost everywhere" means everywhere except possibly in a set of measure 0, such as a point or line.
The theorem was first established by Swedish mathematician Ivar Bendixson in 1901 and further refined by French mathematician Henri Dulac in 1923 using Green's theorem.
Proof
Without loss of generality, let there exist a function $\varphi$ such that
$$\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y} > 0$$
in simply connected region $R$. Let $\Gamma$ be a closed trajectory of the plane autonomous system in $R$. Let $D$ be the interior of $\Gamma$. Then by Green's theorem,
$$\iint_D \left(\frac{\partial (\varphi f)}{\partial x} + \frac{\partial (\varphi g)}{\partial y}\right) dx\, dy = \oint_\Gamma \varphi\left(f\, dy - g\, dx\right)$$
Because of the constant sign, the left-hand integral in the previous line must evaluate to a positive number. But on $\Gamma$, $dx = f\, dt$ and $dy = g\, dt$, so the integrand of the right-hand line integral is $\varphi(fg - gf)\, dt = 0$ everywhere, and for this reason the right-hand integral evaluates to 0. This is a contradiction, so there can be no such closed trajectory $\Gamma$.
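As an illustration, the criterion can be checked symbolically for an assumed example system, dx/dt = y, dy/dt = −x − y + x^2 + y^2, with the trial Dulac function φ = exp(−2x):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = y                        # dx/dt
g = -x - y + x**2 + y**2     # dy/dt
phi = sp.exp(-2*x)           # candidate Dulac function

# Dulac expression: d(phi*f)/dx + d(phi*g)/dy
expr = sp.simplify(sp.diff(phi*f, x) + sp.diff(phi*g, y))
print(expr)  # -exp(-2*x): strictly negative everywhere, so no closed orbits
```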
See also
Liouville's theorem (Hamiltonian), similar theorem with
References
Differential equations
Theorems in dynamical systems | Bendixson–Dulac theorem | [
"Mathematics"
] | 263 | [
"Theorems in dynamical systems",
"Mathematical objects",
"Differential equations",
"Equations",
"Mathematical problems",
"Mathematical theorems",
"Dynamical systems"
] |
2,287,401 | https://en.wikipedia.org/wiki/Crooked%20spire | A crooked spire (also known as a twisted spire) is a tower showing a twist and/or a deviation from the vertical. A church tower usually consists of a square stone tower topped with a pyramidal wooden structure; the spire is usually clad with slates or lead to protect the wood. Through accident or design the spire may contain a twist, or it may not point perfectly straight upwards. Some, however, have been built or rebuilt with a deliberate twist, generally as a design choice.
There are about a hundred bell towers of this type in Europe.
Reasons for spires to twist and bend
Twisting can be caused by internal or external forces. Internal conditions, such as the use of green or unseasoned wood, can cause twisting during the roughly 50 years it takes the timber to become fully seasoned. The weight of any lead used in construction can also cause the wood to twist. Dry wood will shrink, causing further movement.
External forces, such as water ingress that causes rot, can cause partial collapse, resulting in tilting. Heat from the sun on one side can also cause movement. Earthquakes have also occasionally caused twisting. Subsidence can cause leaning. Strong winds have been blamed at times, but there is little evidence to back this up. Finally, weak design can be at fault, for instance with a lack of cross-bracing, resulting in the ability of the tower to move.
One legend relating to Chesterfield says that a virgin once married in the church, and the church was so surprised that the spire turned around to look at the bride. Another version of the myth common in Chesterfield is that the devil twisted the spire when a virgin married in the church, saying that he would untwist it when the next virgin got married there. A third myth says that the devil perched on the spire and twisted his tail around it to hold on, the twist of his tail transmitting to the structure.
List of twisted spires
References
Towers | Crooked spire | [
"Engineering"
] | 387 | [
"Structural engineering",
"Towers"
] |
2,288,549 | https://en.wikipedia.org/wiki/Momentum%20operator | In quantum mechanics, the momentum operator is the operator associated with the linear momentum. The momentum operator is, in the position representation, an example of a differential operator. For the case of one particle in one spatial dimension, the definition is:
$$\hat{p} = -i\hbar \frac{\partial}{\partial x}$$
where $\hbar$ is the reduced Planck constant, $i$ the imaginary unit, $x$ is the spatial coordinate, and a partial derivative (denoted by $\partial/\partial x$) is used instead of a total derivative ($d/dx$) since the wave function is also a function of time. The "hat" indicates an operator. The "application" of the operator on a differentiable wave function is as follows:
$$\hat{p}\,\psi = -i\hbar \frac{\partial \psi}{\partial x}$$
In a basis of Hilbert space consisting of momentum eigenstates expressed in the momentum representation, the action of the operator is simply multiplication by $p$, i.e. it is a multiplication operator, just as the position operator is a multiplication operator in the position representation. Note that the definition above is the canonical momentum, which is not gauge invariant and not a measurable physical quantity for charged particles in an electromagnetic field. In that case, the canonical momentum is not equal to the kinetic momentum.
At the time quantum mechanics was developed in the 1920s, the momentum operator was found by many theoretical physicists, including Niels Bohr, Arnold Sommerfeld, Erwin Schrödinger, and Eugene Wigner. Its existence and form is sometimes taken as one of the foundational postulates of quantum mechanics.
Origin from de Broglie plane waves
The momentum and energy operators can be constructed in the following way.
One dimension
Starting in one dimension, using the plane wave solution to Schrödinger's equation of a single free particle,
$$\psi(x, t) = e^{i(px - Et)/\hbar},$$
where $p$ is interpreted as momentum in the $x$-direction and $E$ is the particle energy. The first order partial derivative with respect to space is
$$\frac{\partial \psi(x, t)}{\partial x} = \frac{ip}{\hbar}\, e^{i(px - Et)/\hbar} = \frac{ip}{\hbar}\, \psi.$$
This suggests the operator equivalence
$$\hat{p} = -i\hbar \frac{\partial}{\partial x},$$
so the momentum of the particle and the value that is measured when a particle is in a plane wave state is the (generalized) eigenvalue of the above operator.
Since the partial derivative is a linear operator, the momentum operator is also linear, and because any wave function can be expressed as a superposition of other states, when this momentum operator acts on the entire superimposed wave, it yields the momentum eigenvalues for each plane wave component. These new components then superimpose to form the new state, in general not a multiple of the old wave function.
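A short symbolic check, keeping p, E and ħ as free symbols, confirms that the plane wave above is an eigenfunction of this operator with eigenvalue p:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
p, E, hbar = sp.symbols('p E hbar', positive=True)

psi = sp.exp(sp.I * (p*x - E*t) / hbar)   # free-particle plane wave
p_psi = -sp.I * hbar * sp.diff(psi, x)    # apply the momentum operator

print(sp.simplify(p_psi / psi))           # p, the momentum eigenvalue
```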
Three dimensions
The derivation in three dimensions is the same, except the gradient operator del is used instead of one partial derivative. In three dimensions, the plane wave solution to Schrödinger's equation is:
$$\psi(\mathbf{r}, t) = e^{i(\mathbf{p}\cdot\mathbf{r} - Et)/\hbar}$$
and the gradient is
$$\nabla \psi = \mathbf{e}_x \frac{\partial \psi}{\partial x} + \mathbf{e}_y \frac{\partial \psi}{\partial y} + \mathbf{e}_z \frac{\partial \psi}{\partial z} = \frac{i}{\hbar}\left(p_x \mathbf{e}_x + p_y \mathbf{e}_y + p_z \mathbf{e}_z\right)\psi = \frac{i}{\hbar}\,\mathbf{p}\,\psi$$
where $\mathbf{e}_x$, $\mathbf{e}_y$, and $\mathbf{e}_z$ are the unit vectors for the three spatial dimensions, hence
$$\hat{\mathbf{p}} = -i\hbar\nabla$$
This momentum operator is in position space because the partial derivatives were taken with respect to the spatial variables.
Definition (position space)
For a single particle with no electric charge and no spin, the momentum operator can be written in the position basis as:
$$\hat{\mathbf{p}} = -i\hbar\nabla$$
where $\nabla$ is the gradient operator, $\hbar$ is the reduced Planck constant, and $i$ is the imaginary unit.
In one spatial dimension, this becomes
$$\hat{p} = \hat{p}_x = -i\hbar\frac{\partial}{\partial x}.$$
This is the expression for the canonical momentum. For a charged particle in an electromagnetic field, during a gauge transformation, the position space wave function undergoes a local U(1) group transformation, and will change its value. Therefore, the canonical momentum is not gauge invariant, and hence not a measurable physical quantity.
The kinetic momentum, a gauge invariant physical quantity, can be expressed in terms of the canonical momentum $\hat{\mathbf{p}}$, the particle charge $q$, and the vector potential $\mathbf{A}$:
$$\hat{\mathbf{P}} = \hat{\mathbf{p}} - q\mathbf{A} = -i\hbar\nabla - q\mathbf{A}$$
The expression above is called minimal coupling. For electrically neutral particles, the canonical momentum is equal to the kinetic momentum.
Properties
Hermiticity
The momentum operator can be described as a symmetric (i.e. Hermitian), unbounded operator acting on a dense subspace of the quantum state space. If the operator acts on a (normalizable) quantum state then the operator is self-adjoint. In physics the term Hermitian often refers to both symmetric and self-adjoint operators.
(In certain artificial situations, such as the quantum states on the semi-infinite interval $[0, \infty)$, there is no way to make the momentum operator Hermitian. This is closely related to the fact that a semi-infinite interval cannot have translational symmetry—more specifically, it does not have unitary translation operators. See below.)
Canonical commutation relation
By applying the commutator to an arbitrary state in either the position or momentum basis, one can easily show that:
$$[\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar\,\hat{I},$$
where $\hat{I}$ is the unit operator.
The Heisenberg uncertainty principle defines limits on how accurately the momentum and position of a single observable system can be known at once. In quantum mechanics, position and momentum are conjugate variables.
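A quick symbolic check, applying both orderings of the operators to an arbitrary test function f(x), recovers this relation:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
f = sp.Function('f')(x)                     # arbitrary differentiable test function

p = lambda g: -sp.I * hbar * sp.diff(g, x)  # momentum operator in position space
commutator_on_f = x * p(f) - p(x * f)       # [x, p] applied to f

print(sp.simplify(commutator_on_f))         # I*hbar*f(x)
```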
Fourier transform
The following discussion uses the bra–ket notation. One may write
$$\psi(x) = \langle x|\psi\rangle = \int \langle x|p\rangle\langle p|\psi\rangle\, dp = \int \frac{e^{ipx/\hbar}}{\sqrt{2\pi\hbar}}\,\tilde{\psi}(p)\, dp,$$
so the tilde represents the Fourier transform, in converting from coordinate space to momentum space. It then holds that
$$\hat{p} = \int |x\rangle \left(-i\hbar\frac{\partial}{\partial x}\right) \langle x|\, dx,$$
that is, the momentum acting in coordinate space corresponds to spatial frequency,
$$\langle x|\hat{p}|\psi\rangle = -i\hbar\frac{\partial}{\partial x}\psi(x).$$
An analogous result applies for the position operator in the momentum basis,
$$\langle p|\hat{x}|\psi\rangle = i\hbar\frac{\partial}{\partial p}\tilde{\psi}(p),$$
leading to further useful relations,
$$\langle p|\hat{x}|p'\rangle = i\hbar\frac{\partial}{\partial p}\delta(p - p'),$$
$$\langle x|\hat{p}|x'\rangle = -i\hbar\frac{\partial}{\partial x}\delta(x - x'),$$
where $\delta$ stands for Dirac's delta function.
Derivation from infinitesimal translations
The translation operator is denoted $T(\varepsilon)$, where $\varepsilon$ represents the length of the translation. It satisfies the following identity:
$$T(\varepsilon)|\psi\rangle = \int dx\, T(\varepsilon)|x\rangle\langle x|\psi\rangle = \int dx\, |x+\varepsilon\rangle\langle x|\psi\rangle$$
that becomes
$$\int dx\, |x\rangle\langle x - \varepsilon|\psi\rangle = \int dx\, |x\rangle\,\psi(x - \varepsilon)$$
Assuming the function $\psi$ to be analytic (i.e. differentiable in some domain of the complex plane), one may expand in a Taylor series about $x$:
$$\psi(x - \varepsilon) = \psi(x) - \varepsilon\frac{d\psi}{dx} + \cdots$$
so for infinitesimal values of $\varepsilon$:
$$T(\varepsilon) = 1 - \varepsilon\frac{d}{dx} = 1 - \frac{i}{\hbar}\varepsilon\left(-i\hbar\frac{d}{dx}\right)$$
As it is known from classical mechanics, the momentum is the generator of translation, so the relation between translation and momentum operators is:
$$T(\varepsilon) = 1 - \frac{i}{\hbar}\varepsilon\,\hat{p}$$
thus
$$\hat{p} = -i\hbar\frac{d}{dx}.$$
4-momentum operator
Inserting the 3d momentum operator above and the energy operator into the 4-momentum (as a 1-form with the $(+,-,-,-)$ metric signature):
$$P_\mu = \left(\frac{E}{c}, -\mathbf{p}\right)$$
obtains the 4-momentum operator:
$$\hat{P}_\mu = \left(\frac{\hat{E}}{c}, -\hat{\mathbf{p}}\right) = i\hbar\,\partial_\mu$$
where $\partial_\mu$ is the 4-gradient, and the $-i\hbar$ becomes $+i\hbar$ preceding the 3-momentum operator. This operator occurs in relativistic quantum field theory, such as the Dirac equation and other relativistic wave equations, since energy and momentum combine into the 4-momentum vector above, momentum and energy operators correspond to space and time derivatives, and they need to be first order partial derivatives for Lorentz covariance.
The Dirac operator and Dirac slash of the 4-momentum is given by contracting with the gamma matrices:
$$\gamma^\mu \hat{P}_\mu = i\hbar\,\gamma^\mu\partial_\mu$$
If the signature was $(-,+,+,+)$, the operator would be
$$\hat{P}_\mu = -i\hbar\,\partial_\mu$$
instead.
See also
Mathematical descriptions of the electromagnetic field
Translation operator (quantum mechanics)
Relativistic wave equations
Pauli–Lubanski pseudovector
References
Quantum mechanics | Momentum operator | [
"Physics"
] | 1,309 | [
"Quantum operators",
"Quantum mechanics"
] |
2,288,927 | https://en.wikipedia.org/wiki/Negative%20thermal%20expansion | Negative thermal expansion (NTE) is an unusual physicochemical process in which some materials contract upon heating, rather than expand as most other materials do. The most well-known material with NTE is water at 0 to 3.98 °C. Also, the density of solid water (ice) is lower than the density of liquid water at standard pressure. Water's NTE is the reason why water ice floats, rather than sinks, in liquid water. Materials which undergo NTE have a range of potential engineering, photonic, electronic, and structural applications. For example, if one were to mix a negative thermal expansion material with a "normal" material which expands on heating, it could be possible to use it as a thermal expansion compensator that might allow for forming composites with tailored or even close to zero thermal expansion.
Origin of negative thermal expansion
There are a number of physical processes which may cause contraction with increasing temperature, including transverse vibrational modes, rigid unit modes and phase transitions.
In 2011, Liu et al. showed that the NTE phenomenon originates from the existence of high-pressure, small-volume configurations with higher entropy, with their configurations present in the stable phase matrix through thermal fluctuations. They were able to predict both the colossal positive thermal expansion (in cerium) and zero and infinite negative thermal expansion (in ).
Alternatively, large negative and positive thermal expansion may result from the design of internal microstructure.
Negative thermal expansion in close-packed systems
Negative thermal expansion is usually observed in non-close-packed systems with directional interactions (e.g. ice, graphene, etc.) and complex compounds (e.g. beta-quartz, some zeolites, etc.). However, in a paper, it was shown that negative thermal expansion (NTE) is also realized in single-component close-packed lattices with pair central force interactions. A sufficient condition for a pair potential to give rise to NTE behavior has been proposed in terms of the third derivative of the interatomic potential at the equilibrium distance.
This condition is (i) necessary and sufficient in 1D and (ii) sufficient, but not necessary, in 2D and 3D. An approximate necessary and sufficient condition, in which the space dimensionality enters explicitly, is derived in a paper. Thus in 2D and 3D negative thermal expansion in close-packed systems with pair interactions is realized even when the third derivative of the potential is zero or even negative. Note that the one-dimensional and multidimensional cases are qualitatively different. In 1D thermal expansion is caused by anharmonicity of the interatomic potential only; therefore, the sign of the thermal expansion coefficient is determined by the sign of the third derivative of the potential. In the multidimensional case the geometrical nonlinearity is also present, i.e. lattice vibrations are nonlinear even in the case of a harmonic interatomic potential. This nonlinearity contributes to thermal expansion. Therefore, in the multidimensional case both the anharmonicity of the potential and the geometrical nonlinearity enter the condition for negative thermal expansion.
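The 1D sign criterion stated above can be checked numerically for a given pair potential. The sketch below is only an illustration: the Lennard-Jones potential, its parameters, and the finite-difference step are assumed choices, not taken from the sources discussed here.

```python
import numpy as np

def lennard_jones(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair potential (example potential, an assumption)."""
    return 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def third_derivative(f, r, h=1e-3):
    """Central finite-difference estimate of f'''(r)."""
    return (f(r + 2 * h) - 2 * f(r + h) + 2 * f(r - h) - f(r - 2 * h)) / (2 * h ** 3)

# Equilibrium distance of the 12-6 potential: r0 = 2**(1/6) * sigma
r0 = 2.0 ** (1.0 / 6.0)
phi3 = third_derivative(lennard_jones, r0)

# Per the text, in 1D the sign of the thermal expansion coefficient is set by the
# sign of the third derivative of the potential at the equilibrium point.
verdict = "NTE possible in 1D" if phi3 > 0 else "ordinary (positive) expansion in 1D"
print(f"phi'''(r0) = {phi3:.2f} -> {verdict}")
```

For the Lennard-Jones example the third derivative at equilibrium is negative, so the sketch reports ordinary positive expansion, consistent with the 1D criterion described above.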
Materials
Perhaps one of the most studied materials to exhibit negative thermal expansion is zirconium tungstate (ZrW2O8). This compound contracts continuously over a temperature range of 0.3 to 1050 K (at higher temperatures the material decomposes). Other materials that exhibit NTE behaviour include other members of the AM2O8 family of materials (where A = Zr or Hf, M = Mo or W), together with related framework oxides, some of which show NTE only in their high temperature phase starting at 350 to 400 K. Controllable negative thermal expansion has also been demonstrated in related compounds. Cubic NTE materials, such as cubic zirconium tungstate, are especially precious for applications in engineering because they exhibit isotropic NTE, i.e. the NTE is the same in all three dimensions, making it easier to apply them as thermal expansion compensators.
Ordinary ice shows NTE in its hexagonal and cubic phases at very low temperatures (below –200 °C). In its liquid form, pure water also displays negative thermal expansivity below 3.984 °C.
ALLVAR Alloy 30, a titanium-based alloy, shows NTE over a wide temperature range, with a -30 ppm/°C instantaneous coefficient of thermal expansion at 20 °C. ALLVAR Alloy 30's negative thermal expansion is anisotropic. This commercially available material is used in the optics, aerospace, and cryogenics industries in the form of optical spacers that prevent thermal defocus, ultra-stable struts, and washers for thermally-stable bolted joints.
Carbon fibers show NTE between 20 °C and 500 °C. This property is utilized in tight-tolerance aerospace applications to tailor the CTE of carbon fiber reinforced plastic components for specific applications/conditions, by adjusting the ratio of carbon fiber to plastic and by adjusting the orientation of the carbon fibers within the part.
Quartz () and a number of zeolites also show NTE over certain temperature ranges. Fairly pure silicon (Si) has a negative coefficient of thermal expansion for temperatures between about 18 K and 120 K.
Cubic scandium trifluoride has this property, which is explained by the quartic oscillation of the fluoride ions. The energy stored in the bending strain of the fluoride ion is proportional to the fourth power of the displacement angle, unlike most other materials where it is proportional to the square of the displacement. A fluorine atom is bound to two scandium atoms, and as temperature increases the fluorine oscillates more perpendicularly to its bonds. This draws the scandium atoms together throughout the material and it contracts. ScF3 exhibits this property from 10 to 1100 K, above which it shows the normal positive thermal expansion. Shape memory alloys such as NiTi are a nascent class of materials that exhibit zero and negative thermal expansion.
Applications
Forming a composite of a material with (ordinary) positive thermal expansion with a material with (anomalous) negative thermal expansion could allow for tailoring the thermal expansion of the composites or even having composites with a thermal expansion close to zero. Negative and positive thermal expansion hereby compensate each other to a certain amount if the temperature is changed. Tailoring the overall thermal expansion coefficient (CTE) to a certain value can be achieved by varying the volume fractions of the different materials contributing to the thermal expansion of the composite.
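As a rough illustration of such tailoring, the sketch below uses a simple volume-fraction rule of mixtures. This is an assumed first-order estimate rather than an established design formula (it ignores the stiffness of each phase), and the numerical CTE values are made up.

```python
def composite_cte(alpha_matrix, alpha_filler, filler_fraction):
    """Volume-fraction-weighted estimate of a composite's CTE (simple rule of mixtures;
    real composites also depend on the phases' stiffnesses, which is ignored here)."""
    return (1.0 - filler_fraction) * alpha_matrix + filler_fraction * alpha_filler

# Hypothetical values: an ordinary matrix with +20 ppm/K and an NTE filler with -9 ppm/K.
alpha_m, alpha_f = 20e-6, -9e-6

# Filler fraction that zeroes the estimated CTE: solve (1 - x)*alpha_m + x*alpha_f = 0.
x_zero = alpha_m / (alpha_m - alpha_f)
print(f"filler fraction for ~zero CTE: {x_zero:.2f}")
print(f"estimated CTE at that fraction: {composite_cte(alpha_m, alpha_f, x_zero):.2e} 1/K")
```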
Especially in engineering there is a need for materials with a CTE close to zero, i.e. with constant performance over a large temperature range, e.g. for application in precision instruments. But also in everyday life materials with a CTE close to zero are required. Glass-ceramic cooktops like Ceran cooktops need to withstand large temperature gradients and rapid changes in temperature while cooking because only certain parts of the cooktops will be heated while other parts stay close to ambient temperature. In general, due to its brittleness, temperature gradients in glass might cause cracks. However, the glass-ceramics used in cooktops consist of multiple different phases, some exhibiting positive and some others exhibiting negative thermal expansion. The expansions of the different phases compensate each other so that there is little change in volume of the glass-ceramic with temperature and crack formation is avoided.
An everyday life example for the need for materials with tailored thermal expansion are dental fillings. If the fillings tend to expand by an amount different from the teeth, for example when drinking a hot or cold drink, it might cause a toothache. If dental fillings are, however, made of a composite material containing a mixture of materials with positive and negative thermal expansion then the overall expansion could be precisely tailored to that of tooth enamel.
References
Further reading
Physical chemistry
Thermodynamics
Materials science | Negative thermal expansion | [
"Physics",
"Chemistry",
"Materials_science",
"Mathematics",
"Engineering"
] | 1,575 | [
"Applied and interdisciplinary physics",
"Materials science",
"Thermodynamics",
"nan",
"Physical chemistry",
"Dynamical systems"
] |
2,289,369 | https://en.wikipedia.org/wiki/Logarithmic%20mean%20temperature%20difference | In thermal engineering, the logarithmic mean temperature difference (LMTD) is used to determine the temperature driving force for heat transfer in flow systems, most notably in heat exchangers. The LMTD is a logarithmic average of the temperature difference between the hot and cold feeds at each end of the double pipe exchanger. For a given heat exchanger with constant area and heat transfer coefficient, the larger the LMTD, the more heat is transferred. The use of the LMTD arises straightforwardly from the analysis of a heat exchanger with constant flow rate and fluid thermal properties.
Definition
We assume that a generic heat exchanger has two ends (which we call "A" and "B") at which the hot and cold streams enter or exit on either side; then, the LMTD is defined by the logarithmic mean as follows:
LMTD = (ΔTA − ΔTB) / ln(ΔTA / ΔTB),
where ΔTA is the temperature difference between the two streams at end A, and ΔTB is the temperature difference between the two streams at end B. When the two temperature differences are equal, this formula does not directly resolve, so the LMTD is conventionally taken to equal its limit value, which is in this case trivially equal to the two differences.
With this definition, the LMTD can be used to find the exchanged heat in a heat exchanger: Q = U × Ar × LMTD,
where (in SI units):
Q is the exchanged heat duty (watts),
U is the heat transfer coefficient (watts per kelvin per square meter),
Ar is the exchange area.
Note that estimating the heat transfer coefficient may be quite complicated.
This holds both for cocurrent flow, where the streams enter from the same end, and for countercurrent flow, where they enter from different ends.
In a cross-flow, in which one system, usually the heat sink, has the same nominal temperature at all points on the heat transfer surface, a similar relation between exchanged heat and LMTD holds, but with a correction factor. A correction factor is also required for other more complex geometries, such as a shell and tube exchanger with baffles.
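A minimal numerical sketch of the definition above follows; the stream temperatures, U, and area are invented example values, and cross-flow correction factors are not included.

```python
import math

def lmtd(dT_a, dT_b):
    """Logarithmic mean of the end temperature differences dT_a and dT_b (both > 0).
    When the two differences are equal, the log-mean limit is simply that common value."""
    if dT_a <= 0 or dT_b <= 0:
        raise ValueError("Both end temperature differences must be positive.")
    if math.isclose(dT_a, dT_b):
        return dT_a
    return (dT_a - dT_b) / math.log(dT_a / dT_b)

def heat_duty(U, area, dT_a, dT_b):
    """Exchanged duty Q = U * area * LMTD (watts, with U in W/(m^2*K) and area in m^2)."""
    return U * area * lmtd(dT_a, dT_b)

# Hypothetical counter-current example: hot stream 80 -> 50 degC, cold stream 20 -> 40 degC,
# so the end temperature differences are 80 - 40 = 40 K and 50 - 20 = 30 K.
print(f"LMTD = {lmtd(40.0, 30.0):.2f} K")
print(f"Q = {heat_duty(500.0, 10.0, 40.0, 30.0):.0f} W")
```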
Derivation
Assume heat transfer is occurring in a heat exchanger along an axis z, from generic coordinate A to B, between two fluids, identified as 1 and 2, whose temperatures along z are T1(z) and T2(z).
The local exchanged heat flux at z is proportional to the temperature difference ΔT(z) = T1(z) − T2(z):
q(z) = U ΔT(z) = U (T1(z) − T2(z)).
The heat that leaves the fluids causes a temperature gradient according to Fourier's law:
dT1/dz = −k1 q(z) and dT2/dz = k2 q(z),
where k1 and k2 are the thermal conductivities of the intervening material at points 1 and 2 respectively. Summed together, this becomes
d(ΔT)/dz = −(k1 + k2) q(z) = −K q(z),
where K = k1 + k2.
The total exchanged energy is found by integrating the local heat transfer q(z) from A to B:
Q = D ∫ q(z) dz = U D ∫ ΔT(z) dz, with both integrals taken from A to B.
Notice that B − A is clearly the pipe length, which is distance along z, and D is the circumference. Multiplying those gives the heat exchanger area Ar of the pipe, and use this fact:
Ar = D (B − A) = D ∫ dz.
In both integrals, make a change of variables from z to ΔT:
dz = −d(ΔT) / (K U ΔT).
With the relation for ΔT (the equation above), this becomes
Q = −(D/K) ∫ d(ΔT) and Ar = −(D/(K U)) ∫ d(ΔT)/ΔT, taken between ΔTA and ΔTB.
Integration at this point is trivial, and finally gives
Q = (D/K) (ΔTA − ΔTB) and Ar = (D/(K U)) ln(ΔTA/ΔTB), hence Q = U Ar (ΔTA − ΔTB) / ln(ΔTA/ΔTB),
from which the definition of LMTD follows.
Assumptions and limitations
It has been assumed that the rate of change for the temperature of both fluids is proportional to the temperature difference; this assumption is valid for fluids with a constant specific heat, which is a good description of fluids changing temperature over a relatively small range. However, if the specific heat changes, the LMTD approach will no longer be accurate.
Particular cases for the LMTD are condensers and reboilers, where the latent heat associated with phase change is a special case of the hypothesis. For a condenser, the hot fluid inlet temperature is then equivalent to the hot fluid exit temperature.
It has also been assumed that the heat transfer coefficient (U) is constant, and not a function of temperature. If this is not the case, the LMTD approach will again be less valid.
The LMTD is a steady-state concept, and cannot be used in dynamic analyses. In particular, if the LMTD were to be applied on a transient in which, for a brief time, the temperature difference had different signs on the two sides of the exchanger, the argument to the logarithm function would be negative, which is not allowable.
No phase change during heat transfer
Changes in kinetic energy and potential energy are neglected
Logarithmic Mean Pressure Difference
A related quantity, the logarithmic mean pressure difference or LMPD, is often used in mass transfer for stagnant solvents with dilute solutes to simplify the bulk flow problem.
References
Kay J M & Nedderman R M (1985) Fluid Mechanics and Transfer Processes, Cambridge University Press
Heat transfer | Logarithmic mean temperature difference | [
"Physics",
"Chemistry"
] | 948 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Thermodynamics"
] |
2,289,914 | https://en.wikipedia.org/wiki/Fr%C3%A9my%27s%20salt | Frémy's salt is a chemical compound with the formula (K4[ON(SO3)2]2), sometimes written as (K2[NO(SO3)2]). It is a bright yellowish-brown solid, but its aqueous solutions are bright violet. The related sodium salt, disodium nitrosodisulfonate (NDS, Na2ON(SO3)2, CAS 29554-37-8) is also referred to as Frémy's salt.
Regardless of the cations, the salts are distinctive because aqueous solutions contain the radical [ON(SO3)2]2−.
Applications
Frémy's salt, being a long-lived free radical, is used as a standard in electron paramagnetic resonance (EPR) spectroscopy, e.g. for quantitation of radicals. Its intense EPR spectrum is dominated by three lines of equal intensity with a spacing of about 13 G (1.3 mT).
The inorganic aminoxyl group is a persistent radical, akin to TEMPO.
It has been used in some oxidation reactions, such as for oxidation of some anilines and phenols allowing polymerization and cross-linking of peptides and peptide-based hydrogels.
It can also be used as a model for peroxyl radicals in studies that examine the antioxidant mechanism of action in a wide range of natural products.
Preparation
Frémy's salt is prepared from hydroxylaminedisulfonic acid. Oxidation of the conjugate base gives the purple dianion:
HON(SO3H)2 → [HON(SO3)2]2− + 2 H+
2 [HON(SO3)2]2− + PbO2 → 2 [ON(SO3)2]2− + PbO + H2O
The synthesis can be performed by combining nitrite and bisulfite to give the hydroxylaminedisulfonate. Oxidation is typically conducted at low-temperature, either chemically or by electrolysis.
Other reactions:
HNO2 + 2 HSO3− → [HON(SO3)2]2− + H2O
3 [HON(SO3)2]2− + MnO4− + H+ → 3 [ON(SO3)2]2− + MnO2 + 2 H2O
2 [ON(SO3)2]2− + 4 K+ → K4[ON(SO3)2]2
History
Frémy's salt was discovered in 1845 by Edmond Frémy (1814–1894). Its use in organic synthesis was popularized by Hans Teuber, such that an oxidation using this salt is called the Teuber reaction.
References
Further reading
Free radicals
Oxidizing agents
Sodium compounds
Potassium compounds
Nitrogen–oxygen compounds
Reagents for organic chemistry | Frémy's salt | [
"Chemistry",
"Biology"
] | 543 | [
"Redox",
"Free radicals",
"Oxidizing agents",
"Senescence",
"Biomolecules",
"Reagents for organic chemistry"
] |
5,712,189 | https://en.wikipedia.org/wiki/Fructose%201%2C6-bisphosphate | Fructose 1,6-bisphosphate, known in older publications as Harden-Young ester, is fructose sugar phosphorylated on carbons 1 and 6 (i.e., is a fructosephosphate). The β-D-form of this compound is common in cells. Upon entering the cell, most glucose and fructose is converted to fructose 1,6-bisphosphate.
In glycolysis
Fructose 1,6-bisphosphate lies within the glycolysis metabolic pathway and is produced by phosphorylation of fructose 6-phosphate. It is, in turn, broken down into two compounds: glyceraldehyde 3-phosphate and dihydroxyacetone phosphate. It is an allosteric activator of pyruvate kinase through distinct interactions of binding and allostery at the enzyme's catalytic site.
The numbering of the carbon atoms indicates the fate of the carbons according to their position in fructose 6-phosphate.
Isomerism
Fructose 1,6-bisphosphate has only one biologically active isomer, the β-D-form. There are many other isomers, analogous to those of fructose.
Iron chelation
Fructose 1,6-bis(phosphate) has also been implicated in the ability to bind and sequester Fe(II), a soluble form of iron whose oxidation to the insoluble Fe(III) is capable of generating reactive oxygen species via Fenton chemistry. The ability of fructose 1,6-bis(phosphate) to bind Fe(II) may prevent such electron transfers, and thus act as an antioxidant within the body. Certain neurodegenerative diseases, like Alzheimer's and Parkinson's, have been linked to metal deposits with high iron content, although it is uncertain whether Fenton chemistry plays a substantial role in these diseases, or whether fructose 1,6-bis(phosphate) is capable of mitigating those effects.
See also
Fructose 2,6-bisphosphate
References
External links
Monosaccharide derivatives
Organophosphates
Glycolysis | Fructose 1,6-bisphosphate | [
"Chemistry"
] | 465 | [
"Carbohydrate metabolism",
"Glycolysis"
] |
5,712,506 | https://en.wikipedia.org/wiki/Phosphoglycerate%20kinase | Phosphoglycerate kinase (EC 2.7.2.3) (PGK 1) is an enzyme that catalyzes the reversible transfer of a phosphate group from 1,3-bisphosphoglycerate (1,3-BPG) to ADP producing 3-phosphoglycerate (3-PG) and ATP:
1,3-bisphosphoglycerate + ADP ⇌ glycerate 3-phosphate + ATP
Like all kinases it is a transferase. PGK is a major enzyme used in glycolysis, in the first ATP-generating step of the glycolytic pathway. In gluconeogenesis, the reaction catalyzed by PGK proceeds in the opposite direction, generating ADP and 1,3-BPG.
In humans, two isozymes of PGK have been so far identified, PGK1 and PGK2. The isozymes share 87–88% amino acid sequence identity and, though they are structurally and functionally similar, they have different localizations: PGK2, encoded by an autosomal gene, is unique to meiotic and postmeiotic spermatogenic cells, while PGK1, encoded on the X-chromosome, is ubiquitously expressed in all cells.
Biological function
PGK is present in all living organisms as one of the two ATP-generating enzymes in glycolysis. In the gluconeogenic pathway, PGK catalyzes the reverse reaction. Under biochemical standard conditions, the glycolytic direction is favored.
In the Calvin cycle in photosynthetic organisms, PGK catalyzes the phosphorylation of 3-PG, producing 1,3-BPG and ADP, as part of the reactions that regenerate ribulose-1,5-bisphosphate.
PGK has been reported to exhibit thiol reductase activity on plasmin, leading to angiostatin formation, which inhibits angiogenesis and tumor growth. The enzyme was also shown to participate in DNA replication and repair in mammal cell nuclei.
The human isozyme PGK2, which is only expressed during spermatogenesis, was shown to be essential for sperm function in mice.
Interactive pathway map
Structure
Overview
PGK is found in all living organisms and its sequence has been highly conserved throughout evolution. The enzyme exists as a 415-residue monomer containing two nearly equal-sized domains that correspond to the N- and C-termini of the protein. 3-phosphoglycerate (3-PG) binds to the N-terminal, while the nucleotide substrates, MgATP or MgADP, bind to the C-terminal domain of the enzyme. This extended two-domain structure is associated with large-scale 'hinge-bending' conformational changes, similar to those found in hexokinase. The two domains of the protein are separated by a cleft and linked by two alpha-helices. At the core of each domain is a 6-stranded parallel beta-sheet surrounded by alpha helices. The two lobes are capable of folding independently, consistent with the presence of intermediates on the folding pathway with a single domain folded. Though the binding of either substrate triggers a conformational change, only through the binding of both substrates does domain closure occur, leading to the transfer of the phosphate group.
The enzyme has a tendency to exist in the open conformation with short periods of closure and catalysis, which allow for rapid diffusion of substrate and products through the binding sites; the open conformation of PGK is more conformationally stable due to the exposure of a hydrophobic region of the protein upon domain closure.
Role of magnesium
Magnesium ions are normally complexed to the phosphate groups of the nucleotide substrates of PGK. It is known that in the absence of magnesium, no enzyme activity occurs. The bivalent metal assists the enzyme ligands in shielding the bound phosphate group's negative charges, allowing the nucleophilic attack to occur; this charge-stabilization is a typical characteristic of phosphotransfer reactions. It is theorized that the ion may also encourage domain closure when PGK has bound both substrates.
Mechanism
Without either substrate bound, PGK exists in an "open" conformation. After both the triose and nucleotide substrates are bound to the N- and C-terminal domains, respectively, an extensive hinge-bending motion occurs, bringing the domains and their bound substrates into close proximity and leading to a "closed" conformation. Then, in the case of the forward glycolytic reaction, the beta-phosphate of ADP initiates a nucleophilic attack on the 1-phosphate of 1,3-BPG. The Lys219 on the enzyme guides the phosphate group to the substrate.
PGK proceeds through a charge-stabilized transition state that is favored over the arrangement of the bound substrate in the closed enzyme because in the transition state, all three phosphate oxygens are stabilized by ligands, as opposed to only two stabilized oxygens in the initial bound state.
In the glycolytic pathway, 1,3-BPG is the phosphate donor and has a high phosphoryl-transfer potential. The PGK-catalyzed transfer of the phosphate group from 1,3-BPG to ADP to yield ATP can drive the carbon-oxidation reaction of the previous glycolytic step (converting glyceraldehyde 3-phosphate to 3-phosphoglycerate).
Regulation
The enzyme is activated by low concentrations of various multivalent anions, such as pyrophosphate, sulfate, phosphate, and citrate. High concentrations of MgATP and 3-PG activate PGK, while Mg2+ at high concentrations non-competitively inhibits the enzyme.
PGK exhibits a wide specificity toward nucleotide substrates. Its activity is inhibited by salicylates, which appear to mimic the enzyme's nucleotide substrate.
Macromolecular crowding has been shown to increase PGK activity in both computer simulations and in vitro environments simulating a cell interior; as a result of crowding, the enzyme becomes more enzymatically active and more compact.
Disease relevance
Phosphoglycerate kinase (PGK) deficiency is an X-linked recessive trait associated with hemolytic anemia, mental disorders and myopathy in humans, depending on form – there exists a hemolytic form and a myopathic form. Since the trait is X-linked, it is usually fully expressed in males, who have one X chromosome; affected females are typically asymptomatic. The condition results from mutations in Pgk1, the gene encoding PGK1, and twenty mutations have been identified. On a molecular level, the mutation in Pgk1 impairs the thermal stability and inhibits the catalytic activity of the enzyme. PGK is the only enzyme in the immediate glycolytic pathway encoded by an X-linked gene. In the case of hemolytic anemia, PGK deficiency occurs in the erythrocytes. Currently, no definitive treatment exists for PGK deficiency.
PGK1 overexpression has been associated with gastric cancer and has been found to increase the invasiveness of gastric cancer cells in vitro. The enzyme is secreted by tumor cells and participates in the angiogenic process, leading to the release of angiostatin and the inhibition of tumor blood vessel growth.
Due to its wide specificity towards nucleotide substrates, PGK is known to participate in the phosphorylation and activation of HIV antiretroviral drugs, which are nucleotide-based.
Human isozymes
References
External links
Illustration at arizona.edu
EC 2.7.2
Glycolysis enzymes
Glycolysis | Phosphoglycerate kinase | [
"Chemistry"
] | 1,636 | [
"Carbohydrate metabolism",
"Glycolysis"
] |
5,713,217 | https://en.wikipedia.org/wiki/Transdermal%20spray | A metered-dose transdermal spray (MDTS) delivers a drug to the surface of the skin and is absorbed into the circulation on a sustained basis. It works in a similar manner to a transdermal patch or topical gel. The drug is delivered by a device placed gently against the skin and triggered, causing it to release a light spray containing a proprietary formulation of the drug that quickly dries on the skin to form an invisible drug depot. As it would be from a patch, the drug is then absorbed steadily for a predetermined amount of time.
References
Drug delivery devices
Dosage forms | Transdermal spray | [
"Chemistry"
] | 125 | [
"Pharmacology",
"Drug delivery devices"
] |
5,716,217 | https://en.wikipedia.org/wiki/Greigite | Greigite is an iron sulfide mineral with the chemical formula Fe3S4. It is the sulfur equivalent of the iron oxide magnetite (Fe3O4). It was first described in 1964 for an occurrence in San Bernardino County, California, and named after the mineralogist and physical chemist Joseph W. Greig (1895–1977).
Natural occurrence and composition
It occurs in lacustrine sediments with clays, silts and arkosic sand often in varved sulfide rich clays. It is also found in hydrothermal veins. Greigite is formed by magnetotactic bacteria and sulfate-reducing bacteria. Greigite has also been identified in the sclerites of scaly-foot gastropods.
The mineral typically appears as microscopic (< 0.03 mm) isometric hexoctahedral crystals and as minute sooty masses. Association minerals include montmorillonite, chlorite, calcite, colemanite, veatchite, sphalerite, pyrite, marcasite, galena and dolomite.
Common impurities include Cu, Ni, Zn, Mn, Cr, Sb and As. Ni impurities are of particular interest because the structural similarity between Ni-doped greigite and the clusters present in biological enzymes has led to suggestions that greigite or similar minerals could have acted as catalysts for the origin of life. In particular, the cubic Fe4S4 unit of greigite is found in the Fe4S4 thiocubane units of proteins of relevance to the acetyl-CoA pathway.
Crystal structure
Greigite has the spinel structure. The crystallographic unit cell is cubic, with space group Fd3m. The S anions form a cubic close-packed lattice, and the Fe cations occupy both tetrahedral and octahedral sites.
Magnetic and electronic properties
Like the related oxide magnetite (Fe3O4), greigite is ferrimagnetic, with the spin magnetic moments of the Fe cations in the tetrahedral sites oriented in the opposite direction as those in the octahedral sites, and a net magnetization. It is a mixed-valence compound, featuring both Fe(II) and Fe(III) centers in a 1:2 ratio. Both metal sites have high spin quantum numbers. The electronic structure of greigite is that of a half metal.
References
Thiospinel group
Iron(II,III) minerals
Ferromagnetic materials
Magnetic minerals
Cubic minerals
Minerals in space group 227 | Greigite | [
"Physics"
] | 535 | [
"Materials",
"Ferromagnetic materials",
"Matter"
] |
5,717,580 | https://en.wikipedia.org/wiki/Active%20appearance%20model | An active appearance model (AAM) is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image. They are built during a training phase. A set of images, together with coordinates of landmarks that appear in all of the images, is provided to the training supervisor.
The model was first introduced by Edwards, Cootes and Taylor in the context of face analysis at the 3rd International Conference on Face and Gesture Recognition, 1998. Cootes, Edwards and Taylor further described the approach as a general method in computer vision at the European Conference on Computer Vision in the same year. The approach is widely used for matching and tracking faces and for medical image interpretation.
The algorithm uses the difference between the current estimate of appearance and the target image to drive an optimization process.
By taking advantage of the least squares techniques, it can match to new images very swiftly.
It is related to the active shape model (ASM). One disadvantage of ASM is that it only uses shape constraints (together with some information about the image structure near the landmarks), and does not take advantage of all the available information – the texture across the target object. This can be modelled using an AAM.
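A minimal sketch of this search strategy is shown below. The interfaces (model.synthesize, model.sample) and the pre-trained update matrix are illustrative assumptions, not the API of any particular AAM implementation.

```python
import numpy as np

def fit_aam(image, params, model, update_matrix, n_iters=30, tol=1e-6):
    """Minimal sketch of AAM search: repeatedly compare the model's synthesized
    appearance with the image and correct the parameters with a learned linear map.

    Assumed (illustrative) interfaces:
      model.synthesize(params)    -> appearance vector predicted by the model
      model.sample(image, params) -> appearance sampled from the image under the
                                     shape implied by `params`
      update_matrix               -> matrix R learned offline by least squares so
                                     that delta_params ~= R @ residual
    """
    for _ in range(n_iters):
        residual = model.sample(image, params) - model.synthesize(params)
        delta = update_matrix @ residual
        params = params - delta
        if np.linalg.norm(delta) < tol:
            break
    return params
```

The key point of the approach, reflected in the loop above, is that the mapping from texture residual to parameter correction is learned once during training, so each fitting iteration is only a matrix-vector product.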
References
Some reading
T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham. Training models of shape from sets of examples. In Proceedings of BMVC'92, pages 266–275, 1992
S. C. Mitchell, J. G. Bosch, B. P. F. Lelieveldt, R. J. van der Geest, J. H. C. Reiber, and M. Sonka. 3-d active appearance models: Segmentation of cardiac MR and ultrasound images. IEEE Trans. Med. Imaging, 21(9):1167–1178, 2002
T.F. Cootes, G. J. Edwards, and C. J. Taylor. Active appearance models. ECCV, 2:484–498, 1998[pdf]
External links
Professor Tim Cootes AAM Code Free Tools for experimenting with AAMs from Manchester University (for research use only).
Professor Tim Cootes AAM Page Co-creator of AAM page from Manchester University.
IMM AAM Code Dr Mikkel B. Stegmann's home page of AAM-API, C++ AAM implementation (non-commercial use only).
Matlab AAM Code Open-source Matlab implementation of the original AAM algorithm.
AAMtools An Active Appearance Modelling Toolbox in Matlab by Dr George Papandreou.
DeMoLib AAM Toolbox in C++ by Dr Jason Saragih and Dr Roland Goecke.
Computer vision | Active appearance model | [
"Engineering"
] | 569 | [
"Artificial intelligence engineering",
"Packaging machinery",
"Computer vision"
] |
5,719,764 | https://en.wikipedia.org/wiki/Phenylhydrazine | Phenylhydrazine is the chemical compound with the formula C6H5NHNH2. It is often abbreviated as PhNHNH2. It is also found in edible mushrooms.
Properties
Phenylhydrazine forms monoclinic prisms that melt to an oil around room temperature which may turn yellow to dark red upon exposure to air. Phenylhydrazine is miscible with ethanol, diethyl ether, chloroform and benzene. It is sparingly soluble in water.
Preparation
Phenylhydrazine is prepared by reacting aniline with sodium nitrite in the presence of hydrogen chloride to form the diazonium salt, which is subsequently reduced using sodium sulfite in the presence of sodium hydroxide to form the final product.
History
Phenylhydrazine was the first hydrazine derivative characterized, reported by Hermann Emil Fischer in 1875. He prepared it by reduction of a phenyl diazonium salt using sulfite salts. Fischer used phenylhydrazine to characterize sugars via formation of hydrazones known as osazones with the sugar aldehyde. He also demonstrated in this first paper many of the key properties recognized for hydrazines.
Uses
Phenylhydrazine is used to prepare indoles by the Fischer indole synthesis, which are intermediates in the synthesis of various dyes and pharmaceuticals.
Phenylhydrazine is used to form phenylhydrazones of natural mixtures of simple sugars in order to render the differing sugars easily separable from each other.
This molecule is also used to induce acute hemolytic anemia in animal models.
Safety
Exposure to phenylhydrazine may cause contact dermatitis, hemolytic anemia, and liver damage.
References
External links
PubChem
Additional chemical properties of phenylhydrazine
CDC - NIOSH Pocket Guide to Chemical Hazards
Hydrazines
Monoamine oxidase inhibitors
Emil Fischer
Phenyl compounds | Phenylhydrazine | [
"Chemistry"
] | 403 | [
"Functional groups",
"Hydrazines"
] |
361,449 | https://en.wikipedia.org/wiki/Descent%20%28mathematics%29 | In mathematics, the idea of descent extends the intuitive idea of 'gluing' in topology. Since the topologists' glue is the use of equivalence relations on topological spaces, the theory starts with some ideas on identification.
Descent of vector bundles
The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start.
Suppose X is a topological space covered by open sets Xi. Let Y be the disjoint union of the Xi, so that there is a natural mapping p : Y → X.
We think of Y as 'above' X, with the Xi projection 'down' onto X. With this language, descent implies a vector bundle on Y (so, a bundle given on each Xi), and our concern is to 'glue' those bundles Vi, to make a single bundle V on X. What we mean is that V should, when restricted to Xi, give back Vi, up to a bundle isomorphism.
The data needed is then this: on each overlap Xij, the
intersection of Xi and Xj, we'll require mappings
fij : Vi → Vj
to use to identify Vi and Vj there, fiber by fiber. Further the fij must satisfy conditions based on the reflexive, symmetric and transitive properties of an equivalence relation (gluing conditions). For example, the composition
fjk ∘ fij = fik
for transitivity (and choosing apt notation). The fii should be identity maps and hence symmetry becomes fji = (fij)−1 (so that it is fiberwise an isomorphism).
These are indeed standard conditions in fiber bundle theory (see transition map). One important application to note is change of fiber: if the fij are all you need to make a bundle, then there are many ways to make an associated bundle. That is, we can take essentially the same fij, acting on various fibers.
Another major point is the relation with the chain rule: the discussion of the way there of constructing tensor fields can be summed up as 'once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just 'naturality of tensor constructions'.
To move closer towards the abstract theory we need to interpret the disjoint union of the Xij now as
Y ×X Y,
the fiber product (here an equalizer) of two copies of the projection p. The bundles on the Xij that we must control are V′ and V″, the pullbacks to the fiber of V via the two different projection maps to X.
Therefore, by going to a more abstract level one can eliminate the combinatorial side (that is, leave out the indices) and get something that makes sense for p not of the special form of covering with which we began. This then allows a category theory approach: what remains to do is to re-express the gluing conditions.
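In this fiber-product language the gluing data can be stated as follows (a standard formulation; the projection notation pr_i is chosen here for illustration): a descent datum for p consists of a bundle V on Y together with an isomorphism between its two pullbacks to Y ×X Y, subject to a cocycle condition on the triple fiber product.

```latex
\varphi : \operatorname{pr}_1^{*} V \;\xrightarrow{\;\sim\;}\; \operatorname{pr}_2^{*} V
\quad \text{on } Y \times_X Y,
\qquad
\operatorname{pr}_{13}^{*}\varphi \;=\; \operatorname{pr}_{23}^{*}\varphi \circ \operatorname{pr}_{12}^{*}\varphi
\quad \text{on } Y \times_X Y \times_X Y .
```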
History
The ideas were developed in the period 1955–1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of abstract category theory the work of comonads of Beck was a summation of those ideas; see Beck's monadicity theorem.
The difficulties of algebraic geometry with passage to the quotient are acute. The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar TDTE on theorems of descent and techniques of existence (see FGA) connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular.
Fully faithful descent
Let p : X′ → X be a morphism. Each sheaf F on X gives rise to a descent datum
(p*F, α),
where the isomorphism α between the two pullbacks of p*F to X′ ×X X′ satisfies the cocycle condition
on the triple fiber product X′ ×X X′ ×X X′.
The fully faithful descent says: The functor sending F to its descent datum is fully faithful. Descent theory tells conditions for which there is a fully faithful descent, and when this functor is an equivalence of categories.
See also
Grothendieck connection
Stack (mathematics)
Galois descent
Grothendieck topology
Fibered category
Beck's monadicity theorem
Cohomological descent
References
SGA 1, Ch VIII – this is the main reference
A chapter on the descent theory is more accessible than SGA.
Further reading
Other possible sources include:
Angelo Vistoli, Notes on Grothendieck topologies, fibered categories and descent theory
Mattieu Romagny, A straight way to algebraic stacks
External links
What is descent theory?
Topology
Category theory
Algebraic geometry | Descent (mathematics) | [
"Physics",
"Mathematics"
] | 899 | [
"Functions and mappings",
"Mathematical structures",
"Algebraic geometry",
"Mathematical objects",
"Fields of abstract algebra",
"Topology",
"Space",
"Category theory",
"Mathematical relations",
"Geometry",
"Spacetime"
] |
361,609 | https://en.wikipedia.org/wiki/Moduli%20space | In mathematics, in particular algebraic geometry, a moduli space is a geometric space (usually a scheme or an algebraic stack) whose points represent algebro-geometric objects of some fixed kind, or isomorphism classes of such objects. Such spaces frequently arise as solutions to classification problems: If one can show that a collection of interesting objects (e.g., the smooth algebraic curves of a fixed genus) can be given the structure of a geometric space, then one can parametrize such objects by introducing coordinates on the resulting space. In this context, the term "modulus" is used synonymously with "parameter"; moduli spaces were first understood as spaces of parameters rather than as spaces of objects. A variant of moduli spaces is formal moduli. Bernhard Riemann first used the term "moduli" in 1857.
Motivation
Moduli spaces are spaces of solutions of geometric classification problems. That is, the points of a moduli space correspond to solutions of geometric problems. Here different solutions are identified if they are isomorphic (that is, geometrically the same). Moduli spaces can be thought of as giving a universal space of parameters for the problem. For example, consider the problem of finding all circles in the Euclidean plane up to congruence. Any circle can be described uniquely by giving three points, but many different sets of three points give the same circle: the correspondence is many-to-one. However, circles are uniquely parameterized by giving their center and radius: this is two real parameters and one positive real parameter. Since we are only interested in circles "up to congruence", we identify circles having different centers but the same radius, and so the radius alone suffices to parameterize the set of interest. The moduli space is, therefore, the positive real numbers.
Moduli spaces often carry natural geometric and topological structures as well. In the example of circles, for instance, the moduli space is not just an abstract set, but the absolute value of the difference of the radii defines a metric for determining when two circles are "close". The geometric structure of moduli spaces locally tells us when two solutions of a geometric classification problem are "close", but generally moduli spaces also have a complicated global structure as well.
For example, consider how to describe the collection of lines in R2 that intersect the origin. We want to assign to each line L of this family a quantity that can uniquely identify it—a modulus. An example of such a quantity is the positive angle θ(L) with 0 ≤ θ < π radians. The set of lines L so parametrized is known as P1(R) and is called the real projective line.
We can also describe the collection of lines in R2 that intersect the origin by means of a topological construction. To wit: consider the unit circle S1 ⊂ R2 and notice that every point s ∈ S1 gives a line L(s) in the collection (which joins the origin and s). However, this map is two-to-one, so we want to identify s ~ −s to yield P1(R) ≅ S1/~ where the topology on this space is the quotient topology induced by the quotient map S1 → P1(R).
Thus, when we consider P1(R) as a moduli space of lines that intersect the origin in R2, we capture the ways in which the members (lines in this case) of the family can modulate by continuously varying 0 ≤ θ < π.
Basic examples
Projective space and Grassmannians
The real projective space Pn is a moduli space that parametrizes the space of lines in Rn+1 which pass through the origin. Similarly, complex projective space is the space of all complex lines in Cn+1 passing through the origin.
More generally, the Grassmannian G(k, V) of a vector space V over a field F is the moduli space of all k-dimensional linear subspaces of V.
Projective space as moduli of very ample line bundles generated by global sections
Whenever there is an embedding of a scheme into the universal projective space, the embedding is given by a line bundle and sections which all don't vanish at the same time. This means, given a point of the scheme, there is an associated point of the projective space given by evaluating the sections at that point. Then, two line bundles with sections are equivalent iff there is an isomorphism of the line bundles matching up the sections. This means the associated moduli functor sends a scheme to the set of such equivalence classes of line bundles with sections. Showing this is true can be done by running through a series of tautologies: any projective embedding gives a globally generated sheaf together with its sections. Conversely, an ample line bundle globally generated by sections gives an embedding as above.
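The statement sketched in this paragraph is usually written as the following natural bijection (a standard formulation, with notation chosen here for illustration):

```latex
\operatorname{Hom}_{\mathrm{Sch}}\!\left(S,\;\mathbb{P}^{n}\right)
\;\cong\;
\left\{\,(\mathcal{L},\,s_{0},\dots,s_{n})\ \middle|\
\mathcal{L}\ \text{a line bundle on}\ S,\ \
s_{0},\dots,s_{n}\in\Gamma(S,\mathcal{L})\ \text{generating}\ \mathcal{L}\,\right\}\Big/\cong
```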
Chow variety
The Chow variety Chow(d,P3) is a projective algebraic variety which parametrizes degree d curves in P3. It is constructed as follows. Let C be a curve of degree d in P3, then consider all the lines in P3 that intersect the curve C. This is a degree d divisor DC in G(2, 4), the Grassmannian of lines in P3. When C varies, by associating C to DC, we obtain a parameter space of degree d curves as a subset of the space of degree d divisors of the Grassmannian: Chow(d,P3).
Hilbert scheme
The Hilbert scheme Hilb(X) is a moduli scheme. Every closed point of Hilb(X) corresponds to a closed subscheme of a fixed scheme X, and every closed subscheme is represented by such a point. A simple example of a Hilbert scheme is the Hilbert scheme parameterizing degree d hypersurfaces of projective space. This is given by a projective bundle, with the universal family given by the vanishing loci of the parameterized polynomials, so that the fibre over a point is the associated projective scheme of the corresponding degree d homogeneous polynomial.
Definitions
There are several related notions of things we could call moduli spaces. Each of these definitions formalizes a different notion of what it means for the points of space M to represent geometric objects.
Fine moduli spaces
This is the standard concept. Heuristically, if we have a space M for which each point m ∊ M corresponds to an algebro-geometric object Um, then we can assemble these objects into a tautological family U over M. (For example, the Grassmannian G(k, V) carries a rank k bundle whose fiber at any point [L] ∊ G(k, V) is simply the linear subspace L ⊂ V.) M is called a base space of the family U. We say that such a family is universal if any family of algebro-geometric objects T over any base space B is the pullback of U along a unique map B → M. A fine moduli space is a space M which is the base of a universal family.
More precisely, suppose that we have a functor F from schemes to sets, which assigns to a scheme B the set of all suitable families of objects with base B. A space M is a fine moduli space for the functor F if M represents F, i.e., there is a natural isomorphism
τ : F → Hom(−, M), where Hom(−, M) is the functor of points. This implies that M carries a universal family; this family is the family on M corresponding to the identity map 1M ∊ Hom(M, M).
Coarse moduli spaces
Fine moduli spaces are desirable, but they do not always exist and are frequently difficult to construct, so mathematicians sometimes use a weaker notion, the idea of a coarse moduli space. A space M is a coarse moduli space for the functor F if there exists a natural transformation τ : F → Hom(−, M) and τ is universal among such natural transformations. More concretely, M is a coarse moduli space for F if any family T over a base B gives rise to a map φT : B → M and any two objects V and W (regarded as families over a point) correspond to the same point of M if and only if V and W are isomorphic. Thus, M is a space which has a point for every object that could appear in a family, and whose geometry reflects the ways objects can vary in families. Note, however, that a coarse moduli space does not necessarily carry any family of appropriate objects, let alone a universal one.
In other words, a fine moduli space includes both a base space M and universal family U → M, while a coarse moduli space only has the base space M.
Moduli stacks
It is frequently the case that interesting geometric objects come equipped with many natural automorphisms. This in particular makes the existence of a fine moduli space impossible (intuitively, the idea is that if L is some geometric object, the trivial family L × [0,1] can be made into a twisted family on the circle S1 by identifying L × {0} with L × {1} via a nontrivial automorphism. Now if a fine moduli space X existed, the map S1 → X should not be constant, but would have to be constant on any proper open set by triviality), one can still sometimes obtain a coarse moduli space. However, this approach is not ideal, as such spaces are not guaranteed to exist, they are frequently singular when they do exist, and miss details about some non-trivial families of objects they classify.
A more sophisticated approach is to enrich the classification by remembering the isomorphisms. More precisely, on any base B one can consider the category of families on B with only isomorphisms between families taken as morphisms. One then considers the fibred category which assigns to any space B the groupoid of families over B. The use of these categories fibred in groupoids to describe a moduli problem goes back to Grothendieck (1960/61). In general, they cannot be represented by schemes or even algebraic spaces, but in many cases, they have a natural structure of an algebraic stack.
Algebraic stacks and their use to analyze moduli problems appeared in Deligne-Mumford (1969) as a tool to prove the irreducibility of the (coarse) moduli space of curves of a given genus. The language of algebraic stacks essentially provides a systematic way to view the fibred category that constitutes the moduli problem as a "space", and the moduli stack of many moduli problems is better-behaved (such as smooth) than the corresponding coarse moduli space.
Further examples
Moduli of curves
The moduli stack classifies families of smooth projective curves of genus g, together with their isomorphisms. When g > 1, this stack may be compactified by adding new "boundary" points which correspond to stable nodal curves (together with their isomorphisms). A curve is stable if it has only a finite group of automorphisms. The resulting stack is denoted . Both moduli stacks carry universal families of curves. One can also define coarse moduli spaces representing isomorphism classes of smooth or stable curves. These coarse moduli spaces were actually studied before the notion of moduli stack was invented. In fact, the idea of a moduli stack was invented by Deligne and Mumford in an attempt to prove the projectivity of the coarse moduli spaces. In recent years, it has become apparent that the stack of curves is actually the more fundamental object.
Both stacks above have dimension 3g−3; hence a stable nodal curve can be completely specified by choosing the values of 3g−3 parameters, when g > 1. In lower genus, one must account for the presence of smooth families of automorphisms, by subtracting their number. There is exactly one complex curve of genus zero, the Riemann sphere, and its group of isomorphisms is PGL(2). Hence, the dimension of is
dim(space of genus zero curves) − dim(group of automorphisms) = 0 − dim(PGL(2)) = −3.
Likewise, in genus 1, there is a one-dimensional space of curves, but every such curve has a one-dimensional group of automorphisms. Hence, the stack has dimension 0. The coarse moduli spaces have dimension 3g−3 as the stacks when g > 1 because the curves with genus g > 1 have only a finite group as its automorphism i.e. dim(a group of automorphisms) = 0. Eventually, in genus zero, the coarse moduli space has dimension zero, and in genus one, it has dimension one.
One can also enrich the problem by considering the moduli stack of genus g nodal curves with n marked points. Such marked curves are said to be stable if the subgroup of curve automorphisms which fix the marked points is finite. The resulting moduli stacks of smooth (or stable) genus g curves with n-marked points are denoted (or ), and have dimension 3g − 3 + n.
A case of particular interest is the moduli stack of genus 1 curves with one marked point. This is the stack of elliptic curves, and is the natural home of the much studied modular forms, which are meromorphic sections of bundles on this stack.
Moduli of varieties
In higher dimensions, moduli of algebraic varieties are more difficult to construct and study. For instance, the higher-dimensional analogue of the moduli space of elliptic curves discussed above is the moduli space of abelian varieties, such as the Siegel modular variety. This is the problem underlying Siegel modular form theory. See also Shimura variety.
Using techniques arising out of the minimal model program, moduli spaces of varieties of general type were constructed by János Kollár and Nicholas Shepherd-Barron, now known as KSB moduli spaces.
Using techniques arising out of differential geometry and birational geometry simultaneously, the construction of moduli spaces of Fano varieties has been achieved by restricting to a special class of K-stable varieties. In this setting important results about boundedness of Fano varieties proven by Caucher Birkar are used, for which he was awarded the 2018 Fields medal.
The construction of moduli spaces of Calabi-Yau varieties is an important open problem, and only special cases such as moduli spaces of K3 surfaces or Abelian varieties are understood.
Moduli of vector bundles
Another important moduli problem is to understand the geometry of (various substacks of) the moduli stack Vectn(X) of rank n vector bundles on a fixed algebraic variety X. This stack has been most studied when X is one-dimensional, and especially when n equals one. In this case, the coarse moduli space is the Picard scheme, which like the moduli space of curves, was studied before stacks were invented. When the bundles have rank 1 and degree zero, the study of coarse moduli space is the study of the Jacobian variety.
In applications to physics, the number of moduli of vector bundles and the closely related problem of the number of moduli of principal G-bundles has been found to be significant in gauge theory.
Volume of the moduli space
Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces.
Methods for constructing moduli spaces
The modern formulation of moduli problems and definition of moduli spaces in terms of the moduli functors (or more generally the categories fibred in groupoids), and spaces (almost) representing them, dates back to Grothendieck (1960/61), in which he described the general framework, approaches, and main problems using Teichmüller spaces in complex analytical geometry as an example. The talks, in particular, describe the general method of constructing moduli spaces by first rigidifying the moduli problem under consideration.
More precisely, the existence of non-trivial automorphisms of the objects being classified makes it impossible to have a fine moduli space. However, it is often possible to consider a modified moduli problem of classifying the original objects together with additional data, chosen in such a way that the identity is the only automorphism respecting also the additional data. With a suitable choice of the rigidifying data, the modified moduli problem will have a (fine) moduli space T, often described as a subscheme of a suitable Hilbert scheme or Quot scheme. The rigidifying data is moreover chosen so that it corresponds to a principal bundle with an algebraic structure group G. Thus one can move back from the rigidified problem to the original by taking quotient by the action of G, and the problem of constructing the moduli space becomes that of finding a scheme (or more general space) that is (in a suitably strong sense) the quotient T/G of T by the action of G. The last problem, in general, does not admit a solution; however, it is addressed by the groundbreaking geometric invariant theory (GIT), developed by David Mumford in 1965, which shows that under suitable conditions the quotient indeed exists.
To see how this might work, consider the problem of parametrizing smooth curves of the genus g > 2. A smooth curve together with a complete linear system of degree d > 2g is equivalent to a closed one dimensional subscheme of the projective space Pd−g. Consequently, the moduli space of smooth curves and linear systems (satisfying certain criteria) may be embedded in the Hilbert scheme of a sufficiently high-dimensional projective space. This locus H in the Hilbert scheme has an action of PGL(n) which mixes the elements of the linear system; consequently, the moduli space of smooth curves is then recovered as the quotient of H by the projective general linear group.
Another general approach is primarily associated with Michael Artin. Here the idea is to start with an object of the kind to be classified and study its deformation theory. This means first constructing infinitesimal deformations, then appealing to prorepresentability theorems to put these together into an object over a formal base. Next, an appeal to Grothendieck's formal existence theorem provides an object of the desired kind over a base which is a complete local ring. This object can be approximated via Artin's approximation theorem by an object defined over a finitely generated ring. The spectrum of this latter ring can then be viewed as giving a kind of coordinate chart on the desired moduli space. By gluing together enough of these charts, we can cover the space, but the map from our union of spectra to the moduli space will, in general, be many to one. We, therefore, define an equivalence relation on the former; essentially, two points are equivalent if the objects over each are isomorphic. This gives a scheme and an equivalence relation, which is enough to define an algebraic space (actually an algebraic stack if we are being careful) if not always a scheme.
In physics
The term moduli space is sometimes used in physics to refer specifically to the moduli space of vacuum expectation values of a set of scalar fields, or to the moduli space of possible string backgrounds.
Moduli spaces also appear in physics in topological field theory, where one can use Feynman path integrals to compute the intersection numbers of various algebraic moduli spaces.
See also
Construction tools
Hilbert scheme
Quot scheme
Deformation theory
GIT quotient
Artin's criterion, general criterion for constructing moduli spaces as algebraic stacks from moduli functors
Moduli spaces
Moduli of algebraic curves
Moduli stack of elliptic curves
Moduli spaces of K-stable Fano varieties
Modular curve
Picard functor
Moduli of semistable sheaves on a curve
Kontsevich moduli space
Moduli of semistable sheaves
References
Notes
Moduli theory
Moduli stacks in P-adic modular forms and Langlands program
Research articles
Fundamental papers
Mumford, David, Geometric invariant theory. Ergebnisse der Mathematik und ihrer Grenzgebiete, Neue Folge, Band 34 Springer-Verlag, Berlin-New York 1965 vi+145 pp
Mumford, David; Fogarty, J.; Kirwan, F. Geometric invariant theory. Third edition. Ergebnisse der Mathematik und ihrer Grenzgebiete (2) (Results in Mathematics and Related Areas (2)), 34. Springer-Verlag, Berlin, 1994. xiv+292 pp.
Early applications
Other references
Papadopoulos, Athanase, ed. (2007), Handbook of Teichmüller theory. Vol. I, IRMA Lectures in Mathematics and Theoretical Physics, 11, European Mathematical Society (EMS), Zürich, , ,
Papadopoulos, Athanase, ed. (2009), Handbook of Teichmüller theory. Vol. II, IRMA Lectures in Mathematics and Theoretical Physics, 13, European Mathematical Society (EMS), Zürich, , ,
Papadopoulos, Athanase, ed. (2012), Handbook of Teichmüller theory. Vol. III, IRMA Lectures in Mathematics and Theoretical Physics, 17, European Mathematical Society (EMS), Zürich, , .
Other articles and sources
Maryam Mirzakhani (2007) "Simple geodesics and Weil-Petersson volumes of moduli spaces of bordered Riemann surfaces" Inventiones Mathematicae
External links
Moduli theory
Invariant theory | Moduli space | [
"Physics"
] | 4,425 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
361,897 | https://en.wikipedia.org/wiki/Astrophysics | Astrophysics is a science that employs the methods and principles of physics and chemistry in the study of astronomical objects and phenomena. As one of the founders of the discipline, James Keeler, said, astrophysics "seeks to ascertain the nature of the heavenly bodies, rather than their positions or motions in space—what they are, rather than where they are", which is studied in celestial mechanics.
Among the subjects studied are the Sun (solar physics), other stars, galaxies, extrasolar planets, the interstellar medium, and the cosmic microwave background. Emissions from these objects are examined across all parts of the electromagnetic spectrum, and the properties examined include luminosity, density, temperature, and chemical composition. Because astrophysics is a very broad subject, astrophysicists apply concepts and methods from many disciplines of physics, including classical mechanics, electromagnetism, statistical mechanics, thermodynamics, quantum mechanics, relativity, nuclear and particle physics, and atomic and molecular physics.
In practice, modern astronomical research often involves substantial work in the realms of theoretical and observational physics. Some areas of study for astrophysicists include the properties of dark matter, dark energy, black holes, and other celestial bodies; and the origin and ultimate fate of the universe. Topics also studied by theoretical astrophysicists include Solar System formation and evolution; stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity, special relativity, and quantum and physical cosmology (the physical study of the largest-scale structures of the universe), including string cosmology and astroparticle physics.
History
Astronomy is an ancient science, long separated from the study of terrestrial physics. In the Aristotelian worldview, bodies in the sky appeared to be unchanging spheres whose only motion was uniform motion in a circle, while the earthly world was the realm which underwent growth and decay and in which natural motion was in a straight line and ended when the moving object reached its goal. Consequently, it was held that the celestial region was made of a fundamentally different kind of matter from that found in the terrestrial sphere; either Fire as maintained by Plato, or Aether as maintained by Aristotle.
During the 17th century, natural philosophers such as Galileo, Descartes, and Newton began to maintain that the celestial and terrestrial regions were made of similar kinds of material and were subject to the same natural laws. Their challenge was that the tools had not yet been invented with which to prove these assertions.
For much of the nineteenth century, astronomical research was focused on the routine work of measuring the positions and computing the motions of astronomical objects. A new astronomy, soon to be called astrophysics, began to emerge when William Hyde Wollaston and Joseph von Fraunhofer independently discovered that, when decomposing the light from the Sun, a multitude of dark lines (regions where there was less or no light) were observed in the spectrum. By 1860 the physicist, Gustav Kirchhoff, and the chemist, Robert Bunsen, had demonstrated that the dark lines in the solar spectrum corresponded to bright lines in the spectra of known gases, specific lines corresponding to unique chemical elements. Kirchhoff deduced that the dark lines in the solar spectrum are caused by absorption by chemical elements in the Solar atmosphere. In this way it was proved that the chemical elements found in the Sun and stars were also found on Earth.
Among those who extended the study of solar and stellar spectra was Norman Lockyer, who in 1868 detected radiant (bright) lines, as well as dark lines, in solar spectra. Working with chemist Edward Frankland to investigate the spectra of elements at various temperatures and pressures, he could not associate a yellow line in the solar spectrum with any known elements. He thus claimed the line represented a new element, which was called helium, after the Greek Helios, the Sun personified.
In 1885, Edward C. Pickering undertook an ambitious program of stellar spectral classification at Harvard College Observatory, in which a team of woman computers, notably Williamina Fleming, Antonia Maury, and Annie Jump Cannon, classified the spectra recorded on photographic plates. By 1890, a catalog of over 10,000 stars had been prepared that grouped them into thirteen spectral types. Following Pickering's vision, by 1924 Cannon expanded the catalog to nine volumes and over a quarter of a million stars, developing the Harvard Classification Scheme which was accepted for worldwide use in 1922.
In 1895, George Ellery Hale and James E. Keeler, along with a group of ten associate editors from Europe and the United States, established The Astrophysical Journal: An International Review of Spectroscopy and Astronomical Physics. It was intended that the journal would fill the gap between journals in astronomy and physics, providing a venue for publication of articles on astronomical applications of the spectroscope; on laboratory research closely allied to astronomical physics, including wavelength determinations of metallic and gaseous spectra and experiments on radiation and absorption; on theories of the Sun, Moon, planets, comets, meteors, and nebulae; and on instrumentation for telescopes and laboratories.
Around 1920, following the discovery of the Hertzsprung–Russell diagram, still used as the basis for classifying stars and their evolution, Arthur Eddington anticipated the discovery and mechanism of nuclear fusion processes in stars in his paper The Internal Constitution of the Stars. At that time, the source of stellar energy was a complete mystery; Eddington correctly speculated that the source was fusion of hydrogen into helium, liberating enormous energy according to Einstein's equation E = mc². This was a particularly remarkable development since at that time fusion and thermonuclear energy, and even that stars are largely composed of hydrogen (see metallicity), had not yet been discovered.
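As a rough, illustrative check of the energy scale involved (this sketch is not from any cited source; the mass values and the conversion factor are standard reference numbers assumed here for illustration), the mass defect when four hydrogen nuclei fuse into one helium-4 nucleus can be converted into energy with E = mc²:

```python
# Rough estimate of the energy released when four hydrogen nuclei fuse into helium-4,
# from the mass defect and E = m c^2 (masses in unified atomic mass units, u).
m_H = 1.007825        # atomic mass of hydrogen-1, u (assumed reference value)
m_He = 4.002602       # atomic mass of helium-4, u (assumed reference value)
u_to_MeV = 931.494    # energy equivalent of 1 u, in MeV

mass_defect = 4 * m_H - m_He          # ~0.0287 u, about 0.7% of the input mass
energy_MeV = mass_defect * u_to_MeV   # ~26.7 MeV per helium nucleus formed

print(f"mass defect: {mass_defect:.4f} u ({mass_defect / (4 * m_H):.2%} of input mass)")
print(f"energy released: {energy_MeV:.1f} MeV")
```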
In 1925 Cecilia Helena Payne (later Cecilia Payne-Gaposchkin) wrote an influential doctoral dissertation at Radcliffe College, in which she applied Saha's ionization theory to stellar atmospheres to relate the spectral classes to the temperature of stars. Most significantly, she discovered that hydrogen and helium were the principal components of stars, in contrast to the composition of Earth. Despite Eddington's suggestion, the discovery was so unexpected that her dissertation readers (including Russell) convinced her to modify the conclusion before publication. However, later research confirmed her discovery.
By the end of the 20th century, studies of astronomical spectra had expanded to cover wavelengths extending from radio waves through optical, x-ray, and gamma wavelengths. In the 21st century, it further expanded to include observations based on gravitational waves.
Observational astrophysics
Observational astronomy is a division of the astronomical science that is concerned with recording and interpreting data, in contrast with theoretical astrophysics, which is mainly concerned with finding out the measurable implications of physical models. It is the practice of observing celestial objects by using telescopes and other astronomical apparatus.
Most astrophysical observations are made using the electromagnetic spectrum.
Radio astronomy studies radiation with a wavelength greater than a few millimeters. Example areas of study are radio waves, usually emitted by cold objects such as interstellar gas and dust clouds; the cosmic microwave background radiation which is the redshifted light from the Big Bang; pulsars, which were first detected at microwave frequencies. The study of these waves requires very large radio telescopes.
Infrared astronomy studies radiation with a wavelength that is too long to be visible to the naked eye but is shorter than radio waves. Infrared observations are usually made with telescopes similar to the familiar optical telescopes. Objects colder than stars (such as planets) are normally studied at infrared frequencies.
Optical astronomy was the earliest kind of astronomy. Telescopes paired with a charge-coupled device or spectroscopes are the most common instruments used. The Earth's atmosphere interferes somewhat with optical observations, so adaptive optics and space telescopes are used to obtain the highest possible image quality. In this wavelength range, stars are highly visible, and many chemical spectra can be observed to study the chemical composition of stars, galaxies, and nebulae.
Ultraviolet, X-ray and gamma ray astronomy study very energetic processes such as binary pulsars, black holes, magnetars, and many others. These kinds of radiation do not penetrate the Earth's atmosphere well. There are two methods in use to observe this part of the electromagnetic spectrum—space-based telescopes and ground-based imaging air Cherenkov telescopes (IACT). Examples of observatories of the first type are RXTE, the Chandra X-ray Observatory and the Compton Gamma Ray Observatory. Examples of IACTs are the High Energy Stereoscopic System (H.E.S.S.) and the MAGIC telescope.
Other than electromagnetic radiation, few things may be observed from the Earth that originate from great distances. A few gravitational wave observatories have been constructed, but gravitational waves are extremely difficult to detect. Neutrino observatories have also been built, primarily to study the Sun. Cosmic rays consisting of very high-energy particles can be observed hitting the Earth's atmosphere.
Observations can also vary in their time scale. Most optical observations take minutes to hours, so phenomena that change faster than this cannot readily be observed. However, historical data on some objects is available, spanning centuries or millennia. On the other hand, radio observations may look at events on a millisecond timescale (millisecond pulsars) or combine years of data (pulsar deceleration studies). The information obtained from these different timescales is very different.
The study of the Sun has a special place in observational astrophysics. Due to the tremendous distance of all other stars, the Sun can be observed in a kind of detail unparalleled by any other star. Understanding the Sun serves as a guide to understanding of other stars.
The topic of how stars change, or stellar evolution, is often modeled by placing the varieties of star types in their respective positions on the Hertzsprung–Russell diagram, which can be viewed as representing the state of a stellar object, from birth to destruction.
Theoretical astrophysics
Theoretical astrophysicists use a wide variety of tools which include analytical models (for example, polytropes to approximate the behaviors of a star) and computational numerical simulations. Each has some advantages. Analytical models of a process are generally better for giving insight into the heart of what is going on. Numerical models can reveal the existence of phenomena and effects that would otherwise not be seen.
Theorists in astrophysics endeavor to create theoretical models and figure out the observational consequences of those models. This helps allow observers to look for data that can refute a model or help in choosing between several alternate or conflicting models.
Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model.
Topics studied by theoretical astrophysicists include stellar dynamics and evolution; galaxy formation and evolution; magnetohydrodynamics; large-scale structure of matter in the universe; origin of cosmic rays; general relativity and physical cosmology, including string cosmology and astroparticle physics. Relativistic astrophysics serves as a tool to gauge the properties of large-scale structures for which gravitation plays a significant role in physical phenomena investigated and as the basis for black hole (astro)physics and the study of gravitational waves.
Some widely accepted and studied theories and models in astrophysics, now included in the Lambda-CDM model, are the Big Bang, cosmic inflation, dark matter, dark energy and fundamental theories of physics.
Popularization
The roots of astrophysics can be found in the seventeenth-century emergence of a unified physics, in which the same laws applied to the celestial and terrestrial realms. There were scientists who were qualified in both physics and astronomy who laid the firm foundation for the current science of astrophysics. In modern times, students continue to be drawn to astrophysics due to its popularization by the Royal Astronomical Society and notable educators such as Lawrence Krauss, Subrahmanyan Chandrasekhar, Stephen Hawking, Hubert Reeves, Carl Sagan and Patrick Moore. The efforts of scientists past and present continue to attract young people to study the history and science of astrophysics.
The television sitcom The Big Bang Theory popularized the field of astrophysics with the general public, and featured some well-known scientists such as Stephen Hawking and Neil deGrasse Tyson.
See also
References
Further reading
Astrophysics, Scholarpedia Expert articles
External links
Astronomy and Astrophysics, a European Journal
Astrophysical Journal
Cosmic Journey: A History of Scientific Cosmology from the American Institute of Physics
International Journal of Modern Physics D from World Scientific
List and directory of peer-reviewed Astronomy / Astrophysics Journals
Ned Wright's Cosmology Tutorial, UCLA
Astronomical sub-disciplines | Astrophysics | [
"Physics",
"Astronomy"
] | 2,644 | [
"Astronomical sub-disciplines",
"Astrophysics"
] |
361,924 | https://en.wikipedia.org/wiki/Order%20theory | Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary.
Background and motivation
Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers e.g. "2 is less than 3", "10 is greater than 5", or "Does Tom have fewer cookies than Sally?". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems (compare with numeral systems) in general (although one usually is also interested in the actual difference of two numbers, which is not given by the order). Other familiar examples of orderings are the alphabetical order of words in a dictionary and the genealogical property of lineal descent within a group of people.
The notion of order is very general, extending beyond contexts that have an immediate, intuitive feel of sequence or relative quantity. In other contexts orders may capture notions of containment or specialization. Abstractly, this type of order amounts to the subset relation, e.g., "Pediatricians are physicians," and "Circles are merely special-case ellipses."
Some orders, like "less-than" on the natural numbers and alphabetical order on words, have a special property: each element can be compared to any other element, i.e. it is smaller (earlier) than, larger (later) than, or identical to. However, many other orders do not. Consider for example the subset order on a collection of sets: though the set of birds and the set of dogs are both subsets of the set of animals, neither the birds nor the dogs constitutes a subset of the other. Those orders like the "subset-of" relation for which there exist incomparable elements are called partial orders; orders for which every pair of elements is comparable are total orders.
Order theory captures the intuition of orders that arises from such examples in a general setting. This is achieved by specifying properties that a relation ≤ must have to be a mathematical order. This more abstract approach makes much sense, because one can derive numerous theorems in the general setting, without focusing on the details of any particular order. These insights can then be readily transferred to many less abstract applications.
Driven by the wide practical usage of orders, numerous special kinds of ordered sets have been defined, some of which have grown into mathematical fields of their own. In addition, order theory does not restrict itself to the various classes of ordering relations, but also considers appropriate functions between them. A simple example of an order theoretic property for functions comes from analysis where monotone functions are frequently found.
Basic definitions
This section introduces ordered sets by building upon the concepts of set theory, arithmetic, and binary relations.
Partially ordered sets
Orders are special binary relations. Suppose that P is a set and that ≤ is a relation on P ('relation on a set' is taken to mean 'relation amongst its inhabitants', i.e. ≤ is a subset of the Cartesian product P × P). Then ≤ is a partial order if it is reflexive, antisymmetric, and transitive, that is, if for all a, b and c in P, we have that:
a ≤ a (reflexivity)
if a ≤ b and b ≤ a then a = b (antisymmetry)
if a ≤ b and b ≤ c then a ≤ c (transitivity).
A set with a partial order on it is called a partially ordered set, poset, or just ordered set if the intended meaning is clear. By checking these properties, one immediately sees that the well-known orders on natural numbers, integers, rational numbers and reals are all orders in the above sense. However, these examples have the additional property that any two elements are comparable, that is, for all a and b in P, we have that:
a ≤ b or b ≤ a.
A partial order with this property is called a total order. These orders can also be called linear orders or chains. While many familiar orders are linear, the subset order on sets provides an example where this is not the case. Another example is given by the divisibility (or "is-a-factor-of") relation |. For two natural numbers n and m, we write n|m if n divides m without remainder. One easily sees that this yields a partial order. For example neither 3 divides 13 nor 13 divides 3, so 3 and 13 are not comparable elements of the divisibility relation on the set of integers.
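As a small illustration of these definitions (a sketch that is not part of the article; the helper names are invented for the example), the following checks the three axioms for the divisibility relation on the numbers 1 through 13 and confirms that 3 and 13 are incomparable:

```python
# Check that divisibility is a partial order on {1, ..., 13}.
def is_partial_order(elements, leq):
    reflexive = all(leq(a, a) for a in elements)
    antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                        for a in elements for b in elements)
    transitive = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                     for a in elements for b in elements for c in elements)
    return reflexive and antisymmetric and transitive

divides = lambda a, b: b % a == 0   # "a divides b"
numbers = range(1, 14)

print(is_partial_order(numbers, divides))   # True
print(divides(3, 13), divides(13, 3))       # False False: 3 and 13 are incomparable
```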
The identity relation = on any set is also a partial order in which every two distinct elements are incomparable. It is also the only relation that is both a partial order and an equivalence relation because it satisfies both the antisymmetry property of partial orders and the symmetry property of equivalence relations. Many advanced properties of posets are interesting mainly for non-linear orders.
Visualizing a poset
Hasse diagrams can visually represent the elements and relations of a partial ordering. These are graph drawings where the vertices are the elements of the poset and the ordering relation is indicated by both the edges and the relative positioning of the vertices. Orders are drawn bottom-up: if an element x is smaller than (precedes) y then there exists a path from x to y that is directed upwards. It is often necessary for the edges connecting elements to cross each other, but elements must never be located within an edge. An instructive exercise is to draw the Hasse diagram for the set of natural numbers that are smaller than or equal to 13, ordered by | (the divides relation).
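A minimal sketch of that exercise (illustrative code, not part of the article), using the convention that a Hasse diagram has an edge exactly for the covering pairs, i.e. pairs a | b with no third element strictly between them:

```python
# Edges of the Hasse diagram for {1, ..., 13} ordered by divisibility.
elements = range(1, 14)

def divides(a, b):
    return b % a == 0

covers = [(a, b) for a in elements for b in elements
          if a != b and divides(a, b)
          and not any(a != c != b and divides(a, c) and divides(c, b)
                      for c in elements)]
print(covers)   # includes (1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 8), (4, 12), (6, 12), ...
```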
Even some infinite sets can be diagrammed by superimposing an ellipsis (...) on a finite sub-order. This works well for the natural numbers, but it fails for the reals, where there is no immediate successor above 0; however, quite often one can obtain an intuition related to diagrams of a similar kind.
Special elements within an order
In a partially ordered set there may be some elements that play a special role. The most basic example is given by the least element of a poset. For example, 1 is the least element of the positive integers and the empty set is the least set under the subset order. Formally, an element m is a least element if:
m ≤ a, for all elements a of the order.
The notation 0 is frequently found for the least element, even when no numbers are concerned. However, in orders on sets of numbers, this notation might be inappropriate or ambiguous, since the number 0 is not always least. An example is given by the above divisibility order |, where 1 is the least element since it divides all other numbers. In contrast, 0 is the number that is divided by all other numbers. Hence it is the greatest element of the order. Other frequent terms for the least and greatest elements are bottom and top, or zero and unit.
Least and greatest elements may fail to exist, as the example of the real numbers shows. But if they exist, they are always unique. In contrast, consider the divisibility relation | on the set {2,3,4,5,6}. Although this set has neither top nor bottom, the elements 2, 3, and 5 have no elements below them, while 4, 5 and 6 have none above. Such elements are called minimal and maximal, respectively. Formally, an element m is minimal if:
a ≤ m implies a = m, for all elements a of the order.
Exchanging ≤ with ≥ yields the definition of maximality. As the example shows, there can be many maximal elements and some elements may be both maximal and minimal (e.g. 5 above). However, if there is a least element, then it is the only minimal element of the order. Again, in infinite posets maximal elements do not always exist - the set of all finite subsets of a given infinite set, ordered by subset inclusion, provides one of many counterexamples. An important tool to ensure the existence of maximal elements under certain conditions is Zorn's Lemma.
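The {2,3,4,5,6} example above can be checked directly with a short sketch (illustrative only, not from the article):

```python
# Minimal and maximal elements of {2, 3, 4, 5, 6} under divisibility.
S = [2, 3, 4, 5, 6]
divides = lambda a, b: b % a == 0

minimal = [m for m in S if all(not divides(a, m) or a == m for a in S)]
maximal = [m for m in S if all(not divides(m, a) or a == m for a in S)]

print(minimal)   # [2, 3, 5] -- no element of S lies strictly below them
print(maximal)   # [4, 5, 6] -- no element of S lies strictly above them
```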
Subsets of partially ordered sets inherit the order. We already applied this by considering the subset {2,3,4,5,6} of the natural numbers with the induced divisibility ordering. Now there are also elements of a poset that are special with respect to some subset of the order. This leads to the definition of upper bounds. Given a subset S of some poset P, an upper bound of S is an element b of P that is above all elements of S. Formally, this means that
s ≤ b, for all s in S.
Lower bounds again are defined by inverting the order. For example, -5 is a lower bound of the natural numbers as a subset of the integers. Given a set of sets, an upper bound for these sets under the subset ordering is given by their union. In fact, this upper bound is quite special: it is the smallest set that contains all of the sets. Hence, we have found the least upper bound of a set of sets. This concept is also called supremum or join, and for a set S one writes sup(S) or ⋁S for its least upper bound. Conversely, the greatest lower bound is known as infimum or meet and denoted inf(S) or ⋀S. These concepts play an important role in many applications of order theory. For two elements x and y, one also writes x ∨ y and x ∧ y for sup({x,y}) and inf({x,y}), respectively.
For example, 1 is the infimum of the positive integers as a subset of integers.
For another example, consider again the relation | on natural numbers. The least upper bound of two numbers is the smallest number that is divided by both of them, i.e. the least common multiple of the numbers. Greatest lower bounds in turn are given by the greatest common divisor.
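A brief illustration of the last point (a sketch assuming Python's standard math.gcd; the lcm helper is written out for clarity): in the divisibility order, the join of two numbers is their least common multiple and their meet is their greatest common divisor.

```python
from math import gcd

def lcm(x, y):
    # Least common multiple: the join (least upper bound) under divisibility.
    return x * y // gcd(x, y)

x, y = 12, 18
print(lcm(x, y))   # 36 -- the smallest number divisible by both (their join)
print(gcd(x, y))   # 6  -- the largest number dividing both (their meet)
```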
Duality
In the previous definitions, we often noted that a concept can be defined by just inverting the ordering in a former definition. This is the case for "least" and "greatest", for "minimal" and "maximal", for "upper bound" and "lower bound", and so on. This is a general situation in order theory: A given order can be inverted by just exchanging its direction, pictorially flipping the Hasse diagram top-down. This yields the so-called dual, inverse, or opposite order.
Every order theoretic definition has its dual: it is the notion one obtains by applying the definition to the inverse order. Since all concepts are symmetric, this operation preserves the theorems of partial orders. For a given mathematical result, one can just invert the order and replace all definitions by their duals and one obtains another valid theorem. This is important and useful, since one obtains two theorems for the price of one. Some more details and examples can be found in the article on duality in order theory.
Constructing new orders
There are many ways to construct orders out of given orders. The dual order is one example. Another important construction is the cartesian product of two partially ordered sets, taken together with the product order on pairs of elements. The ordering is defined by (a, x) ≤ (b, y) if (and only if) a ≤ b and x ≤ y. (Notice carefully that there are three distinct meanings for the relation symbol ≤ in this definition.) The disjoint union of two posets is another typical example of order construction, where the order is just the (disjoint) union of the original orders.
Every partial order ≤ gives rise to a so-called strict order <, by defining a < b if a ≤ b and not b ≤ a. This transformation can be inverted by setting a ≤ b if a < b or a = b. The two concepts are equivalent although in some circumstances one can be more convenient to work with than the other.
Functions between orders
It is reasonable to consider functions between partially ordered sets having certain additional properties that are related to the ordering relations of the two sets. The most fundamental condition that occurs in this context is monotonicity. A function f from a poset P to a poset Q is monotone, or order-preserving, if a ≤ b in P implies f(a) ≤ f(b) in Q (Noting that, strictly, the two relations here are different since they apply to different sets.). The converse of this implication leads to functions that are order-reflecting, i.e. functions f as above for which f(a) ≤ f(b) implies a ≤ b. On the other hand, a function may also be order-reversing or antitone, if a ≤ b implies f(a) ≥ f(b).
An order-embedding is a function f between orders that is both order-preserving and order-reflecting. Examples for these definitions are found easily. For instance, the function that maps a natural number to its successor is clearly monotone with respect to the natural order. Any function from a discrete order, i.e. from a set ordered by the identity order "=", is also monotone. Mapping each natural number to the corresponding real number gives an example for an order embedding. The set complement on a powerset is an example of an antitone function.
An important question is when two orders are "essentially equal", i.e. when they are the same up to renaming of elements. Order isomorphisms are functions that define such a renaming. An order-isomorphism is a monotone bijective function that has a monotone inverse. This is equivalent to being a surjective order-embedding. Hence, the image f(P) of an order-embedding is always isomorphic to P, which justifies the term "embedding".
A more elaborate type of function is given by so-called Galois connections. Monotone Galois connections can be viewed as a generalization of order-isomorphisms, since they consist of a pair of functions in converse directions, which are "not quite" inverse to each other, but that still have close relationships.
Another special type of self-maps on a poset are closure operators, which are not only monotonic, but also idempotent, i.e. f(x) = f(f(x)), and extensive (or inflationary), i.e. x ≤ f(x). These have many applications in all kinds of "closures" that appear in mathematics.
Besides being compatible with the mere order relations, functions between posets may also behave well with respect to special elements and constructions. For example, when talking about posets with least element, it may seem reasonable to consider only monotonic functions that preserve this element, i.e. which map least elements to least elements. If binary infima ∧ exist, then a reasonable property might be to require that f(x ∧ y) = f(x) ∧ f(y), for all x and y. All of these properties, and indeed many more, may be compiled under the label of limit-preserving functions.
Finally, one can invert the view, switching from functions of orders to orders of functions. Indeed, the functions between two posets P and Q can be ordered via the pointwise order. For two functions f and g, we have f ≤ g if f(x) ≤ g(x) for all elements x of P. This occurs for example in domain theory, where function spaces play an important role.
Special types of orders
Many of the structures that are studied in order theory employ order relations with further properties. In fact, even some relations that are not partial orders are of special interest. Mainly the concept of a preorder has to be mentioned. A preorder is a relation that is reflexive and transitive, but not necessarily antisymmetric. Each preorder induces an equivalence relation between elements, where a is equivalent to b, if a ≤ b and b ≤ a. Preorders can be turned into orders by identifying all elements that are equivalent with respect to this relation.
Several types of orders can be defined from numerical data on the items of the order: a total order results from attaching distinct real numbers to each item and using the numerical comparisons to order the items; instead, if distinct items are allowed to have equal numerical scores, one obtains a strict weak ordering. Requiring two scores to be separated by a fixed threshold before they may be compared leads to the concept of a semiorder, while allowing the threshold to vary on a per-item basis produces an interval order.
An additional simple but useful property leads to so-called well-founded orders, for which all non-empty subsets have a minimal element. Generalizing well-orders from linear to partial orders, a set is well partially ordered if all its non-empty subsets have a finite number of minimal elements.
Many other types of orders arise when the existence of infima and suprema of certain sets is guaranteed. Focusing on this aspect, usually referred to as completeness of orders, one obtains:
Bounded posets, i.e. posets with a least and greatest element (which are just the supremum and infimum of the empty subset),
Lattices, in which every non-empty finite set has a supremum and infimum,
Complete lattices, where every set has a supremum and infimum, and
Directed complete partial orders (dcpos), that guarantee the existence of suprema of all directed subsets and that are studied in domain theory.
Partial orders with complements, or poc sets, are posets with a unique bottom element 0, as well as an order-reversing involution * such that a ≤ a* implies a = 0.
However, one can go even further: if all finite non-empty infima exist, then ∧ can be viewed as a total binary operation in the sense of universal algebra. Hence, in a lattice, two operations ∧ and ∨ are available, and one can define new properties by giving identities, such as
x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z), for all x, y, and z.
This condition is called distributivity and gives rise to distributive lattices. There are some other important distributivity laws which are discussed in the article on distributivity in order theory. Some additional order structures that are often specified via algebraic operations and defining identities are
Heyting algebras and
Boolean algebras,
which both introduce a new operation ~ called negation. Both structures play a role in mathematical logic and especially Boolean algebras have major applications in computer science.
Finally, various structures in mathematics combine orders with even more algebraic operations, as in the case of quantales, that allow for the definition of an addition operation.
Many other important properties of posets exist. For example, a poset is locally finite if every closed interval [a, b] in it is finite. Locally finite posets give rise to incidence algebras which in turn can be used to define the Euler characteristic of finite bounded posets.
Subsets of ordered sets
In an ordered set, one can define many types of special subsets based on the given order. A simple example is given by upper sets; i.e. sets that contain all elements that are above them in the order. Formally, the upper closure of a set S in a poset P is given by the set {x in P | there is some y in S with y ≤ x}. A set that is equal to its upper closure is called an upper set. Lower sets are defined dually.
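For instance (an illustrative sketch, not from the article), the upper closure of S = {2, 3} inside {1, ..., 13} ordered by divisibility collects every number that some element of S divides:

```python
# Upper closure of S = {2, 3} in {1, ..., 13} under divisibility.
P = range(1, 14)
S = {2, 3}

upper_closure = {x for x in P if any(x % y == 0 for y in S)}
print(sorted(upper_closure))   # [2, 3, 4, 6, 8, 9, 10, 12]
```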
More complicated lower subsets are ideals, which have the additional property that each two of their elements have an upper bound within the ideal. Their duals are given by filters. A related concept is that of a directed subset, which like an ideal contains upper bounds of finite subsets, but does not have to be a lower set. Furthermore, it is often generalized to preordered sets.
A subset which is, as a sub-poset, linearly ordered is called a chain. The opposite notion, the antichain, is a subset that contains no two comparable elements; i.e. one that is a discrete order.
Related mathematical areas
Although most mathematical areas use orders in one or the other way, there are also a few theories that have relationships which go far beyond mere application. Together with their major points of contact with order theory, some of these are to be presented below.
Universal algebra
As already mentioned, the methods and formalisms of universal algebra are an important tool for many order theoretic considerations. Beside formalizing orders in terms of algebraic structures that satisfy certain identities, one can also establish other connections to algebra. An example is given by the correspondence between Boolean algebras and Boolean rings. Other issues are concerned with the existence of free constructions, such as free lattices based on a given set of generators. Furthermore, closure operators are important in the study of universal algebra.
Topology
In topology, orders play a very prominent role. In fact, the collection of open sets provides a classical example of a complete lattice, more precisely a complete Heyting algebra (or "frame" or "locale"). Filters and nets are notions closely related to order theory and the closure operator of sets can be used to define a topology. Beyond these relations, topology can be looked at solely in terms of the open set lattices, which leads to the study of pointless topology. Furthermore, a natural preorder of elements of the underlying set of a topology is given by the so-called specialization order, that is actually a partial order if the topology is T0.
Conversely, in order theory, one often makes use of topological results. There are various ways to define subsets of an order which can be considered as open sets of a topology. Considering topologies on a poset (X, ≤) that in turn induce ≤ as their specialization order, the finest such topology is the Alexandrov topology, given by taking all upper sets as opens. Conversely, the coarsest topology that induces the specialization order is the upper topology, having the complements of principal ideals (i.e. sets of the form {y in X | y ≤ x} for some x) as a subbase. Additionally, a topology with specialization order ≤ may be order consistent, meaning that their open sets are "inaccessible by directed suprema" (with respect to ≤). The finest order consistent topology is the Scott topology, which is coarser than the Alexandrov topology. A third important topology in this spirit is the Lawson topology. There are close connections between these topologies and the concepts of order theory. For example, a function preserves directed suprema if and only if it is continuous with respect to the Scott topology (for this reason this order theoretic property is also called Scott-continuity).
Category theory
The visualization of orders with Hasse diagrams has a straightforward generalization: instead of displaying lesser elements below greater ones, the direction of the order can also be depicted by giving directions to the edges of a graph. In this way, each order is seen to be equivalent to a directed acyclic graph, where the nodes are the elements of the poset and there is a directed path from a to b if and only if a ≤ b. Dropping the requirement of being acyclic, one can also obtain all preorders.
When equipped with all transitive edges, these graphs in turn are just special categories, where elements are objects and each set of morphisms between two elements is at most singleton. Functions between orders become functors between categories. Many ideas of order theory are just concepts of category theory in small. For example, an infimum is just a categorical product. More generally, one can capture infima and suprema under the abstract notion of a categorical limit (or colimit, respectively). Another place where categorical ideas occur is the concept of a (monotone) Galois connection, which is just the same as a pair of adjoint functors.
But category theory also has its impact on order theory on a larger scale. Classes of posets with appropriate functions as discussed above form interesting categories. Often one can also state constructions of orders, like the product order, in terms of categories. Further insights result when categories of orders are found categorically equivalent to other categories, for example of topological spaces. This line of research leads to various representation theorems, often collected under the label of Stone duality.
History
As explained before, orders are ubiquitous in mathematics. However, the earliest explicit mentions of partial orders are probably not to be found before the 19th century. In this context, the works of George Boole are of great importance. Moreover, works of Charles Sanders Peirce, Richard Dedekind, and Ernst Schröder also consider concepts of order theory.
Contributors to ordered geometry were listed in a 1961 textbook:
In 1901 Bertrand Russell wrote "On the Notion of Order" exploring the foundations of the idea through generation of series. He returned to the topic in part IV of The Principles of Mathematics (1903). Russell noted that binary relation aRb has a sense proceeding from a to b with the converse relation having an opposite sense, and sense "is the source of order and series." (p 95) He acknowledges Immanuel Kant was "aware of the difference between logical opposition and the opposition of positive and negative". He wrote that Kant deserves credit as he "first called attention to the logical importance of asymmetric relations."
The term poset as an abbreviation for partially ordered set is attributed to Garrett Birkhoff in the second edition of his influential book Lattice Theory.
See also
Causal Sets
Cyclic order
Hierarchy
Incidence algebra
Notes
References
External links
Orders at ProvenMath partial order, linear order, well order, initial segment; formal definitions and proofs within the axioms of set theory.
Nagel, Felix (2013). Set Theory and Topology. An Introduction to the Foundations of Analysis
Organization | Order theory | [
"Mathematics"
] | 5,498 | [
"Order theory"
] |
362,070 | https://en.wikipedia.org/wiki/Ignition%20system | Ignition systems are used by heat engines to initiate combustion by igniting the fuel-air mixture. In a spark ignition versions of the internal combustion engine (such as petrol engines), the ignition system creates a spark to ignite the fuel-air mixture just before each combustion stroke. Gas turbine engines and rocket engines normally use an ignition system only during start-up.
Diesel engines use compression ignition to ignite the fuel-air mixture using the heat of compression and therefore do not use an ignition system. They usually have glowplugs that preheat the combustion chamber to aid starting in cold weather.
Early cars used ignition magneto and trembler coil systems, which were superseded by distributor-based systems (first used in 1912). Electronic ignition systems (first used in 1968) became common towards the end of the 20th century, with coil-on-plug versions of these systems becoming widespread since the 1990s.
Magneto and mechanical systems
Ignition magneto systems
An ignition magneto (also called a high-tension magneto) is an older type of ignition system used in spark-ignition engines (such as petrol engines). It uses a magneto and a transformer to make pulses of high voltage for the spark plugs. The older term "high-tension" means "high-voltage".
Used on many cars in the early 20th century, ignition magnetos were largely replaced by induction coil ignition systems. The use of ignition magnetos is now confined mainly to engines without a battery, for example in lawnmowers and chainsaws. It is also used in modern piston-engined aircraft (even though a battery is present), to avoid the engine relying on an electrical system.
Induction coil systems
As batteries became more common in cars (due to the increased usage of electric starter motors), magneto systems were replaced by systems using an induction coil. The 1886 Benz Patent-Motorwagen and the 1908 Ford Model T used a trembler coil ignition system, whereby the trembler interrupted the current through the coil and caused a rapid series of sparks during each firing. The trembler coil would be energized at an appropriate point in the engine cycle. In the Model T, the four-cylinder engine had a trembler coil for each cylinder.
Distributor-based systems
An improved ignition system was invented by Charles Kettering at Delco in the United States and introduced in Cadillac's 1912 cars. The Kettering ignition system consisted of a single ignition coil, breaker points, a capacitor (to prevent the points from arcing at break) and a distributor (to direct the electricity from the ignition coil to the correct cylinder). The Kettering system became the primary ignition system for many years in the automotive industry due to its lower cost and relative simplicity.
Electronic systems
The first electronic ignition (a cold cathode type) was tested in 1948 by Delco-Remy, while Lucas introduced a transistorized ignition in 1955, which was used on BRM and Coventry Climax Formula One engines in 1962. The aftermarket began offering EI that year, with both the AutoLite Electric Transistor 201 and Tung-Sol EI-4 (thyratron capacitive discharge) being available. Pontiac became the first automaker to offer an optional EI, the breakerless magnetic pulse-triggered Delcotronic, on some 1963 models; it was also available on some Corvettes. The first commercially available all solid-state (SCR) capacitive discharge ignition was manufactured by Hyland Electronics in Canada also in 1963. Ford fitted a Ford-designed breakerless system on the Lotus 25s entered at Indianapolis the next year, ran a fleet test in 1964, and began offering optional EI on some models in 1965. This electronic system was utilized on the GT40s campaigned by Shelby American and Holman and Moody. Robert C. Hogle, Ford Motor Company, presented the "Mark II-GT Ignition and Electrical System", Publication #670068, at the SAE Congress, Detroit, Michigan, January 9–13, 1967. Beginning in 1958, Earl W. Meyer at Chrysler worked on EI, continuing until 1961 and resulting in use of EI on the company's NASCAR hemis in 1963 and 1964.
Prest-O-Lite's CD-65, which relied on capacitance discharge (CD), appeared in 1965, and had "an unprecedented 50,000 mile warranty." (This differs from the non-CD Prest-O-Lite system introduced on AMC products in 1972, and made standard equipment for the 1975 model year.) A similar CD unit was available from Delco in 1966, which was optional on Oldsmobile, Pontiac, and GMC vehicles in the 1967 model year. Also in 1967, Motorola debuted their breakerless CD system. The most famous aftermarket electronic ignition, which debuted in 1965, was the Delta Mark 10 capacitive discharge ignition, which was sold assembled or as a kit.
The Fiat Dino was the first production car to come standard with EI in 1968, followed by the Jaguar XJ Series 1 in 1971, Chrysler (after a 1971 trial) in 1973 and by Ford and GM in 1975.
In 1967, Prest-O-Lite made a "Black Box" ignition amplifier, intended to take the load off the distributor's breaker points during high rpm runs, which was used by Dodge and Plymouth on their factory Super Stock Coronet and Belvedere drag racers. This amplifier was installed on the interior side of the cars' firewall, and had a duct which provided outside air to cool the unit. The rest of the system (distributor and spark plugs) remains as for the mechanical system. The lack of moving parts compared with the mechanical system leads to greater reliability and longer service intervals.
A variation of coil-on-plug ignition has each coil handle two plugs, on cylinders which are 360 degrees out of phase (and therefore reach top dead center (TDC) at the same time); in the four-cycle engine this means that one plug will be sparking during the end of the exhaust stroke while the other fires at the usual time, a so-called "wasted spark" arrangement which has no drawbacks apart from faster spark plug erosion; the paired cylinders are 1/4 and 2/3 on four cylinder arrangements, 1/4, 6/3, 2/5 on six cylinder engines and 6/7, 4/1, 8/3 and 2/5 on V8 engines. Other systems do away with the distributor as a timing apparatus and use a magnetic crank angle sensor mounted on the crankshaft to trigger the ignition at the proper time.
Engine Control Units
Modern automotive engines use an engine control unit (ECU), which is a single device that controls various engine functions including the ignition system and the fuel injection. This contrasts earlier engines, where the fuel injection and ignition were operated as separate systems.
Gas turbine and rocket engines
Gas turbine engines (including jet engines) use capacitor discharge ignition, however the ignition system is only used at startup or when the combustor(s) flame goes out.
The ignition system in a rocket engine is critical to avoiding a hard start or explosion. Rockets often employ pyrotechnic devices that place flames across the face of the injector plate, or, alternatively, hypergolic propellants that ignite spontaneously on contact with each other.
See also
Electromagnetism
Faraday's law of induction
History of the internal combustion engine
References
Auto parts
Applications of control engineering
Engine components | Ignition system | [
"Technology",
"Engineering"
] | 1,543 | [
"Engine components",
"Control engineering",
"Engines",
"Applications of control engineering"
] |
362,132 | https://en.wikipedia.org/wiki/Glans | The glans (, : glandes ; from the Latin word for "acorn") is a vascular structure located at the tip of the penis in male mammals or a homologous genital structure of the clitoris in female mammals.
Structure
The exterior structure of the glans consists of mucous membrane, which is usually covered by foreskin or clitoral hood in naturally developed genitalia. This covering, called the prepuce, is normally retractable in adulthood unless removed by circumcision.
The glans naturally joins with the frenulum of the penis or clitoris, as well as the inner labia in women, and the foreskin in men. In non-technical or sexual discussions, the word "clitoris" often refers to the external glans alone, excluding the clitoral hood, frenulum, and internal body of the clitoris. Similarly, the phrases "tip" or "head" of the penis refer to the glans alone.
Sex differences in humans
In males, the glans is known as the glans penis, while in females the glans is known as the clitoral glans.
In females, the clitoris is above the urethra. The glans of the clitoris is the most highly innervated part of the external female genitalia.
In spotted hyenas, the female's pseudo-penis can be distinguished from the male's penis by its greater thickness and more rounded glans. In both male and female spotted hyenas, the base of the glans is covered with penile spines.
Development
In the development of the urinary and reproductive organs, the glans is derived from the genital tubercle.
See also
Glanuloplasty
References
Works cited
Sexual anatomy | Glans | [
"Biology"
] | 379 | [
"Behavior",
"Sex",
"Sexuality stubs",
"Sexual anatomy",
"Sexuality"
] |
362,348 | https://en.wikipedia.org/wiki/Terahertz%20radiation | Terahertz radiation – also known as submillimeter radiation, terahertz waves, tremendously high frequency (THF), T-rays, T-waves, T-light, T-lux or THz – consists of electromagnetic waves within the International Telecommunication Union-designated band of frequencies from 0.3 to 3 terahertz (THz), although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz. One terahertz is 1012 Hz or 1,000 GHz. Wavelengths of radiation in the terahertz band correspondingly range from 1 mm to 0.1 mm = 100 μm. Because terahertz radiation begins at a wavelength of around 1 millimeter and proceeds into shorter wavelengths, it is sometimes known as the submillimeter band, and its radiation as submillimeter waves, especially in astronomy. This band of electromagnetic radiation lies within the transition region between microwave and far infrared, and can be regarded as either.
Compared to lower radio frequencies, terahertz radiation is strongly absorbed by the gases of the atmosphere, and in air most of the energy is attenuated within a few meters, so it is not practical for long distance terrestrial radio communication. It can penetrate thin layers of materials but is blocked by thicker objects. THz beams transmitted through materials can be used for material characterization, layer inspection, relief measurement, and as a lower-energy alternative to X-rays for producing high resolution images of the interior of solid objects.
Terahertz radiation occupies a middle ground where the ranges of microwaves and infrared light waves overlap, known as the "terahertz gap"; it is called a "gap" because the technology for its generation and manipulation is still in its infancy. The generation and modulation of electromagnetic waves in this frequency range ceases to be possible by the conventional electronic devices used to generate radio waves and microwaves, requiring the development of new devices and techniques.
Description
Terahertz radiation falls in between infrared radiation and microwave radiation in the electromagnetic spectrum, and it shares some properties with each of these. Terahertz radiation travels in a line of sight and is non-ionizing. Like microwaves, terahertz radiation can penetrate a wide variety of non-conducting materials; clothing, paper, cardboard, wood, masonry, plastic and ceramics. The penetration depth is typically less than that of microwave radiation. Like infrared, terahertz radiation has limited penetration through fog and clouds and cannot penetrate liquid water or metal. Terahertz radiation can penetrate some distance through body tissue like x-rays, but unlike them is non-ionizing, so it is of interest as a replacement for medical X-rays. Due to its longer wavelength, images made using terahertz waves have lower resolution than X-rays and need to be enhanced (see figure at right).
The earth's atmosphere is a strong absorber of terahertz radiation, so the range of terahertz radiation in air is limited to tens of meters, making it unsuitable for long-distance communications. However, at distances of ~10 meters the band may still allow many useful applications in imaging and construction of high bandwidth wireless networking systems, especially indoor systems. In addition, producing and detecting coherent terahertz radiation remains technically challenging, though inexpensive commercial sources now exist in the 0.3–1.0 THz range (the lower part of the spectrum), including gyrotrons, backward wave oscillators, and resonant-tunneling diodes. Due to the small energy of THz photons, current THz devices require low temperature during operation to suppress environmental noise. Tremendous efforts thus have been put into THz research to improve the operation temperature, using different strategies such as optomechanical meta-devices.
Sources
Natural
Terahertz radiation is emitted as part of the black-body radiation from anything with a temperature greater than about 2 kelvin. While this thermal emission is very weak, observations at these frequencies are important for characterizing cold 10–20 K cosmic dust in interstellar clouds in the Milky Way galaxy, and in distant starburst galaxies.
Telescopes operating in this band include the James Clerk Maxwell Telescope, the Caltech Submillimeter Observatory and the Submillimeter Array at the Mauna Kea Observatory in Hawaii, the BLAST balloon borne telescope, the Herschel Space Observatory, the Heinrich Hertz Submillimeter Telescope at the Mount Graham International Observatory in Arizona, and at the recently built Atacama Large Millimeter Array. Due to Earth's atmospheric absorption spectrum, the opacity of the atmosphere to submillimeter radiation restricts these observatories to very high altitude sites, or to space.
Artificial
Viable sources of terahertz radiation include the gyrotron, the backward wave oscillator ("BWO"), the molecular gas far-infrared laser, Schottky-diode multipliers, varactor (varicap) multipliers, the quantum-cascade laser, the free-electron laser, synchrotron light sources, photomixing sources, and single-cycle or pulsed sources used in terahertz time-domain spectroscopy such as photoconductive, surface field, photo-Dember and optical rectification emitters; electronic oscillators based on resonant tunneling diodes have been shown to operate at up to 1.98 THz.
There have also been solid-state sources of millimeter and submillimeter waves for many years. AB Millimeter in Paris, for instance, produces a system that covers the entire range from 8 GHz to 1,000 GHz with solid state sources and detectors. Nowadays, most time-domain work is done via ultrafast lasers.
In mid-2007, scientists at the U.S. Department of Energy's Argonne National Laboratory, along with collaborators in Turkey and Japan, announced the creation of a compact device that could lead to portable, battery-operated terahertz radiation sources. The device uses high-temperature superconducting crystals, grown at the University of Tsukuba in Japan. These crystals comprise stacks of Josephson junctions, which exhibit a property known as the Josephson effect: when external voltage is applied, alternating current flows across the junctions at a frequency proportional to the voltage. This alternating current induces an electromagnetic field. A small voltage (around two millivolts per junction) can induce frequencies in the terahertz range.
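A minimal sketch of the voltage-to-frequency conversion described above, using the standard Josephson relation f = 2eV/h; the 2 mV figure is the one quoted in the text, and the physical constants are the usual reference values:

```python
# Josephson relation: an applied DC voltage V produces an oscillation at f = 2 e V / h.
e = 1.602176634e-19   # elementary charge, C
h = 6.62607015e-34    # Planck constant, J*s

V = 2e-3              # ~2 mV per junction, as quoted above
f = 2 * e * V / h     # ~0.97e12 Hz
print(f"{f / 1e12:.2f} THz")   # close to 1 THz, i.e. in the terahertz range
```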
In 2008, engineers at Harvard University achieved room temperature emission of several hundred nanowatts of coherent terahertz radiation using a semiconductor source. THz radiation was generated by nonlinear mixing of two modes in a mid-infrared quantum cascade laser. Previous sources had required cryogenic cooling, which greatly limited their use in everyday applications.
In 2009, it was discovered that the act of unpeeling adhesive tape generates non-polarized terahertz radiation, with a narrow peak at 2 THz and a broader peak at 18 THz. The mechanism of its creation is tribocharging of the adhesive tape and subsequent discharge; this was hypothesized to involve bremsstrahlung with absorption or energy density focusing during dielectric breakdown of a gas.
In 2013, researchers at Georgia Institute of Technology's Broadband Wireless Networking Laboratory and the Polytechnic University of Catalonia developed a method to create a graphene antenna: an antenna that would be shaped into graphene strips from 10 to 100 nanometers wide and one micrometer long. Such an antenna could be used to emit radio waves in the terahertz frequency range.
Terahertz gap
In engineering, the terahertz gap is a frequency band in the THz region for which practical technologies for generating and detecting the radiation do not exist. It is defined as 0.1 to 10 THz (wavelengths of 3 mm to 30 μm) although the upper boundary is somewhat arbitrary and is considered by some sources as 30 THz (a wavelength of 10 μm). Currently, at frequencies within this range, useful power generation and receiver technologies are inefficient and unfeasible.
Mass production of devices in this range and operation at room temperature (at which energy kT is equal to the energy of a photon with a frequency of 6.2 THz) are mostly impractical. This leaves a gap between mature microwave technologies in the highest frequencies of the radio spectrum and the well-developed optical engineering of infrared detectors in their lowest frequencies. This radiation is mostly used in small-scale, specialized applications such as submillimetre astronomy. Research that attempts to resolve this issue has been conducted since the late 20th century.
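A short check of the parenthetical claim above (an illustration assuming "room temperature" means roughly 300 K; the constants are standard reference values): the frequency at which a photon's energy hf equals the thermal energy kT indeed lands near 6 THz.

```python
# Frequency at which photon energy h*f equals thermal energy k*T.
k = 1.380649e-23      # Boltzmann constant, J/K
h = 6.62607015e-34    # Planck constant, J*s

T = 300.0             # assumed room temperature, K
f = k * T / h         # ~6.25e12 Hz; the 6.2 THz in the text corresponds to ~297 K
print(f"{f / 1e12:.2f} THz")
```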
In 2024, German researchers published a TDLAS experiment at 4.75 THz performed in "infrared quality" with an uncooled pyroelectric receiver, with the THz source being a cw DFB-QC laser operated at 43.3 K at laser currents between 480 mA and 600 mA.
Closure of the terahertz gap
Most vacuum electronic devices that are used for microwave generation can be modified to operate at terahertz frequencies, including the magnetron, gyrotron, synchrotron, and free-electron laser. Similarly, microwave detectors such as the tunnel diode have been re-engineered to detect at terahertz and infrared frequencies as well. However, many of these devices are in prototype form, are not compact, or exist at university or government research labs, without the benefit of cost savings due to mass production.
Research
Molecular biology
Terahertz radiation has frequencies comparable to those of the motions of biomolecular systems in the course of their function (a frequency of 1 THz is equivalent to a timescale of 1 picosecond; in particular, the range from hundreds of GHz up to a few THz is comparable to biomolecular relaxation timescales of a few ps to a few ns). Modulation of biological and neurological function is therefore possible using radiation in the range of hundreds of GHz up to a few THz at relatively low energies (without significant heating or ionisation), achieving either beneficial or harmful effects.
Medical imaging
Unlike X-rays, terahertz radiation is not ionizing radiation and its low photon energies in general do not damage living tissues and DNA. Some frequencies of terahertz radiation can penetrate several millimeters of tissue with low water content (e.g., fatty tissue) and reflect back. Terahertz radiation can also detect differences in water content and density of a tissue. Such methods could allow effective detection of epithelial cancer with an imaging system that is safe, non-invasive, and painless. In response to the demand for COVID-19 screening, terahertz spectroscopy and imaging have been proposed as a rapid screening tool.
The first images generated using terahertz radiation date from the 1960s; however, in 1995 images generated using terahertz time-domain spectroscopy generated a great deal of interest.
Some frequencies of terahertz radiation can be used for 3D imaging of teeth and may be more accurate than conventional X-ray imaging in dentistry.
Security
Terahertz radiation can penetrate fabrics and plastics, so it can be used in surveillance, such as security screening, to uncover concealed weapons on a person, remotely. This is of particular interest because many materials of interest have unique spectral "fingerprints" in the terahertz range. This offers the possibility of combining spectral identification with imaging. In 2002, the European Space Agency (ESA) Star Tiger team, based at the Rutherford Appleton Laboratory (Oxfordshire, UK), produced the first passive terahertz image of a hand. By 2004, ThruVision Ltd, a spin-out from the Council for the Central Laboratory of the Research Councils (CCLRC) Rutherford Appleton Laboratory, had demonstrated the world's first compact THz camera for security screening applications. The prototype system successfully imaged guns and explosives concealed under clothing. Passive detection of terahertz signatures avoids the bodily privacy concerns of other detection methods by being targeted to a very specific range of materials and objects.
In January 2013, the NYPD announced plans to experiment with the new technology to detect concealed weapons, prompting Miami blogger and privacy activist Jonathan Corbett to file a lawsuit against the department in Manhattan federal court that same month, challenging such use: "For thousands of years, humans have used clothing to protect their modesty and have quite reasonably held the expectation of privacy for anything inside of their clothing, since no human is able to see through them." He sought a court order to prohibit using the technology without reasonable suspicion or probable cause. By early 2017, the department said it had no intention of ever using the sensors given to them by the federal government.
Scientific use and imaging
In addition to its current use in submillimetre astronomy, terahertz radiation spectroscopy could provide new sources of information for chemistry and biochemistry.
Recently developed methods of THz time-domain spectroscopy (THz TDS) and THz tomography have been shown to be able to image samples that are opaque in the visible and near-infrared regions of the spectrum. The utility of THz-TDS is limited when the sample is very thin, or has a low absorbance, since it is very difficult to distinguish changes in the THz pulse caused by the sample from those caused by long-term fluctuations in the driving laser source or experiment. However, THz-TDS produces radiation that is both coherent and spectrally broad, so such images can contain far more information than a conventional image formed with a single-frequency source.
Submillimeter waves are used in physics to study materials in high magnetic fields, since at high fields (over about 11 tesla), the electron spin Larmor frequencies are in the submillimeter band. Many high-magnetic field laboratories perform these high-frequency EPR experiments, such as the National High Magnetic Field Laboratory (NHMFL) in Florida.
Terahertz radiation could let art historians see murals hidden beneath coats of plaster or paint in centuries-old buildings, without harming the artwork.
In addition, THz imaging has been performed with lens antennas to capture radio images of objects.
Particle accelerators
New types of particle accelerators that could achieve accelerating gradients of multiple gigaelectronvolts per metre (GeV/m) are of utmost importance for reducing the size and cost of future generations of high-energy colliders, as well as for making compact accelerator technology widely available to smaller laboratories around the world. Gradients on the order of 100 MeV/m have been achieved by conventional techniques and are limited by RF-induced plasma breakdown. Beam-driven dielectric wakefield accelerators (DWAs) typically operate in the terahertz frequency range, which pushes the plasma breakdown threshold for surface electric fields into the multi-GV/m range. The DWA technique can accommodate a significant amount of charge per bunch and gives access to conventional fabrication techniques for the accelerating structures. To date, accelerating gradients of 0.3 GeV/m and decelerating gradients of 1.3 GeV/m have been achieved using a dielectric-lined waveguide with sub-millimetre transverse aperture.
An accelerating gradient larger than 1 GeV/m can potentially be produced by the Cherenkov Smith-Purcell radiative mechanism in a dielectric capillary with a variable inner radius. When an electron bunch propagates through the capillary, its self-field interacts with the dielectric material and produces wakefields that propagate inside the material at the Cherenkov angle. The wakefields are slowed down below the speed of light, as the relative dielectric permittivity of the material is larger than 1. The radiation is then reflected from the capillary's metallic boundary and diffracted back into the vacuum region, producing high accelerating fields on the capillary axis with a distinct frequency signature. In the presence of a periodic boundary, the Smith-Purcell radiation imposes frequency dispersion.
A preliminary study with corrugated capillaries has shown some modification to the spectral content and amplitude of the generated wakefields, but the possibility of using Smith-Purcell effect in DWA is still under consideration.
Communication
The high atmospheric absorption of terahertz waves limits the range of communication using existing transmitters and antennas to tens of meters. However, the huge unallocated bandwidth available in the band (ten times the bandwidth of the millimeter wave band, 100 times that of the SHF microwave band) makes it very attractive for future data transmission and networking use. There are tremendous difficulties in extending the range of THz communication through the atmosphere, but the world telecommunications industry is funding much research into overcoming those limitations. One promising application area is the 6G cellphone and wireless standard, which is expected to supersede the current 5G standard around 2030.
For a given antenna aperture, the gain of directive antennas scales with the square of frequency, while for low power transmitters the power efficiency is independent of bandwidth. So the consumption factor theory of communication links indicates that, contrary to conventional engineering wisdom, for a fixed aperture it is more efficient in bits per second per watt to use higher frequencies in the millimeter wave and terahertz range. Small directive antennas a few centimeters in diameter can produce very narrow 'pencil' beams of THz radiation, and phased arrays of multiple antennas could concentrate virtually all the power output on the receiving antenna, allowing communication at longer distances.
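The scaling described above follows from the standard aperture-gain relation G = 4πA/λ². The sketch below is illustrative only; the 2 cm aperture diameter and 60% aperture efficiency are assumed values, not figures from the text.

```python
import math

def aperture_gain_dBi(diameter_m: float, freq_hz: float, efficiency: float = 0.6) -> float:
    """Gain of a circular-aperture antenna: G = efficiency * 4*pi*A / lambda^2."""
    wavelength = 3e8 / freq_hz
    area = math.pi * (diameter_m / 2) ** 2
    return 10 * math.log10(efficiency * 4 * math.pi * area / wavelength ** 2)

for f in (3e9, 30e9, 300e9):  # microwave, millimetre wave, sub-terahertz
    print(f"{f/1e9:>5.0f} GHz: {aperture_gain_dBi(0.02, f):5.1f} dBi")
# each tenfold increase in frequency adds about 20 dB of gain for the same aperture
```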
In May 2012, a team of researchers from the Tokyo Institute of Technology published in Electronics Letters that it had set a new record for wireless data transmission by using T-rays and proposed they be used as bandwidth for data transmission in the future. The team's proof of concept device used a resonant tunneling diode (RTD) negative resistance oscillator to produce waves in the terahertz band. With this RTD, the researchers sent a signal at 542 GHz, resulting in a data transfer rate of 3 Gigabits per second. It doubled the record for data transmission rate set the previous November. The study suggested that Wi-Fi using the system would be limited to approximately , but could allow data transmission at up to 100 Gbit/s. In 2011, Japanese electronic parts maker Rohm and a research team at Osaka University produced a chip capable of transmitting 1.5 Gbit/s using terahertz radiation.
Potential uses exist in high-altitude telecommunications, above altitudes where water vapor causes signal absorption: aircraft to satellite, or satellite to satellite.
Amateur radio
A number of administrations permit amateur radio experimentation within the 275–3,000 GHz range or at even higher frequencies on a national basis, under license conditions that are usually based on RR5.565 of the ITU Radio Regulations. Amateur radio operators utilizing submillimeter frequencies often attempt to set two-way communication distance records. In the United States, WA1ZMS and W4WWQ set a record of on 403 GHz using CW (Morse code) on 21 December 2004. In Australia, at 30 THz a distance of was achieved by stations VK3CV and VK3LN on 8 November 2020.
Manufacturing
Many possible uses of terahertz sensing and imaging are proposed in manufacturing, quality control, and process monitoring. These generally exploit the fact that plastics and cardboard are transparent to terahertz radiation, making it possible to inspect packaged goods. The first imaging system based on optoelectronic terahertz time-domain spectroscopy was developed in 1995 by researchers from AT&T Bell Laboratories and was used for producing a transmission image of a packaged electronic chip. This system used pulsed laser beams with durations in the range of picoseconds. Since then, commonly used commercial and research terahertz imaging systems have used pulsed lasers to generate terahertz images. The image can be developed based on either the attenuation or the phase delay of the transmitted terahertz pulse.
Since the beam is scattered more at the edges, and since different materials have different absorption coefficients, images based on attenuation indicate edges and different materials inside objects. This approach is similar to X-ray transmission imaging, where images are developed based on attenuation of the transmitted beam.
In the second approach, terahertz images are developed based on the time delay of the received pulse. In this approach, thicker parts of objects are well recognized, as the thicker parts cause a longer time delay of the pulse. The energy of the laser spot follows a Gaussian distribution. The geometry and behavior of a Gaussian beam in the Fraunhofer region imply that electromagnetic beams diverge more as their frequency decreases, and thus the resolution decreases. This implies that terahertz imaging systems have higher resolution than scanning acoustic microscopy (SAM) but lower resolution than X-ray imaging systems. X-ray and terahertz images of an electronic chip are shown in the figure on the right. Clearly, the resolution of the X-ray image is higher than that of the terahertz image, but X-rays are ionizing and can have harmful effects on certain objects such as semiconductors and living tissues.
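The frequency-dependent divergence mentioned here can be estimated from the Gaussian-beam relation θ ≈ λ/(πw₀). The sketch below is illustrative only; the 1 mm beam waist is an assumed value chosen for comparison.

```python
import math

C = 3e8  # speed of light, m/s

def divergence_mrad(freq_hz: float, waist_m: float = 1e-3) -> float:
    """Far-field half-angle divergence of a Gaussian beam: theta = lambda / (pi * w0)."""
    wavelength = C / freq_hz
    return (wavelength / (math.pi * waist_m)) * 1e3  # milliradians

for f in (0.3e12, 3e12, 300e12):  # 0.3 THz, 3 THz, and ~1 um near-infrared light
    print(f"{f/1e12:>6.1f} THz: {divergence_mrad(f):7.2f} mrad")
# lower frequencies diverge far more, so the achievable spot size (resolution) is coarser
```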
To overcome the low resolution of terahertz systems, near-field terahertz imaging systems are under development. In near-field imaging, the detector needs to be located very close to the surface of the sample plane, and thus imaging of thick packaged objects may not be feasible. In another attempt to increase the resolution, laser beams with frequencies higher than terahertz are used to excite the p-n junctions in semiconductor objects; the excited junctions generate terahertz radiation as long as their contacts are unbroken, and in this way damaged devices can be detected. In this approach, since the absorption increases exponentially with frequency, inspection of thick packaged semiconductors may again not be feasible. Consequently, a trade-off between the achievable resolution and the thickness of penetration of the beam into the packaging material should be considered.
THz gap research
Ongoing investigation has resulted in improved emitters (sources) and detectors, and research in this area has intensified. However, drawbacks remain that include the substantial size of emitters, incompatible frequency ranges, and undesirable operating temperatures, as well as component, device, and detector requirements that are somewhere between solid state electronics and photonic technologies.
Free-electron lasers can generate a wide range of stimulated emission of electromagnetic radiation from microwaves, through terahertz radiation to X-ray. However, they are bulky, expensive and not suitable for applications that require critical timing (such as wireless communications). Other sources of terahertz radiation which are actively being researched include solid state oscillators (through frequency multiplication), backward wave oscillators (BWOs), quantum cascade lasers, and gyrotrons.
Safety
The terahertz region is between the radio frequency region and the laser optical region. Both the IEEE C95.1–2005 RF safety standard and the ANSI Z136.1–2007 Laser safety standard have limits extending into the terahertz region, but both safety limits are based on extrapolation. It is expected that effects on biological tissues are thermal in nature and, therefore, predictable by conventional thermal models. Research is underway to collect data to populate this region of the spectrum and validate safety limits.
A theoretical study published in 2010 and conducted by Alexandrov et al. at the Center for Nonlinear Studies at Los Alamos National Laboratory in New Mexico created mathematical models predicting how terahertz radiation would interact with double-stranded DNA, showing that, even though the forces involved seem to be tiny, nonlinear resonances (although much less likely to form than less-powerful common resonances) could allow terahertz waves to "unzip double-stranded DNA, creating bubbles in the double strand that could significantly interfere with processes such as gene expression and DNA replication". Experimental verification of this simulation was not done. Swanson's 2010 theoretical treatment of the Alexandrov study concludes that the DNA bubbles do not occur under reasonable physical assumptions or if the effects of temperature are taken into account. A bibliographical study published in 2003 reported that T-ray intensity drops to less than 1% in the first 500 μm of skin but stressed that "there is currently very little information about the optical properties of human tissue at terahertz frequencies".
See also
Far-infrared laser
Full body scanner
Heterojunction bipolar transistor
High-electron-mobility transistor (HEMT)
Picarin
Terahertz time-domain spectroscopy
Microwave analog signal processing
References
Further reading
External links
Electromagnetic spectrum
Terahertz technology | Terahertz radiation | [
"Physics"
] | 5,117 | [
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Terahertz technology"
] |
362,598 | https://en.wikipedia.org/wiki/Indium%20tin%20oxide | Indium tin oxide (ITO) is a ternary composition of indium, tin and oxygen in varying proportions. Depending on the oxygen content, it can be described as either a ceramic or an alloy. Indium tin oxide is typically encountered as an oxygen-saturated composition with a formulation of 74% In, 8% Sn, and 18% O by weight. Oxygen-saturated compositions are so typical that unsaturated compositions are termed oxygen-deficient ITO. It is transparent and colorless in thin layers, while in bulk form it is yellowish to gray. In the infrared region of the spectrum it acts as a metal-like mirror.
Indium tin oxide is one of the most widely used transparent conducting oxides, not just for its electrical conductivity and optical transparency, but also for the ease with which it can be deposited as a thin film, as well as its chemical resistance to moisture. As with all transparent conducting films, a compromise must be made between conductivity and transparency, since increasing the thickness and increasing the concentration of charge carriers increases the film's conductivity, but decreases its transparency.
Thin films of indium tin oxide are most commonly deposited on surfaces by physical vapor deposition. Often used is electron beam evaporation, or a range of sputter deposition techniques.
Material and properties
ITO is a mixed oxide of indium and tin with a melting point in the range 1526–1926 °C (1800–2200 K, 2800–3500 °F), depending on composition. The most commonly used material is an oxide with a composition of ca. In₄Sn. The material is an n-type semiconductor with a large bandgap of around 4 eV. ITO is both transparent to visible light and relatively conductive. It has a low electrical resistivity of ~10⁻⁴ Ω·cm, and a thin film can have an optical transmittance of greater than 80%. These properties are utilized to great advantage in touch-screen applications such as mobile phones.
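The bulk resistivity quoted above can be connected to device-level figures through the sheet resistance R_s = ρ/t. The sketch below is illustrative only; the film thicknesses are assumed example values, not figures from the text.

```python
RESISTIVITY_OHM_CM = 1e-4  # bulk resistivity of ITO quoted above, in ohm*cm

def sheet_resistance_ohm_per_sq(thickness_nm: float) -> float:
    """Sheet resistance R_s = resistivity / thickness (thickness converted to cm)."""
    return RESISTIVITY_OHM_CM / (thickness_nm * 1e-7)

for t_nm in (50, 100, 300):
    print(f"{t_nm:>4} nm film: ~{sheet_resistance_ohm_per_sq(t_nm):5.1f} ohm/sq")
# 100 nm -> ~10 ohm/sq; making the film thicker lowers R_s but absorbs more light,
# which is the conductivity-transparency trade-off described above
```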
Common uses
Indium tin oxide (ITO) is an optoelectronic material that is applied widely in both research and industry. ITO can be used for many applications, such as flat-panel displays, smart windows, polymer-based electronics, thin film photovoltaics, glass doors of supermarket freezers, and architectural windows. Moreover, ITO thin films for glass substrates can be helpful for glass windows to conserve energy.
ITO green tapes are used to produce electroluminescent, functional, and fully flexible lamps. ITO thin films are also used as anti-reflective coatings and in liquid crystal displays (LCDs) and electroluminescent devices, where the thin films serve as conducting, transparent electrodes.
ITO is often used to make transparent conductive coating for displays such as liquid crystal displays, OLED displays, plasma displays, touch panels, and electronic ink applications. Thin films of ITO are also used in organic light-emitting diodes, solar cells, antistatic coatings and EMI shieldings. In organic light-emitting diodes, ITO is used as the anode (hole injection layer).
ITO films deposited on windshields are used for defrosting aircraft windshields. The heat is generated by applying a voltage across the film. ITO is also used to reflect electromagnetic radiation. The F-22 Raptor's canopy has an ITO coating that reflects radar waves, enhancing its stealth capabilities and giving it a distinctive gold tint.
ITO is also used for various optical coatings, most notably infrared-reflecting coatings (hot mirrors) for automotive, and sodium vapor lamp glasses. Other uses include gas sensors, antireflection coatings, electrowetting on dielectrics, and Bragg reflectors for VCSEL lasers. ITO is also used as the IR reflector for low-e window panes. ITO was also used as a sensor coating in the later Kodak DCS cameras, starting with the Kodak DCS 520, as a means of increasing blue channel response.
ITO thin film strain gauges can operate at temperatures up to 1400 °C and can be used in harsh environments, such as gas turbines, jet engines, and rocket engines.
Silver nanoparticle–ITO hybrid
ITO has been popularly used as a high-quality flexible substrate to produce flexible electronics. However, this substrate's flexibility decreases as its conductivity improves. Previous research has indicated that the mechanical properties of ITO can be improved by increasing the degree of crystallinity. Doping with silver (Ag) can improve this property, but results in a loss of transparency. An improved method that embeds discrete Ag nanoparticles (AgNPs), rather than doping homogeneously, to create a hybrid ITO has proven effective in compensating for the decrease in transparency. The hybrid ITO consists of domains of one orientation grown on the AgNPs and a matrix of the other orientation. The domains are stronger than the matrix and act as barriers to crack propagation, significantly increasing the flexibility. The change in resistivity with increased bending is significantly smaller in the hybrid ITO than in homogeneous ITO.
Alternative synthesis methods
ITO is typically deposited through expensive and energy-intensive processes based on physical vapor deposition (PVD). Such processes include sputtering, which results in the formation of brittle layers.
Because of the cost and energy of physical vapor deposition, with the required vacuum processing, alternative methods of preparing ITO are being investigated.
Tape casting process
An alternative, particle-based process is known as tape casting. Because it is a particle-based technique, the ITO nanoparticles are first dispersed and then placed in organic solvents for stability. Once the tape casting process has been carried out, characterization of the green ITO tapes showed optimal transmission of up to about 75%, with a lower bound on the electrical resistivity of 2 Ω·cm.
Laser sintering
Using ITO nanoparticles imposes a limit on the choice of substrate, owing to the high temperature required for sintering. As an alternative starting material, In-Sn alloy nanoparticles allow for a more diverse range of possible substrates. A continuous conductive In-Sn alloy film is formed first and then oxidized to make it transparent. This two-step process involves thermal annealing, which requires a controlled atmosphere and increases processing time. Because metal nanoparticles can easily be converted into a conductive metal film under laser treatment, laser sintering is applied to achieve a homogeneous film morphology. Laser sintering is also simple and less costly, since it can be performed in air.
Ambient gas conditions
Another approach uses conventional deposition methods but varies the ambient gas conditions to improve the optoelectronic properties, since oxygen plays a major role in the properties of ITO.
Chemical shaving for very thin films
Numerical modeling of plasmonic metallic nanostructures has shown great potential as a method of light management in thin-film nanodisc-patterned hydrogenated amorphous silicon (a-Si:H) solar photovoltaic (PV) cells. A problem that arises for plasmonic-enhanced PV devices is the requirement for 'ultra-thin' transparent conducting oxides (TCOs) with high transmittance and low enough resistivity to be used as device top contacts/electrodes. Unfortunately, most work on TCOs concerns relatively thick layers, and the few reported cases of thin TCOs showed a marked decrease in conductivity. To overcome this, it is possible to first grow a thick layer and then chemically shave it down to obtain a thin layer that is intact and highly conductive.
Constraints and trade-offs
A major concern with ITO is its cost. ITO costs several times more than aluminium zinc oxide (AZO). AZO is a common choice of transparent conducting oxide (TCO) because of its lower cost and relatively good optical transmission performance in the solar spectrum. However, ITO is superior to AZO in many other important performance categories, including chemical resistance to moisture. ITO is not affected by moisture, and is stable as part of a copper indium gallium selenide solar cell for 25–30 years on a rooftop.
While the sputtering target or evaporative material that is used to deposit the ITO is significantly more costly than AZO, the amount of material placed on each cell is quite small. Therefore, the cost penalty per cell is quite small, too.
Benefits
The primary advantage of ITO compared to AZO as a transparent conductor for LCDs is that ITO can be precisely etched into fine patterns. AZO cannot be etched as precisely: It is so sensitive to acid that it tends to get over-etched by an acid treatment.
Another benefit of ITO compared to AZO is that if moisture does penetrate, ITO will degrade less than AZO.
The role of ITO glass as a cell culture substrate can be extended easily, which opens up new opportunities for studies of growing cells involving correlative light and electron microscopy.
Research examples
ITO can be used in nanotechnology to provide a path to a new generation of solar cells. Solar cells made with these devices have the potential to provide low-cost, ultra-lightweight, and flexible cells with a wide range of applications. Because of the nanoscale dimensions of the nanorods, quantum-size effects influence their optical properties. By tailoring the size of the rods, they can be made to absorb light within a specific narrow band of colors. By stacking several cells with different sized rods, a broad range of wavelengths across the solar spectrum can be collected and converted to energy. Moreover, the nanoscale volume of the rods leads to a significant reduction in the amount of semiconductor material needed compared to a conventional cell. Recent studies demonstrated that nanostructured ITO can behave as a miniaturized photocapacitor, combining in a unique material the absorption and storage of light energy.
Health and safety
Inhalation of indium tin oxide may cause mild irritation to the respiratory tract and should be avoided. If exposure is long-term, symptoms may become chronic and result in benign pneumoconiosis. Studies with animals indicate that indium tin oxide is toxic when ingested, with negative effects on the kidneys, lungs, and heart.
During mining, production, and reclamation, workers are potentially exposed to indium, especially in countries such as China, Japan, the Republic of Korea, and Canada, and face the possibility of pulmonary alveolar proteinosis, pulmonary fibrosis, emphysema, and granulomas. Workers in the US, China, and Japan have been diagnosed with cholesterol clefts under indium exposure. Silver nanoparticles present in improved ITOs have been found in vitro to penetrate through both intact and breached skin into the epidermal layer. Un-sintered ITO (uITO) is suspected of inducing T-cell-mediated sensitization: in an intradermal exposure study, a concentration of 5% uITO resulted in lymphocyte proliferation in mice, with cell numbers increasing over a 10-day period.
A new occupational problem called indium lung disease developed through contact with indium-containing dusts. The first patient was a worker involved in wet surface grinding of ITO who suffered from interstitial pneumonia; his lungs were filled with ITO-related particles. These particles can also induce cytokine production and macrophage dysfunction. Sintered ITO (sITO) particles alone can cause phagocytic dysfunction but not cytokine release in macrophage cells; however, they can trigger a pro-inflammatory cytokine response in pulmonary epithelial cells. Unlike uITO, they can also carry endotoxin to workers handling the wet process if in contact with endotoxin-containing liquids. This can be attributed to the fact that sITO particles have a larger diameter and smaller surface area, and this change after the sintering process can cause cytotoxicity.
Because of these issues, alternatives to ITO have been found.
Recycling
The etching water used in the process of sintering ITO can only be used a limited number of times before it has to be disposed of. After degradation, the waste water still contains metals such as In and Cu, which are valuable as a secondary resource, as well as Mo, Cu, Al, Sn, and In, which can pose a health hazard to human beings.
Alternative materials
Because of the high cost and limited supply of indium, the fragility and lack of flexibility of ITO layers, and the costly layer deposition requiring vacuum, alternative materials are being investigated.
Doped compounds
Promising alternatives include zinc oxide doped with various elements.
Several transition metal dopants in indium oxide, particularly molybdenum, give much higher electron mobility and conductivity than obtained with tin. Doped binary compounds such as aluminum-doped zinc oxide (AZO) and indium-doped cadmium oxide have been proposed as alternative materials. Other inorganic alternatives include aluminum, gallium or indium-doped zinc oxide (AZO, GZO or IZO).
Carbon nanotubes
Carbon nanotube conductive coatings are a prospective replacement.
Graphene
As another carbon-based alternative, films of graphene are flexible and have been shown to allow 90% transparency with a lower electrical resistance than standard ITO. Thin metal films are also seen as a potential replacement material. A hybrid material alternative currently being tested is an electrode made of silver nanowires and covered with graphene. The advantages to such materials include maintaining transparency while simultaneously being electrically conductive and flexible.
Conductive polymers
Inherently conductive polymers (ICPs) are also being developed for some ITO applications. Typically the conductivity is lower for conducting polymers, such as polyaniline and PEDOT:PSS, than for inorganic materials, but they are more flexible, less expensive and more environmentally friendly in processing and manufacture.
Amorphous indium–zinc oxide
In order to reduce indium content, decrease processing difficulty, and improve electrical homogeneity, amorphous transparent conducting oxides have been developed. One such material, amorphous indium-zinc oxide, maintains short-range order even though crystallization is disrupted by the difference in the ratio of oxygen to metal atoms between In₂O₃ and ZnO. Indium-zinc oxide has some properties comparable to ITO. The amorphous structure remains stable even up to 500 °C, which allows for important processing steps common in organic solar cells. The improvement in homogeneity significantly enhances the usability of the material in organic solar cells, since areas of poor electrode performance render a percentage of the cell's area unusable.
See also
Transparent conducting film
References
External links
Spectroscopic studies of conducting metal oxides, with many slides about ITO
Articles containing unverified chemical infoboxes
Oxides
Indium compounds
Tin
Display technology
Transparent electrodes | Indium tin oxide | [
"Chemistry",
"Engineering"
] | 3,094 | [
"Electronic engineering",
"Oxides",
"Display technology",
"Salts"
] |
362,722 | https://en.wikipedia.org/wiki/Heat%20death%20of%20the%20universe | The heat death of the universe (also known as the Big Chill or Big Freeze) is a hypothesis on the ultimate fate of the universe, which suggests the universe will evolve to a state of no thermodynamic free energy, and will therefore be unable to sustain processes that increase entropy. Heat death does not imply any particular absolute temperature; it only requires that temperature differences or other processes may no longer be exploited to perform work. In the language of physics, this is when the universe reaches thermodynamic equilibrium.
If the curvature of the universe is hyperbolic or flat, or if dark energy is a positive cosmological constant, the universe will continue expanding forever, and a heat death is expected to occur, with the universe cooling to approach equilibrium at a very low temperature after a long time period.
The hypothesis of heat death stems from the ideas of Lord Kelvin who, in the 1850s, took the theory of heat as mechanical energy loss in nature (as embodied in the first two laws of thermodynamics) and extrapolated it to larger processes on a universal scale. This also allowed Kelvin to formulate the heat death paradox, which disproves an infinitely old universe.
Origins of the idea
The idea of heat death stems from the second law of thermodynamics, of which one version states that entropy tends to increase in an isolated system. From this, the hypothesis implies that if the universe lasts for a sufficient time, it will asymptotically approach a state where all energy is evenly distributed. In other words, according to this hypothesis, there is a tendency in nature towards the dissipation (energy transformation) of mechanical energy (motion) into thermal energy; hence, by extrapolation, there exists the view that, in time, the mechanical movement of the universe will run down as work is converted to heat because of the second law.
The conjecture that all bodies in the universe cool off, eventually becoming too cold to support life, seems to have been first put forward by the French astronomer Jean Sylvain Bailly in 1777 in his writings on the history of astronomy and in the ensuing correspondence with Voltaire. In Bailly's view, all planets have an internal heat and are now at some particular stage of cooling. Jupiter, for instance, is still too hot for life to arise there for thousands of years, while the Moon is already too cold. The final state, in this view, is described as one of "equilibrium" in which all motion ceases.
The idea of heat death as a consequence of the laws of thermodynamics, however, was first proposed in loose terms beginning in 1851 by Lord Kelvin (William Thomson), who theorized further on the mechanical energy loss views of Sadi Carnot (1824), James Joule (1843) and Rudolf Clausius (1850). Thomson's views were then elaborated over the next decade by Hermann von Helmholtz and William Rankine.
History
The idea of the heat death of the universe derives from discussion of the application of the first two laws of thermodynamics to universal processes. Specifically, in 1851, Lord Kelvin outlined the view, as based on recent experiments on the dynamical theory of heat: "heat is not a substance, but a dynamical form of mechanical effect, we perceive that there must be an equivalence between mechanical work and heat, as between cause and effect."
In 1852, Thomson published On a Universal Tendency in Nature to the Dissipation of Mechanical Energy, in which he outlined the rudiments of the second law of thermodynamics summarized by the view that mechanical motion and the energy used to create that motion will naturally tend to dissipate or run down. The ideas in this paper, in relation to their application to the age of the Sun and the dynamics of the universal operation, attracted the likes of William Rankine and Hermann von Helmholtz. The three of them were said to have exchanged ideas on this subject. In 1862, Thomson published "On the age of the Sun's heat", an article in which he reiterated his fundamental beliefs in the indestructibility of energy (the first law) and the universal dissipation of energy (the second law), leading to diffusion of heat, cessation of useful motion (work), and exhaustion of potential energy, "lost irrecoverably" through the material universe, while clarifying his view of the consequences for the universe as a whole. Thomson wrote:
The clock example shows how Kelvin was unsure whether the universe would eventually achieve thermodynamic equilibrium. Thomson later speculated that restoring the dissipated energy as "vis viva" and then as usable work – and therefore reversing the clock's direction, resulting in a "rejuvenating universe" – would require "a creative act or an act possessing similar power". Starting from this publication, Kelvin also introduced the heat death paradox (Kelvin's paradox), which challenges the classical concept of an infinitely old universe: since the universe has not achieved thermodynamic equilibrium, further work and entropy production are still possible. The existence of stars and temperature differences can be considered empirical proof that the universe is not infinitely old.
In the years to follow both Thomson's 1852 and the 1862 papers, Helmholtz and Rankine both credited Thomson with the idea, along with his paradox, but read further into his papers by publishing views stating that Thomson argued that the universe will end in a "heat death" (Helmholtz), which will be the "end of all physical phenomena" (Rankine).
Current status
Proposals about the final state of the universe depend on the assumptions made about its ultimate fate, and these assumptions have varied considerably over the late 20th century and early 21st century. In a hypothesized "open" or "flat" universe that continues expanding indefinitely, either a heat death or a Big Rip is expected to eventually occur. If the cosmological constant is zero, the universe will approach absolute zero temperature over a very long timescale. However, if the cosmological constant is positive, the temperature will asymptote to a non-zero positive value, and the universe will approach a state of maximum entropy in which no further work is possible.
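For scale, the non-zero asymptotic temperature mentioned here would be the Gibbons-Hawking temperature of the de Sitter horizon, T = ħH/(2πk_B). The sketch below is illustrative only; the Hubble constant of roughly 68 km/s/Mpc is an assumed input value.

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
K_B = 1.380649e-23          # Boltzmann constant, J/K
H0 = 68 * 1000 / 3.0857e22  # assumed Hubble constant, converted from km/s/Mpc to 1/s

# Gibbons-Hawking temperature of the cosmological horizon for a pure de Sitter phase
T_horizon = HBAR * H0 / (2 * math.pi * K_B)
print(f"~{T_horizon:.1e} K")  # on the order of 1e-30 K
```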
Time frame for heat death
The theory suggests that from the "Big Bang" through the present day, matter and dark matter in the universe are thought to have been concentrated in stars, galaxies, and galaxy clusters, and are presumed to continue to do so well into the future. Therefore, the universe is not in thermodynamic equilibrium, and objects can do physical work.:§VID The decay time for a supermassive black hole of roughly 1 galaxy mass (10¹¹ solar masses) because of Hawking radiation is on the order of 10¹⁰⁰ years, so entropy can be produced until at least that time. Some large black holes in the universe are predicted to continue to grow up to perhaps 10¹⁴ solar masses during the collapse of superclusters of galaxies. Even these would evaporate over a timescale of up to 10¹⁰⁶ years. After that time, the universe enters the so-called Dark Era and is expected to consist chiefly of a dilute gas of photons and leptons.:§VIA With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with extremely low energy levels and extremely long timescales. Speculatively, it is possible that the universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.:§VE It is also possible that entropy production will cease and the universe will reach heat death.:§VID
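The 10¹⁰⁰-year figure can be reproduced from the standard Hawking evaporation time for a non-rotating black hole, t ≈ 5120πG²M³/(ħc⁴). The sketch below is illustrative only and uses rounded physical constants.

```python
import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.998e8             # speed of light, m/s
M_SUN = 1.989e30        # solar mass, kg
YEAR = 3.156e7          # seconds per year

def hawking_evaporation_time_years(mass_kg: float) -> float:
    """Evaporation time of a non-rotating black hole: t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (HBAR * C**4)
    return t_seconds / YEAR

t = hawking_evaporation_time_years(1e11 * M_SUN)  # a galaxy-mass black hole
print(f"~1e{math.log10(t):.0f} years")            # on the order of 1e100 years
```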
It is suggested that, over vast periods of time, a spontaneous entropy decrease would eventually occur via the Poincaré recurrence theorem, thermal fluctuations, and the fluctuation theorem. Through this mechanism, another universe could possibly be created by random quantum fluctuations or quantum tunnelling in roughly years.
Opposing views
Max Planck wrote that the phrase "entropy of the universe" has no meaning because it admits of no accurate definition. In 2008, Walter Grandy wrote: "It is rather presumptuous to speak of the entropy of a universe about which we still understand so little, and we wonder how one might define thermodynamic entropy for a universe and its major constituents that have never been in equilibrium in their entire existence." According to László Tisza, "If an isolated system is not in equilibrium, we cannot associate an entropy with it." Hans Adolf Buchdahl writes of "the entirely unjustifiable assumption that the universe can be treated as a closed thermodynamic system". According to Giovanni Gallavotti, "there is no universally accepted notion of entropy for systems out of equilibrium, even when in a stationary state". Discussing the question of entropy for non-equilibrium states in general, Elliott H. Lieb and Jakob Yngvason express their opinion as follows: "Despite the fact that most physicists believe in such a nonequilibrium entropy, it has so far proved impossible to define it in a clearly satisfactory way." In Peter Landsberg's opinion: "The third misconception is that thermodynamics, and in particular, the concept of entropy, can without further enquiry be applied to the whole universe. ... These questions have a certain fascination, but the answers are speculations."
A 2010 analysis of entropy states, "The entropy of a general gravitational field is still not known", and "gravitational entropy is difficult to quantify". The analysis considers several possible assumptions that would be needed for estimates and suggests that the observable universe has more entropy than previously thought. This is because the analysis concludes that supermassive black holes are the largest contributor. Lee Smolin goes further: "It has long been known that gravity is important for keeping the universe out of thermal equilibrium. Gravitationally bound systems have negative specific heat—that is, the velocities of their components increase when energy is removed. ... Such a system does not evolve toward a homogeneous equilibrium state. Instead it becomes increasingly structured and heterogeneous as it fragments into subsystems." This point of view is also supported by the recent experimental discovery of a stable non-equilibrium steady state in a relatively simple closed system. It should be expected that an isolated system fragmented into subsystems does not necessarily come to thermodynamic equilibrium and may remain in a non-equilibrium steady state. Entropy will be transmitted from one subsystem to another, but its production will be zero, which does not contradict the second law of thermodynamics.
In popular culture
In Isaac Asimov's 1956 short story The Last Question, humans repeatedly wonder how the heat death of the universe can be avoided.
In the 1981 Doctor Who story "Logopolis", the Doctor realizes that the Logopolitans have created vents in the universe to expel heat build-up into other universes—"Charged Vacuum Emboitments" or "CVE"—to delay the demise of the universe. The Doctor unwittingly travelled through such a vent in "Full Circle".
In the 1995 computer game I Have No Mouth, and I Must Scream, based on Harlan Ellison's short story of the same name, it is stated that AM, the malevolent supercomputer, will survive the heat death of the universe and continue torturing its immortal victims to eternity.
In the 2011 anime series Puella Magi Madoka Magica, the antagonist Kyubey reveals he is a member of an alien race who has been creating magical girls for millennia in order to harvest their energy to combat entropy and stave off the heat death of the universe.
In the last act of Final Fantasy XIV: Endwalker, the player encounters an alien race known as the Ea who have lost all hope in the future and any desire to live further, all because they have learned of the eventual heat death of the universe and see everything else as pointless due to its probable inevitability.
The overarching plot of the Xeelee Sequence concerns the Photino Birds' efforts to accelerate the heat death of the universe by accelerating the rate at which stars become white dwarves.
The 2019 hit indie video game Outer Wilds has several themes grappling with the idea of the heat death of the universe, and the theory that the universe is a cycle of big bangs once the previous one has experienced a heat death.
In "Singularity Immemorial", the seventh main story event of the mobile game Girls' Frontline: Neural Cloud, the plot is about a virtual sector made to simulate space exploration and the threat of the heat death of the universe. The simulation uses an imitation of Neural Cloud's virus entities known as the Entropics as a stand in for the effects of a heat death.
See also
References
Ultimate fate of the universe
Thermodynamic entropy
1851 in science | Heat death of the universe | [
"Physics"
] | 2,685 | [
"Statistical mechanics",
"Entropy",
"Physical quantities",
"Thermodynamic entropy"
] |
362,728 | https://en.wikipedia.org/wiki/Negative%20temperature | Certain systems can achieve negative thermodynamic temperature; that is, their temperature can be expressed as a negative quantity on the Kelvin or Rankine scales. This phenomenon was first discovered at the University of Alberta. This should be distinguished from temperatures expressed as negative numbers on non-thermodynamic Celsius or Fahrenheit scales, which are nevertheless higher than absolute zero. A system with a truly negative temperature on the Kelvin scale is hotter than any system with a positive temperature. If a negative-temperature system and a positive-temperature system come in contact, heat will flow from the negative- to the positive-temperature system. A standard example of such a system is population inversion in laser physics.
Thermodynamic systems with unbounded phase space cannot achieve negative temperatures: adding heat always increases their entropy. The possibility of a decrease in entropy as energy increases requires the system to "saturate" in entropy. This is only possible if the number of high energy states is limited. For a system of ordinary (quantum or classical) particles such as atoms or dust, the number of high energy states is unlimited (particle momenta can in principle be increased indefinitely). Some systems, however (see the examples below), have a maximum amount of energy that they can hold, and as they approach that maximum energy their entropy actually begins to decrease.
History
The possibility of negative temperatures was first predicted by Lars Onsager in 1949.
Onsager was investigating 2D vortices confined within a finite area, and realized that since their positions are not independent degrees of freedom from their momenta, the resulting phase space must also be bounded by the finite area. Bounded phase space is the essential property that allows for negative temperatures, and can occur in both classical and quantum systems. As shown by Onsager, a system with bounded phase space necessarily has a peak in the entropy as energy is increased. For energies exceeding the value where the peak occurs, the entropy decreases as energy increases, and high-energy states necessarily have negative Boltzmann temperature.
The limited range of states accessible to a system with negative temperature means that negative temperature is associated with emergent ordering of the system at high energies. For example in Onsager's point-vortex analysis negative temperature is associated with the emergence of large-scale clusters of vortices. This spontaneous ordering in equilibrium statistical mechanics goes against common physical intuition that increased energy leads to increased disorder.
It seems negative temperatures were first found experimentally in 1951, when Purcell and Pound observed evidence for them in the nuclear spins of a lithium fluoride crystal placed in a magnetic field, and then removed from this field. They wrote:
A system in a negative temperature state is not cold, but very hot, giving up energy to any system at positive temperature put into contact with it. It decays to a normal state through infinite temperature.
Definition of temperature
The absolute temperature (Kelvin) scale can be loosely interpreted as the average kinetic energy of the system's particles. The existence of negative temperature, let alone negative temperature representing "hotter" systems than positive temperature, would seem paradoxical in this interpretation. The paradox is resolved by considering the more rigorous definition of thermodynamic temperature in terms of Boltzmann's entropy formula. This reveals the tradeoff between internal energy and entropy contained in the system, with "coldness", the reciprocal of temperature, being the more fundamental quantity. Systems with a positive temperature will increase in entropy as one adds energy to the system, while systems with a negative temperature will decrease in entropy as one adds energy to the system.
The definition of thermodynamic temperature T is a function of the change in the system's entropy S under reversible heat transfer q_rev: T = dq_rev/dS.
Entropy being a state function, the integral of dS over any cyclical process is zero. For a system in which the entropy is purely a function of the system's energy E, the temperature can be defined as: 1/T = dS/dE.
Equivalently, thermodynamic beta, or "coldness", is defined as β = 1/(k_B T) = (1/k_B) dS/dE, where k_B is the Boltzmann constant.
Note that in classical thermodynamics, is defined in terms of temperature. This is reversed here, is the statistical entropy, a function of the possible microstates of the system, and temperature conveys information on the distribution of energy levels among the possible microstates. For systems with many degrees of freedom, the statistical and thermodynamic definitions of entropy are generally consistent with each other.
Some theorists have proposed using an alternative definition of entropy as a way to resolve perceived inconsistencies between statistical and thermodynamic entropy for small systems and systems where the number of states decreases with energy, and the temperatures derived from these entropies are different. It has been argued that the new definition would create other inconsistencies; its proponents have argued that this is only apparent.
Heat and molecular energy distribution
Negative temperatures can only exist in a system where there are a limited number of energy states (see below). As the temperature is increased on such a system, particles move into higher and higher energy states, so that the number of particles in the lower energy states and in the higher energy states approaches equality. (This is a consequence of the definition of temperature in statistical mechanics for systems with limited states.) By injecting energy into these systems in the right fashion, it is possible to create a system in which there are more particles in the higher energy states than in the lower ones. The system can then be characterized as having a negative temperature.
A substance with a negative temperature is not colder than absolute zero, but rather it is hotter than infinite temperature. As Kittel and Kroemer (p. 462) put it,
The corresponding inverse temperature scale, for the quantity β = 1/(k_B T) (where k_B is the Boltzmann constant), runs continuously from low energy to high as +∞, …, 0, …, −∞. Because it avoids the abrupt jump from +∞ to −∞, β is considered more natural than T, although a system can have multiple negative temperature regions and thus have −∞ to +∞ discontinuities.
In many familiar physical systems, temperature is associated to the kinetic energy of atoms. Since there is no upper bound on the momentum of an atom, there is no upper bound to the number of energy states available when more energy is added, and therefore no way to get to a negative temperature. However, in statistical mechanics, temperature can correspond to other degrees of freedom than just kinetic energy (see below).
Temperature and disorder
The distribution of energy among the various translational, vibrational, rotational, electronic, and nuclear modes of a system determines the macroscopic temperature. In a "normal" system, thermal energy is constantly being exchanged between the various modes.
However, in some situations, it is possible to isolate one or more of the modes. In practice, the isolated modes still exchange energy with the other modes, but the time scale of this exchange is much slower than for the exchanges within the isolated mode. One example is the case of nuclear spins in a strong external magnetic field. In this case, energy flows fairly rapidly among the spin states of interacting atoms, but energy transfer between the nuclear spins and other modes is relatively slow. Since the energy flow is predominantly within the spin system, it makes sense to think of a spin temperature that is distinct from the temperature associated to other modes.
A definition of temperature can be based on the relationship: 1/T = dS/dq_rev.
The relationship suggests that a positive temperature corresponds to the condition where entropy, S, increases as thermal energy, q_rev, is added to the system.
Examples
Noninteracting two-level particles
The simplest example, albeit a rather nonphysical one, is to consider a system of N particles, each of which can take an energy of either +ε or −ε but are otherwise noninteracting. This can be understood as a limit of the Ising model in which the interaction term becomes negligible. The total energy of the system is

E = ε Σ_i σ_i = εj,

where σ_i is the sign of the ith particle and j is the number of particles with positive energy minus the number of particles with negative energy. From elementary combinatorics, the total number of microstates with this amount of energy is a binomial coefficient:

Ω_E = N! / [((N + j)/2)! ((N − j)/2)!].

By the fundamental assumption of statistical mechanics, the entropy of this microcanonical ensemble is

S = k_B ln Ω_E.

We can solve for thermodynamic beta (β = 1/(k_B T)) by considering it as a central difference without taking the continuum limit:

β = (1/k_B) ΔS/ΔE ≈ (1/(2ε)) ln[(N − j)/(N + j)],

hence the temperature

T(E) = (2ε/k_B) / ln[(N − j)/(N + j)],

which is negative whenever more than half of the particles are in the higher-energy state (j > 0).
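A short numerical check of this result is sketched below (illustrative only; the particle number N = 50 and the unit choices ε = 1, k_B = 1 are assumptions made for the example). It counts microstates directly and differentiates the entropy with respect to energy.

```python
import math

N = 50      # number of two-level particles (assumed example value)
EPS = 1.0   # energy splitting epsilon in arbitrary units; k_B is set to 1

def entropy(j: int) -> float:
    """S = ln(number of microstates) when (N + j)/2 particles sit in the upper level."""
    return math.log(math.comb(N, (N + j) // 2))

for j in (-40, -20, 0, 20, 40):   # j = N_plus - N_minus (even values only)
    # central difference of S over one spin flip on each side (total energy span 4*EPS)
    beta = (entropy(j + 2) - entropy(j - 2)) / (4 * EPS)
    T = math.inf if beta == 0 else 1 / beta
    print(f"j = {j:+3d}:  T = {T:+10.2f}  (units of eps/k_B)")
# T is positive below half filling, diverges at j = 0, and is negative above half filling
```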
This entire proof assumes the microcanonical ensemble with energy fixed and temperature being the emergent property. In the canonical ensemble, the temperature is fixed and energy is the emergent property. This leads to (where the index i runs over microstates):

Z = Σ_i e^(−E_i/(k_B T)),   ⟨E⟩ = (1/Z) Σ_i E_i e^(−E_i/(k_B T)),   S = k_B ln Z + ⟨E⟩/T.
Following the previous example, we choose a state with two levels and two particles. This leads to four microstates, with total energies −2ε, 0, 0, and +2ε (the sign combinations −−, −+, +−, and ++).
The resulting entropy, average energy, and populations of the higher-energy microstates all increase with temperature and never need to enter a negative temperature regime.
Nuclear spins
The previous example is approximately realized by a system of nuclear spins in an external magnetic field. This allows the experiment to be run as a variation of nuclear magnetic resonance spectroscopy. In the case of electronic and nuclear spin systems, there are only a finite number of modes available, often just two, corresponding to spin up and spin down. In the absence of a magnetic field, these spin states are degenerate, meaning that they correspond to the same energy. When an external magnetic field is applied, the energy levels are split, since those spin states that are aligned with the magnetic field will have a different energy from those that are anti-parallel to it.
In the absence of a magnetic field, such a two-spin system would have maximum entropy when half the atoms are in the spin-up state and half are in the spin-down state, and so one would expect to find the system with close to an equal distribution of spins. Upon application of a magnetic field, some of the atoms will tend to align so as to minimize the energy of the system, thus slightly more atoms should be in the lower-energy state (for the purposes of this example we will assume the spin-down state is the lower-energy state). It is possible to add energy to the spin system using radio frequency techniques. This causes atoms to flip from spin-down to spin-up.
Since we started with over half the atoms in the spin-down state, this initially drives the system towards a 50/50 mixture, so the entropy is increasing, corresponding to a positive temperature. However, at some point, more than half of the spins are in the spin-up position. In this case, adding additional energy reduces the entropy, since it moves the system further from a 50/50 mixture. This reduction in entropy with the addition of energy corresponds to a negative temperature. In NMR spectroscopy, this corresponds to pulses with a pulse width of over 180° (for a given spin). While relaxation is fast in solids, it can take several seconds in solutions and even longer in gases and in ultracold systems; several hours were reported for silver and rhodium at picokelvin temperatures. It is still important to understand that the temperature is negative only with respect to nuclear spins. Other degrees of freedom, such as molecular vibrational, electronic and electron spin levels are at a positive temperature, so the object still has positive sensible heat. Relaxation actually happens by exchange of energy between the nuclear spin states and other states (e.g. through the nuclear Overhauser effect with other spins).
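The spin temperature implied by a given population ratio follows from the Boltzmann factor n_up/n_down = exp(−ΔE/(k_B T)); inverting that relation gives a negative temperature as soon as the upper level is more populated than the lower one. The sketch below is illustrative only; the Zeeman splitting corresponding to a ~400 MHz proton Larmor frequency (roughly a 9.4 T field) is an assumed example value.

```python
import math

H = 6.62607015e-34   # Planck constant, J*s
K_B = 1.380649e-23   # Boltzmann constant, J/K
DELTA_E = H * 400e6  # assumed Zeeman splitting: proton Larmor frequency ~400 MHz

def spin_temperature(p_up: float) -> float:
    """Temperature whose Boltzmann factor reproduces the upper-level population p_up:
    p_up / p_down = exp(-DELTA_E / (k_B * T))."""
    p_down = 1 - p_up
    return -DELTA_E / (K_B * math.log(p_up / p_down))

for p_up in (0.4999, 0.49999, 0.50001, 0.5001):
    print(f"upper-level population {p_up:<8}:  T = {spin_temperature(p_up):+8.1f} K")
# slightly less than half excited -> large positive T; slightly more -> large negative T
```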
Lasers
This phenomenon can also be observed in many lasing systems, wherein a large fraction of the system's atoms (for chemical and gas lasers) or electrons (in semiconductor lasers) are in excited states. This is referred to as a population inversion.
The Hamiltonian for a single mode of a luminescent radiation field at frequency ν is

H = hν a†a.

The density operator in the grand canonical ensemble is

ρ = exp(−β(H − μN)) / Tr[exp(−β(H − μN))].

For the system to have a ground state, the trace to converge, and the density operator to be generally meaningful, β(H − μN) must be positive semidefinite. So if hν < μ, and H − μN is negative semidefinite, then β must itself be negative, implying a negative temperature.
Motional degrees of freedom
Negative temperatures have also been achieved in motional degrees of freedom. Using an optical lattice, upper bounds were placed on the kinetic energy, interaction energy and potential energy of cold potassium-39 atoms. This was done by tuning the interactions of the atoms from repulsive to attractive using a Feshbach resonance and changing the overall harmonic potential from trapping to anti-trapping, thus transforming the Bose-Hubbard Hamiltonian from H to −H. Performing this transformation adiabatically while keeping the atoms in the Mott insulator regime, it is possible to go from a low entropy positive temperature state to a low entropy negative temperature state. In the negative temperature state, the atoms macroscopically occupy the maximum momentum state of the lattice. The negative temperature ensembles equilibrated and showed long lifetimes in an anti-trapping harmonic potential.
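One way to see why flipping the sign of the Hamiltonian produces negative temperature (a standard statistical-mechanics argument, not a detail taken from the experiment itself) is that the canonical Boltzmann weight is unchanged when H and T change sign together:

```latex
e^{-H/(k_B T)} = e^{-(-H)/(k_B (-T))},
\qquad\text{so}\qquad
\rho_{-H,\,+T} = \rho_{H,\,-T}.
```

Hence an ordinary low-entropy equilibrium state prepared for the inverted Hamiltonian at positive temperature is, for the original Hamiltonian, an equilibrium state at negative temperature.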
Two-dimensional vortex motion
Two-dimensional systems of vortices confined to a finite area can form thermal equilibrium states at negative temperature; indeed, negative temperature states were first predicted by Onsager in his analysis of classical point vortices. Onsager's prediction was confirmed experimentally for a system of quantum vortices in a Bose–Einstein condensate in 2019.
See also
Negative resistance
Two's complement
References
Further reading
External links
Temperature
Entropy
Magnetism
Laser science | Negative temperature | [
"Physics",
"Chemistry",
"Mathematics"
] | 2,810 | [
"Scalar physical quantities",
"Thermodynamic properties",
"Temperature",
"Physical quantities",
"SI base quantities",
"Intensive quantities",
"Quantity",
"Entropy",
"Thermodynamics",
"Asymmetry",
"Wikipedia categories named after physical quantities",
"Symmetry",
"Dynamical systems"
] |
363,325 | https://en.wikipedia.org/wiki/Homogeneous%20space | In mathematics, a homogeneous space is, very informally, a space that looks the same everywhere, as you move through it, with movement given by the action of a group. Homogeneous spaces occur in the theories of Lie groups, algebraic groups and topological groups. More precisely, a homogeneous space for a group G is a non-empty manifold or topological space X on which G acts transitively. The elements of G are called the symmetries of X. A special case of this is when the group G in question is the automorphism group of the space X – here "automorphism group" can mean isometry group, diffeomorphism group, or homeomorphism group. In this case, X is homogeneous if intuitively X looks locally the same at each point, either in the sense of isometry (rigid geometry), diffeomorphism (differential geometry), or homeomorphism (topology). Some authors insist that the action of G be faithful (non-identity elements act non-trivially), although the present article does not. Thus there is a group action of G on X that can be thought of as preserving some "geometric structure" on X, and making X into a single G-orbit.
Formal definition
Let X be a non-empty set and G a group. Then X is called a G-space if it is equipped with an action of G on X. Note that automatically G acts by automorphisms (bijections) on the set. If X in addition belongs to some category, then the elements of G are assumed to act as automorphisms in the same category. That is, the maps on X coming from elements of G preserve the structure associated with the category (for example, if X is an object in Diff then the action is required to be by diffeomorphisms). A homogeneous space is a G-space on which G acts transitively.
If X is an object of the category C, then the structure of a G-space is a homomorphism:
into the group of automorphisms of the object X in the category C. The pair (X, ρ) defines a homogeneous space provided ρ(G) is a transitive group of symmetries of the underlying set of X.
Examples
For example, if X is a topological space, then group elements are assumed to act as homeomorphisms on X. The structure of a G-space is a group homomorphism ρ : G → Homeo(X) into the homeomorphism group of X.
Similarly, if X is a differentiable manifold, then the group elements are diffeomorphisms. The structure of a G-space is a group homomorphism into the diffeomorphism group of X.
Riemannian symmetric spaces are an important class of homogeneous spaces, and include many of the examples listed below.
Concrete examples include:
Isometry groups
Positive curvature:
Sphere (orthogonal group): S^(n−1) ≅ O(n)/O(n−1). This is true because of the following observations: first, S^(n−1) is the set of vectors in R^n with norm 1. If we consider one of these vectors as a base vector, then any other vector can be constructed using an orthogonal transformation. If we consider the span of this vector as a one-dimensional subspace of R^n, then the complement is an (n − 1)-dimensional vector space that is invariant under an orthogonal transformation from O(n − 1). This shows us why we can construct S^(n−1) as a homogeneous space (a numerical sketch of this transitivity appears after this list).
Oriented sphere (special orthogonal group):
Projective space (projective orthogonal group):
Flat (zero curvature):
Euclidean space (Euclidean group, point stabilizer is orthogonal group):
Negative curvature:
Hyperbolic space (orthochronous Lorentz group, point stabilizer orthogonal group, corresponding to hyperboloid model):
Oriented hyperbolic space:
Anti-de Sitter space:
Others
Affine space over field K (for affine group, point stabilizer general linear group): .
Grassmannian:
Topological vector spaces (in the sense of topology)
There are other interesting homogeneous spaces, in particular with relevance in physics: these include Minkowski space as well as Galilean and Carrollian spaces.
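A small numerical illustration of the transitivity in the sphere example above (the dimension n = 4, the random seed, and the use of NumPy are my own illustrative choices, not part of the article):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
e1 = np.zeros(n)
e1[0] = 1.0                      # the chosen base point on the sphere S^(n-1)

# An arbitrary target point on the unit sphere.
v = rng.normal(size=n)
v /= np.linalg.norm(v)

# Complete v to an orthonormal basis via QR; the resulting Q lies in O(n).
Q, _ = np.linalg.qr(np.column_stack([v, rng.normal(size=(n, n - 1))]))
if Q[:, 0] @ v < 0:
    Q[:, 0] *= -1                # flipping one column keeps the matrix orthogonal

print(np.allclose(Q.T @ Q, np.eye(n)))  # True: Q is orthogonal
print(np.allclose(Q @ e1, v))           # True: Q carries the base point e1 to v
# The elements of O(n) fixing e1 act only on the orthogonal complement of e1,
# i.e. they form a copy of O(n-1) -- the stabilizer in the coset description.
```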
Geometry
From the point of view of the Erlangen program, one may understand that "all points are the same", in the geometry of X. This was true of essentially all geometries proposed before Riemannian geometry, in the middle of the nineteenth century.
Thus, for example, Euclidean space, affine space and projective space are all in natural ways homogeneous spaces for their respective symmetry groups. The same is true of the models found of non-Euclidean geometry of constant curvature, such as hyperbolic space.
A further classical example is the space of lines in projective space of three dimensions (equivalently, the space of two-dimensional subspaces of a four-dimensional vector space). It is simple linear algebra to show that GL4 acts transitively on those. We can parameterize them by line co-ordinates: these are the 2×2 minors of the 4×2 matrix with columns two basis vectors for the subspace. The geometry of the resulting homogeneous space is the line geometry of Julius Plücker.
Homogeneous spaces as coset spaces
In general, if X is a homogeneous space of G, and Ho is the stabilizer of some marked point o in X (a choice of origin), the points of X correspond to the left cosets G/Ho, and the marked point o corresponds to the coset of the identity. Conversely, given a coset space G/H, it is a homogeneous space for G with a distinguished point, namely the coset of the identity. Thus a homogeneous space can be thought of as a coset space without a choice of origin.
For example, if H is the identity subgroup {e}, then X is a G-torsor, which explains why G-torsors are often described intuitively as "G with forgotten identity".
In general, a different choice of origin o will lead to a quotient of G by a different subgroup Ho′ that is related to Ho by an inner automorphism of G. Specifically,
where g is any element of G for which g · o = o′. Note that the inner automorphism (1) does not depend on which such g is selected; it depends only on g modulo Ho.
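The relation referred to as (1) is, in its standard form (a reconstruction under the notation of this paragraph, not a quotation), the statement that the two stabilizers are conjugate:

```latex
H_{o'} \;=\; g\, H_{o}\, g^{-1}, \qquad \text{where } g \in G \text{ satisfies } g \cdot o = o' . \tag{1}
```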
If the action of G on X is continuous and X is Hausdorff, then H is a closed subgroup of G. In particular, if G is a Lie group, then H is a Lie subgroup by Cartan's theorem. Hence is a smooth manifold and so X carries a unique smooth structure compatible with the group action.
One can go further to double coset spaces, notably Clifford–Klein forms Γ\G/H, where Γ is a discrete subgroup (of G) acting properly discontinuously.
Example
For example, in the line geometry case, we can identify H as a 12-dimensional subgroup of the 16-dimensional general linear group, GL(4), defined by conditions on the matrix entries
h13 = h14 = h23 = h24 = 0,
by looking for the stabilizer of the subspace spanned by the first two standard basis vectors. That shows that X has dimension 4.
Since the homogeneous coordinates given by the minors are 6 in number, this means that the latter are not independent of each other. In fact, a single quadratic relation holds between the six minors, as was known to nineteenth-century geometers.
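The single quadratic relation mentioned above is the Plücker relation; the short script below checks it numerically for a randomly chosen 2-plane (the indexing convention and variable names are standard choices of mine, not notation from the article):

```python
import itertools
import random

random.seed(1)
# Two basis vectors of a two-dimensional subspace of a four-dimensional space,
# i.e. the two columns of the 4x2 matrix described in the text.
a = [random.randint(-5, 5) for _ in range(4)]
b = [random.randint(-5, 5) for _ in range(4)]

def minor(i, j):
    """The 2x2 minor built from rows i and j of the 4x2 matrix with columns a, b."""
    return a[i] * b[j] - a[j] * b[i]

p = {(i, j): minor(i, j) for i, j in itertools.combinations(range(4), 2)}
lhs = p[(0, 1)] * p[(2, 3)] - p[(0, 2)] * p[(1, 3)] + p[(0, 3)] * p[(1, 2)]
print(p)    # the six homogeneous line coordinates
print(lhs)  # 0: the single quadratic (Plücker) relation holds
```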
This example was the first known example of a Grassmannian, other than a projective space. There are many further homogeneous spaces of the classical linear groups in common use in mathematics.
Prehomogeneous vector spaces
The idea of a prehomogeneous vector space was introduced by Mikio Sato.
It is a finite-dimensional vector space V with a group action of an algebraic group G, such that there is an orbit of G that is open for the Zariski topology (and so, dense). An example is GL(1) acting on a one-dimensional space.
The definition is more restrictive than it initially appears: such spaces have remarkable properties, and there is a classification of irreducible prehomogeneous vector spaces, up to a transformation known as "castling".
Homogeneous spaces in physics
Given the Poincaré group G and its subgroup the Lorentz group H, the space of cosets is the Minkowski space. Together with de Sitter space and Anti-de Sitter space these are the maximally symmetric lorentzian spacetimes. There are also homogeneous spaces of relevance in physics that are non-lorentzian, for example Galilean, Carrollian or Aristotelian spacetimes.
Physical cosmology using the general theory of relativity makes use of the Bianchi classification system. Homogeneous spaces in relativity represent the space part of background metrics for some cosmological models; for example, the three cases of the Friedmann–Lemaître–Robertson–Walker metric may be represented by subsets of the Bianchi I (flat), V (open), VII (flat or open) and IX (closed) types, while the Mixmaster universe represents an anisotropic example of a Bianchi IX cosmology.
A homogeneous space of N dimensions admits a set of N(N + 1)/2 Killing vectors. For three dimensions, this gives a total of six linearly independent Killing vector fields; homogeneous 3-spaces have the property that one may use linear combinations of these to find three everywhere non-vanishing Killing vector fields ξ,
where the object Cabc, the "structure constants", form a constant order-three tensor antisymmetric in its lower two indices (on the left-hand side, the brackets denote antisymmetrisation and ";" represents the covariant differential operator). In the case of a flat isotropic universe, one possibility is Cabc = 0 (type I), but in the case of a closed FLRW universe, Cabc = εabc, where εabc is the Levi-Civita symbol.
See also
Erlangen program
Klein geometry
Heap (mathematics)
Homogeneous variety
Notes
References
John Milnor & James D. Stasheff (1974) Characteristic Classes, Princeton University Press
Takashi Koda An Introduction to the Geometry of Homogeneous Spaces from Kyungpook National University
Menelaos Zikidis Homogeneous Spaces from Heidelberg University
Shoshichi Kobayashi, Katsumi Nomizu (1969) Foundations of Differential Geometry, volume 2, chapter X, (Wiley Classics Library)
Topological groups
Lie groups | Homogeneous space | [
"Physics",
"Mathematics"
] | 2,157 | [
"Lie groups",
"Mathematical structures",
"Group actions",
"Homogeneous spaces",
"Space (mathematics)",
"Topological spaces",
"Algebraic structures",
"Geometry",
"Topological groups",
"Symmetry"
] |
363,360 | https://en.wikipedia.org/wiki/Lyapunov%20stability | Various types of stability may be discussed for the solutions of differential equations or difference equations describing dynamical systems. The most important type is that concerning the stability of solutions near to a point of equilibrium. This may be discussed by the theory of Aleksandr Lyapunov. In simple terms, if the solutions that start out near an equilibrium point stay near forever, then is Lyapunov stable. More strongly, if is Lyapunov stable and all solutions that start out near converge to , then is said to be asymptotically stable (see asymptotic analysis). The notion of exponential stability guarantees a minimal rate of decay, i.e., an estimate of how quickly the solutions converge. The idea of Lyapunov stability can be extended to infinite-dimensional manifolds, where it is known as structural stability, which concerns the behavior of different but "nearby" solutions to differential equations. Input-to-state stability (ISS) applies Lyapunov notions to systems with inputs.
History
Lyapunov stability is named after Aleksandr Mikhailovich Lyapunov, a Russian mathematician who defended the thesis The General Problem of Stability of Motion at Kharkov University in 1892. A. M. Lyapunov was a pioneer in successful endeavors to develop a global approach to the analysis of the stability of nonlinear dynamical systems, in contrast to the widely used local method of linearizing them about points of equilibrium. His work, initially published in Russian and then translated into French, received little attention for many years. The mathematical theory of stability of motion founded by A. M. Lyapunov considerably anticipated its eventual adoption in science and technology. Moreover, Lyapunov himself did not make applications in this field; his own interest was in the stability of rotating fluid masses with astronomical application. He did not have doctoral students who followed his research in the field of stability, and his own fate was tragic: he died by suicide in 1918. For several decades the theory of stability sank into complete oblivion. The Russian-Soviet mathematician and mechanician Nikolay Gur'yevich Chetaev, working at the Kazan Aviation Institute in the 1930s, was the first to realize the magnitude of the discovery made by A. M. Lyapunov. The contribution to the theory made by N. G. Chetaev was so significant that many mathematicians, physicists and engineers consider him Lyapunov's direct successor and the next-in-line scientific descendant in the creation and development of the mathematical theory of stability.
The interest in it suddenly skyrocketed during the Cold War period when the so-called "Second Method of Lyapunov" (see below) was found to be applicable to the stability of aerospace guidance systems which typically contain strong nonlinearities not treatable by other methods. A large number of publications appeared then and since in the control and systems literature.
More recently the concept of the Lyapunov exponent (related to Lyapunov's First Method of discussing stability) has received wide interest in connection with chaos theory. Lyapunov stability methods have also been applied to finding equilibrium solutions in traffic assignment problems.
Definition for continuous-time systems
Consider an autonomous nonlinear dynamical system
dx/dt = f(x(t)),   x(0) = x_0,
where x(t) ∈ D denotes the system state vector, D an open set containing the origin, and f : D → R^n is a continuous vector field on D. Suppose f has an equilibrium at x_e, so that f(x_e) = 0; then
This equilibrium is said to be Lyapunov stable if for every ε > 0 there exists a δ > 0 such that if ||x(0) − x_e|| < δ, then for every t ≥ 0 we have ||x(t) − x_e|| < ε.
The equilibrium of the above system is said to be asymptotically stable if it is Lyapunov stable and there exists δ > 0 such that if ||x(0) − x_e|| < δ, then lim_(t→∞) ||x(t) − x_e|| = 0.
The equilibrium of the above system is said to be exponentially stable if it is asymptotically stable and there exist α > 0, β > 0, δ > 0 such that if ||x(0) − x_e|| < δ, then ||x(t) − x_e|| ≤ α ||x(0) − x_e|| e^(−βt) for all t ≥ 0.
Conceptually, the meanings of the above terms are the following:
Lyapunov stability of an equilibrium means that solutions starting "close enough" to the equilibrium (within a distance δ from it) remain "close enough" forever (within a distance ε from it). Note that this must be true for any ε that one may want to choose.
Asymptotic stability means that solutions that start close enough not only remain close enough but also eventually converge to the equilibrium.
Exponential stability means that solutions not only converge, but in fact converge faster than or at least as fast as a particular known rate α ||x(0) − x_e|| e^(−βt).
The trajectory is (locally) attractive if
as
for all trajectories that start close enough to , and globally attractive if this property holds for all trajectories.
That is, if x belongs to the interior of its stable manifold, it is asymptotically stable if it is both attractive and stable. (There are examples showing that attractivity does not imply asymptotic stability. Such examples are easy to create using homoclinic connections.)
If the Jacobian of the dynamical system at an equilibrium happens to be a stability matrix (i.e., if the real part of each eigenvalue is strictly negative), then the equilibrium is asymptotically stable.
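A hedged numerical sketch of this eigenvalue test (the damped-pendulum vector field, the finite-difference step, and all names below are my own illustrative choices):

```python
import numpy as np

def f(x):
    """A damped pendulum written as a first-order system (illustrative example)."""
    theta, omega = x
    return np.array([omega, -np.sin(theta) - 0.5 * omega])

def jacobian(func, x, h=1e-6):
    """Numerical Jacobian of func at x by central differences."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n)
        dx[j] = h
        J[:, j] = (func(x + dx) - func(x - dx)) / (2 * h)
    return J

x_eq = np.zeros(2)                   # the downward-hanging equilibrium
eigenvalues = np.linalg.eigvals(jacobian(f, x_eq))
print(eigenvalues)                   # both eigenvalues have negative real part here
print(np.all(eigenvalues.real < 0))  # True -> the equilibrium is asymptotically stable
```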
System of deviations
Instead of considering stability only near an equilibrium point (a constant solution ), one can formulate similar definitions of stability near an arbitrary solution . However, one can reduce the more general case to that of an equilibrium by a change of variables called a "system of deviations". Define , obeying the differential equation:
.
This is no longer an autonomous system, but it has a guaranteed equilibrium point at whose stability is equivalent to the stability of the original solution .
Lyapunov's second method for stability
Lyapunov, in his original 1892 work, proposed two methods for demonstrating stability. The first method developed the solution in a series which was then proved convergent within limits. The second method, which is now referred to as the Lyapunov stability criterion or the Direct Method, makes use of a Lyapunov function V(x) which has an analogy to the potential function of classical dynamics. It is introduced as follows for a system having a point of equilibrium at x = 0. Consider a function V : R^n → R such that
V(x) = 0 if and only if x = 0
V(x) > 0 if and only if x ≠ 0
dV/dt = ∇V · f(x) ≤ 0 for all values of x ≠ 0. Note: for asymptotic stability, dV/dt < 0 for x ≠ 0 is required.
Then V(x) is called a Lyapunov function and the system is stable in the sense of Lyapunov. (Note that is required; otherwise for example would "prove" that is locally stable.) An additional condition called "properness" or "radial unboundedness" is required in order to conclude global stability. Global asymptotic stability (GAS) follows similarly.
It is easier to visualize this method of analysis by thinking of a physical system (e.g. vibrating spring and mass) and considering the energy of such a system. If the system loses energy over time and the energy is never restored then eventually the system must grind to a stop and reach some final resting state. This final state is called the attractor. However, finding a function that gives the precise energy of a physical system can be difficult, and for abstract mathematical systems, economic systems or biological systems, the concept of energy may not be applicable.
Lyapunov's realization was that stability can be proven without requiring knowledge of the true physical energy, provided a Lyapunov function can be found to satisfy the above constraints.
Definition for discrete-time systems
The definition for discrete-time systems is almost identical to that for continuous-time systems. The definition below provides this, using an alternate language commonly used in more mathematical texts.
Let (X, d) be a metric space and f : X → X a continuous function. A point x in X is said to be Lyapunov stable, if,
We say that x is asymptotically stable if it belongs to the interior of its stable set, i.e. if,
Stability for linear state space models
A linear state space model
dx/dt = A x(t),
where A is a finite matrix, is asymptotically stable (in fact, exponentially stable) if all real parts of the eigenvalues of A are negative. This condition is equivalent to the following one:
A^T M + M A is negative definite for some positive definite matrix M = M^T. (The relevant Lyapunov function is V(x) = x^T M x.)
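A minimal numerical check of this criterion using SciPy's Lyapunov-equation solver (the stable matrix A below is an arbitrary example of my own, not one from the article):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # eigenvalues -1 and -2: all real parts negative

Q = np.eye(2)
# solve_continuous_lyapunov solves a X + X a^H = q; to obtain A^T M + M A = -Q,
# pass a = A^T and q = -Q.
M = solve_continuous_lyapunov(A.T, -Q)

print(np.allclose(A.T @ M + M @ A, -Q))   # the Lyapunov equation holds
print(np.all(np.linalg.eigvalsh(M) > 0))  # M is positive definite: V(x) = x^T M x works
```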
Correspondingly, a time-discrete linear state space model
is asymptotically stable (in fact, exponentially stable) if all the eigenvalues of A have a modulus smaller than one.
This latter condition has been generalized to switched systems: a linear switched discrete time system (ruled by a set of matrices )
is asymptotically stable (in fact, exponentially stable) if the joint spectral radius of the set is smaller than one.
Stability for systems with inputs
A system with inputs (or controls) has the form
where the (generally time-dependent) input u(t) may be viewed as a control, external input,
stimulus, disturbance, or forcing function. It has been shown that near a point of equilibrium which is Lyapunov stable, the system remains stable under small disturbances. For larger input disturbances the study of such systems is the subject of control theory and applied in control engineering. For systems with inputs, one must quantify the effect of inputs on the stability of the system. The main two approaches to this analysis are BIBO stability (for linear systems) and input-to-state stability (ISS) (for nonlinear systems).
Example
This example shows a system where a Lyapunov function can be used to prove Lyapunov stability but cannot show asymptotic stability.
Consider the following equation, based on the Van der Pol oscillator equation with the friction term changed:
Let
so that the corresponding system is
The origin is the only equilibrium point.
Let us choose as a Lyapunov function
which is clearly positive definite. Its derivative is
It seems that if the parameter is positive, stability is asymptotic. But this is wrong: the derivative of V does not depend on one of the two state variables, and so it vanishes on the whole coordinate axis where the other variable is zero. The equilibrium is Lyapunov stable but not asymptotically stable.
Barbalat's lemma and stability of time-varying systems
It may be difficult to find a Lyapunov function with a negative definite derivative as required by the Lyapunov stability criterion, however a function with that is only negative semi-definite may be available. In autonomous systems, the invariant set theorem can be applied to prove asymptotic stability, but this theorem is not applicable when the dynamics are a function of time.
Instead, Barbalat's lemma allows for Lyapunov-like analysis of these non-autonomous systems. The lemma is motivated by the following observations. Assuming f is a function of time only:
Having f'(t) → 0 does not imply that f(t) has a limit at t → ∞. For example, f(t) = sin(ln t): the derivative cos(ln t)/t tends to 0, yet f(t) keeps oscillating and has no limit.
Having f(t) approach a limit as t → ∞ does not imply that f'(t) → 0. For example, f(t) = e^(−t) sin(e^(2t)) tends to 0, while its derivative is unbounded.
Having f(t) lower bounded and decreasing (f'(t) ≤ 0) implies it converges to a limit. But it does not say whether or not f'(t) → 0 as t → ∞.
Barbalat's Lemma says:
If f(t) has a finite limit as t → ∞ and if f'(t) is uniformly continuous (a sufficient condition for uniform continuity is that f''(t) is bounded), then f'(t) → 0 as t → ∞.
An alternative version is as follows:
Let and . If and , then as
In the following form the Lemma is true also in the vector valued case:
Let be a uniformly continuous function with values in a Banach space and assume that has a finite limit as . Then as .
The following example is taken from page 125 of Slotine and Li's book Applied Nonlinear Control.
Consider a non-autonomous system
This is non-autonomous because the input is a function of time. Assume that the input is bounded.
Taking gives
This says that by first two conditions and hence and are bounded. But it does not say anything about the convergence of to zero, as is only negative semi-definite (note can be non-zero when =0) and the dynamics are non-autonomous.
Using Barbalat's lemma:
.
This is bounded because , and are bounded. This implies as and hence . This proves that the error converges.
See also
Lyapunov function
LaSalle's invariance principle
Lyapunov–Malkin theorem
Markus–Yamabe conjecture
Libration point orbit
Hartman–Grobman theorem
Perturbation theory
References
Further reading
Stability theory
Dynamical systems
Lagrangian mechanics
Three-body orbits | Lyapunov stability | [
"Physics",
"Mathematics"
] | 2,553 | [
"Lagrangian mechanics",
"Classical mechanics",
"Stability theory",
"Mechanics",
"Dynamical systems"
] |
363,400 | https://en.wikipedia.org/wiki/Combinatorial%20topology | In mathematics, combinatorial topology was an older name for algebraic topology, dating from the time when topological invariants of spaces (for example the Betti numbers) were regarded as derived from combinatorial decompositions of spaces, such as decomposition into simplicial complexes. After the proof of the simplicial approximation theorem this approach provided rigour.
The change of name reflected the move to organise topological classes such as cycles-modulo-boundaries explicitly into abelian groups. This point of view is often attributed to Emmy Noether, and so the change of title may reflect her influence. The transition is also attributed to the work of Heinz Hopf, who was influenced by Noether, and to Leopold Vietoris and Walther Mayer, who independently defined homology.
A fairly precise date can be supplied in the internal notes of the Bourbaki group. While topology was still combinatorial in 1942, it had become algebraic by 1944. This corresponds also to the period where homological algebra and category theory were introduced for the study of topological spaces, and largely supplanted combinatorial methods.
Azriel Rosenfeld (1973) proposed digital topology for a type of image processing that can be considered as a new development of combinatorial topology. The digital forms of the Euler characteristic theorem and the Gauss–Bonnet theorem were obtained by Li Chen and Yongwu Rong. A 2D grid cell topology already appeared in the Alexandrov–Hopf book Topologie I (1935).
See also
Hauptvermutung
Topological combinatorics
Topological graph theory
Notes
References
Algebraic topology
Combinatorics
"Mathematics"
] | 341 | [
"Discrete mathematics",
"Algebraic topology",
"Combinatorics",
"Fields of abstract algebra",
"Topology"
] |
363,540 | https://en.wikipedia.org/wiki/Serre%E2%80%93Swan%20theorem | In the mathematical fields of topology and K-theory, the Serre–Swan theorem, also called Swan's theorem, relates the geometric notion of vector bundles to the algebraic concept of projective modules and gives rise to a common intuition throughout mathematics: "projective modules over commutative rings are like vector bundles on compact spaces".
The two precise formulations of the theorems differ somewhat. The original theorem, as stated by Jean-Pierre Serre in 1955, is more algebraic in nature, and concerns vector bundles on an algebraic variety over an algebraically closed field (of any characteristic). The complementary variant stated by Richard Swan in 1962 is more analytic, and concerns (real, complex, or quaternionic) vector bundles on a smooth manifold or Hausdorff space.
Differential geometry
Suppose M is a smooth manifold (not necessarily compact), and E is a smooth vector bundle over M. Then Γ(E), the space of smooth sections of E, is a module over C∞(M) (the commutative algebra of smooth real-valued functions on M). Swan's theorem states that this module is finitely generated and projective over C∞(M). In other words, every vector bundle is a direct summand of some trivial bundle: E ⊕ E′ ≅ M × R^k for some k. The theorem can be proved by constructing a bundle epimorphism from a trivial bundle M × R^k onto E. This can be done by, for instance, exhibiting sections s1...sk with the property that for each point p, {si(p)} span the fiber over p.
When M is connected, the converse is also true: every finitely generated projective module over C∞(M) arises in this way from some smooth vector bundle on M. Such a module can be viewed as a smooth function f on M with values in the n × n idempotent matrices for some n. The fiber of the corresponding vector bundle over x is then the range of f(x). If M is not connected, the converse does not hold unless one allows for vector bundles of non-constant rank (which means admitting manifolds of non-constant dimension). For example, if M is a zero-dimensional 2-point manifold, the module is finitely-generated and projective over but is not free, and so cannot correspond to the sections of any (constant-rank) vector bundle over M (all of which are trivial).
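A hedged concrete illustration of such an idempotent-matrix-valued function (the tangent bundle of the 2-sphere is my own choice of example): at every point x of S^2 in R^3, the orthogonal projection onto the tangent plane is a smooth 3 × 3 idempotent whose range is the fibre.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
x /= np.linalg.norm(x)               # a point on the sphere S^2

P = np.eye(3) - np.outer(x, x)       # orthogonal projection onto the tangent plane at x

print(np.allclose(P @ P, P))         # True: P(x) is idempotent
print(np.isclose(np.trace(P), 2.0))  # rank 2 = dimension of the fibre T_x S^2
print(np.allclose(P @ x, 0))         # the normal direction is annihilated
```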
Another way of stating the above is that for any connected smooth manifold M, the section functor Γ from the category of smooth vector bundles over M to the category of finitely generated, projective C∞(M)-modules is full, faithful, and essentially surjective. Therefore the category of smooth vector bundles on M is equivalent to the category of finitely generated, projective C∞(M)-modules. Details may be found in .
Topology
Suppose X is a compact Hausdorff space, and C(X) is the ring of continuous real-valued functions on X. Analogous to the result above, the category of real vector bundles on X is equivalent to the category of finitely generated projective modules over C(X). The same result holds if one replaces "real-valued" by "complex-valued" and "real vector bundle" by "complex vector bundle", but it does not hold if one replaces the field by a totally disconnected field like the rational numbers.
In detail, let Vec(X) be the category of complex vector bundles over X, and let ProjMod(C(X)) be the category of finitely generated projective modules over the C*-algebra C(X). There is a functor Γ : Vec(X) → ProjMod(C(X)) which sends each complex vector bundle E over X to the C(X)-module Γ(X, E) of sections. If is a morphism of vector bundles over X then and it follows that
giving the map
which respects the module structure . Swan's theorem asserts that the functor Γ is an equivalence of categories.
Algebraic geometry
The analogous result in algebraic geometry, due to Serre (1955), applies to vector bundles in the category of affine varieties. Let X be an affine variety with structure sheaf O_X, and let F be a coherent sheaf of O_X-modules on X. Then F is the sheaf of germs of a finite-dimensional vector bundle if and only if the space of sections of F is a projective module over the commutative ring A = Γ(X, O_X).
References
Commutative algebra
Theorems in algebraic topology
Differential topology
K-theory | Serre–Swan theorem | [
"Mathematics"
] | 932 | [
"Theorems in topology",
"Fields of abstract algebra",
"Topology",
"Differential topology",
"Commutative algebra",
"Theorems in algebraic topology"
] |
363,551 | https://en.wikipedia.org/wiki/Universal%20enveloping%20algebra | In mathematics, the universal enveloping algebra of a Lie algebra is the unital associative algebra whose representations correspond precisely to the representations of that Lie algebra.
Universal enveloping algebras are used in the representation theory of Lie groups and Lie algebras. For example, Verma modules can be constructed as quotients of the universal enveloping algebra. In addition, the enveloping algebra gives a precise definition for the Casimir operators. Because Casimir operators commute with all elements of a Lie algebra, they can be used to classify representations. The precise definition also allows the importation of Casimir operators into other areas of mathematics, specifically, those that have a differential algebra. They also play a central role in some recent developments in mathematics. In particular, their dual provides a commutative example of the objects studied in non-commutative geometry, the quantum groups. This dual can be shown, by the Gelfand–Naimark theorem, to contain the C* algebra of the corresponding Lie group. This relationship generalizes to the idea of Tannaka–Krein duality between compact topological groups and their representations.
From an analytic viewpoint, the universal enveloping algebra of the Lie algebra of a Lie group may be identified with the algebra of left-invariant differential operators on the group.
Informal construction
The idea of the universal enveloping algebra is to embed a Lie algebra into an associative algebra with identity in such a way that the abstract bracket operation in corresponds to the commutator in and the algebra is generated by the elements of . There may be many ways to make such an embedding, but there is a unique "largest" such , called the universal enveloping algebra of .
Generators and relations
Let be a Lie algebra, assumed finite-dimensional for simplicity, with basis . Let be the structure constants for this basis, so that
Then the universal enveloping algebra is the associative algebra (with identity) generated by elements subject to the relations
and no other relations. Below we will make this "generators and relations" construction more precise by constructing the universal enveloping algebra as a quotient of the tensor algebra over .
Consider, for example, the Lie algebra sl(2,C), spanned by the matrices
which satisfy the commutation relations [H, X] = 2X, [H, Y] = −2Y, and [X, Y] = H. The universal enveloping algebra of sl(2,C) is then the algebra generated by three elements x, y, h subject to the relations
hx − xh = 2x,   hy − yh = −2y,   xy − yx = h,
and no other relations. We emphasize that the universal enveloping algebra is not the same as (or contained in) the algebra of 2 × 2 matrices. For example, the 2 × 2 matrix H satisfies H^2 = I, as is easily verified. But in the universal enveloping algebra, the corresponding element h does not satisfy h^2 = 1, because we do not impose this relation in the construction of the enveloping algebra. Indeed, it follows from the Poincaré–Birkhoff–Witt theorem (discussed below) that the elements 1, h, h^2, h^3, ... are all linearly independent in the universal enveloping algebra.
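A quick matrix-level check of this example. The explicit basis below (X = [[0, 1], [0, 0]], Y = [[0, 0], [1, 0]], H = [[1, 0], [0, −1]]) is the standard choice and is my assumption for the matrices whose display did not survive above.

```python
import numpy as np

X = np.array([[0, 1], [0, 0]])
Y = np.array([[0, 0], [1, 0]])
H = np.array([[1, 0], [0, -1]])

def comm(a, b):
    return a @ b - b @ a

print(np.array_equal(comm(H, X), 2 * X))   # [H, X] = 2X
print(np.array_equal(comm(H, Y), -2 * Y))  # [H, Y] = -2Y
print(np.array_equal(comm(X, Y), H))       # [X, Y] = H

# H^2 = I holds for the matrix H, but this extra relation is NOT imposed on the
# generator h of U(sl(2,C)), where 1, h, h^2, h^3, ... stay linearly independent.
print(np.array_equal(H @ H, np.eye(2)))
```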
Finding a basis
In general, elements of the universal enveloping algebra are linear combinations of products of the generators in all possible orders. Using the defining relations of the universal enveloping algebra, we can always re-order those products in a particular order, say with all the factors of first, then factors of , etc. For example, whenever we have a term that contains (in the "wrong" order), we can use the relations to rewrite this as plus a linear combination of the 's. Doing this sort of thing repeatedly eventually converts any element into a linear combination of terms in ascending order. Thus, elements of the form
with the 's being non-negative integers, span the enveloping algebra. (We allow , meaning that we allow terms in which no factors of occur.) The Poincaré–Birkhoff–Witt theorem, discussed below, asserts that these elements are linearly independent and thus form a basis for the universal enveloping algebra. In particular, the universal enveloping algebra is always infinite dimensional.
The Poincaré–Birkhoff–Witt theorem implies, in particular, that the elements themselves are linearly independent. It is therefore common—if potentially confusing—to identify the 's with the generators of the original Lie algebra. That is to say, we identify the original Lie algebra as the subspace of its universal enveloping algebra spanned by the generators. Although may be an algebra of matrices, the universal enveloping of does not consist of (finite-dimensional) matrices. In particular, there is no finite-dimensional algebra that contains the universal enveloping of ; the universal enveloping algebra is always infinite dimensional. Thus, in the case of sl(2,C), if we identify our Lie algebra as a subspace of its universal enveloping algebra, we must not interpret , and as matrices, but rather as symbols with no further properties (other than the commutation relations).
Formalities
The formal construction of the universal enveloping algebra takes the above ideas, and wraps them in notation and terminology that makes it more convenient to work with. The most important difference is that the free associative algebra used in the above is narrowed to the tensor algebra, so that the product of symbols is understood to be the tensor product. The commutation relations are imposed by constructing a quotient space of the tensor algebra quotiented by the smallest two-sided ideal containing elements of the form . The universal enveloping algebra is the "largest" unital associative algebra generated by elements of with a Lie bracket compatible with the original Lie algebra.
Formal definition
Recall that every Lie algebra is in particular a vector space. Thus, one is free to construct the tensor algebra from it. The tensor algebra is a free algebra: it simply contains all possible tensor products of all possible vectors in , without any restrictions whatsoever on those products.
That is, one constructs the space
where is the tensor product, and is the direct sum of vector spaces. Here, is the field over which the Lie algebra is defined. From here, through to the remainder of this article, the tensor product is always explicitly shown. Many authors omit it, since, with practice, its location can usually be inferred from context. Here, a very explicit approach is adopted, to minimize any possible confusion about the meanings of expressions.
The first step in the construction is to "lift" the Lie bracket from the Lie algebra (where it is defined) to the tensor algebra (where it is not), so that one can coherently work with the Lie bracket of two tensors. The lifting is done as follows. First, recall that the bracket operation on a Lie algebra is a map that is bilinear, skew-symmetric and satisfies the Jacobi identity. We wish to define a Lie bracket [-,-] that is a map that is also bilinear, skew symmetric and obeys the Jacobi identity.
The lifting can be done grade by grade. Begin by defining the bracket on as
This is a consistent, coherent definition, because both sides are bilinear, and both sides are skew symmetric (the Jacobi identity will follow shortly). The above defines the bracket on ; it must now be lifted to for arbitrary This is done recursively, by defining
and likewise
It is straightforward to verify that the above definition is bilinear, and is skew-symmetric; one can also show that it obeys the Jacobi identity. The final result is that one has a Lie bracket that is consistently defined on all of one says that it has been "lifted" to all of in the conventional sense of a "lift" from a base space (here, the Lie algebra) to a covering space (here, the tensor algebra).
The result of this lifting is explicitly a Poisson algebra. It is a unital associative algebra with a Lie bracket that is compatible with the Lie algebra bracket; it is compatible by construction. It is not the smallest such algebra, however; it contains far more elements than needed. One can get something smaller by projecting back down. The universal enveloping algebra of is defined as the quotient space
where the equivalence relation is given by
That is, the Lie bracket defines the equivalence relation used to perform the quotienting. The result is still a unital associative algebra, and one can still take the Lie bracket of any two members. Computing the result is straight-forward, if one keeps in mind that each element of can be understood as a coset: one just takes the bracket as usual, and searches for the coset that contains the result. It is the smallest such algebra; one cannot find anything smaller that still obeys the axioms of an associative algebra.
The universal enveloping algebra is what remains of the tensor algebra after modding out the Poisson algebra structure. (This is a non-trivial statement; the tensor algebra has a rather complicated structure: it is, among other things, a Hopf algebra; the Poisson algebra is likewise rather complicated, with many peculiar properties. It is compatible with the tensor algebra, and so the modding can be performed. The Hopf algebra structure is conserved; this is what leads to its many novel applications, e.g. in string theory. However, for the purposes of the formal definition, none of this particularly matters.)
The construction can be performed in a slightly different (but ultimately equivalent) way. Forget, for a moment, the above lifting, and instead consider the two-sided ideal generated by elements of the form
This generator is an element of
A general member of the ideal will have the form
for some All elements of are obtained as linear combinations of elements of this form. Clearly, is a subspace. It is an ideal, in that if and then and Establishing that this is an ideal is important, because ideals are precisely those things that one can quotient with; ideals lie in the kernel of the quotienting map. That is, one has the short exact sequence
where each arrow is a linear map, and the kernel of that map is given by the image of the previous map. The universal enveloping algebra can then be defined as
Superalgebras and other generalizations
The above construction focuses on Lie algebras and on the Lie bracket, and its skewness and antisymmetry. To some degree, these properties are incidental to the construction. Consider instead some (arbitrary) algebra (not a Lie algebra) over a vector space, that is, a vector space endowed with multiplication that takes elements If the multiplication is bilinear, then the same construction and definitions can go through. One starts by lifting up to so that the lifted obeys all of the same properties that the base does – symmetry or antisymmetry or whatever. The lifting is done exactly as before, starting with
This is consistent precisely because the tensor product is bilinear, and the multiplication is bilinear. The rest of the lift is performed so as to preserve multiplication as a homomorphism. By definition, one writes
and also that
This extension is consistent by appeal to a lemma on free objects: since the tensor algebra is a free algebra, any homomorphism on its generating set can be extended to the entire algebra. Everything else proceeds as described above: upon completion, one has a unital associative algebra; one can take a quotient in either of the two ways described above.
The above is exactly how the universal enveloping algebra for Lie superalgebras is constructed. One need only to carefully keep track of the sign, when permuting elements. In this case, the (anti-)commutator of the superalgebra lifts to an (anti-)commuting Poisson bracket.
Another possibility is to use something other than the tensor algebra as the covering algebra. One such possibility is to use the exterior algebra; that is, to replace every occurrence of the tensor product by the exterior product. If the base algebra is a Lie algebra, then the result is the Gerstenhaber algebra; it is the exterior algebra of the corresponding Lie group. As before, it has a grading naturally coming from the grading on the exterior algebra. (The Gerstenhaber algebra should not be confused with the Poisson superalgebra; both invoke anticommutation, but in different ways.)
The construction has also been generalized for Malcev algebras, Bol algebras and left alternative algebras.
Universal property
The universal enveloping algebra, or rather the universal enveloping algebra together with the canonical map , possesses a universal property. Suppose we have any Lie algebra map
to a unital associative algebra (with Lie bracket in given by the commutator). More explicitly, this means that we assume
for all . Then there exists a unique unital algebra homomorphism
such that
where is the canonical map. (The map is obtained by embedding into its tensor algebra and then composing with the quotient map to the universal enveloping algebra. This map is an embedding, by the Poincaré–Birkhoff–Witt theorem.)
To put it differently, if is a linear map into a unital algebra satisfying , then extends to an algebra homomorphism of . Since is generated by elements of , the map must be uniquely determined by the requirement that
.
The point is that because there are no other relations in the universal enveloping algebra besides those coming from the commutation relations of , the map is well defined, independent of how one writes a given element as a linear combination of products of Lie algebra elements.
The universal property of the enveloping algebra immediately implies that every representation of acting on a vector space extends uniquely to a representation of . (Take .) This observation is important because it allows (as discussed below) the Casimir elements to act on . These operators (from the center of ) act as scalars and provide important information about the representations. The quadratic Casimir element is of particular importance in this regard.
Other algebras
Although the canonical construction, given above, can be applied to other algebras, the result, in general, does not have the universal property. Thus, for example, when the construction is applied to Jordan algebras, the resulting enveloping algebra contains the special Jordan algebras, but not the exceptional ones: that is, it does not envelope the Albert algebras. Likewise, the Poincaré–Birkhoff–Witt theorem, below, constructs a basis for an enveloping algebra; it just won't be universal. Similar remarks hold for the Lie superalgebras.
Poincaré–Birkhoff–Witt theorem
The Poincaré–Birkhoff–Witt theorem gives a precise description of . This can be done in either one of two different ways: either by reference to an explicit vector basis on the Lie algebra, or in a coordinate-free fashion.
Using basis elements
One way is to suppose that the Lie algebra can be given a totally ordered basis, that is, it is the free vector space of a totally ordered set. Recall that a free vector space is defined as the space of all finitely supported functions from a set to the field (finitely supported means that only finitely many values are non-zero); it can be given a basis such that is the indicator function for . Let be the injection into the tensor algebra; this is used to give the tensor algebra a basis as well. This is done by lifting: given some arbitrary sequence of , one defines the extension of to be
The Poincaré–Birkhoff–Witt theorem then states that one can obtain a basis for from the above, by enforcing the total order of onto the algebra. That is, has a basis
where , the ordering being that of total order on the set . The proof of the theorem involves noting that, if one starts with out-of-order basis elements, these can always be swapped by using the commutator (together with the structure constants). The hard part of the proof is establishing that the final result is unique and independent of the order in which the swaps were performed.
This basis should be easily recognized as the basis of a symmetric algebra. That is, the underlying vector spaces of and the symmetric algebra are isomorphic, and it is the PBW theorem that shows that this is so. See, however, the section on the algebra of symbols, below, for a more precise statement of the nature of the isomorphism.
It is useful, perhaps, to split the process into two steps. In the first step, one constructs the free Lie algebra: this is what one gets, if one mods out by all commutators, without specifying what the values of the commutators are. The second step is to apply the specific commutation relations from the given Lie algebra. The first step is universal, and does not depend on the specific Lie algebra. It can also be precisely defined: the basis elements are given by Hall words, a special case of which are the Lyndon words; these are explicitly constructed to behave appropriately as commutators.
Coordinate-free
One can also state the theorem in a coordinate-free fashion, avoiding the use of total orders and basis elements. This is convenient when there are difficulties in defining the basis vectors, as there can be for infinite-dimensional Lie algebras. It also gives a more natural form that is more easily extended to other kinds of algebras. This is accomplished by constructing a filtration whose limit is the universal enveloping algebra
First, a notation is needed for an ascending sequence of subspaces of the tensor algebra. Let
where
is the -times tensor product of The form a filtration:
More precisely, this is a filtered algebra, since the filtration preserves the algebraic properties of the subspaces. Note that the limit of this filtration is the tensor algebra
It was already established, above, that quotienting by the ideal is a natural transformation that takes one from to This also works naturally on the subspaces, and so one obtains a filtration whose limit is the universal enveloping algebra
Next, define the space
This is the space modulo all of the subspaces of strictly smaller filtration degree. Note that is not at all the same as the leading term of the filtration, as one might naively surmise. It is not constructed through a set subtraction mechanism associated with the filtration.
Quotienting by has the effect of setting all Lie commutators defined in to zero. One can see this by observing that the commutator of a pair of elements whose products lie in actually gives an element in . This is perhaps not immediately obvious: to get this result, one must repeatedly apply the commutation relations, and turn the crank. The essence of the Poincaré–Birkhoff–Witt theorem is that it is always possible to do this, and that the result is unique.
Since commutators of elements whose products are defined in lie in , the quotienting that defines has the effect of setting all commutators to zero. What PBW states is that the commutator of elements in is necessarily zero. What is left are the elements that are not expressible as commutators.
In this way, one is led immediately to the symmetric algebra. This is the algebra where all commutators vanish. It can be defined as a filtration of symmetric tensor products. Its limit is the symmetric algebra. It is constructed by appeal to the same notion of naturality as before. One starts with the same tensor algebra, and just uses a different ideal, the ideal that makes all elements commute:
Thus, one can view the Poincaré–Birkhoff–Witt theorem as stating that is isomorphic to the symmetric algebra , both as a vector space and as a commutative algebra.
These subspaces also form a filtered algebra; its limit is the associated graded algebra of the filtration.
The construction above, due to its use of quotienting, implies that the limit of the filtration is isomorphic to the symmetric algebra. In more general settings, with loosened conditions, one finds that one only obtains a projection, and one then gets PBW-type theorems for the associated graded algebra of a filtered algebra. To emphasize this, the notation is sometimes chosen as a reminder that it is the filtered algebra that is being used.
Other algebras
The theorem, applied to Jordan algebras, yields the exterior algebra, rather than the symmetric algebra. In essence, the construction zeros out the anti-commutators. The resulting algebra is an enveloping algebra, but is not universal. As mentioned above, it fails to envelop the exceptional Jordan algebras.
Left-invariant differential operators
Suppose is a real Lie group with Lie algebra . Following the modern approach, we may identify with the space of left-invariant vector fields (i.e., first-order left-invariant differential operators). Specifically, if we initially think of as the tangent space to at the identity, then each vector in has a unique left-invariant extension. We then identify the vector in the tangent space with the associated left-invariant vector field. Now, the commutator (as differential operators) of two left-invariant vector fields is again a vector field and again left-invariant. We can then define the bracket operation on as the commutator on the associated left-invariant vector fields. This definition agrees with any other standard definition of the bracket structure on the Lie algebra of a Lie group.
We may then consider left-invariant differential operators of arbitrary order. Every such operator can be expressed (non-uniquely) as a linear combination of products of left-invariant vector fields. The collection of all left-invariant differential operators on forms an algebra, denoted . It can be shown that is isomorphic to the universal enveloping algebra .
In the case that arises as the Lie algebra of a real Lie group, one can use left-invariant differential operators to give an analytic proof of the Poincaré–Birkhoff–Witt theorem. Specifically, the algebra of left-invariant differential operators is generated by elements (the left-invariant vector fields) that satisfy the commutation relations of . Thus, by the universal property of the enveloping algebra, is a quotient of . Thus, if the PBW basis elements are linearly independent in —which one can establish analytically—they must certainly be linearly independent in . (And, at this point, the isomorphism of with is apparent.)
Algebra of symbols
The underlying vector space of the symmetric algebra may be given a new algebra structure so that it and the universal enveloping algebra are isomorphic as associative algebras. This leads to the concept of the algebra of symbols: the space of symmetric polynomials, endowed with a product, the star product, that places the algebraic structure of the Lie algebra onto what is otherwise a standard associative algebra. That is, what the PBW theorem obscures (the commutation relations) the algebra of symbols restores into the spotlight.
The algebra is obtained by taking elements of and replacing each generator by an indeterminate, commuting variable to obtain the space of symmetric polynomials over the field . Indeed, the correspondence is trivial: one simply substitutes the symbol for . The resulting polynomial is called the symbol of the corresponding element of . The inverse map is
that replaces each symbol by . The algebraic structure is obtained by requiring that the product act as an isomorphism, that is, so that
for polynomials
The primary issue with this construction is that is not trivially, inherently a member of , as written, and that one must first perform a tedious reshuffling of the basis elements (applying the structure constants as needed) to obtain an element of in the properly ordered basis. An explicit expression for this product can be given: this is the Berezin formula. It follows essentially from the Baker–Campbell–Hausdorff formula for the product of two elements of a Lie group.
A closed form expression is given by
where
and is just in the chosen basis.
The universal enveloping algebra of the Heisenberg algebra is the Weyl algebra (modulo the relation that the center be the unit); here, the product is called the Moyal product.
Representation theory
The universal enveloping algebra preserves the representation theory: the representations of correspond in a one-to-one manner to the modules over . In more abstract terms, the abelian category of all representations of is isomorphic to the abelian category of all left modules over .
The representation theory of semisimple Lie algebras rests on the observation that there is an isomorphism, known as the Kronecker product:
for Lie algebras . The isomorphism follows from a lifting of the embedding
where
is just the canonical embedding (with subscripts, respectively for algebras one and two). It is straightforward to verify that this embedding lifts, given the prescription above. See, however, the discussion of the bialgebra structure in the article on tensor algebras for a review of some of the finer points of doing so: in particular, the shuffle product employed there corresponds to the Wigner-Racah coefficients, i.e. the 6j and 9j-symbols, etc.
Also important is that the universal enveloping algebra of a free Lie algebra is isomorphic to the free associative algebra.
Construction of representations typically proceeds by building the Verma modules of the highest weights.
In a typical context where is acting by infinitesimal transformations, the elements of act like differential operators, of all orders. (See, for example, the realization of the universal enveloping algebra as left-invariant differential operators on the associated group, as discussed above.)
Casimir operators
The center of the universal enveloping algebra can be identified with the centralizer of the Lie algebra within it: any element of the center must commute with all of the Lie algebra, and in particular with its canonical embedding into the enveloping algebra. Because of this, the center is directly useful for classifying representations. For a finite-dimensional semisimple Lie algebra, the Casimir operators form a distinguished basis of the center. These may be constructed as follows.
The center corresponds to linear combinations of all elements that commute with all elements of the Lie algebra, that is, to elements annihilated by the adjoint action: they lie in the kernel of that action. Thus, a technique is needed for computing that kernel. What we have is the action of the adjoint representation on the Lie algebra; we need it on the enveloping algebra. The easiest route is to note that the adjoint action is a derivation, and that the space of derivations can be lifted from the Lie algebra to the tensor algebra and thus to the enveloping algebra. This implies that both of these are differential algebras.
By definition, is a derivation on if it obeys Leibniz's law:
(When is the space of left invariant vector fields on a group , the Lie bracket is that of vector fields.) The lifting is performed by defining
Since is a derivation for any the above defines acting on and
From the PBW theorem, it is clear that all central elements are linear combinations of symmetric homogeneous polynomials in the basis elements of the Lie algebra. The Casimir invariants are the irreducible homogeneous polynomials of a given, fixed degree. That is, given a basis , a Casimir operator of order has the form
where there are terms in the tensor product, and is a completely symmetric tensor of order belonging to the adjoint representation. That is, can be (should be) thought of as an element of Recall that the adjoint representation is given directly by the structure constants, and so an explicit indexed form of the above equations can be given, in terms of the Lie algebra basis; this is originally a theorem of Israel Gel'fand. That is, from , it follows that
where the structure constants are
As an example, the quadratic Casimir operator is
where the coefficient matrix is the inverse of the Killing form. That the Casimir operator belongs to the center follows from the fact that the Killing form is invariant under the adjoint action.
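In the usual index conventions (my assumption; the displayed formula itself did not survive), the quadratic Casimir element reads:

```latex
% Quadratic Casimir element in a basis X_1, ..., X_d of the Lie algebra:
C_{(2)} \;=\; \sum_{i,j} B^{ij}\, X_i X_j ,
\qquad\text{where } (B^{ij}) \text{ is the inverse of the Killing form }
B_{ij} = \operatorname{tr}\!\left(\operatorname{ad}_{X_i}\operatorname{ad}_{X_j}\right).
```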
The center of the universal enveloping algebra of a simple Lie algebra is given in detail by the Harish-Chandra isomorphism.
Rank
The number of algebraically independent Casimir operators of a finite-dimensional semisimple Lie algebra is equal to the rank of that algebra, i.e. is equal to the rank of the Cartan–Weyl basis. This may be seen as follows. For a -dimensional vector space , recall that the determinant is the completely antisymmetric tensor on . Given a matrix , one may write the characteristic polynomial of as
For a -dimensional Lie algebra, that is, an algebra whose adjoint representation is -dimensional, the linear operator
implies that is a -dimensional endomorphism, and so one has the characteristic equation
for elements The non-zero roots of this characteristic polynomial (that are roots for all ) form the root system of the algebra. In general, there are only such roots; this is the rank of the algebra. This implies that the highest value of for which the is non-vanishing is
The are homogeneous polynomials of degree This can be seen in several ways: Given a constant , ad is linear, so that By plugging and chugging in the above, one obtains that
By linearity, if one expands in the basis,
then the polynomial has the form
that is, a is a tensor of rank . By linearity and the commutativity of addition, i.e. that , one concludes that this tensor must be completely symmetric. This tensor is exactly the Casimir invariant of order
The center corresponded to those elements for which for all by the above, these clearly corresponds to the roots of the characteristic equation. One concludes that the roots form a space of rank and that the Casimir invariants span this space. That is, the Casimir invariants generate the center
Example: Rotation group SO(3)
The rotation group SO(3) is of rank one, and thus has one Casimir operator. It is three-dimensional, and thus the Casimir operator must have order (3 − 1) = 2, i.e. be quadratic. Of course, this is the Lie algebra $\mathfrak{so}(3)$ of SO(3). As an elementary exercise, one can compute this directly. Changing notation to with belonging to the adjoint rep, a general algebra element is and direct computation gives
The quadratic term can be read off as , and so the squared angular momentum operator for the rotation group is that Casimir operator. That is,
and explicit computation shows that
after making use of the structure constants
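The computation above can be checked numerically. The following sketch (Python with NumPy; the basis labelling and the decision to work in the adjoint representation are choices made here, not taken from the text) builds the adjoint matrices of $\mathfrak{so}(3)$ from the Levi-Civita structure constants, computes the Killing form, and verifies that the quadratic Casimir element in the form given above is central, acting as the identity in the adjoint representation with the Killing-form normalization; the familiar operator $L_x^2 + L_y^2 + L_z^2$ differs from it only by an overall constant.

```python
# Numerical check of the quadratic Casimir for so(3), assuming the basis
# L_1, L_2, L_3 with [L_i, L_j] = eps_{ijk} L_k (Levi-Civita structure constants).
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0

# Adjoint representation: (ad L_i) L_k = [L_i, L_k] = eps_{ikm} L_m,
# so the matrix of ad L_i has entries (ad L_i)_{mk} = eps_{ikm}.
ad = np.array([eps[i].T for i in range(3)])

# Killing form K_{ij} = tr(ad L_i ad L_j); for so(3) it equals -2 * identity.
K = np.einsum('iab,jba->ij', ad, ad)

# Quadratic Casimir C = K^{ij} (ad L_i)(ad L_j) in the adjoint representation.
C = np.einsum('ij,iab,jbc->ac', np.linalg.inv(K), ad, ad)

assert np.allclose(K, -2 * np.eye(3))
assert np.allclose(C, np.eye(3))   # C acts as a multiple of the identity: it is central
```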
Example: Pseudo-differential operators
A key observation during the construction of above was that it was a differential algebra, by dint of the fact that any derivation on the Lie algebra can be lifted to . Thus, one is led to a ring of pseudo-differential operators, from which one can construct Casimir invariants.
If the Lie algebra acts on a space of linear operators, such as in Fredholm theory, then one can construct Casimir invariants on the corresponding space of operators. The quadratic Casimir operator corresponds to an elliptic operator.
If the Lie algebra acts on a differentiable manifold, then each Casimir operator corresponds to a higher-order differential on the cotangent manifold, the second-order differential being the most common and most important.
If the action of the algebra is isometric, as would be the case for Riemannian or pseudo-Riemannian manifolds endowed with a metric and the symmetry groups SO(N) and SO (P, Q), respectively, one can then contract upper and lower indices (with the metric tensor) to obtain more interesting structures. For the quadratic Casimir invariant, this is the Laplacian. Quartic Casimir operators allow one to square the stress–energy tensor, giving rise to the Yang-Mills action. The Coleman–Mandula theorem restricts the form that these can take, when one considers ordinary Lie algebras. However, the Lie superalgebras are able to evade the premises of the Coleman–Mandula theorem, and can be used to mix together space and internal symmetries.
Examples in particular cases
If , then it has a basis of matrices which satisfy the following identities under the standard bracket: , , and this shows us that the universal enveloping algebra has the presentation as a non-commutative ring.
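The formulas in the preceding paragraph were lost in extraction; assuming the intended example is $\mathfrak{sl}_2$ with its standard basis $e, f, h$ (an assumption, not something stated in the surviving text), the bracket relations and the resulting presentation would read:

```latex
% Sketch, assuming the example is sl_2 with the standard basis e, f, h.
[e, f] = h, \qquad [h, e] = 2e, \qquad [h, f] = -2f,
\qquad\text{so that}\qquad
U(\mathfrak{sl}_2) \cong k\langle e, f, h\rangle / \left( ef - fe - h,\; he - eh - 2e,\; hf - fh + 2f \right).
```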
If is abelian (that is, the bracket is always ), then is commutative; and if a basis of the vector space has been chosen, then can be identified with the polynomial algebra over , with one variable per basis element.
If is the Lie algebra corresponding to the Lie group , then can be identified with the algebra of left-invariant differential operators (of all orders) on ; with lying inside it as the left-invariant vector fields as first-order differential operators.
To relate the above two cases: if is a vector space as abelian Lie algebra, the left-invariant differential operators are the constant coefficient operators, which are indeed a polynomial algebra in the partial derivatives of first order.
The center consists of the left- and right- invariant differential operators; this, in the case of not commutative, is often not generated by first-order operators (see for example Casimir operator of a semi-simple Lie algebra).
Another characterization in Lie group theory is of as the convolution algebra of distributions supported only at the identity element of .
The algebra of differential operators in variables with polynomial coefficients may be obtained starting with the Lie algebra of the Heisenberg group. See Weyl algebra for this; one must take a quotient, so that the central elements of the Lie algebra act as prescribed scalars.
The universal enveloping algebra of a finite-dimensional Lie algebra is a filtered quadratic algebra.
Hopf algebras and quantum groups
The construction of the group algebra for a given group is in many ways analogous to constructing the universal enveloping algebra for a given Lie algebra. Both constructions are universal and translate representation theory into module theory. Furthermore, both group algebras and universal enveloping algebras carry natural comultiplications that turn them into Hopf algebras. This is made precise in the article on the tensor algebra: the tensor algebra has a Hopf algebra structure on it, and because the Lie bracket is consistent with (obeys the consistency conditions for) that Hopf structure, it is inherited by the universal enveloping algebra.
Given a Lie group , one can construct the vector space of continuous complex-valued functions on , and turn it into a C*-algebra. This algebra has a natural Hopf algebra structure: given two functions
, one defines multiplication as
and comultiplication as
the counit as
and the antipode as
Now, the Gelfand–Naimark theorem essentially states that every commutative Hopf algebra is isomorphic to the Hopf algebra of continuous functions on some compact topological group —the theory of compact topological groups and the theory of commutative Hopf algebras are the same. For Lie groups, this implies that is isomorphically dual to ; more precisely, it is isomorphic to a subspace of the dual space
These ideas can then be extended to the non-commutative case. One starts by defining the quasi-triangular Hopf algebras, and then performing what is called a quantum deformation to obtain the quantum universal enveloping algebra, or quantum group, for short.
See also
Milnor–Moore theorem
Harish-Chandra homomorphism
References
Shlomo Sternberg (2004), Lie algebras, Harvard University.
Ring theory
Hopf algebras
Representation theory of Lie algebras | Universal enveloping algebra | [
"Mathematics"
] | 7,310 | [
"Fields of abstract algebra",
"Ring theory"
] |
363,628 | https://en.wikipedia.org/wiki/Tensor%20algebra | In mathematics, the tensor algebra of a vector space V, denoted T(V) or T(V), is the algebra of tensors on V (of any rank) with multiplication being the tensor product. It is the free algebra on V, in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V, in the sense of the corresponding universal property (see below).
The tensor algebra is important because many other algebras arise as quotient algebras of T(V). These include the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
The tensor algebra also has two coalgebra structures; one simple one, which does not make it a bialgebra, but does lead to the concept of a cofree coalgebra, and a more complicated one, which yields a bialgebra, and can be extended by giving an antipode to create a Hopf algebra structure.
Note: In this article, all algebras are assumed to be unital and associative. The unit is explicitly required to define the coproduct.
Construction
Let V be a vector space over a field K. For any nonnegative integer k, we define the kth tensor power of V to be the tensor product of V with itself k times:
That is, TkV consists of all tensors on V of order k. By convention T0V is the ground field K (as a one-dimensional vector space over itself).
We then construct T(V) as the direct sum of TkV for k = 0,1,2,…
The multiplication in T(V) is determined by the canonical isomorphism
given by the tensor product, which is then extended by linearity to all of T(V). This multiplication rule implies that the tensor algebra T(V) is naturally a graded algebra with TkV serving as the grade-k subspace. This grading can be extended to a Z-grading by appending subspaces for negative integers k.
The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring. If R is a non-commutative ring, one can still perform the construction for any R-R bimodule M. (It does not work for ordinary R-modules because the iterated tensor products cannot be formed.)
Adjunction and universal property
The tensor algebra is also called the free algebra on the vector space , and is functorial; this means that the map extends to linear maps for forming a functor from the category of -vector spaces to the category of associative algebras. Similarly with other free constructions, the functor is left adjoint to the forgetful functor that sends each associative -algebra to its underlying vector space.
Explicitly, the tensor algebra satisfies the following universal property, which formally expresses the statement that it is the most general algebra containing V:
Any linear map from to an associative algebra over can be uniquely extended to an algebra homomorphism from to as indicated by the following commutative diagram:
Here is the canonical inclusion of into . As for other universal properties, the tensor algebra can be defined as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but this definition requires to prove that an object satisfying this property exists.
The above universal property implies that is a functor from the category of vector spaces over , to the category of -algebras. This means that any linear map between -vector spaces and extends uniquely to a -algebra homomorphism from to .
Non-commutative polynomials
If V has finite dimension n, another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V, those become non-commuting variables (or indeterminates) in T(V), subject to no constraints beyond associativity, the distributive law and K-linearity.
Note that the algebra of polynomials on V is not T(V), but rather T(V∗): a (homogeneous) linear function on V is an element of the dual space V∗; for example, coordinates on a vector space are covectors, as they take in a vector and give out a scalar (the given coordinate of the vector).
Quotients
Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T(V). Examples of this are the exterior algebra, the symmetric algebra, Clifford algebras, the Weyl algebra and universal enveloping algebras.
Coalgebra
The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra, and can be further extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra, further down.
The development provided below can be equally well applied to the exterior algebra, using the wedge symbol in place of the tensor symbol ; a sign must also be kept track of, when permuting elements of the exterior algebra. This correspondence also lasts through the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure.
Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product by the symmetrized tensor product , i.e. that product where
In each case, this is possible because the alternating product and the symmetric product obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gave rise to a quotient space, the quotient space inherits the Hopf algebra structure.
In the language of category theory, one says that there is a functor from the category of -vector spaces to the category of -associative algebras. But there is also a functor taking vector spaces to the category of exterior algebras, and a functor taking vector spaces to symmetric algebras. There is a natural map from to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural.
Coproduct
The coalgebra is obtained by defining a coproduct or diagonal operator
Here, is used as a short-hand for to avoid an explosion of parentheses. The symbol is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product , which is already being used to denote multiplication in the tensor algebra (see the section Multiplication, below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the symbol to be used in place of the symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend.
The definition of the operator is most easily built up in stages, first by defining it for elements and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then
and
where is the unit of the field . By linearity, one obviously has
for all It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that
where is the identity map on . Indeed, one gets
and likewise for the other side. At this point, one could invoke a lemma, and say that extends trivially, by linearity, to all of , because is a free object and is a generator of the free algebra, and is a homomorphism. However, it is insightful to provide explicit expressions. So, for , one has (by definition) the homomorphism
Expanding, one has
In the above expansion, there is no need to ever write as this is just plain-old scalar multiplication in the algebra; that is, one trivially has that
The extension above preserves the algebra grading. That is,
Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m:
where the symbol, which should appear as ш, the sha, denotes the shuffle product. This is expressed in the second summation, which is taken over all (p, m − p)-shuffles. The shuffle is
By convention, one takes Sh(m,0) and Sh(0,m) to equal {id: {1, ..., m} → {1, ..., m}}. It is also convenient to take the pure tensor products and
to equal 1 for p = 0 and p = m, respectively (the empty product in ). The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left, and one on the right.
Equivalently,
where the products are in , and where the sum is over all subsets of .
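The subset description can likewise be spelled out in code (Python; the word encoding follows the sketch in the construction section, and each returned pair stands for one term left ⊠ right of the external tensor product, with coefficient 1):

```python
# Sketch of the coproduct on a pure tensor v_1 (x) ... (x) v_m: sum over all
# subsets S of {1,...,m} of the ordered product over S, externally tensored
# with the ordered product over the complement of S.
from itertools import combinations

def coproduct(word):
    """Return one (left_word, right_word) pair per subset of positions."""
    m = len(word)
    terms = []
    for p in range(m + 1):
        for S in combinations(range(m), p):
            left = tuple(word[i] for i in S)
            right = tuple(word[i] for i in range(m) if i not in S)
            terms.append((left, right))
    return terms

print(coproduct(('v1', 'v2')))
# [((), ('v1','v2')), (('v1',), ('v2',)), (('v2',), ('v1',)), (('v1','v2'), ())]
```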
As before, the algebra grading is preserved:
Counit
The counit is given by the projection of the field component out from the algebra. This can be written as for and for . By homomorphism under the tensor product , this extends to
for all
It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra:
Working this explicitly, one has
where, for the last step, one has made use of the isomorphism , as is appropriate for the defining axiom of the counit.
Bialgebra
A bialgebra defines both multiplication, and comultiplication, and requires them to be compatible.
Multiplication
Multiplication is given by an operator
which, in this case, was already given as the "internal" tensor product. That is,
That is, The above should make it clear why the symbol needs to be used: the was actually one and the same thing as ; and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product of the tensor algebra corresponds to the multiplication used in the definition of an algebra, whereas the tensor product is the one required in the definition of comultiplication in a coalgebra. These two tensor products are not the same thing!
Unit
The unit for the algebra
is just the embedding, so that
That the unit is compatible with the tensor product is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, for field element k and any More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams):
on , and that symmetrically, on , that
where the right-hand side of these equations should be understood as the scalar product.
Compatibility
The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that
Similarly, the unit is compatible with comultiplication:
The above requires the use of the isomorphism in order to work; without this, one loses linearity. Component-wise,
with the right-hand side making use of the isomorphism.
Multiplication and the counit are compatible:
whenever x or y are not elements of , and otherwise, one has scalar multiplication on the field: The most difficult to verify is the compatibility of multiplication and comultiplication:
where exchanges elements. The compatibility condition only needs to be verified on ; the full compatibility follows as a homomorphic extension to all of The verification is verbose but straightforward; it is not given here, except for the final result:
For an explicit expression for this was given in the coalgebra section, above.
Hopf algebra
The Hopf algebra adds an antipode to the bialgebra axioms. The antipode on is given by
This is sometimes called the "anti-identity". The antipode on is given by
and on by
This extends homomorphically to
Compatibility
Compatibility of the antipode with multiplication and comultiplication requires that
This is straightforward to verify componentwise on :
Similarly, on :
Recall that
and that
for any that is not in
One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on and proceeding by induction.
Cofree cocomplete coalgebra
One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by
Here, as before, one uses the notational trick (recalling that trivially).
This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on T(V∗), where V∗ denotes the dual vector space of linear maps V → F. In the same way that the tensor algebra is a free algebra, the corresponding coalgebra is termed cocomplete co-free. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product where (i,j) denotes the binomial coefficient for . This bialgebra is known as the divided power Hopf algebra.
The difference between this, and the other coalgebra is most easily seen in the term. Here, one has that
for , which is clearly missing a shuffled term, as compared to before.
See also
Braided vector space
Braided Hopf algebra
Monoidal category
Multilinear algebra
Fock space
References
(See Chapter 3 §5)
Algebras
Multilinear algebra
Tensors
Hopf algebras | Tensor algebra | [
"Mathematics",
"Engineering"
] | 3,031 | [
"Tensors",
"Algebras",
"Mathematical structures",
"Algebraic structures"
] |
363,695 | https://en.wikipedia.org/wiki/BLAST%20%28biotechnology%29 | In bioinformatics, BLAST (basic local alignment search tool) is an algorithm and program for comparing primary biological sequence information, such as the amino-acid sequences of proteins or the nucleotides of DNA and/or RNA sequences. A BLAST search enables a researcher to compare a subject protein or nucleotide sequence (called a query) with a library or database of sequences, and identify database sequences that resemble the query sequence above a certain threshold. For example, following the discovery of a previously unknown gene in the mouse, a scientist will typically perform a BLAST search of the human genome to see if humans carry a similar gene; BLAST will identify sequences in the human genome that resemble the mouse gene based on similarity of sequence.
Background
BLAST is one of the most widely used bioinformatics programs for sequence searching. It addresses a fundamental problem in bioinformatics research. The heuristic algorithm it uses is much faster than other approaches, such as calculating an optimal alignment. This emphasis on speed is vital to making the algorithm practical on the huge genome databases currently available, although subsequent algorithms can be even faster.
The BLAST program was designed by Eugene Myers, Stephen Altschul, Warren Gish, David J. Lipman and Webb Miller at the NIH and was published in J. Mol. Biol. in 1990. BLAST extended the alignment work of a previously developed program for protein and DNA sequence similarity searches, FASTA, by adding a novel stochastic model developed by Samuel Karlin and Stephen Altschul. They proposed "a method for estimating similarities between the known DNA sequence of one organism with that of another", and their work has been described as "the statistical foundation for BLAST." Subsequently, Altschul, Gish, Miller, Myers, and Lipman designed and implemented the BLAST program, which was published in the Journal of Molecular Biology in 1990 and has been cited over 100,000 times since.
While BLAST is faster than any Smith-Waterman implementation for most cases, it cannot "guarantee the optimal alignments of the query and database sequences" as Smith-Waterman algorithm does. The Smith-Waterman algorithm was an extension of a previous optimal method, the Needleman–Wunsch algorithm, which was the first sequence alignment algorithm that was guaranteed to find the best possible alignment. However, the time and space requirements of these optimal algorithms far exceed the requirements of BLAST.
BLAST is more time-efficient than FASTA by searching only for the more significant patterns in the sequences, yet with comparative sensitivity. This could be further realized by understanding the algorithm of BLAST introduced below.
Examples of other questions that researchers use BLAST to answer are:
Which bacterial species have a protein that is related in lineage to a certain protein with known amino-acid sequence?
What other genes encode proteins that exhibit structures or motifs such as ones that have just been determined?
BLAST is also often used as part of other algorithms that require approximate sequence matching.
BLAST is available on the web on the NCBI website. Different types of BLASTs are available according to the query sequences and the target databases. Alternative implementations include AB-BLAST (formerly known as WU-BLAST), FSA-BLAST (last updated in 2006), and ScalaBLAST.
The original paper by Altschul, et al. was the most highly cited paper published in the 1990s.
Input
Input sequences (in FASTA or Genbank format), database to search and other optional parameters such as scoring matrix.
Output
BLAST output can be delivered in a variety of formats. These formats include HTML, plain text, and XML formatting. For NCBI's webpage, the default format for output is HTML. When performing a BLAST on NCBI, the results are given in a graphical format showing the hits found, a table showing sequence identifiers for the hits with scoring related data, as well as alignments for the sequence of interest and the hits received with corresponding BLAST scores for these. The easiest to read and most informative of these is probably the table.
If one is attempting to search for a proprietary sequence or simply one that is unavailable in databases available to the general public through sources such as NCBI, there is a BLAST program available for download to any computer, at no cost. This can be found at BLAST+ executables. There are also commercial programs available for purchase. Databases can be found on the NCBI site, as well as on the Index of BLAST databases (FTP).
Process
Using a heuristic method, BLAST finds similar sequences, by locating short matches between the two sequences. This process of finding similar sequences is called seeding. It is after this first match that BLAST begins to make local alignments. While attempting to find similarity in sequences, sets of common letters, known as words, are very important. For example, suppose that the sequence contains the following stretch of letters, GLKFA. If a BLAST was being conducted under normal conditions, the word size would be 3 letters. In this case, using the given stretch of letters, the searched words would be GLK, LKF, and KFA. The heuristic algorithm of BLAST locates all common three-letter words between the sequence of interest and the hit sequence or sequences from the database. This result will then be used to build an alignment. After making words for the sequence of interest, the rest of the words are also assembled. These words must satisfy a requirement of having a score of at least the threshold T, when compared by using a scoring matrix.
One commonly used scoring matrix for BLAST searches is BLOSUM62, although the optimal scoring matrix depends on sequence similarity. Once both words and neighborhood words are assembled and compiled, they are compared to the sequences in the database in order to find matches. The threshold score T determines whether or not a particular word will be included in the alignment. Once seeding has been conducted, the alignment which is only 3 residues long, is extended in both directions by the algorithm used by BLAST. Each extension impacts the score of the alignment by either increasing or decreasing it. If this score is higher than a pre-determined T, the alignment will be included in the results given by BLAST. However, if this score is lower than this pre-determined T, the alignment will cease to extend, preventing the areas of poor alignment from being included in the BLAST results. Note that increasing the T score limits the amount of space available to search, decreasing the number of neighborhood words, while at the same time speeding up the process of BLAST
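The word-building step amounts to sliding a window of length k along the query; a minimal sketch (Python, reproducing the GLKFA example above with the default protein word size of 3) is:

```python
def make_words(query, k=3):
    """Overlapping k-letter words of a query sequence, as used for seeding."""
    return [query[i:i + k] for i in range(len(query) - k + 1)]

print(make_words("GLKFA"))   # ['GLK', 'LKF', 'KFA']
```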
Algorithm
To run the software, BLAST requires a query sequence to search for, and a sequence to search against (also called the target sequence) or a sequence database containing multiple such sequences. BLAST will find sub-sequences in the database which are similar to subsequences in the query. In typical usage, the query sequence is much smaller than the database, e.g., the query may be one thousand nucleotides while the database is several billion nucleotides.
The main idea of BLAST is that there are often High-scoring Segment Pairs (HSP) contained in a statistically significant alignment. BLAST searches for high scoring sequence alignments between the query sequence and the existing sequences in the database using a heuristic approach that approximates the Smith-Waterman algorithm. However, the exhaustive Smith-Waterman approach is too slow for searching large genomic databases such as GenBank. Therefore, the BLAST algorithm uses a heuristic approach that is less accurate than the Smith-Waterman algorithm but over 50 times faster. The speed and relatively good accuracy of BLAST are among the key technical innovations of the BLAST programs.
An overview of the BLAST algorithm (a protein to protein search) is as follows:
Remove low-complexity region or sequence repeats in the query sequence.
"Low-complexity region" means a region of a sequence composed of few kinds of elements. These regions might give high scores that confuse the program to find the actual significant sequences in the database, so they should be filtered out. The regions will be marked with an X (protein sequences) or N (nucleic acid sequences) and then be ignored by the BLAST program. To filter out the low-complexity regions, the SEG program is used for protein sequences and the program DUST is used for DNA sequences. On the other hand, the program XNU is used to mask off the tandem repeats in protein sequences.
Make a k-letter word list of the query sequence.
Take k=3 for example, we list the words of length 3 in the query protein sequence (k is usually 11 for a DNA sequence) "sequentially", until the last letter of the query sequence is included. The method is illustrated in figure 1.
List the possible matching words.
This step is one of the main differences between BLAST and FASTA. FASTA cares about all of the common words in the database and query sequences that are listed in step 2; however, BLAST only cares about the high-scoring words. The scores are created by comparing the word in the list in step 2 with all the 3-letter words. By using the scoring matrix (substitution matrix) to score the comparison of each residue pair, there are 20^3 possible match scores for a 3-letter word. For example, the score obtained by comparing PQG with PEG and PQA is respectively 15 and 12 with the BLOSUM62 weighting scheme. For DNA words, a match is scored as +5 and a mismatch as -4, or as +2 and -3. After that, a neighborhood word score threshold T is used to reduce the number of possible matching words. The words whose scores are greater than the threshold T will remain in the possible matching words list, while those with lower scores will be discarded. For example, PEG is kept, but PQA is abandoned when T is 13.
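A sketch of this filtering step is given below (Python). The scoring table contains only the BLOSUM62 entries needed to reproduce the PEG/PQA example from the text; the default score used for any other residue pair is a placeholder, not a genuine matrix value.

```python
# Sketch of neighborhood-word scoring against a query word, with threshold T.
from itertools import product

def word_score(word, cand, score):
    return sum(score(a, b) for a, b in zip(word, cand))

def neighborhood(word, alphabet, score, T):
    """All words over `alphabet` of the same length scoring at least T."""
    return ["".join(c) for c in product(alphabet, repeat=len(word))
            if word_score(word, c, score) >= T]

toy = {("P", "P"): 7, ("Q", "E"): 2, ("Q", "Q"): 5, ("G", "G"): 6, ("G", "A"): 0}
score = lambda a, b: toy.get((a, b), toy.get((b, a), -4))   # -4 is a placeholder

print(word_score("PQG", "PEG", score))   # 15 -> kept when T = 13
print(word_score("PQG", "PQA", score))   # 12 -> discarded when T = 13
```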
Organize the remaining high-scoring words into an efficient search tree.
This allows the program to rapidly compare the high-scoring words to the database sequences.
Repeat step 3 to 4 for each k-letter word in the query sequence.
Scan the database sequences for exact matches with the remaining high-scoring words.
The BLAST program scans the database sequences for the remaining high-scoring word, such as PEG, of each position. If an exact match is found, this match is used to seed a possible un-gapped alignment between the query and database sequences.
Extend the exact matches to high-scoring segment pair (HSP).
The original version of BLAST stretches a longer alignment between the query and the database sequence in the left and right directions, from the position where the exact match occurred. The extension does not stop until the accumulated total score of the HSP begins to decrease. A simplified example is presented in figure 2.
To save more time, a newer version of BLAST, called BLAST2 or gapped BLAST, has been developed. BLAST2 adopts a lower neighborhood word score threshold to maintain the same level of sensitivity for detecting sequence similarity. Therefore, the list of possible matching words list in step 3 becomes longer. Next, the exact matched regions, within distance A from each other on the same diagonal in figure 3, will be joined as a longer new region. Finally, the new regions are then extended by the same method as in the original version of BLAST, and the HSPs' (High-scoring segment pair) scores of the extended regions are then created by using a substitution matrix as before.
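The extension rule of the original (un-gapped) BLAST can be sketched as follows (Python; `score` is a substitution-matrix lookup as in the previous sketch, and the stopping rule is the simplified "stop once the accumulated score starts to drop" criterion rather than the X-drop heuristic used by real implementations):

```python
def extend_right(query, subject, qpos, spos, seed_len, score):
    """Extend an exact seed match of length seed_len to the right, un-gapped,
    returning the best accumulated score and the corresponding length."""
    running = sum(score(query[qpos + i], subject[spos + i]) for i in range(seed_len))
    best, best_len = running, seed_len
    i = seed_len
    while qpos + i < len(query) and spos + i < len(subject):
        running += score(query[qpos + i], subject[spos + i])
        i += 1
        if running > best:
            best, best_len = running, i
        elif running < best:       # accumulated score has begun to decrease
            break
    return best, best_len
```

A left extension works symmetrically; the two together delimit the high-scoring segment pair.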
List all of the HSPs in the database whose score is high enough to be considered.
We list the HSPs whose scores are greater than the empirically determined cutoff score S. By examining the distribution of the alignment scores modeled by comparing random sequences, a cutoff score S can be determined such that its value is large enough to guarantee the significance of the remaining HSPs.
Evaluate the significance of the HSP score.
BLAST next assesses the statistical significance of each HSP score by exploiting the Gumbel extreme value distribution (EVD). (It is proved that the distribution of Smith-Waterman local alignment scores between two random sequences follows the Gumbel EVD. For local alignments containing gaps it is not proved.). In accordance with the Gumbel EVD, the probability p of observing a score S equal to or greater than x is given by the equation
where
The statistical parameters and are estimated by fitting the distribution of the un-gapped local alignment scores, of the query sequence and a lot of shuffled versions (Global or local shuffling) of a database sequence, to the Gumbel extreme value distribution. Note that and depend upon the substitution matrix, gap penalties, and sequence composition (the letter frequencies). and are the effective lengths of the query and database sequences, respectively. The original sequence length is shortened to the effective length to compensate for the edge effect (an alignment start near the end of one of the query or database sequence is likely not to have enough sequence to build an optimal alignment). They can be calculated as
where is the average expected score per aligned pair of residues in an alignment of two random sequences. Altschul and Gish gave the typical values, , , and , for un-gapped local alignment using BLOSUM62 as the substitution matrix. Using the typical values for assessing the significance is called the lookup table method; it is not accurate. The expect score E of a database match is the number of times that an unrelated database sequence would obtain a score S higher than x by chance. The expectation E obtained in a search for a database of D sequences is given by
Furthermore, when , E could be approximated by the Poisson distribution as
This expectation or expect value "E" (often called an E score or E-value or e-value) assessing the significance of the HSP score for un-gapped local alignment is reported in the BLAST results. The calculation shown here is modified if individual HSPs are combined, such as when producing gapped alignments (described below), due to the variation of the statistical parameters.
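The formulas above amount to two short expressions, sketched here (Python); the parameter values in the example call are placeholders standing in for the λ, K and effective lengths discussed in the text, not the values quoted in the cited references:

```python
import math

def expect_value(S, m_eff, n_eff, lam, K):
    """Karlin-Altschul expect value E = K * m' * n' * exp(-lambda * S)."""
    return K * m_eff * n_eff * math.exp(-lam * S)

def p_value(E):
    """Probability of at least one chance hit with score >= S: p = 1 - exp(-E)."""
    return 1.0 - math.exp(-E)

E = expect_value(S=80, m_eff=300, n_eff=2_000_000, lam=0.318, K=0.13)
print(E, p_value(E))   # E << 1: such a score is unlikely to arise by chance
```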
Make two or more HSP regions into a longer alignment.
Sometimes, we find two or more HSP regions in one database sequence that can be made into a longer alignment. This provides additional evidence of the relation between the query and database sequence. There are two methods, the Poisson method and the sum-of-scores method, to compare the significance of the newly combined HSP regions. Suppose that there are two combined HSP regions with the pairs of scores (65, 40) and (52, 45), respectively. The Poisson method gives more significance to the set with the maximal lower score (45>40). However, the sum-of-scores method prefers the first set, because 65 + 40 (105) is greater than 52 + 45 (97). The original BLAST uses the Poisson method; gapped BLAST and WU-BLAST use the sum-of-scores method.
Show the gapped Smith-Waterman local alignments of the query and each of the matched database sequences.
The original BLAST only generates un-gapped alignments including the initially found HSPs individually, even when there is more than one HSP found in one database sequence.
BLAST2 produces a single alignment with gaps that can include all of the initially found HSP regions. Note that the computation of the score and its corresponding E-value involves use of adequate gap penalties.
Report every match whose expect score is lower than a threshold parameter E.
Types of BLAST
BLASTn (Nucleotide BLAST)
BLASTn compares one or more nucleotide sequence to a database or another sequence. This is useful when trying to identify evolutionary relationships between organisms.
tBLASTn
tBLASTn is used to search for proteins in sequences that haven't been translated into proteins yet. It takes a protein sequence and compares it to all possible translations of a DNA sequence. This is useful when looking for similar protein-coding regions in DNA sequences that haven't been fully annotated, like ESTs (short, single-read cDNA sequences) and HTGs (draft genome sequences). Since these sequences don't have known protein translations, we can only search for them using tBLASTn.
BLASTx
BLASTx compares a nucleotide query sequence, which can be translated into six different protein sequences, against a database of known protein sequences. This tool is useful when the reading frame of the DNA sequence is uncertain or contains errors that might cause mistakes in protein-coding. BLASTx provides combined statistics for hits across all frames, making it helpful for the initial analysis of new DNA sequences.
BLASTp
BLASTp, or Protein BLAST, is used to compare protein sequences. You can input one or more protein sequences that you want to compare against a single protein sequence or a database of protein sequences. This is useful when you're trying to identify a protein by finding similar sequences in existing protein databases.
Parallel BLAST
Parallel BLAST versions of split databases are implemented using MPI and Pthreads, and have been ported to various platforms including Windows, Linux, Solaris, Mac OS X, and AIX. Popular approaches to parallelize BLAST include query distribution, hash table segmentation, computation parallelization, and database segmentation (partition). Databases are split into equal sized pieces and stored locally on each node. Each query is run on all nodes in parallel and the resultant BLAST output files from all nodes merged to yield the final output. Specific implementations include MPIblast, ScalaBLAST, DCBLAST and so on.
MPIblast makes use of a database segmentation technique to parallelize the computation process. This allows for significant performance improvements when conducting BLAST searches across a set of nodes in a cluster. In some scenarios a superlinear speedup is achievable. This makes MPIblast suitable for the extensive genomic datasets that are typically used in bioinformatics.
BLAST generally runs at a speed of O(n), where n is the size of the database. The time to complete the search increases linearly as the size of the database increases. MPIblast utilizes parallel processing to speed up the search. The ideal speed for any parallel computation is a complexity of O(n/p), with n being the size of the database and p being the number of processors. This would indicate that the job is evenly distributed among the p number of processors. This is visualized in the included graph. The superlinear speedup that can sometimes occur with MPIblast can have a complexity better than O(n/p). This occurs because the cache memory can be used to decrease the run time.
Alternatives to BLAST
The predecessor to BLAST, FASTA, can also be used for protein and DNA similarity searching. FASTA provides a similar set of programs for comparing proteins to protein and DNA databases, DNA to DNA and protein databases, and includes additional programs for working with unordered short peptides and DNA sequences. In addition, the FASTA package provides SSEARCH, a vectorized implementation of the rigorous Smith-Waterman algorithm. FASTA is slower than BLAST, but provides a much wider range of scoring matrices, making it easier to tailor a search to a specific evolutionary distance.
An extremely fast but considerably less sensitive alternative to BLAST is BLAT (Blast Like Alignment Tool). While BLAST does a linear search, BLAT relies on k-mer indexing the database, and can thus often find seeds faster. Another software alternative similar to BLAT is PatternHunter.
Advances in sequencing technology in the late 2000s has made searching for very similar nucleotide matches an important problem. New alignment programs tailored for this use typically use BWT-indexing of the target database (typically a genome). Input sequences can then be mapped very quickly, and output is typically in the form of a BAM file. Example alignment programs are BWA, SOAP, and Bowtie.
For protein identification, searching for known domains (for instance from Pfam) by matching with Hidden Markov Models is a popular alternative, such as HMMER.
An alternative to BLAST for comparing two banks of sequences is PLAST. PLAST provides a high-performance general purpose bank to bank sequence similarity search tool relying on the PLAST and ORIS algorithms. Results of PLAST are very similar to BLAST, but PLAST is significantly faster and capable of comparing large sets of sequences with a small memory (i.e. RAM) footprint.
For applications in metagenomics, where the task is to compare billions of short DNA reads against tens of millions of protein references, DIAMOND runs at up to 20,000 times as fast as BLASTX, while maintaining a high level of sensitivity.
The open-source software MMseqs is an alternative to BLAST/PSI-BLAST, which improves on current search tools over the full range of speed-sensitivity trade-off, achieving sensitivities better than PSI-BLAST at more than 400 times its speed.
Optical computing approaches have been suggested as promising alternatives to the current electrical implementations. OptCAM is an example of such approaches and is shown to be faster than BLAST.
Comparing BLAST and the Smith-Waterman Process
While both Smith-Waterman and BLAST are used to find homologous sequences by searching and comparing a query sequence with those in the databases, they do have their differences.
Because BLAST is based on a heuristic algorithm, the results received through BLAST will not include all the possible hits within the database; in particular, BLAST misses hard-to-find matches.
An alternative in order to find all the possible hits would be to use the Smith-Waterman algorithm. This method varies from the BLAST method in two areas, accuracy and speed. The Smith-Waterman option provides better accuracy, in that it finds matches that BLAST cannot, because it does not exclude any information. Therefore, it is necessary for remote homology. However, when compared to BLAST, it is more time consuming and requires large amounts of computing power and memory. However, advances have been made to speed up the Smith-Waterman search process dramatically. These advances include FPGA chips and SIMD technology.
For more complete results from BLAST, the settings can be changed from their default settings. The optimal settings for a given sequence, however, may vary. The settings one can change are E-Value, gap costs, filters, word size, and substitution matrix.
Note, the algorithm used for BLAST was developed from the algorithm used for Smith-Waterman. BLAST employs an alignment which finds "local alignments between sequences by finding short matches and from these initial matches (local) alignments are created".
BLAST output visualization
To help users interpret BLAST results, different software is available. Grouped by installation and use, analysis features, and underlying technology, some available tools are:
NCBI BLAST service
general BLAST output interpreters, GUI-based: JAMBLAST, Blast Viewer, BLASTGrabber
integrated BLAST environments: PLAN, BlastStation-Free, SequenceServer
BLAST output parsers: MuSeqBox, Zerg, BioParser, BLAST-Explorer, SequenceServer
specialized BLAST-related tools: MEGAN, BLAST2GENE, BOV, Circoletto
Example visualizations of BLAST results are shown in Figure 4 and 5.
Uses of BLAST
BLAST can be used for several purposes. These include identifying species, locating domains, establishing phylogeny, DNA mapping, and comparison.
Identifying species: With the use of BLAST, you can possibly correctly identify a species or find homologous species. This can be useful, for example, when you are working with a DNA sequence from an unknown species.
Locating domains: When working with a protein sequence you can input it into BLAST, to locate known domains within the sequence of interest.
Establishing phylogeny: Using the results received through BLAST you can create a phylogenetic tree using the BLAST web-page. Phylogenies based on BLAST alone are less reliable than other purpose-built computational phylogenetic methods, so should only be relied upon for "first pass" phylogenetic analyses.
DNA mapping: When working with a known species, and looking to sequence a gene at an unknown location, BLAST can compare the chromosomal position of the sequence of interest, to relevant sequences in the database(s). NCBI has a "Magic-BLAST" tool built around BLAST for this purpose.
Comparison: When working with genes, BLAST can locate common genes in two related species, and can be used to map annotations from one organism to another.
Classifying taxonomy
BLAST can use genetic sequences to compare multiple taxa against known taxonomical data. By doing this, it can provide a picture of the evolutionary relationships between various species (Fig.6). This is a useful way to identify orphan genes, since if the gene shows up in an organism outside of the ancestral lineage, then it wouldn't be classified as an orphan gene.
Although this method is helpful, some more accurate options to find homologs would be through pairwise sequence alignment and multiple sequence alignment.
See also
PSI Protein Classifier
Needleman-Wunsch algorithm
Smith-Waterman algorithm
Sequence alignment
Sequence alignment software
Sequerome
eTBLAST
References
External links
BLAST+ executables — free source downloads
Bioinformatics algorithms
Phylogenetics software
Laboratory software
Public-domain software
Free bioinformatics software | BLAST (biotechnology) | [
"Biology"
] | 5,162 | [
"Bioinformatics",
"Bioinformatics algorithms"
] |
363,762 | https://en.wikipedia.org/wiki/Invariant%20theory | Invariant theory is a branch of abstract algebra dealing with actions of groups on algebraic varieties, such as vector spaces, from the point of view of their effect on functions. Classically, the theory dealt with the question of explicit description of polynomial functions that do not change, or are invariant, under the transformations from a given linear group. For example, if we consider the action of the special linear group SLn on the space of n by n matrices by left multiplication, then the determinant is an invariant of this action because the determinant of A X equals the determinant of X, when A is in SLn.
Introduction
Let $G$ be a group, and $V$ a finite-dimensional vector space over a field $k$ (which in classical invariant theory was usually assumed to be the complex numbers). A representation of $G$ in $V$ is a group homomorphism $\pi: G \to GL(V)$, which induces a group action of $G$ on $V$. If $k[V]$ is the space of polynomial functions on $V$, then the group action of $G$ on $V$ produces an action on $k[V]$ by the following formula: $(g \cdot f)(x) = f(g^{-1} x)$ for $g \in G$, $f \in k[V]$, $x \in V$.
With this action it is natural to consider the subspace of all polynomial functions which are invariant under this group action, in other words the set of polynomials $f$ such that $g \cdot f = f$ for all $g \in G$. This space of invariant polynomials is denoted $k[V]^G$.
First problem of invariant theory: Is $k[V]^G$ a finitely generated algebra over $k$?
For example, if $G = SL_n$ and $V = M_n$ is the space of square matrices, and the action of $G$ on $V$ is given by left multiplication, then $k[V]^G$ is isomorphic to a polynomial algebra in one variable, generated by the determinant. In other words, in this case, every invariant polynomial is a linear combination of powers of the determinant polynomial. So in this case, $k[V]^G$ is finitely generated over $k$.
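A quick numerical illustration of this example (Python with NumPy; the random matrices and the determinant-rescaling trick are choices made here) checks that left multiplication by a matrix of determinant one leaves the determinant unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 3))
A = rng.standard_normal((3, 3))
d = np.linalg.det(A)
A = A / (np.sign(d) * abs(d) ** (1.0 / 3.0))   # rescale so that det(A) = 1

print(np.linalg.det(A))                         # ~ 1.0
print(np.linalg.det(A @ X), np.linalg.det(X))   # equal up to rounding: det(AX) = det(X)
```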
If the answer is yes, then the next question is to find a minimal basis, and ask whether the module of polynomial relations between the basis elements (known as the syzygies) is finitely generated over .
Invariant theory of finite groups has intimate connections with Galois theory. One of the first major results was the main theorem on the symmetric functions that described the invariants of the symmetric group acting on the polynomial ring by permutations of the variables. More generally, the Chevalley–Shephard–Todd theorem characterizes finite groups whose algebra of invariants is a polynomial ring. Modern research in invariant theory of finite groups emphasizes "effective" results, such as explicit bounds on the degrees of the generators. The case of positive characteristic, ideologically close to modular representation theory, is an area of active study, with links to algebraic topology.
Invariant theory of infinite groups is inextricably linked with the development of linear algebra, especially, the theories of quadratic forms and determinants. Another subject with strong mutual influence was projective geometry, where invariant theory was expected to play a major role in organizing the material. One of the highlights of this relationship is the symbolic method. Representation theory of semisimple Lie groups has its roots in invariant theory.
David Hilbert's work on the question of the finite generation of the algebra of invariants (1890) resulted in the creation of a new mathematical discipline, abstract algebra. A later paper of Hilbert (1893) dealt with the same questions in more constructive and geometric ways, but remained virtually unknown until David Mumford brought these ideas back to life in the 1960s, in a considerably more general and modern form, in his geometric invariant theory. In large measure due to the influence of Mumford, the subject of invariant theory is seen to encompass the theory of actions of linear algebraic groups on affine and projective varieties. A distinct strand of invariant theory, going back to the classical constructive and combinatorial methods of the nineteenth century, has been developed by Gian-Carlo Rota and his school. A prominent example of this circle of ideas is given by the theory of standard monomials.
Examples
Simple examples of invariant theory come from computing the invariant monomials from a group action. For example, consider the -action on sending
Then, since are the lowest degree monomials which are invariant, we have that
This example forms the basis for doing many computations.
The nineteenth-century origins
Cayley first established invariant theory in his "On the Theory of Linear Transformations (1845)." In the opening of his paper, Cayley credits an 1841 paper of George Boole, "investigations were suggested to me by a very elegant paper on the same subject... by Mr Boole." (Boole's paper was Exposition of a General Theory of Linear Transformations, Cambridge Mathematical Journal.)
Classically, the term "invariant theory" refers to the study of invariant algebraic forms (equivalently, symmetric tensors) for the action of linear transformations. This was a major field of study in the latter part of the nineteenth century. Current theories relating to the symmetric group and symmetric functions, commutative algebra, moduli spaces and the representations of Lie groups are rooted in this area.
In greater detail, given a finite-dimensional vector space V of dimension n we can consider the symmetric algebra S(Sr(V)) of the polynomials of degree r over V, and the action on it of GL(V). It is actually more accurate to consider the relative invariants of GL(V), or representations of SL(V), if we are going to speak of invariants: that is because a scalar multiple of the identity will act on a tensor of rank r in S(V) through the r-th power 'weight' of the scalar. The point is then to define the subalgebra of invariants I(Sr(V)) for the action. We are, in classical language, looking at invariants of n-ary r-ics, where n is the dimension of V. (This is not the same as finding invariants of GL(V) on S(V); this is an uninteresting problem as the only such invariants are constants.) The case that was most studied was invariants of binary forms where n = 2.
Other work included that of Felix Klein in computing the invariant rings of finite group actions on (the binary polyhedral groups, classified by the ADE classification); these are the coordinate rings of du Val singularities.
The work of David Hilbert, proving that I(V) was finitely presented in many cases, almost put an end to classical invariant theory for several decades, though the classical epoch in the subject continued to the final publications of Alfred Young, more than 50 years later. Explicit calculations for particular purposes have been known in modern times (for example Shioda, with the binary octavics).
Hilbert's theorems
Hilbert (1890) proved that if V is a finite-dimensional representation of the complex algebraic group G = SLn(C) then the ring of invariants of G acting on the ring of polynomials R = S(V) is finitely generated. His proof used the Reynolds operator ρ from R to RG with the properties
ρ(1) = 1
ρ(a + b) = ρ(a) + ρ(b)
ρ(ab) = a ρ(b) whenever a is an invariant.
Hilbert constructed the Reynolds operator explicitly using Cayley's omega process Ω, though now it is more common to construct ρ indirectly as follows: for compact groups G, the Reynolds operator is given by taking the average over G, and non-compact reductive groups can be reduced to the case of compact groups using Weyl's unitarian trick.
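For a finite group the averaging prescription is completely explicit, and a small sketch (Python with SymPy; the choice of G = S3 permuting three variables is only an illustration, not the reductive-group setting of Hilbert's theorem) shows the Reynolds operator projecting an arbitrary polynomial onto an invariant one:

```python
# Reynolds operator for a finite group as literal averaging:
# rho(f) = (1/|G|) * sum over g in G of g.f, here with G = S_3 permuting x0, x1, x2.
from itertools import permutations
import sympy as sp

x = sp.symbols('x0 x1 x2')

def reynolds(f):
    orbit = [f.xreplace({x[i]: x[p[i]] for i in range(3)})
             for p in permutations(range(3))]
    return sp.expand(sum(orbit) / 6)

f = x[0]**2 * x[1]
rho_f = reynolds(f)
print(rho_f)                                  # a symmetric (S_3-invariant) polynomial
print(sp.simplify(reynolds(rho_f) - rho_f))   # 0: rho fixes invariants, so it is a projection
```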
Given the Reynolds operator, Hilbert's theorem is proved as follows. The ring R is a polynomial ring so is graded by degrees, and the ideal I is defined to be the ideal generated by the homogeneous invariants of positive degrees. By Hilbert's basis theorem the ideal I is finitely generated (as an ideal). Hence, I is finitely generated by finitely many invariants of G (because if we are given any – possibly infinite – subset S that generates a finitely generated ideal I, then I is already generated by some finite subset of S). Let i1,...,in be a finite set of invariants of G generating I (as an ideal). The key idea is to show that these generate the ring RG of invariants. Suppose that x is some homogeneous invariant of degree d > 0. Then
x = a1i1 + ... + anin
for some aj in the ring R because x is in the ideal I. We can assume that aj is homogeneous of degree d − deg ij for every j (otherwise, we replace aj by its homogeneous component of degree d − deg ij; if we do this for every j, the equation x = a1i1 + ... + anin will remain valid). Now, applying the Reynolds operator to x = a1i1 + ... + anin gives
x = ρ(a1)i1 + ... + ρ(an)in
We are now going to show that x lies in the R-algebra generated by i1,...,in.
First, let us do this in the case when the elements ρ(ak) all have degree less than d. In this case, they are all in the R-algebra generated by i1,...,in (by our induction assumption). Therefore, x is also in this R-algebra (since x = ρ(a1)i1 + ... + ρ(an)in).
In the general case, we cannot be sure that the elements ρ(ak) all have degree less than d. But we can replace each ρ(ak) by its homogeneous component of degree d − deg ij. As a result, these modified ρ(ak) are still G-invariants (because every homogeneous component of a G-invariant is a G-invariant) and have degree less than d (since deg ik > 0). The equation x = ρ(a1)i1 + ... + ρ(an)in still holds for our modified ρ(ak), so we can again conclude that x lies in the R-algebra generated by i1,...,in.
Hence, by induction on the degree, all elements of RG are in the R-algebra generated by i1,...,in.
Geometric invariant theory
The modern formulation of geometric invariant theory is due to David Mumford, and emphasizes the construction of a quotient by the group action that should capture invariant information through its coordinate ring. It is a subtle theory, in that success is obtained by excluding some 'bad' orbits and identifying others with 'good' orbits. In a separate development the symbolic method of invariant theory, an apparently heuristic combinatorial notation, has been rehabilitated.
One motivation was to construct moduli spaces in algebraic geometry as quotients of schemes parametrizing marked objects. In the 1970s and 1980s the theory developed
interactions with symplectic geometry and equivariant topology, and was used to construct moduli spaces of objects in differential geometry, such as instantons and monopoles.
See also
Gram's theorem
Representation theory of finite groups
Molien series
Invariant (mathematics)
Invariant of a binary form
Invariant measure
First and second fundamental theorems of invariant theory
References
Reprinted as
A recent resource for learning about modular invariants of finite groups.
An undergraduate level introduction to the classical theory of invariants of binary forms, including the Omega process starting at page 87.
An older but still useful survey.
A beautiful introduction to the theory of invariants of finite groups and techniques for computing them using Gröbner bases.
External links
H. Kraft, C. Procesi, Classical Invariant Theory, a Primer
V. L. Popov, E. B. Vinberg, "Invariant Theory", in Algebraic geometry. IV. Encyclopaedia of Mathematical Sciences, 55 (translated from 1989 Russian edition) Springer-Verlag, Berlin, 1994; vi+284 pp.; | Invariant theory | [
"Physics"
] | 2,460 | [
"Invariant theory",
"Group actions",
"Symmetry"
] |
363,903 | https://en.wikipedia.org/wiki/Newtonian%20fluid | A Newtonian fluid is a fluid in which the viscous stresses arising from its flow are at every point linearly related to the local strain rate — the rate of change of its deformation over time. Stresses are proportional to the rate of change of the fluid's velocity vector.
A fluid is Newtonian only if the tensors that describe the viscous stress and the strain rate are related by a constant viscosity tensor that does not depend on the stress state and velocity of the flow. If the fluid is also isotropic (i.e., its mechanical properties are the same along any direction), the viscosity tensor reduces to two real coefficients, describing the fluid's resistance to continuous shear deformation and continuous compression or expansion, respectively.
Newtonian fluids are the easiest mathematical models of fluids that account for viscosity. While no real fluid fits the definition perfectly, many common liquids and gases, such as water and air, can be assumed to be Newtonian for practical calculations under ordinary conditions. However, non-Newtonian fluids are relatively common and include oobleck (which becomes stiffer when vigorously sheared) and non-drip paint (which becomes thinner when sheared). Other examples include many polymer solutions (which exhibit the Weissenberg effect), molten polymers, many solid suspensions, blood, and most highly viscous fluids.
Newtonian fluids are named after Isaac Newton, who first used the differential equation to postulate the relation between the shear strain rate and shear stress for such fluids.
Definition
An element of a flowing liquid or gas will endure forces from the surrounding fluid, including viscous stress forces that cause it to gradually deform over time. These forces can be approximated, to first order, by a viscous stress tensor, usually denoted by .
The deformation of a fluid element, relative to some previous state, can be approximated, to first order, by a strain tensor that changes with time. The time derivative of that tensor is the strain rate tensor, which expresses how the element's deformation is changing with time; it is also the gradient of the velocity vector field at that point, often denoted .
The tensors and can be expressed by 3×3 matrices, relative to any chosen coordinate system. The fluid is said to be Newtonian if these matrices are related by the equation
where is a fixed 3×3×3×3 fourth order tensor that does not depend on the velocity or stress state of the fluid.
Incompressible isotropic case
For an incompressible and isotropic Newtonian fluid in laminar flow only in the direction x (i.e. where viscosity is isotropic in the fluid), the shear stress is related to the strain rate by the simple constitutive equation
where
is the shear stress ("skin drag") in the fluid,
is a scalar constant of proportionality, the dynamic viscosity of the fluid
is the derivative in the direction y, normal to x, of the flow velocity component u that is oriented along the direction x.
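The display equation referred to above did not survive extraction. As a reconstruction under the notation of the surrounding list (shear stress, dynamic viscosity, and the velocity component u along x varying in the normal direction y), rather than a verbatim quote of the article, the relation is:

```latex
\tau = \mu \,\frac{\mathrm{d}u}{\mathrm{d}y}
```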
In the case of a general 2D incompressible flow in the plane x, y, the Newtonian constitutive equation becomes:
where:
is the shear stress ("skin drag") in the fluid,
is the partial derivative in the direction y of the flow velocity component u that is oriented along the direction x.
is the partial derivative in the direction x of the flow velocity component v that is oriented along the direction y.
We can now generalize to the case of an incompressible flow with a general direction in 3D space; the above constitutive equation becomes
where
is the th spatial coordinate
is the fluid's velocity in the direction of axis
is the -th component of the stress acting on the faces of the fluid element perpendicular to axis . It is the ij-th component of the shear stress tensor
or written in more compact tensor notation
where is the flow velocity gradient.
An alternative way of stating this constitutive equation is:
where
is the rate-of-strain tensor. So this decomposition can be made explicit as:
This constitutive equation is also called the Newton law of viscosity.
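Since the displayed formulas are missing from the extracted text, the general incompressible Newtonian law is restated here under conventional symbols (an assumed notation, not necessarily the article's own): tau for the viscous stress tensor, u for the velocity field, and epsilon for the rate-of-strain tensor.

```latex
\tau_{ij} = \mu\left(\frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i}\right),
\qquad
\boldsymbol{\tau} = 2\mu\,\boldsymbol{\varepsilon},
\qquad
\boldsymbol{\varepsilon} = \tfrac{1}{2}\left(\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm T}\right)
```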
The total stress tensor can always be decomposed as the sum of the isotropic stress tensor and the deviatoric stress tensor:
In the incompressible case, the isotropic stress is simply proportional to the thermodynamic pressure :
and the deviatoric stress is coincident with the shear stress tensor :
The stress constitutive equation then becomes
or written in more compact tensor notation
where is the identity tensor.
General compressible case
Newton's constitutive law for a compressible flow results from the following assumptions on the Cauchy stress tensor:
the stress is Galilean invariant: it does not depend directly on the flow velocity, but only on spatial derivatives of the flow velocity. So the stress variable is the tensor gradient , or more simply the rate-of-strain tensor:
the deviatoric stress is linear in this variable: , where is independent of the strain rate tensor, is the fourth-order tensor representing the constant of proportionality, called the viscosity or elasticity tensor, and : is the double-dot product.
the fluid is assumed to be isotropic, as with gases and simple liquids, and consequently is an isotropic tensor; furthermore, since the deviatoric stress tensor is symmetric, by Helmholtz decomposition it can be expressed in terms of two scalar Lamé parameters, the second viscosity and the dynamic viscosity , as it is usual in linear elasticity:
where is the identity tensor, and is the trace of the rate-of-strain tensor. So this decomposition can be explicitly defined as:
Since the trace of the rate-of-strain tensor in three dimensions is the divergence (i.e. rate of expansion) of the flow:
Given this relation, and since the trace of the identity tensor in three dimensions is three:
the trace of the stress tensor in three dimensions becomes:
So by alternatively decomposing the stress tensor into isotropic and deviatoric parts, as usual in fluid dynamics:
Introducing the bulk viscosity ,
we arrive to the linear constitutive equation in the form usually employed in thermal hydraulics:
which can also be arranged in the other usual form:
Note that in the compressible case the pressure is no longer proportional to the isotropic stress term, since there is the additional bulk viscosity term:
and the deviatoric stress tensor is still coincident with the shear stress tensor (i.e. the deviatoric stress in a Newtonian fluid has no normal stress components), and it has a compressibility term in addition to the incompressible case, which is proportional to the shear viscosity:
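For reference, because the displayed equations in this subsection are absent from the extracted text, the compressible constitutive law being described is commonly written as follows. This is a reconstruction assuming the usual symbols (sigma total stress, p pressure, mu dynamic viscosity, zeta bulk viscosity, I the identity tensor), not the article's exact typography.

```latex
\boldsymbol{\sigma} = -p\,\mathbf{I} + \boldsymbol{\tau},
\qquad
\boldsymbol{\tau} = \mu\left[\nabla\mathbf{u} + (\nabla\mathbf{u})^{\mathrm T}
  - \tfrac{2}{3}\,(\nabla\cdot\mathbf{u})\,\mathbf{I}\right]
  + \zeta\,(\nabla\cdot\mathbf{u})\,\mathbf{I}
```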
Note that the incompressible case corresponds to the assumption that the pressure constrains the flow so that the volume of fluid elements is constant: isochoric flow resulting in a solenoidal velocity field with .
So one returns to the expressions for pressure and deviatoric stress seen in the preceding paragraph.
Both bulk viscosity and dynamic viscosity need not be constant – in general, they depend on two thermodynamic variables if the fluid contains a single chemical species, say for example, pressure and temperature. Any equation that makes explicit one of these transport coefficients in the conservation variables is called an equation of state.
Apart from its dependence on pressure and temperature, the second viscosity coefficient also depends on the process, that is to say, the second viscosity coefficient is not just a material property. Example: in the case of a sound wave with a definite frequency that alternately compresses and expands a fluid element, the second viscosity coefficient depends on the frequency of the wave. This dependence is called the dispersion. In some cases, the second viscosity can be assumed to be constant in which case, the effect of the volume viscosity is that the mechanical pressure is not equivalent to the thermodynamic pressure: as demonstrated below.
However, this difference is usually neglected most of the time (that is whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, where the second viscosity coefficient becomes important) by explicitly assuming . The assumption of setting is called the Stokes hypothesis. The validity of the Stokes hypothesis can be demonstrated for a monatomic gas both experimentally and from the kinetic theory; for other gases and liquids, the Stokes hypothesis is generally incorrect.
Finally, note that the Stokes hypothesis is less restrictive than that of incompressible flow. In fact, in the incompressible flow both the bulk viscosity term and the shear viscosity term proportional to the divergence of the flow velocity disappear, while under the Stokes hypothesis the first term disappears but the second one still remains.
For anisotropic fluids
More generally, in a non-isotropic Newtonian fluid, the coefficient that relates internal friction stresses to the spatial derivatives of the velocity field is replaced by a nine-element viscous stress tensor .
A general formula for the friction force in a liquid is: the vector differential of the friction force equals the viscosity tensor contracted with the vector product of the differential of the area vector of the adjoining liquid layers and the curl of the velocity:
where is the viscosity tensor. The diagonal components of the viscosity tensor give the molecular viscosity of a liquid, and the off-diagonal components give the turbulent eddy viscosity.
Newton's law of viscosity
The following equation illustrates the relation between shear rate and shear stress for a fluid with laminar flow only in the direction x:
where:
is the shear stress in the components x and y, i.e. the force component on the direction x per unit surface that is normal to the direction y (so it is parallel to the direction x)
is the dynamic viscosity, and
is the flow velocity gradient along the direction y, that is normal to the flow velocity .
If viscosity does not vary with rate of deformation the fluid is Newtonian.
Power law model
The power law model is used to describe the behavior of Newtonian and non-Newtonian fluids and measures shear stress as a function of strain rate.
The relationship between shear stress, strain rate and the velocity gradient for the power law model are:
where
is the absolute value of the strain rate to the (n−1) power;
is the velocity gradient;
n is the power law index.
If
n < 1 then the fluid is a pseudoplastic.
n = 1 then the fluid is a Newtonian fluid.
n > 1 then the fluid is a dilatant.
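A short numerical sketch (not from the article) of the power-law model described above: the function below evaluates the shear stress for a given consistency index K and power-law index n; the parameter values are arbitrary illustrations.

```python
# Sketch of the power-law (Ostwald-de Waele) model:
#   tau = K * |du/dy|**(n-1) * (du/dy)
# n < 1: pseudoplastic (shear-thinning), n = 1: Newtonian, n > 1: dilatant.
def power_law_stress(shear_rate: float, K: float, n: float) -> float:
    return K * abs(shear_rate) ** (n - 1) * shear_rate

for n in (0.5, 1.0, 1.5):          # pseudoplastic, Newtonian, dilatant
    tau = power_law_stress(shear_rate=10.0, K=0.1, n=n)
    apparent_viscosity = tau / 10.0
    print(f"n={n}: tau={tau:.3f} Pa, apparent viscosity={apparent_viscosity:.4f} Pa*s")
```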
Fluid model
The relationship between the shear stress and shear rate in a Casson fluid model is defined as follows:
where τ0 is the yield stress and
where α depends on protein composition and H is the Hematocrit number.
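The Casson relation itself is not shown in the extracted text; the form usually quoted (stated here as an assumption about the intended equation, with tau the shear stress, tau_0 the yield stress, gamma-dot the shear rate and S a constant related to the viscosity) is:

```latex
\sqrt{\tau} = \sqrt{\tau_0} + S\,\sqrt{\dot{\gamma}}
```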
Examples
Water, air, alcohol, glycerol, and thin motor oil are all examples of Newtonian fluids over the range of shear stresses and shear rates encountered in everyday life. Single-phase fluids made up of small molecules are generally (although not exclusively) Newtonian.
See also
Fluid mechanics
Non-Newtonian fluid
Strain rate tensor
Viscosity
Viscous stress tensor
References
Viscosity
Fluid dynamics | Newtonian fluid | [
"Physics",
"Chemistry",
"Engineering"
] | 2,327 | [
"Physical phenomena",
"Physical quantities",
"Chemical engineering",
"Piping",
"Wikipedia categories named after physical quantities",
"Viscosity",
"Physical properties",
"Fluid dynamics"
] |
363,985 | https://en.wikipedia.org/wiki/Metacentric%20height | The metacentric height (GM) is a measurement of the initial static stability of a floating body. It is calculated as the distance between the centre of gravity of a ship and its metacentre. A larger metacentric height implies greater initial stability against overturning. The metacentric height also influences the natural period of rolling of a hull, with very large metacentric heights being associated with shorter periods of roll which are uncomfortable for passengers. Hence, a sufficiently, but not excessively, high metacentric height is considered ideal for passenger ships.
Different centres
The centre of buoyancy is at the centre of mass of the volume of water that the hull displaces. This point is referred to as B in naval architecture.
The centre of gravity of the ship is commonly denoted as point G or CG. When a ship is at equilibrium, the centre of buoyancy is vertically in line with the centre of gravity of the ship.
The metacentre is the point where the lines of action of the upward buoyancy force, drawn at heel angles φ and φ ± dφ, intersect. When the ship is vertical, the metacentre lies above the centre of gravity and so moves in the opposite direction of heel as the ship rolls. The distance between the centre of gravity and the metacentre is abbreviated as GM. As the ship heels over, the centre of gravity generally remains fixed with respect to the ship because it just depends on the position of the ship's weight and cargo, but the surface area increases, increasing BMφ. Work must be done to roll a stable hull. This is converted to potential energy by raising the centre of mass of the hull with respect to the water level or by lowering the centre of buoyancy or both. This potential energy will be released in order to right the hull and the stable attitude will be where it has the least magnitude. It is the interplay of potential and kinetic energy that results in the ship having a natural rolling frequency. For small angles, the metacentre, Mφ, moves with a lateral component so it is no longer directly over the centre of mass.
The righting couple on the ship is proportional to the horizontal distance between two equal forces. These are gravity acting downwards at the centre of mass and the same magnitude force acting upwards through the centre of buoyancy, and through the metacentre above it. The righting couple is proportional to the metacentric height multiplied by the sine of the angle of heel, hence the importance of metacentric height to stability. As the hull rights, work is done either by its centre of mass falling, or by water falling to accommodate a rising centre of buoyancy, or both.
For example, when a perfectly cylindrical hull rolls, the centre of buoyancy stays on the axis of the cylinder at the same depth. However, if the centre of mass is below the axis, it will move to one side and rise, creating potential energy. Conversely if a hull having a perfectly rectangular cross section has its centre of mass at the water line, the centre of mass stays at the same height, but the centre of buoyancy goes down as the hull heels, again storing potential energy.
When setting a common reference for the centres, the molded (within the plate or planking) line of the keel (K) is generally chosen; thus, the reference heights are:
KB – to Centre of Buoyancy
KG – to Centre of Gravity
KMT – to Transverse Metacentre
Metacentre
When a ship heels (rolls sideways), the centre of buoyancy of the ship moves laterally. It might also move up or down with respect to the water line. The point at which a vertical line through the heeled centre of buoyancy crosses the line through the original, vertical centre of buoyancy is the metacentre. The metacentre remains directly above the centre of buoyancy by definition.
In the diagram above, the two Bs show the centres of buoyancy of a ship in the upright and heeled conditions. The metacentre, M, is considered to be fixed relative to the ship for small angles of heel; however, at larger angles the metacentre can no longer be considered fixed, and its actual location must be found to calculate the ship's stability.
It can be calculated using the formulae:
Where KB is the centre of buoyancy (height above the keel), I is the second moment of area of the waterplane around the rotation axis in metres4, and V is the volume of displacement in metres3. KM is the distance from the keel to the metacentre.
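As an illustration of the relations described here (BM = I/V, KM = KB + BM, and GM = KM − KG), and since the displayed formula did not survive extraction, the sketch below evaluates them for a box-shaped barge; this is not part of the article, and the dimensions and centre-of-gravity height are made-up example values.

```python
# Transverse metacentric height of a box-shaped barge floating on an even keel.
# BM = I / V, with I the second moment of area of the waterplane about the
# centreline and V the displaced volume; KB is half the draft for a box hull.
L, B, T = 40.0, 10.0, 2.5        # length, beam, draft in metres (example values)
KG = 3.0                         # height of centre of gravity above keel (example)

V = L * B * T                    # displaced volume, m^3
I = L * B**3 / 12.0              # waterplane second moment of area, m^4
KB = T / 2.0                     # centre of buoyancy height for a box hull
BM = I / V
KM = KB + BM
GM = KM - KG

print(f"BM = {BM:.2f} m, KM = {KM:.2f} m, GM = {GM:.2f} m")
# BM = 3.33 m, KM = 4.58 m, GM = 1.58 m  -> positive GM: initially stable
```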
Stable floating objects have a natural rolling frequency, just like a weight on a spring, where the frequency is increased as the spring gets stiffer. In a boat, the equivalent of the spring stiffness is the distance called "GM" or "metacentric height", being the distance between two points: "G" the centre of gravity of the boat and "M", which is a point called the metacentre.
Metacentre is determined by the ratio between the inertia resistance of the boat and the volume of the boat. (The inertia resistance is a quantified description of how the waterline width of the boat resists overturning.) Wide and shallow hulls have high transverse metacentres, whilst narrow and deep hulls have low metacentres.
Ignoring the ballast, wide and shallow means that the ship is very quick to roll, and narrow and deep means that the ship is very hard to overturn and is stiff.
"G", is the center of gravity. "GM", the stiffness parameter of a boat, can be lengthened by lowering the center of gravity or changing the hull form (and thus changing the volume displaced and second moment of area of the waterplane) or both.
An ideal boat strikes a balance. Very tender boats with very slow roll periods are at risk of overturning, but are comfortable for passengers. However, vessels with a higher metacentric height are "excessively stable" with a short roll period resulting in high accelerations at the deck level.
Sailing yachts, especially racing yachts, are designed to be stiff, meaning the distance between the centre of mass and the metacentre is very large in order to resist the heeling effect of the wind on the sails. In such vessels, the rolling motion is not uncomfortable because of the moment of inertia of the tall mast and the aerodynamic damping of the sails.
Righting arm
The metacentric height is an approximation for the vessel stability at a small angle (0-15 degrees) of heel. Beyond that range, the stability of the vessel is dominated by what is known as a righting moment. Depending on the geometry of the hull, naval architects must iteratively calculate the center of buoyancy at increasing angles of heel. They then calculate the righting moment at this angle, which is determined using the equation:
Where RM is the righting moment, GZ is the righting arm and is the displacement. Because the vessel displacement is constant, common practice is to simply graph the righting arm vs the angle of heel. The righting arm (known also as GZ — see diagram): the horizontal distance between the lines of buoyancy and gravity.
At small angles of heel the righting arm is approximately GZ ≈ GM · sin φ.
There are several important factors that must be determined with regards to righting arm/moment. These are known as the maximum righting arm/moment, the point of deck immersion, the downflooding angle, and the point of vanishing stability. The maximum righting moment is the maximum moment that could be applied to the vessel without causing it to capsize. The point of deck immersion is the angle at which the main deck will first encounter the sea. Similarly, the downflooding angle is the angle at which water will be able to flood deeper into the vessel. Finally, the point of vanishing stability is a point of unstable equilibrium. Any heel lesser than this angle will allow the vessel to right itself, while any heel greater than this angle will cause a negative righting moment (or heeling moment) and force the vessel to continue to roll over. When a vessel reaches a heel equal to its point of vanishing stability, any external force will cause the vessel to capsize.
Sailing vessels are designed to operate with a higher degree of heel than motorized vessels and the righting moment at extreme angles is of high importance.
Monohulled sailing vessels should be designed to have a positive righting arm (the limit of positive stability) to at least 120° of heel, although many sailing yachts have stability limits down to 90° (mast parallel to the water surface). As the displacement of the hull at any particular degree of list is not proportional, calculations can be difficult, and the concept was not introduced formally into naval architecture until about 1970.
Stability
GM and rolling period
The metacentre has a direct relationship with a ship's rolling period. A ship with a small GM will be "tender" - have a long roll period. An excessively low or negative GM increases the risk of a ship capsizing in rough weather, for example HMS Captain or the Vasa. It also puts the vessel at risk of potential for large angles of heel if the cargo or ballast shifts, such as with the Cougar Ace. A ship with low GM is less safe if damaged and partially flooded because the lower metacentric height leaves less safety margin. For this reason, maritime regulatory agencies such as the International Maritime Organization specify minimum safety margins for seagoing vessels. A larger metacentric height on the other hand can cause a vessel to be too "stiff"; excessive stability is uncomfortable for passengers and crew. This is because the stiff vessel quickly responds to the sea as it attempts to assume the slope of the wave. An overly stiff vessel rolls with a short period and high amplitude which results in high angular acceleration. This increases the risk of damage to the ship and to cargo and may cause excessive roll in special circumstances where eigenperiod of wave coincide with eigenperiod of ship roll. Roll damping by bilge keels of sufficient size will reduce the hazard. Criteria for this dynamic stability effect remain to be developed. In contrast, a "tender" ship lags behind the motion of the waves and tends to roll at lesser amplitudes. A passenger ship will typically have a long rolling period for comfort, perhaps 12 seconds while a tanker or freighter might have a rolling period of 6 to 8 seconds.
The period of roll can be estimated from the following equation:
where g is the gravitational acceleration, a44 is the added radius of gyration, k is the radius of gyration about the longitudinal axis through the centre of gravity, and GM is the stability index.
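The rolling-period equation referenced above is missing from the extracted text. Neglecting the added-mass term a44, a commonly used simplified estimate (stated here as an assumption, not as the article's exact formula) is:

```latex
T \approx \frac{2\pi\, k}{\sqrt{g\,\overline{GM}}}
```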
Damaged stability
If a ship floods, the loss of stability is caused by the increase in KB, the centre of buoyancy, and the loss of waterplane area - thus a loss of the waterplane moment of inertia - which decreases the metacentric height. This additional mass will also reduce freeboard (distance from water to the deck) and the ship's downflooding angle (minimum angle of heel at which water will be able to flow into the hull). The range of positive stability will be reduced to the angle of down flooding resulting in a reduced righting lever. When the vessel is inclined, the fluid in the flooded volume will move to the lower side, shifting its centre of gravity toward the list, further extending the heeling force. This is known as the free surface effect.
Free surface effect
In tanks or spaces that are partially filled with a fluid or semi-fluid (fish, ice, or grain for example) as the tank is inclined the surface of the liquid, or semi-fluid, stays level. This results in a displacement of the centre of gravity of the tank or space relative to the overall centre of gravity. The effect is similar to that of carrying a large flat tray of water. When an edge is tipped, the water rushes to that side, which exacerbates the tip even further.
The significance of this effect is proportional to the cube of the width of the tank or compartment, so two baffles separating the area into thirds will reduce the displacement of the centre of gravity of the fluid by a factor of 9. This is of significance in ship fuel tanks or ballast tanks, tanker cargo tanks, and in flooded or partially flooded compartments of damaged ships. Another worrying feature of free surface effect is that a positive feedback loop can be established, in which the period of the roll is equal or almost equal to the period of the motion of the centre of gravity in the fluid, resulting in each roll increasing in magnitude until the loop is broken or the ship capsizes.
This has been significant in historic capsizes, most notably the and the .
Transverse and longitudinal metacentric heights
There is also a similar consideration in the movement of the metacentre forward and aft as a ship pitches. Metacentres are usually separately calculated for transverse (side to side) rolling motion and for lengthwise longitudinal pitching motion. These are variously known as and , GM(t) and GM(l), or sometimes GMt and GMl .
Technically, there are different metacentric heights for any combination of pitch and roll motion, depending on the moment of inertia of the waterplane area of the ship around the axis of rotation under consideration, but they are normally only calculated and stated as specific values for the limiting pure pitch and roll motion.
Measurement
The metacentric height is normally estimated during the design of a ship but can be determined by an inclining test once it has been built. This can also be done when a ship or offshore floating platform is in service. It can be calculated by theoretical formulas based on the shape of the structure.
The angle(s) obtained during the inclining experiment are directly related to GM. By means of the inclining experiment, the 'as-built' centre of gravity can be found; obtaining GM and KM by experimental measurement (by means of pendulum swing measurements and draft readings), the centre of gravity KG can be found. So KM and GM become the known variables during inclining and KG is the wanted calculated variable (KG = KM − GM).
See also
Kayak roll
Turtling
Angle of loll
Limit of positive stability
Weight distribution
References
Geometric centers
Buoyancy
Ship measurements
Vertical position | Metacentric height | [
"Physics",
"Mathematics"
] | 2,939 | [
"Vertical position",
"Point (geometry)",
"Physical quantities",
"Distance",
"Geometric centers",
"Symmetry"
] |
364,143 | https://en.wikipedia.org/wiki/Time-based%20currency | In economics, a time-based currency is an alternative currency or exchange system where the unit of account is the person-hour or some other time unit. Some time-based currencies value everyone's contributions equally: one hour equals one service credit. In these systems, one person volunteers to work for an hour for another person; thus, they are credited with one hour, which they can redeem for an hour of service from another volunteer. Others use time units that might be fractions of an hour (e.g. minutes, ten minutes – 6 units/hour, or 15 minutes – 4 units/hour). While most time-based exchange systems are service exchanges in that most exchange involves the provision of services that can be measured in a time unit, it is also possible to exchange goods by 'pricing' them in terms of the average national hourly wage rate (e.g. if the average hourly rate is $20/hour, then a commodity valued at $20 in the national currency would be equivalent to 1 hour).
History
19th century
Time-based currency exchanges date back to the early 19th century.
The Cincinnati Time Store (1827-1830) was the first in a series of retail stores created by American individualist anarchist Josiah Warren to test his economic labor theory of value. The experimental store operated from May 18, 1827, until May 1830. The Cincinnati Time Store experiment in use of labor as a medium of exchange antedated similar European efforts by two decades.
The National Equitable Labour Exchange was founded by Robert Owen, a Welsh socialist and labor reformer in London, England, in 1832. It was established in Birmingham, England, before folding in 1834. It issued "Labour Notes" similar to banknotes, denominated in units of 1, 2, 5, 10, 20, 40, and 80 hours. John Gray, a socialist economist, worked with Owen and later with Ricardian Socialists and postulated a National Chamber of Commerce as a central bank issuing a labour currency.
In 1848, the socialist and first self-designated anarchist Pierre-Joseph Proudhon postulated a system of time chits.
Josiah Warren published a book describing labor notes in 1852.
In 1875, Karl Marx wrote of "Labor Certificates" (Arbeitszertifikaten) in his Critique of the Gotha Program of a "certificate from society that [the labourer] has furnished such and such an amount of labour", which can be used to draw "from the social stock of means of consumption as much as costs the same amount of labour."
20th century
Teruko Mizushima (1920-1996) was a Japanese housewife, author, inventor, social commentator, and activist credited with creating the world's first time bank in 1973.
Mizushima was born in 1920 in Osaka to a merchant household. She performed well in school and was given the opportunity to study overseas in the United States in 1939. Her stay was shortened from three years to one due to rising tensions between the US, Japan, and China. Mizushima opted to pursue a short-term diploma course in sewing.
After returning home, she married. Her first daughter was born at the outbreak of the Pacific War, and her husband was soon conscripted into the army.
Mizushima's sewing skills proved invaluable to her family during and after the war. While the Japanese population was suffering immense material shortages, Mizushima offered her sewing skills in exchange for fresh vegetables. It was during this time that she began to develop her ideas about economics and the relative value of labor.
In 1950, Mizushima submitted an essay to a newspaper contest as part of a national event titled “Women's Ideas for the Creation of a New Life.” Her essay received the Newspaper Companies’ Prize. While it has since been lost, the ideas in the essay attracted widespread press attention.
Mizushima soon became a social commentator, with her views being aired on the radio, in the newspapers, and on television. She frequently appeared on the NHK, the country's national broadcaster, and toured the country giving talks about her ideas.
In 1973 she started her group the Volunteer Labour Bank (later renamed the Volunteer Labour Network). By 1978, the bank had grown to include approximately 2,600 members. The membership included people of all ages, from teenagers to women in their seventies. The majority of members were housewives in their thirties and forties. Members were organized into over 160 local branches throughout the country, coordinated by the headquarters located on Mizushima’s estate.
By 1983, the network had over 3,800 members organized in 262 branches, including a branch in California.
The political activist and philosopher Cornelius Castoriadis, after criticizing the incoherency of capitalist, Leninist, and Trotskyist justifications of wage differentials in his 1949 Socialisme ou Barbarie text translated as “The Relations of Production in Russia” in the first volume of his Political and Social Writings, responding to the Hungarian Revolution of 1956, advocated that workers “proclaim the abolition of work norms and instaurate full equality of wages and salaries” in his 1957 Socialisme ou Barbarie text translated as "On the Content of Socialism, II". He elaborated further on this advocacy of an “absolute equality of wages and incomes” in his 1974 text, "Hierarchy of Salaries and Incomes", and in the “Today” section of “Done and To Be Done” (1989).
Edgar S. Cahn coined the term "Time Dollars" in Time Dollars: The New Currency That Enables Americans to Turn Their Hidden Resource-Time-Into Personal Security & Community Renewal, a book co-authored with Jonathan Rowe in 1992. He also went on to trademark the terms "TimeBank" and "Time Credit".
Timebanking is a community development tool and works by facilitating the exchange of skills and experience within a community. It aims to build the 'core economy' of family and community by valuing and rewarding the work done in it. The world's first timebank was started in Japan by Teruko Mizushima in 1973 with the idea that participants could earn time credits which they could spend any time during their lives. She based her bank on the simple concept that each hour of time given as services to others could earn reciprocal hours of services for the giver at some stage in the future, particularly in old age when they might need it most. In the 1940s, Mizushima had already foreseen the emerging problems of an ageing society such as seen today. In the 1990s the movement took off in the US, with Dr Edgar Cahn pioneering it there, and in the United Kingdom, with Martin Simon from Timebanking UK and David Boyle, who brought in the London-based New Economics Foundation (Nef).
Paul Glover created Ithaca Hours in 1991. Each HOUR was valued at one hour of basic labor or $10.00. Professionals were entitled to charge multiple HOURS per hour, but often reduced their rate in the spirit of equity. Millions of dollars' worth of HOURS were traded among thousands of residents and 500 businesses. Interest-free HOUR loans were made, and HOUR grants given to over 100 community organizations.
The first British time bank opened in 1998 in Stroud, and a national charity and membership organisation, Timebanking UK, started in 2002.
21st century
According to Edgar S. Cahn, timebanking had its roots in a time when "money for social programs [had] dried up" and no dominant approach to social service in the U.S. was coming up with creative ways to solve the problem. He would later write that "Americans face at least three interlocking sets of problems: growing inequality in access by those at the bottom to the most basic goods and services; increasing social problems stemming from the need to rebuild family, neighborhood and community; and a growing disillusion with public programs designed to address these problems" and that "the crisis in support for efforts to address social problems stems directly from the failure of ... piecemeal efforts to rebuild genuine community." In particular Cahn focused on the top-down attitude prevalent in social services. He believed that one of the major failings of many social service organizations was their unwillingness to enroll the help of those people they were trying to help. He called this a deficit based approach to social service, where organizations view the people they were trying to help only in terms of their needs, as opposed to an asset based approach, which focuses on the contributions towards their communities that everyone can make. He theorized that a system like timebanking could "[rebuild] the infrastructure of trust and caring that can strengthen families and communities." He hoped that the system "would enable individuals and communities to become more self-sufficient, to insulate themselves from the vagaries of politics and to tap the capacity of individuals who were in effect being relegated to the scrap heap and dismissed as freeloaders."
As a philosophy, timebanking, also known as Time Trade is founded upon five principles, known as TimeBanking's Core Values:
Everyone is an asset
Some work is beyond a monetary price
Reciprocity in helping
Community (via social networks) is necessary
A respect for all human beings
Ideally, timebanking builds community. TimeBank members sometimes refer to this as a return to simpler times when the community was there for its individuals. An interview at a timebank in the Gorbals neighbourhood of Glasgow revealed the following sentiment:
[the time bank] involves everybody coming together as a community ... the Gorbals has never—not for a long time—had a lot of community spirit. Way back, years ago, it had a lot of community spirit, but now you see that in some areas, people won't even go to the chap next door for some sugar ... that's what I think the project's doing, trying to bring that back, that community sense ...
In 2017 Nimses offered a concept of a time-based currency Nim. 1 nim = 1 minute of life. The concept was first adopted in Eastern Europe.
The concept is based on the idea of universal basic income. Every person is an issuer of nims. For every minute of one's life, 1 nim is created, which can be spent or sent to another person, like money.
Time dollars
Time dollars are a tax-exempt complementary currency used as a means of providing mutual credit in TimeBanking. They are typically called "time credits" or "service credits" outside the United States. TimeBank members exchange services for Time Dollars. Each exchange is recorded as a corresponding credit and debit in the accounts of the participants. One hour of time is worth one Time Dollar, regardless of the service provided in one hour or how much skill is required to perform the task during that hour. This "one-for-one" system that relies on an abundant resource is designed to both recognize and encourage reciprocal community service, resist inflation, avoid hoarding, enable trade, and encourage cooperation among participants.
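A minimal sketch of the bookkeeping described above (purely illustrative, not an existing timebank system): every exchange is recorded as a matching credit to the provider and debit to the receiver, one credit per hour regardless of the service. The names and amounts are invented.

```python
# Minimal mutual-credit ledger for time credits.
from collections import defaultdict

balances = defaultdict(float)          # member name -> time-credit balance

def record_exchange(provider: str, receiver: str, hours: float) -> None:
    """Credit the provider and debit the receiver by the same number of hours."""
    balances[provider] += hours
    balances[receiver] -= hours

record_exchange("Alice", "Bob", 2.0)   # Alice gives Bob 2 hours of child care
record_exchange("Bob", "Carol", 1.5)   # Bob gives Carol 1.5 hours of home repair
print(dict(balances))                  # {'Alice': 2.0, 'Bob': -0.5, 'Carol': -1.5}
# Balances always sum to zero: credits are created only by matched exchanges.
```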
Timebanks
Timebanks have been established in 34 countries, with at least 500 timebanks established in 40 US states and 300 throughout the United Kingdom. TimeBanks also have a significant presence in Japan, South Korea, New Zealand, Taiwan, Senegal, Argentina, Israel, Greece, and Spain. TimeBanks have been used to reduce recidivism rates with diversionary programs for first-time juvenile offenders; facilitate re-entry for ex-convicts; deliver health care, job training and social services in public housing complexes; facilitate substance abuse recovery; prevent institutionalization of severely disabled children through parental support networks; provide transportation for homebound seniors in rural areas; deliver elder care, community health services and hospice care; and foster women's rights initiatives in Senegal.
Timebanking
Timebanking is a pattern of reciprocal service exchange that uses units of time as currency. It is an example of a complementary monetary system. A timebank, also known as a service exchange, is a community that practices time banking. The unit of currency, always valued at an hour's worth of any person's labor, used by these groups has various names but is generally known as a time credit in the US and the UK (formerly a time dollar in the US). Timebanking is primarily used to provide incentives and rewards for work such as mentoring children, caring for the elderly, being neighborly—work usually done on a volunteer basis—which a pure market system devalues. Essentially, the "time" one spends providing these types of community services earns "time" that one can spend to receive services. As well as gaining credits, participating individuals, particularly those more used to being recipients in other parts of their lives, can potentially gain confidence, social contact and skills through giving to others. Communities, therefore, use time banking as a tool to forge stronger intra-community connections, a process known as "building social capital". Timebanking had its intellectual genesis in the US in the early 1980s. By 1990, the Robert Wood Johnson Foundation had invested US$1.2 million to pilot time banking in the context of senior care. Today, 26 countries have active TimeBanks. There are 250 TimeBanks active in the UK and over 276 TimeBanks in the U.S.
Timebanking and the timebank
Timebank members earn credit in Time Dollars for each hour they spend helping other members of the community. Services offered by members in timebanks include: Child Care, Legal Assistance, Language Lessons, Home Repair, and Respite Care for caregivers, among other things. Time Dollars AKA time credits earned are then recorded at the timebank to be accessed when desired. A Timebank can theoretically be as simple as a pad of paper, but the system was originally intended to take advantage of computer databases for record keeping. Some Timebanks employ a paid coordinator to keep track of transactions and to match requests for services with those who can provide them. Other Timebanks select a member or a group of members to handle these tasks. Various organizations provide specialized software to help local Timebanks manage exchanges. The same organizations also often offer consulting services, training, and other materials for individuals or organizations looking to start timebanks of their own.
Example services offered by timebank members
The mission of an individual timebank influences exactly which services are offered. In some places, timebanking is adopted as a means to strengthen the community as a whole. Other timebanks are more oriented towards social service, systems change, and helping underprivileged groups. In some timebanks, both are acknowledged goals.
Time credit
The time credit is the fundamental unit of exchange in a timebank, equal to one hour of a person's labor. In traditional timebanks, one hour of one person's time is equal to one hour of another's. Time credits are earned for providing services and spent receiving services. Upon earning a time credit, a person does not need to spend it right away: they can save it indefinitely. However, since the value of a time credit is fixed at one hour, it resists inflation and does not earn interest. In these ways it is intentionally designed to differ from the traditional fiat currency used in most countries. Consequently, it does little good to hoard time credits and, in practice, many timebanks also encourage the donation of excess time credits to a community pool which is then spent for those in need or on community events.
Criticisms
Some criticisms of timebanking have focused on the time credit's inadequacies as a form of currency and as a market information mechanism. Frank Fisher of MIT predicted in the 1980s that such a currency "would lead to the kind of distortion of market forces which had crippled Russia's economy."
Dr. Gill Seyfang's study of the Gorbals TimeBank—one of the few studies of timebanking done by the academic community—listed several other non-theoretical problems with timebanking. The first is the difficulty of communicating to potential members exactly what makes timebanking different, or "getting people to understand the difference between timebanking and traditional volunteering." She also notes that there is no guarantee that every person's needs will be provided for by a timebank by dint of the fact that the supply of certain skills may be lacking in a community.
One of the most stringent criticisms of timebanking is its organizational sustainability. While some member-run TimeBanks with relatively low overhead costs do exist, others pay a staff to keep the organization running. This can be quite expensive for smaller organizations and without a long-term source of funding, they may fold.
Timebanking around the world
Global timebanking
In 2013 TimeRepublik launched the first global Timebank. Its aim is to eliminate geographical limitations of previous timebanks.
Since 2015 TimeRepublik has been promoting Time Banking within local governments, municipalities, universities, and large companies.
In 2017 TimeRepublik won the first prize at the BAI Global Innovation Awards in the Innovation and Human Capital category.
The Community Exchange System (CES) is a global network of communities using alternative exchange systems, many of which use timebanks. Timebanks can trade with each other wherever they are, as well as with mutual credit exchanges. The system uses a base 'currency' of one hour, and the conversion rates between the different exchange groups are based on national average hourly wage rates. This allows timebanks to trade with mutual credit exchanges in the same or different countries.
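A short worked sketch of the wage-rate conversion described for CES (the rates and country names below are invented for illustration, not real statistics): an hour earned in one exchange is valued in another exchange's local-currency-denominated credits through the two national average hourly wages.

```python
# Illustrative conversion between exchange groups via national average hourly wages.
avg_wage = {"country_A": 20.0, "country_B": 8.0}   # local currency units per hour (made up)

def hours_to_local_credits(hours: float, country: str) -> float:
    return hours * avg_wage[country]

def convert_credits(amount: float, from_country: str, to_country: str) -> float:
    hours = amount / avg_wage[from_country]          # value expressed in hours
    return hours * avg_wage[to_country]

print(hours_to_local_credits(1.0, "country_A"))        # 20.0 credits in A = 1 hour
print(convert_credits(20.0, "country_A", "country_B")) # 8.0 credits in B = the same hour
```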
Studies and examples
Elderplan
Elderplan was a social HMO which incorporated timebanking as a way to promote active, engaged lifestyles for its older members. Funding for the "social" part of social HMOs has since dried up and much of the program has been cut, but at its height, members were able to pay portions of their premiums in time credits (back then called Time Dollars) instead of hard currency. The idea was to encourage older people to become more engaged in their communities while also to ask for help more often and "[foster] dignity by allowing people to contribute services as well as receive them."
Gorbals timebank study
In 2004, Dr. Gill Seyfang published a study in the Community Development Journal about the effects of a timebank located in the Gorbals area of Glasgow, Scotland, "an inner-city estate characterized by high levels of deprivation, poverty, unemployment, poor health and low educational attainment." The Gorbals Timebank is run by a local charity with the intent to combat the social ills that face the region. Seyfang concluded that the timebank was effective at "building community capacity" and "promoting social inclusion." She highlights the timebank's success at "[re-stitching] the social fabric of the Gorbals." by "[boosting] engagement in existing projects and activities" in a variety of projects including a community safety network, a library, a healthy living project, and a theatre. She writes that "the timebank had enabled people to access help they otherwise would have had to do without," help which included home repair, gardening, a funeral, and tuition paid in time credits to a continuing education course.
Timebank Florianópolis
The Time Bank of the City of Florianópolis (BTF) is one of the first and best known Time Banks in Brazil. The initiative was conceived in September 2015 at a local Zeitgeist meeting, part of the international sustainability movement. BTF works from a Facebook group that has more than 20,000 members, and exchanges are counted in a spreadsheet shared with users. Scientific research on BTF indicates that the time bank is a means for creating social capital in local society and that BTF members have different socioeconomic characteristics compared to residents of the city of Florianópolis. Younger, non-white, employed, female individuals, working in the informal sector, with a higher education level and with a higher monthly income are more likely to be BTF members.
Spice Timebank
Spice is a social enterprise that has developed a time-based currency called Time Credits. Spice works across health and social care, housing, community development and education, supporting organisations and services to use Time Credits to achieve their outcomes. Spice grew out of the work of the Wales Institute for Community Currencies in the former mining districts of South Wales, UK.
Several studies have been based on the Spice timebank or have referenced it. In a 2016 survey of 1,000 Spice timebank members, 77% of respondents said Time Credits have had a positive impact on their quality of life, 42% reported that they had learned a new skill and 30% reported that they had less need to go to the doctor.
See also
Cincinnati Time Store
Collaborative finance
Community currency
Community Exchange System (CES)
Coproduction of public services by service users and communities
Fiscal localism
Labour theory of value
Labour-time voucher
Local exchange trading system (LETS)
References
Further reading
Cahn, Edgar S. (1992). Time Dollars: The New Currency That Enables Americans to Turn Their Hidden Resource Time Into Personal Security and Community Renewal. Emmaus, Penn.: Rodale Press.
External links
TimeBanking on YouTube
Economics and time
Local currencies | Time-based currency | [
"Physics"
] | 4,383 | [
"Spacetime",
"Economics and time",
"Physical quantities",
"Time"
] |
364,380 | https://en.wikipedia.org/wiki/Quantum%20foam | Quantum foam (or spacetime foam, or spacetime bubble) is a theoretical quantum fluctuation of spacetime on very small scales due to quantum mechanics. The theory predicts that at this small scale, particles of matter and antimatter are constantly created and destroyed. These subatomic objects are called virtual particles. The idea was devised by John Wheeler in 1955.
Background
With an incomplete theory of quantum gravity, it is impossible to be certain what spacetime looks like at small scales. However, there is no definitive reason that spacetime needs to be fundamentally smooth. It is possible that instead, in a quantum theory of gravity, spacetime would consist of many small, ever-changing regions in which space and time are not definite, but fluctuate in a foam-like manner.
Wheeler suggested that the uncertainty principle might imply that over sufficiently small distances and sufficiently brief intervals of time, the "very geometry of spacetime fluctuates". These fluctuations could be large enough to cause significant departures from the smooth spacetime seen at macroscopic scales, giving spacetime a "foamy" character.
Experimental results
Experimental verification of the Casimir effect, which may be caused by virtual particles, provides strong evidence for the existence of virtual particles. Measurements of the anomalous magnetic moments (g-2) of the muon and the electron also support their existence.
In 2005, during observations of gamma-ray photons arriving from the blazar Markarian 501, MAGIC (Major Atmospheric Gamma-ray Imaging Cherenkov) telescopes detected that some of the photons at different energy levels arrived at different times, suggesting that some of the photons had moved more slowly and thus were in violation of special relativity's notion that the speed of light is constant, a discrepancy which could be explained by the irregularity of quantum foam. Subsequent experiments were, however, unable to confirm the supposed variation on the speed of light due to graininess of space.
Other experiments involving the polarization of light from distant gamma ray bursts have also produced contradictory results. More Earth-based experiments are ongoing or proposed.
Constraints on the size of quantum fluctuations
The fluctuations characteristic of a spacetime foam would be expected to occur on a length scale on the order of the Planck length (≈ 10−35 m), but some models of quantum gravity predict much larger fluctuations.
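For reference, the Planck length quoted above is built from the reduced Planck constant, Newton's gravitational constant and the speed of light (a standard definition, not a claim specific to any of the models discussed here):

```latex
\ell_{\mathrm P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\ \mathrm{m}
```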
Photons should be slowed by quantum foam, with the rate depending on the wavelength of the photons. This would violate Lorentz invariance. But observations of radiation from nearby quasars by Floyd Stecker of NASA's Goddard Space Flight Center failed to find evidence of violation of Lorentz invariance.
A foamy spacetime also sets limits on the accuracy with which distances can be measured because photons should diffuse randomly through a spacetime foam, similar to light diffusing by passing through fog. This should cause the image quality of very distant objects observed through telescopes to degrade. X-ray and gamma-ray observations of quasars using NASA's Chandra X-ray Observatory, the Fermi Gamma-ray Space Telescope and ground-based gamma-ray observations from the Very Energetic Radiation Imaging Telescope Array (VERITAS) showed no detectable degradation at the farthest observed distances, implying that spacetime is smooth at least down to distances 1000 times smaller than the nucleus of a hydrogen atom, setting a bound on the size of quantum fluctuations of spacetime.
Relation to other theories
The vacuum fluctuations provide vacuum with a non-zero energy known as vacuum energy.
Spin foam theory is a modern attempt to make Wheeler's idea quantitative.
See also
False vacuum
Geon
Hawking radiation
Holographic principle
Loop quantum gravity
Lorentzian wormhole
Planck time
Stochastic quantum mechanics
String theory
Wormhole
Virtual black hole
Notes
References
Minkel, J. R. (24 November 2003). "Borrowed Time: Interview with Michio Kaku". Scientific American
Swarup, A. (2006). "Sights set on quantum froth". New Scientist, 189, p. 18, accessed 10 February 2012
Quantum gravity
Wormhole theory | Quantum foam | [
"Physics",
"Astronomy"
] | 837 | [
"Astronomical hypotheses",
"Unsolved problems in physics",
"Quantum gravity",
"Physics beyond the Standard Model",
"Wormhole theory"
] |
364,774 | https://en.wikipedia.org/wiki/Conformal%20field%20theory | A conformal field theory (CFT) is a quantum field theory that is invariant under conformal transformations. In two dimensions, there is an infinite-dimensional algebra of local conformal transformations, and conformal field theories can sometimes be exactly solved or classified.
Conformal field theory has important applications to condensed matter physics, statistical mechanics, quantum statistical mechanics, and string theory. Statistical and condensed matter systems are indeed often conformally invariant at their thermodynamic or quantum critical points.
Scale invariance vs conformal invariance
In quantum field theory, scale invariance is a common and natural symmetry, because any fixed point of the renormalization group is by definition scale invariant. Conformal symmetry is stronger than scale invariance, and one needs additional assumptions to argue that it should appear in nature. The basic idea behind its plausibility is that local scale invariant theories have their currents given by where is a Killing vector and is a conserved operator (the stress-tensor) of dimension exactly . For the associated symmetries to include scale but not conformal transformations, the trace has to be a non-zero total derivative implying that there is a non-conserved operator of dimension exactly .
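The displayed expressions in this paragraph were lost in extraction. In standard notation (an assumption about the intended formulas, with T the stress tensor, epsilon a conformal Killing vector and V the non-conserved "virial" operator of dimension d − 1 alluded to above), the currents and the trace condition read:

```latex
j^{\mu}_{\epsilon} = \epsilon_{\nu}\,T^{\mu\nu},
\qquad
T^{\mu}{}_{\mu} = \partial_{\mu} V^{\mu}
```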
Under some assumptions it is possible to completely rule out this type of non-renormalization and hence prove that scale invariance implies conformal invariance in a quantum field theory, for example in unitary compact conformal field theories in two dimensions.
While it is possible for a quantum field theory to be scale invariant but not conformally invariant, examples are rare. For this reason, the terms are often used interchangeably in the context of quantum field theory.
Two dimensions vs higher dimensions
The number of independent conformal transformations is infinite in two dimensions, and finite in higher dimensions. This makes conformal symmetry much more constraining in two dimensions. All conformal field theories share the ideas and techniques of the conformal bootstrap. But the resulting equations are more powerful in two dimensions, where they are sometimes exactly solvable (for example in the case of minimal models), in contrast to higher dimensions, where numerical approaches dominate.
The development of conformal field theory has been earlier and deeper in the two-dimensional case, in particular after the 1983 article by Belavin, Polyakov and Zamolodchikov.
The term conformal field theory has sometimes been used with the meaning of two-dimensional conformal field theory, as in the title of a 1997 textbook.
Higher-dimensional conformal field theories have become more popular with the AdS/CFT correspondence in the late 1990s, and the development of numerical conformal bootstrap techniques in the 2000s.
Global vs local conformal symmetry in two dimensions
The global conformal group of the Riemann sphere is the group of Möbius transformations , which is finite-dimensional.
On the other hand, infinitesimal conformal transformations form the infinite-dimensional Witt algebra: the conformal Killing equations in two dimensions reduce to the Cauchy–Riemann equations; the infinity of modes of arbitrary analytic coordinate transformations yields an infinity of Killing vector fields.
Strictly speaking, it is possible for a two-dimensional conformal field theory to be local (in the sense of possessing a stress-tensor) while still only exhibiting invariance under the global . This turns out to be unique to non-unitary theories; an example is the biharmonic scalar. This property should be viewed as even more special than scale without conformal invariance as it requires to be a total second derivative.
Global conformal symmetry in two dimensions is a special case of conformal symmetry in higher dimensions, and is studied with the same techniques. This is done not only in theories that have global but not local conformal symmetry, but also in theories that do have local conformal symmetry, for the purpose of testing techniques or ideas from higher-dimensional CFT. In particular, numerical bootstrap techniques can be tested by applying them to minimal models, and comparing the results with the known analytic results that follow from local conformal symmetry.
Conformal field theories with a Virasoro symmetry algebra
In a conformally invariant two-dimensional quantum theory, the Witt algebra of infinitesimal conformal transformations has to be centrally extended. The quantum symmetry algebra is therefore the Virasoro algebra, which depends on a number called the central charge. This central extension can also be understood in terms of a conformal anomaly.
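In terms of generators, a standard choice of basis (normalizations of the central term vary between references) is $\ell_n = -z^{n+1}\partial_z$ for the Witt algebra and $L_n$ for its quantum counterparts, with
\[
[\ell_m,\ell_n]=(m-n)\,\ell_{m+n},\qquad
[L_m,L_n]=(m-n)\,L_{m+n}+\frac{c}{12}\,m(m^2-1)\,\delta_{m+n,0},
\]
where $c$ is the central charge.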
It was shown by Alexander Zamolodchikov that there exists a function which decreases monotonically under the renormalization group flow of a two-dimensional quantum field theory, and is equal to the central charge for a two-dimensional conformal field theory. This is known as the Zamolodchikov C-theorem, and tells us that renormalization group flow in two dimensions is irreversible.
In addition to being centrally extended, the symmetry algebra of a conformally invariant quantum theory has to be complexified, resulting in two copies of the Virasoro algebra.
In Euclidean CFT, these copies are called holomorphic and antiholomorphic. In Lorentzian CFT, they are called left-moving and right-moving. Both copies have the same central charge.
The space of states of a theory is a representation of the product of the two Virasoro algebras. This space is a Hilbert space if the theory is unitary.
This space may contain a vacuum state, or in statistical mechanics, a thermal state. Unless the central charge vanishes, there cannot exist a state that leaves the entire infinite-dimensional conformal symmetry unbroken. The best we can have is a state that is invariant under the generators $L_{-1}, L_0, L_1$ of the Virasoro algebra (and their antiholomorphic counterparts). These contain the generators of the global conformal transformations. The rest of the conformal group is spontaneously broken.
Conformal symmetry
Definition and Jacobian
For a given spacetime and metric, a conformal transformation is a transformation that preserves angles. We will focus on conformal transformations of the flat $d$-dimensional Euclidean space $\mathbb{R}^d$ or of the Minkowski space $\mathbb{R}^{1,d-1}$.
If $x\mapsto x'(x)$ is a conformal transformation, the Jacobian is of the form
\[
\frac{\partial x'^{\mu}}{\partial x^{\nu}} = \Omega(x)\, R^{\mu}_{\ \nu}(x),
\]
where $\Omega(x)$ is the scale factor, and $R^{\mu}_{\ \nu}(x)$ is a rotation (i.e. an orthogonal matrix) or Lorentz transformation.
Conformal group
The conformal group is locally isomorphic to $SO(1,d+1)$ (Euclidean) or $SO(2,d)$ (Minkowski). This includes translations, rotations (Euclidean) or Lorentz transformations (Minkowski), and dilations i.e. scale transformations
\[ x \mapsto \lambda x . \]
This also includes special conformal transformations. For any translation $T_b(x) = x + b$, there is a special conformal transformation
\[ S_b = I \circ T_b \circ I , \]
where $I$ is the inversion such that
\[ I(x) = \frac{x}{x^2} . \]
In the sphere $\mathbb{R}^d \cup \{\infty\}$, the inversion exchanges $0$ with $\infty$. Translations leave $\infty$ fixed, while special conformal transformations leave $0$ fixed.
Conformal algebra
The commutation relations of the corresponding Lie algebra are
where $P_\mu$ generate translations, $D$ generates dilations, $K_\mu$ generate special conformal transformations, and $M_{\mu\nu}$ generate rotations or Lorentz transformations. The tensor $\eta_{\mu\nu}$ is the flat metric.
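In one common convention (overall signs and factors of $i$ depend on hermiticity conventions for the generators), these commutation relations read
\[
[D,P_\mu]=P_\mu,\qquad [D,K_\mu]=-K_\mu,\qquad [K_\mu,P_\nu]=2\eta_{\mu\nu}D-2M_{\mu\nu},
\]
\[
[M_{\mu\nu},P_\rho]=\eta_{\nu\rho}P_\mu-\eta_{\mu\rho}P_\nu,\qquad
[M_{\mu\nu},K_\rho]=\eta_{\nu\rho}K_\mu-\eta_{\mu\rho}K_\nu,
\]
with all other commutators involving $D$, $P_\mu$ and $K_\mu$ vanishing, and $M_{\mu\nu}$ obeying the usual rotation or Lorentz algebra.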
Global issues in Minkowski space
In Minkowski space, the conformal group does not preserve causality. Observables such as correlation functions are invariant under the conformal algebra, but not under the conformal group. As shown by Lüscher and Mack, it is possible to restore the invariance under the conformal group by extending the flat Minkowski space into a Lorentzian cylinder. The original Minkowski space is conformally equivalent to a region of the cylinder called a Poincaré patch. In the cylinder, global conformal transformations do not violate causality: instead, they can move points outside the Poincaré patch.
Correlation functions and conformal bootstrap
In the conformal bootstrap approach, a conformal field theory is a set of correlation functions that obey a number of axioms.
The $n$-point correlation function is a function of the positions $x_i$ and other parameters of the fields $\phi_1,\ldots,\phi_n$. In the bootstrap approach, the fields themselves make sense only in the context of correlation functions, and may be viewed as efficient notations for writing axioms for correlation functions. Correlation functions depend linearly on fields, in particular $\langle \partial_{x_1}\phi_1(x_1)\cdots\rangle = \partial_{x_1}\langle \phi_1(x_1)\cdots\rangle$.
We focus on CFT on the Euclidean space $\mathbb{R}^d$. In this case, correlation functions are Schwinger functions. They are defined for non-coinciding positions $x_i \neq x_j$, and do not depend on the order of the fields. In Minkowski space, correlation functions are Wightman functions. They can depend on the order of the fields, as fields commute only if they are spacelike separated. A Euclidean CFT can be related to a Minkowskian CFT by Wick rotation, for example thanks to the Osterwalder-Schrader theorem. In such cases, Minkowskian correlation functions are obtained from Euclidean correlation functions by an analytic continuation that depends on the order of the fields.
Behaviour under conformal transformations
Any conformal transformation acts linearly on fields , such that is a representation of the conformal group, and correlation functions are invariant:
Primary fields are fields that transform into themselves via . The behaviour of a primary field is characterized by a number called its conformal dimension, and a representation of the rotation or Lorentz group. For a primary field, we then have
Here $\Omega(x)$ and $R(x)$ are the scale factor and rotation that are associated to the conformal transformation $x\mapsto x'(x)$. The representation is trivial in the case of scalar fields, which transform as
\[ \phi'(x') = \Omega(x)^{-\Delta}\,\phi(x) . \]
For vector fields, the representation is the fundamental representation, and we would have
\[ \phi'_{\mu}(x') = \Omega(x)^{-\Delta}\, R_{\mu}^{\ \nu}(x)\, \phi_{\nu}(x) . \]
A primary field that is characterized by the conformal dimension and representation behaves as a highest-weight vector in an induced representation of the conformal group from the subgroup generated by dilations and rotations. In particular, the conformal dimension characterizes a representation of the subgroup of dilations. In two dimensions, the fact that this induced representation is a Verma module appears throughout the literature. For higher-dimensional CFTs (in which the maximally compact subalgebra is larger than the Cartan subalgebra), it has recently been appreciated that this representation is a parabolic or generalized Verma module.
Derivatives (of any order) of primary fields are called descendant fields. Their behaviour under conformal transformations is more complicated. For example, if is a primary field, then is a linear combination of and . Correlation functions of descendant fields can be deduced from correlation functions of primary fields. However, even in the common case where all fields are either primaries or descendants thereof, descendant fields play an important role, because conformal blocks and operator product expansions involve sums over all descendant fields.
The collection of all primary fields , characterized by their scaling dimensions and the representations , is called the spectrum of the theory.
Dependence on field positions
The invariance of correlation functions under conformal transformations severely constrain their dependence on field positions. In the case of two- and three-point functions, that dependence is determined up to finitely many constant coefficients. Higher-point functions have more freedom, and are only determined up to functions of conformally invariant combinations of the positions.
The two-point function of two primary fields vanishes if their conformal dimensions differ.
If the dilation operator is diagonalizable (i.e. if the theory is not logarithmic), there exists a basis of primary fields such that two-point functions are diagonal, i.e. $\langle O_i\, O_j\rangle \propto \delta_{ij}$.
In this case, the two-point function of a scalar primary field of dimension $\Delta$ is
\[ \langle\phi(x_1)\,\phi(x_2)\rangle = \frac{1}{|x_1 - x_2|^{2\Delta}} , \]
where we choose the normalization of the field such that the constant coefficient, which is not determined by conformal symmetry, is one. Similarly, two-point functions of non-scalar primary fields are determined up to a coefficient, which can be set to one. In the case of a symmetric traceless tensor of rank $\ell$, the two-point function is
\[ \langle O_{\mu_1\cdots\mu_\ell}(x_1)\, O_{\nu_1\cdots\nu_\ell}(x_2)\rangle = \frac{I_{\mu_1\nu_1}(x_{12})\cdots I_{\mu_\ell\nu_\ell}(x_{12}) + \text{permutations} - \text{traces}}{|x_{12}|^{2\Delta}} , \]
where the tensor $I_{\mu\nu}(x)$ is defined as
\[ I_{\mu\nu}(x) = \eta_{\mu\nu} - \frac{2\, x_\mu x_\nu}{x^2} . \]
The three-point function of three scalar primary fields is
\[
\langle\phi_1(x_1)\,\phi_2(x_2)\,\phi_3(x_3)\rangle = \frac{C_{123}}{|x_{12}|^{\Delta_1+\Delta_2-\Delta_3}\;|x_{23}|^{\Delta_2+\Delta_3-\Delta_1}\;|x_{13}|^{\Delta_1+\Delta_3-\Delta_2}} ,
\]
where $x_{ij}=x_i-x_j$, and $C_{123}$ is a three-point structure constant. With primary fields that are not necessarily scalars, conformal symmetry allows a finite number of tensor structures, and there is a structure constant for each tensor structure. In the case of two scalar fields and a symmetric traceless tensor of rank $\ell$, there is only one tensor structure, and the three-point function is
where we introduce the vector
Four-point functions of scalar primary fields are determined up to arbitrary functions $g(u,v)$ of the two cross-ratios
\[ u=\frac{x_{12}^2\,x_{34}^2}{x_{13}^2\,x_{24}^2}, \qquad v=\frac{x_{14}^2\,x_{23}^2}{x_{13}^2\,x_{24}^2} . \]
The four-point function is then the product of such a function with a kinematic prefactor that is fixed by conformal symmetry.
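For instance, for four identical scalar primaries of dimension $\Delta$, one common way of writing the decomposition (conventions for the kinematic prefactor differ when the four dimensions are unequal) is
\[
\langle\phi(x_1)\,\phi(x_2)\,\phi(x_3)\,\phi(x_4)\rangle=\frac{g(u,v)}{|x_{12}|^{2\Delta}\,|x_{34}|^{2\Delta}} .
\]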
Operator product expansion
The operator product expansion (OPE) is more powerful in conformal field theory than in more general quantum field theories. This is because in conformal field theory, the operator product expansion's radius of convergence is finite (i.e. it is not zero). Provided the positions of two fields are close enough, the operator product expansion rewrites the product of these two fields as a linear combination of fields at a given point, which can be chosen as for technical convenience.
The operator product expansion of two fields takes the form
where is some coefficient function, and the sum in principle runs over all fields in the theory. (Equivalently, by the state-field correspondence, the sum runs over all states in the space of states.) Some fields may actually be absent, in particular due to constraints from symmetry: conformal symmetry, or extra symmetries.
If all fields are primary or descendant, the sum over fields can be reduced to a sum over primaries, by rewriting the contributions of any descendant in terms of the contribution of the corresponding primary:
where the fields are all primary, and is the three-point structure constant (which for this reason is also called OPE coefficient). The differential operator is an infinite series in derivatives, which is determined by conformal symmetry and therefore in principle known.
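Schematically, organizing the sum over primaries, the OPE of two scalar primaries can be written as follows, where $C_{12p}$ is the structure constant and $\Delta_p$ the dimension of the exchanged primary $\phi_p$ (the normalization of the coefficient functions is convention dependent):
\[
\phi_1(x)\,\phi_2(0)=\sum_{p}C_{12p}\,|x|^{\Delta_p-\Delta_1-\Delta_2}\Big(\phi_p(0)+\text{contributions of descendants of }\phi_p\Big) .
\]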
Viewing the OPE as a relation between correlation functions shows that the OPE must be associative. Furthermore, if the space is Euclidean, the OPE must be commutative, because correlation functions do not depend on the order of the fields.
The existence of the operator product expansion is a fundamental axiom of the conformal bootstrap. However, it is generally not necessary to compute operator product expansions and in particular the differential operators . Rather, it is the decomposition of correlation functions into structure constants and conformal blocks that is needed.
The OPE can in principle be used for computing conformal blocks, but in practice there are more efficient methods.
Conformal blocks and crossing symmetry
Using the OPE , a four-point function can be written as a combination of three-point structure constants and s-channel conformal blocks,
The conformal block is the sum of the contributions of the primary field and its descendants. It depends on the fields and their positions. If the three-point functions or involve several independent tensor structures, the structure constants and conformal blocks depend on these tensor structures, and the primary field contributes several independent blocks. Conformal blocks are determined by conformal symmetry, and known in principle. To compute them, there are recursion relations and integrable techniques.
Using the OPE or , the same four-point function is written in terms of t-channel conformal blocks or u-channel conformal blocks,
The equality of the s-, t- and u-channel decompositions is called crossing symmetry: a constraint on the spectrum of primary fields, and on the three-point structure constants.
Conformal blocks obey the same conformal symmetry constraints as four-point functions. In particular, s-channel conformal blocks can be written in terms of functions of the cross-ratios. While the OPE only converges if , conformal blocks can be analytically continued to all (non pairwise coinciding) values of the positions. In Euclidean space, conformal blocks are single-valued real-analytic functions of the positions except when the four points lie on a circle but in a singly-transposed cyclic order [1324], and only in these exceptional cases does the decomposition into conformal blocks not converge.
A conformal field theory in flat Euclidean space is thus defined by its spectrum and OPE coefficients (or three-point structure constants) , satisfying the constraint that all four-point functions are crossing-symmetric. From the spectrum and OPE coefficients (collectively referred to as the CFT data), correlation functions of arbitrary order can be computed.
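For four identical scalars of dimension $\Delta_\phi$, equating the s- and t-channel decompositions yields the sum rule that underlies numerical bootstrap studies. In one standard normalization, with the sum running over exchanged primaries $O$ of dimension $\Delta$ and spin $\ell$ (including the identity field) and squared structure constants $C_{\phi\phi O}^2$, it reads
\[
\sum_{O}C_{\phi\phi O}^{2}\left[v^{\Delta_\phi}\,g_{\Delta,\ell}(u,v)-u^{\Delta_\phi}\,g_{\Delta,\ell}(v,u)\right]=0 .
\]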
Features
Unitarity
A conformal field theory is unitary if its space of states has a positive definite scalar product such that the dilation operator is self-adjoint. Then the scalar product endows the space of states with the structure of a Hilbert space.
In Euclidean conformal field theories, unitarity is equivalent to reflection positivity of correlation functions: one of the Osterwalder-Schrader axioms.
Unitarity implies that the conformal dimensions of primary fields are real and bounded from below. The lower bound depends on the spacetime dimension $d$, and on the representation of the rotation or Lorentz group in which the primary field transforms. For scalar fields, the unitarity bound is
\[ \Delta \geq \frac{d-2}{2} . \]
In a unitary theory, three-point structure constants must be real, which in turn implies that four-point functions obey certain inequalities. Powerful numerical bootstrap methods are based on exploiting these inequalities.
Compactness
A conformal field theory is compact if it obeys three conditions:
All conformal dimensions are real.
For any there are finitely many states whose dimensions are less than .
There is a unique state with the dimension , and it is the vacuum state, i.e. the corresponding field is the identity field.
(The identity field is the field whose insertion into correlation functions does not modify them, i.e. .) The name comes from the fact that if a 2D conformal field theory is also a sigma model, it will satisfy these conditions if and only if its target space is compact.
It is believed that all unitary conformal field theories are compact in dimension . Without unitarity, on the other hand, it is possible to find CFTs in dimension four and in dimension that have a continuous spectrum. And in dimension two, Liouville theory is unitary but not compact.
Extra symmetries
A conformal field theory may have extra symmetries in addition to conformal symmetry. For example, the Ising model has a symmetry, and superconformal field theories have supersymmetry.
Examples
Mean field theory
A generalized free field is a field whose correlation functions are deduced from its two-point function by Wick's theorem. For instance, if $\phi$ is a scalar primary field of dimension $\Delta$, its four-point function reads
\[
\langle\phi(x_1)\,\phi(x_2)\,\phi(x_3)\,\phi(x_4)\rangle=\frac{1}{|x_{12}|^{2\Delta}|x_{34}|^{2\Delta}}+\frac{1}{|x_{13}|^{2\Delta}|x_{24}|^{2\Delta}}+\frac{1}{|x_{14}|^{2\Delta}|x_{23}|^{2\Delta}} .
\]
For instance, if $\phi_1,\phi_2$ are two scalar primary fields such that $\langle\phi_1\phi_2\rangle = 0$ (which is the case in particular if $\Delta_1\neq\Delta_2$), we have the four-point function
\[
\langle\phi_1(x_1)\,\phi_1(x_2)\,\phi_2(x_3)\,\phi_2(x_4)\rangle=\frac{1}{|x_{12}|^{2\Delta_1}|x_{34}|^{2\Delta_2}} .
\]
Mean field theory is a generic name for conformal field theories that are built from generalized free fields. For example, a mean field theory can be built from one scalar primary field $\phi$. Then this theory contains $\phi$, its descendant fields, and the fields that appear in the OPE $\phi\times\phi$. The primary fields that appear in $\phi\times\phi$ can be determined by decomposing the four-point function in conformal blocks: their conformal dimensions are of the form $2\Delta+2n+\ell$ with $n\in\mathbb{N}_0$ and $\ell$ the spin: in mean field theory, the conformal dimension is conserved modulo integers. Structure constants can be computed exactly in terms of the Gamma function.
Similarly, it is possible to construct mean field theories starting from a field with non-trivial Lorentz spin. For example, the 4d Maxwell theory (in the absence of charged matter fields) is a mean field theory built out of an antisymmetric tensor field with scaling dimension .
Mean field theories have a Lagrangian description in terms of a quadratic action involving Laplacian raised to an arbitrary real power (which determines the scaling dimension of the field). For a generic scaling dimension, the power of the Laplacian is non-integer. The corresponding mean field theory is then non-local (e.g. it does not have a conserved stress tensor operator).
Critical Ising model
The critical Ising model is the critical point of the Ising model on a hypercubic lattice in two or three dimensions. It has a global $\mathbb{Z}_2$ symmetry, corresponding to flipping all spins. The two-dimensional critical Ising model includes the $(4,3)$ Virasoro minimal model, which can be solved exactly. There is no Ising CFT in $d\geq 4$ dimensions.
Critical Potts model
The critical Potts model with colors is a unitary CFT that is invariant under the permutation group . It is a generalization of the critical Ising model, which corresponds to . The critical Potts model exists in a range of dimensions depending on .
The critical Potts model may be constructed as the continuum limit of the Potts model on d-dimensional hypercubic lattice. In the Fortuin-Kasteleyn reformulation in terms of clusters, the Potts model can be defined for , but it is not unitary if is not integer.
Critical O(N) model
The critical O(N) model is a CFT invariant under the orthogonal group $O(N)$. For any integer $N$, it exists as an interacting, unitary and compact CFT in $d=3$ dimensions (and for $N=1$ also in two dimensions). It is a generalization of the critical Ising model, which corresponds to the O(N) CFT at $N=1$.
The O(N) CFT can be constructed as the continuum limit of a lattice model with spins that are N-vectors, called the n-vector model.
Alternatively, the critical model can be constructed as the limit of Wilson-Fisher fixed point in dimensions. At , the Wilson-Fisher fixed point becomes the tensor product of free scalars with dimension . For the model in question is non-unitary.
When N is large, the O(N) model can be solved perturbatively in a 1/N expansion by means of the Hubbard–Stratonovich transformation. In particular, the limit of the critical O(N) model is well-understood.
The conformal data of the critical O(N) model are functions of N and of the dimension, on which many results are known.
Conformal gauge theories
Some conformal field theories in three and four dimensions admit a Lagrangian description in the form of a gauge theory, either abelian or non-abelian. Examples of such CFTs are conformal QED with sufficiently many charged fields in or the Banks-Zaks fixed point in .
Applications
Continuous phase transitions
Continuous phase transitions (critical points) of classical statistical physics systems with D spatial dimensions are often described by Euclidean conformal field theories. A necessary condition for this to happen is that the critical point should be invariant under spatial rotations and translations. However this condition is not sufficient: some exceptional critical points are described by scale invariant but not conformally invariant theories. If the classical statistical physics system is reflection positive, the corresponding Euclidean CFT describing its critical point will be unitary.
Continuous quantum phase transitions in condensed matter systems with D spatial dimensions may be described by Lorentzian D+1 dimensional conformal field theories (related by Wick rotation to Euclidean CFTs in D+1 dimensions). Apart from translation and rotation invariance, an additional necessary condition for this to happen is that the dynamical critical exponent z should be equal to 1. CFTs describing such quantum phase transitions (in absence of quenched disorder) are always unitary.
String theory
World-sheet description of string theory involves a two-dimensional CFT coupled to dynamical two-dimensional quantum gravity (or supergravity, in case of superstring theory). Consistency of string theory models imposes constraints on the central charge of this CFT, which should be c=26 in bosonic string theory and c=10 in superstring theory. Coordinates of the spacetime in which string theory lives correspond to bosonic fields of this CFT.
AdS/CFT correspondence
Conformal field theories play a prominent role in the AdS/CFT correspondence, in which a gravitational theory in anti-de Sitter space (AdS) is equivalent to a conformal field theory on the AdS boundary. Notable examples are d = 4, N = 4 supersymmetric Yang–Mills theory, which is dual to Type IIB string theory on AdS5 × S5, and d = 3, N = 6 super-Chern–Simons theory, which is dual to M-theory on AdS4 × S7. (The prefix "super" denotes supersymmetry, N denotes the degree of extended supersymmetry possessed by the theory, and d the number of space-time dimensions on the boundary.)
See also
Logarithmic conformal field theory
AdS/CFT correspondence
Operator product expansion
Critical point
Boundary conformal field theory
Primary field
Superconformal algebra
Conformal algebra
Conformal bootstrap
History of conformal field theory
References
Further reading
Martin Schottenloher, A Mathematical Introduction to Conformal Field Theory, Springer-Verlag, Berlin, Heidelberg, 1997; 2nd edition, 2008.
External links
Symmetry
Scaling symmetries
Mathematical physics | Conformal field theory | [
"Physics",
"Mathematics"
] | 5,100 | [
"Symmetry",
"Applied mathematics",
"Theoretical physics",
"Geometry",
"Mathematical physics",
"Scaling symmetries"
] |
364,950 | https://en.wikipedia.org/wiki/Dubna | Dubna () is a town in Moscow Oblast, Russia. It has a status of naukograd (i.e. town of science), being home to the Joint Institute for Nuclear Research, an international nuclear physics research center and one of the largest scientific foundations in the country. It is also home to MKB Raduga, a defense aerospace company specializing in design and production of missile systems, as well as to the Russia's largest satellite communications center owned by Russian Satellite Communications Company. The modern town was developed in the middle of the 20th century and town status was granted to it in 1956. Population:
Geography
The town is above sea level, situated approximately north of Moscow, on the Volga River, just downstream from the Ivankovo Reservoir. The reservoir is formed by a hydroelectric dam across the Volga situated within the town borders. The town lies on both banks of the Volga. The western boundary of the town is defined by the Moscow Canal joining the Volga, while the eastern boundary is defined by the Dubna River joining the Volga.
Dubna is the northernmost town of Moscow Oblast.
History
Pre-World War II
Fortress Dubna (), belonging to the Rostov-Suzdal Principality, was built in the area in 1132 by the order of Yuri Dolgoruki and existed until 1216. The fortress was destroyed during the feudal war between the sons of Vsevolod the Big Nest. The village of Gorodishche () was located on the right bank of the Volga River and was a part of the Kashin Principality. A Dubna customs post () was located in the area and was a part of the Principality of Tver.
Before the October Revolution, a few villages were in the area: Podberezye was on the left bank of the Volga, and Gorodishche, Alexandrovka, Ivankovo, Yurkino, and Kozlaki () were on the right bank.
Right after the Revolution one of the first collective farms was organized in Dubna area.
In 1931, the Orgburo of the Communist Party made a decision to build the Volga-Moscow Canal. Genrikh Yagoda, then the leader of the State Political Directorate, was put in charge of construction. The Canal was completed in 1937. Ivankovo Reservoir and Ivankovo hydroelectrical plant were also created as a part of the project. Many villages and the town Korcheva were submerged under water. Dubna is mentioned in Aleksandr Solzhenitsyn's book The Gulag Archipelago as the town built by Gulag prisoners.
Science
The decision to build a proton accelerator for nuclear research was taken by the Soviet government in 1946. An impractical place where the current town is situated was chosen due to remoteness from Moscow and the presence of the Ivankovo power plant nearby. The scientific leader was Igor Kurchatov. The general supervisor of the project including construction of a settlement, a road and a railway connecting it to Moscow (largely involving penal labour of Gulag inmates) was the NKVD chief Lavrentiy Beria. After three years of intensive work, the accelerator was commissioned on 13 December 1949.
The town of Dubna was officially inaugurated in 1956, together with the Joint Institute for Nuclear Research (JINR), which has developed into a large international research laboratory involved mainly in particle physics, heavy ion physics, synthesis of transuranium elements, and radiobiology. In 1960, a town of Ivankovo situated on the opposite (left) bank of the Volga was merged into Dubna. In 1964, Dubna hosted the prestigious International Conference on High Energy Physics.
Currently, a construction of the NICA particle collider, a megascience project is underway in Dubna.
Outstanding physicists of the 20th century including Nikolay Bogolyubov, Georgy Flyorov, Vladimir Veksler, and Bruno Pontecorvo used to work at the institute. A number of elementary particles and nuclei of transuranium elements (most recently, element 117) have been discovered and investigated there, leading to the honorary naming of chemical element 105 dubnium (Db) for the town.
Administrative and municipal status
Within the framework of administrative divisions, it is incorporated as Dubna Town Under Oblast Jurisdiction—an administrative unit with the status equal to that of the districts. As a municipal division, Dubna Town Under Oblast Jurisdiction is incorporated as Dubna Urban Okrug.
Demographics
Economics
Before the dissolution of the Soviet Union, JINR and MKB Raduga were the main employers in the town. Since then their role has decreased significantly. Several small industrial enterprises have emerged; however, the town still experiences some employment difficulties. Proximity to Moscow allows many residents to commute and work there. Plans by AFK Sistema and other investors, including government structures, have been announced to build a Russian analogue of Silicon Valley in Dubna. As of the beginning of 2007, nothing had commenced.
Transport
Dubna is the starting point of the Moscow Canal. In addition to the canal, Dubna is connected to Moscow with the А104 highway, and the Savyolovsky suburban railway line provides access to Moscow.
Public transport connections to Moscow include express trains, suburban trains, and bus shuttles departing from the Savyolovsky Rail Terminal.
Culture
Among the city's cultural facilities are: the Mir House of Culture, the Oktyabr Palace of Culture, a movie theater, 21 libraries, 4 music schools and a school of arts. In 1990, the Dubna Symphony Orchestra was established.
Museums
Museum of Archeology and Local History of Dubna
JINR Museum of the History of Science and Technology
Museum of Natural History at Dubna International University
Museum of Locks
Museum of Sports
Svetoch Culturohistorical Center
Cinema
A variety of movies and miniseries were filmed in the city, such as:
Volga-Volga (1938)
Ballad of Siberia (1948)
Nine Days in One Year (1962)
All Remains to People (1963)
Vasili and Vasilisa (1981)
Katya Ismailova (1994)
Law of the Lawless (2002)
Sports
Dubna is located on the Moscow Canal and the Ivankovo Reservoir, making it a good destination for water sports such as windsurfing, kitesurfing, and water skiing. In 2004, for the first time, a stage of the Water Ski World Cup took place in the city. In 2011, Dubna hosted the World Waterskiing Championships.
Dubna's sports facilities include two stadiums, a waterskiing stadium on the Volga River, four swimming pools, tennis courts, and five sports complexes.
Trivia
One of the world's tallest statues of Vladimir Lenin, high, built in 1937, is located at Dubna at the confluence of the Volga River and the Moscow Canal. The accompanying statue of Joseph Stalin of similar size was demolished in 1961 during the period of de-stalinization.
Twin towns and sister cities
Dubna is twinned with:
Giv'at Shmuel, Israel
La Crosse, Wisconsin, United States
Alushta, Ukraine
Kurchatov, Kazakhstan
Lincang, China
Nová Dubnica, Slovakia
Gallery
References
Notes
Sources
External links
Official website of Dubna
Dubna Business Directory
News of Dubna
Cities and towns in Moscow Oblast
Populated places established in 1956
Populated places on the Volga
Nuclear research institutes
Cities and towns built in the Soviet Union
Naukograds | Dubna | [
"Engineering"
] | 1,504 | [
"Nuclear research institutes",
"Nuclear organizations"
] |
12,121,068 | https://en.wikipedia.org/wiki/Open%20cluster%20remnant | In astronomy, an open cluster remnant (OCR) is the final stage in the evolution of an open star cluster.
Theory
Viktor Ambartsumian (1938) and Lyman Spitzer (1940) showed that, from a theoretical point of view, it was impossible for a star cluster to evaporate completely; furthermore, Spitzer pointed out two possible final results for the evolution of a star cluster: evaporation provokes physical collisions between stars, or evaporation proceeds until a stable binary or higher multiplicity system is produced.
Observations
Using objective-prism plates, Lodén (1987, 1988, 1993) has investigated the possible population of open cluster remnants in our Galaxy under the assumption that the stars in these clusters should have similar luminosity and spectral type. He found that about 30% of the objects in his sample could be catalogued as a possible type of cluster remnant. The membership for these objects is ≥ 15. The typical age of these systems is about 150 Myr with a range of 50-200 Myr. They show a significant density of binaries and a large number of optical binaries. The stars of these OCRs have a trend to be massive and hence early-type (A-F) stars although this observational method includes a noticeable selection effect because bright early-type spectra are easier to detect than fainter and later ones. In fact, almost no stars with spectral type later than F appear among his objects.
On the other hand, his results were not fully conclusive because there are known regions in the sky with many stars of the same spectral type but in which it is difficult to find two stars with the same proper motions or radial velocity. A striking example of this fact is Upgren 1; initially, it was suggested that this small group of seven F stars was the remnant of an old cluster (Upgren & Rubin 1965) but later, Gatewood et al. (1988) concluded that Upgren 1 is only a chance alignment of F stars resulting from the close passage of members of two dynamically different sets of stars.
Very recently, Stefanik et al. (1997) have shown that one of the sets is formed by 5 stars including a long-period binary and an unusual triple system.
Simulations
Regarding numerical simulations, for systems with some 25 to 250 stars, von Hoerner (1960, 1963), Aarseth (1968) and van Albada (1968) suggested that the final outcome of the evolution of an open cluster is one or more tightly bound binaries (or even a hierarchical triple system). Van Albada pointed out several observational candidates (σ Ori, ADS 12696, ρ Oph, 1 Cas, 8 Lac and 67 Oph) as being OCRs and Wielen (1975) indicated another one, the Ursa Major moving group (Collinder 285).
References
Aarseth, S. J.; 1968, Bull. Astron. Ser., 3, 3, 105
van Albada, T. S.; 1968, Bull. Astron. Inst. Neth., 19, 479
Ambartsumian, V. A.; 1938, Ann. Len. State Univ., # 22, 4, 19 (English translation in: Dynamics of Star Clusters, eds. J. Goodman, P. Hut, (Dordrecht: Reidel) p. 521)
Gatewood, G.; De Jonge, J. K.; Castelaz, M.; et al., 1988, ApJ, 332, 917
von Hoerner, S.; 1960, Z. Astrophys., 50, 184
von Hoerner, S.; 1963, Z. Astrophys., 57, 47
Lodén, L. O.; 1987, Ir. Astron. J., 18, 95
Lodén, L. O.; 1988, A&SS, 142, 177
Lodén, L. O.; 1993, A&SS, 199, 165
Spitzer, L.; 1940, MNRAS, 100, 397
Stefanik, R. P.; Caruso, J. R.; Torres, G.; Jha, S.; Latham, D. W.; 1997, Baltic Astronomy, 6, 137
Upgren, A. R.; Rubin V. C.; 1965, PASP, 77, 355
Wielen, R.; 1975, in: Dynamics of Stellar Systems, ed. A. Hayli, (Dordrecht: Reidel) p. 97
Further reading
Bica, E.; Santiago, B. X.; Dutra, C. M.; Dottori, H.; de Oliveira, M. R.; Pavani D., 2001, A&A, 366, 827-833
Carraro, G.; 2002, A&A, 385, 471-478
Carraro, G.; de la Fuente Marcos, Raúl; Villanova, S.; Moni Bidin, C.; de la Fuente Marcos, Carlos; Baumgardt, H.; Solivella, G.; 2007, A&A, 466, 931-941
Carraro, G.; 2006, Bulletin of the Astronomical Society of India, 34, 153-162
de la Fuente Marcos, Raúl; 1998, A&A, 333, L27-L30
de la Fuente Marcos, Raúl; de la Fuente Marcos, Carlos; Moni Bidin, C.; Carraro, G.; Costa, E.; 2013, MNRAS, 434, 194-208
Kouwenhoven, M. B. N.; Goodwin, S. P.; Parker, R. J.; Davies, M. B.; Malmberg, D.; Kroupa, P.; 2010, MNRAS, 404, 1835-1848
Moni Bidin, C.; de la Fuente Marcos, Raúl; de la Fuente Marcos, Carlos; Carraro, G.; 2010, A&A, 510, A44
Pavani, D. B.; Bica, E.; 2007, A&A, 468, 139-150
Pavani, D. B.; Bica, E.; Ahumada, A. V.; Clariá, J. J.; 2003, A&A, 399, 113-120
Pavani, D. B.; Bica, E.; Dutra, C. M.; Dottori, H.; Santiago, B. X.; Carranza, G.; Díaz, R. J.; 2001, A&A, 374, 554-563
Pavani, D. B.; Kerber, L. O.; Bica, E.; Maciel, W. J.; 2011, MNRAS, 412, 1611-1626
Villanova, S., Carraro, G.; de la Fuente Marcos, Raúl; Stagni, R.; 2004, A&A, 428, 67-77
Star clusters
Remnant
Stellar evolution | Open cluster remnant | [
"Physics",
"Astronomy"
] | 1,479 | [
"Star clusters",
"Astronomical objects",
"Astrophysics",
"Stellar evolution"
] |
12,123,967 | https://en.wikipedia.org/wiki/Heat%20loss%20due%20to%20linear%20thermal%20bridging | The heat loss due to linear thermal bridging () is a physical quantity used when calculating the energy performance of buildings. It appears in both United Kingdom and Irish methodologies.
Calculation
The calculation of the heat loss due to linear thermal bridging is relatively simple, given by the formula below:
\[ H_{TB} = y \sum A_{\text{exp}} \]
In the formula, $y = 0.08$ if Accredited Construction Details are used and $y = 0.15$ otherwise, and $\sum A_{\text{exp}}$ is the sum of all the exposed areas of the building envelope.
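A minimal sketch of this calculation in Python, using the y values quoted above; the element areas are purely illustrative:

```python
def heat_loss_thermal_bridging(exposed_areas_m2, accredited_details=False):
    """Return H_TB = y * sum(A_exp) in W/K (y in W/m^2K)."""
    y = 0.08 if accredited_details else 0.15
    return y * sum(exposed_areas_m2)

# Illustrative building envelope: walls, roof, ground floor, windows (m^2)
areas = [85.0, 48.0, 48.0, 14.5]
print(heat_loss_thermal_bridging(areas))                          # default construction
print(heat_loss_thermal_bridging(areas, accredited_details=True)) # accredited details
```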
References
Energy economics
Thermodynamic properties | Heat loss due to linear thermal bridging | [
"Physics",
"Chemistry",
"Mathematics",
"Environmental_science"
] | 98 | [
"Thermodynamics stubs",
"Thermodynamic properties",
"Physical quantities",
"Quantity",
"Energy economics",
"Environmental social science stubs",
"Thermodynamics",
"Environmental social science",
"Physical chemistry stubs"
] |
12,128,493 | https://en.wikipedia.org/wiki/Plant%20transformation%20vector | Plant transformation vectors are plasmids that have been specifically designed to facilitate the generation of transgenic plants. The most commonly used plant transformation vectors are T-DNA binary vectors and are often replicated in both E. coli, a common lab bacterium, and Agrobacterium tumefaciens, a plant-virulent bacterium used to insert the recombinant DNA into plants.
Plant transformation vectors contain three key elements:
Plasmids Selection (creating a custom circular strand of DNA)
Plasmids Replication (so that it can be easily worked with)
Transfer DNA (T-DNA) region (inserting the DNA into the agrobacteria)
Steps in plant transformation
A custom DNA plasmid sequence can be created and replicated in various ways, but generally, all methods share the following processes:
Plant transformation using plasmids begins with the propagation of the binary vector in E. coli. When the bacterial culture reaches the appropriate density, the binary vector is isolated and purified. Then, a foreign gene can be introduced. The engineered binary vector, including the foreign gene, is re-introduced in E. coli for amplification.
The engineered binary factor is isolated from E. coli and is introduced into Agrobacteria containing a modified (relatively small) Ti plasmid. This engineered Agrobacteria can be used to infect plant cells. The T-DNA, which contains the foreign gene, becomes integrated into the plant cell genome. In each infected cell, the T-DNA is integrated at a different site in the genome.
The entire plant will regenerate from a single transformed cell, resulting in an organism with the transformed DNA integrated identically across all cells.
Consequences of the insertion
Foreign DNA inserted
Insertional mutagenesis (but not lethal for the plant cell – as the organism is diploid)
Transformation DNA fed to rodents ends up in their phagocytes and rarely in other cells. Specifically, this refers to bacterial and M13 DNA. (This preferential accumulation in phagocytes is thought to be real and not a detection artefact since these DNA sequences are thought to provoke phagocytosis.) However, no gene expression is known to have resulted, and this is not thought to be possible.
Plasmid selection
A selector gene can be used to distinguish successfully genetically modified cells from unmodified ones. The selector gene is integrated into the plasmid along with the desired target gene, providing the cells with resistance to an antibiotic, such as kanamycin, ampicillin, spectinomycin or tetracycline. The desired cells, along with any other organisms growing within the culture, can be treated with the antibiotic, allowing only the modified cells to survive. The antibiotic resistance gene is not usually transferred to the plant cell but instead remains within the bacterial cell.
Plasmids replication
Plasmids replicate to produce many plasmid molecules in each host bacterial cell. The number of copies of each plasmid in a bacterial cell is determined by the replication origin, which is the position within the plasmid molecule where DNA replication is initiated. Most binary vectors have a higher number of plasmid copies when they replicate in E. coli; however, the plasmid copy-number is usually lower when the plasmid is resident within Agrobacterium tumefaciens.
Plasmids can also be replicated using the polymerase chain reaction (PCR).
T-DNA region
T-DNA contains two types of genes: the oncogenic genes, encoding for enzymes involved in the synthesis of auxins and cytokinins and responsible for tumor formation, and the genes encoding for the synthesis of opines. These compounds, produced by the condensation between amino acids and sugars, are synthesized and excreted by the crown gall cells, and they are consumed by A. tumefaciens as carbon and nitrogen sources.
The genes involved in opine catabolism, T-DNA transfer from the bacterium to the plant cell, and bacterium-bacterium plasmid conjugative transfer are located outside the T-DNA. The T-DNA fragment is flanked by 25-bp direct repeats, which act as a cis-element signal for the transfer apparatus. The process of T-DNA transfer is mediated by the cooperative action of proteins encoded by genes in the Ti plasmid virulence region (vir genes) and in the bacterial chromosome. The Ti plasmid also contains the genes for catabolism of the opines produced by the crown gall cells, and regions for conjugative transfer and for its own integrity and stability. The 30 kb virulence (vir) region is a regulon organized in six operons that are essential for the T-DNA transfer (virA, virB, virD, and virG) or that increase transfer efficiency (virC and virE). Several chromosomally determined genetic elements have been shown to play a functional role in the attachment of A. tumefaciens to the plant cell and in bacterial colonization. The loci chvA and chvB are involved in the synthesis and excretion of β-1,2 glucan; the chvE locus is required for the sugar enhancement of vir gene induction and for bacterial chemotaxis. The cel locus is responsible for the synthesis of cellulose fibrils. Another locus is involved in the synthesis of both cyclic glucan and acid succinoglycan. The att locus is involved in the cell surface proteins.
References
Technical Focus: a guide to Agrobacterium binary Ti vectors. Trends in Plant Science 5(10): 446–451, 2000
Transformation vector
Molecular biology
Mobile genetic elements
Molecular biology techniques
Gene delivery | Plant transformation vector | [
"Chemistry",
"Biology"
] | 1,201 | [
"Genetics techniques",
"Mobile genetic elements",
"Plants",
"Plant genetics",
"Molecular genetics",
"Molecular biology techniques",
"Molecular biology",
"Biochemistry",
"Gene delivery"
] |
17,650,136 | https://en.wikipedia.org/wiki/Military%20spectrum%20management | Every military force has a goal to ensure and have permanent access to radio frequencies to meet its vital military tasks. This is based on strategies, doctrines and different policies that military forces adhere to.
The highly mobile nature of military operations and their logistics support requires extensive use of high-speed voice, data and image communications. Control, surveillance, reconnaissance and reporting systems play a vital role in the command and control system. Many of these requirements can only be met with the use of radio systems. Military communications equipment adds to and multiplies the power of forces. That is why use of the radio frequency spectrum is regarded as one of the preliminary conditions for successful military operations.
Need for access
Despite the continuous reduction of forces, especially after the 1990s, it is seen that the military inquiries for access to radio spectrum have not decreased. This is because of the high mobility of the joint forces together with the quick reaction, increased number of missions, etc., which need more exact and timely information in all the defined regions and those unpredicted as well. Also, the equipment of military forces’ systems work in different bands and with several frequencies at the same time.
Since the electromagnetic spectrum is treated as part of the asset list and the operational electronic architecture that today's and future forces require, military forces make every effort to obtain all the necessary bands of the spectrum. In managing frequencies, however, military forces contend with a number of challenges.
The technology is running fast and has brought an extended variety of user services. The success of certain applications (mobile radio-telephony, equipment with low power, digital media, various military systems, etc.) naturally has caused an increase in the needs for frequencies from the civilian and military sectors. This has often brought civil administration to have tendencies to decrease the amount of frequencies in the interest of military forces.
Spectrum management
Spectrum management is complex and difficult. The terminology, legal and technical considerations, national, regional and international complex regulations and bilateral and multilateral agreements can confuse those less educated in the effort. The forces in operations often do not see the incompatibilities and interferences between systems in their own communication services and to the other systems. All of these dictate the need for specialized personnel to ensure relevant recommendations for the commanders and staffs in all the levels and to manage the spectrum. The effective continuous training of frequency administrators is an important factor in the improvement of frequency management.
Fulfillment
The priorities and capabilities of national security structures drive the push for immediate and maximal fulfillment of their requests for electromagnetic spectrum. However, the civil administrations that manage frequencies often do not understand, and do not harmonize, spectrum requests made in the interest of national security. Developments in the national security structures are not followed and are not taken fully into consideration by them. The military forces should therefore be actively engaged in defining a clear and single objective for spectrum needs in the national and international context, and in securing priority treatment in discussions on spectrum allocation. The fact that military forces are shrinking does not mean that their available spectrum should be reduced as well: the variety of operations (combat, non-combat and peace support) has increased, and frequencies are generally used according to activities, not the number of forces. Military equipment is designed to work across the entire traditional and harmonized military spectrum. Support with frequencies is also mandatory in order to fulfill all acquisition and procurement procedures.
Standards
An essential aspect in the management of frequencies is the orientation towards policies, agreements, and NATO procedures and standards. All of these should have the necessary reflection in those of military forces of one country, member or partner. This is a necessity and needs to achieve among others the interoperability between communication and information systems.
The frequency management in military forces has a dynamic nature. It is related to adjustment and implementation of time concepts for the spectrum, taking into consideration planning, allocation, and spectrum usage in accordance with systems characteristics currently available and those of the future. This implies the flexibility in the protection of frequencies that are approved in national plans of frequencies, available for military forces. Although, it looks for a time to time evaluation of the current and future needs for spectrum aiming at more exact redefinitions of spectrum resources and more effective ways of spectrum division with other non-governmental users.
Levels of command and control
The authorities at different levels of command and control have the responsibility to ensure full spectrum support for their structures. They continually need and seek support with equipment that operates on radio frequencies, but they often lack a correct understanding of, and the necessary knowledge about, access to the frequency spectrum required for military tasks. That is why specialized frequency management structures are responsible for carrying out all the necessary administrative, planning and technical activities for frequencies.
Prevention of interference
To ensure a better and interference-free usage by other users, military forces, through their corresponding structures, take care for the monitoring of the frequency bands defined for them, cooperating and exchanging data with other governmental institutions authorized for spectrum management and other non-governmental users, to identify and detect unauthorized transmissions and illegal interferences. Spectrum monitoring requires expensive equipment and qualified personnel.
Combined and joint operations
Combined and joint operations are still a major challenge for frequency managers. The cooperation of two or more forces together, with different training and organization and without appropriate frequency planning, brings failure of command, control, and communications. The realizations of combined and joint operations, in alliance or coalitions, are closely connected to communications and information systems. Frequency management is evaluated as one of the main points for communications planning. In a coalition force, where there are a huge number of countries and military forces, if there is not correct management and coordination of spectrum bands what is colloquially called “frequency fratricide” will happen. To allocate frequencies in such an operation is very difficult. The spectrum usage in these operations has more than ever showed the need for coordination between forces of different countries and with the country where they operate, rationality, standardization, and interoperability, in accordance with deployment sites, regions and national and international regulations.
Computer software applications
Effective frequency management is closely tied with computer software applications. Through these applications, the optimal administration and coordination of frequencies and fulfillment of inquiries is ensured in every situation. Such applications support centralized and decentralized management of frequencies. They provide the planning and coordination of frequencies throughout the defined bands and their effective usage. Of course, such applications have financial costs and require time for the preliminary preparation and the final implementation.
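The kind of coordination such applications automate can be illustrated with a deliberately simplified sketch: a greedy assignment that keeps mutually interfering (for example, co-located) radio nets on different channels from a shared pool. Real spectrum-management tools also handle bands, geography, equipment characteristics and national regulations; the net names, interference pairs and channel list below are invented for illustration only.

```python
# Toy channel assignment: nets that interfere with each other must not share a frequency.
interferes = {                          # illustrative co-location constraints
    "HQ-net": {"BDE1-net"},
    "BDE1-net": {"HQ-net", "BDE2-net"},
    "BDE2-net": {"BDE1-net"},
}
channels_mhz = [30.125, 30.500, 31.250, 32.000]  # pool allotted to the force (illustrative)

assignment = {}
for net, neighbours in interferes.items():
    used = {assignment[n] for n in neighbours if n in assignment}
    free = [c for c in channels_mhz if c not in used]
    if not free:
        raise RuntimeError(f"no interference-free channel left for {net}")
    assignment[net] = free[0]

print(assignment)   # e.g. {'HQ-net': 30.125, 'BDE1-net': 30.5, 'BDE2-net': 30.125}
```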
Policies, guides, procedures
The normal frequency management in military forces is based on policies, guides, procedures, organizational and technical manuals. The preparation, their harmonization with international, regional and national regulations and adherence to technological developments is a continuous task that requires time to be realized from proper military structures.
Technical capabilities
A fundamental problem in frequency allocation is the existence of technical, geographical and operational factors, which restrict frequency usage of military forces. Frequency managers have to take into consideration technical capabilities and the equipment limit for functioning of the systems in accordance with operational requirements. In case of overloaded frequency bands, the military forces are obliged to foresee some interference and to accept some damage in the normal operation of their communications systems.
References
Radio spectrum
Military communications | Military spectrum management | [
"Physics",
"Engineering"
] | 1,493 | [
"Telecommunications engineering",
"Radio spectrum",
"Spectrum (physical sciences)",
"Electromagnetic spectrum",
"Military communications"
] |
17,650,716 | https://en.wikipedia.org/wiki/Quasistatic%20loading | In solid mechanics, quasistatic loading refers to loading where inertial effects are negligible. In other words, time and inertial force are irrelevant.
References
Solid mechanics | Quasistatic loading | [
"Physics"
] | 38 | [
"Solid mechanics",
"Classical mechanics stubs",
"Mechanics",
"Classical mechanics"
] |
17,657,397 | https://en.wikipedia.org/wiki/MHV%20amplitudes | In theoretical particle physics, maximally helicity violating amplitudes (MHV) are amplitudes with massless external gauge bosons, where gauge bosons have a particular helicity and the other two have the opposite helicity. These amplitudes are called MHV amplitudes, because at tree level, they violate helicity conservation to the maximum extent possible. The tree amplitudes in which all gauge bosons have the same helicity or all but one have the same helicity vanish.
MHV amplitudes may be calculated very efficiently by means of the Parke–Taylor formula.
Although developed for pure gluon scattering, extensions exist for massive particles, scalars (the Higgs) and for fermions (quarks and their interactions in QCD).
Parke–Taylor amplitudes
Work done in the 1980s by Stephen Parke and Tomasz Taylor found that, when considering the scattering of many gluons, certain classes of amplitude vanish at tree level; in particular, those in which fewer than two gluons have negative helicity (and all the rest have positive helicity):
\[
\mathcal{A}(1^{+},2^{+},\ldots,n^{+}) = 0 , \qquad \mathcal{A}(1^{-},2^{+},\ldots,n^{+}) = 0 .
\]
The first non-vanishing case occurs when two gluons have negative helicity. Such amplitudes are known as "maximally helicity violating" and have an extremely simple form in terms of momentum bilinears, independent of the number of gluons present:
\[
\mathcal{A}(1^{+},\ldots,i^{-},\ldots,j^{-},\ldots,n^{+}) \;\propto\; \frac{\langle i\,j\rangle^{4}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle} ,
\]
where $\langle i\,j\rangle$ denotes the spinor product of the massless momenta of gluons $i$ and $j$.
The compactness of these amplitudes makes them extremely attractive, particularly for data taking at the LHC, for which it is necessary to remove the dominant background of standard model events. A rigorous derivation of the Parke–Taylor amplitudes was given by Berends and Giele.
CSW rules
The MHV were given a geometrical interpretation using Witten's twistor string theory which in turn inspired a technique of "sewing" MHV amplitudes together (with some off-shell continuation) to build arbitrarily complex tree diagrams. The rules for this formalism are called the CSW rules (after Freddy Cachazo, Peter Svrcek, Edward Witten).
The CSW rules can be generalised to the quantum level by forming loop diagrams out of MHV vertices.
There are missing pieces in this framework, most importantly the vertex, which is clearly non-MHV in form. In pure Yang–Mills theory this vertex vanishes on-shell, but it is necessary to construct the amplitude at one loop. This amplitude vanishes in any supersymmetric theory, but does not in the non-supersymmetric case.
The other drawback is the reliance on cut-constructibility to compute the loop integrals. This therefore cannot recover the rational parts of amplitudes (i.e. those not containing cuts).
The MHV Lagrangian
A Lagrangian whose perturbation theory gives rise to the CSW rules can be obtained by performing a canonical change of variables on the light-cone Yang–Mills (LCYM) Lagrangian.
The LCYM Lagrangrian has the following helicity structure:
The transformation involves absorbing the non-MHV three-point vertex into the kinetic term in a new field variable:
When this transformation is solved as a series expansion in the new field variable, it gives rise to an effective Lagrangian with an infinite series
of MHV terms:
The perturbation theory of this Lagrangian has been shown (up to the five-point vertex) to recover
the CSW rules. Moreover, the missing amplitudes which plague the CSW approach turn out to be recovered
within the MHV Lagrangian framework via evasions of the S-matrix equivalence theorem.
An alternative approach to the MHV Lagrangian recovers the missing pieces mentioned above by using Lorentz-violating counterterms.
BCFW recursion
BCFW recursion, also known as the Britto–Cachazo–Feng–Witten (BCFW) on-shell recursion method, is a way of calculating scattering amplitudes. Extensive use is now made of these techniques.
References
Scattering theory
Quantum chromodynamics | MHV amplitudes | [
"Chemistry"
] | 859 | [
"Scattering",
"Scattering theory"
] |
8,975,663 | https://en.wikipedia.org/wiki/Coding%20gain | In coding theory, telecommunications engineering and other related engineering problems, coding gain is the measure in the difference between the signal-to-noise ratio (SNR) levels between the uncoded system and coded system required to reach the same bit error rate (BER) levels when used with the error correcting code (ECC).
Example
If the uncoded BPSK system in AWGN environment has a bit error rate (BER) of 10−2 at the SNR level 4 dB, and the corresponding coded (e.g., BCH) system has the same BER at an SNR of 2.5 dB, then we say the coding gain = 4 dB − 2.5 dB = 1.5 dB, due to the code used (in this case BCH).
Power-limited regime
In the power-limited regime (where the nominal spectral efficiency $\rho \leq 2$ [b/2D or b/s/Hz], i.e. the domain of binary signaling), the effective coding gain $\gamma_{\mathrm{eff}}(A)$ of a signal set $A$ at a given target error probability per bit $P_b(E)$ is defined as the difference in dB between the $E_b/N_0$ required to achieve the target $P_b(E)$ with $A$ and the $E_b/N_0$ required to achieve the target $P_b(E)$ with 2-PAM or (2×2)-QAM (i.e. no coding). The nominal coding gain $\gamma_c(A)$ is defined as
\[ \gamma_c(A) = \frac{d^2_{\min}(A)}{4 E_b(A)} . \]
This definition is normalized so that $\gamma_c(A) = 1$ for 2-PAM or (2×2)-QAM. If the average number of nearest neighbors per transmitted bit $K_b(A)$ is equal to one, the effective coding gain $\gamma_{\mathrm{eff}}(A)$ is approximately equal to the nominal coding gain $\gamma_c(A)$. However, if $K_b(A) > 1$, the effective coding gain is less than the nominal coding gain by an amount which depends on the steepness of the $P_b(E)$ vs. $E_b/N_0$ curve at the target $P_b(E)$. This curve can be plotted using the union bound estimate (UBE)
\[ P_b(E) \approx K_b(A)\, Q\!\left(\sqrt{2\,\gamma_c(A)\, E_b/N_0}\right) , \]
where Q is the Gaussian probability-of-error function.
For the special case of a binary linear block code with parameters $(n, k, d)$, the nominal spectral efficiency is $\rho = 2k/n$ and the nominal coding gain is $kd/n$.
Example
The table below lists the nominal spectral efficiency, nominal coding gain and effective coding gain at for Reed–Muller codes of length :
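The nominal figures in such a table follow directly from the code parameters via ρ = 2k/n and γc = kd/n; the short script below computes them for a few Reed–Muller codes of length 32 (effective coding gains additionally require the error-coefficient correction discussed above, so they are not reproduced here):

```python
import math

def nominal_figures(n, k, d):
    """Nominal spectral efficiency (b/2D) and nominal coding gain (dB) of an (n, k, d) binary code."""
    rho = 2 * k / n
    gain_db = 10 * math.log10(k * d / n)
    return rho, gain_db

# (n, k, d) for the Reed-Muller codes of length 32: RM(4,5), RM(3,5), RM(2,5), RM(1,5)
for n, k, d in [(32, 31, 2), (32, 26, 4), (32, 16, 8), (32, 6, 16)]:
    rho, gain = nominal_figures(n, k, d)
    print(f"({n},{k},{d}): rho = {rho:.3f} b/2D, nominal coding gain = {gain:.2f} dB")
```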
Bandwidth-limited regime
In the bandwidth-limited regime (where the nominal spectral efficiency $\rho > 2$, i.e. the domain of non-binary signaling), the effective coding gain of a signal set at a given target error rate is defined as the difference in dB between the signal-to-noise ratio required to achieve the target with coding and the signal-to-noise ratio required to achieve the target with M-PAM or (M×M)-QAM (i.e. no coding). The nominal coding gain is defined as
This definition is normalized so that for M-PAM or (M×M)-QAM. The UBE becomes
where is the average number of nearest neighbors per two dimensions.
See also
Channel capacity
Eb/N0
References
MIT OpenCourseWare, 6.451 Principles of Digital Communication II, Lecture Notes sections 5.3, 5.5, 6.3, 6.4
Coding theory
Error detection and correction | Coding gain | [
"Mathematics",
"Engineering"
] | 601 | [
"Discrete mathematics",
"Coding theory",
"Reliability engineering",
"Error detection and correction"
] |
8,978,043 | https://en.wikipedia.org/wiki/Loop-O-Plane | The Loop-O-Plane is an amusement park ride that originated in America. It was invented by Lee Eyerly and manufactured by the Eyerly Aircraft Company of Salem, Oregon, in 1933. The ride was immediately popular with customers and became a staple of amusement parks.
The ride was imported into Europe, where it was first used in the UK in 1937.
The ride has two 16-foot-long arms, each with an enclosed car at one end and a counterweight at the other. Each car holds four riders seated in pairs facing opposite directions making the maximum occupancy eight riders. Propelled by an electric motor, the arms swing in directions opposite to each other until they 'loop' taking the riders upside down. The minimum rider height requirement is 46 inches tall.
An updated version of this ride exists known as the Roll-O-Plane. Some of the surviving machines were also converted into a variation named Rock-O-Plane.
Ride locations
A partial list containing both open and closed rides and their locations follows.
Green Machine (Eyerly Loop-O-Plane) - Hydro Free Fair - Hydro, Oklahoma
Loop-O-Plane - Keansburg Amusement Park, Keansburg, New Jersey
Loop-O-Plane - Idora Park - Youngstown, Ohio
Loop-O-Plane - Kennywood - West Mifflin, Pennsylvania
Loop-O-Plane - Lagoon - Farmington, Utah
Loop-O-Plane - Lakeside Amusement Park - Lakeside, Colorado
Bullet - Miracle Strip Amusement Park - Panama City Beach, Florida
References
External links
The Flat Joint Loop-O-Plane page with photos
Amusement rides | Loop-O-Plane | [
"Physics",
"Technology"
] | 328 | [
"Physical systems",
"Machines",
"Amusement rides"
] |
8,978,415 | https://en.wikipedia.org/wiki/Cryogenic%20treatment | A cryogenic treatment is the process of treating workpieces to cryogenic temperatures (typically around −300 °F / −184 °C, or as low as −320 °F / −196 °C) in order to remove residual stresses and improve wear resistance in steels and other metal alloys, such as aluminum. In addition to seeking enhanced stress relief and stabilization, or wear resistance, cryogenic treatment is also sought for its ability to improve corrosion resistance by precipitating micro-fine eta carbides, which can be measured before and after in a part using a quantimet.
The process has a wide range of applications from industrial tooling to the improvement of musical signal transmission. Some of the benefits of cryogenic treatment include longer part life, less failure due to cracking, improved thermal properties, better electrical properties including less electrical resistance, reduced coefficient of friction, less creep and walk, improved flatness, and easier machining.
Processes
Cryogenic tempering
Cryogenic tempering is a two-phase metal treatment comprising a descent and an ascent phase. It includes a cryogenic treatment step (known as "cryogenic processing") in which the material is slowly cooled to ultra-low temperatures (typically around −300 °F / −184 °C) and is then optionally reheated slowly (typically up to +325 °F / 162 °C). Materials do not "harden" during the temperature descent or ascent; rather, their molecular structures are drawn into tighter uniformity through a computer-controlled process that typically uses liquid nitrogen to lower the temperature slowly.
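The descent, soak and ascent phases can be illustrated with a toy ramp-and-soak profile generator; the rates, dwell time and step size below are illustrative assumptions, not any vendor's actual recipe:

```python
def ramp_soak_profile(start_f=70.0, dwell_f=-300.0, temper_f=325.0,
                      descent_rate=0.5, ascent_rate=0.5,
                      dwell_hours=20, step_min=30):
    """Generate (time_hours, setpoint_F) pairs for a dry cryogenic cycle:
    slow descent (well under 1 degF/min), a long cold soak, then a slow
    ascent up to the tempering temperature."""
    profile, t, temp = [], 0.0, start_f
    # Descent: the setpoint falls at `descent_rate` degF per minute.
    while temp > dwell_f:
        profile.append((t, temp))
        temp = max(dwell_f, temp - descent_rate * step_min)
        t += step_min / 60
    # Cold soak at the dwell temperature.
    profile.append((t, dwell_f))
    t += dwell_hours
    # Ascent toward the tempering temperature.
    while temp < temper_f:
        profile.append((t, temp))
        temp = min(temper_f, temp + ascent_rate * step_min)
        t += step_min / 60
    profile.append((t, temper_f))
    return profile

for hours, setpoint in ramp_soak_profile()[::8]:
    print(f"t = {hours:6.1f} h  setpoint = {setpoint:7.1f} degF")
```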
Invention History of Cryogenic Processing & Cryogenic Tempering
The cryogenic treatment process was invented by Ed Busch of CryoTech in Detroit, Michigan in 1966, inspired by NASA research. CryoTech later merged with 300 Below, Inc. in 2000 to become the world's largest and oldest commercial cryogenic processing company, after Peter Paulin of Decatur, IL collaborated with process control engineers to invent the world's first computer-controlled "dry" cryogenic processor in 1992 (for which he was featured on the Discovery Channel's Next Step TV show). The industry initially submerged metal parts in liquid nitrogen by dunking them or pouring liquid nitrogen over the parts, and the earliest results proved inconsistent. This led Mr. Paulin to develop 300 Below's "dry" computer-controlled cryogenic processing equipment to ensure consistent and accurate treatment results across every processing run: liquid nitrogen is introduced into a chamber above its boiling point, in a "dry" gaseous state, so that parts in the chamber are not thermally shocked by direct liquid contact at ultra-low temperatures. A "dry" cryogenic process does not submerge parts in liquid, but rather ensures that temperatures are lowered at less than one degree per minute using short bursts of cold gas introduced via a solenoid-metered pipe, controlled by computer equipment paired with highly accurate RTD (resistance temperature detector) sensors.
Science Behind Dry Cryogenic Processing & Cryogenic Tempering
Because all changes to metals take place on the quench, the first phase of the initial descent is called cryogenic processing, and by adding a second phase to heat the molecular structure of materials after an initial molecular re-alignment, both processes together are called cryogenic tempering. By using liquid nitrogen, the temperature can go as low as −196 °C, though the typical dwell temperature of cryogenic processing equipment is slightly above the boiling point of liquid nitrogen (closer to -300°F / -184°C) due to being injected into the processing chamber as a gaseous state and making every attempt not to introduce liquid into the chamber that could cause parts to become thermally shocked. Cryogenic processing (and especially cryogenic tempering) can have a profound effect on the mechanical properties of certain materials, such as steels or tungsten carbide, but the heating phase in cryogenic tempering is typically omitted for softer metals like brass in musical instruments, for piano strings, in certain aerospace applications, or for sensitive electronic components like vacuum tubes and transistors in high-end audio equipment. In tungsten carbide (WC-Co), the crystal structure of cobalt is transformed from softer FCC to harder HCP phase whereas the hard tungsten carbide particle is unaffected by the treatment.
Applications of cryogenic processing
Aerospace & Defense: communication, optical housings, satellites, weapons platforms, guidance systems, landing systems.
Automotive: brake rotors, transmissions, clutches, brake parts, rods, crank shafts, camshafts axles, bearings, ring and pinion, heads, valve trains, differentials, springs, nuts, bolts, washers.
Cutting tools: cutters, knives, blades, drill bits, end mills, turning or milling inserts. Cryogenic treatments of cutting tools can be classified as Deep Cryogenic Treatments (around -196 °C) or Shallow Cryogenic Treatments (around -80 °C).
Forming tools: roll form dies, progressive dies, stamping dies.
Mechanical industry: pumps, motors, nuts, bolts, washers.
Medical: tooling, scalpels.
Motorsports and Fleet Vehicles: See Automotive for brake rotors and other automotive components.
Musical: Vacuum tubes, Audio cables, brass instruments, guitar strings and fret wire, piano wire, amplifiers, magnetic pickups, cables, connectors.
Sports: Firearms, knives, fishing equipment, auto racing, tennis rackets, golf clubs, mountain climbing gear, archery, skiing, aircraft parts, high pressure lines, bicycles, motor cycles.
Cryogenic machining
Cryogenic machining is a machining process where the traditional flood lubro-cooling liquid (an emulsion of oil in water) is replaced by a jet of either liquid nitrogen (LN2) or pre-compressed carbon dioxide (CO2). Cryogenic machining is useful in rough machining operations in order to increase tool life. It can also be useful for preserving the integrity and quality of the machined surfaces in finish machining operations. Cryogenic machining tests have been performed by researchers for several decades, but actual commercial applications are still limited to very few companies. Both cryogenic turning and cryogenic milling are possible. Cryogenic machining is a relatively new technique. The concept has been applied to various machining processes such as turning, milling and drilling. The cryogenic turning technique is generally applied to three major groups of workpiece materials—superalloys, ferrous metals, and viscoelastic polymers/elastomers. The role of the cryogen in machining differs for each of these materials.
Cryogenic deflashing
Cryogenic deburring
Cryogenic rolling
Cryogenic rolling, or cryorolling, is one of the potential techniques to produce nanostructured bulk materials from their coarse-grained bulk counterparts at cryogenic temperatures. It can be defined as rolling that is carried out at cryogenic temperatures. Nanostructured materials are produced chiefly by severe plastic deformation processes. The majority of these methods require large plastic deformations (strains much larger than unity). In the case of cryorolling, the strain hardening of the deformed metal is preserved as a result of the suppression of dynamic recovery. Hence large strains can be maintained, and after subsequent annealing an ultra-fine-grained structure can be produced.
Advantages
Comparison of cryorolling and rolling at room temperature:
In cryorolling, the strain hardening is retained up to the extent to which rolling is carried out. This implies that there will be no dislocation annihilation and dynamic recovery, whereas in rolling at room temperature dynamic recovery is inevitable and softening takes place.
The flow stress of the material differs for the sample which is subjected to cryorolling. A cryorolled sample has a higher flow stress compared to a sample subjected to rolling at room temperature.
Cross slip and climb of dislocations are effectively suppressed during cryorolling leading to high dislocation density which is not the case for room temperature rolling.
The corrosion resistance of the cryorolled sample comparatively decreases due to the high residual stress involved.
The number of electron scattering centres increases for the cryorolled sample and hence the electrical conductivity decreases significantly.
The cryorolled sample shows a high dissolution rate.
Ultra-fine-grained structures can be produced from cryorolled samples after subsequent annealing.
Cryogenic treatment in specific materials
Stainless steel
Torsional and tensile deformation of stainless steel at cryogenic temperature is found to significantly enhance its mechanical strength while inducing a gradual phase transformation inside the steel. This strength improvement is the result of the following phenomena:
The deformation induces a phase transformation into the martensitic phase, which is the stronger body-centered cubic phase. The torsional and tensile deformation induces a higher volume ratio of martensitic phase near the edge, which prevents initial mechanical failure from the surface.
The torsional deformation creates a gradient of the phase transformation along the radial direction, protecting against large hydrostatic tension.
The high deformation triggers dislocation plasticity in the martensitic phase, enhancing overall ductility and tensile strength.
Copper
Zhang et al. applied cryorolling to dynamically plastic-deformed copper at liquid nitrogen temperature (LNT-DPD) to greatly enhance tensile strength while retaining high ductility. The key to this combined approach (cryogenic hardening plus cryogenic rolling) is to engineer nano-sized twin boundaries embedded in the copper.
Plastic deformation of coarse-grained bulk metal decreases the grain size and enhances grain-boundary strengthening. However, as the grains get smaller, the interaction between the grains and the dislocations inside them impedes further grain refinement. Among grain boundaries, twin boundaries, a special type of low-energy grain boundary, are known to have a lower interaction energy with dislocations, leading to a much smaller saturation grain size.
Cryogenic dynamic plastic deformation creates a higher fraction of twin boundaries than severe plastic deformation does. The subsequent cryorolling further reduces the grain-boundary energy while relaxing the twin boundaries, leading to a stronger Hall–Petch strengthening effect. In addition, it increases the ability of the grain boundaries to accommodate more dislocations, which is why cryorolling also improves ductility.
Titanium
Cryogenic hardening of titanium is harder to manipulate than that of face-centered cubic (fcc) metals because hexagonal close-packed (hcp) metals have less symmetry and fewer slip systems to exploit. Recently, Zhao et al. introduced an efficient method to produce nanotwinned titanium, which has higher strength, ductility and thermal stability. By cryoforging repeatedly along the three principal axes in liquid nitrogen, followed by an annealing process, pure titanium can acquire a hierarchical twin-boundary network structure which suppresses the motion of dislocations and significantly enhances its mechanical properties. Microstructure analysis found that the repeated twinning and de-twinning keeps increasing the fraction of nano-sized twin boundaries and refining the grains, rendering a much stronger Hall–Petch strengthening effect even after the saturation of microscale twin boundaries at high flow stress. In particular, the strength and ductility of nanotwinned titanium at 77 K reach about 2 GPa and ~100%, which far outweighs those of conventional cryogenic steels even without any alloying additions.
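The Hall–Petch strengthening invoked in this section relates strength to grain (or twin-spacing) size roughly as σ_y = σ_0 + k·d^(−1/2); a minimal sketch with illustrative constants, not values taken from the cited studies:

```python
import math

def hall_petch(d_um, sigma0_mpa=150.0, k_mpa_um=10.0):
    """Hall-Petch relation: yield strength rises as grain size d shrinks.
    sigma_y = sigma_0 + k / sqrt(d). The constants here are illustrative."""
    return sigma0_mpa + k_mpa_um / math.sqrt(d_um)

# Refining the grain or twin spacing from 10 um down to 10 nm (0.01 um).
for d in (10.0, 1.0, 0.1, 0.01):
    print(f"d = {d:6.3f} um -> sigma_y ~ {hall_petch(d):7.1f} MPa")
```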
References
External links
Cryogenics Society of America
CSA Cryogenic Treatment Database of Research Articles
300 Below - Founder of Commercial Cryogenic Industry (Since 1966)
Understanding how Deep Cryogenics works, and what applications are most effective
Cryogenics
Metal forming
Metal heat treatments | Cryogenic treatment | [
"Physics",
"Chemistry"
] | 2,337 | [
"Metallurgical processes",
"Metal heat treatments",
"Applied and interdisciplinary physics",
"Cryogenics"
] |
8,978,774 | https://en.wikipedia.org/wiki/Quantum%20nonlocality | In theoretical physics, quantum nonlocality refers to the phenomenon by which the measurement statistics of a multipartite quantum system do not allow an interpretation with local realism. Quantum nonlocality has been experimentally verified under a variety of physical assumptions.
Quantum nonlocality does not allow for faster-than-light communication, and hence is compatible with special relativity and its universal speed limit of objects. Thus, quantum theory is local in the strict sense defined by special relativity and, as such, the term "quantum nonlocality" is sometimes considered a misnomer. Still, it prompts many of the foundational discussions concerning quantum theory.
History
Einstein, Podolsky and Rosen
In the 1935 EPR paper, Albert Einstein, Boris Podolsky and Nathan Rosen described "two spatially separated particles which have both perfectly correlated positions and momenta" as a direct consequence of quantum theory. They intended to use the classical principle of locality to challenge the idea that the quantum wavefunction was a complete description of reality, but instead they sparked a debate on the nature of reality.
Afterwards, Einstein presented a variant of these ideas in a letter to Erwin Schrödinger, which is the version that is presented here. The state and notation used here are more modern, and akin to David Bohm's take on EPR. The quantum state of the two particles prior to measurement can be written as the spin singlet
|ψ_AB⟩ = (1/√2)(|↑⟩_A ⊗ |↓⟩_B − |↓⟩_A ⊗ |↑⟩_B),
where |↑⟩ and |↓⟩ denote states of spin up and down along the z-direction.
Here, subscripts “A” and “B” distinguish the two particles, though it is more convenient and usual to refer to these particles as being in the possession of two experimentalists called Alice and Bob. The rules of quantum theory give predictions for the outcomes of measurements performed by the experimentalists. Alice, for example, will measure her particle to be spin-up in an average of fifty percent of measurements. However, according to the Copenhagen interpretation, Alice's measurement causes the state of the two particles to collapse, so that if Alice performs a measurement of spin in the z-direction, that is with respect to the basis {|↑⟩_A, |↓⟩_A}, then Bob's system will be left in one of the states {|↑⟩_B, |↓⟩_B}. Likewise, if Alice performs a measurement of spin in the x-direction, that is, with respect to the basis {|←⟩_A, |→⟩_A}, where |←⟩ and |→⟩ denote states of spin along the x-direction, then Bob's system will be left in one of the states {|←⟩_B, |→⟩_B}. Schrödinger referred to this phenomenon as "steering". This steering occurs in such a way that no signal can be sent by performing such a state update; quantum nonlocality cannot be used to send messages instantaneously and is therefore not in direct conflict with causality concerns in special relativity.
In the Copenhagen view of this experiment, Alice's measurement—and particularly her measurement choice—has a direct effect on Bob's state. However, under the assumption of locality, actions on Alice's system do not affect the "true", or "ontic" state of Bob's system. We see that the ontic state of Bob's system must be compatible with one of the quantum states |↑⟩_B or |↓⟩_B, since Alice can make a measurement that concludes with one of those states being the quantum description of his system. At the same time, it must also be compatible with one of the quantum states |←⟩_B or |→⟩_B for the same reason. Therefore, the ontic state of Bob's system must be compatible with at least two quantum states; the quantum state is therefore not a complete descriptor of his system. Einstein, Podolsky and Rosen saw this as evidence of the incompleteness of the Copenhagen interpretation of quantum theory, since the wavefunction is explicitly not a complete description of a quantum system under this assumption of locality. Their paper concludes:
Although various authors (most notably Niels Bohr) criticised the ambiguous terminology of the EPR paper, the thought experiment nevertheless generated a great deal of interest. Their notion of a "complete description" was later formalised by the suggestion of hidden variables that determine the statistics of measurement results, but to which an observer does not have access. Bohmian mechanics provides such a completion of quantum mechanics, with the introduction of hidden variables; however the theory is explicitly nonlocal. The interpretation therefore does not give an answer to Einstein's question, which was whether or not a complete description of quantum mechanics could be given in terms of local hidden variables in keeping with the "Principle of Local Action".
Bell inequality
In 1964 John Bell answered Einstein's question by showing that such local hidden variables can never reproduce the full range of statistical outcomes predicted by quantum theory. Bell showed that a local hidden variable hypothesis leads to restrictions on the strength of correlations of measurement results. If the Bell inequalities are violated experimentally as predicted by quantum mechanics, then reality cannot be described by local hidden variables and the mystery of quantum nonlocal causation remains. However, Bell notes that the non-local hidden variable model of Bohm are different:
Clauser, Horne, Shimony and Holt (CHSH) reformulated these inequalities in a manner that was more conducive to experimental testing (see CHSH inequality).
In the scenario proposed by Bell (a Bell scenario), two experimentalists, Alice and Bob, conduct experiments in separate labs. At each run, Alice (Bob) conducts an experiment x (y) in her (his) lab, obtaining outcome a (b). If Alice and Bob repeat their experiments several times, then they can estimate the probabilities P(a, b|x, y), namely, the probability that Alice and Bob respectively observe the results a, b when they respectively conduct the experiments x, y. In the following, each such set of probabilities will be denoted by just P(a, b|x, y). In the quantum nonlocality slang, P(a, b|x, y) is termed a box.
Bell formalized the idea of a hidden variable by introducing the parameter λ to locally characterize measurement results on each system: "It is a matter of indifference ... whether λ denotes a single variable or a set ... and whether the variables are discrete or continuous". However, it is equivalent (and more intuitive) to think of λ as a local "strategy" or "message" that occurs with some probability when Alice and Bob reboot their experimental setup. Bell's assumption of local causality then stipulates that each local strategy defines the distributions of independent outcomes if Alice conducts experiment x and Bob conducts experiment y:
P(a, b|x, y, λ_A, λ_B) = P_A(a|x, λ_A) P_B(b|y, λ_B).
Here P_A(a|x, λ_A) (P_B(b|y, λ_B)) denotes the probability that Alice (Bob) obtains the result a (b) when she (he) conducts experiment x (y) and the local variable describing her (his) experiment has value λ_A (λ_B).
Suppose that λ_A and λ_B can take values from some set Λ. If each pair of values (λ_A, λ_B) has an associated probability ρ(λ_A, λ_B) of being selected (shared randomness is allowed, i.e., λ_A and λ_B can be correlated), then one can average over this distribution to obtain a formula for the joint probability of each measurement result:
P(a, b|x, y) = ∑_{λ_A, λ_B} ρ(λ_A, λ_B) P_A(a|x, λ_A) P_B(b|y, λ_B).
A box P(a, b|x, y) admitting such a decomposition is called a Bell local or a classical box. Fixing the number of possible values which a, b, x, y can each take, one can represent each box as a finite vector with entries P(a, b|x, y). In that representation, the set of all classical boxes forms a convex polytope.
In the Bell scenario studied by CHSH, where a, b, x and y can take values within {0, 1}, any Bell local box must satisfy the CHSH inequality:
E(0,0) + E(0,1) + E(1,0) − E(1,1) ≤ 2,
where E(x, y) = ∑_{a,b} (−1)^{a+b} P(a, b|x, y).
The above considerations apply to model a quantum experiment. Consider two parties conducting local polarization measurements on a bipartite photonic state. The measurement result for the polarization of a photon can take one of two values (informally, whether the photon is polarized in that direction, or in the orthogonal direction). If each party is allowed to choose between just two different polarization directions, the experiment fits within the CHSH scenario. As noted by CHSH, there exist a quantum state and polarization directions which generate a box with CHSH value equal to 2√2. This demonstrates an explicit way in which a theory with ontological states that are local, with local measurements and only local actions cannot match the probabilistic predictions of quantum theory, disproving Einstein's hypothesis. Experimentalists such as Alain Aspect have verified the quantum violation of the CHSH inequality as well as other formulations of Bell's inequality, to invalidate the local hidden variables hypothesis and confirm that reality is indeed nonlocal in the EPR sense.
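A small numerical check of this violation, assuming a maximally entangled two-qubit state and one standard choice of measurement angles; the script evaluates the four correlators and the CHSH combination, which reaches the Tsirelson value 2√2 ≈ 2.83 > 2:

```python
import numpy as np

# Maximally entangled state |Phi+> = (|00> + |11>) / sqrt(2)  (illustrative choice)
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def obs(theta):
    """A ±1-valued observable cos(theta)*Z + sin(theta)*X."""
    return np.array([[np.cos(theta),  np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def E(theta_a, theta_b):
    """Correlator E(x,y) = <psi| A(theta_a) tensor B(theta_b) |psi>."""
    return psi @ np.kron(obs(theta_a), obs(theta_b)) @ psi

a0, a1 = 0.0, np.pi / 2          # Alice's two settings
b0, b1 = np.pi / 4, -np.pi / 4   # Bob's two settings

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(f"S = {S:.4f}   (local bound 2, Tsirelson bound {2 * np.sqrt(2):.4f})")
```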
Possibilistic nonlocality
Bell's demonstration is probabilistic in the sense that it shows that the precise probabilities predicted by quantum mechanics for some entangled scenarios cannot be met by a local hidden variable theory. (For short, here and henceforth "local theory" means "local hidden variables theory".) However, quantum mechanics permits an even stronger violation of local theories: a possibilistic one, in which local theories cannot even agree with quantum mechanics on which events are possible or impossible in an entangled scenario. The first proof of this kind was due to Daniel Greenberger, Michael Horne, and Anton Zeilinger in 1993. The state involved is often called the GHZ state.
In 1993, Lucien Hardy demonstrated a logical proof of quantum nonlocality that, like the GHZ proof is a possibilistic proof. It starts with the observation that the state defined below can be written in a few suggestive ways:
where, as above, .
The experiment consists of this entangled state being shared between two experimenters, each of whom has the ability to measure either with respect to the basis or . We see that if they each measure with respect to , then they never see the outcome . If one measures with respect to and the other , they never see the outcomes However, sometimes they see the outcome when measuring with respect to , since
This leads to the paradox: having the outcome we conclude that if one of the experimenters had measured with respect to the basis instead, the outcome must have been or , since and are impossible. But then, if they had both measured with respect to the basis, by locality the result must have been , which is also impossible.
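A numerical illustration of a Hardy-type possibilistic argument, using one common choice of state, (|00⟩ + |01⟩ + |10⟩)/√3, and measurements in the computational and Hadamard bases; the specific state and outcome labels here are assumptions made for this sketch rather than Hardy's original notation:

```python
import numpy as np

# A Hardy-type state (one common choice): (|00> + |01> + |10>) / sqrt(3)
psi = np.array([1.0, 1.0, 1.0, 0.0]) / np.sqrt(3)

z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])                          # Z basis
plus, minus = np.array([1, 1]) / np.sqrt(2), np.array([1, -1]) / np.sqrt(2)  # X basis

def prob(state_a, state_b):
    """Probability of projecting onto the product state |a>|b>."""
    return abs(np.kron(state_a, state_b) @ psi) ** 2

print("P(1,1 | Z,Z) =", round(prob(z1, z1), 4))        # 0: both '1' in Z never occurs
print("P(-,0 | X,Z) =", round(prob(minus, z0), 4))     # 0: Alice '-' forces Bob '1'
print("P(0,- | Z,X) =", round(prob(z0, minus), 4))     # 0: Bob '-' forces Alice '1'
print("P(-,- | X,X) =", round(prob(minus, minus), 4))  # 1/12: yet this does occur
```

Under local realism, whenever the (−, −) outcome occurs in the (X, X) measurements, the two zero-probability conditions would force the (Z, Z) outcomes to be (1, 1), which never occurs, giving the contradiction.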
Nonlocal hidden variable models with a finite propagation speed
The work of Bancal et al. generalizes Bell's result by proving that correlations achievable in quantum theory are also incompatible with a large class of superluminal hidden variable models. In this framework, faster-than-light signaling is precluded. However, the choice of settings of one party can influence hidden variables at another party's distant location, if there is enough time for a superluminal influence (of finite, but otherwise unknown speed) to propagate from one point to the other. In this scenario, any bipartite experiment revealing Bell nonlocality can just provide lower bounds on the hidden influence's propagation speed. Quantum experiments with three or more parties can, nonetheless, disprove all such non-local hidden variable models.
Analogs of Bell’s theorem in more complicated causal structures
The random variables measured in a general experiment can depend on each other in complicated ways. In the field of causal inference, such dependencies are represented via Bayesian networks: directed acyclic graphs where each node represents a variable and an edge from a variable to another signifies that the former influences the latter and not otherwise, see the figure.
In a standard bipartite Bell experiment, Alice's (Bob's) setting (), together with her (his) local variable (), influence her (his) local outcome (). Bell's theorem can thus be interpreted as a separation between the quantum and classical predictions in a type of causal structures with just one hidden node . Similar separations have been established in other types of causal structures. The characterization of the boundaries for classical correlations in such extended Bell scenarios is challenging, but there exist complete practical computational methods to achieve it.
Entanglement and nonlocality
Quantum nonlocality is sometimes understood as being equivalent to entanglement. However, this is not the case. Quantum entanglement can be defined only within the formalism of quantum mechanics, i.e., it is a model-dependent property. In contrast, nonlocality refers to the impossibility of a description of observed statistics in terms of a local hidden variable model, so it is independent of the physical model used to describe the experiment.
It is true that for any pure entangled state there exists a choice of measurements that produce Bell nonlocal correlations, but the situation is more complex for mixed states. While any Bell nonlocal state must be entangled, there exist (mixed) entangled states which do not produce Bell nonlocal correlations (although, operating on several copies of some of such states, or carrying out local post-selections, it is possible to witness nonlocal effects). Moreover, while there are catalysts for entanglement, there are none for nonlocality. Finally, reasonably simple examples of Bell inequalities have been found for which the quantum state giving the largest violation is never a maximally entangled state, showing that entanglement is, in some sense, not even proportional to nonlocality.
Quantum correlations
As shown, the statistics achievable by two or more parties conducting experiments in a classical system are constrained in a non-trivial way. Analogously, the statistics achievable by separate observers in a quantum theory also happen to be restricted. The first derivation of a non-trivial statistical limit on the set of quantum correlations, due to B. Tsirelson, is known as Tsirelson's bound.
Consider the CHSH Bell scenario detailed before, but this time assume that, in their experiments, Alice and Bob are preparing and measuring quantum systems. In that case, the CHSH parameter can be shown to be bounded by 2√2.
The sets of quantum correlations and Tsirelson’s problem
Mathematically, a box admits a quantum realization if and only if there exists a pair of Hilbert spaces , a normalized vector and projection operators such that
For all , the sets represent complete measurements. Namely, .
, for all .
In the following, the set of such boxes will be called . Contrary to the classical set of correlations, when viewed in probability space, is not a polytope. On the contrary, it contains both straight and curved boundaries. In addition, is not closed: this means that there exist boxes which can be arbitrarily well approximated by quantum systems but are themselves not quantum.
In the above definition, the space-like separation of the two parties conducting the Bell experiment was modeled by imposing that their associated operator algebras act on different factors of the overall Hilbert space describing the experiment. Alternatively, one could model space-like separation by imposing that these two algebras commute. This leads to a different definition:
admits a field quantum realization if and only if there exists a Hilbert space , a normalized vector and projection operators such that
For all , the sets represent complete measurements. Namely, .
, for all .
, for all .
Call the set of all such correlations .
How does this new set relate to the more conventional defined above? It can be proven that is closed. Moreover, , where denotes the closure of . Tsirelson's problem consists in deciding whether the inclusion relation is strict, i.e., whether or not . This problem only appears in infinite dimensions: when the Hilbert space in the definition of is constrained to be finite-dimensional, the closure of the corresponding set equals .
In January 2020, Ji, Natarajan, Vidick, Wright, and Yuen claimed a result in quantum complexity theory that would imply that , thus solving Tsirelson's problem.
Tsirelson's problem can be shown equivalent to Connes embedding problem, a famous conjecture in the theory of operator algebras.
Characterization of quantum correlations
Since the dimensions of and are, in principle, unbounded, determining whether a given box admits a quantum realization is a complicated problem. In fact, the dual problem of establishing whether a quantum box can have a perfect score at a non-local game is known to be undecidable. Moreover, the problem of deciding whether can be approximated by a quantum system with precision is NP-hard. Characterizing quantum boxes is equivalent to characterizing the cone of completely positive semidefinite matrices under a set of linear constraints.
For small fixed dimensions , one can explore, using variational methods, whether can be realized in a bipartite quantum system , with , . That method, however, can just be used to prove the realizability of , and not its unrealizability with quantum systems.
To prove unrealizability, the most known method is the Navascués–Pironio–Acín (NPA) hierarchy. This is an infinite decreasing sequence of sets of correlations with the properties:
If , then for all .
If , then there exists such that .
For any , deciding whether can be cast as a semidefinite program.
The NPA hierarchy thus provides a computational characterization, not of , but of . If , (as claimed by Ji, Natarajan, Vidick, Wright, and Yuen) then a new method to detect the non-realizability of the correlations in is needed.
If Tsirelson's problem was solved in the affirmative, namely, , then the above two methods would provide a practical characterization of .
The physics of supra-quantum correlations
The works listed above describe what the quantum set of correlations looks like, but they do not explain why. Are quantum correlations unavoidable, even in post-quantum physical theories, or on the contrary, could there exist correlations outside which nonetheless do not lead to any unphysical operational behavior?
In their seminal 1994 paper, Popescu and Rohrlich explore whether quantum correlations can be explained by appealing to relativistic causality alone. Namely, whether any hypothetical box would allow building a device capable of transmitting information faster than the speed of light. At the level of correlations between two parties, Einstein's causality translates in the requirement that Alice's measurement choice should not affect Bob's statistics, and vice versa. Otherwise, Alice (Bob) could signal Bob (Alice) instantaneously by choosing her (his) measurement setting appropriately. Mathematically, Popescu and Rohrlich's no-signalling conditions are:
Like the set of classical boxes, when represented in probability space, the set of no-signalling boxes forms a polytope. Popescu and Rohrlich identified a box that, while complying with the no-signalling conditions, violates Tsirelson's bound, and is thus unrealizable in quantum physics. Dubbed the PR-box, it can be written as:
P(a, b|x, y) = 1/2 if a ⊕ b = x·y, and 0 otherwise.
Here a, b, x, y take values in {0, 1}, and ⊕ denotes the sum modulo two. It can be verified that the CHSH value of this box is 4 (as opposed to the Tsirelson bound of 2√2). This box had been identified earlier, by Rastall and Khalfin and Tsirelson.
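The CHSH value of 4 can be checked mechanically from the definition of the PR box; a minimal sketch:

```python
from itertools import product

def pr_box(a, b, x, y):
    """Popescu-Rohrlich box: uniform over the outcomes with a XOR b = x AND y."""
    return 0.5 if (a ^ b) == (x & y) else 0.0

def correlator(box, x, y):
    """E(x,y) = sum over a,b of (-1)^(a+b) * P(a,b|x,y) for binary outcomes."""
    return sum((-1) ** (a + b) * box(a, b, x, y)
               for a, b in product((0, 1), repeat=2))

S = (correlator(pr_box, 0, 0) + correlator(pr_box, 0, 1)
     + correlator(pr_box, 1, 0) - correlator(pr_box, 1, 1))
print("CHSH value of the PR box:", S)   # 4, beyond the Tsirelson bound of 2*sqrt(2)
```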
In view of this mismatch, Popescu and Rohrlich pose the problem of identifying a physical principle, stronger than the no-signalling conditions, that allows deriving the set of quantum correlations. Several proposals followed:
Non-trivial communication complexity (NTCC). This principle stipulates that nonlocal correlations should not be so strong as to allow two parties to solve all 1-way communication problems with some probability using just one bit of communication. It can be proven that any box violating Tsirelson's bound by more than is incompatible with NTCC.
No Advantage for Nonlocal Computation (NANLC). The following scenario is considered: given a function , two parties are distributed the strings of bits and asked to output the bits so that is a good guess for . The principle of NANLC states that non-local boxes should not give the two parties any advantage to play this game. It is proven that any box violating Tsirelson's bound would provide such an advantage.
Information Causality (IC). The starting point is a bipartite communication scenario where one of the parts (Alice) is handed a random string of bits. The second part, Bob, receives a random number . Their goal is to transmit Bob the bit , for which purpose Alice is allowed to transmit Bob bits. The principle of IC states that the sum over of the mutual information between Alice's bit and Bob's guess cannot exceed the number of bits transmitted by Alice. It is shown that any box violating Tsirelson's bound would allow two parties to violate IC.
Macroscopic Locality (ML). In the considered setup, two separate parties conduct extensive low-resolution measurements over a large number of independently prepared pairs of correlated particles. ML states that any such “macroscopic” experiment must admit a local hidden variable model. It is proven that any microscopic experiment capable of violating Tsirelson's bound would also violate standard Bell nonlocality when brought to the macroscopic scale. Besides Tsirelson's bound, the principle of ML fully recovers the set of all two-point quantum correlators.
Local Orthogonality (LO). This principle applies to multipartite Bell scenarios, where parties respectively conduct experiments in their local labs. They respectively obtain the outcomes . The pair of vectors is called an event. Two events , are said to be locally orthogonal if there exists such that and . The principle of LO states that, for any multipartite box, the sum of the probabilities of any set of pair-wise locally orthogonal events cannot exceed 1. It is proven that any bipartite box violating Tsirelson's bound by an amount of violates LO.
All these principles can be experimentally falsified under the assumption that we can decide if two or more events are space-like separated. This sets this research program aside from the axiomatic reconstruction of quantum mechanics via Generalized Probabilistic Theories.
The works above rely on the implicit assumption that any physical set of correlations must be closed under wirings. This means that any effective box built by combining the inputs and outputs of a number of boxes within the considered set must also belong to the set. Closure under wirings does not seem to enforce any limit on the maximum value of CHSH. However, it is not a void principle: on the contrary, in it is shown that many simple, intuitive families of sets of correlations in probability space happen to violate it.
Originally, it was unknown whether any of these principles (or a subset thereof) was strong enough to derive all the constraints defining . This state of affairs continued for some years until the construction of the almost quantum set . is a set of correlations that is closed under wirings and can be characterized via semidefinite programming. It contains all correlations in , but also some non-quantum boxes . Remarkably, all boxes within the almost quantum set are shown to be compatible with the principles of NTCC, NANLC, ML and LO. There is also numerical evidence that almost-quantum boxes also comply with IC. It seems, therefore, that, even when the above principles are taken together, they do not suffice to single out the quantum set in the simplest Bell scenario of two parties, two inputs and two outputs.
Device independent protocols
Nonlocality can be exploited to conduct quantum information tasks which do not rely on the knowledge of the inner workings of the prepare-and-measurement apparatuses involved in the experiment. The security or reliability of any such protocol just depends on the strength of the experimentally measured correlations . These protocols are termed device-independent.
Device-independent quantum key distribution
The first device-independent protocol proposed was device-independent quantum key distribution (QKD). In this primitive, two distant parties, Alice and Bob, are distributed an entangled quantum state, that they probe, thus obtaining the statistics . Based on how non-local the box happens to be, Alice and Bob estimate how much knowledge an external quantum adversary Eve (the eavesdropper) could possess on the value of Alice and Bob's outputs. This estimation allows them to devise a reconciliation protocol at the end of which Alice and Bob share a perfectly correlated one-time pad of which Eve has no information whatsoever. The one-time pad can then be used to transmit a secret message through a public channel. Although the first security analyses on device-independent QKD relied on Eve carrying out a specific family of attacks, all such protocols have been recently proven unconditionally secure.
Device-independent randomness certification, expansion and amplification
Nonlocality can be used to certify that the outcomes of one of the parties in a Bell experiment are partially unknown to an external adversary. By feeding a partially random seed to several non-local boxes and processing the outputs, one can end up with a longer (potentially unbounded) string of comparable randomness or with a shorter but more random string. This last primitive can be proven impossible in a classical setting.
Device-independent (DI) randomness certification, expansion, and amplification are techniques used to generate high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography, where high-quality random numbers are essential for ensuring the security of cryptographic protocols.
Randomness certification is the process of verifying that the output of a random number generator is truly random and has not been tampered with by an adversary. DI randomness certification does this verification without making assumptions about the underlying devices that generate random numbers. Instead, randomness is certified by observing correlations between the outputs of different devices that are generated using the same physical process. Recent research has demonstrated the feasibility of DI randomness certification using entangled quantum systems, such as photons or electrons. Randomness expansion is taking a small amount of initial random seed and expanding it into a much larger sequence of random numbers. In DI randomness expansion, the expansion is done using measurements of quantum systems that are prepared in a highly entangled state. The security of the expansion is guaranteed by the laws of quantum mechanics, which make it impossible for an adversary to predict the expansion output. Recent research has shown that DI randomness expansion can be achieved using entangled photon pairs and measurement devices that violate a Bell inequality.
Randomness amplification is the process of taking a small amount of initial random seed and increasing its randomness by using a cryptographic algorithm. In DI randomness amplification, this process is done using entanglement properties and quantum mechanics. The security of the amplification is guaranteed by the fact that any attempt by an adversary to manipulate the algorithm's output will inevitably introduce errors that can be detected and corrected. Recent research has demonstrated the feasibility of DI randomness amplification using quantum entanglement and the violation of a Bell inequality.
DI randomness certification, expansion, and amplification are powerful techniques for generating high-quality random numbers that are secure against any potential attacks on the underlying devices used to generate random numbers. These techniques have critical applications in cryptography and are likely to become increasingly crucial as quantum computing technology advances. In addition, a milder approach called semi-DI exists where random numbers can be generated with some assumptions on the working principle of the devices, environment, dimension, energy, etc., in which it benefits from ease-of-implementation and high generation rate.
Self-testing
Sometimes, the box shared by Alice and Bob is such that it only admits a unique quantum realization. This means that there exist measurement operators and a quantum state giving rise to such that any other physical realization of is connected to via local unitary transformations. This phenomenon, that can be interpreted as an instance of device-independent quantum tomography, was first pointed out by Tsirelson and named self-testing by Mayers and Yao. Self-testing is known to be robust against systematic noise, i.e., if the experimentally measured statistics are close enough to , one can still determine the underlying state and measurement operators up to error bars.
Dimension witnesses
The degree of non-locality of a quantum box can also provide lower bounds on the Hilbert space dimension of the local systems accessible to Alice and Bob. This problem is equivalent to deciding the existence of a matrix with low completely positive semidefinite rank. Finding lower bounds on the Hilbert space dimension based on statistics happens to be a hard task, and current general methods only provide very low estimates. However, a Bell scenario with five inputs and three outputs suffices to provide arbitrarily high lower bounds on the underlying Hilbert space dimension. Quantum communication protocols which assume a knowledge of the local dimension of Alice and Bob's systems, but otherwise do not make claims on the mathematical description of the preparation and measuring devices involved are termed semi-device independent protocols. Currently, there exist semi-device independent protocols for quantum key distribution and randomness expansion.
See also
Action at a distance
Popper's experiment
Quantum pseudo-telepathy
Quantum contextuality
Quantum foundations
References
Further reading
Nonlocality
Nonlocality | Quantum nonlocality | [
"Physics"
] | 5,932 | [
"Quantum field theory",
"Quantum measurement",
"Quantum mechanics"
] |
8,981,301 | https://en.wikipedia.org/wiki/Vegard%27s%20law | In crystallography, materials science and metallurgy, Vegard's law is an empirical finding (heuristic approach) resembling the rule of mixtures. In 1921, Lars Vegard discovered that the lattice parameter of a solid solution of two constituents is approximately a weighted mean of the two constituents' lattice parameters at the same temperature:
a_{A(1−x)B(x)} = (1 − x)·a_A + x·a_B,
e.g., in the case of a mixed oxide of uranium and plutonium as used in the fabrication of MOX nuclear fuel:
a_{U(1−y)Pu(y)O2} = (1 − y)·a_{UO2} + y·a_{PuO2}.
Vegard's law assumes that both components A and B in their pure form (i.e., before mixing) have the same crystal structure. Here, a_{A(1−x)B(x)} is the lattice parameter of the solid solution, a_A and a_B are the lattice parameters of the pure constituents, and x is the molar fraction of B in the solid solution.
Vegard's law is seldom perfectly obeyed; often deviations from the linear behavior are observed. A detailed study of such deviations was conducted by King. However, it is often used in practice to obtain rough estimates when experimental data are not available for the lattice parameter for the system of interest.
For systems known to approximately obey Vegard's law, the approximation may also be used to estimate the composition of a solution from knowledge of its lattice parameters, which are easily obtained from diffraction data. For example, in a ternary semiconductor alloy, a relation exists between the constituent compounds and their associated lattice parameters such that the alloy's lattice parameter is the composition-weighted mean of the constituents' parameters, and this relation can be inverted to recover the composition.
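A minimal sketch of such a composition estimate, assuming approximate lattice parameters for GaAs and InAs (about 5.653 Å and 6.058 Å) purely as illustrative inputs for an In(x)Ga(1−x)As film:

```python
def vegard_lattice(x, a_a, a_b):
    """Vegard's law: a(x) = (1 - x) * a_A + x * a_B."""
    return (1 - x) * a_a + x * a_b

def composition_from_lattice(a_meas, a_a, a_b):
    """Invert Vegard's law to estimate the molar fraction x of constituent B."""
    return (a_meas - a_a) / (a_b - a_a)

A_GAAS, A_INAS = 5.653, 6.058   # illustrative lattice parameters in angstroms

a_measured = 5.87               # e.g. from X-ray diffraction of an InGaAs film
x = composition_from_lattice(a_measured, A_GAAS, A_INAS)
print(f"estimated In fraction x ~ {x:.2f}")
print(f"check: a({x:.2f}) = {vegard_lattice(x, A_GAAS, A_INAS):.3f} angstrom")
```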
When variations in lattice parameter are very small across the entire composition range, Vegard's law becomes equivalent to Amagat's law.
Relationship to band gaps in semiconductors
In many binary semiconducting systems, the band gap in semiconductors is approximately a linear function of the lattice parameter. Therefore, if the lattice parameter of a semiconducting system follows Vegard's law, one can also write a linear relationship between the band gap and composition. Using the molar fraction x as before, the band gap energy E_g of the alloy A(1−x)B(x) can be written as:
E_g = (1 − x)·E_g,A + x·E_g,B.
Sometimes, the linear interpolation between the band gap energies is not accurate enough, and a second term to account for the curvature of the band gap energies as a function of composition is added. This curvature correction is characterized by the bowing parameter b:
E_g = (1 − x)·E_g,A + x·E_g,B − b·x·(1 − x).
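A short sketch of the interpolation with and without the bowing correction; the GaAs/InAs band gaps and bowing parameter below are approximate, illustrative values only:

```python
def vegard_band_gap(x, eg_a, eg_b, bowing=0.0):
    """Band gap of an A(1-x)B(x) alloy: linear interpolation minus the bowing term."""
    return (1 - x) * eg_a + x * eg_b - bowing * x * (1 - x)

# Approximate room-temperature values for Ga(1-x)In(x)As, for illustration.
EG_GAAS, EG_INAS, BOWING = 1.42, 0.35, 0.48   # eV

for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    linear = vegard_band_gap(x, EG_GAAS, EG_INAS)
    bowed = vegard_band_gap(x, EG_GAAS, EG_INAS, BOWING)
    print(f"x = {x:.2f}:  linear {linear:.3f} eV   with bowing {bowed:.3f} eV")
```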
Mineralogy
The following excerpt from Takashi Fujii (1960) summarises well the limits of the Vegard’s law in the context of mineralogy and also makes the link with the Gladstone–Dale equation:
See also
When considering the empirical correlation of some physical properties and the chemical composition of solid compounds, other relationships, rules, or laws, also closely resembles the Vegard's law, and in fact the more general rule of mixtures:
Amagat's law
Gladstone–Dale equation
Kopp's law
Kopp–Neumann law
Rule of mixtures
References
Crystallography
Materials science
Metallurgy
Mineralogy
Eponyms | Vegard's law | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 579 | [
"Applied and interdisciplinary physics",
"Metallurgy",
"Materials science",
"Crystallography",
"Condensed matter physics",
"nan"
] |
8,982,401 | https://en.wikipedia.org/wiki/Nanakshahi%20bricks | Nanakshahi bricks (; meaning "belonging to the reign of Guru Nanak"), also known as Lakhuri bricks, were decorative bricks used for structural walls during the Mughal era. They were employed for constructing historical Sikh architecture, such as at the Golden Temple complex. The British colonists also made use of the bricks in Punjab.
Uses
Nanakshahi bricks were used in the Mughal era more for aesthetic or ornamental reasons than for structural reasons. This variety of brick tile was of moderate dimensions and could be used for reinforcing lime concretes in structural walls and other thick components. But, as they made moldings, cornices, pilasters, etc. easy to work into a variety of shapes, they were more often used as cladding or decorative material. In the present day, the bricks are sometimes used to give a "historical" look to settings, such as when the surroundings of the Golden Temple complex were heavily renovated in the 2010s.
General specifications
Nanakshahi bricks are moderate in size. More often than not, the structures on which they were used, especially the Sikh temples (gurdwaras), were a combination of two systems: trabeated (post-and-lintel) or based on arches. The surfaces were treated with lime or gypsum plaster which was molded into cornices, pilasters, and other structural as well as non-structural embellishments. Brick and lime mortar as well as lime or gypsum plaster, and lime concrete were the most favoured building materials, although stone (such as red stone and white marble) was also used in a number of shrines. Many fortresses were built using these bricks. They come in 4″ × 4″ and 4″ × 6″ sizes.
Relationship with Lakhuri bricks
Due to a lack of understanding, sometimes contemporary writers confuse the Lakhuri bricks with other similar but distinct regional variants. For example, some writers use "Lakhuri bricks and Nanakshahi bricks" implying two different things, and others use "Lakhuri bricks or Nanakshahi bricks" inadvertently implying either are the same or two different things, leading to confusion on if they are the same, especially if these words are casually mentioned interchangeably.
Lakhuri bricks were used by the Mughal Empire that spanned across the Indian subcontinent, whereas Nanak Shahi bricks were used mainly across the Sikh Empire, that was spread across the Punjab region in the north-west Indian subcontinent, when Sikhs were in conflict with the Mughal Empire due to the religious persecution of Sikhs by Mughals. Coins struck by Sikh rulers between 1764 CE to 1777 CE were called Gobind Shahi coins (bearing an inscription in the name of Guru Gobind Singh), and coins struck from 1777 onward were called Nanak Shahi coins (bearing an inscription in the name of Guru Nanak).
Mughal-era Lakhuri bricks predate Nanakshahi bricks, as seen in Bahadurgarh Fort of Patiala that was built by the Mughal Nawab Saif Khan in 1658 CE using earlier-era Lakhuri bricks, and nearly 80 years later it was renovated using later-era Nanakshahi bricks and renamed in the honor of Guru Tegh Bahadur (as Guru Teg Bahadur had stayed at this fort for three months and nine days before leaving for Delhi when he was executed by Aurangzeb in 1675 CE) by Maharaja of Patiala Karam Singh in 1837 CE. Since the timeline of both the Mughal Empire and Sikh Empire overlapped, both Lakhuri and Nanakshahi bricks were used around the same time in their respective dominions. Restoration architect author Anil Laul clarifies "We, therefore, had slim bricks known as the Lakhori and Nanakshahi bricks in India and the slim Roman bricks or their equivalents for many other parts of the world."
Conservation
Peter Bance, when evaluating the status of Sikh sites in present-day India, where the majority of Sikhs live today, criticizes the destruction of the originality of 19th century Sikh sites under the guise of "renovation", whereby historical structures are toppled and new buildings take their former place. An example cited by him of sites losing their originality relates to nanakshahi bricks, which are characteristic of Sikh architecture from the 19th century, being replaced by renovators of historical Sikh sites in India by marble and gold.
See also
Lakhori bricks
Sikh architecture
Notes
References
External links
Nanak Shahi Bricks
Ancient Home of Baba Sohan Singh Bhakna,(of Ghadar Party fame) in trouble
Viraasat Haveli frozen in Time
Indian architectural history
Sikh architecture
Mughal architecture elements
Building materials | Nanakshahi bricks | [
"Physics",
"Engineering"
] | 949 | [
"Building engineering",
"Construction",
"Materials",
"Building materials",
"Matter",
"Architecture"
] |
8,982,920 | https://en.wikipedia.org/wiki/Bedtime | Bedtime (also called putting to bed or tucking in) is a ritual part of parenting to help children feel more secure and become accustomed to a more rigid schedule of sleep than they might prefer. The ritual of bedtime is aimed at facilitating the transition from wakefulness to sleep. It may involve bedtime stories, children's songs, nursery rhymes, bed-making and getting children to change into nightwear. In some religious households, prayers are said shortly before going to bed. Sleep training may be part of the bedtime ritual for babies and toddlers.
In adult use, the term means simply "time for bed", similar to curfew, as in "It's past my bedtime". Some people are accustomed to drinking a nightcap or herbal tea at bedtime. Sleep coaches are also used to help individuals reach their bedtime goals. Researchers studying sleep are finding patterns showing that cell phone use at night makes it harder to fall asleep at one's bedtime and to achieve a good night's sleep.
Synonyms
In boarding schools and on trips or holidays that involve young people, the equivalent of bedtime is lights out or lights-out - this term is also used in prisons, hospitals, in the military, and in sleep research.
Newspapers
A print newspaper, usually a daily, was "put to bed" when editorial work on the issue had formally ceased, the content was fixed, and printing could begin.
See also
Crib talk
Lullaby
Sleep cycle
References
Parenting
Sleep
Culture of beds | Bedtime | [
"Biology"
] | 305 | [
"Behavior",
"Sleep"
] |
8,983,270 | https://en.wikipedia.org/wiki/Trimmer%20%28electronics%29 | A trimmer, or preset, is a miniature adjustable electrical component. It is meant to be set correctly when installed in some device, and never seen or adjusted by the device's user. Trimmers can be variable resistors (potentiometers), variable capacitors, or trimmable inductors. They are common in precision circuitry like A/V components, and may need to be adjusted when the equipment is serviced. Trimpots (trimmer potentiometers) are often used to initially calibrate equipment after manufacturing. Unlike many other variable controls, trimmers are mounted directly on circuit boards, turned with a small screwdriver and rated for many fewer adjustments over their lifetime. Trimmers like trimmable inductors and trimmable capacitors are usually found in superhet radio and television receivers, in the intermediate frequency (IF), oscillator and radio frequency (RF) circuits. They are adjusted into the right position during the alignment procedure of the receiver.
General considerations
Trimmers come in a variety of sizes and levels of precision. For example, multi-turn trim potentiometers exist, in which it takes several turns of the adjustment screw to reach the end value. This allows for very high degrees of accuracy. Often they make use of a worm-gear (rotary track) or a leadscrew (linear track).
The position on the component of the adjustment often needs to be considered for accessibility after the circuit is assembled. Both top- and side-adjust trimmers are available to facilitate this. The adjustment of presets is often fixed in place with sealing wax after the adjustment is made to prevent movement by vibration. This also serves as an indication if the device has been tampered with.
Resistors
Resistor trimmers generally come in the form of a potentiometer (pot), often called a trimpot. Potentiometers have three terminals, but can be used as a normal two-terminal resistor by joining the wiper to one of the other terminals, or just using two terminals. Trimpot is a registered trademark of Bourns, Inc., and the device was patented by Marlan Bourns in 1952. The term has since become generic.
Two types of preset resistor are commonly found in circuits. The skeleton potentiometer works like a regular circular potentiometer, but is stripped of its enclosure, shaft, and fixings. The full movement of a skeleton potentiometer is less than a single turn. The other type is the multi-turn potentiometer which moves the slider along the resistive track via a gearing arrangement. The gearing is such that multiple turns of the adjustment screw are required to move the slider the full distance along the resistive track, leading to very high precision of setting. Some, possibly the majority, of multi-turn pots have a linear track rather than a circular one. Typically, a worm gear is used with rotary track presets and a leadscrew is used with linear track presets.
Capacitors
Trimming capacitors can be multi-plate parallel-plate capacitors with a dielectric between the plates for increased capacitance. However, at SHF only very small values of capacitance are needed. Presets at these frequencies are commonly a glass tube with plates at either end. The top plate is adjusted by means of a screw to which it is attached at the top of the cylinder.
Inductors
A common way of making preset inductors in radios is to wind the inductor on a plastic tube. A high-permeability core material is inserted into the tube in the form of a screw. Winding the core further into the inductor increases the inductance, and vice versa. It is normally necessary to use non-metallic tools to adjust inductors: a steel screwdriver will increase the inductance while the adjustment is being made, and the inductance will fall again when the screwdriver is removed.
At VHF and SHF, only small values of inductance are usually needed. Inductors can be made of open coils of a few turns. They can be tuned by squeezing the coils together or by pulling them apart as the inductance needs to be increased or decreased respectively.
Tuned circuits
An adjustable tuned circuit can be formed in the same way as a preset inductor. The inductor and its resonant capacitor are commonly contained in a metal can for shielding with a hole at the top to give access to the adjustable core. Tuned transformers can also be constructed this way with two windings on the same core. This is a common component in the IF stage of radios which have a double-tuned amplifier format.
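A back-of-the-envelope sketch of why a small core adjustment retunes such a circuit: the resonant frequency of an LC tank is f = 1/(2π√(LC)), so nudging the inductance shifts the tuned peak. The component values below are illustrative, chosen to land near a 455 kHz intermediate frequency:

```python
import math

def resonant_frequency(l_henry, c_farad):
    """Resonant frequency of an LC tank: f = 1 / (2 * pi * sqrt(L * C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

C = 180e-12          # fixed resonating capacitor, 180 pF (illustrative)
L_NOMINAL = 680e-6   # nominal inductance, 680 uH (illustrative)

# Screwing the core in or out changes L by a few percent either way.
for trim in (-0.05, -0.02, 0.0, +0.02, +0.05):
    L = L_NOMINAL * (1 + trim)
    f = resonant_frequency(L, C)
    print(f"L = {L * 1e6:7.1f} uH  ->  f0 = {f / 1e3:7.1f} kHz")
```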
Distributed-element circuit
Distributed-element circuits often use the component known as a stub. In printed planar formats such as microstrip, stubs can be trimmed by removing material with a scalpel or adding material by soldering on copper foil or even just pressing on strips of indium. This is useful for prototypes and pre-production runs, but is usually not done on production items.
Applications
They are common in precision circuitry like A/V components, and may need to be adjusted when the equipment is serviced. Trimpots are often used to initially calibrate equipment after manufacturing. Unlike many other variable controls, trimmers are mounted directly on circuit boards, turned with a small screwdriver and rated for many fewer adjustments over their lifetime. Trimmers like trimmable inductors and trimmable capacitors are usually found in superhet radio and television receivers, in the intermediate frequency (IF), oscillator and radio frequency (RF) circuits. They are adjusted into the correct position during the alignment procedure.
Electronic symbols
In circuit diagrams, the symbol for a variable component is the symbol for a fixed component with a diagonal line through it terminating in an arrow head. For a preset component, the diagonal line terminates in a bar.
See also
Laser trimming
References
External links
Trimmer potentiometers (examples and internals), Robot Room
Highlights from Trimmer Primers - Bourns
Resistive components
Capacitors
| Trimmer (electronics) | [
"Physics"
] | 1,284 | [
"Physical quantities",
"Resistive components",
"Capacitors",
"Capacitance",
"Electrical resistance and conductance"
] |
8,983,708 | https://en.wikipedia.org/wiki/Lam%C3%A9%20parameters | In continuum mechanics, Lamé parameters (also called the Lamé coefficients, Lamé constants or Lamé moduli) are two material-dependent quantities denoted by λ and μ that arise in strain-stress relationships. In general, λ and μ are individually referred to as Lamé's first parameter and Lamé's second parameter, respectively. Other names are sometimes employed for one or both parameters, depending on context. For example, the parameter μ is referred to in fluid dynamics as the dynamic viscosity of a fluid (not expressed in the same units); whereas in the context of elasticity, μ is called the shear modulus, and is sometimes denoted by G instead of μ. Typically the notation G is seen paired with the use of Young's modulus E, and the notation μ is paired with the use of λ.
In homogeneous and isotropic materials, these define Hooke's law in 3D, \(\boldsymbol{\sigma} = 2\mu\boldsymbol{\varepsilon} + \lambda\,\operatorname{tr}(\boldsymbol{\varepsilon})\,\mathbf{I}\), where \(\boldsymbol{\sigma}\) is the stress tensor, \(\boldsymbol{\varepsilon}\) the strain tensor, \(\mathbf{I}\) the identity matrix and \(\operatorname{tr}\) the trace function. Hooke's law may be written in terms of tensor components using index notation as \(\sigma_{ij} = 2\mu\varepsilon_{ij} + \lambda\delta_{ij}\varepsilon_{kk}\), where \(\delta_{ij}\) is the Kronecker delta.
The two parameters together constitute a parameterization of the elastic moduli for homogeneous isotropic media, popular in mathematical literature, and are thus related to the other elastic moduli; for instance, the bulk modulus can be expressed as \(K = \lambda + \tfrac{2}{3}\mu\). Relations for other moduli are found in the (λ, G) row of standard conversion tables for the elastic moduli.
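As a concrete illustration of these conversions, the short Python sketch below computes (λ, μ) from Young's modulus and Poisson's ratio, recovers the bulk modulus from \(K = \lambda + \tfrac{2}{3}\mu\), and checks the round trip; the steel-like input values are purely illustrative.

```python
def lame_from_young(E: float, nu: float) -> tuple[float, float]:
    """Lamé parameters (lambda, mu) from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def young_from_lame(lam: float, mu: float) -> tuple[float, float]:
    """Young's modulus and Poisson's ratio back from the Lamé parameters."""
    E = mu * (3.0 * lam + 2.0 * mu) / (lam + mu)
    nu = lam / (2.0 * (lam + mu))
    return E, nu

# Illustrative steel-like values: E = 200 GPa, nu = 0.3
lam, mu = lame_from_young(200e9, 0.3)
K = lam + 2.0 * mu / 3.0                     # bulk modulus
print(lam, mu, K)                            # ~115.4 GPa, ~76.9 GPa, ~166.7 GPa
E_back, nu_back = young_from_lame(lam, mu)
assert abs(E_back - 200e9) < 1e-3 * 200e9    # round trip recovers E
```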
Although the shear modulus, μ, must be positive, Lamé's first parameter, λ, can in principle be negative; however, for most materials it is also positive.
The parameters are named after Gabriel Lamé. They have the same dimension as stress and are usually given in the SI unit of stress, the pascal [Pa].
See also
Elasticity tensor
Further reading
K. Feng, Z.-C. Shi, Mathematical Theory of Elastic Structures, Springer New York, , (1981)
G. Mavko, T. Mukerji, J. Dvorkin, The Rock Physics Handbook, Cambridge University Press (paperback), , (2003)
W.S. Slaughter, The Linearized Theory of Elasticity, Birkhäuser, , (2002)
References
Elasticity (physics) | Lamé parameters | [
"Physics",
"Materials_science"
] | 464 | [
"Deformation (mechanics)",
"Physical phenomena",
"Physical properties",
"Elasticity (physics)"
] |
8,984,493 | https://en.wikipedia.org/wiki/Ion%20pump | An ion pump (also referred to as a sputter ion pump) is a type of vacuum pump which operates by sputtering a metal getter. Under ideal conditions, ion pumps are capable of reaching pressures as low as 10−11 mbar. An ion pump first ionizes gas within the vessel it is attached to and employs a strong electrical potential, typically 3–7 kV, which accelerates the ions into a solid electrode. Small bits of the electrode are sputtered into the chamber. Gasses are trapped by a combination of chemical reactions with the surface of the highly-reactive sputtered material, and being physically trapped underneath that material.
History
The first evidence for pumping from electrical discharge was found in 1858 by Julius Plücker, who did early experiments on electrical discharge in vacuum tubes. In 1937, Frans Michel Penning observed some evidence of pumping in the operation of his cold cathode gauge. These early effects pumped comparatively slowly, and were therefore not commercialized. A major advance came in the 1950s, when Varian Associates were researching improvements to the performance of vacuum tubes, particularly improving the vacuum inside the klystron. In 1957, Lewis D Hall, John C Helmer, and Robert L Jepsen filed a patent for a significantly improved pump, one of the earliest pumps that could bring a vacuum chamber to ultra-high vacuum pressures.
Working principle
The basic element of the common ion pump is a Penning trap. A swirling cloud of electrons produced by an electric discharge is temporarily stored in the anode region of a Penning trap. These electrons ionize incoming gas atoms and molecules. The resultant swirling ions are accelerated to strike a chemically active cathode (usually titanium). On impact the accelerated ions will either become buried within the cathode or sputter cathode material onto the walls of the pump. The freshly sputtered chemically active cathode material acts as a getter that then evacuates the gas by both chemisorption and physisorption resulting in a net pumping action. Inert and lighter gases, such as He and H2 tend not to sputter and are absorbed by physisorption. Some fraction of the energetic gas ions (including gas that is not chemically active with the cathode material) can strike the cathode and acquire an electron from the surface, neutralizing it as it rebounds. These rebounding energetic neutrals are buried in exposed pump surfaces.
Both the pumping rate and capacity of such capture methods are dependent on the specific gas species being collected and the cathode material absorbing it. Some species, such as carbon monoxide, will chemically bind to the surface of a cathode material. Others, such as hydrogen, will diffuse into the metallic structure. In the former example, the pump rate can drop as the cathode material becomes coated. In the latter, the rate remains fixed by the rate at which the hydrogen diffuses.
Types
There are three main types of ion pumps: the conventional or standard diode pump, the noble diode pump and the triode pump.
Standard diode pump
A standard diode pump is a type of ion pump employed in high vacuum processes which contains only chemically active cathodes, in contrast to noble diode pumps.
Two sub-types may be distinguished: the sputter ion pumps and the orbitron ion pumps.
Sputter ion pump
In the sputter ion pumps, one or more hollow anodes are placed between two cathode plates, with an intense magnetic field parallel to the axis of the anodes in order to augment the path of the electrons in the anode cells.
Orbitron ion pump
In the orbitron vacuum pumps, electrons are caused to travel in spiral orbits between a central anode, normally in the form of a cylindrical wire or rod, and an outer or boundary cathode, generally in the form of a cylindrical wall or cage. The orbiting of the electrons is achieved without the use of a magnetic field, even though a weak axial magnetic field may be employed.
Noble diode pump
A noble diode pump is a type of ion pump used in high-vacuum applications that employs both a chemically reactive cathode, such as titanium, and an additional cathode composed of tantalum. The tantalum cathode serves as a high-inertia crystal lattice structure for the reflection and burial of neutrals, increasing pumping effectiveness of inert gas ions. Pumping intermittently high quantities of hydrogen with noble diodes should be done with great care, as hydrogen might over months get re-emitted out of the tantalum.
Applications
Ion pumps are commonly used in ultra-high vacuum (UHV) systems, as they can attain ultimate pressures less than 10−11 mbar. In contrast to other common UHV pumps, such as turbomolecular pumps and diffusion pumps, ion pumps have no moving parts and use no oil. They are therefore clean, need little maintenance, and produce no vibrations. These advantages make ion pumps well-suited for use in scanning probe microscopy, molecular beam epitaxy and other high-precision apparatuses.
Radicals
Recent work has suggested that free radicals escaping from ion pumps can influence the results of some experiments.
See also
Electroosmotic flow
Marklund convection
References
Sources
External links
An Introduction to Ion Pumps
Vacuum pumps | Ion pump | [
"Physics",
"Engineering"
] | 1,089 | [
"Vacuum pumps",
"Vacuum systems",
"Vacuum",
"Matter"
] |
4,290,894 | https://en.wikipedia.org/wiki/Lie%20bialgebra | In mathematics, a Lie bialgebra is the Lie-theoretic case of a bialgebra: it is a set with a Lie algebra and a Lie coalgebra structure which are compatible.
It is a bialgebra where the multiplication is skew-symmetric and satisfies a dual Jacobi identity, so that the dual vector space is a Lie algebra, whereas the comultiplication is a 1-cocycle, so that the multiplication and comultiplication are compatible. The cocycle condition implies that, in practice, one studies only classes of bialgebras that are cohomologous to a Lie bialgebra on a coboundary.
They are also called Poisson-Hopf algebras, and are the Lie algebra of a Poisson–Lie group.
Lie bialgebras occur naturally in the study of the Yang–Baxter equations.
Definition
A vector space \(\mathfrak{g}\) is a Lie bialgebra if it is a Lie algebra,
and there is the structure of Lie algebra also on the dual vector space \(\mathfrak{g}^*\) which is compatible.
More precisely the Lie algebra structure on \(\mathfrak{g}\) is given
by a Lie bracket \([\,,\,] : \mathfrak{g} \otimes \mathfrak{g} \to \mathfrak{g}\),
and the Lie algebra structure on \(\mathfrak{g}^*\) is given by a Lie
bracket \(\delta^* : \mathfrak{g}^* \otimes \mathfrak{g}^* \to \mathfrak{g}^*\).
Then the map dual to \(\delta^*\), written \(\delta : \mathfrak{g} \to \mathfrak{g} \otimes \mathfrak{g}\), is called the cocommutator,
and the compatibility condition is the following cocycle relation:
\[\delta([X,Y]) = \operatorname{ad}_X \delta(Y) - \operatorname{ad}_Y \delta(X),\]
where \(\operatorname{ad}_X(Y \otimes Z) = [X,Y] \otimes Z + Y \otimes [X,Z]\) is the adjoint action.
Note that this definition is symmetric and \(\mathfrak{g}^*\) is also a Lie bialgebra, the dual Lie bialgebra.
Example
Let \(\mathfrak{g}\) be any semisimple Lie algebra.
To specify a Lie bialgebra structure we thus need to specify a compatible Lie algebra structure on the dual vector space.
Choose a Cartan subalgebra \(\mathfrak{h}\) and a choice of positive roots.
Let \(\mathfrak{b}_\pm\) be the corresponding opposite Borel subalgebras, so that \(\mathfrak{b}_- \cap \mathfrak{b}_+ = \mathfrak{h}\) and there is a natural projection \(\pi : \mathfrak{b}_\pm \to \mathfrak{h}\).
Then define a Lie algebra
\[\mathfrak{g}^* := \{(X_-, X_+) \in \mathfrak{b}_- \times \mathfrak{b}_+ : \pi(X_-) + \pi(X_+) = 0\},\]
which is a subalgebra of the product \(\mathfrak{b}_- \times \mathfrak{b}_+\), and has the same dimension as \(\mathfrak{g}\).
Now identify \(\mathfrak{g}^*\) with the dual of \(\mathfrak{g}\) via the pairing
\[\langle (X_-, X_+),\, Y \rangle := K(X_+ - X_-,\, Y),\]
where \(Y \in \mathfrak{g}\) and \(K\) is the Killing form.
This defines a Lie bialgebra structure on \(\mathfrak{g}\), and is the "standard" example: it underlies the Drinfeld–Jimbo quantum group.
Note that \(\mathfrak{g}^*\) is solvable, whereas \(\mathfrak{g}\) is semisimple.
Relation to Poisson–Lie groups
The Lie algebra \(\mathfrak{g}\) of a Poisson–Lie group G has a natural structure of Lie bialgebra.
In brief the Lie group structure gives the Lie bracket on \(\mathfrak{g}\) as usual, and the linearisation of the Poisson structure on G
gives the Lie bracket on \(\mathfrak{g}^*\)
(recalling that a linear Poisson structure on a vector space is the same thing as a Lie bracket on the dual vector space).
In more detail, let G be a Poisson–Lie group, with \(f_1, f_2 \in C^\infty(G)\) being two smooth functions on the group manifold. Let \(\xi_i = (df_i)_e\) be the differential at the identity element. Clearly, \(\xi_i \in \mathfrak{g}^*\). The Poisson structure on the group then induces a bracket on \(\mathfrak{g}^*\), as
\[[\xi_1, \xi_2] := \left(d\{f_1, f_2\}\right)_e,\]
where \(\{\,,\,\}\) is the Poisson bracket. Given \(\eta\) the Poisson bivector on the manifold, define \(\eta^R\) to be the right-translate of the bivector to the identity element in G. Then one has that \(\eta^R : G \to \mathfrak{g} \otimes \mathfrak{g}\).
The cocommutator is then the tangent map \(\delta = T_e\,\eta^R\),
so that \([\xi_1, \xi_2] = \delta^*(\xi_1 \otimes \xi_2)\);
that is, the Lie bracket on \(\mathfrak{g}^*\) is the dual of the cocommutator.
See also
Lie coalgebra
Manin triple
References
H.-D. Doebner, J.-D. Hennig, eds, Quantum groups, Proceedings of the 8th International Workshop on Mathematical Physics, Arnold Sommerfeld Institute, Claausthal, FRG, 1989, Springer-Verlag Berlin, .
Vyjayanthi Chari and Andrew Pressley, A Guide to Quantum Groups, (1994), Cambridge University Press, Cambridge .
Lie algebras
Coalgebras
Symplectic geometry | Lie bialgebra | [
"Mathematics"
] | 766 | [
"Mathematical structures",
"Algebraic structures",
"Coalgebras"
] |
4,296,490 | https://en.wikipedia.org/wiki/Completeness%20%28cryptography%29 | In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits.
This is a desirable property to have in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has an average of 50% probability of changing. The easiest way to show why this is good is the following: consider that if we changed our 8-byte plaintext's last byte, it would only have an effect on the 8th byte of the ciphertext. This would mean that if the attacker collected 256 different plaintext-ciphertext pairs, he would always know the last byte of every 8-byte sequence we send (effectively 12.5% of all our data). Finding out 256 plaintext-ciphertext pairs is not hard at all in the internet world, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if our cipher has this property (and is generally secure in other ways, too), the attacker would need to collect \(2^{64}\) (about \(1.8 \times 10^{19}\)) plaintext-ciphertext pairs to crack the cipher in this way.
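Completeness can be probed empirically: flip each input bit in turn over many random inputs and record which output bits ever change. The Python sketch below does this for a deliberately toy 16-bit mixing function; the function is not a real cipher, and random sampling only gives evidence of dependence, not a proof.

```python
import random

def toy_mix(x: int) -> int:
    """A toy 16-bit mixing function -- illustrative only, not a real cipher."""
    x &= 0xFFFF
    x = ((x << 5) | (x >> 11)) & 0xFFFF      # rotate left by 5 bits
    x ^= (x * 0x2545) & 0xFFFF               # multiply-xor mixing step
    x ^= x >> 7                              # shift-xor mixing step
    return x & 0xFFFF

def dependence_matrix(f, bits=16, samples=500):
    """dep[i][j] is True if flipping input bit i ever changed output bit j
    over the sampled inputs (an empirical check, not a proof)."""
    dep = [[False] * bits for _ in range(bits)]
    for _ in range(samples):
        x = random.getrandbits(bits)
        base = f(x)
        for i in range(bits):
            diff = base ^ f(x ^ (1 << i))
            for j in range(bits):
                if (diff >> j) & 1:
                    dep[i][j] = True
    return dep

dep = dependence_matrix(toy_mix)
print("complete over the sample:", all(all(row) for row in dep))
```

A function failing this check (for example, one that processes each byte independently) would show empty columns in the matrix, which is exactly the weakness described in the 8-byte example above.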
See also
Correlation immunity
References
Cryptography | Completeness (cryptography) | [
"Mathematics",
"Engineering"
] | 282 | [
"Applied mathematics",
"Cryptography",
"Cybersecurity engineering"
] |
3,152,853 | https://en.wikipedia.org/wiki/Large%20set%20%28Ramsey%20theory%29 | In Ramsey theory, a set S of natural numbers is considered to be a large set if and only if Van der Waerden's theorem can be generalized to assert the existence of arithmetic progressions with common difference in S. That is, S is large if and only if every finite partition of the natural numbers has a cell containing arbitrarily long arithmetic progressions having common differences in S.
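The definition can be explored on finite initial segments of the natural numbers: given a colouring of {1, ..., N} and a candidate set S, one can search for a monochromatic arithmetic progression whose common difference lies in S. The brute-force Python sketch below does exactly that; it only ever gives evidence on a finite range rather than a proof of largeness, and the Thue–Morse-style colouring and the choice of S (even numbers) are purely illustrative.

```python
def has_mono_ap(colouring, diffs, length):
    """Return a monochromatic arithmetic progression of the given length whose
    common difference is in `diffs`, searching inside {1, ..., len(colouring)};
    return None if no such progression exists in that finite range."""
    N = len(colouring)
    for d in diffs:
        max_start = N - d * (length - 1)
        for start in range(1, max_start + 1):
            terms = [start + k * d for k in range(length)]
            if len({colouring[t - 1] for t in terms}) == 1:
                return terms
    return None

# Thue-Morse-style 2-colouring of {1, ..., 200}: colour n by the parity of
# the number of 1s in its binary expansion.
N = 200
colouring = [bin(n).count("1") % 2 for n in range(1, N + 1)]
S = list(range(2, 21, 2))   # candidate common differences: even numbers up to 20
print(has_mono_ap(colouring, S, length=4))   # finds [1, 7, 13, 19] (difference 6)
```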
Examples
The natural numbers are large. This is precisely the assertion of Van der Waerden's theorem.
The even numbers are large.
Properties
Necessary conditions for largeness include:
If S is large, for any natural number n, S must contain at least one multiple (equivalently, infinitely many multiples) of n.
If \(S = \{s_1, s_2, s_3, \dots\}\) is large, it is not the case that \(s_k \ge 3 s_{k-1}\) for \(k \ge 2\).
Two sufficient conditions are:
If S contains n-cubes for arbitrarily large n, then S is large.
If \(S = \{f(1), f(2), f(3), \dots\}\), where \(f\) is a polynomial with \(f(0) = 0\) and positive leading coefficient, then \(S\) is large.
The first sufficient condition implies that if S is a thick set, then S is large.
Other facts about large sets include:
If S is large and F is finite, then S – F is large.
is large.
If S is large, is also large.
If is large, then for any , is large.
2-large and k-large sets
A set is k-large, for a natural number k > 0, if it meets the conditions for largeness when the restatement of van der Waerden's theorem is concerned only with k-colorings. Every set is either large or k-large for some maximal k. This follows from two important, albeit trivially true, facts:
k-largeness implies (k-1)-largeness for k>1
k-largeness for all k implies largeness.
It is unknown whether there are 2-large sets that are not also large sets. Brown, Graham, and Landman (1999) conjecture that no such set exists.
See also
Partition of a set
Further reading
External links
Mathworld: van der Waerden's Theorem
Basic concepts in set theory
Ramsey theory
Theorems in discrete mathematics | Large set (Ramsey theory) | [
"Mathematics"
] | 450 | [
"Discrete mathematics",
"Mathematical theorems",
"Combinatorics",
"Theorems in discrete mathematics",
"Basic concepts in set theory",
"Mathematical problems",
"Ramsey theory"
] |
3,153,173 | https://en.wikipedia.org/wiki/Leptoquark | Leptoquarks are hypothetical particles that would interact with quarks and leptons. Leptoquarks are color-triplet bosons that carry both lepton and baryon numbers. Their other quantum numbers, like spin, (fractional) electric charge and weak isospin, vary among models. Leptoquarks are encountered in various extensions of the Standard Model, such as technicolor theories, theories of quark–lepton unification (e.g., Pati–Salam model), or GUTs based on SU(5), SO(10), E6, etc. Leptoquarks are currently searched for in the ATLAS and CMS experiments at the Large Hadron Collider at CERN.
In March 2021, some reports hinted at the possible existence of leptoquarks, based on an unexpected difference in how bottom quarks decay to create electrons or muons. The measurement was made at a statistical significance of 3.1σ, which is well below the 5σ level that is usually considered a discovery.
Overview
Leptoquarks, if they exist, must be heavier than any of the currently known elementary particles, otherwise they would have already been discovered. Current experimental lower limits on leptoquark mass (depending on their type) are around 1 TeV (i.e., about 1000 times the proton mass).
By definition, leptoquarks decay directly into a quark and a lepton or an antilepton. Like most of other elementary particles, they live for a very short time and are not present in ordinary matter.
However, they might be produced in high energy particle collisions such as in particle colliders or from cosmic rays hitting the Earth's atmosphere.
Like quarks, leptoquarks must carry color and therefore must also interact with gluons. This strong interaction of theirs is important for their production in hadron colliders (such as the Tevatron or LHC).
Simplified typology
Several kinds of leptoquarks, depending on their electric charge, can be considered:
Q = +5/3: Such a leptoquark decays into up-type quarks (up, charm, top) and charged antileptons (e+, μ+, τ+).
Q = +2/3: Such a leptoquark decays into up-type quarks and neutrinos (or antineutrinos), and/or to down-type quarks (down, strange, bottom) and charged antileptons.
Q = −1/3: Such a leptoquark decays into down-type quarks and (anti)neutrinos, and/or to an up-type quark and a charged lepton.
Q = −4/3: Such a leptoquark decays into down-type quarks and charged leptons.
If a leptoquark with a given charge exists, its antiparticle with an opposite charge and which would decay into conjugated states to those listed above, must exist as well.
A leptoquark with given electric charge may, in general, interact with any combination of a lepton and quark with given electric charges (this yields up to 3 × 3 = 9 distinct interactions of a single type of a leptoquark, one for each pairing of the three quark and three lepton generations). However, experimental searches usually assume that only one of those "channels" is possible. In particular, a Q = +2/3 charged leptoquark that decays into a positron and a down quark is called a "first-generation leptoquark", a leptoquark that decays into a strange quark and an antimuon is a "second-generation leptoquark", etc. Nevertheless, most theories do not offer much theoretical motivation to believe that leptoquarks have only a single interaction and that the generation of the quark and lepton involved is the same.
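The charge assignments in the list above follow from charge conservation alone: a leptoquark decaying to a quark and a lepton (or antilepton) must carry the sum of their electric charges. The small Python sketch below reproduces this bookkeeping; it encodes nothing model-specific.

```python
from fractions import Fraction as F

quark_charges = {"up-type quark": F(2, 3), "down-type quark": F(-1, 3)}
partner_charges = {"charged antilepton": F(1), "neutrino": F(0), "charged lepton": F(-1)}

# Charge conservation: the leptoquark's charge equals the sum of the
# charges of its two decay products.
channels = {}
for qname, qq in quark_charges.items():
    for lname, lq in partner_charges.items():
        channels.setdefault(qq + lq, []).append(f"{qname} + {lname}")

for charge in sorted(channels, reverse=True):
    print(f"Q = {charge}:", "; ".join(channels[charge]))
# Reproduces the four charges of the list above: 5/3, 2/3, -1/3 and -4/3.
```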
Proton decay
Existence of pure leptoquarks would not spoil baryon number conservation. However, some theories allow (or require) the leptoquark to also have a diquark interaction vertex. For example, a Q = +2/3 charged leptoquark might also decay into two down-type antiquarks. Existence of such a leptoquark-diquark would cause protons to decay. The current limits on proton lifetime are strong probes of the existence of these leptoquark-diquarks. These fields emerge in grand unification theories; for example, in the Georgi–Glashow SU(5) model, they are called X and Y bosons.
Experimental searches
In 1997, an excess of events at the HERA accelerator created a stir in the particle physics community, because one possible explanation of the excess was the involvement of leptoquarks. However, later studies performed both at HERA and at the Tevatron with larger samples of data ruled out this possibility for masses of the leptoquark up to around . Second generation leptoquarks were also looked for and not found.
Current best limits on leptoquarks are set by LHC, which has been searching for the first, second, and third generation of leptoquarks and some mixed-generation leptoquarks
and have raised the lower mass limit to about . For leptoquarks coupling to a neutrino and a quark to be proven to exist, the missing energy in particle collisions attributed to neutrinos would have to be excessively energetic. It is likely that the creation of leptoquarks would mimic the creation of massive quarks.
For leptoquarks coupling to electrons and up or down quarks, experiments of atomic parity violation and parity-violating electron scattering set the best limits.
The LHeC project to add an electron ring to collide bunches with the existing LHC proton ring is proposed as a project to look for higher-generation leptoquarks.
See also
X and Y bosons
Quark–lepton complementarity
References
Hypothetical elementary particles
Grand Unified Theory
Gauge bosons | Leptoquark | [
"Physics"
] | 1,281 | [
"Hypothetical elementary particles",
"Unsolved problems in physics",
"Physics beyond the Standard Model",
"Grand Unified Theory"
] |
3,155,399 | https://en.wikipedia.org/wiki/Mostafa%20El-Sayed | Mostafa A. El-Sayed (Arabic: مصطفى السيد) is an Egyptian-American physical chemist, nanoscience researcher, member of the National Academy of Sciences and US National Medal of Science laureate. He is known for the spectroscopy rule named after him, the El-Sayed rule.
Early life and academic career
El-Sayed was born in Zifta, Egypt and spent his early life in Cairo. He earned his B.Sc. in chemistry from Ain Shams University Faculty of Science, Cairo in 1953. El-Sayed earned his doctoral degree in chemistry from Florida State University working with Michael Kasha, the last student of the legendary G. N. Lewis. While attending graduate school he met and married Janice Jones, his wife of 48 years. He spent time as a post-doctoral researcher at Harvard University, Yale University and the California Institute of Technology before joining the faculty of the University of California at Los Angeles in 1961. In 1994, he retired from UCLA and accepted the position of Julius Brown Chair and Regents Professor of Chemistry and Biochemistry at the Georgia Institute of Technology. He led the Laser Dynamics Lab there until his full retirement in 2020.
El-Sayed is a former editor-in-chief of the Journal of Physical Chemistry (1980–2004).
Research
El-Sayed's research interests include the use of steady-state and ultra fast laser spectroscopy to understand relaxation, transport and conversion of energy in molecules, in solids, in photosynthetic systems, semiconductor quantum dots and metal nanostructures. The El-Sayed group has also been involved in the development of new techniques such as magnetophotonic selection, picosecond Raman spectroscopy and phosphorescence microwave double resonance spectroscopy. A major focus of his lab is currently on the optical and chemical properties of noble metal nanoparticles and their applications in nanocatalysis, nanophotonics and nanomedicine. His lab is known for the development of the gold nanorod technology. As of 2021, El-Sayed has produced over 1200 publications in refereed journals in the areas of spectroscopy, molecular dynamics and nanoscience, with over 130,000 citations.
Honors
For his work in the area of applying laser spectroscopic techniques to study of properties and behavior on the nanoscale, El-Sayed was elected to the National Academy of Sciences in 1980. In 1989 he received the Tolman Award, and in 2002, he won the Irving Langmuir Award in Chemical Physics. He has been the recipient of the 1990 King Faisal International Prize ("Arabian Nobel Prize") in Sciences, Georgia Tech's highest award, "The Class of 1943 Distinguished Professor", an honorary doctorate of philosophy from the Hebrew University, and several other awards including some from the different American Chemical Society local sections. He was a Sherman Fairchild Distinguished Scholar at the California Institute of Technology and an Alexander von Humboldt Senior U.S. Scientist Awardee. He served as editor-in-chief of the Journal of Physical Chemistry from 1980 to 2004 and has also served as the U.S. editor of the International Reviews in Physical Chemistry. He is a Fellow of the American Academy of Arts and Sciences, a member of the American Physical Society, the American Association for the Advancement of Science and the Third World Academy of Science. Mostafa El-Sayed was awarded the 2007 US National Medal of Science "for his seminal and creative contributions to our understanding of the electronic and optical properties of nanomaterials and to their applications in nanocatalysis and nanomedicine, for his humanitarian efforts of exchange among countries and for his role in developing the scientific leadership of tomorrow." Mostafa was also announced to be the recipient of the 2009 Ahmed Zewail prize in molecular sciences. In 2011, he was listed #17 in Thomson-Reuters listing of the Top Chemists of the Past Decade. Professor El-Sayed also received the 2016 Priestley Medal, the American Chemical Society’s highest honor, for his decades-long contributions to chemistry.
The El-Sayed rule
This rule pertains to phosphorescence and related phenomena. A molecule's electrons occupy different electronic states, depending on the energy of the system. The rule states that a constant-energy, radiationless transition between two electronic states of different spin multiplicity proceeds more readily when the spin flip is accompanied by a change in the type of orbital involved, because the change in the spin of an electron is then compensated by a change in its orbital motion (spin–orbit coupling).
Intersystem crossing (ISC) is a photophysical process involving an isoenergetic radiationless transition between two electronic states having different multiplicities. It often results in a vibrationally excited molecular entity in the lower electronic state, which then usually decays to its lowest molecular vibrational level. ISC is forbidden by rules of conservation of angular momentum. As a consequence, ISC generally occurs on very long time scales. However, the El-Sayed rule states that the rate of intersystem crossing, e.g. from the lowest singlet state to the triplet manifold, is relatively large if the radiationless transition involves a change of molecular orbital type. For example, a (π,π*) singlet could transition to an (n,π*) triplet state, but not to a (π,π*) triplet state, and vice versa. Formulated by El-Sayed in the 1960s, this rule is found in most photochemistry textbooks as well as the IUPAC Gold Book. The rule is useful in understanding phosphorescence, vibrational relaxation, intersystem crossing, internal conversion and lifetimes of excited states in molecules.
Notes
References
El-Sayed, M.A., Acc. Chem. Res. 1968,1,8.
Lower, S.K.; El-Sayed, M.A., Chem. Rev. 1966,66,199
Mostafa Amr El-Sayed (8 May 1933 – Egyptian-American, b. Zifta, Egypt)
Biographical References: McMurray, Emily J. (ed.), Notable Twientieth-Century Scientists, Gale Research, Inc.: New York, 1995.
External links
Faculty web page at Georgia Tech
Laser Dynamics Lab at Georgia Tech
President Bush to laud Georgia Tech’s Mostafa El-Sayed
Mostafa El-Sayed praised for contributions to nanotechnology
Biochemists
Egyptian chemists
Egyptian Muslims
American Muslims
Egyptian inventors
Egyptian emigrants to the United States
Harvard University staff
Members of the United States National Academy of Sciences
Florida State University alumni
Georgia Tech faculty
Living people
1933 births
National Medal of Science laureates
Ain Shams University alumni
American physical chemists
Fellows of the American Physical Society | Mostafa El-Sayed | [
"Chemistry",
"Biology"
] | 1,400 | [
"Biochemistry",
"Biochemists"
] |
3,156,443 | https://en.wikipedia.org/wiki/Anagama%20kiln | The anagama kiln (Japanese Kanji: 穴窯/ Hiragana: あながま) is an ancient type of pottery kiln brought to Japan from China via Korea in the 5th century. It is a version of the climbing dragon kiln of south China, whose further development was also copied, for example in breaking up the firing space into a series of chambers in the noborigama kiln.
An anagama (a Japanese term meaning "cave kiln") consists of a firing chamber with a firebox at one end and a flue at the other. Although the term "firebox" is used to describe the space for the fire, there is no physical structure separating the stoking space from the pottery space. The term anagama describes single-chamber kilns built in a sloping tunnel shape. In fact, ancient kilns were sometimes built by digging tunnels into banks of clay.
The anagama is fueled with firewood, in contrast to the electric or gas-fueled kilns commonly used by most modern potters. A continuous supply of fuel is needed for firing, as wood thrown into the hot kiln is consumed very rapidly. Stoking occurs round the clock until a variety of variables are achieved including the way the fired pots look inside the kiln, the temperatures reached and sustained, the amount of ash applied, the wetness of the walls and the pots, etc.
Burning wood not only produces temperatures of up to 1400 °C (2,500 °F), it also produces fly ash and volatile salts. Wood ash settles on the pieces during the firing, and the complex interaction between flame, ash, and the minerals of the clay body forms a natural ash glaze. This glaze may show great variation in color, texture, and thickness, ranging from smooth and glossy to rough and sharp. The placement of pieces within the kiln distinctly affects the pottery's appearance, as pieces closer to the firebox may receive heavy coats of ash, or even be immersed in embers, while others deeper in the kiln may only be softly touched by ash effects. Other factors that depend on positioning include temperature and oxidation/reduction. Besides location in the kiln (as with other fuel-fired updraft kilns), the way pieces are placed near each other affects the flame path, and, thus, the appearance of pieces within localized zones of the kiln can vary as well. It is said that loading an anagama kiln is the most difficult part of the firing. The potter must imagine the flame path as it rushes through the kiln, and use this sense to 'paint the pieces with fire'.
The length of the firing depends on the volume of the kiln and may take anywhere from 48 hours to 12 or more days. The kiln generally takes the same amount of time to cool down. Records of historic firings in large Asian kilns shared by several village potters describe several weeks of steady stoking per firing.
Kiln variants
One variant on the anagama style is the waritake kiln. A waritake kiln is akin to anagama in structure, but it has partition walls built every several meters through the length of the kiln. Each partition can be side stoked.
A noborigama 登り窯 chambered climbing kiln is also built on a slope, and each succeeding chamber is situated higher than the one before it. The chambers in a noborigama are pierced at intervals with stoking ports. Such climbing kilns have been used in Japan since the 17th century. The largest working Noborigama kiln in Japan is located in Shigaraki, in the southern portion of Shiga Prefecture.
The renboshiki noborigama is a multi-chambered climbing kiln. There are many distinguishing characteristics between the noborigama and anagama style. For example, an anagama is somewhat like a half-tube (long vault) with a fire burned at the lower end. A noborigama is like a set of half-tubes (arches or short vaults, buttressing each other) placed side-by-side with piercings that allow each chamber to feed into the next.
The jagama (snake kiln or dragon kiln) is related to anagama, noborigama, and waritake kilns, and was used extensively in China since at least the 3rd century CE. Jagama are tube shaped similarly to anagama kilns, but can be longer at around 60 m. Although partitioned and side stoked, jagama do not have partition walls, rather, improvised walls are created by densely stacking pottery at intervals.
Characteristics
The main advantage of climbing kilns is that heat from the burning fuel is re-used, the same heat heating more than one part of the kiln. Exhaust heat created during firing of the lower part of the kiln, preheats the chambers above. In addition, the cooling ware and walls below preheat the incoming air. Thus, firing of ware in the upper chambers requires only the additional fuel needed to bring the ware, walls and air to peak temperature. (From a thermodynamic point of view, the higher temperature of combustion and cooler exhaust suggest greater efficiency.) A modern type, called a tube kiln improves the efficiency and output still further by having the ware move through the kiln in a direction opposite to that of the hot gasses.
All of these kilns use two counter-flow exchange mechanisms to maintain the air supply and to change the ware being fired, with minimal loss of heat. Each of these exchanges works on the same principle as countercurrent exchange, the principal difference being that the ware is not a fluid.
An advantage of the chambered and semi-chambered variants appears to be that they are partly downdraft, which makes the firing results less sensitive to the way the ware is loaded.
A disadvantage of smaller climbing kilns is a tendency to rapid cooling, caused by the incoming air.
See also
Six Ancient Kilns
References
External links
Cofield, Jay. "Montevallo's Anagama." Southern Spaces, 10 June 2008.
The Log Book, the magazine about anagamas and woodfiring in general
An anagama kiln built according to Furutani Michio's design principles, podcasts with woodfire potters, photogalleries of woodfired work
Carlson, Scott. Earth, Wind, and Fire: Richard Bresnahan's elemental approach to art — and life Chronicle of Higher Education, 13 February 2009. A story about a potter who built the Johanna Kiln, the largest wood-fired kiln in North America.
Japanese pottery
Kilns | Anagama kiln | [
"Chemistry",
"Engineering"
] | 1,406 | [
"Chemical equipment",
"Kilns"
] |
3,156,532 | https://en.wikipedia.org/wiki/Zinc%20dithiophosphate | Zinc dialkyldithiophosphates (often referred to as ZDDP) are a family of coordination compounds developed in the 1940s that feature zinc bound to the anion of a dialkyldithiophosphoric salt (e.g., ammonium diethyl dithiophosphate). These uncharged compounds are not salts. They are soluble in nonpolar solvents, and the longer-chain derivatives easily dissolve in mineral and synthetic oils used as lubricants. They come under CAS number . In aftermarket oil additives, the percentage of ZDDP ranges approximately between 2 and 15%. Zinc dithiophosphates have many names, including ZDDP, ZnDTP, and ZDP.
Applications
The main application of ZDDPs are as anti-wear additives in lubricants including greases, hydraulic oils, and motor oils. ZDDPs also act as corrosion inhibitors and antioxidants. Concentrations in lubricants range from 600 ppm for modern, energy-conserving low-viscosity oils to 2000 ppm in some racing oils.
It has been reported that zinc and phosphorus emissions may damage catalytic converters, and standard formulations of lubricating oils for gasoline engines now have reduced amounts of the additive due to the API limiting the concentration of this additive in new API SM and SN oils; however, this affects only 20- and 30-grade "ILSAC" oils. Grades 40 and higher have no regulation regarding the concentration of ZDDP, except for diesel oils meeting the API CJ-4 specification, which have had the level of ZDDP reduced slightly, although most diesel Heavy-Duty Engine oils still have a higher concentration of this additive. Crankcase oils with reduced ZDDP have been cited as causing damage to, or failure of, classic/collector car flat-tappet camshafts and lifters which undergo very high boundary layer pressures and/or shear forces at their contact faces, and in other regions such as main bearings, and piston rings and pins. Roller camshafts/followers are more commonly used to reduce camshaft lobe friction in modern engines. There are additives, such as STP Oil Treatment, and some racing oils such as PurOl, PennGrade 1, Valvoline VR-1, and Kixx Hydraulic Oil, which are available in the retail market with the necessary amount of ZDDP for engines using increased valve spring pressures.
Tribofilm formation mechanism
Various mechanisms have been proposed for how ZDDP forms protective tribofilms on solid surfaces. In-situ atomic-force microscopy (AFM) experiments show that the growth of ZDDP tribofilms increases exponentially with both the applied pressure and temperature, consistent with a stress-promoted thermal activation reaction rate model. Subsequently, experiments with negligible solid-solid contact demonstrated that film formation rate depends on the applied shear stress.
Synthesis and structure
With the formula Zn[S2P(OR)2]2, zinc dithiophosphates feature diverse R groups. Typically, R is a branched or linear alkyl group between 1 and 14 carbons in length. Examples include 2-butyl, pentyl, hexyl, 1,3-dimethylbutyl, heptyl, octyl, isooctyl (2-ethylhexyl), 6-methylheptyl, 1-methylpropyl, dodecylphenyl, and others. A mix of zinc dialkyl(C3-C6)dithiophosphates comes under CAS number .
Zinc dithiophosphates are produced in two steps. First, phosphorus pentasulfide is treated with suitable alcohols (ROH) to give the dithiophosphoric acid. A wide variety of alcohols can be employed, which allows the lipophilicity of the final zinc product to be fine-tuned. The resulting dithiophosphoric acid is then neutralized by adding zinc oxide:
P2S5 + 4 ROH → 2 (RO)2PS2H + H2S
2 (RO)2PS2H + ZnO → Zn[S2P(OR)2]2 + H2O
Structural chemistry
In Zn[S2P(OR)2]2, the zinc has tetrahedral geometry. This monomeric compound Zn[S2P(OR)2]2 exists in equilibrium with dimers, oligomers, and polymers [Zn[S2P(OR)2]2]n (n > 1). For example, zinc diethyldithiophosphate, Zn[S2P(OEt)2]2, crystallizes as a polymeric solid consisting of linear chains. Reaction of Zn[S2P(OR)2]2 with additional zinc oxide gives rise to the oxygen-centered cluster, Zn4O[S2P(OR)2]6, which adopts the structure seen for basic zinc acetate.
See also
Transition metal dithiophosphate complexes
References
dithiophosphate
Phosphorothioates
Lubricants
Corrosion inhibitors | Zinc dithiophosphate | [
"Chemistry"
] | 1,096 | [
"Corrosion inhibitors",
"Phosphorothioates",
"Functional groups",
"Process chemicals"
] |
3,157,156 | https://en.wikipedia.org/wiki/Phosphorus%20sulfides | Phosphorus sulfides comprise a family of inorganic compounds containing only phosphorus and sulfur. These compounds have the formula P4Sn with n ≤ 10. Two are of commercial significance, phosphorus pentasulfide (P4S10), which is made on a kiloton scale for the production of other organosulfur compounds, and phosphorus sesquisulfide (P4S3), used in the production of "strike anywhere matches".
There are several other phosphorus sulfides in addition to P4S10 and P4S3. Six of these phosphorus sulfides exist as isomers. These isomers are distinguished by Greek letter prefixes. The prefix is based on the order of the discovery of the isomers, not their structure. All known molecular phosphorus sulfides contain a tetrahedral array of four phosphorus atoms. P4S2 is also known but is unstable above −30 °C.
Phosphorus monosulfide monomer, PS, is highly unstable and only exists at elevated temperatures. Its bond, worth about 55 kcal/mol, is about 2.4 angstroms long.
Preparation
The main method for preparing these compounds is thermolysis of mixtures of phosphorus and sulfur. The product distributions can be analyzed by 31P-NMR spectroscopy. More selective syntheses entail:
desulfurization, e.g. using triphenylphosphine and, complementarily,
sulfidation using triphenylarsine sulfide.
Phosphorus sesquisulfide is prepared by treating red phosphorus with sulfur above 450 K, followed by careful recrystallization with carbon disulfide and benzene. An alternative method involves the controlled fusion of white phosphorus with sulfur in an inert, non-flammable solvent.
The α- and β- forms of can be prepared by treating the corresponding isomers of with :
can be synthesized by the reaction of stoichiometric amounts of phosphorus, sulfur, and iodine.
can be prepared by treating stoichiometric amounts of with sulfur in carbon disulfide solution, in the presence of light and a catalytic amount of iodine. The respective product distribution is then analyzed by using 31P-NMR spectroscopy.
In particular, α- can be easily made by the photochemical reaction of with red phosphorus. Note that is unstable when heated, tending to disproportionate to and before reaching its melting point.
can be made by abstracting a sulfur atom from using triphenylphosphine:
Treating α- with in also yields α-. The two new polymorphs δ- and ε- can be made by treating α- with in .
is most conveniently made by direct union of the corresponding elements, and is one of the most easily purified binary phosphorus sulfides.
β- can be made by treating α- with in , which yields a mixture between α- and β-.
can be made by two methods. One method involves the heating of in excess sulfur. Another
method involves the heating of and in 1:2 mole ratio, where is reversibly formed:
is one of the most stable phosphorus sulfides. It is most easily made by heating white phosphorus with sulfur above 570 K in an evacuated tube.
See also
Diphosphorus trisulfide (P2S3)
References
Inorganic phosphorus compounds
Sulfides | Phosphorus sulfides | [
"Chemistry"
] | 662 | [
"Inorganic phosphorus compounds",
"Inorganic compounds"
] |
3,157,369 | https://en.wikipedia.org/wiki/Phosphorus%20pentasulfide | Phosphorus pentasulfide is the inorganic compound with the formula P2S5 (empirical) or P4S10 (molecular). This yellow solid is one of the two phosphorus sulfides of commercial value. Samples often appear greenish-gray due to impurities. It is soluble in carbon disulfide but reacts with many other solvents such as alcohols, DMSO, and DMF.
Structure and synthesis
Its tetrahedral molecular structure is similar to that of adamantane and almost identical to the structure of phosphorus pentoxide.
Phosphorus pentasulfide is obtained by the reaction of liquid white phosphorus (P4) with sulfur above 300 °C. The first synthesis of P4S10 by Berzelius in 1843 was by this method. Alternatively, P4S10 can be formed by reacting elemental sulfur or pyrite, FeS2, with ferrophosphorus, a crude form of Fe2P (a byproduct of white phosphorus (P4) production from phosphate rock).
Applications
Approximately 150,000 tons of P4S10 are produced annually. The compound is mainly converted to other derivatives for use as lubrication additives such as zinc dithiophosphates.
It is widely used in the production of sodium dithiophosphate for applications as a flotation agent in the concentration of molybdenite minerals. It is also used in the production of pesticides such as Parathion and Malathion. It is also a component of some amorphous solid electrolytes (e.g. Li2S–P2S5) for some types of lithium batteries.
Phosphorus pentasulfide is a dual-use material, for the production of early insecticides such as Amiton and also for the manufacture of the related VX nerve agents.
Reactivity
Due to hydrolysis by atmospheric moisture, P4S10 evolves hydrogen sulfide (H2S) and thus is associated with a rotten-egg odour. Aside from H2S, hydrolysis of P4S10 eventually gives phosphoric acid:
P4S10 + 16 H2O → 4 H3PO4 + 10 H2S
Other mild nucleophiles react with P4S10, including alcohols and amines. Reaction with ammonium chloride gives the polymeric (SPN)∞. Aromatic compounds such as anisole, ferrocene and 1-methoxynaphthalene react to form 1,3,2,4-dithiadiphosphetane 2,4-disulfides such as Lawesson's reagent.
P4S10 is used as a thionation reagent. Reactions of this type require refluxing solvents such as benzene, dioxane, or acetonitrile, with P4S10 dissociating into P2S5. Some ketones, esters, and imides are converted to the corresponding thiocarbonyls. Amides give thioamides. With 1,4-diketones the reagent forms thiophenes. It is also used to deoxygenate sulfoxides. The use of P4S10 has been displaced by the aforementioned Lawesson's reagent.
reacts with pyridine to form the complex .
References
Inorganic phosphorus compounds
Sulfides
Adamantane-like molecules | Phosphorus pentasulfide | [
"Chemistry"
] | 620 | [
"Inorganic phosphorus compounds",
"Inorganic compounds"
] |
3,157,586 | https://en.wikipedia.org/wiki/Dimethyl%20methylphosphonate | Dimethyl methylphosphonate is an organophosphorus compound with the chemical formula CH3PO(OCH3)2. It is a colourless liquid, which is primarily used as a flame retardant.
Synthesis
Dimethyl methylphosphonate can be prepared from trimethyl phosphite and a halomethane (e.g. iodomethane) via the Michaelis–Arbuzov reaction.
Dimethyl methylphosphonate is a schedule 2 chemical as it may be used in the production of chemical weapons. It will react with thionyl chloride to produce methylphosphonic acid dichloride, which is used in the production of sarin and soman nerve agents. Various amines can be used to catalyse this process. It can be used as a sarin-simulant for the calibration of organophosphorus detectors.
Uses
The primary commercial use of dimethyl methylphosphonate is as a flame retardant. Other commercial uses are a preignition additive for gasoline, anti-foaming agent, plasticizer, stabilizer, textile conditioner, antistatic agent, and an additive for solvents and low-temperature hydraulic fluids. It can be used as a catalyst and a reagent in organic synthesis, as it can generate a highly reactive ylide. The yearly production in the United States varies between .
About 190 liters of dimethyl methylphosphonate, together with other chemicals, were released during the crash of El Al Flight 1862 at Bijlmer in Amsterdam in 1992.
References
Methyl esters
Phosphonate esters
Flame retardants
Plasticizers
Chemical weapons
Antistatic agents
Fuel additives
Nerve agent precursors | Dimethyl methylphosphonate | [
"Chemistry",
"Biology"
] | 364 | [
"Chemical accident",
"Chemical weapons",
"Antistatic agents",
"Biochemistry",
"Process chemicals"
] |
3,157,738 | https://en.wikipedia.org/wiki/Naval%20stores | Naval stores refers to the industry that produces rosin, turpentine, tall oil, pine oil, and other oleoresin collected from conifers. The term was originally applied to the compounds used in building and maintaining wooden sailing ships. Presently, pine compounds produced by the naval stores industry are used to manufacture soap, paint, varnish, shoe polish, lubricants, linoleum, and roofing materials.
History
Colonial North America
The Royal Navy relied heavily upon naval stores from American colonies, and naval stores were an essential part of the colonial economy. Masts came from the large white pines of New England, while pitch came from the longleaf pine forests of Carolina, which also produced sawn lumber, shake shingles, and staves. In the early 1700s the British Crown was involved in the transplantation of Palatine refuges in Great Britain to the New York Province to produce naval stores.
Naval stores played a role during the American Revolutionary War. As Britain attempted to cripple French and Spanish capacities through blockade, they declared naval stores to be contraband. At the time Russia was Europe's chief producer of naval stores, leading to the seizure of 'neutral' Russian vessels. In 1780 Catherine the Great announced that her navy would be used against anyone interfering with neutral trade, and she gathered together European neutrals in the League of Armed Neutrality. These actions were beneficial for the struggling colonists as the British were forced to act with greater caution.
Zallen tells in detail how turpentine (and rosin) were produced as naval stores. Pine trees, especially in North Carolina, were tapped for sap, which was doubly distilled to make turpentine and rosin (also called resin) – hence the nickname "Tar Heels". The trees were scored above a cavity called a "box" that collected the sap. Large numbers of slaves were used to score the trees and to collect and process the sap. Zallen describes this as industrial slavery – different from the more common vision of slaves in agriculture. By the 1840s camphine, a blend of turpentine and grain alcohol, became the dominant lamp fuel in the US (Zallen prefers the spelling "camphene").
The pine trees of North Carolina were well suited to camphine production. The business also created additional demand for slave labor as production expanded, and backwoods land became more productive. Slaves were often leased in winter when agriculture was slower, and the value of many was protected by life insurance. Wilmington, NC became a center of the camphine industry. In cities, gaslighting was also available, but it was used mainly by the upper classes; camphine was the fuel of the average family.
Zallen reports that after Fort Sumter, turpentine producers were cut off from major markets, and emancipation left them without the manpower to collect and process turpentine. The camps were flammable, and many were burned in William Tecumseh Sherman's march from Savannah to Goldsboro, NC. Congress also imposed taxes on alcohol to pay for the Civil War, which made camphine more costly than kerosene. Kerosene, first produced as coal oil, became abundant after the discovery of oil in Pennsylvania.
The major producers of naval stores in the 19th and 20th centuries were the United States and France, where Napoleon encouraged the planting of pines in areas of sand dunes. In the 1920s the United States exported eleven million gallons of spirits of turpentine. By 1927, France exported about 20 percent of the world's resin.
Naval stores also included cordage, masts, pitch, and tar. These materials were used for water- and weather-proofing wooden ships: masts, spars, and cordage needed protecting, and hulls made of wood required a flexible material, insoluble in water, to seal the spaces between planks. Pine pitch was often mixed with fibers like hemp to caulk spaces which might otherwise leak.
Separation techniques
Today naval stores are recovered from the tall oil byproduct stream of Kraft process pulping of pines in the US, though tapping of living pines remains common in other parts of the world. Turpentine and pine oil may be recovered by steam distillation of oleoresin or by destructive distillation of pine wood. Solvent extraction of shredded stumps and roots has become more common with the availability of inexpensive naphtha. Rosin remains in the still after turpentine and water have boiled off.
See also
Shipbuilding
Naval stores industry
Bark hack
Footnotes
External links
https://web.archive.org/web/20101009032732/http://www.srs.fs.usda.gov/organization/history/naval_stores.htm
http://www.maritime.org/conf/conf-kaye-tar.htm
http://www.fao.org/documents/show_cdr.asp?url_file=/docrep/V6460E/v6460e04.htm
http://www.hchsonline.org/places/turpentine.html
https://web.archive.org/web/20070928044420/http://www.unctv.org/exploringNC/episode308.html
Resins
Shipbuilding
Timber industry
History of forestry
Non-timber forest products | Naval stores | [
"Physics",
"Engineering"
] | 1,107 | [
"Resins",
"Unsolved problems in physics",
"Shipbuilding",
"Marine engineering",
"Amorphous solids"
] |
3,157,929 | https://en.wikipedia.org/wiki/Original%20design%20manufacturer | An original design manufacturer (ODM) is a company that both designs and manufactures a product, in contrast to an original equipment manufacturer (OEM), which only manufactures a product.
HMD Global, the maker of post-2016 Nokia-branded phones, is an example of a firm which relies on original design manufacturers. In late 2019, it switched from relying on only one original design manufacturer to multiple original design manufacturers.
Examples
Foxconn is one example of an ODM, which helps upstream manufacturers such as Dell, Lenovo to manufacture laptops. It has also manufactured products for Apple, Nintendo, Sony, Microsoft, and many other companies.
ZOTAC, a Hong Kong graphics card manufacturer that has its own factories, designs and manufactures some special Nvidia graphics cards, and then rebrands and provides them to companies like Lenovo.
Intellectual property
Original design manufacturers create their own intellectual property and are very proactive in patenting it. Most of their patents are filed in the US, China, and Taiwan.
See also
Electronics manufacturing services
Original equipment manufacturer
Contract manufacturer
References
Brands
Design companies | Original design manufacturer | [
"Engineering"
] | 207 | [
"Design",
"Engineering companies",
"Design companies"
] |
14,800,212 | https://en.wikipedia.org/wiki/Cyclin-dependent%20kinase%203 | Cell division protein kinase 3 is an enzyme that in humans is encoded by the CDK3 gene.
Function
CDK3 complements cdc28 mutants of Saccharomyces cerevisiae suggesting that it may be involved in cell cycle control. CDK3 can phosphorylate histone H1 and interacts with an unknown type of cyclin.
References
Further reading
External links
Cell cycle
Proteins
EC 2.7.11 | Cyclin-dependent kinase 3 | [
"Chemistry",
"Biology"
] | 91 | [
"Biomolecules by chemical classification",
"Cellular processes",
"Molecular biology",
"Proteins",
"Cell cycle"
] |
14,800,736 | https://en.wikipedia.org/wiki/PHB2 | Prohibitin-2 is a protein that in humans is encoded by the PHB2 gene.
Interactions
PHB2 has been shown to interact with PTMA.
References
Further reading | PHB2 | [
"Chemistry"
] | 37 | [
"Biochemistry stubs",
"Protein stubs"
] |
14,810,768 | https://en.wikipedia.org/wiki/Ferromagnetic%20superconductor | Ferromagnetic superconductors are materials that display intrinsic coexistence of ferromagnetism and superconductivity. They include UGe2, URhGe, and UCoGe. Evidence of ferromagnetic superconductivity was also reported for ZrZn2 in 2001, but later reports question these findings. These materials exhibit superconductivity in proximity to a magnetic quantum critical point.
The nature of the superconducting state in ferromagnetic superconductors is currently under debate. Early investigations studied the coexistence of conventional s-wave superconductivity with itinerant ferromagnetism. However, the scenario of spin-triplet pairing soon gained the upper hand. A mean-field model for coexistence of spin-triplet pairing and ferromagnetism was developed in 2005.
These models consider uniform coexistence of ferromagnetism and superconductivity, i.e. the same electrons which are both ferromagnetic and superconducting at the same time. Another scenario where there is an interplay between magnetic and superconducting order in the same material is superconductors with spiral or helical magnetic order. Examples of such include ErRh4B4 and HoMo6S8. In these cases, the superconducting and magnetic order parameters entwine each other in a spatially modulated pattern, which allows for their mutual coexistence, although it is no longer uniform. Even spin-singlet pairing may coexist with ferromagnetism in this manner.
Theory
In conventional superconductors, the electrons constituting the Cooper pair have opposite spin, forming so-called spin-singlet pairs. However, other types of pairings are also permitted by the governing Pauli principle. In the presence of a magnetic field, spins tend to align themselves with the field, which means that a magnetic field is detrimental for the existence of spin-singlet Cooper pairs. A viable mean-field Hamiltonian for modelling itinerant ferromagnetism coexisting with a non-unitary spin-triplet state may after diagonalization be written as
See also
Bean's critical state model
Ferromagnetic superconducting 2D materials
Reentrant superconductivity
References
Further reading
Ferromagnetic superconductors – List of Authority Articles on arxiv.org
Superconductivity
Ferromagnetism | Ferromagnetic superconductor | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 527 | [
"Physical quantities",
"Superconductivity",
"Materials science",
"Magnetic ordering",
"Ferromagnetism",
"Condensed matter physics",
"Electrical resistance and conductance"
] |
7,464,700 | https://en.wikipedia.org/wiki/Theorem%20of%20corresponding%20states | According to van der Waals, the theorem of corresponding states (or principle/law of corresponding states) indicates that all fluids, when compared at the same reduced temperature and reduced pressure, have approximately the same compressibility factor and all deviate from ideal gas behavior to about the same degree.
Material constants that vary for each type of material are eliminated by recasting the constitutive equation in a reduced form. The reduced variables are defined in terms of critical variables.
The principle originated with the work of Johannes Diderik van der Waals in about 1873, when he used the critical temperature and critical pressure to derive a universal property of all fluids that follow the van der Waals equation of state. It predicts a critical compressibility factor of 3/8 = 0.375, a value that is found to be an overestimate when compared to real gases.
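As a worked illustration (standard textbook material rather than content specific to this article), dividing the van der Waals equation through by its critical values gives a reduced form containing no material constants:

\[
\left(p_r + \frac{3}{v_r^{2}}\right)\left(3 v_r - 1\right) = 8\,T_r,
\qquad
p_r = \frac{p}{p_c},\quad v_r = \frac{v}{v_c},\quad T_r = \frac{T}{T_c}.
\]

Any two van der Waals fluids at the same reduced temperature and reduced pressure therefore occupy the same reduced volume, and the equation fixes the critical compressibility factor at Z_c = p_c V_c / (R T_c) = 3/8, the overestimate just mentioned (V_c is the molar critical volume).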
Edward A. Guggenheim used the phrase "Principle of Corresponding States" in an oft-cited paper to describe the phenomenon where different systems have very similar behaviors when near a critical point.
There are many examples of non-ideal gas models which satisfy this theorem, such as the van der Waals model, the Dieterici model, and so on, that can be found on the page on real gases.
Compressibility factor at the critical point
The compressibility factor at the critical point is defined as Z_c = p_c v_c M / (R T_c) in the conventions of the table below (equivalently p_c V_c / (R T_c), with V_c the molar critical volume), where the subscript c indicates physical quantities measured at the critical point. Many equations of state predict it to be a constant independent of substance; an illustrative calculation follows the list of conventions below.
The table below for a selection of gases uses the following conventions:
: critical temperature [K]
: critical pressure [Pa]
: critical specific volume [m3⋅kg−1]
: gas constant (8.314 J⋅K−1⋅mol−1)
: Molar mass [kg⋅mol−1]
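The data table itself is not reproduced here, so the short Python sketch below only illustrates how Z_c would be evaluated from the conventions above; the critical constants for nitrogen and water are approximate literature values inserted purely for illustration.

```python
R = 8.314  # gas constant, J/(K*mol)

# Approximate critical constants (illustrative values):
# name: (Tc [K], pc [Pa], vc [m^3/kg] (specific volume), M [kg/mol])
gases = {
    "nitrogen": (126.2, 3.39e6, 3.19e-3, 0.028),
    "water":    (647.1, 22.06e6, 3.11e-3, 0.018),
}

for name, (Tc, pc, vc, M) in gases.items():
    Zc = pc * vc * M / (R * Tc)  # critical compressibility factor
    print(f"{name}: Zc ~= {Zc:.3f}")

# Both values come out near 0.23-0.29, well below the van der Waals
# prediction of 3/8 = 0.375, illustrating the overestimate noted above.
```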
See also
Van der Waals equation
Equation of state
Compressibility factors
Johannes Diderik van der Waals equation
Noro-Frenkel law of corresponding states
References
External links
Properties of Natural Gases. Includes a chart of compressibility factors versus reduced pressure and reduced temperature (on last page of the PDF document)
Theorem of corresponding states on SklogWiki.
Laws of thermodynamics
Engineering thermodynamics
Continuum mechanics
Johannes Diderik van der Waals | Theorem of corresponding states | [
"Physics",
"Chemistry",
"Engineering"
] | 455 | [
"Thermodynamics stubs",
"Continuum mechanics",
"Engineering thermodynamics",
"Classical mechanics",
"Thermodynamics",
"Mechanical engineering",
"Physical chemistry stubs",
"Laws of thermodynamics"
] |
7,465,720 | https://en.wikipedia.org/wiki/Laser%20peening | Laser peening (LP), or laser shock peening (LSP), is a surface engineering process used to impart beneficial residual stresses in materials. The deep, high-magnitude compressive residual stresses induced by laser peening increase the resistance of materials to surface-related failures, such as fatigue, fretting fatigue, and stress corrosion cracking. Laser shock peening can also be used to strengthen thin sections, harden surfaces, shape or straighten parts (known as laser peen forming), break up hard materials, compact powdered metals and for other applications where high-pressure, short duration shock waves offer desirable processing results.
History
Discovery and development (1960s)
Initial scientific discoveries towards modern-day laser peening began in the early 1960s as pulsed-laser technology began to proliferate around the world. In an early investigation of the laser interaction with materials by Gurgen Askaryan and E.M. Moroz, they documented pressure measurements on a targeted surface using a pulsed laser. The pressures observed were much larger than could be created by the force of the laser beam alone. Research into the phenomenon indicated the high-pressure resulted from a momentum impulse generated by material vaporization at the target surface when rapidly heated by the laser pulse. Throughout the 1960s, a number of investigators further defined and modeled the laser beam pulse interaction with materials and the subsequent generation of stress waves. These, and other studies, observed that stress waves in the material were generated from the rapidly expanding plasma created when the pulsed laser beam struck the target. Subsequently, this led to interest in achieving higher pressures to increase the stress wave intensity. To generate higher pressures it was necessary to increase the power density and focus the laser beam (concentrate the energy), requiring that the laser beam-material interaction occur in a vacuum chamber to avoid dielectric breakdown within the beam in air. These constraints limited study of high-intensity pulsed laser–material interactions to a select group of researchers with high-energy pulsed lasers.
In the late 1960s a major breakthrough occurred when N.C. Anderholm discovered that much higher plasma pressures could be achieved by confining the expanding plasma against the target surface. Anderholm confined the plasma by placing a quartz overlay, transparent to the laser beam, firmly against the target surface. With the overlay in place, the laser beam passed through the quartz before interacting with the target surface. The rapidly expanding plasma was now confined within the interface between the quartz overlay and the target surface. This method of confining the plasma greatly increased the resulting pressure, generating pressure peaks of , over an order of magnitude greater than unconfined plasma pressure measurements. The significance of Anderholm's discovery to laser peening was the demonstration that pulsed laser–material interactions to develop high-pressure stress waves could be performed in air, not constrained to a vacuum chamber.
Laser shocking as a metallurgical process (1970s)
The beginning of the 1970s saw the first investigations of the effects of pulsed laser irradiation within the target material. L. I. Mirkin observed twinning in ferrite grains in steel under the crater created by laser irradiation in vacuum. S. A. Metz and F. A. Smidt, Jr. irradiated nickel and vanadium foils in air with a pulsed laser at a low power density and observed voids and vacancy loops after annealing the foils, suggesting that a high concentration of vacancies was created by the stress wave. These vacancies subsequently aggregated during post-irradiation annealing into the observed voids in nickel and dislocation loops in vanadium.
In 1971, researchers at Battelle Memorial Institute in Columbus, Ohio began investigating whether the laser shocking process could improve metal mechanical properties using a high-energy pulsed laser. In 1972, the first documentation of the beneficial effects of laser shocking metals was published, reporting the strengthening of aluminum tensile specimens using a quartz overlay to confine the plasma. Subsequently, the first patent on laser shock peening was granted to Phillip Mallozzi and Barry Fairand in 1974. Research into the effects and possible applications of laser peening continued throughout the 1970s and early 1980s by Allan Clauer, Barry Fairand, and coworkers, supported by funding from the National Science Foundation, NASA, Army Research Office, U.S. Air Force, and internally by Battelle. This research explored the in-material effects in more depth and demonstrated the creation of deep compressive stresses and the accompanying increase in fatigue and fretting fatigue life achieved by laser peening.
Practical laser peening (1980s)
Laser shocking during the initial development stages was severely limited by the laser technology of the time period. The pulsed laser used by Battelle encompassed one large room and required several minutes of recovery time between laser pulses. To become a viable, economical, and practical industrial process, the laser technology had to mature into equipment with a much smaller footprint and be capable of increased laser pulse frequencies. In the early 1980s, Wagner Castings Company located in Decatur, Illinois became interested in laser peening as a process that could potentially increase the fatigue strength of cast iron to compete with steel, but at a lower cost. Laser peening of various cast irons showed modest fatigue life improvement, and these results along with others, convinced them to fund the design and construction of a pre-prototype pulsed laser in 1986 to demonstrate the industrial viability of the process. This laser was completed and demonstrated in 1987. Although the technology had been under investigation and development for about 15 years, few people in industry had heard of it. So, with the completion of the demonstration laser, a major marketing effort was launched by Wagner Castings and Battelle engineers to introduce laser peening to potential industrial markets.
Also in the mid 1980s, Remy Fabbro of the Ecole Polytechnique was initiating a laser shock peening program in Paris. He and Jean Fournier of the Peugeot Company visited Battelle in 1986 for an extended discussion of laser shock peening with Allan Clauer. The programs initiated by Fabbro and carried forward in the 1990s and early 2000s by Patrice Peyre, Laurent Berthe, and co-workers have made major contributions, both theoretical and experimental, to the understanding and implementation of laser peening. In 1998, they measured, using VISAR (Velocity Interferometer System for Any Reflector), the pressure loadings in the water-confinement regime as a function of wavelength. They demonstrated the detrimental effect of dielectric breakdown in water, which limits the maximum pressure attainable at the surface of the material.
Creation of an industry (1990s)
In the early 1990s, the market was becoming more familiar with the potential of laser peening to increase fatigue life. In 1991, the U.S. Air Force introduced Battelle and Wagner engineers to GE Aviation to discuss the potential application of laser peening to address a foreign object damage (FOD) problem with fan blades in the General Electric F101 engine powering the Rockwell B-1B Lancer Bomber. The resulting tests showed that fan blades that were severely notched after laser peening had the same fatigue life as new, undamaged blades. After further development, GE Aviation licensed the laser shock peening technology from Battelle, and in 1995, GE Aviation and the U.S. Air Force made the decision to move forward with production development of the technology. GE Aviation began production laser peening of the F101 fan blades in 1998.
The demand for industrial laser systems required for GE Aviation to go into production attracted several of the laser shock peening team at Battelle to start LSP Technologies, Inc. in 1995 as the first commercial supplier of laser peening equipment. Led by founder Jeff Dulaney, LSP Technologies designed and built the laser systems for GE Aviation to perform production laser peening of the F101 fan blades. Through the late 1990s and early 2000s, the U.S. Air Force continued to work with LSP Technologies to mature the laser shock peening production capabilities and implement production manufacturing cells.
In the mid 1990s, independent of the laser peening developments ongoing in the United States and France, Yuji Sano of the Toshiba Corporation in Japan initiated the development of a laser peening system capable of laser peening welds in nuclear plant pressure vessels to mitigate stress corrosion cracking in these areas. The system used a low-energy pulsed laser operating at a higher pulse frequency than the higher powered lasers. The laser beam was introduced into the pressure vessels through articulated tubes. Because the pressure vessels were filled with water, the process did not require a water overlay over the irradiated surface. However, the beam had to travel some distance through the water, necessitating using a shorter wavelength beam, 532 nm, to minimize dielectric breakdown of the beam in the water, instead of the 1054 nm beam used in the United States and France. Also, it was impractical to consider using an opaque overlay. This process is now known as Laser Peening without Coating (LPwC). It began to be applied to Japanese boiling water and pressurized water reactors in 1999.
Also in the 1990s a significant laser peening research group was formed at the Madrid Polytechnic University by José Ocaña. Their work includes both experimental and theoretical studies using low-energy pulsed lasers both without and with an opaque overlay.
Supplier foundation and industry growth (1990s – 2000s)
With the major breakthrough of commercial application of laser peening on the F101 engine to resolve a major operational problem, laser peening attracted attention around the globe. Researchers in many countries and industries undertook investigations to extend understanding of the laser shock peening process and material property effects. As a result, a large volume of research papers and patents were generated in the United States, France, and Japan. In addition to the work being done in these countries and Spain, laser peening programs were initiated in China, Britain, Germany and several other countries. The continuing growth of the technology and its applications led to the appearance of several commercial laser shock peening providers in the early 2000s.
GE Aviation and LSP Technologies were the first companies performing laser peening commercially, having licensed the technology from Battelle. GE Aviation performed laser peening for its aerospace engine components and LSP Technologies marketed laser shock peening services and equipment to a broader industrial base. In the late 1990s, Metal Improvement Company (MIC, now part of Curtiss-Wright Surface Technologies) partnered with Lawrence Livermore National Laboratory (LLNL) to develop its own laser peening capabilities. In Japan, Toshiba Corporation expanded the commercial applications of its LPwC system to pressurized water reactors, and in 2002 implemented fiber optic beam delivery to the underwater laser peening head. Toshiba also redesigned the laser and beam delivery into a compact system, enabling the entire system to be inserted into the pressure vessel. This system was ready for commercial use in 2013. MIC developed and adapted laser shock peening for forming the wing shapes on the Boeing 747-8.
The growth of industrial suppliers and commercial proof of laser peening technology lead to many companies adopting laser peening technology to solve and prevent problems. Some of the companies who have adopted laser peening include: GE, Rolls-Royce, Siemens, Boeing, Pratt & Whitney, and others.
In the 1990s and continuing through present day, laser peening developments have targeted decreasing costs and increasing throughput to reach markets outside of high-cost low-volume components. High costs in the laser peening process were previously attributable to laser system complexity, processing rates, manual labor and overlay applications. Numerous ongoing advancements addressing these challenges have reduced laser peening costs dramatically: laser peening systems are designed to handle robust operations; pulse rates of laser systems are increasing; routine labor operations are increasingly automated; application of overlays are automated in many cases. These reduced operational costs of laser peening have made it a valuable tool for solving an extended range of fatigue and related applications.
Process description
Laser peening uses the dynamic mechanical effects of a shock wave imparted by a laser to modify the surface of a target material. It does not utilize thermal effects. Fundamentally, laser peening can be accomplished with only two components: a transparent overlay and a high-energy pulsed laser system. The transparent overlay confines the plasma formed at the target surface by the laser beam. It is also often beneficial to use a thin overlay, opaque to the laser beam, between the water overlay and the target surface. This opaque overlay can provide any or all of three benefits: protect the target surface from potentially detrimental thermal effects from the laser beam, provide a consistent surface for the laser beam-material interaction and, if the overlay impedance is less than that of the target surface, increase the magnitude of the shock wave entering the target. However, there are situations where an opaque overlay is not used: in the Toshiba process (LPwC), or where the tradeoff between decreased cost and a possibly somewhat lowered surface residual stress allows superficial grinding or honing after laser peening to remove the thin thermally affected layer.
The laser peening process originated with high-energy Nd-glass lasers producing pulse energies up to 50 J (more commonly 5 to 40 J) with pulse durations of 8 to 25 ns. Laser spot diameters on target are typically in the range of 2 to 7 mm. The processing sequence begins by applying the opaque overlay on the workpiece or target surface. Commonly used opaque overlay materials are black or aluminum tape, paint or a proprietary liquid, RapidCoater. The tape or paint is generally applied over the entire area to be processed, while the RapidCoater is applied over each laser spot just before triggering the laser pulse. After application of the opaque overlay, the transparent overlay is placed over it. The transparent overlay used in production processing is water; it is cheap, easily applied, readily conforms to most complex surface geometries, and is easily removed. It is applied to the surface just before triggering the laser pulse. Quartz or glass overlays produce much higher pressures than water, but are limited to flat surfaces, must be replaced after each shot and would be difficult to handle in a production setting. Clear tape may be used, but requires labor to apply and is difficult to conform to complex surface features. The transparent overlay allows the laser beam to pass through it without appreciable absorption of the laser energy or dielectric breakdown. When the laser is triggered, the beam passes through the transparent overlay and strikes the opaque overlay, immediately vaporizing a thin layer of the overlay material. This vapor is trapped in the interface between the transparent and opaque overlays. The continued delivery of energy during the laser pulse rapidly heats and ionizes the vapor, converting it into a rapidly expanding plasma. The rising pressure exerted on the opaque overlay surface by the expanding plasma enters the target surface as a high-amplitude stress wave or shock wave. Without a transparent overlay, the unconfined plasma plume moves away from the surface and the peak pressure is considerably lower. If the amplitude of the shock wave is above the Hugoniot Elastic Limit (HEL), i.e., the dynamic yield strength, of the target, the material plastically deforms during passage of the shock wave. The magnitude of the plastic strain decreases with distance from the surface as the peak pressure of the shock wave attenuates, i.e., decreases, and becomes zero when the peak pressure falls below the HEL. After the shock wave passes, the residual plastic strain creates a compressive residual stress gradient below the target surface, highest at or immediately below the surface and decreasing with depth. By varying the laser power density, pulse duration, and number of successive shots on an area, a range of surface compressive stress magnitudes and depths can be achieved. The magnitude of surface stresses are comparable to shot peening, but the depths are much greater, ranging up to 5 mm when using multiple shots on a spot. Generally spot densities of about 10 spots/cm2 to 40 spots/cm2 are applied. The compressive stress depth achieved with the most common processing parameters ranges from deep. The deep compressive stresses are due to the shock wave peak pressure being maintained above the HEL to greater depths than for other peening technologies.
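As a rough back-of-the-envelope illustration of how processing parameters relate to peak pressure, the sketch below uses the confined-plasma pressure scaling commonly attributed to Fabbro and co-workers; the efficiency factor and acoustic impedances are assumed, illustrative values and are not parameters taken from this article.

```python
from math import sqrt

def peak_pressure_gpa(intensity_gw_cm2, z_confinement, z_target, alpha=0.25):
    """Rough peak plasma pressure (GPa) in the water-confined regime.

    Scaling: P ~ 0.01 * sqrt(alpha / (2*alpha + 3)) * sqrt(Z) * sqrt(I0),
    with the reduced impedance 2/Z = 1/Z1 + 1/Z2
    (Z in g cm^-2 s^-1, I0 in GW/cm^2).
    """
    z_reduced = 2.0 / (1.0 / z_confinement + 1.0 / z_target)
    return 0.01 * sqrt(alpha / (2.0 * alpha + 3.0)) * sqrt(z_reduced) * sqrt(intensity_gw_cm2)

# Assumed, illustrative acoustic impedances (g cm^-2 s^-1):
# water ~0.15e6 (confinement layer), aluminium ~1.5e6 (target)
for intensity in (2.0, 4.0, 10.0):  # laser power density, GW/cm^2
    p = peak_pressure_gpa(intensity, z_confinement=0.15e6, z_target=1.5e6)
    print(f"I0 = {intensity:4.1f} GW/cm^2 -> peak pressure ~ {p:.1f} GPa")
```

The few-GPa values this produces are in line with the requirement described above that the shock-wave amplitude exceed the Hugoniot Elastic Limit of the target in order to produce plastic strain.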
There may be instances where it is cost effective not to apply the opaque overlay and laser peen the bare surface of the work piece directly. When laser peening a bare, metallic surface a thin, micrometer-range, layer of surface material is vaporized. The rapid rise in temperature causes surface melting to a depth dependent on pulse energy and duration, and target melting point. On aluminum alloys this depth is nominally 10–20 μm, but on steels and other higher melting point alloys the depths may be just a few micrometers. Due to the short duration of the pulse, the in-depth heating of the surface is limited to a few tens of micrometers due to the rapid quenching effect of the cold substrate. Some superficial surface staining of the work piece may occur, typically from oxidation products. These detrimental effects of bare surface processing, both aesthetic and metallurgical, can be removed after laser peening by light grinding or honing. With an opaque overlay in place, the target surface experiences temperature rises of less than on a nanosecond time scale.
Laser pulses are generally applied sequentially on the target to treat areas larger than the laser spot size. Laser pulse shapes are customizable to circular, elliptical, square, and other profiles to provide the most convenient and efficient processing conditions. The spot size applied depends on a number of factors that include material HEL, laser system characteristics and other processing factors. The area to be laser peened is usually determined by the part geometry, the extent of the fatigue critical area and considerations of moving the compensating tensile stresses out of this area.
The more recently developed laser peening process, the Toshiba LPwC process, varies in significant ways from the process described above. The LPwC process utilizes low-energy high-frequency Nd-YAG lasers producing pulse energies of and pulse durations of , using spot sizes diameter. Because the process originally was intended to operate in large water-filled vessels, the wave frequency was doubled to halve the wavelength to 532 nm. The shorter wavelength decreases the absorption of beam energy while traveling through water to the target. Due to access constraints, no opaque overlay is applied to the target surface. This factor, combined with the small spot size, requires many shots to achieve a significant surface compressive stress and depths of 1 mm. The first layers applied produce a tensile surface stress due to surface melting, although a compressive stress is developed below the melt layer. However, as more layers are added, the increasing subsurface compressive stress "bleeds" back through the melted surface layer to produce the desired surface compressive stress. Depending on material properties and the desired compressive stresses, generally about 18 spots/mm2 to 70 spots/mm2 or greater spot densities are applied, about 100 times the spot densities of the high-pulse-energy process. The effects of the higher spot densities on processing times are compensated for in part by the higher pulse frequency, 60 Hz, of the low-energy lasers. Newer generations of these laser systems are projected to operate at higher frequencies. This low-energy process achieves compressive residual stress magnitudes and depths equivalent to the high-energy process with nominal depths of . However, the smaller spot size will not permit depths deeper than this.
Quality systems for laser peening
The laser peening process using computer control is described in AMS 2546. Like many other surface enhancement technologies, direct measuring of the results of the process on the workpiece during processing is not practical. Therefore, the process parameters of pulse energy and duration, water and opaque overlays are closely monitored during processing. Other quality control systems are also available that rely on pressure measurements such as electromagnetic acoustic transducers (EMAT), Velocity Interferometer System for Any Reflector (VISAR) and PVDF gauges, and plasma radiometers. Almen strips are also used, but they function as a comparison tool and do not provide a definitive measure of laser peening intensity. The resultant residual stresses imparted by the laser peening process are routinely measured by industry using x-ray diffraction techniques for the purposes of process optimization and quality assurance.
Laser peening systems
The initial laser systems used during the development of laser peening were large research lasers providing high-energy pulses at very low pulse frequencies. Since the mid-late 1990s, lasers designed specifically for laser peening featured steadily smaller size and higher pulse frequencies, both of these more desirable for production environments. The laser peening systems include both rod laser systems and a slab laser system. The rod laser systems can be separated roughly into three primary groups, recognizing that there is some overlap between them: (1) high-energy low-repetition rate lasers operating typically at 10–40 J per pulse with 8–25 ns pulse length at nominally 0.5–1 Hz rep rate, nominal spot sizes of 2 to 8 mm; (2) intermediate energy, intermediate repetition rate lasers operating at 3–10 J with 10–20 ns pulse width at 10 Hz rep rate, nominal spot sizes of 1–4 mm; (3) low-energy, high-repetition rate lasers operating at per pulse with ≤10 ns pulse length at 60+ Hz rep rate, spot size. The slab laser system operates in the range of 10–25 J per pulse with 8–25 ns pulse duration at 3–5 Hz rep rate, nominal spot sizes of 2–5 mm. The commercial systems include rod lasers represented by all three groups and the slab laser system.
For each laser peening system the output beam from the laser is directed into a laser peening cell containing the work pieces or parts to be processed. The peening cell contains the parts handling system and provides the safe environment necessary for efficient commercial laser peening. The parts to be processed are usually introduced into the cell in batches. The parts are then picked and placed in the beam path by robots or other customized parts handling systems. Within the work cell, the beam is directed to the surface of the work piece via an optical chain of mirrors and/or lenses. If tape is used, it is applied before the part enters the work cell, whereas water or RapidCoater overlays are applied within the cell individually for each spot. The workpiece, or sometimes the laser beam, is repositioned for each shot as necessary via a robot or other parts handling system. When the selected areas on each part have been processed, the batch is replaced in the work cell by another.
Process effect
The cold work (plastic strain) generated by the shock wave in the workpiece material creates compressive and tensile residual stresses that maintain an equilibrium state of the material. These residual stresses are compressive at the workpiece surface and gradually fade into low tensile stresses below and surrounding the laser peened area. The cold work also work hardens the surface layer. The compressive residual stresses, and to a lesser extent the cold work, from laser peening have been shown to prevent and mitigate high cycle fatigue (HCF), low cycle fatigue (LCF), stress corrosion cracking, fretting fatigue, and, to some degree, wear and corrosion pitting. The process is outstanding at mitigating foreign object damage in turbine blades.
The plastic strain introduced by laser peening is much lower than that introduced by other impact peening technologies. As a result, the residual plastic strain has much greater thermal stability than the more heavily cold worked microstructures. This enables the laser peened compressive stresses to be retained at higher operating temperatures during long exposures than is the case for the other technologies. Among the applications benefiting from this are gas turbine fan and compressor blades and nuclear plant components.
By enhancing material performance, laser peening enables more-efficient designs that reduce weight, extend component lifetimes, and increase performance. In the future, it is anticipated that laser peening will be incorporated into the design of fatigue critical components to achieve longer life, lighter weight, and perhaps a simpler design to manufacture.
Other applications of laser peening technologies
Originally, the use of laser-induced shock waves on metals to achieve property or functional benefits was referred to as laser shock processing, a broader, more inclusive term. As it happened, laser peening was the first commercial aspect of laser shock processing. However, laser-induced shock waves have found uses in other industrial applications outside of surface enhancement technologies.
One application is for metal shaping or forming. By selectively laser shocking areas on the surface of metal sheets or plates, or smaller items such as airfoils, the associated compressive residual stresses cause the material to flex in a controllable manner. In this way a particular shape can be imparted to a component, or a distorted component might be brought back into the desired shape. Thus, this process is capable of bringing manufactured parts back into design tolerance limits and form shaping thin section parts.
Another variation is to use the shock wave for spallation testing of materials. This application is based on the behavior of shockwaves to reflect from the rear free surface of a work piece as a tensile wave. Depending on the material properties and the shock wave characteristics, the reflected tensile wave may be strong enough to form microcracks or voids near the back surface, or actually "blow-off" or spall material from the back surface. This approach has some value for testing ballistic materials.
The use of laser shocks to measure the bond strength of coatings on metals has been developed in France over a period of years under the name LASAT, for Laser Adhesion Test. This application is also based on the reflection of a shock wave from the rear free surface of a work piece as a tensile wave. If the back surface is coated with an adherent coating, the tensile wave can be tailored to fracture the bond upon reflection from the surface. By controlling the characteristics of the shock wave, the bond strength of the coating can be measured, or alternatively, determined in a comparative sense.
Careful tailoring of the shockwave shape and intensity has also enabled the inspection of bonded composite structures via laser shocking. The technology, termed Laser Bond Inspection, initiates a shockwave that reflects off the backside of a bonded structure and returns as a tensile wave. As the tensile wave passes back through the adhesive bond, depending on the strength of the bond and the peak tensile stress of the stress wave, the tensile wave will either pass through the bond or rupture it. By controlling the pressure of the tensile wave, this procedure can reliably and locally test adhesion strength between bonded joints. This technology is most often applied to bonded fiber composite structures but has also been shown to be successful in evaluating bonds between metal and composite materials. Fundamental studies also characterize and quantify the effects of laser-produced shock waves inside these complex materials.
See also
Autofrettage
Corrosion fatigue
Damage tolerance
Fatigue (material)
Foreign object damage
Fretting
High-frequency impact treatment – aftertreatment of weld transitions
Low plasticity burnishing
Peening
Plastic deformation
Residual stress
Shot peening
Stress corrosion cracking
Ultrasonic impact treatment
References
External links
Information on Laser Peening and Other Surface Enhancement Methods
Laser Peening Metallurgical Effects
Collection of Technical Papers, Including Those Listed in References on laser peening, etc.
Peening
Peening
Metalworking
Shot peening
Corrosion | Laser peening | [
"Chemistry",
"Materials_science"
] | 5,660 | [
"Metallurgy",
"Corrosion",
"Electrochemistry",
"Shot peening",
"Strengthening mechanisms of materials",
"Materials degradation"
] |
7,466,120 | https://en.wikipedia.org/wiki/Sleep%20and%20breathing | When we sleep, our breathing changes due to normal biological processes that affect both our respiratory and muscular systems.
Physiology
Sleep Onset
Breathing changes as we transition from wakefulness to sleep. These changes arise from shifts in the biological processes that regulate breathing. When we fall asleep, minute ventilation (the amount of air that we breathe per minute) decreases because metabolism slows.
Non-REM (NREM) Sleep
During NREM sleep, we move through three sleep stages, with each progressively deeper than the last. As our sleep deepens, our minute ventilation continues to decrease, reducing by 13% in the second NREM stage and by 15% in the third. For example, a study of 19 healthy adults revealed that the minute ventilation in NREM sleep was 7.18 liters/minute compared to 7.66 liters/minute when awake.
Ribcage & Abdominal Muscle Contributions
Rib cage contribution to ventilation increases during NREM sleep, mostly by lateral movement, and is detected by an increase in EMG amplitude during breathing. Diaphragm activity is little increased or unchanged and abdominal muscle activity is slightly increased during these sleep stages.
Upper Airway Resistance
Airway resistance increases by about 230% during NREM sleep. Elastic and flow resistive properties of the lung do not change during NREM sleep. The increase in resistance comes primarily from the upper airway in the retro-epiglottic region. Tonic activity of the pharyngeal dilator muscles of the upper airway decreases during the NREM sleep, contributing to the increased resistance, which is reflected in increased esophageal pressure swings during sleep. The other ventilatory muscles compensate for the increased resistance, and so the airflow decreases much less than the increase in resistance.
Arterial Blood Gases
Arterial blood gases change during NREM sleep: pCO2 increases by 3–7 mmHg, pO2 drops by 3–9 mmHg and SaO2 drops by 2% or less. These changes occur despite a reduced metabolic rate, reflected by a 10–20% decrease in O2 consumption, suggesting overall hypoventilation rather than decreased production/metabolism.
Pulmonary Arterial Pressure
Periodic oscillations of the pulmonary arterial pressure occur with respiration. Pulmonary arterial systolic (PAS) and diastolic (PAD) pressures increase by 4–5 mmHg in NREM sleep.
Effects Of Arousals
Induced transient arousals from NREM sleep cause the following:
increased EMG activity of the diaphragm (150%), increased activity of the upper airway dilating muscles (250%), increased airflow and tidal volume (160%), and decreased upper airway resistance.
Steady REM Sleep
Ventilation
Irregular breathing, with sudden changes in both amplitude and frequency at times interrupted by central apneas lasting 10–30 seconds, is noted in Rapid Eye Movement (REM) sleep. (These are physiologic changes and are different from the abnormal breathing patterns noted in sleep disordered breathing.) These breathing irregularities are not random, but correspond to bursts of eye movements. This breathing pattern is not controlled by the chemoreceptors, but is due to the activation of the behavioral respiratory control system by REM sleep processes. Quantitative measures of airflow are quite variable in this sleep stage and have been shown to be increased, decreased or unchanged. Tidal volume has likewise been shown by quantitative measures to be increased, decreased or unchanged in REM sleep. Reported findings on breathing during REM sleep are therefore somewhat discordant.
In a study of 19 healthy adults, the minute ventilation in REM sleep was 6.46 +/- 0.29(SEM) liters/minute compared to 7.66 +/- 0.34 liters/minute when awake.
Ribcage & Abdominal Muscle Contributions
Intercostal muscle activity decreases in REM sleep and the contribution of the rib cage to respiration decreases during REM sleep. This is due to REM-related supraspinal inhibition of alpha motoneuron drive and specific depression of fusimotor function. Diaphragmatic activity correspondingly increases during REM sleep. Although paradoxical thoracoabdominal movements are not observed, the thoracic and abdominal displacements are not exactly in phase. This decrease in intercostal muscle activity is primarily responsible for the hypoventilation that occurs in patients with borderline pulmonary function.
Upper Airway Function
Upper airway resistance is expected to be highest during REM sleep because of atonia of the pharyngeal dilator muscles and partial airway collapse. Many studies have shown this, but not all. Some have shown unchanged airway resistance during REM sleep, others have shown it to increase to NREM levels.
Arterial Blood Gases
Hypoxemia due to hypoventilation is noted in REM sleep, but this is less well studied than in NREM sleep. These changes are equal to or greater than those in NREM sleep.
Pulmonary Arterial Pressure
Pulmonary arterial pressure fluctuates with respiration and rises during REM sleep.
Effect of Arousals
Arousals cause airway resistance and airflow to return to near-awake values; see the effects of arousals in NREM sleep described above.
Sleep and Breathing in High Altitudes
At lower altitudes, the link between breathing and sleep is well established. At higher altitudes, disruptions in sleep are often linked to changes in the respiratory (breathing) rhythm. Changes in altitude cause variations in sleep time (with reported reductions ranging from 0% up to 93%), as shown in a study that examined people at sea level and at Pikes Peak (4,300 meters). These subjects also experienced more frequent arousals and diminished stage 3 and stage 4 sleep. A poorer quality of sleep was indicated, attributable not to less total sleep time but to more frequent awakenings during the night.
Sleep-disordered breathing (abnormal sleep and breathing or sleep-related breathing disorders)
Primary snoring
Snoring is a condition characterized by noisy breathing during sleep. Usually, any medical condition in which the airway is blocked during sleep, like obstructive sleep apnea, may give rise to snoring. Snoring, when not associated with an obstructive phenomenon, is known as primary snoring. Apart from the specific condition of obstructive sleep apnea, other causes of snoring include alcohol intake prior to sleeping, stuffy nose, sinusitis, obesity, long tongue or uvula, large tonsils or adenoids, smaller lower jaw, deviated nasal septum, asthma, smoking and sleeping on one's back. Primary snoring is also known as "simple" or "benign" snoring, and is not associated with sleep apnea.
Upper airway resistance syndrome
Obstructive sleep apnea (including hypopnea) syndrome
Obstructive sleep apnea is apnea resulting either from obstruction of the air passages or from inadequate respiratory muscle activity.
Central sleep apnea syndrome
Sleep apnea (or sleep apnoea in British English; /æpˈniːə/) is a sleep disorder characterized by pauses in breathing or instances of shallow or infrequent breathing during sleep. Each pause in breathing, called an apnea, can last for several seconds to several minutes, and may occur 5 to 30 times or more in an hour.
Complex sleep disordered syndrome
Sleep related hypoventilation syndromes
References
Sleep physiology | Sleep and breathing | [
"Biology"
] | 1,485 | [
"Behavior",
"Sleep physiology",
"Sleep"
] |
7,466,623 | https://en.wikipedia.org/wiki/Seafloor%20massive%20sulfide%20deposits | Seafloor massive sulfide deposits or SMS deposits, are modern equivalents of ancient volcanogenic massive sulfide ore deposits or VMS deposits. The term has been coined by mineral explorers to differentiate the modern deposit from the ancient.
SMS deposits were first recognized during the exploration of the deep oceans and the mid ocean ridge spreading centers in the early 1960s. Deep ocean research submersibles, bathyspheres and remotely operated vehicles have visited and taken samples of black smoker chimneys, and it has long been recognised that such chimneys contain appreciable grades of Cu, Pb, Zn, Ag, Au and other trace metals.
SMS deposits form in the deep ocean around submarine volcanic arcs, where hydrothermal vents exhale sulfide-rich mineralising fluids into the ocean.
SMS deposits are laterally extensive and consist of a central vent mound around the area where the hydrothermal circulation exits, with a wide apron of unconsolidated sulfide silt or ooze which precipitates upon the seafloor.
Beginning about 2008, technologies were being developed for deepsea mining of these deposits.
Minerals
Mineralization in submarine magmatic-hydrothermal systems is a product of the chemical and thermal exchange between the ocean, the lithosphere, and the magmas emplaced within it. Different mineral associations precipitate during the typical stages of mineralization that characterize the life span of such systems.
Minerals present in a hydrothermal system or a fossil volcanogenic massive sulfide deposit are deposited passively or reactively. Mineral associations may vary (1) in different mineralized structures, either syngenetic (namely, passive precipitation in chimneys, mounds and stratiform deposits) or epigenetic (structures that correspond to feeder channels, and replacements of host rocks or pre-existing massive sulfide bodies), or structural zonation, (2) from proximal to distal associations with respect to venting areas within the same stratigraphic horizon, or horizontal zonation, (3) from deep to shallow associations (i.e., stockworks to mounds), or vertical zonation, (4) from early and climactic to late stages of mineralization (dominated by sulfides, and sulfates or oxides, respectively), or temporal zonation, and (5) in various volcano sedimentary contexts, depending essentially on the composition of volcanic rocks and, ultimately, on the tectonomagmatic context. The most common minerals in ore-bearing associations of volcanogenic massive sulfide deposits (non-metamorphosed or oxidized) and their modern analogues are pyrite, pyrrhotite, chalcopyrite, covellite, sphalerite, galena, tetrahedrite-tennantite, marcasite, realgar, orpiment, proustite-pyrargyrite, wurtzite, stannite (sulfides), Mn oxides, cassiterite, magnetite, hematite (oxides), barite, anhydrite (sulfates), calcite, siderite (carbonates) quartz and native gold, and are differently distributed in the various associations schematized above. The most common hydrothermal alteration assemblages are chloritic (including Mg-rich ones) and phyllic alteration (dominated by “sericite”, mostly illite), and also silicification, deep and shallow talcose alteration, and ferruginous (including Fe oxides, carbonates and sulfides) alteration.
Economic importance
Economic extraction of SMS deposits is in the theoretical stage, the greatest complication being the extreme water depths at which these deposits are forming. However, apparent vast areas of the peripheral areas of these black smoker zones contain a sulfide ooze which could, theoretically, be vacuumed up off the seafloor. Nautilus Minerals Inc. (Nautilus) was engaged in commercially exploring the ocean floor for copper, gold, silver and zinc seafloor massive sulphide (SMS) deposits, and mineral extraction from an SMS system. Nautilus' Solwara 1 Project located at 1,600 metres water depth in the Bismarck Sea, Papua New Guinea, was an attempt at the world's first deep-sea mining project, with first production originally expected in 2017. However, the company went bankrupt in 2019 after failing to secure funding for the project.
Known SMS deposits
Deep ocean drilling, seismic bathymetry surveys and mineral exploration deep sea drilling has delineated several areas worldwide with potentially economically viable SMS deposits, including:
Lau Basin
Kermadec Volcanic Arc
Colville Ridge
Bismarck Sea
Okinawa Trough
North Fiji Basin (see d'Entrecasteaux Ridge)
Red Sea
See also
Hydrothermal circulation
Mid ocean ridge
Ore genesis
RISE project
References
External links
The dawn of deep ocean mining, Steven Scott, Feb. 2006
Bertram C., A. Krätschell, K. O'Brien, W. Brückmann, A. Proelss, K. Rehdanz (2011). Metalliferous sediments in the Atlantis II deep -Assessing the geological and economic resource potential and legal constraints. Resources Policy 36(2011), 315–329.
Economic geology
Oceanography
Sedimentary rocks | Seafloor massive sulfide deposits | [
"Physics",
"Environmental_science"
] | 1,090 | [
"Oceanography",
"Hydrology",
"Applied and interdisciplinary physics"
] |
7,466,964 | https://en.wikipedia.org/wiki/Enterprise%20modelling | Enterprise modelling is the abstract representation, description and definition of the structure, processes, information and resources of an identifiable business, government body, or other large organization.
It deals with the process of understanding an organization and improving its performance through creation and analysis of enterprise models. This includes the modelling of the relevant business domain (usually relatively stable), business processes (usually more volatile), and uses of information technology within the business domain and its processes.
Overview
Enterprise modelling is the process of building models of whole or part of an enterprise with process models, data models, resource models and/or new ontologies etc. It is based on knowledge about the enterprise, previous models and/or reference models as well as domain ontologies using model representation languages. An enterprise in general is a unit of economic organization or activity. These activities are required to develop and deliver products and/or services to a customer. An enterprise includes a number of functions and operations such as purchasing, manufacturing, marketing, finance, engineering, and research and development. The enterprise of interest comprises those corporate functions and operations necessary to manufacture current and potential future variants of a product.
The term "enterprise model" is used in industry to represent differing enterprise representations, with no real standardized definition. Due to the complexity of enterprise organizations, a vast number of differing enterprise modelling approaches have been pursued across industry and academia. Enterprise modelling constructs can focus upon manufacturing operations and/or business operations; however, a common thread in enterprise modelling is an inclusion of assessment of information technology. For example, the use of networked computers to trigger and receive replacement orders along a material supply chain is an example of how information technology is used to coordinate manufacturing operations within an enterprise.
The basic idea of enterprise modelling according to Ulrich Frank is "to offer different views on an enterprise, thereby providing a medium to foster dialogues between various stakeholders - both in academia and in practice. For this purpose they include abstractions suitable for strategic planning, organisational (re-) design and software engineering. The views should complement each other and thereby foster a better understanding of complex systems by systematic abstractions. The views should be generic in the sense that they can be applied to any enterprise. At the same time they should offer abstractions that help with designing information systems which are well integrated with a company's long term strategy and its organisation. Hence, enterprise models can be regarded as the conceptual infrastructure that support a high level of integration."
History
Enterprise modelling has its roots in systems modelling and especially information systems modelling. One of the earliest pioneering works in modelling information systems was done by Young and Kent (1958), who argued for "a precise and abstract way of specifying the informational and time characteristics of a data processing problem". They wanted to create "a notation that should enable the analyst to organize the problem around any piece of hardware". Their work was a first effort to create an abstract specification and invariant basis for designing different alternative implementations using different hardware components. A next step in IS modelling was taken by CODASYL, an IT industry consortium formed in 1959, who essentially aimed at the same thing as Young and Kent: the development of "a proper structure for machine independent problem definition language, at the system level of data processing". This led to the development of a specific IS information algebra.
The first methods dealing with enterprise modelling emerged in the 1970s. They were the entity-relationship approach of Peter Chen (1976) and SADT of Douglas T. Ross (1977), the one concentrating on the information view and the other on the function view of business entities. These first methods were followed at the end of the 1970s by numerous methods for software engineering, such as SSADM, Structured Design, Structured Analysis and others. Specific methods for enterprise modelling in the context of Computer Integrated Manufacturing appeared in the early 1980s. They include the IDEF family of methods (ICAM, 1981) and the GRAI method by Guy Doumeingts in 1984, followed by GRAI/GIM by Doumeingts and others in 1992.
This second generation of methods consisted of activity-based methods, which were surpassed on the one hand by the process-centred modelling methods developed in the 1990s, such as Architecture of Integrated Information Systems (ARIS), CIMOSA and Integrated Enterprise Modeling (IEM), and on the other hand by object-oriented methods, such as Object-oriented analysis (OOA) and Object-modelling technique (OMT).
Enterprise modelling basics
Enterprise model
An enterprise model is a representation of the structure, activities, processes, information, resources, people, behavior, goals, and constraints of a business, government, or other enterprises. Thomas Naylor (1970) defined a (simulation) model as "an attempt to describe the interrelationships among a corporation's financial, marketing, and production activities in terms of a set of mathematical and logical relationships which are programmed into the computer." These interrelationships should according to Gershefski (1971) represent in detail all aspects of the firm including "the physical operations of the company, the accounting and financial practices followed, and the response to investment in key areas" Programming the modelled relationships into the computer is not always necessary: enterprise models, under different names, have existed for centuries and were described, for example, by Adam Smith, Walter Bagehot, and many others.
According to Fox and Gruninger (1998) from "a design perspective, an enterprise model should provide the language used to explicitly define an enterprise... From an operations perspective, the enterprise model must be able to represent what is planned, what might happen, and what has happened. It must supply the information and knowledge necessary to support the operations of the enterprise, whether they be performed by hand or machine."
In a two-volume set entitled The Managerial Cybernetics of Organization Stafford Beer introduced a model of the enterprise, the Viable System Model (VSM). Volume 2, The Heart of Enterprise, analyzed the VSM as a recursive organization of five systems: System One (S1) through System Five (S5). Beer's model differs from others in that the VSM is recursive, not hierarchical: "In a recursive organizational structure, any viable system contains, and is contained in, a viable system."
Function modelling
Function modelling in systems engineering is a structured representation of the functions, activities or processes within the modelled system or subject area.
A function model, also called an activity model or process model, is a graphical representation of an enterprise's function within a defined scope. The purposes of the function model are: to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs. A function model is created with a functional modelling perspective. A functional perspective is one of several perspectives possible in process modelling; other possible perspectives are, for example, the behavioural, organisational or informational perspectives.
A functional modelling perspective concentrates on describing the dynamic process. The main concept in this modelling perspective is the process, this could be a function, transformation, activity, action, task etc. A well-known example of a modelling language employing this perspective is data flow diagrams. The perspective uses four symbols to describe a process, these being:
Process: Illustrates transformation from input to output.
Store: Data-collection or some sort of material.
Flow: Movement of data or material in the process.
External Entity: External to the modelled system, but interacts with it.
With these symbols, a process can be represented as a network of such elements; this decomposed representation is a data flow diagram (DFD). In Dynamic Enterprise Modeling, for example, a division is made into the Control model, Function model, Process model and Organizational model.
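As a toy illustration of this decomposition (the node names and flows are made up for the example; this is not a standard notation or library), a DFD can be held in a small data structure of typed nodes connected by labelled flows:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class Node:
    name: str
    kind: str  # "process", "store" or "external entity"

@dataclass(frozen=True)
class Flow:
    label: str
    source: Node
    target: Node

@dataclass
class DataFlowDiagram:
    nodes: List[Node] = field(default_factory=list)
    flows: List[Flow] = field(default_factory=list)

    def add_flow(self, label: str, source: Node, target: Node) -> None:
        # register the endpoints, then record the labelled flow between them
        for node in (source, target):
            if node not in self.nodes:
                self.nodes.append(node)
        self.flows.append(Flow(label, source, target))

# hypothetical example entities and processes, not taken from the text
customer = Node("Customer", "external entity")
handle_order = Node("Handle order", "process")
order_store = Node("Orders", "store")

dfd = DataFlowDiagram()
dfd.add_flow("order request", customer, handle_order)
dfd.add_flow("order record", handle_order, order_store)

for flow in dfd.flows:
    print(f"{flow.source.name} --[{flow.label}]--> {flow.target.name}")
```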
Data modelling
Data modelling is the process of creating a data model by applying formal data model descriptions using data modelling techniques. Data modelling is a technique for defining business requirements for a database. It is sometimes called database modelling because a data model is eventually implemented in a database.
In current practice, data models are typically developed and used as follows. A conceptual data model is developed based on the data requirements for the application that is being developed, perhaps in the context of an activity model. The data model will normally consist of entity types, attributes, relationships, integrity rules, and the definitions of those objects. This is then used as the start point for interface or database design.
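A minimal sketch of the idea (the entity names and attributes are hypothetical, chosen only to illustrate entity types, attributes, a relationship and a simple integrity rule):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Order:          # entity type with two attributes
    order_id: int
    amount: float

@dataclass
class Customer:       # entity type related one-to-many to Order
    customer_id: int
    name: str
    orders: List[Order] = field(default_factory=list)

    def add_order(self, order: Order) -> None:
        # stand-in for an integrity rule on the relationship
        if order.amount < 0:
            raise ValueError("integrity rule: order amount must be non-negative")
        self.orders.append(order)

alice = Customer(1, "Alice")
alice.add_order(Order(10, 99.0))
alice.add_order(Order(11, 25.5))
print(sum(o.amount for o in alice.orders))  # 124.5
```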
Business process modelling
Business process modelling, not to be confused with the wider Business Process Management (BPM) discipline, is the activity of representing processes of an enterprise, so that the current ("as is") process may be analyzed and improved in future ("to be"). Business process modelling is typically performed by business analysts and managers who are seeking to improve process efficiency and quality. The process improvements identified by business process modelling may or may not require Information Technology involvement, although that is a common driver for the need to model a business process, by creating a process master.
Change management programs are typically involved to put the improved business processes into practice. With advances in technology from large platform vendors, the vision of business process modelling models becoming fully executable (and capable of simulations and round-trip engineering) is coming closer to reality every day.
Systems architecture
The RM-ODP reference model identifies enterprise modelling as providing one of the five viewpoints of an open distributed system. Note that such a system need not be a modern-day IT system: a banking clearing house in the 19th century may be used as an example.
Enterprise modelling techniques
There are several techniques for modelling the enterprise such as
Active Knowledge Modeling,
Design & Engineering Methodology for Organizations (DEMO)
Dynamic Enterprise Modeling
Enterprise Modelling Methodology/Open Distributed Processing (EMM/ODP)
Extended Enterprise Modeling Language
Multi-Perspective Enterprise Modelling (MEMO),
Process modelling such as BPMN, CIMOSA, DYA, IDEF3, LOVEM, PERA, etc.
Integrated Enterprise Modeling (IEM), and
Modelling the enterprise with multi-agent systems.
More enterprise modelling techniques are developed into Enterprise Architecture framework such as:
ARIS - ARchitecture of Integrated Information Systems
DoDAF - the US Department of Defense Architecture Framework
RM-ODP - Reference Model of Open Distributed Processing
TOGAF - The Open Group Architecture Framework
Zachman Framework - an architecture framework, based on the work of John Zachman at IBM in the 1980s
Service-oriented modeling framework (SOMF), based on the work of Michael Bell
And metamodelling frameworks such as:
Generalised Enterprise Reference Architecture and Methodology
Enterprise engineering
Enterprise engineering is the discipline concerning the design and the engineering of enterprises, regarding both their business and organization. In theory and practice, two types of enterprise engineering have emerged: a more general type connected to engineering and the management of enterprises, and a more specific type related to software engineering, enterprise modelling and enterprise architecture.
In the field of engineering a more general enterprise engineering emerged, defined as the application of engineering principles to the management of enterprises. It encompasses the application of knowledge, principles, and disciplines related to the analysis, design, implementation and operation of all elements associated with an enterprise. In essence this is an interdisciplinary field which combines systems engineering and strategic management as it seeks to engineer the entire enterprise in terms of the products, processes and business operations. The view is one of continuous improvement and continued adaptation as firms, processes and markets develop along their life cycles. This total systems approach encompasses the traditional areas of research and development, product design, operations and manufacturing as well as information systems and strategic management. This field is related to engineering management, operations management, service management and systems engineering.
In the context of software development a specific field of enterprise engineering has emerged, which deals with the modelling and integration of various organizational and technical parts of business processes. In the context of information systems development it has been the area of activity concerned with the organization of systems analysis, and an extension of the scope of information modelling. It can also be viewed as the extension and generalization of the systems analysis and systems design phases of the software development process. Here enterprise modelling can be part of the early, middle and late information system development life cycle. An explicit representation of the organizational and technical system infrastructure is created in order to understand the orderly transformation of existing work practices. This field is also called Enterprise architecture, or it is defined, together with Enterprise Ontology, as constituting the two major parts of Enterprise architecture.
Related fields
Business reference modelling
Business reference modelling is the development of reference models concentrating on the functional and organizational aspects of the core business of an enterprise, service organization or government agency. In enterprise engineering a business reference model is part of an enterprise architecture framework. This framework defines, in a series of reference models, how to organize the structure and views associated with an Enterprise Architecture.
A reference model in general is a model that embodies the basic goal or idea of something and can then be used as a reference for various purposes. A business reference model is a means to describe the business operations of an organization, independent of the organizational structure that performs them. Other types of business reference model can also depict the relationships between business processes, business functions, and the business reference model of the business area. These reference models can be constructed in layers, and offer a foundation for the analysis of service components, technology, data, and performance.
Economic modelling
Economic modelling is the theoretical representation of economic processes by a set of variables and a set of logical and/or quantitative relationships between them. An economic model is a simplified framework designed to illustrate complex processes, often but not always using mathematical techniques. Frequently, economic models use structural parameters: the underlying parameters in a model or class of models. A model may have various parameters, and those parameters may change to create various properties.
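To make the idea of structural parameters concrete, the sketch below (an illustrative example, not taken from the source; the parameter names a, b, c, d and the numeric values are invented) expresses a linear supply-and-demand model in Python and shows how changing a structural parameter changes the model's properties:

```python
# Minimal illustrative economic model: linear demand Qd = a - b*P and
# linear supply Qs = c + d*P, where a, b, c and d are structural parameters.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Return the (price, quantity) at which demand equals supply."""
    price = (a - c) / (b + d)      # solve a - b*P = c + d*P for P
    quantity = a - b * price       # substitute back into the demand relation
    return price, quantity

print(equilibrium(a=100, b=2, c=10, d=1))   # (30.0, 40.0)
print(equilibrium(a=100, b=4, c=10, d=1))   # steeper demand curve: (18.0, 28.0)
```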
In general terms, economic models have two functions: first as a simplification of and abstraction from observed data, and second as a means of selection of data based on a paradigm of econometric study. The simplification is particularly important for economics given the enormous complexity of economic processes. This complexity can be attributed to the diversity of factors that determine economic activity; these factors include: individual and cooperative decision processes, resource limitations, environmental and geographical constraints, institutional and legal requirements and purely random fluctuations. Economists therefore must make a reasoned choice of which variables and which relationships between these variables are relevant and which ways of analyzing and presenting this information are useful.
Ontology engineering
Ontology engineering or ontology building is a subfield of knowledge engineering that studies the methods and methodologies for building ontologies. In the domain of enterprise architecture, an ontology is an outline or a schema used to structure objects, their attributes and relationships in a consistent manner. As in enterprise modelling, an ontology can be composed of other ontologies. The purpose of ontologies in enterprise modelling is to formalize and establish the shareability, reusability, assimilation and dissemination of information across all organizations and departments within an enterprise. Thus, an ontology enables integration of the various functions and processes which take place in an enterprise.
One common language with well articulated structure and vocabulary would enable the company to be more efficient in its operations. A common ontology will allow for effective communication, understanding and thus coordination among the various divisions of an enterprise. There are various kinds of ontologies used in numerous environments. While the language example given earlier dealt with the area of information systems and design, other ontologies may be defined for processes, methods, activities, etc., within an enterprise.
Using ontologies in enterprise modelling offers several advantages. Ontologies bring clarity, consistency, and structure to a model, and they promote efficient model definition and analysis. Generic enterprise ontologies allow for reusability and automation of components. Because ontologies are schemata or outlines, however, their use does not by itself ensure proper enterprise model definition, analysis, or clarity. Ontologies are limited by how they are defined and implemented, and an ontology may or may not be able to capture all aspects of what is being modelled.
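As a minimal sketch of what such a schema can look like in practice (the class and relation names below are invented for illustration and are not taken from the source), an enterprise ontology can be represented as subject-predicate-object triples that several models then share:

```python
# Tiny illustrative enterprise ontology stored as (subject, predicate, object)
# triples: classes, attributes and relationships are declared once and can be
# queried by any model that shares the ontology.

ontology = [
    ("Department", "is_a",          "OrganisationalUnit"),
    ("Invoice",    "is_a",          "BusinessDocument"),
    ("Invoice",    "has_attribute", "amount"),
    ("Invoice",    "has_attribute", "due_date"),
    ("Department", "issues",        "Invoice"),
    ("Invoice",    "sent_to",       "Customer"),
]

def related(subject: str, predicate: str) -> list[str]:
    """Return every object linked to `subject` by `predicate`."""
    return [o for s, p, o in ontology if s == subject and p == predicate]

print(related("Invoice", "has_attribute"))   # ['amount', 'due_date']
print(related("Department", "issues"))       # ['Invoice']
```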
Systems thinking
The modelling of the enterprise and its environment can facilitate an enhanced understanding of the business domain and processes of the extended enterprise, and especially of the relations—both those that "hold the enterprise together" and those that extend across the boundaries of the enterprise. Since an enterprise is a system, concepts used in systems thinking can be successfully reused in modelling enterprises.
In this way, a shared understanding can quickly be achieved throughout the enterprise of how business functions work and how they depend on other functions in the organization.
See also
Business process modelling
Enterprise architecture
Enterprise Architecture framework
Enterprise integration
Enterprise life cycle
ISO 19439
Enterprise Data Modeling
References
Further reading
August-Wilhelm Scheer (1992). Architecture of Integrated Information Systems: Foundations of Enterprise Modelling. Springer-Verlag.
François Vernadat (1996) Enterprise Modeling and Integration: Principles and Applications, Chapman & Hall, London,
External links
Agile Enterprise Modeling. by S.W. Ambler, 2003-2008.
Enterprise Modeling Anti-patterns. by S.W. Ambler, 2005.
Enterprise Modelling and Information Systems Architectures - An International Journal (EMISA) is a scholarly open access journal with a unique focus on novel and innovative research on Enterprise Models and Information Systems Architectures.
Business terms
Scientific modelling
Systems engineering | Enterprise modelling | [
"Engineering"
] | 3,516 | [
"Systems engineering",
"Enterprise modelling"
] |
11,147,646 | https://en.wikipedia.org/wiki/Purple%20Earth%20hypothesis | The Purple Earth Hypothesis (PEH) is an astrobiological hypothesis, first proposed by molecular biologist Shiladitya DasSarma in 2007, that the earliest photosynthetic life forms of Early Earth were based on the simpler molecule retinal rather than the more complex porphyrin-based chlorophyll, making the surface biosphere appear purplish rather than its current greenish color. It is estimated to have occurred between 3.5 and 2.4 billion years ago during the Archean eon, prior to the Great Oxygenation Event and Huronian glaciation.
Retinal-containing cell membranes exhibit a single light absorption peak centered in the energy-rich green-yellow region of the visible spectrum, but transmit and reflect red and blue light, resulting in a magenta color. Chlorophyll pigments, in contrast, absorb red and blue light, but little or no green light, which results in the characteristic green reflection of plants, green algae, cyanobacteria and other organisms with chlorophyllic organelles. The simplicity of retinal pigments in comparison to the more complex chlorophyll, their association with isoprenoid lipids in the cell membrane, as well as the discovery of archaeal membrane components in ancient sediments on the Early Earth are consistent with an early appearance of life forms with purple membranes prior to the turquoise of the Canfield ocean and later green photosynthetic organisms.
Evidence
The discovery of archaeal membrane components in ancient sediments on the Early Earth supports the PEH.
Modern examples of retinal-based photosynthesis
An example of retinal-based organisms that exist today is the group of photosynthetic microbes collectively called Haloarchaea. Many Haloarchaea contain the retinal-containing protein bacteriorhodopsin in their cell membrane, which carries out photon-driven proton pumping, generating a proton-motive gradient across the membrane and driving ATP synthesis. The process is a form of anoxygenic photosynthesis that does not involve carbon fixation, and the haloarchaeal membrane protein pump constitutes one of the simplest known bioenergetic systems for harvesting light energy.
Evolutionary history
Microorganisms with purple and green photopigments frequently co-exist in stratified colonies known as microbial mats, where they may utilize complementary regions of the solar spectrum. Co-existence of purple and green pigment-containing microorganisms in many environments suggests their co-evolution.
It is possible that the Early Earth's biosphere was initially dominated by retinal-powered archaeal colonies that absorbed all the green light, leaving the eubacteria that "lived in their shadows" to evolve utilizing the residual red and blue light spectrum. However, when porphyrin-based photoautotrophs evolved and started to photosynthesize, including both the primitive purple bacteria using bacteriochlorophyll and the cyanobacteria using chlorophyll, the oxygenic, water-splitting photosynthesis of the cyanobacteria released highly reactive dioxygen as a byproduct, which started to accumulate, first in the ocean and then in the atmosphere. Over the course of a billion years, enough oxygen had been produced that the reducing capacity of chemical compounds on the Earth's surface was depleted, and the once-reducing atmosphere eventually became a permanently oxidizing one with abundant free oxygen molecules — an event known as the Great Oxygenation Event. This coincided with a 300-million-year-long global ice age at the beginning of the Proterozoic known as the Huronian glaciation (which might also have been partly caused by the oxidative depletion of the atmospheric methane — a powerful greenhouse gas — due to the Great Oxygenation) and devastated the anaerobic biota, leaving the niches open for eubacteria that evolved antioxidant capabilities (both the aerobic proteobacteria and the photosynthetic cyanobacteria) to exploit and prosper. This also forced the surviving anaerobes to either live only in anoxic waters and deep-sea oxygen minimum zones, or adopt a symbiotic life among aerobes (whose colonies would sometimes consume enough free oxygen to create pockets of hypoxia where anaerobes could thrive), which might have paved the way for the long-term endosymbiosis between anaerobic archaea and aerobic eubacteria (which evolved into mitochondria) that enabled eukaryotes to evolve.
However, the porphyrin-based nature of chlorophyll had created an evolutionary trap: chlorophyllic organisms could not re-adapt to absorb the energy-rich and now-available green light, and therefore ended up reflecting it and presenting a greenish color. The subsequent success of more advanced chlorophyllic organisms (particularly green algae and early plants) in terrestrial colonization created an overall green biosphere all over Earth.
Implications for astrobiology
Astrobiologists have suggested that retinal pigments may serve as remote biosignatures in exoplanet research. The Purple Earth hypothesis has great implications for the search for extraterrestrial life. Historically, scientists sought out planets reflecting light in the green-yellow range as possible hosts to photosynthetic organisms, due to the implied presence of chlorophyll. The hypothesis suggests that search methods should be expanded to planets reflecting blue and red light, since evolution of retinal-based photosynthesis is also probable, or possibly even more likely than the evolution of chlorophyllic systems.
See also
Microbial rhodopsin
Bacteriorhodopsin — A proton pump used by Haloarchaea to harvest light energy.
Archaerhodopsin — A family of retinal-containing photoreceptor proteins found in Halobacterium and Halorubrum
Boring Billion — a later phase during the Proterozoic when the seas may have been turquoise.
References
External links
Colorful Worlds: Plants on Other Planets Might Not Be Green
PBS Eons: When the Earth was purple
CNN Colorscope-When life on Earth began, it was purple
Astrobiology
Earth
Earth hypothesis | Purple Earth hypothesis | [
"Astronomy",
"Biology"
] | 1,285 | [
"Origin of life",
"Speculative evolution",
"Astrobiology",
"Biological hypotheses",
"Astronomical sub-disciplines"
] |
11,148,523 | https://en.wikipedia.org/wiki/2.5D%20%28machining%29 | In machining, 2.5D refers to a surface which is a projection of a plane into the third dimension – although the object is 3-dimensional, no overhanging elements are possible. Objects of this type are often represented as a contour map that gives the height (i.e., thickness or depth) of the object at each point. A 2.5D image is a simplified three-dimensional ((x, y, z) Cartesian coordinate system) surface representation that contains at most one depth (z) value for every point in the (x, y) plane. All features of the part are visible from one view, meaning that the part program can be written with simple code and accessible technology. It also means the machining portion of the task can be completed without the need to manually remove and re-center the part. This leads to increased efficiency and affordability for manufacturers. 2.5-axis machining is also used in education to build an understanding of the concepts and gain experience.
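A minimal sketch of this representation (the grid size and depth values below are illustrative, not from the source): the part is stored as a height map holding exactly one depth per (x, y) grid point, which is why overhangs cannot be expressed.

```python
# Illustrative 2.5D height map: depth in millimetres below the stock top
# for each (x, y) grid point; 0.0 means untouched stock.
height_map = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 2.0, 0.0],
    [0.0, 2.0, 5.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]

def depth_at(x: int, y: int) -> float:
    """Look up the single depth stored for grid point (x, y)."""
    return height_map[y][x]

print(depth_at(2, 2))   # 5.0 -- exactly one z value per (x, y), so no overhangs
```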
Advantages
2.5D objects are often greatly preferred for machining, as it is easy to generate G-code for them in an efficient, often close-to-optimal fashion, while computing optimal cutting tool paths for true 3-dimensional objects can be NP-complete (nondeterministic polynomial time complete), although many algorithms exist. Many milling operations can be completed using 2.5 axes. Operations suited to 2.5 axes are simple designs containing flat-bottom pockets and other terrace-like features. Drilling and tapping operations are also possible on a 2.5-axis mill. 2.5D objects can be machined on a 3-axis milling machine, and do not require any of the features of a higher-axis machine to produce. CNC machines use G-code and M-code to control the machine and the positioning of the spindle. Canned cycles use G-code to machine specific features such as flat-bottom pockets, drilled holes, or tapped holes. These make use of 2.5-axis machines, and are used more in education than in industry.
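As a rough illustration of how little code such features need (a hedged sketch: the helper function, hole spacing, depth and feed values are invented for this example and are not from the source), the snippet below emits a G81 canned drilling cycle for a small grid of holes; each subsequent X/Y word simply repeats the fixed plunge-and-retract cycle.

```python
# Illustrative generator for a 2.5D drilling program using the G81 canned cycle.

def drill_grid(nx: int, ny: int, pitch: float, depth: float, feed: float) -> list[str]:
    """Return G-code lines that drill an nx-by-ny grid of holes."""
    holes = [(i * pitch, j * pitch) for j in range(ny) for i in range(nx)]
    x0, y0 = holes[0]
    lines = [
        "G90 G21",  # absolute positioning, millimetres
        f"G81 X{x0:.1f} Y{y0:.1f} Z{-depth:.1f} R2.0 F{feed:.0f}",  # first hole starts the cycle
    ]
    for x, y in holes[1:]:
        lines.append(f"X{x:.1f} Y{y:.1f}")  # each new position repeats the canned cycle
    lines.append("G80")  # cancel the canned cycle
    return lines

print("\n".join(drill_grid(nx=3, ny=2, pitch=10.0, depth=5.0, feed=100.0)))
```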
Applications
A 2.5D machine, also called a two-and-a-half-axis mill, possesses the capability to translate in all three axes but can perform the cutting operation only in two of the three axes at a time due to hardware or software limitations, or it may be a machine that has a solenoid instead of a true, linear Z-axis. A typical example involves an XY table that positions each hole center, after which the spindle (Z-axis) completes a fixed cycle for drilling by plunging and retracting axially. The code for 2.5D machining is significantly less complex than for 3D contour machining, and the software and hardware requirements are (traditionally) less expensive. Drilling and tapping centers are inexpensive, limited-duty machining centers that began as a 2.5-axis market category, although many late-model ones are 3-axis because the software and hardware costs have dropped with advancing technology. CNC (computer numerical control) routers are another example of machines that use 2.5 axes. Routers typically operate in two dimensions (x, y), with (z) travel used for positioning. Although routers are not capable of drilling and tapping, they can perform basic milling processes. CNC router technology is quickly becoming more advanced as companies move to produce parts for less, and some routers can operate in all three axes (x, y, z) just as mills do. The key difference is the capability of the spindle; router spindles are often less precise and cannot produce the same torque at low RPM as modern milling machine spindles. This is why routers are rarely used in drilling and tapping operations.
References
Computer-aided engineering | 2.5D (machining) | [
"Engineering"
] | 775 | [
"Construction",
"Industrial engineering",
"Computer-aided engineering"
] |
11,148,549 | https://en.wikipedia.org/wiki/Abstract%20family%20of%20languages | In computer science, in particular in the field of formal language theory,
an abstract family of languages is an abstract mathematical notion generalizing characteristics common to the regular languages, the context-free languages and the recursively enumerable languages, and other families of formal languages studied in the scientific literature.
Formal definitions
A formal language is a set L for which there exists a finite set of abstract symbols Σ such that L ⊆ Σ*, where * is the Kleene star operation.
A family of languages is an ordered pair (Σ, Λ), where
Σ is an infinite set of symbols;
Λ is a set of formal languages;
for each L in Λ there exists a finite subset Σ₁ ⊂ Σ such that L ⊆ Σ₁*; and
L ≠ ∅ for some L in Λ.
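As an illustrative instance (not spelled out in the source), the regular languages fit this definition: fix a countably infinite symbol set and collect all regular languages over its finite subsets.

```latex
\Sigma = \{a_1, a_2, a_3, \dots\}, \qquad
\Lambda = \{\, L \subseteq \Sigma_1^{*} \mid \Sigma_1 \subset \Sigma \text{ finite},\ L \text{ regular} \,\}
```

Every regular language uses only finitely many symbols, so each L in Λ satisfies L ⊆ Σ₁* for some finite Σ₁, and Λ contains non-empty languages such as {a₁}; hence the pair (Σ, Λ) is a family of languages.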
A trio is a family of languages closed under homomorphisms that do not introduce the empty word, inverse homomorphisms, and intersections with a regular language.
A full trio, also called a cone, is a trio closed under arbitrary homomorphism.
A (full) semi-AFL is a (full) trio closed under union.
A (full) AFL is a (full) semi-AFL closed under concatenation and the Kleene plus.
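To illustrate one of the trio operations (an illustrative sketch, not from the source; the homomorphism and the sample language are chosen for this example), the following snippet applies a homomorphism that does not introduce the empty word to words of the context-free language { aⁿbⁿ : n ≥ 1 }; closure of the context-free languages under this operation guarantees that the image language is context-free as well.

```python
# Epsilon-free string homomorphism: every symbol maps to a non-empty word.
IMAGES = {"a": "0", "b": "11"}

def h(word: str) -> str:
    """Apply the homomorphism symbol by symbol."""
    return "".join(IMAGES[symbol] for symbol in word)

for w in ["ab", "aabb", "aaabbb"]:       # members of { a^n b^n : n >= 1 }
    print(w, "->", h(w))
# ab     -> 011
# aabb   -> 001111
# aaabbb -> 000111111
# The image language is { 0^n 1^(2n) : n >= 1 }, which is again context-free.
```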
Some families of languages
The following are some simple results from the study of abstract families of languages.
Within the Chomsky hierarchy, the regular languages, the context-free languages, and the recursively enumerable languages are all full AFLs. However, the context-sensitive languages and the recursive languages are AFLs, but not full AFLs, because they are not closed under arbitrary homomorphisms.
The family of regular languages is contained within any cone (full trio). Other categories of abstract families are identifiable by closure under other operations such as shuffle, reversal, or substitution.
Origins
Seymour Ginsburg of the University of Southern California and Sheila Greibach of Harvard University presented the first AFL theory paper at the IEEE Eighth Annual Symposium on Switching and Automata Theory in 1967.
Notes
References
Seymour Ginsburg, Algebraic and automata theoretic properties of formal languages, North-Holland, 1975, .
John E. Hopcroft and Jeffrey D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison-Wesley Publishing, Reading Massachusetts, 1979. . Chapter 11: Closure properties of families of languages.
Formal languages
Applied mathematics | Abstract family of languages | [
"Mathematics"
] | 456 | [
"Formal languages",
"Mathematical logic",
"Applied mathematics"
] |